Fudge-like 2dX mechanic

Started by paulkdad, May 21, 2005, 07:29:47 PM

paulkdad

I've been toying with a die roll mechanic that uses two dice of the same type and is based on the idea that most of the time a character's performance is going to fall within a particular range.

The base idea comes from the 68/95/99.7 rule of thumb in statistics. That is, when looking at a distribution that is approximately normal, roughly 68% of the data are going to fall within one standard deviation above and below the mean; roughly 95% of the data are going to fall within two standard deviations above and below the mean; and roughly 99.7% of the data are going to fall within three standard deviations above and below the mean.

Applying this rule to randomization, I'm starting from the assumption that roughly 68% of the time a character should perform as expected. Note that I'm assuming a relatively narrow skill/trait range (1 to 10, 1 to 12, etc.), so that there is a fairly significant difference between skill/trait levels. I crunched the numbers for 2d4, 2d6, 2d8 and 2d10 and came up with the following:

2d4 result:
-2  =  6.25%  =  2
-1  =  12.50%  =  3
±0  =  62.50%  =  4 to 6
+1  =  12.50%  =  7
+2  =  6.25%  =  8

2d6 result:
-3  =  2.78%  =  2
-2  =  5.56%  =  3
-1  =  8.33%  =  4
±0  =  66.67%  =  5 to 9
+1  =  8.33%  =  10
+2  =  5.56%  =  11
+3  =  2.78%  =  12

2d8 result:
-4  =  1.56%  =  2
-3  =  3.13%  =  3
-2  =  4.69%  =  4
-1  =  6.25%  =  5
±0  =  68.75%  =  6 to 12
+1  =  6.25%  =  13
+2  =  4.69%  =  14
+3  =  3.13%  =  15
+4  =  1.56%  =  16

2d10 result:
-5  =  1.00%  =  2
-4  =  2.00%  =  3
-3  =  3.00%  =  4
-2  =  4.00%  =  5
-1  =  5.00%  =  6
±0  =  70.00%  =  7 to 15
+1  =  5.00%  =  16
+2  =  4.00%  =  17
+3  =  3.00%  =  18
+4  =  2.00%  =  19
+5  =  1.00%  =  20
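
These percentages are easy to check by brute force. Here is a quick Python sketch (illustrative only; the band boundaries are inferred from the tables above, where the ±0 band runs from X/2 + 2 up to 3X/2):

```python
from itertools import product

def band_table(sides):
    """Print the chance of each result band on 2dX.

    The +/-0 band spans sides//2 + 2 through 3*sides//2 (e.g. 5-9 on
    2d6); every sum outside it counts as one step of -1, -2, ... or
    +1, +2, ... away from "as expected".
    """
    lo, hi = sides // 2 + 2, 3 * sides // 2
    counts = {}
    for a, b in product(range(1, sides + 1), repeat=2):
        s = a + b
        band = 0 if lo <= s <= hi else (s - lo if s < lo else s - hi)
        counts[band] = counts.get(band, 0) + 1
    for band in sorted(counts):
        print(f"{band:+d}  =  {100 * counts[band] / sides**2:5.2f}%")

band_table(6)  # -3 = 2.78%, ..., +0 = 66.67%, ..., +3 = 2.78%
```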

I particularly like the spread when using 2d6 or 2d8. Compare this to the odds when using 3dF or 4dF (Fudge dice):

3dF result:
-3  =  3.7%
-2  =  11.1%
-1  =  22.2%
±0  =  25.9%
+1  =  22.2%
+2  =  11.1%
+3  =  3.7%

4dF result:
-4  =  1.2%
-3  =  4.9%
-2  =  12.3%
-1  =  19.8%
±0  =  23.5%
+1  =  19.8%
+2  =  12.3%
+3  =  4.9%
+4  =  1.2%
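
The Fudge odds can be verified the same way, since each dF comes up -1, 0, or +1 with equal chance (again, a quick illustrative sketch):

```python
from itertools import product

def fudge_table(n):
    """Print the chance of each net result on NdF (each die: -1, 0, +1)."""
    counts = {}
    for faces in product((-1, 0, +1), repeat=n):
        s = sum(faces)
        counts[s] = counts.get(s, 0) + 1
    for s in sorted(counts):
        print(f"{s:+d}  =  {100 * counts[s] / 3**n:4.1f}%")

fudge_table(3)  # -3 = 3.7%, -2 = 11.1%, -1 = 22.2%, +0 = 25.9%, ...
fudge_table(4)  # -4 = 1.2%, -3 = 4.9%, -2 = 12.3%, -1 = 19.8%, ...
```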

Assuming that even a +1 is a pretty significant advantage, I tend to prefer the distribution in this 2dX system over Fudge dice.

Finally, the 99.7% part of the rule can just as well be ignored, but it is easy enough to use the 95% part (i.e., two standard deviations from the mean) to assign critical successes or failures. 2d6 works wonderfully for this.
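
For the curious: 2d6 has a mean of 7 and a standard deviation of about 2.42, so only the extreme sums 2 and 12 (2.78% each, 5.56% combined) land beyond two standard deviations. A possible read-out table, with labels that are just one illustration:

```python
# One possible 2d6 read-out: the middle ~67% is "as expected", and the
# sums beyond two standard deviations (2 and 12) become criticals.
RESULT_2D6 = {
    2: "critical failure (-3)",
    3: "-2", 4: "-1",
    5: "0", 6: "0", 7: "0", 8: "0", 9: "0",
    10: "+1", 11: "+2",
    12: "critical success (+3)",
}
```
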
Paul K.

TheTris

Interesting idea.

I think this depends on what you mean by "as expected", and by what amount you increment failure/success outside this bound.

If two people of equal skill have a race, for instance, what is "as expected" performance?  Do they have a dead heat 68% of the time?

I think the most important thing statistics contributes to game system design is the idea that there has to be a curve. Beyond that, the type of task you are attempting and the level of expertise you have modify what type of curve you produce.

An amateur fighter may land good blows, or trip up.  A master is likely to be much more consistent.  A great chef might make good food 90% of the time, and masterpieces 10%.

Not sure if this helps at all, but it's something I've been trying to represent in a game I'm writing (well, thinking about a lot, anyway :-).
My real name is Tristan

Justin Marx

I like the idea; I am working on a 2d12 system with the same philosophy myself. I like the idea of expertise affecting skill rolls in this way, and as you said, a +1 makes a big difference. However, the problem I have is that introducing situational modifiers and calculating the statistics is a nightmare, so I have opted for a subsystem based on Margins of Success (although in its current form it is unwieldy... streamline, streamline, streamline...). As such, any modifiers throw the entire thing out of whack: a +1 for having a good-hair-day or whatever else makes rookies as good as masters. This surely shouldn't be the case. With the 2-24 range of 2d12 this is minimised somewhat, but at the expense of the effectiveness of the expertise ranks (out of 12 for what I am writing up).

I have done some Excel sheets for the 2d12 mechanics, so if you are a true masochist and are interested, PM me if you want me to send 'em.

As for TheTris's suggestion, it is an excellent idea, but if you are still around, could you give me some examples of how you can adjust expertise curves without: a) complicated mathematics (which increases resolution time) or b) complicated charts (which do the same)? I have not encountered a system that does it well; if anyone knows one, I would be very interested.

TheTris

Okay, well my current solution is this:

Skill level goes from 1 to 10 and determines number of dice rolled.
You then take the best 3 dice.  If you get three 6s, each extra 6 adds +1 to the result.
Attributes go from -3 to +3 and modify the final result.
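
A quick Monte Carlo sketch of that mechanic as I read it (how fewer than three dice are handled, and what counts as an "extra" 6, are guesses):

```python
import random

def tris_roll(skill, attribute):
    """Roll `skill` d6 and keep the best three (all of them if there
    are fewer); if the kept dice are all 6s, each additional 6 adds
    +1; the attribute (-3..+3) then modifies the total."""
    dice = sorted(random.randint(1, 6) for _ in range(skill))
    kept = dice[-3:]
    total = sum(kept)
    if kept == [6, 6, 6]:
        total += dice[:-3].count(6)  # extra 6s beyond the kept three
    return total + attribute

# Higher skill should give better and more consistent results:
for skill in (3, 6, 9):
    rolls = [tris_roll(skill, 0) for _ in range(100_000)]
    print(skill, sum(rolls) / len(rolls))
```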

Skill 1-3 are used as starting levels for skills, depending on the skill (for instance, microelectronics would start at 1, as you have no chance of success if you are unskilled; swordfighting starts at 3, as it's obvious that swinging the sharp bit into your opponent is good).
Skills 4-6 range from trained up to pretty damn good; 7-10 are the province of really good people.
Attributes should follow a bell curve, so roughly 68% of people should have 0 in a given attribute, 95% should fall between +1 and -1, and so on.

This way, the more skilled you are, the more predictable and the better your result will be.  If you are naturally gifted but untrained, you get less consistent results, which tend to be better than those of someone who is untrained and not gifted.  Higher skill levels start to make less of a difference - a grandmaster with Skill 9 in chess will sometimes beat Kasparov (and it will usually be a close-run thing), and someone with some small level of training in chess (4) will almost always beat me (3).

You could use this system for just skill resolution, I use it for pretty much everything - giving weapons a damage skill (modified by wielder strength) and so on.
My real name is Tristan

paulkdad

Quote from: Justin Marx
I have done some Excel sheets for the 2d12 mechanics, so if you are a true masochist and are interested, PM me if you want me to send 'em.
Thanks. I already did Excel sheets on 2d4, 2d6, 2d8, 2d10 and 2d12. What I found was simply this: the range you want to produce will determine which dice you use (in the system I posted earlier). For instance, 2d4 will produce a range from -2 to +2; 2d8 will produce a range from -4 to +4; 2d12 will produce a range from -6 to +6; etc. Of course, this is strictly looking at the mean and standard deviation for each type of dice used.

Quote from: TheTris
I think this depends on what you mean by "as expected", and by what amount you increment failure/success outside this bound.
Absolutely, and thanks for the clarification. The system I was toying with here relies upon target numbers, and simply matches them against skill level to determine the margin of success. You can look at margin of success like this:
(one) with a target number for a passive object, any non-negative result succeeds, and the margin of success simply indicates the level of expertise applied to the task;
(two) with a head-to-head competition (chess, combat), the greater the margin of success/failure, the better/worse the result, and with a zero margin the opponents counter one another;
(three) with a side-by-side competition (a race, golf), the same is true, but with a zero margin you could decide a narrow victory by some other (equally simple) means, such as a coin flip, higher skill level, higher base attribute, etc.
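
As a sketch of that procedure (the function names are mine), the band from the 2dX table adjusts skill, and the difference against a target number or an opponent's total is the margin of success:

```python
def margin_vs_target(skill, band, target):
    """Passive object: any non-negative margin succeeds, and a larger
    margin indicates more expertise brought to bear."""
    return (skill + band) - target

def margin_opposed(skill_a, band_a, skill_b, band_b):
    """Competition: positive favours A, negative favours B; on zero
    the opponents counter (or break ties by coin flip, higher skill,
    etc., per the side-by-side case above)."""
    return (skill_a + band_a) - (skill_b + band_b)
```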

Basically, I think skill level is more consistent than it is often portrayed in RPGs. Depending upon the skill range, a character with a certain level of expertise should function precisely at that level most of the time (from a simulationist perspective). The 68/95/99.7 rule just gives me a guideline to apply to that idea. If you look at the results of most Swiss-system chess tournaments, you'll see that the idea of consistent performance holds up. There are occasional upsets, but they are "upsets" precisely because the rating system is generally reliable at predicting a player's performance in a tournament. Even relatively small variations in skill level can make a big difference. And the truth of the matter is, Kasparov's worst blunder is always going to counter my best move, no matter how fortunate I am with my die roll.

Here's another "side-by-side" example: before we had our daughter, my wife and I used to play disc golf. A friend we played with was better than my wife, but not as good as I was. Our skill levels (from 1 to 10) might have been 2, 3 and 4. Well, this friend has continued to play regularly, while I haven't. Still, he really isn't very good in the grand scheme of things, so we'll put him at 5 now (with me still at 4). Does he win about 70% of the games we play? Absolutely. Is there a huge difference between our levels of skill? No. Conclusion: a little extra skill can go a long way.
Paul K.

TheTris

I agree absolutely about the variance of results in some RPGs being far too big (speaking simulationally), although perhaps in routine tasks, where you would expect someone to perform at the same level of skill almost all the time, you don't need to ask for a roll.

I also like what you are trying to do - I get the impression that too many people knock together a system without thinking through why they write it the way they do.

You can stop reading now if you like, it's all criticism below here :-)

Okay - of the 68% of the time I perform within 1 SD of the mean, one of those times I will be at -0.99 SD and the next at +0.99 SD.  That's a difference of 1.98 SD, which is almost the same as the difference between my mean performance and a performance I can only expect 5% of the time.  But if you lump all my 68% performances under one number, these two performances are treated the same.  Is that reasonable?

The break points of whole numbers of standard deviations don't relate directly to anything.  You could just as easily set break points at 0.2 SD, 0.75 SD and 1.33333 SD.  You are using a bell curve, and then squishing it in certain places (so it looks more like a set of steps than a bell curve).  I don't understand why you are doing this.
My real name is Tristan

NN

Question: why are you introducing dice into your game?

paulkdad

Quote from: TheTris
Okay - of the 68% of the time I perform within 1 SD of the mean, one of those times I will be at -0.99 SD and the next at +0.99 SD. That's a difference of 1.98 SD, which is almost the same as the difference between my mean performance and a performance I can only expect 5% of the time. But if you lump all my 68% performances under one number, these two performances are treated the same. Is that reasonable?
Well, it's only the same if you look at it as linear distance from the mean, not in terms of area under the curve. The 68/95/99.7 rule is talking about area under the curve. The point of the curve is that all linear distances from the mean are not created equal (in terms of the chances of encountering a particular result).

Plus, the dice results within 1 SD either way of the mean aren't measuring anything. These results are ignored. So, when using 2d6, there is absolutely no difference between rolling a 5 and rolling a 9. What the system is doing is assigning specific rolls to the odds that a character will perform significantly better or worse than expected. The dice just point you to a table (though a pretty simple one). The table tells you how well your character performed.

Is it reasonable? Well, it's based on a reliable and generally accurate statistical rule-of-thumb. IMO, it's much more reasonable than simply saying (for instance), "I always want even the lowliest character to be able to beat anybody--if s/he's lucky enough." At some level, adopting any mechanic is going to require a decision on the part of the designer, and all decisions are part of a process that seems arbitrary to people outside the process. So really, if you asked me if it was just arbitrary, I'd say, "Yes, absolutely," and that wouldn't bother me a bit. :^)

Another wrinkle I was playing around with was the possibility for doubles to count as "side" successes (much like rolling a six in Over the Edge). Though a character might fail, something good nevertheless comes from the attempt. This would at least remove a few instances of complete "whiff", though the odds of rolling doubles vary with the type of dice used (from a 25% chance with 2d4 to a 10% chance with 2d10).
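
Those doubles odds follow directly: 2dX has X matched pairs out of X² outcomes, i.e. a 1-in-X chance, as a one-liner confirms:

```python
for sides in (4, 6, 8, 10):
    print(f"2d{sides}: doubles {100 / sides:g}% of the time")  # X/X**2 = 1/X
```
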
Paul K.

paulkdad

Quote from: TheTris
Attributes go from -3 to +3 and modify the final result.
In my opinion, situational modifiers are very significant, and highlight the importance of good strategy and tactics. Attributes, on the other hand, are insignificant, and don't get "tacked onto" the skill level at all. This just adds an unnecessary complication to the rolls.

Instead, attributes can be used to determine two things in relation to skills: (one) how quickly a skill is learned; and (two) where the skills are capped. So, a character with Intellect of 4 (on a scale of 1 to 10) is going to learn Chess more slowly than someone with Intellect of 9. Furthermore, s/he is going to reach her/his maximum potential at skill level 4, and be able to progress no further.
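
A minimal sketch of that scheme (the cost formula and point values are invented purely for illustration): the attribute sets both how cheaply a skill level is bought and the hard cap.

```python
def raise_skill(skill, attribute, study_points):
    """Attribute governs learning speed (cost per level) and the cap.
    The cost formula is a hypothetical stand-in, not from the post."""
    while skill < attribute:               # the attribute caps the skill
        cost = 12 - attribute              # brighter learners pay less
        if study_points < cost:
            break
        study_points -= cost
        skill += 1
    return skill

print(raise_skill(1, 9, 12), raise_skill(1, 4, 12))  # 5 2
```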

You might wonder about skills that characters just try along the way, even though their players can't relate these attempts to something the characters know. Again, this happens all the time in RPGs and very rarely in life. How many people do you know who actually try things that they have no interest or experience in? Or, if they do, how much does their natural talent actually impact their success? Again, I see this as an unnecessary complication.
Paul K.

TheTris

Okay, well, if it's arbitrary then it's arbitrary.  But surely then the statistical stuff behind it is irrelevant?  You might decide that people need to perform significantly differently from average 50% of the time, and it would be no less valid.  So I still don't understand the reasoning behind basing it on whole numbers of standard deviations?

"Well, it's only the same if you look at it as linear distance from the mean, not in terms of area under the curve."

But the linear difference is what actually means something here - it is the difference in performance.  If you plot the IQ of a population, you should get a bell curve, and the linear difference between two scores is the difference in intelligence, isn't it?
My real name is Tristan

TheTris

About attributes:  I disagree fairly fundamentally.  (although I agree that situational modifiers are very important, I put them into the difficulty, rather than the performance of the player.)

Situational modifiers affect the difficulty of a roll.  A "driving" total of 15 is as good as a "driving" total of 15.  This is intuitive.  Driving between obstacles strewn across your path while pursued by evil bad men and with a half shot up car might be difficulty 17.  Driving around a sharp bend at 40mph might be difficulty 10.  Modifying the result based on difficulty is less intuitive, I think.

Attributes are hugely important.  The first time I play any board game, whenever a friend asks me for a game of squash or table tennis (sports I don't play more than once/year), seeing a Sudoku puzzle for the first time, when I get asked to do a literature review for something I know little about at work...

My agility, strength, intelligence...all very important in doing these things.  And this IS how attributes affect real life.

I'm willing for players to have to add up to +3 to the result of a die roll, to get this much more accuracy.  I can understand people who wouldn't, but I don't agree that it isn't accurate.

Done your way, the guy with intellect 4 will have a 50/50 shot against the guy with intellect 9 in their first game of chess.  The 7 stone weakling will be 50/50 playing rugby against the 15 stone bodybuilder, if neither has played before.  And the guy who has serious learning difficulties will be a star sportsman in time, because it's his strength that affects how he learns these techniques.

Unless you base all learning on intelligence, but then no other attributes matter?
My real name is Tristan

Justin Marx

The only problem I see with this example is the lack of granularity in the results... which is fine for some people, obviously. Dividing the results into SDs means that you rarely perform above a +1 or below a -1. As NN mentioned, why have dice at all? The probabilities are so fixed that it is basically a Karmic system with rare deviation from the norm.

I agree with paulkdad's de-emphasis on attributes; it makes a lot more sense if you are going down the simulationist road, IMHO. However, the importance of attributes-to-skills depends upon the skill in question, which means some skills seem to be specialities of attributes (running, for instance) while others require more training, and no rookie savant can grasp them easily without it (for instance, neurosurgery). I am working on a variable attribute-to-skill relationship, depending on the skill; however, it is currently quite cumbersome and I am trying to streamline it. For now I am using a simple trichotomy of Talents - Skills - Disciplines, in order of the necessity of training to master.

My interest in bell curve distribution is simulationist in desire, and I think that having higher granularity means that more detail can be gleaned from a result, whether success or failure, while still keeping the results in line with the bell curve (which is presumed, I suppose, as an example of realism). SDs are simply too big a unit for me. As I said, it may as well be a Karmic system with a plus-or-minus 1, which is not really my approach. For others, I do not question that it would be more appropriate, of course.

I was also trying to alter the chance of fortune depending upon the nature of the skill - for instance, you may be able to get lucky and jump the fence even though you have lousy attributes, but the chances are much lower (probably nil) that a beginner could correctly perform a triple bypass without the patient dying, no matter how smart and dextrous the beginner is.

Keeping this simple while allowing for detail is the trouble. Hence I read these posts with keen interest.

paulkdad

Quote from: Justin Marx
The only problem I see with this example is the lack of granularity in the results... which is fine for some people, obviously. Dividing the results into SDs means that you rarely perform above a +1 or below a -1. As NN mentioned, why have dice at all? The probabilities are so fixed that it is basically a Karmic system with rare deviation from the norm.
Bingo. It's nearly diceless. It's just designed to account for those times when someone does much better or worse than expected. To use the chess tournament analogy: percentage-wise there aren't a lot of upsets in rated tournaments, but it's not unusual for there to be at least one upset in every tournament.

Quote from: TheTris
Done your way, the guy with intellect 4 will have a 50/50 shot against the guy with intellect 9 in their first game of chess. The 7 stone weakling will be 50/50 playing rugby against the 15 stone bodybuilder, if neither has played before.
Here's a better comparison: using a 1 to 10 scale, one player is a fairly experienced amateur chess player with mediocre intelligence (skill 4, attribute 0); another player just knows the basics and has very high intelligence (skill 1, attribute 3). If these two players were to meet, they'd be on equal footing? Not a chance. The experienced player would trounce the brainy guy every time--no contest.

Quote from: TheTris
Unless you base all learning on intelligence, but then no other attributes matter?
That's really a question for another thread. But if my only reason for linking attributes to skills is to "make attributes matter," I think I'd need to revamp my understanding of attributes in my system. I'm not being tongue-in-cheek here, either. IMO, the rationale of "making players care" about certain attributes is often the justification behind associating them with skills... but it's a poor one.

[Edit] TheTris wrote: "But the linear difference [from the mean] is what actually means something here - it is the difference in performance."

Normal distributions are used all the time to calculate the odds of encountering a particular performance result (for example, in standardized tests). If the mean of a test is 70, the odds of a student scoring a 40 are much less than half the odds they will score a 55. Similarly, the odds that they will score a 100 are much less than half the odds that they will score an 85. This isn't an unusual application of the concept. The curve simply measures the fact that the further you move away from a mean performance, the less likely you are to encounter a particular result.
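
To put numbers on that, assume the scores are normally distributed with a standard deviation of 15 points (the SD is an assumption; the mean of 70 is from the example):

```python
from math import exp, pi, sqrt

def normal_pdf(x, mean=70.0, sd=15.0):
    """Density of an assumed normal score distribution."""
    return exp(-((x - mean) ** 2) / (2 * sd * sd)) / (sd * sqrt(2 * pi))

# A 40 sits 2 SD below the mean, a 55 only 1 SD below; the further
# score is well under half as likely:
print(normal_pdf(40) / normal_pdf(55))  # exp(-1.5), about 0.22
```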

If you want more specificity than allowed for here (for instance, if it's important to your game to distinguish between a test score of 67 and 73), then fine, develop that. I don't see the need for it, especially if the absolute novice is ranked at a 1 and the grandmaster of all time is ranked at a 10. Each skill level already encompasses a broad range, so using 1 SD seems perfectly reasonable to me; otherwise you get novices opening a can of whupass on grandmasters (which to me seems anything but reasonable).
Paul K.

Valamir

On the attributes vs skill question, I think it boils down to how granular you want your skill levels to be.

I'm 100% in agreement with the idea that pretty much every activity you engage in is a skill and your attribute just impacts how quickly you can pick the skill up and how good you can eventually get.  The "natural athlete" and the "average guy" both have to learn how to play football, but at the end of training camp the natural athlete will be much more skilled at it.

I think that's a functional model, but I also think it won't really work (as a complete model) unless you take your skill levels down to a pretty fine grain.

Consider the chess example.  What's REALLY going on between 2 players (one very bright, one less so), neither of whom has ever played chess before?  Well, the moment of sitting down at the board and reading the rules counts as "training".  A certain number of "Training Points" would be immediately earned by each, and those points would be converted into Skill Levels such that the high-intelligence guy winds up at a higher level (say level 3) than the lower guy (say level 1).  They would then play the game and the "brighter" guy would have the advantage.  He wouldn't have the advantage because he's smarter; he'd have the advantage because he's more skilled.  It's just that he was able to achieve a higher level of skill much more quickly because he was smarter.  This, to me, most accurately represents the relationship between natural ability and training in reality.
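
Sketched in code (the point award and conversion rate are invented for illustration): both players earn the same training points from reading the rules, but intelligence converts them into skill at different rates.

```python
def skill_from_training(points, intelligence):
    """Convert training points to skill levels at an attribute-driven
    rate; the divisor is an illustrative assumption."""
    return (points * intelligence) // 30

reading_the_rules = 10  # hypothetical training-point award
print(skill_from_training(reading_the_rules, 9))  # brighter player: skill 3
print(skill_from_training(reading_the_rules, 4))  # less bright: skill 1
```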

The problem is twofold.  First, you need to have a system with LOTS of skill levels.  If Skill level 1 represents a novice, level 2 a professional, and level 3 a master, then the above outline doesn't work.  There just isn't enough granularity there to represent the small difference that would come from a single reading of the rules between the two players.  Further, you'd need a system where small differences in skill matter, in a way that doesn't reduce minimal skill levels to a battle of the whiff factors.

Second, keeping track of that level of granularity and "leveling up" every new skill during play can get really tedious really fast.  

What most games do then is allow skills to default to an "Attribute Test".  IMO the "Attribute Test" does NOT represent how well a character can do at a task with no skill, based solely on natural ability.  What it actually represents is how well a character can do with minimal training, based on how quickly that minimal training can be converted to skill due to ability (my chess example above).  The Attribute Test is just a big shortcut around what otherwise could be a really annoying mechanic.


So I think you need to give some consideration to 1) how things would mechanically work in your model based on what the actual model says about the math, and then 2) what concessions and shortcuts you need to make to the model to streamline actual play.

In other words, I think Tris is wrong about how the actual model should work, based on reality.  But I think that 9 times in 10 he's right from the standpoint of using attributes as a convenient shortcut method.

I'd be most interested in seeing a system that is that other 1 time in 10 where you can remain true to the model without it being cumbersome in play.

paulkdad

Quote from: Justin Marx
...some skills seem to be specialities of attributes (running, for instance) while others require more training, and no rookie savant can grasp them easily without it (for instance, neurosurgery).
Or, are we just forgetting all the experience we got running, jumping, climbing, etc., when we were children? Would an adult who never climbed a tree as a child have any climbing skill at all? I think Valamir is right in saying that attributes represent potential, but the realization of that potential is to be found in skills (if I interpreted that correctly).

The skill/attribute equation can get extremely cumbersome, and I'm not into cumbersome... during play. But I do think it's perfectly OK to shift some of that burden to the bookkeeping phase, especially if it serves to streamline play.

On the other hand, I really think we're drifting here, and the discussion of the mechanic I presented is pretty much complete. Justin Marx or TheTris, would either of you be interested in starting a new thread on the relationship between attributes and skills?

EDIT: I defer to either of you because it seems to be more your concern than mine.
Paul K.