Nonstandard Rant: Opposed rolls ARE more random, dammit!

Started by Walt Freitag, October 05, 2002, 11:37:34 PM


Walt Freitag

In an Indie Game Design thread, JMendes wrote:

Quote
Quote from: kevin671
I much prefer to make "opposed" tasks that little bit more random.

Er... less random, you mean. Remember, once you go the opposed route, the overall result tends to be a more balanced, center-heavy distribution (there's a name for this... anyway, it's the 4th-order moment of the distribution). What it comes down to is that this means that opposed rolls are actually less random than unopposed ones. Some might consider this counterintuitive. I consider it highly intuitive.

Intuitive or counterintuitive, it just isn't so.

1. What kind of opposed and unopposed rolls am I talking about?

I'm talking specifically about comparing between different resolution mechanisms within the same system that work as follows:

Opposed test: succeeds when someDieRoll + effectiveness > someDieRoll + countereffectiveness, otherwise fails

Unopposed test: succeeds when someDieRoll + effectiveness > target number, otherwise fails

Effectiveness is usually a player character skill score or requisite or combination thereof. Countereffectiveness usually arises as the skill and/or requisite of the opposing character. (In all-opposed systems it also can stand for the difficulty of the challenge.)

The target number for the unopposed roll is based on the difficulty of the challenge. The target number is often arrived at by adding a certain fixed number to a variable relative difficulty estimate. The fixed number is normally set to or near the median outcome of the someDieRoll, which creates an "equal challenge" (a challenge where success is close to a 50-50 chance) when the variable relative difficulty estimate is equal to the player's effectiveness.

For the comparison to be valid, someDieRoll must be the same roll (same number of the same type of dice) in all cases, which is usually the case in real systems that have both opposed and unopposed rolls. It doesn't change the analysis very much if you substitute >= instead of > as the rule in both cases, or if you reverse both rules into "roll under" instead of "roll over" mechanisms. However, mechanisms with variable sized dice pools aren't covered here.
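(To make this concrete, here is a minimal Python sketch of the two test types; the function names and the exact-enumeration approach are mine, and the fixed add-on of 7 matches the 2d6 example in section 3 below, whose Table 1 this reproduces:)

from fractions import Fraction
from itertools import product

def p_unopposed(diff, rolls, fixed):
    # succeeds when roll + effectiveness > fixed + relative difficulty,
    # i.e. when roll > fixed - diff, where diff = effectiveness - difficulty
    hits = sum(1 for r in rolls if r > fixed - diff)
    return Fraction(hits, len(rolls))

def p_opposed(diff, rolls):
    # succeeds when rollA + effectiveness > rollB + countereffectiveness,
    # i.e. when rollA - rollB > -diff
    hits = sum(1 for a in rolls for b in rolls if a - b > -diff)
    return Fraction(hits, len(rolls) ** 2)

# someDieRoll = 2d6, written out as 36 equally likely sums
two_d6 = [a + b for a, b in product(range(1, 7), repeat=2)]

for diff in range(-6, 8):
    print(diff, float(p_unopposed(diff, two_d6, fixed=7)),
          float(p_opposed(diff, two_d6)))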

2. How do we define 'more random' or 'less random'?

The most useful definition I've been able to come up with is that "less random" means that the stats that go into the resolution mechanism are more influential in determining the outcome. The more likely the side with a given numerical advantage is to win, the less random the mechanism.

The least predictable outcome is when the chance of success is 50%. If an action has a statistical advantage (such as effectiveness higher than the opposing effectiveness, or effectiveness higher than the variable relative difficulty), then the chance of success increases above 50%. The more random the resolution system is, the less the chance of success would increase above 50% for a given degree of advantage. Similarly, if an action has a statistical disadvantage, the chance of success decreases below 50%. The more random the resolution system is, the less the chance of success would decrease below 50% for a given degree of disadvantage.

In a more random system, the chance of success stays closer to 50% for a given degree of numerical advantage or disadvantage.

When there is no randomness at all, we have a Karma system where the "chance of success" can be said to be 0% or 100% in any individual case. For example, success might be guaranteed in an opposed situation if and only if one's effectiveness stat is higher than the opposing effectiveness stat. Low randomness = farther from a 50% chance, in this case as far as one can get. The stats have all the influence over the outcome; random fortune has none.

If an action were totally random, to the point where the stats are irrelevant, then the chance of success can be pegged at 50% regardless of any stats. High randomness = closer to a 50% chance, in this case as close as one can get. The stats have no influence over the outcome; random fortune alone prevails.

To really pin this down let me be more specific about the statistical advantage or disadvantage in the context of the systems we're talking about. This can be summarized in any individual case as a stat differential.

For opposed rolls, the stat differential is the character's effectiveness score minus the countereffectiveness score.

For unopposed rolls, the stat differential is the character's effectiveness score minus the relative difficulty, which is the non-fixed portion of the target number.

So, the complete definition of "more random" is: a resolution mechanism is more random than another if, for most or all stat differentials, its probability of success is closer to 0.5 than the probability of success yielded by the other mechanism at the same stat differential.

With this definition we can test the popular idea that "opposed rolls are less random."

3. Example: mechanisms using a roll of 2d6.

Unopposed mechanism: roll 2d6 + Skill > 7 + Relative Difficulty to succeed
Opposed mechanism: roll 2d6 + Skill > 2d6 + Skill to succeed

DIFF = Character skill – opposing skill, or character skill – relative difficulty

The probability of success based on stat differential is:


TABLE 1

DIFF        -6   -5   -4   -3   -2   -1    0    1    2    3    4    5    6    7

PROBABILITY OF SUCCESS

UNOPPOSED  0.0  0.0 .028 .083 .167 .278 .417 .583 .722 .833 .917 .972  1.0  1.0

OPPOSED   .027 .054 .097 .159 .239 .336 .444 .556 .664 .761 .841 .903 .946 .973


The result:

ALL of the opposed rolls are closer to a success probability of 0.5 than the corresponding unopposed rolls.

In other words, chance plays more of a role in the results in opposed rolls than in unopposed rolls.

In other words, statistical advantages and disadvantages are more likely to produce an outcome consistent with who has the advantage in unopposed rolls than in opposed rolls.

In other words, if the GM gives me a choice between making an opposed or an unopposed roll in this system, and I'm out to maximize my effectiveness, I'll choose an opposed roll if the odds are against me, because it gives me a higher chance of overcoming my numerical disadvantage, and I'll choose an unopposed roll if the odds are in my favor, because it gives me a higher chance that my numerical edge will swing the outcome in my favor.

In other words, opposed rolls are more random, dammit! Or at least, they can be.

Is one example enough?

It's enough to disprove the blanket assertion that "opposed rolls are [always] less random (or more reliable)." But it's certainly not enough to prove the reverse. And indeed, I'm not trying to prove that the reverse is true in all cases, just show that it's true in a wide variety of the most common ones. (I believe it might be true in all cases, but proving it mathematically would be more fun than I could stand right now.)

I started with a two-dice roll example because the results there are a little more clear-cut than the even more typical single die case. Let's look at a single die mechanism now.

Unopposed mechanism: roll d10 + Skill  >=  6 + Relative Difficulty to succeed
Opposed mechanism: roll d10 + Skill >= d10 + Skill to succeed

DIFF = Character skill – opposing skill, or character skill – relative difficulty

The probability of success based on stat differential is:


TABLE 2

DIFF       -6   -5   -4   -3   -2   -1    0    1    2    3    4    5    6

PROBABILITY OF SUCCESS

UNOPPOSED  0.0  0.0  .1   .2   .3   .4   .5   .6   .7   .8   .9   1.0  1.0

OPPOSED    .10  .15  .21  .28  .36  .45  .55  .64  .72  .79  .85  .90  .94


In this case, for most differential scores the opposed roll has a success probability closer to 0.5 than the unopposed roll. So we can still say that in general, the opposed roll is more random than the unopposed roll. However, there are a few exceptions: for differentials of 0, 1, and 2 the unopposed roll is the one with the success chance closer to 50%.

This comes about, ultimately, because the median roll on a single die is a fractional value that can't actually be rolled. The whole outcome distribution has to be, essentially, rounded up or down by half a die face (in this case, 0.05), and the two distributions can't actually align quite correctly.

There are various ways to correct this and bring the distributions into alignment. Generally I doubt that such measures would often be worth incorporating into an actual system, but let's consider one here just for the sake of argument. One possibility is to say the unopposed roll succeeds when d10 + skill > 5 + difficulty, and when there's a "tie" and d10 + skill = 5 + difficulty, roll a 50-50 tiebreaker to determine success. This shifts the distribution by the necessary half a die face, resulting in:


TABLE 3

DIFF       -6   -5   -4   -3   -2   -1    0    1    2    3    4    5    6

PROBABILITY OF SUCCESS

UNOPPOSED  0.0  .05  .15  .25  .35  .45  .55  .65  .75  .85  .95   1.0  1.0

OPPOSED    .10  .15  .21  .28  .36  .45  .55  .64  .72  .79  .85  .90  .94


Now it's a clear case of the opposed roll being more random than the unopposed roll. All the opposed rolls are closer to 0.5 probability than the corresponding unopposed rolls, except at a DIFF of –1 and 0, where they're the same (and already very close to 0.5).
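(A quick Python check of the tiebreaker rule, with the function name mine; it reproduces the unopposed row of Table 3 exactly:)

from fractions import Fraction

def p_tiebreak(diff, sides=10, fixed=5):
    # succeeds when d10 + diff > fixed; on a tie (d10 + diff == fixed),
    # a 50-50 tiebreaker decides the outcome
    p = Fraction(0)
    for r in range(1, sides + 1):
        if r + diff > fixed:
            p += Fraction(1, sides)
        elif r + diff == fixed:
            p += Fraction(1, 2 * sides)
    return p

print([float(p_tiebreak(d)) for d in range(-6, 7)])
# [0.0, 0.05, 0.15, 0.25, 0.35, 0.45, 0.55, 0.65, 0.75, 0.85, 0.95, 1.0, 1.0]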

Where did the belief that "opposed rolls are less random / more reliable" come from?

I'm just guessing, but I'd consider blaming two factors. One is the sensitivity of the distribution alignment between the opposed and unopposed mechanisms. Even the half-a-die-face offset that's inherent in all single-die opposed vs. unopposed distribution comparisons gives a character a slight advantage or disadvantage (perhaps noticeable, depending on the die used) in unopposed rolls during close contests. In Table 2 above, the unopposed roll is disadvantageous for all DIFF scores up through 2, and only becomes advantageous (due to being less random) at DIFF scores of 3 and higher. The shift can equally well go the other way with any of several small tweaks, so that a slight advantage exists during close contests for unopposed rolls. (That's probably slightly preferable.) But adding an unnecessary shift of another whole point or more -- such as setting the fixed add-on for the target number wrong, or using ">" when you should use ">=" -- could skew the whole business and give one mechanism a more or less consistent advantage or disadvantage in success rate over the other.

The other factor is those pretty bell curves. You look at the bell curve and think, it's center-weighted so it must be more reliable somehow.

The flaw in that reasoning is that adding dice increases the center weighting overall, but it also spreads the whole distribution wider, which more than makes up for the center weighting. The result is less reliability rather than more. It all comes down to two facts:

-- Adding more dice to a roll, whether they're added or subtracted, does not increase the likelihood of the most likely outcome.

-- Adding more dice, whether they're added or subtracted, increases the number of possible outcomes.

For example, with 1d6 all outcomes are equally likely with a probability of 1/6. Add a second die and the most likely outcome still has a probability of 1/6. However, there are now five more distinct outcomes than there were before. Since the distribution is symmetrical, that means that the outcomes adjacent to the most likely outcome must now be less likely, in order to "make room" for the more extreme outcomes that are now possible. Add a third die, whether added or subtracted, and the probability of the most likely outcome becomes 27/216, which is less than 1/6. So the peak of the bell curve is now flatter, not sharper, than it was before; the results are being shared between an even larger set of outcomes. As long as the proper offsets are provided to keep the distribution centered, the randomness can only increase.
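(Those two facts are easy to verify by brute force; a small Python check:)

from collections import Counter
from fractions import Fraction
from itertools import product

for n in (1, 2, 3):
    counts = Counter(sum(dice) for dice in product(range(1, 7), repeat=n))
    peak = max(counts.values())
    print(n, len(counts), Fraction(peak, 6 ** n))
# 1d6: 6 outcomes, most likely outcome has p = 1/6
# 2d6: 11 outcomes, most likely outcome has p = 1/6
# 3d6: 16 outcomes, most likely outcome has p = 27/216 = 1/8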

- Walt
Wandering in the diasporosphere


M. J. Young

Quote from: Walt a.k.a. wfreitag
Where did the belief that "opposed rolls are less random / more reliable" come from?
Well, I don't know either; but when I saw it stated, I thought it came from this:

In an opposed roll, you know that the roll of the dice is going to tend to average to the mid point. Thus the average of all rolls of 2d6 will be 7; the average of all rolls of d20 will be 10.5. Whatever the other modifiers are, this factor will tend toward stability.

On top of that, if all modifiers are eliminated, the odds of rolling a number equal to or better than a number rolled on the same die or set of dice, before any dice are rolled, are always better than even. If we're rolling d6 against d6, of the 36 possible combinations, the second die will meet or beat the first 21 times.
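(That 21-in-36 figure is easy to verify by enumeration; a two-line Python check:)

from itertools import product

meets_or_beats = sum(1 for first, second in product(range(1, 7), repeat=2)
                     if second >= first)
print(meets_or_beats)   # 21 of the 36 combinations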

The perception, at least, of an unopposed roll is that the number selected is more arbitrary. That is, the referee may be told that the difficulty level should be at least one and not greater than six, and that these difficulties should be evenly distributed, with as many low as high, but in practice the referee will tend to use higher difficulties more frequently than lower difficulties.

The perceived randomness of unopposed rolls does not come from actual randomness in the die mechanic, but in the subjective nature of the determination of difficulty level (when such exists) and the potential for referees to set such difficulties in a manner that does not reflect standard deviation.

Anyway, that's what I perceived when I first read the comment.

--M. J. Young

Christoffer Lernö

Actually the simplest way to see the randomness of the opposed roll is to realize that the number of dice actually doubles.

2D6 vs 2D6 is actually the same as rolling a 4D6 against a static value.

In fact the 2D6-2D6 is exactly equivalent to 4D6-14.
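(A brute-force Python check, for anyone who doubts the equivalence:)

from collections import Counter
from itertools import product

d6 = range(1, 7)
opposed = Counter((a + b) - (c + d) for a, b, c, d in product(d6, repeat=4))
shifted = Counter((a + b + c + d) - 14 for a, b, c, d in product(d6, repeat=4))
print(opposed == shifted)   # True: the two distributions are identical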
formerly Pale Fire
[Yggdrasil (in progress) | The Evil (v1.2)]
Ranked #1005 in meaningful posts
Indie-Netgaming member

JMendes

Hey, :)

Your reasoning is flawless but is based on a premise that I do not share, namely your definition of "more randomness".

(Actually, I was gonna say it's an outright invalid premise, but then again, it may well be a valid premise within a different, yet valid, mathematical body <called an axiomatic body, for those of you that care>. Within the framework of the generally adopted and accepted body, however, it is invalid.)

Quote from: wfreitag
"less random" means that the stats that go into the resolution mechanism are more influential in determining the outcome

"Less random" means that the standard deviation of the results distribution is lower. This means that the distribution is more predictable.

For those of you that don't know it, standard deviation is the square root of the average of the squares of the distances from each possible value to the mean, weighted by the probability of each value. Like so:

sDev = sqrt( sum( p(v) * (v - avg)^2 ) )

Now, why is a distribution with a higher sDev less predictable than one with a lower sDev? Well, let's work with some non-standard dice, shall we?

Consider 2d6-1 and d11. (Wanna get a d11? Get a d12 and reroll all results that read 12.) Those have the same range: 1-11. Now, try to make an educated guess as to what the result of a single roll will be, for both cases. With the d11, you will be right one out of eleven times. With the 2d6-1, if you guess 6, you'll be right one out of six times. Ergo, more predictable.
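(Checking those figures by enumeration in Python:)

from collections import Counter
from itertools import product
from statistics import pstdev

two_d6_minus_1 = [a + b - 1 for a, b in product(range(1, 7), repeat=2)]
d11 = list(range(1, 12))

for name, outcomes in [("2d6-1", two_d6_minus_1), ("d11", d11)]:
    best_guess = max(Counter(outcomes).values()) / len(outcomes)
    print(name, round(pstdev(outcomes), 3), round(best_guess, 3))
# 2d6-1: sDev 2.415, best guess hits 0.167 (one in six)
# d11:   sDev 3.162, best guess hits 0.091 (one in eleven)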

Remember that, to make meaningful comparisons, you must work with the same range of possible values. The d4 is obviously more predictable than 2d20, no matter what definition of 'randomness' you go for. Why? Simply because higher-sided dice are more random than lower-sided dice.

Now, let me requote your definition:

Quote from: wfreitag
"less random" means that the stats that go into the resolution mechanism are more influential in determining the outcome

(I am assuming that, for influence, you are not talking about the actual values. I think you do not contend that adding a d6 to a fixed value of 137 is somehow less random than adding a d6 to a fixed value of 4.)

So, as it turns out, for distributions with a lower sDev, the fixed part will have more influence in determining the result, as compared to one with a higher sDev. This is simply because lower sDevs tend to produce more centered, predictable results. Thusly, your statement is true, but it is a consequence, not a premise of the reasoning.

Quote
In a more random system, the chance of success stays closer to 50% for a given degree of numerical advantage or disadvantage

Also, this is generally not true. Sure, it's true for a symmetrical distribution, but not for asymmetrical distributions.

Not a definition, but symmetrical distributions are those where the probability of rolling below the mean is equal to that of rolling above the mean. For examples of asymmetrical distributions, consider Shadowrun's exploding d6 (mean=4.2, probability of rolling below the mean = 66%) and L5R's exploding d10 (mean=6.1(1), probability of rolling below the mean = 60%).
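(Those means check out; a short Python verification, using the standard closed form for an exploding die plus a simulation of my own:)

import random

def exploding_mean(sides):
    # E = (sides + 1)/2 + E/sides  =>  E = sides * (sides + 1) / (2 * (sides - 1))
    return sides * (sides + 1) / (2 * (sides - 1))

def roll_exploding(sides):
    total = 0
    while True:
        r = random.randint(1, sides)
        total += r
        if r < sides:   # any face but the top one ends the roll; the top face "explodes"
            return total

print(exploding_mean(6), exploding_mean(10))    # 4.2 and 6.111...
rolls = [roll_exploding(6) for _ in range(100_000)]
print(sum(r < 4.2 for r in rolls) / len(rolls)) # ~0.667 of rolls fall below the mean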

(By the way, again, for those who don't know this stuff, symmetry can be quantified. It is the cube root of the cubes of the differences yadda yadda yadda... It's really not important.)

To be true, your statement would have to read: In a more random system, the chance of success stays closer to the mean for a given degree of numerical advantage or disadvantage

(Incidentally, opposed rolls always have symmetrical distributions.)

Quote
When there is no randomness at all, we have a Karma system where the "chance of success" can be said to be 0% or 100% in any individual case.

Yes, but this is because the standard deviation here is zero. Thusly, the distribution is completely predictable and the randomness is nonexistent.

Folks, this is not rocket science, it's 10th grade math (in the US; in Portugal it's 11th grade math; go figure).

Now, let's say that I wanted "opposed" actions to really be more random. My suggestion: roll a d12 for "opposed" actions and a d6 for "unopposed" ones. Then, you'll truly be more random.

In summary: more dice does not yield more randomness. It just yields more different values. But they are more predictable.

I hope I at least made sense.

Cheers,

J.

P.S. To Mike: I still think all this is rather intuitive. :)
João Mendes
Lisbon, Portugal
Lisbon Gamer

Christoffer Lernö

J, you miss the point.

2D6+skill against fixed target number, standard deviation = 2.4...

2D6+skill against 2D6+skill, standard deviation = 3.4... (since it's equal to rolling 4D6)

So contrary to what you state

Quote
In summary: more dice does not yield more randomness. It just yields more different values. But they are more predictable.
...more dice IS more randomness.

Based on your earlier definition stating that
Quote
"Less random" means that the standard deviation of the results distribution is lower.

Now if you had said xD6 divided by x then YES, the curve would have been more centered and the standard deviation would go down. But opposed mechanics don't usually go around normalizing things this way.

For those interested, here's a way to calculate the standard deviation of any nD6 roll: sqrt (35*n/12).


Edit: For nDx the standard deviation is sqrt[(x*x-1)*n/12], in case anyone was interested. That roughly makes it proportional to the number of sides on the dice incidentally. Keeping the sides fixed we find that the deviation is proportional to the square root of the number of dice. Maybe basic, uninteresting stuff, but what the heck.
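(A quick Python cross-check of that closed form against brute-force enumeration:)

from itertools import product
from math import sqrt
from statistics import pstdev

def sdev_ndx(n, x):
    # the closed form above: sqrt[(x*x - 1) * n / 12]
    return sqrt((x * x - 1) * n / 12)

for n, x in [(2, 6), (4, 6), (1, 10)]:
    exact = pstdev(sum(dice) for dice in product(range(1, x + 1), repeat=n))
    print(n, x, round(exact, 3), round(sdev_ndx(n, x), 3))
# 2d6: 2.415 both ways; 4d6: 3.416; 1d10: 2.872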
formerly Pale Fire
[Yggdrasil (in progress) | The Evil (v1.2)]
Ranked #1005 in meaningful posts
Indie-Netgaming member

Walt Freitag

M. J., those are some very good points. Perception matters here because end-user usability is critically important. For example, if GMs tend to think of an opposed roll against an equally skilled opponent as a "typical" challenge, but are tempted to add a few extra points of difficulty to the target number in every "typical" unopposed skill test, then the comparison no longer works.

So I have no objection to saying "opposed roll mechanics are more reliable," when "reliable" is used in the sense of it being somewhat easier to design and/or use the mechanism so that it is consistent and well-behaved. JMendes's point about rolls with asymmetrical distributions, such as occur when exploding dice are used, emphasizes this. With an opposed roll, most statistical weirdness cancels out, while with an unopposed roll, one can be left scratching one's head over whether the base target number should be scaled so as to represent the mean, median, mode, or some other point in the outcome distribution.

However, the determination of the difficulty level has potential problems of subjectivity and misunderstanding of the system whether the mechanism is opposed or unopposed. With opposed rolls one problem that could occur is understating the point differential -- "This chasm should be very easy for this character to jump over, so I'll set its difficulty level at three points below the character's jumping skill" -- especially if the GM believes that the die roll outcomes are more center-weighted than they really are, which would lead to overestimating the advantage a three-point differential would confer.

These sorts of problems are why I personally prefer systems based on exponential functions, so that one can look at difficulty modifiers on a scale that tells you exactly how many points of stat will make a challenge "twice as difficult."

JMendes, thanks for joining in. I quoted your post because it was an unusually clear and concise statement of an idea that's widely held here. It's a question I've been looking at for some time and have discussed before, mostly with Mike Holmes. So please don't think I was playing jump-all-over-the-relatively-new-guy. :-)

I admit that my definition of randomness is nonstandard from a pure math point of view (as is yours, see below). It applies only in the specific context of comparing one resolution mechanism to another similar resolution mechanism. What I'm talking about is the relation between the stat differential and the predictability of the results. I call a system less random if its results are more predictable over most or all of the range of stat differentials.

The standard deviation of the dice rolls in the resolution system is one of the factors that figures into the (as-I've-defined-it) randomness. But not the only factor. To paraphrase one of your own examples, a resolution mechanism of rolling 2d6 to beat a target of 127 (which, therefore, can produce success only if the character's effectiveness is 116 or higher) is far more predictable and less random than rolling 2d6 to beat a target of 7, even though the standard deviations of the outcome distributions are identical in both cases.

From a pure math standpoint, standard deviation has no relation to randomness. If I lay out a table of the sums of all 36 possible pairs of the integers 1 through 6, that set of numbers, which is not in any way random, will have the same standard deviation as a sufficiently large number of truly random fair 2d6 rolls. Testing for randomness requires much more sophisticated statistical analysis. I doubt any mathematician would take seriously the notion that a random string of ones and zeros was somehow less random, by virtue of its lower standard deviation, than a random string of integers between 1 and 1 million.

And the same holds true for practical system design purposes as well. A roll of 2d6, 1d100, 123d47, or flipping a coin are all equally random. They're all governed solely by chance. They're all utterly meaningless until a system puts an interpretation on the outcome. Mathematically rigorous definitions of degree of randomness won't help us at all.

Whereas I can be completely confident that when success and failure are the only possible outcomes, a resolution instance with a success probability farther from 0.5 is more predictable in its outcome than a resolution instance with a success probability closer to 0.5. (And I'm willing to back that belief up with hard cash bets.) Calling a mechanism "more random" when its results are consistently less predictable over a wide range of possible conditions seems appropriate to me.

But even if we did define randomness as the standard deviation of the die roll, which is useful up to a point, more dice does add more randomness. It increases the standard deviation, as Pale Fire pointed out.

I think Pale may be correct in guessing that you're thinking in terms of normalized systems. None of the mechanisms I'm talking about is normalized. Adding an opposition roll means adding more dice, not breaking larger dice down into multiple smaller dice with the same outcome range. You compare 2d6-1 and 1d11, and I can hardly argue with your assessment of them. Certainly 2d6-1 has a smaller standard deviation, and if used in a resolution mechanism would probably yield a less random mechanism, than 1d11. But this comparison involves two different die rolls carefully selected to have the same outcome range. It was nothing like the comparisons I was making or that are usually being made when people compare opposed and unopposed resolution mechanisms within the same system.

Your suggestion about rolling a larger die for opposed actions has some merit. An opposed action is more random than an unopposed one if the same die rolls are used, but it might not be as much more random as some players believe or desire it to be. The opposed roll is not by any stretch of the imagination "twice as random" even though twice as many dice are involved. Those who want more randomness in opposed rolls (for example, to create combats in which even the superior fighters get beat up a lot) should consider that option.

- Walt
Wandering in the diasporosphere

Jeremy Cole

What about the phrase:
I much prefer to make "opposed" tasks that little bit more "fortune heavy".

Maybe if we use the term fortune heavy, there would be no confusion with the probability definition.  Everyone would know that it meant based more on the fortune of the dice than the statistics of the players.

So then, I don't think people can argue that, all else remaining equal, an opposed test is more 'fortune heavy' than an unopposed test.

Jeremy
what is this looming thing
not money, not flesh, nor happiness
but this which makes me sing

augie march

Valamir

Quote
(I am assuming that, for influence, you are not talking about the actual values. I think you do not contend that adding a d6 to a fixed value of 137 is somehow less random than adding a d6 to a fixed value of 4.)

Actually I do.  In fact, this difference is what I think of when talking about whether a system is more or less random.

1d6 + 137 yields a range of results of 138-143.
The highest possible result is 3.6% higher than the lowest possible result and 1.8% higher than the mean.

1d6 + 4 yields a range of results of 5-10.
The highest possible result is 100% higher than the lowest possible result
and 33.3% higher than the mean.

Thus, I would most certainly perceive the latter to be far more random than the former even though the random element in both cases is identical.
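(Ralph's percentages, verified with a few lines of Python:)

for bonus in (4, 137):
    lo, hi = 1 + bonus, 6 + bonus
    mean = (lo + hi) / 2
    print(f"1d6+{bonus}: range {lo}-{hi}, "
          f"max {100 * (hi / lo - 1):.1f}% above min, "
          f"{100 * (hi / mean - 1):.1f}% above mean")
# 1d6+4:   range 5-10,    max 100.0% above min, 33.3% above mean
# 1d6+137: range 138-143, max 3.6% above min,   1.8% above mean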

Mike Holmes

Interestingly, this debate makes the point I was making in my rant about Opposed rolls, which is that people adopt Opposed or Unopposed rolls with certain assumptions about them, when those assumptions are obviously untested. As we can see here, even after some hardcore analysis there is still major debate about the validity of a common assumption, or its opposite.

My point has always been that if one uses a single system that such debate becomes meaningless.

This is important because the much more difficult question has yet to be answered, which is whether or not "opposed" tasks are more random in Real Life (or worse, in the perception of players). Which is problematic as it involves quite a bit of subjectivity, and I think there is little in the way of objective study that has been done.

And in any case, the end result is just a model, and will not model things all that accurately in any case (we'll find that the bell curve needs to be supplanted by a natural log function or something).

Much easier to just assume that all cases are equally random, and have the advantage of using a single system to resolve them all. And just as "Accurate". Or so I would contend. One finds that such systems produce data that is satisfactory to players. Which is the only real test, isn't it?

Mike
Member of Indie Netgaming
-Get your indie game fix online.

Christoffer Lernö

You bring up a valid point, Ralph, given that the modifier is proportional to target numbers and such.

For example if Good is 7 and Excellent is 10 with a d6+4, and 140 is Good and 143 is Excellent with the d6+137, then obviously the randomness is identical.

However if the modifier is proportional to the target numbers, so that maybe for the +137 game 100 is Good and 150 is Excellent, then obviously your results won't vary much regardless of what you roll.

I guess we could make another measurement which is (standard deviation of die roll)/"Good skill rating". This is only applicable to the nDx+bonus kind of rolls though.

It might be of interest to look at some:

on a d100, "good" is usually around 50: 0.58
a d20, "good" is 10: 0.58
d10, "good" is 10: 0.29
d6+4, "good" is 4: 0.43
d6+137, "good" is 130: 0.013

Although this doesn't strictly have anything to do with the opposed rolls, this might be a way to measure practical randomness. "If I have a d20 and "good" is 30, how does that compare to 3d6 where "good" is 11?" for example (values 0.19 and 0.27 respectively so d20 with 30 is less random)
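(The same ratios in Python, for anyone who wants to try other combinations; the function name is mine:)

from math import sqrt

def sdev_ratio(n_dice, sides, good):
    # (standard deviation of the roll) / ("good" skill rating)
    return sqrt(n_dice * (sides * sides - 1) / 12) / good

for n, s, g in [(1, 100, 50), (1, 20, 10), (1, 10, 10),
                (1, 6, 4), (1, 6, 130), (1, 20, 30), (3, 6, 11)]:
    print(n, s, g, round(sdev_ratio(n, s, g), 3))
# matches the 0.58, 0.58, 0.29, 0.43, 0.013, 0.19 and 0.27 figures above, to rounding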
formerly Pale Fire
[Yggdrasil (in progress) | The Evil (v1.2)]
Ranked #1005 in meaningful posts
Indie-Netgaming member

Walt Freitag

Quote from: nip...dip
Maybe if we use the term fortune heavy, there would be no confusion with the probability definition. Everyone would know that it meant based more on the fortune of the dice than the statistics of the players.

Along similar lines, I was thinking about the term "stat-sensitive." More stat-sensitive means a given stat differential is more of an advantage or disadvantage.

("Stats" here is inclusive of all numbers that figure into the roll, including requisites, skill scores, and situational modifiers.)

Ralph and Christoffer's points allude to the idea that relative stats can also be thought of as ratios instead of differences, which changes all the math. In the "roll + stat vs. target" or "roll + stats vs. roll + stats" systems it makes sense to think of them as differences, since a given difference yields the same outcome distribution regardless of the absolute magnitude of the stats involved. An opposed roll of skill 100 vs. skill 101 is no different from skill 1 vs. skill 2. But it's also possible to conceive of mechanisms where to have the same advantage over a skill of 100 as a skill 2 has over a skill 1, you'd need a skill of 200.

Mechanisms with opposing dice pool rolls of varying size (where stats determine the number of dice rolled on each side) tend to be somewhere in between. Typically in such mechanisms the advantage of a skill 2 over a skill 1 is much larger than that of the advantage of a skill 10 over a skill 9 (same difference in skill stats), but it's also much smaller than the advantage of a skill 10 over a skill 5 (same ratio of skill stats). The stat-sensitivity, however it's measured, varies in too complex a way for any simple comparisons to be made. So... even opposed rolls can get tricky.
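(No specific pool mechanic is named above, so here's a Python sketch assuming the simplest one -- each side rolls its pool of d6s and the higher total wins -- which shows the pattern: the 2-vs-1 edge is larger than the 10-vs-9 edge but much smaller than the 10-vs-5 edge:)

from collections import Counter
from fractions import Fraction

def pool_dist(n_dice, sides=6):
    # exact distribution of the sum of n_dice dice, built by repeated convolution
    dist = Counter({0: 1})
    for _ in range(n_dice):
        nxt = Counter()
        for total, ways in dist.items():
            for face in range(1, sides + 1):
                nxt[total + face] += ways
        dist = nxt
    return dist

def p_beats(n_a, n_b):
    a, b = pool_dist(n_a), pool_dist(n_b)
    wins = sum(wa * wb for ta, wa in a.items() for tb, wb in b.items() if ta > tb)
    return Fraction(wins, sum(a.values()) * sum(b.values()))

for n_a, n_b in [(2, 1), (10, 9), (10, 5)]:
    print(n_a, "dice vs", n_b, ":", round(float(p_beats(n_a, n_b)), 3))
# roughly 0.84, 0.66 and 0.99 respectively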

Mike's point continues to be underscored, in spades.

- Walt
Wandering in the diasporosphere

JMendes

Hey, all, :)

(I know some of you guys hate this, but some of these long posts contain so many different points that it's hard to respond meaningfully to them without breaking them up.)

Quote from: wfreitag
JMendes, thanks for joining in. <...> So please don't think I was playing jump-all-over-the-relatively-new-guy. :-)

Thanks for the welcome and I thought no such thing. :)

Quote
The standard deviation of the dice rolls in the resolution system is one of the factors that figures into the (as-I've-defined-it) randomness. But not the only factor. To paraphrase one of your own examples, a resolution mechanism of rolling 2d6 to beat a target of 127 (which, therefore, can produce success only if the character's effectiveness is 116 or higher) is far more predictable and less random than rolling 2d6 to beat a target of 7, even though the standard deviations of the outcome distributions are identical in both cases.

Quote from: Valamir also
Quote
(I am assuming that, for influence, you are not talking about the actual values. I think you do not contend that adding a d6 to a fixed value of 137 is somehow less random than adding a d6 to a fixed value of 4.)
Actually I do.  In fact, this difference is what I think of when talking about whether a system is more or less random.

Ack. This doesn't make any sense to me. I can't compare 2d6 vs. 7 with 2d6 vs. 127, unless I'm adding a substantial amount to the die roll in the second case. And in any case, my example was that 2d6 vs. 7 has exactly the same 'randomness' as 120+2d6 vs. 127. The relationship between the 'highest possible value' and the 'lowest possible value' is of utter inconsequence in this discussion.

Quote
From a pure math standpoint, standard deviation has no relation to randomness. If I lay out a table of the sums of all 36 possible pairs of the integers 1 through 6, that set of numbers, which is not in any way random, will have the same standard deviation as a sufficiently large number of truly random fair 2d6 rolls. Testing for randomness requires much more sophisticated statistical analysis. I doubt any mathematician would take seriously the notion that a random string of ones and zeros was somehow less random, by virtue of its lower standard deviation, than a random string of integers between 1 and 1 million.

Erm... no. The 'table' has a standard deviation of 0. It's there. It's not random. At all. As for a 'sufficiently large' number of 2d6 rolls, that's also irrelevant. We're talking about rolling once. That's your random variable. (Sometimes, it's easy to confuse the random variable with the distribution of the variable. This may or may not be what you were doing.)

As for the string of random 1's and 0's. Is it fixed length? Is it being generated 50/50? If the answer is yes, then this is just the same as randomly generating a number between 0 and 2^n-1 (where n is the length of the string), and it has the same standard deviation as rolling a die with that many faces. If it's not fixed length, then you can still calculate its deviation, but you need other tools such as spectral analysis of some kind or other, but now you're entering into the realm of what I call rocket science... ;) Either way, such a string definitely has a much higher sDev than simply tossing a coin.

Quote
And the same holds true for practical system design purposes as well. A roll of 2d6, 1d100, 123d47, or flipping a coin are all equally random. They're all governed solely by chance. They're all utterly meaningless until a system puts an interpretation on the outcome. Mathematically rigorous definitions of degree of randomness won't help us at all.

Ok, now we're getting to the gist of our argument, and I must say that I have to concede to your point that a non-mathematical definition of randomness will be more useful.

I suppose that a 'theory of rpg randomness' is in order. :) Factors to consider are the average skill level, the average target number, and the average and standard deviation of the die. And that's just for a simple, linear, additive roll. Never mind dice pools. I'll think about this some more before pontificating on it...

Quote
But even if we did define randomness as the standard deviation of the die roll, which is useful up to a point, more dice does add more randomness. It increases the standard deviation, as Pale Fire pointed out.

I think Pale may be correct in guessing that you're thinking in terms of normalized systems. None of the mechanisms I'm talking about is normalized. Adding an opposition roll means adding more dice, not breaking larger dice down into multiple smaller dice with the same outcome range.

Quote from: Pale Fire
Now if you had said xD6 divided by x then YES, the curve would have been more centered and the standard deviation would go down.

Pale is correct (and you are right in thinking that he was). I was indeed thinking about normalized mechanics. I think Mike and I agree on the basic premise that many designers introduce dual mechanics for the wrong reasons and without thinking out the consequences. As such, normalized mechanics seemed like an easy route to expose such unconscious assumptions.

Quote
You compare 2d6-1 and 1d11, and I can hardly argue with your assessment of them. Certainly 2d6-1 has a smaller standard deviation, and if used in a resolution mechanism would probably yield a less random mechanism, than 1d11. But this comparison involves two different die rolls carefully selected to have the same outcome range. It was nothing like the comparisons I was making or that are usually being made when people compare opposed and unopposed resolution mechanisms within the same system.

This, IMHO, is one such assumption. Once you have opposed and unopposed rolls, it is no longer the same system, any more than the 2d6-1 and the d11 are the same system.

Quote
Your suggestion about rolling a larger die for opposed actions has some merit.

Anyone who wants it is free to use it. :) One note on balance: if you use the d6 for unopposed and the d12 for opposed, for instance, consider whether you need to add 3 to the target number for opposed actions, in order to keep the skills used under balance. (Depending on how target numbers are calculated, this may or may not be a factor.)

Anyway, I'll get back to you guys on that 'theory of randomness' stuff. :)

Cheers,

J.
João Mendes
Lisbon, Portugal
Lisbon Gamer

Valamir

Quote from: JMendes
Quote from: Valamir also
Quote
(I am assuming that, for influence, you are not talking about the actual values. I think you do not contend that adding a d6 to a fixed value of 137 is somehow less random than adding a d6 to a fixed value of 4.)
Actually I do.  In fact, this difference is what I think of when talking about whether a system is more or less random.

Ack. This doesn't make any sense to me. I can't compare 2d6 vs. 7 with 2d6 vs. 127, unless I'm adding a substantial amount to the die roll in the second case. And in any case, my example was that 2d6 vs. 7 has exactly the same 'randomness' as 120+2d6 vs. 127. The relationship between the 'highest possible value' and the 'lowest possible value' is of utter inconsequence in this discussion.

Actually I think it's of paramount importance in a discussion of mechanics that are perceived by users to be more or less random.

It speaks directly to the following:

Premise:  We are discussing some mechanical system where the final result of various factors is a numerical result.  This numerical result is then interpreted by some form of If/then style test to result in an indication of success/failure or degree of success/failure.

Most of these systems are going to involve a variety of parts which for the sake of this discussion can be divided into three categories: the random component, the non-random component, and the scale of the test.

The random component is usually 1 or more dice to roll.  The non random component may be a fixed modifier to add to the dice, or a fixed target number to roll against, etc.  The scale is how the final result is interpreted.  A roll of X is "superior success", a roll of Y is "marginal success" etc. (or straight success/failure).

My contention is that the greater proportion of the final result that comes from the random component vs the non random component, the more "random" the system will be perceived to be.

For instance:

1d6 + 4.  Where the scale goes from 1 abysmal failure to 10 maximum success.  I can get anything from a 5 to a 10 on this roll.  60% of the range of total possible outcomes is available to be rolled and the best result I can get is twice as good as the worst result I can get (assuming a linear scale).

1d6 + 137. Where the scale goes from 1 abysmal failure to 200 maximum success.  I can get anything from a 138 to 143 on this roll.  This is only 3% of the possible range of results and the best result I can hope for is only 1.03 times as good as the worst.  

Thus, even though both results are numerically identical 1-6 range with flat distribution for each number, the first is far more "random" in possible outcomes than the latter.

There is much more at work than simple probability distributions of the dice rolled.  You need to do a possibility distribution for the total range of all possible outcomes in the game, relative to the possible outcomes for a given roll.

The above examples are extreme intentionally for illustration purposes.

JMendes

Heya, :)

Quote from: Valamir
Quote from: JMendes
The relationship between the 'highest possible value' and the 'lowest possible value' is of utter inconsequence in this discussion.
Actually I think it's of paramount importance in a discussion of mechanics that are perceived by users to be more or less random.

Premise:  We are discussing some mechanical system where the final result of various factors is a numerical result.  This numerical result is then interpreted by some form of If/then style test to result in an indication of success/failure or degree of success/failure.

Erm... I'd prefer that reworded as: We are discussing some mechanical system where the final result of various factors is an indication of success/failure and degree thereof. This indication is based on interpretation of some generated numerical result by some form of if/then test.

My point is that the final result is the indication of (the degree of) success or failure. Any intermediate numerical results are just that: intermediate.

Quote
The non random component may be a fixed modifier to add to the dice, or a fixed target number to roll against, etc.

This is all well and good, as long as you acknowledge that any given roll is only going to have one fixed component, not two. Thusly my claim:

Quote from: I
my example was that 2d6 vs. 7 has exactly the same 'randomness' as 120+2d6 vs. 127

This is because the non-random component is the same in both cases, to wit, seven (in the second case, arrived at via the simple calculation 127-120=7).

Quote
My contention is that the greater proportion of the final result that comes from the random component vs the non random component, the more "random" the system will be perceived to be.

I agree with this. Actually, my original words were not at all contrary to this, and neither were any of the follow-ups. Also, this contention bears no relation to the percent comparison between the highest possible value versus the lowest possible value, unless you take extreme care to actually figure out the correct non-random quantity against which to do the math.

Quote
1d6 + 4.  Where the scale goes from 1 abysmal failure to 10 maximum success.  I can get anything from a 5 to a 10 on this roll.  60% of the range of total possible outcomes is available to be rolled and the best result I can get is twice as good as the worst result I can get (assuming a linear scale).

1d6 + 137. Where the scale goes from 1 abysmal failure to 200 maximum success.  I can get anything from a 138 to 143 on this roll.  This is only 3% of the possible range of results and the best result I can hope for is only 1.03 times as good as the worst.

Actually, for both examples, and until you specify how you arrive at the 4 and the 137, you can always get 100% of all possible results. Also, the scales are mismatched. For the examples to be related to my original words, you'd have to get the second scale to read 134 abysmal failure to 143 maximum success. Now, even though your best result is still 1.04 times your worst, you are still getting 60% of the possible results. Thusly, and this was my original point, what matters is the 60%, not the 1.04, which is of utter inconsequence.

A clue is the fact that you state you "assume a linear scale". Well, any function that includes an added constant is, by definition, not linear. For those that care, the definition of a linear function is any function for which the following is true:

f(kx + y) = k f(x) + f(y)

Quote
Thus, even though both results are numerically identical 1-6 range with flat distribution for each number, the first is far more "random" in possible outcomes than the latter.

Of course, this statement is true. But, it is true because of the scales, not because of the added part. In other words, rolling 1d6 on a scale of 1-10 is far more random than rolling 1d6 on a scale of 1-200, regardless of the 4 or the 137. Again, the ratio between the high roll or the low roll is irrelevant.

Cheers,

J.
João Mendes
Lisbon, Portugal
Lisbon Gamer