The Forge Archives

Archive => Indie Game Design => Topic started by: Don Lag on June 12, 2001, 11:30:00 PM

Title: Algorithms, complexity and what any of this has to do with R
Post by: Don Lag on June 12, 2001, 11:30:00 PM
First off, I must admit to being a computer engineering & physics major in progress. So all this is predictably spawned from way too much contact with these disciplines.

Many decisions about the system I'm currently working on have to do with minimizing the complexity of determining results. For example, I despise AD&D's add-a-few-hundred-modifiers approach to just about everything (although D&D 3rd ed has addressed this issue by reducing the number of mods).

In trying to achieve maximum simplicity (and therefore playing speed), I have been tempted to apply algorithm analysis techniques, such as considering that sorting N dice would be an O( N log(N) ) task on average.

I'm aware the example isn't very bright, but I was just wondering if anyone has seriously attempted to do this kind of analysis.

[ This Message was edited by: Don Lag on 2001-06-12 23:31 ]
Title: Algorithms, complexity and what any of this has to do with R
Post by: greyorm on June 12, 2001, 11:35:00 PM
Quote
In trying to achieve maximum simplicity (and therefore playing speed), I have been tempted to apply algorithm analysis techniques, such as considering that sorting N dice would be an O( N log(N) ) task on average.

I'm aware the example isn't very bright, but I was just wondering if anyone has seriously attempted to do this kind of analysis.
I'm utterly lost...come again?
(I know nothing about algorithm analysis techniques or anything mathematically similar...so that gives you my response: I haven't! [grin])
Title: Algorithms, complexity and what any of this has to do with R
Post by: Don Lag on June 13, 2001, 12:13:00 AM
heh

umm... I'm not sure how well my explaining abilities measure up, but....

In designing algorithms (basically a structured set of steps to take in order to solve any of a family of problems, for example: sorting a list), one analyses (is that the word?) the time it takes to execute said algorithm.

This "time" is measured in steps, depending on the size of the input. For example, searching for a value in a list (i.e. finding the number '5' in the list { 2, 8, 5, 10, 1 } ) will take an average of N/2 steps. If the list is ordered from least to greatest, a better algorithm (binary search) can be applied. This takes an average of log base-2 of N steps to complete, less than N/2.
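If a concrete picture helps, here's a rough sketch in Python (just my own toy illustration, nothing more) that counts the comparison steps each approach takes:

# Count comparison steps for linear vs. binary search -- illustration only.

def linear_search(items, target):
    # Scan left to right; returns (found, steps).
    steps = 0
    for value in items:
        steps += 1
        if value == target:
            return True, steps
    return False, steps

def binary_search(sorted_items, target):
    # Repeatedly halve the search range; returns (found, steps).
    steps = 0
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return True, steps
        elif sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return False, steps

print(linear_search([2, 8, 5, 10, 1], 5))    # (True, 3)
print(binary_search([1, 2, 5, 8, 10], 5))    # (True, 1) -- 5 happens to sit in the middle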

My idea comes from the observation that a game that requires adding N dice to resolve a combat result is more complex than one that only requires finding the largest die. That part is obvious; what I really wonder is whether, in general, it is any use to put common algorithm analysis to work on slimming down RPG rules.
Title: Algorithms, complexity and what any of this has to do with R
Post by: james_west on June 13, 2001, 02:09:00 AM
The phenomenon is one we've addressed here before, although we've called the issue search time (finding relevant rules) and handling time (applying them after you've found them).

It is a very relevant issue, and if you can come up with something clever based on the algorithms you're familiar with, I'm sure it would be well received.

        - James
Title: Algorithms, complexity and what any of this has to do with R
Post by: Ron Edwards on June 13, 2001, 11:09:00 AM
Hi there,

Bit of background: I co-opted the terms search time and handling time from ecology, in which they refer to food-foraging strategies. Both are discussed at the end of the "System Does Matter" essay, which I am convinced is usually read only halfway-through.

To clarify a bit: search time refers to any and all attention spent right up to the moment of the resolution mechanic being employed (e.g. dice hitting the table, for most RPGs); handling time refers to all such attention spent from that moment on until that action is considered "done." Damage rolls, for instance, would be included as part of handling time.

Reducing search time has received a lot of attention in commercial RPG design, but reducing handling time is still in development. I modestly claim that Sorcerer and Elfs do a nice job in this department, as do The Dying Earth, Story Engine, and Ghost Light. Zero, Orkworld, Unknown Armies, Swashbuckler, and Hero Wars come in a very close second, I think. One of the few flaws of Extreme Vengeance is the grunt-work of its handling time.

Not surprisingly, Drama-mechanic games like The Window, some options in Everway, and Puppetland have very low to absent search and handling times, and I think this is generally true as long as there are pretty strict rules about how to employ the mechanic. The bid-based Drama-mechanic games like Pantheon and Soap add some handling time, but as we know, well within acceptable limits.

A related issue has to do with one of the fundamental mechanics differences between Simulationist and Narrativist design; the former tends to go with step-by-step, chronological resolution, reducing non-constructive negotiation as much as possible; and the latter tends to go with Fortune-in-the-middle (when Fortune is employed), retroactive descriptions, and encouraging constructive negotiation as much as possible.

Just how one goes about reducing search and handling time within either of these approaches is going to differ a lot between them. In The Window and Fudge, for instance, I think that the Simulationist mechanic is stripped down about as far as it can go. [Side note: I also think the philosophy of resolution in The Window, no matter how light it is, is contradictory to its Three Precepts, which are Narrativist.] Whereas Hero Wars, for instance, has a lot more point-keeping and dice rolls than The Window, but the mechanics are focused precisely on Narrativist concerns, especially insofar as everyone at the table is engaged, and thus do not generate a "wait" feeling during play.

Best,
Ron
Title: Algorithms, complexity and what any of this has to do with R
Post by: Don Lag on June 13, 2001, 10:25:00 PM
Funny, I wasn't referring to that aspect of complexity (which seems important now that I think of it), but rather the complexity that arises from the actual rules once applied.

For example, an impossibly complicated game could state that one must roll 3 dice, and the value of the roll is the highest common divisor amongst them. On the other hand, a system that asks for a roll of a single die, where whatever pops up is the value, is much simpler.

Adding modifiers would raise the complexity too, as would any algebraic operation between the dice (adding, subtracting, multiplying...). Searching for the maximum amongst a number of dice seems to be a simple enough operation (even for 10+ dice), but comparing two different rolls of many dice is a pretty heavy operation.

This all seems rather subjective and qualitative, and I feel motivated to apply computation techniques in analysing it more deeply (even if it's just out of curiosity).

If I *DO* come around to actually analysing this stuff, I'll let you all know first, and I'll try to see how the Searching & Handling idea fits in.
Title: Algorithms, complexity and what any of this has to do with R
Post by: Zak Arntson on June 14, 2001, 01:36:00 AM
(Confessing to also being a programmer)

I'd love to see what you come up with.  Complexity & algorithms always interest me (more so with JavaScript, since I have to balance the interpreted language AND the code that interprets it) ...

Anyhow, I think that the algorithms should apply to the search & handle time.  It makes sense that a die roll and a comparison is easier than a die roll +/- any modifiers, look up on chart, etc. etc.

Also remember that humans are doing the computations.  So comparison will be fastest, addition comes next.  Then probably subtraction & doubling.  Division seems to me to be the end of the computing spectrum (unless you have players doing square roots or logs or something!)

You'll also have to figure in any reference lookups.  Takes up lots of time if you have to reference charts.

And special rules.  I just got done playing D&D and boy do we consult the rulebook a lot just for things like subdual damage.

Anyhow, you've got at least one person excited about complexity ...
Title: Algorithms, complexity and what any of this has to do with R
Post by: Don Lag on June 14, 2001, 01:55:00 AM
yay! Another computer freak.

I actually had a test today for my Algorithms Design & Analysis course, though I swear I had this idea a long time ago and it isn't some weird falling in love with the course thingie :smile:

Your comment on humans doing the computations brings up another idea I've had for some time... but it's a really dry mathematical discussion (or at least I haven't found the proper way to explain it in commonspeak).

Anyway, I haven't done any real study on the subject, but for starters I think a good starting point would be to consider:

Adding N dice: O(N) //obvious

Searching for highest die amongst N dice:
 Humans are far better at analyzing a group of elements and determining global properties than computers. So while a computer would take O(N) to find the maximum die, a human (I think) would take so little that it would be fair enough to consider it O(1) for most reasonable values of N (up to 10, for example).

I'm stopping myself right away here. I'm starting to see that there are perhaps some considerations missing in formulating this problem.

I'll give it a thought then post later.
Title: Algorithms, complexity and what any of this has to do with R
Post by: Zak Arntson on June 14, 2001, 02:48:00 PM
Wow.  This is cool.  We get to assign complexities based on human behavior.  You know, my workmate has got this great book on applying psychology to user interfaces.  Things like how many simultaneous tasks a human can reasonably perform, or how accurately someone can remember multiple pieces of information (I think if you go beyond 3 it's bad).

And since people are pattern-matching fiends, and not so good at quick bit-flipping ... the complexities are WAY different than a computer.

So yeah, I would say that:

Picking Highest/Lowest Die among X dice : O(n)
Comparing Roll to Number : O(n)
Comparing Roll to Number & Determining Higher/Lower : O(n)

For more complicated things, I think some research would have to be done.

And reducing complexity wouldn't work the same.  Say adding numbers is O(x * n), well that doesn't reduce to O(n) since a human can't add 50 numbers as fast as 1.  So maybe it needs to be O(n^(x-1)) or something?

Just some thoughts.
Title: Algorithms, complexity and what any of this has to do with R
Post by: jburneko on June 14, 2001, 02:57:00 PM
I just wanted to jump in here and say that there is yet another person who understands this conversation.  I have a degree in computer science and a minor in mathematics.  I also still own all my text books and they are in an accessible place so if you need anything looked up just let me know.

This is very cool.  I'd never really thought about this before but when you think about it you realize that it's a very important design consideration.

Jesse
Title: Algorithms, complexity and what any of this has to do with R
Post by: John Wick on June 14, 2001, 06:39:00 PM
I just wanted to jump in and say...

I DON'T understand this conversation - and I think it's very interesting that "I want a simple system" and "algorithms" are being used in the same sentence. :wink:

Take care,
John

[ This Message was edited by: John Wick on 2001-06-14 18:39 ]
Title: Algorithms, complexity and what any of this has to do with R
Post by: Mytholder on June 14, 2001, 06:52:00 PM
For John and any other non-techies...

Big-Oh notation is a way of describing how long a particular algorithm (method of doing something) will take to complete. It's expressed in terms of the number of things the algorithm has to process. So if a computer has to sort N things, an O(N) algorithm will take an amount of time that's directly proportional to the number of things. An O(N-squared) algorithm will take a long time, proportional to the square of the number of things. In contrast, an O(1) algorithm will always take the same amount of time.

(It's a year since I graduated, and longer since I actually looked at books on computing theory, so I might be getting some of this wrong.)

Title: Algorithms, complexity and what any of this has to do with R
Post by: John Wick on June 14, 2001, 07:09:00 PM
Sorry Myth, but your explanation didn't help me much.

How about someone using English - preferably small words for us Philosophy types.
Title: Algorithms, complexity and what any of this has to do with R
Post by: Zak Arntson on June 14, 2001, 08:24:00 PM
I'll try.

When you write an algorithm, it will take a certain amount of time to complete.  This can be dependent on the amount of data your algorithm must munch before it's done.  Programmers get raises (at least they used to; I bemoan the current Microsoft way of "faster computers = we can push bloated code") with quicker programs, so making a program run fast is key.

If an algorithm is O(n), this means that it gets as complex as what you throw in.  There's a one-to-one relationship.

If the algorithm is O(n^2), it's not so quick.  I give you three numbers, you do nine (n = 3, n^2 = 9) calculations.

And so on.  There's all sorts of complexities, like O(log n) and O(2^n), but don't worry about that for now.

What's neat about this method (for programming, anyway) is that you can figure out your algorithm's complexity, say n^2 - 3n + 2, and crunch that down to a complexity of O(n^2).  Then you know where to concentrate retooling your code to make it faster.




So how does this apply to gaming?  Here's my take.  Say you have a system where you roll x dice, add them up, and compare that to a difficulty.  The complexity would be as follows (I'm just throwing complexities around; I don't know any tested complexity of rolling, comparison, etc.).  This is way different from computer theory, since people are good at different things:

(roll 4 dice) + (total them up) + (compare to difficulty) ->
(roll 4 dice) + (sum 4 items) + (comparison) ->
(4n) + (n^(4 - 1)) + (1) ->
n^3 + 4n + 1

Now, when determining complexity, you take the POTENTIALLY BIGGEST piece, so here we drop the 4n and the 1 to get a complexity:

O(n^3)

At a glance, we can tell that the step where we total the rolls is the most time-consuming.  So if we want to quicken up the mechanic, we should start there.  Say we pick the highest instead, which is easy for humans.

(roll 4 dice) + (pick highest roll) + (compare to difficulty) ->
(roll 4 dice) + (seek highest of 4) + (comparison) ->
(4n) + (4n) + 1 ->
8n + 1

Complexity: O(8n + 1) = O(8n).

SIDE NOTE: With computers, you can just drop the 8 to get a complexity of O(n).  But with people, I think we should leave it, since a complexity of O(20n) would take a lot more time for a human
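If anyone wants to fiddle with the bookkeeping, here's a little Python sketch.  The per-operation costs are pure guesses on my part (that's the whole research question), but it shows the kind of counting I mean:

import random

# Made-up "mental effort" costs per operation -- pure guesses, tweak to taste.
COST = {"roll_die": 1, "add_pair": 2, "compare": 1, "pick_highest": 1}

def resolve_sum(num_dice, difficulty, sides=6):
    # Roll, total the dice, compare to the difficulty; returns (success, effort).
    effort = num_dice * COST["roll_die"]
    dice = [random.randint(1, sides) for _ in range(num_dice)]
    effort += (num_dice - 1) * COST["add_pair"]    # totaling N dice takes N-1 additions
    effort += COST["compare"]
    return sum(dice) >= difficulty, effort

def resolve_highest(num_dice, difficulty, sides=6):
    # Roll, pick the single highest die, compare to the difficulty.
    effort = num_dice * COST["roll_die"]
    dice = [random.randint(1, sides) for _ in range(num_dice)]
    effort += COST["pick_highest"] + COST["compare"]   # "spot the biggest" treated as one cheap step
    return max(dice) >= difficulty, effort

print(resolve_sum(4, 14))       # success varies; the effort tally is always 11
print(resolve_highest(4, 5))    # success varies; the effort tally is always 6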

Wow ... we're on the forefront of gaming theory (well, probably not, but I bet it's a first for rpgs anyway).




Real Life example (though I didn't do it in theory, I did it intuitively at the time).  In my Chthonian game, combat went Attacker vs. Defender.  Then the defender had a chance to attack, prompting a second Attacker vs. Defender roll.  Rolls in this game are d12 + modifiers.  Then you figure the difference.

I thought this wasn't rules-light enough to foster the kind of game I wanted, so my revision (not posted to the web yet) makes a single combatant vs. combatant roll.  Using complexity, in the first situation a single "round" of combat would be (with an average of 2 modifiers):

(1 roll + 2 modifiers) + (1 roll + 2 modifiers) + (figure difference) + (record difference) + (1 roll + 2 modifiers) + (1 roll + 2 modifiers) + (figure difference) + (record difference) ->
4 * (1 roll + 2 modifiers) + 2 * (subtract 2 items) + 2 * (record difference) ->
4 * (1n + n^(mod - 1)) + 2 * (n^2) + 2 * (1n) ->
4n + 4n^1 + 2n^2 + 2n ->
6n + 4n + 2n^2 ->
10n + 2n^2

Complexity: O(2n^2 + 10n).  I'm not going to simplify the complexity, since we're dealing with VERY SLOW COMPUTERS (as in people computing).

Once I get rid of the second bit of combat, and just have two people roll, highest wins and applies difference to loser:

(1 roll + 2 modifiers) + (1 roll + 2 modifiers) + (figure difference) + (record difference) ->
(1n + n^(2 - 1)) + (1n + n^(2 - 1)) + (n^2) + (1n) ->
[hand-waving] ->
5n + n^2.

Complexity: O(n^2 + 5n).  I've halved my complexity (which I guess is intuitively obvious, since the original was just two "rolls").
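For what it's worth, here's the same before/after count as a quick Python sketch -- the operation costs are just my guesses at what a player actually does at the table, not anything measured:

# Tally table operations per combat round -- hypothetical costs, not playtested numbers.
OPS = {"roll": 1, "apply_modifier": 1, "subtract": 2, "record": 1}

def ops_per_round(rolls, modifiers_per_roll, differences, records):
    return (rolls * OPS["roll"]
            + rolls * modifiers_per_roll * OPS["apply_modifier"]
            + differences * OPS["subtract"]
            + records * OPS["record"])

old_round = ops_per_round(rolls=4, modifiers_per_roll=2, differences=2, records=2)
new_round = ops_per_round(rolls=2, modifiers_per_roll=2, differences=1, records=1)
print(old_round, new_round)    # 18 9 -- the revision roughly halves the table work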

Yikes.  Time for me to stop writing.

The key here is that we come up with some good complexities for things.  How complex is adding two numbers?  Subtracting them?  Adding 5 numbers?  Comparing one number against another?

And how to reduce the complexity, too.  The typical Computer Science way of O(n^2 + 5n + 3) = O(n^2) isn't going to work.  O(10n) is appreciably slower than O(5n) when it comes to human beans.


_________________
Zak
zak@mimir.net
Harlekin Maus Games

[ This Message was edited by: Zak Arntson on 2001-06-14 20:30 ]

[ This Message was edited by: Zak Arntson on 2001-06-14 20:31 ]
Title: Algorithms, complexity and what any of this has to do with R
Post by: Don Lag on June 14, 2001, 09:37:00 PM
Thanks guys, I was aiming for people with a computer science background mostly because I didn't have time to post a big Introduction to Algorithms message (like Zak's!).

I can't help but mention something that has come up slightly in the discussion, namely humans as computing machines. I'm using the term "machine" in a very general fashion, basically considering that the human brain provides a mechanism for solving problems (I'm not about to delve into the philosopher's interpretation of the question).

Does the human brain provide a better mechanism than the Turing Machine? Could one conduct experiments to try to measure the humanly optimal algorithm for sorting (putting lots of people to sorting many items, in different quantities, and trying to observe any type of general behaviour: log N, N, ...)?

Random thoughts mostly...
Title: Algorithms, complexity and what any of this has to do with R
Post by: Clay on June 15, 2001, 11:14:00 AM
For the Philosophy majors in the audience, here's a solution that solves the same problem, but doesn't involve the rather unpleasant mathematics described here.

Try your system out on people who have never used it before. See how long it takes.  If it takes too long for your target audience, simplify.  If you want to know where to attack it, time each portion of the resolution and start working on the slowest one; alternatively, start working on the one that adds the least value to the game.  

For instance, if I were trying to simplify Deadlands, I'd drop the whole hit-locations thing completely. It adds a roll and a lookup to get a result that could have been achieved just as effectively without the additional roll or lookup (e.g. count raises toward a damage bonus, as in Sorcerer).  Of course, I'm looking at it from a narrativist viewpoint, not a simulationist one.  The simulationist may see value in the existing way of doing it.

This highly empirical technique will irritate the theoretical physicists in the audience and at least a few of the mathematicians.  The manufacturing engineers among you will love it.  Its only true merit is that it works, without breaking your head with the math if math isn't your thing.
Title: Algorithms, complexity and what any of this has to do with R
Post by: Zak Arntson on June 15, 2001, 11:58:00 AM
Quote
On 2001-06-15 11:14, Clay wrote:
For the Philosophy majors in the audience, here's a solution that solves the same problem, but doesn't involve the rather unpleasant mathematics described here.

You rule!  Hah!  But yeah, I would agree that playtesting is a way better tool than math for figuring out the fun-factor of a game.

I see the complexity issue coming up during initial mechanics design, before playtesting.  Or if the playtest runs badly and you can't figure out just what is messing things up.

It certainly isn't a replacement for playtesting, more of another tool (like G/N/S) for design.
Title: Algorithms, complexity and what any of this has to do with R
Post by: greyorm on June 15, 2001, 12:00:00 PM
Quote
Thanks guys, I was aiming for people with a computer science background
Hey, now, I've been in programming and technical repairs for the last three years and in all that time I've never once even needed to glance at anything vaguely algorithmic.
Heck, the last time I recall them being mentioned was when our chemistry teacher went on a physics tangent during High School.

I don't even recall the subject being broached in my college Physics or Math courses (then again, I've been piss poor at math for years, so I never moved much beyond basic Algebra except as it related directly to physical theory).

I guess I just wanted to point out all us tech-heads aren't math geeks, too. [grin]
Title: Algorithms, complexity and what any of this has to do with R
Post by: Supplanter on June 15, 2001, 12:17:00 PM
I've been messing with a die-roll idea that seemed, intuitively, to be pretty fast, and to produce a bell curve without any addition: Roll 3 dice and take the median. Since it's a compare, it seems faster than an add. I've grown to like opposed rolls, so I'd have each side doing this.

Where one goes from there brings up more serious design questions: e.g. where character traits come into the picture, what you do with the opposed rolls etc. I've been intrigued lately by the idea of results with dimensionality and, in combat specifically, flow. I've thought about determining success purely with the "attacker's" roll, while determining advantage shifts (initiative) purely with the "defender's." IOW, the side with the advantage is pressing and the side being pressed wants to turn the tables.

Say the "combat" is basketball. One side is making a run - they are in the "attacker" position. Their roll determines the level of success of their run. The other side is trying to weather the storm - they are in the defender position. Their roll determines a possible reversal of the flow of events - a high roll for the defender represents the timely three-point bucket or blocked shot that ends the "attacker's" run and shifts momentum to the former defender's side. As of the next roll, attacker and defender are reversed.

Anyway, since the emphasis is on flow and speed, fast dice-handling seems to be called for. Thoughts?

Best,


Jim

Title: Algorithms, complexity and what any of this has to do with R
Post by: Ron Edwards on June 15, 2001, 12:34:00 PM
Hey Jim,

"Pick a die" from a rolled set of several dice is an excellent method. I've only seen "highest die" methods so far - Sorcerer, Orkworld, Deadlands, and the Dream Pod 9 system.

Just to clarify, in what you propose, you mean that rolling a 4, a 2, and a 1 gets you a "2," right?

I'm not sure the "median" method does much to change the general goal, as opposed to "highest die." Also, I'm puzzled about within-the-three ties. How would you handle a roll like 4, 4, and 2?

Now for my other thought: target values vs. opposed rolls.

This is a curse and a blight that has afflicted role-playing games for decades. If my attack is a "task," and his defense is a "task," then many systems come up with highly epicyclical methods of comparing my success at my task with his success at his task. You get systems like Cyberpunk and Vampire and D6, all of which have outrageous handling times.

Anyway, as you know, Sorcerer and Over the Edge and Prince Valiant (etc) all use opposed rolls in resolving CONFLICT rather than tasks. This puts us in the tricky situation of "I win" or "he wins" without any "both succeed" or "both fail" to round out the plausible possibilities. (I think I solved this for Sorcerer by co-opting Zero's round-resolution.) Hero Wars represents yet another ever-so-slightly clunky attempt to cope.

But now for the utterly obvious yet so under-utilized option, employing target numbers. How about if the player is always ROLLING OFFENSE vs. the opponent's DEFENSE SCORE, and always ROLLING DEFENSE vs. the opponent's OFFENSE SCORE? No rolling for the GM at all. This is what you're proposing, right?

This is what The Whispering Vault does, and you know, it's the ONLY version of target-number mechanics that I have found to be sensible/plausible and fast. And for the life of me, I don't think it LOSES any nuance of the more clunky versions of target-number resolution.

Just to clarify: In WV, I have an Attack of +5, so I roll and add my 5. Maybe I get a total of 13. Well, this opponent's Defense is 11, so I whopped it. When its turn to whop me back comes up, the GM doesn't roll - I roll my dice, adding my Defense of +4, maybe for a total of 9. Well, shit, its Attack is a 14, so it whopped me.

So each opponent has Attack and Defense target numbers that I use for my Defense and Attack rolls, respectively, at the appropriate moments during combat.
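If it helps to see the shape of it, here is the whole resolution in a few lines of Python (a sketch only -- the single d10 is a stand-in, and I'm not vouching for WV's actual dice or exact success comparison here):

import random

def player_rolls(bonus, opponent_score, sides=10):
    # One WV-style check as described above: the player rolls, adds their own Attack
    # or Defense bonus, and compares against the opponent's static score.
    # (The d10 is a placeholder, not WV's actual dice.)
    return random.randint(1, sides) + bonus >= opponent_score

print(player_rolls(bonus=5, opponent_score=11))   # my Attack roll vs. its Defense score
print(player_rolls(bonus=4, opponent_score=14))   # my Defense roll vs. its Attack score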

I think expanding this brilliant and wonderful concept to social and other instances of RPG conflict is long, long overdue. Mike Nystul gets the credit for WV, that smart fellow.

Best,
Ron
Title: Algorithms, complexity and what any of this has to do with R
Post by: Don Lag on June 15, 2001, 01:58:00 PM
Just to set things straight, I think even mentioning "algorithms" already reduces the fun factor of anything :smile:

I was just curious whether the same techniques could be applied... and even if they could, would they be useful? I doubt it. Is the idea interesting? I think so; it brings us very near to perhaps discovering new measures of complexity suited to the human brain.

Clay: I can't help but grin at the idea that such things as "theoretical" physicists could claim to exist, since theirs is a discipline that validates itself exclusively by observation.

Algorithms don't get seen at college/university unless you're in an academic computer science or math track.

The median roll sounds interesting; the only immediate drawback I can see is that, since you can't add more dice (that would make the rolling mechanism change from simple to complex right away), you'd have to add modifiers to the rolls.  I for one oppose summed modifiers, since they make low results not only improbable but outright impossible (you can't get a 5 if you have a +6). I've always preferred a system where even if you're bad at something you can get lucky and be as successful as someone who is good at it. And vice versa: you should be able to get a rotten result even if you're an expert, with (very) low probability, but it shouldn't be impossible. Besides, although less important IMHO, modifiers always tend to mean writing down even more numbers on your character sheet and longer rolling times as you start to add stuff.

Ron: There's something somewhat asymmetric in your approach, it would seem. Although I agree it would be a great timesaver for the GM (I think something similar was given as a tip for DMs in a D&D manual... or magazine maybe, that basically meant players rolled the monsters' attacks).
Title: Algorithms, complexity and what any of this has to do with R
Post by: Don Lag on June 15, 2001, 03:01:00 PM
Ok, so I'm compulsive about this shit...

I built the probability distribution for the median rolling mechanism. The value frequencies (the probability of obtaining a roll equal to X) aren't a bell, but rather an inverted parabola (Ax^2 + Bx + C); it kinda looks like a bell but not quite, and it's steeper at the extremes.

The probability of beating a difficulty of X is almost linear; in fact it doesn't differ much (less than 10% discrepancy all the way) from that of rolling 1d20 (D&D style) and checking that it's higher than the diff.
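In case anyone wants to check the numbers without the spreadsheet, the same count fits in a few lines of Python (brute force over every possible triple, so no formula required):

from itertools import product
from statistics import median

SIDES = 20   # d20s, so it lines up with the 1d20 comparison

# Exact distribution of the median of 3 dice, enumerating all SIDES**3 ordered triples.
counts = {}
for triple in product(range(1, SIDES + 1), repeat=3):
    mid = int(median(triple))
    counts[mid] = counts.get(mid, 0) + 1

total = SIDES ** 3
for diff in range(1, SIDES + 1):
    p_median = sum(c for value, c in counts.items() if value >= diff) / total
    p_single = (SIDES - diff + 1) / SIDES    # plain 1d20 meeting the same difficulty
    print(diff, round(p_median, 3), round(p_single, 3))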

If anybody is actually interested in this I'd be glad to send them the Excel sheet I worked on, just e-mail me.

Also, if you have any questions regarding prob. distributions of some game system or whatnot, just let me know.
Title: Algorithms, complexity and what any of this has to do with R
Post by: Supplanter on June 15, 2001, 04:02:00 PM
Hi Ron, thanks for the message. On a roll of 4-4-2, 4 is the median. That's how, using d6s, you can still get results ranging from 1 to 6. (Excel will be happy to confirm my definition of median for you - just put 4, 4 and 2 in three cells and run Median() on those cells et voila.)

I think you've made Sorc the fastest dice pool mechanic possible, even faster than OTE, which, I would argue, is Sorc's mechanical father.

Couple of things got me thinking this way: Fading Suns and Hero Wars, both of which use single d20s. The contrast is that in Hero Wars, masteries in your best traits give a bias toward success on checks, except against extraordinary opposition. In Fading Suns the bias is toward failure.

I prefer biases toward success when dice are used, a la HW's bumps. Failure then comes from, as you put it, conflict or opposition. (Or the rare bad roll.)

Many many years ago we spent a lot of time seeking cultural explanations for violence in RPGs, but I always thought there was a simple mechanical one as well. Consider RuneQuest: you might have a Persuade roll of 55%. What's more, per the rules you got to use it once per encounter. An unmodified 45% chance of failure is substantial - it implies, if all of Glorantha is really playing RuneQuest, that that world is full of thoroughly disagreeable people.

In that same game I might also have a Broadsword roll of 55%. But if I miss with the broadsword, I get to try again! Being 55% with broadsword is far more efficacious than being 55% with persuasion. And all those failed persuasion rolls are opportunities for, oh yeah, violence!

Then there is the problem of players feeling their progress stymied by a series of failed nonviolent action checks - the persuade don't work, the interrogate don't work, the pick locks don't work, nor the library roll neither. Let's just go kill their ass!

Bond, which was one of the first great dramatist game designs, IMHO, is the first game I remember to substantially bias toward success, with its Quality Result system. Alas, the designer made the handling time longer than it needed to be by requiring two multiplication operations and then a compare for most rolls. Instead of your interrogate failing outright, you were more likely to get a Q4 result and have it take 4 times as long as it might otherwise take to get the information you wanted, etc. Plus the Expertise rules were the first I recall that allowed players to generate fiat successes in at least some areas of a game.

Which is somehow wandering afield of median rolls per se, but it's tied in there somewhere. I think because, when one goes to opposed rolls and comparing successes, one can have routine competence (which is how HW describes the first level of mastery), which is good for moving action forward, balanced with failure against worthy opposition.

One way to "correct" FS in the direction of bias toward success is to simply roll a single d10 instead of a d20 but keep the score tallies the same. That means that the target number will frequently be higher than the maximum number of pips on the die. (In FS, you want to roll under the target number but close to it.) Hero Wars suggests what to do with those "extra" points in the target roll - turn them into automatic successes. So if your Seduce target is 14 on a d10, you get 4 successes automatically, plus the value of your roll - analogous to HW bumps. Now add an opposition roll where the opposition may also have extra points too.

Best,


Jim

Title: Algorithms, complexity and what any of this has to do with R
Post by: Ron Edwards on June 15, 2001, 04:52:00 PM
Hi Jim,

I'm with you on the "bias toward success" issue, and I'll tag first TFT, and then Champions, as the games which acted as my Sweet Relief from the ever-frustrating whiff factor.

I believe it was the late-70s Murphy's Rules that mocked RuneQuest's outcome in which two average-intelligent speakers of the same language, in casual conversation, had something like a 40% chance of completely misunderstanding one another. That's a lot of "Huh?" and "What?" around the dinner table.

It all becomes more sensible to me if we conceptually turn our attention toward conflict resolution as opposed to task resolution. That's one reason why either roll vs. roll (without target numbers) or that offense/defense target system from The Whispering Vault seem like the two best Fortune-based resolvers to me.

I did want to point out that Hero Wars does a very nice job of letting Very Competent People (when the task is pretty basic) sometimes have A Really Hard Time (when opposed by someone in the same ballpark of competence). If its mastery system was combined with a WV-style target number system ... boy, that might really be something.

Hey, I just realized, for an AmberWay Drama dude, you're pretty hot with this dice talk.

Best,
Ron
Title: Algorithms, complexity and what any of this has to do with R
Post by: Epoch on June 15, 2001, 05:23:00 PM
Hey, the median-of-three-dice thing is unbelievably cool!

It's got all sorts of wonderful possibilities.  For example:

Built-in tie-breaking:  If two people have the same score, then look at their high-die.  The higher one wins.  If their high dice are also the same, look at the low-die.

Color code the dice and use them to "flavor" the results (this one's courtesy of Justin Bacon, from whom I stole the idea).  So, if you've got a red die, a green die, and a blue die, then the red die can imply a more skill or finesse-based solution (or failure to resolve, as the case may be), the blue die can be brute force or stubbornness, and the green die can be lucky breaks (flavor to fit).

Critical successes or failures -- if two or three dice come up the same, then the result is a higher order of magnitude.

Plus, and this is one of my own little hobby-horses, you can code information into the die roll that the rollers don't necessarily see.  So, for my theoretical "hidden magic" system, suppose you had a "roll under" system, with a target number you were trying to roll less than, a la BESM.  You're rolling 3d10.  Suppose the TN is 6.  If you roll a 1, 3, 6, then you succeed with a roll of 3.  But the GM also glances over and notes that your high die also succeeds and decides that you've tapped into some minor magical blessing that goes along with your skill use.
Title: Algorithms, complexity and what any of this has to do with R
Post by: Gordon C. Landis on June 15, 2001, 05:26:00 PM
In the arena of being compulsive about this shit . . .

Some months back Jared was looking for some statistical info regarding rolling "sets" on various numbers of d6.  E.g., if I roll 5d6, what's the chance I get 2 of a kind?  3 of a kind?  People did some clever analysis and I'm pretty sure they gave him what he needed.

I, on the other hand, now have on my SQL Server tables containing every possible combination of values on up to 9 d6 (something over 64 million, if I remember right), and have continued to work on queries to tell me the chances for all the permutations - what's the chance of a pair, a triple, and a quad on 9 dice?  What about three pair and a triple?

As I moved past 7 dice, the number of such combinations got kinda outta control, and my analysis has taken a very deep back burner in the face of real work (I actually was doing some performance analysis where looking for an optimal way to build the huge table of all possibilities was relevant to work . . . the analysis, alas, is not).  If I simplified to where all I cared about was "at least 2 pair" or "at least a pair and a triple", my life would be much simpler - but I'm committed to that full analysis, damnit!  When I'm done, I will have THE definitive answer to all questions about d6 dice pools.  Wouldn't be too hard to expand to other d's, as well . . .
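For the simple version of the question -- just tallying the repeat-pattern of each roll -- a brute-force script looks something like this (a quick Python sketch, nowhere near the full breakdown I'm chasing in SQL):

from collections import Counter
from itertools import product

def pattern_frequencies(num_dice, sides=6):
    # Tally how often each multiplicity pattern (a pair, a pair plus a triple, etc.) shows up.
    tallies = Counter()
    for roll in product(range(1, sides + 1), repeat=num_dice):
        face_counts = Counter(roll)
        # Describe the roll by its repeat-group sizes, ignoring singletons:
        # (2,) = one pair, (2, 3) = a pair and a triple, () = no matches at all.
        pattern = tuple(sorted(c for c in face_counts.values() if c > 1))
        tallies[pattern] += 1
    return tallies

total = 6 ** 5
for pattern, count in sorted(pattern_frequencies(5).items()):
    print(pattern, count, round(count / total, 4))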

Very sad.  Especially since I'm not really a big fan of dice pool systems of ANY stripe!

Gordon C. Landis
Title: Algorithms, complexity and what any of this has to do with R
Post by: Supplanter on June 15, 2001, 05:43:00 PM
Quote
Hey, I just realized, for an AmberWay Drama dude, you're pretty hot with this dice talk.


Heh. It's funny. I spend all my time playing diceless games - I haven't rolled a die in an RPG since coming back to gaming about four years ago, unless you count a single session of Pantheon, which I emphatically do not. Heck, the games on my "experiment list" are Nobilis, Puppetland, Epiphany, Swashbuckler and &Sword, and you can see that that's a total of two diced games that I mean to get around to playing.

But then I spend all my time thinking about mechanics. Frex,  suppose you had a true "d20 system" where your attack roll also determined the damage you did. (Or a d10 system, whatever.) Against an unarmored opponent, your range of results runs from clean whiff through glancing blow to whatever you decide is Maximum Damage. Now stick armor on the target. The range is really the same - clean whiff through the same Maximum Damage (You're dead, I tell you! Dead! Dead! Dead!) It's just that the result skews to the harmless end of the range. But many armor rules reduce the Maximum result, taking away the possibility of the clean, well-placed shot. The Loose Ends crowd was musing on this issue for a possible computerized resolution system. Okay, says I, here's what you do. For a certain grade of armor - call it Light, or maybe Medium - the attacker rolls two dice instead of one, and takes the worse result. The resulting range of values is the same, but biased toward no or minimal damage. For Heavy armor, roll 3 and take the worst.

It's clear that adding dice skews "downward" very quickly, and it's not a system that differentiates scale mail from banded chain from boiled resinite or what have you, and we never really did anything with the idea. But it provides at least a "Conan-level" of detail pretty cleanly.
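If anyone wants to eyeball just how fast it skews, a few lines of Python show the take-the-worst curves (a throwaway sketch; I'm assuming a d10 damage roll here):

from itertools import product

def worst_of(num_dice, sides=10):
    # Exact distribution of "roll num_dice, keep the lowest" for a d10 damage roll.
    counts = {face: 0 for face in range(1, sides + 1)}
    for roll in product(range(1, sides + 1), repeat=num_dice):
        counts[min(roll)] += 1
    total = sides ** num_dice
    return {face: count / total for face, count in counts.items()}

for armor, dice in [("none", 1), ("light/medium", 2), ("heavy", 3)]:
    dist = worst_of(dice)
    print(armor, {face: round(p, 3) for face, p in dist.items()})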

For my alternate Sorcerer dice mechanic, I did consider a roll-two-take-the-worst system of opposed rolls though.

Also, you have played Hero Wars a lot and I not at all, so you are far more competent to say how the mechanic works, but the impression I got was that, while Mastery-vs.-Mastery rolls drop you right back to failure-bias in a lot of cases, the victory level table corrects you back to some level of relative success for one side or the other. Nevertheless, I think I mentioned on the GO HW forum some time ago that the opposing values suggested for natural phenomena (cliffs, waterslides, etc.) tended to put the masterful character right back in extremely chancy circumstances, and the character whose score falls short of mastery in even worse shape. Your sensible advice was along the lines of toss that default 14 opposing value out the window and adjust the suggested oppositions accordingly.

A final note for the historical record: The vast majority of decisions in Amberway are made on the basis of Karma, with occasional adversions to Fortune. Now the thing about Everway is that unless you restrict your card readings to pure "good card/bad card," that even Fortune has to be interpreted in the light of Karma or Drama. It strikes me that such drama-tinged decisions that get made in the game are influenced far more by symbolism, metaphor and theme than plot as such. I think you could make a fair case that narrativism as you describe it comprehends theme, though symbolism and metaphor seem to bulk small in your model.

Best,


Jim

Title: Algorithms, complexity and what any of this has to do with R
Post by: Epoch on June 15, 2001, 05:56:00 PM
Ron:

A little known, quickly-failed game by the name of Metascape did the "GM never rolls" deal (players rolled their offense versus a static target number to attack NPC's, and rolled their defense versus a static target number when NPC's attacked them), and, I think, did it one better.

Metascape was unabashed space opera of the most ludicrous degree -- it had open-ended die rolls, and, by open-ended, I don't mean "if you roll the top number on the die, reroll and add," I mean, "If you roll the top number (16) on the multiplier die then reroll the multiplier and toss that into the multiplication as well."  So, in the couple of games of Metascape I played, it usually happened a couple of times a session that someone who generally expected to get between, say, a 10 and a 40 on his die roll would get a number in excess of 1,000.  I recall having great fun thinking of all the insane things we could do with these ultra-critical results.

The "players only roll" rule very neatly dovetailed into this by ensuring that the bad guys could never roll really lucky and blow away a PC, thus reinforcing the game convention of a space opera with heroes.

While Metascape was clunky and silly in many ways, I really think that it had some of the most innovative design of the period in which it was produced (which was roughly '92).

[Edits to remove little typoes]

[ This Message was edited by: Epoch on 2001-06-15 17:58 ]

Title: Algorithms, complexity and what any of this has to do with R
Post by: Supplanter on June 15, 2001, 10:44:00 PM
Quote
I built the probability distribution for the median rolling mechanism. The value frequencies (the probability of obtaining a roll equal to X) aren't a bell, but rather an inverted parabola (Ax^2 + Bx + C); it kinda looks like a bell but not quite, and it's steeper at the extremes.

The probability of beating a difficulty of X is almost linear; in fact it doesn't differ much (less than 10% discrepancy all the way) from that of rolling 1d20 (D&D style) and checking that it's higher than the diff.

If anybody is actually interested in this I'd be glad to send them the Excel sheet I worked on, just e-mail me.

I for one would be grateful to see your Excel sheet, yes - I did mine by brute force, which means, for my d10 calculations, there are a thousand lines of permutations.

Interesting to hear that the result is a true parabola rather than a bell. Now, while it may be steeper at the ends, it's also truncated, so in a sense the end points are fatter than the alternatives. To be clear on what I consider the alternatives, let me say that this started with my desire to turn Fading Suns from failure-bias to success-bias rolls. FS uses a d20. I've learned from some mailing list archives that some FSers, the ones addicted to bell curves, use 2d10 instead of a d20. (Oh, and btw, FS, with its separate die pool roll for damage, strikes me as a game whose search and handling times are way out of whack for what it seems to want to be. Of course what it wants to be is a combination of Wolfe, Simmons and Silverberg with the serial numbers only partly filed away, but there are worse things.)

The switch from d20 to 2d10 has some serious significance, since 19 is an automatic miss and 20 is a fumble - with a single d20 a fumble has a 5% chance of happening, while with 2d10 there is only a 1% fumble chance. (There's a critical chance in there too, but I forget whether it's on a roll of 1 or on a roll of your exact target number, so leave it aside.)

In FS you otherwise get as many successes as your roll if you roll under your target number, but 0 if you roll over. That at least is a handling time dream to do. IOW, if your target number is 14 and you roll 13, you get 13 successes. If you roll 15 you get bupkus. (What happens with successes is a little outside the scope of the subject at hand, so leave it aside.)

My first idea was to use a single d10 instead of a d20 but keep the scores the same and give automatic successes for any points of target number over 10. IOW, if you have a target number of 14, take 4 automatic successes and then add the value of the d10. A roll of 10 would be an automatic failure, the same 10% chance of failure that OTS FS gives you, where 19 or 20 hose you. Then consider a roll to have fumbled only if the character was accenting (trading roll difficulties off for increased chances of minimal success or increased degrees of success depending on the direction you choose to accent).

Since it was clear from reading that some people just love bell curves, and since I consider almost all the existing ways of getting them to be too slow, I thought up the median die idea. It isn't, as you say, a true bell curve. But I don't see it as mostly linear - looking at the d10 curve, once you get outside of 4 or 5 range, the raw chance of success changes by more than 10% in terms of basis points over a linear roll - almost 50 basis points for a target of 2 and more than 70 basis points for a target of 1.

So with the understanding that I consider, say, the median d10 as a replacement for either a single d20 or 2d10, one thing I like is that the end points of the median d10 fall nicely between the end points of the other two options. On a d20, the end point (20) has a 5% chance of occurring. On 2d10, the end point (also 20) has a 1% chance of occurring. On, um, med(d10), the end point (10) has a 2.8% chance of occurring. So if one thinks 5% is too high a fumble chance and 1% too low, well, 3% is between them.
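(A quick Python check on those end-point numbers, for anyone who wants it:)

from itertools import product

p_d20_top = 1 / 20        # a natural 20 on 1d20
p_2d10_top = 1 / 100      # double 10s on 2d10
p_med_top = sum(1 for t in product(range(1, 11), repeat=3)
                if sorted(t)[1] == 10) / 1000     # median of 3d10 comes up 10
print(p_d20_top, p_2d10_top, p_med_top)           # 0.05, 0.01, 0.028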

Quote
Also, if you have any questions regarding prob. distributions of some game system or whatnot, just let me know.

Yes! Since I always envisioned a med(d10) roll as part of an opposed-roll pair, how close to a bell curve do you get when high is subtracted from low? Or, to complicate it past the point I would have any right to expect you to spend time on, what about this:

Suppose that each player has a target number that is in the range common to Sorcerer dice pools - say 2 to 7. Say that your median die scores like Fading Suns dice. IOW, my Soma score is 4. I roll median d6 (NOT d10!). If I roll 1, 2, 3, or 4, my result is my roll. If I roll 5 or 6, my result is zero! If my score is 6, my result is my roll unless I roll a 6. If my score is 7, my result is one point (because my score is one more than 6), plus the result of my roll (because my success number is 6), unless I roll a 6 - in which case I get 0.

Now someone else is doing the same thing, and we're subtracting the lower result from the higher to get the number of victories. If we each get zero successes and tie, the side with the LOWER target number scores one victory. If we both get positive successes and tie (e.g. both roll 2), the side with the HIGHER target number scores one victory.

Now here's the question. How does the victory spread compare to the victory spread for Sorc's dice pool method?
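Here, roughly, is what I'd run to find out, if anyone wants to beat Don to it -- a Monte Carlo sketch in Python.  Fair warning: my reading of the Sorcerer roll (highest single die wins, victories = how many of the winner's dice beat the loser's best) may well be off, so treat that half as a guess:

import random
from statistics import median

def median_d6_result(score):
    # My scoring as described above: roll med(3d6); a 6 is an automatic zero; otherwise
    # the roll stands if it's within the score, with points of score above 6 thrown in free.
    roll = int(median([random.randint(1, 6) for _ in range(3)]))
    if roll == 6 or roll > min(score, 6):
        return 0
    return roll + max(0, score - 6)

def median_d6_victories(score_a, score_b):
    a, b = median_d6_result(score_a), median_d6_result(score_b)
    if a == b == 0:    # both whiff: the LOWER target number takes one victory
        return ("A", 1) if score_a < score_b else ("B", 1)
    if a == b:         # equal positive results: the HIGHER target number takes it
        return ("A", 1) if score_a > score_b else ("B", 1)
    return ("A", a - b) if a > b else ("B", b - a)

def sorcerer_victories(pool_a, pool_b):
    # Guessed Sorcerer-style roll: d10 pools, highest single die wins, and victories
    # count how many of the winner's dice beat the loser's single highest die.
    a = [random.randint(1, 10) for _ in range(pool_a)]
    b = [random.randint(1, 10) for _ in range(pool_b)]
    if max(a) == max(b):
        return ("tie", 0)
    winner, win_dice, lose_high = ("A", a, max(b)) if max(a) > max(b) else ("B", b, max(a))
    return (winner, sum(1 for die in win_dice if die > lose_high))

def spread(fn, *args, trials=100000):
    tally = {}
    for _ in range(trials):
        outcome = fn(*args)
        tally[outcome] = tally.get(outcome, 0) + 1
    return {k: round(v / trials, 3) for k, v in sorted(tally.items())}

print(spread(median_d6_victories, 4, 5))
print(spread(sorcerer_victories, 4, 5))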

Best,


Jim

Title: Algorithms, complexity and what any of this has to do with R
Post by: Don Lag on June 16, 2001, 12:37:00 AM
For the guy with the big SQL tables :smile:

There's a math technique that's pretty useful called Probabilities (I'm not being sarcastic, it's just obvious you aren't too familiar with it).

The small analysis I made for the median rolls had nothing to do with writing brute-force combination tables. Rather, by using probabilities I arrived at the formula for the probability of obtaining a certain value x on a median roll of 3 N-sided dice:

P(x) = [ 6(x-1)(N-x) + 3(N-1) + 1 ] / N^3

I made a very small table based on this formula for adding up cases and could get most of the interesting data from it.
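And a two-minute brute-force check of the formula in Python, so nobody has to take my word for it:

from itertools import product
from statistics import median

def formula(x, sides):
    return (6 * (x - 1) * (sides - x) + 3 * (sides - 1) + 1) / sides ** 3

def brute_force(x, sides):
    hits = sum(1 for t in product(range(1, sides + 1), repeat=3) if median(t) == x)
    return hits / sides ** 3

for x in range(1, 7):
    print(x, formula(x, 6), brute_force(x, 6))    # the two columns agree for d6s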

I'd be most happy to work on any weird dice mechanics any of you guys come up with (it SEEMS I have a sturdier mathematics background than the rest). I'll be taking a look at the current propositions.

I'll be posting results at http://www.seba.cl/seba/dMechs
soon.

[ This Message was edited by: Don Lag on 2001-06-16 02:01 ]
Title: Algorithms, complexity and what any of this has to do with R
Post by: james_west on June 16, 2001, 02:09:00 AM
Being intellectually lazy when it comes to these issues, I've always approached the probability distribution of a game (when I cared that much) by just having the computer do a million rolls and give me the result distribution. You can write that code in about five minutes, it'll run in a minute or so, and it doesn't break your head, no matter how byzantine the rolling procedure (and some games have -darned- complex rolling procedures). It's also gonna be pretty darn close to the 'real' probability distribution.
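Something like this, say, for the median-of-3d20 roll being kicked around -- swap the body of one_roll() for whatever byzantine procedure you're curious about:

import random
from collections import Counter

def one_roll():
    # Replace this body with the rolling procedure you care about.
    dice = sorted(random.randint(1, 20) for _ in range(3))
    return dice[1]    # the median die

TRIALS = 1000000
tally = Counter(one_roll() for _ in range(TRIALS))
for result in sorted(tally):
    print(result, round(tally[result] / TRIALS, 4))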
Title: Algorithms, complexity and what any of this has to do with R
Post by: Gordon C. Landis on June 17, 2001, 08:10:00 PM
Quote
There's a math technique that's pretty useful called Probabilities (I'm not being sarcastic, it's just obvious you aren't too familiar with it).

No offense taken - applying probability theory IS a better way to go, most of the time.  I am familiar with it, but not especially sophisticated in its use - and the particular problem the huge SQL tables were put together to solve does not fit easily into any probability equation I'm familiar with.

There's also, as someone else here mentioned, the Monte Carlo route - roll a million times and track what happens.  That's often a "good enough" answer, and how (if I remember right) Jared got what he needed on the doubles-triples-etc. question.

I went with the big SQL tables because 1) I could, 2) It actually fit as part of a work project, 3) It would become a resource I could then continue to ask other questions of (OK, now lets count triples and quads as "the same" level of success - what's that look like?), and 4) It would produce a complete, absolute answer.

Gordon C. Landis