
Game Theory -- Can it help Game Design?

Started by xenopulse, January 06, 2005, 10:05:11 PM


xenopulse

This post is inspired by an exchange I've had with Mike Holmes.

Game Theory is often taught in political science and business classes as one way of representing how actors (individuals, corporations, nations) make decisions. It helps to understand strategies and attitudes that develop out of certain relationships in which several actors stand to gain or lose something valuable depending on their decisions and those of others. The question of this thread is, how can we use those well-developed tools when designing a role playing game?

As an introduction, the first game that's often taught is the Prisoner's Dilemma game. Two individuals were apprehended for a crime. They are separated. Both can decide whether to confess ("defect") or remain silent ("cooperate"). If both remain silent, they are convicted on circumstantial evidence for a minor sentence of, let's say, 2 years each. If both confess, they get 5 years each. If one of them confesses, he has struck a deal with the prosecution and gets off free whereas the other one gets 10. Now, would you confess or remain silent? (I think the approach to answering this question can already indicate one's Creative Agenda leanings.)
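In code, the one-shot payoffs work out like this (a minimal Python sketch using the sentence lengths from the example; "cooperate" means staying silent, and lower numbers are better for the prisoner):

```python
# One-shot Prisoner's Dilemma with the sentences from the example
# (years in prison; lower is better). Indexed as payoff[my_move][their_move].
payoff = {
    "cooperate": {"cooperate": 2, "defect": 10},  # both silent / I stay silent, partner confesses
    "defect":    {"cooperate": 0, "defect": 5},   # I confess, partner silent / both confess
}

def best_response(their_move):
    """Return the move that minimizes my sentence against a fixed opponent move."""
    return min(("cooperate", "defect"), key=lambda m: payoff[m][their_move])

# Defecting is strictly better no matter what the partner does, which is
# why defection is the dominant strategy in the one-shot game.
assert best_response("cooperate") == "defect"  # 0 years beats 2 years
assert best_response("defect") == "defect"     # 5 years beats 10 years
```

That dominance is the whole dilemma: each prisoner reasons his way to confessing, and both end up worse off than if they had stayed silent.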

The strategy changes drastically when you run several games in a row with the same partners. Both would benefit the most if they always cooperated, but the temptation of a defection is still there and will quite likely be realized, unless the players have meta-game factors such as trust involved.
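The iterated case can be illustrated with a quick simulation (a sketch, with the same payoffs as above, so lower totals are better; the strategies are the standard textbook ones, not anything from the thread):

```python
def play_iterated(strategy_a, strategy_b, rounds=10):
    """Run an iterated Prisoner's Dilemma; return total years for each player."""
    payoff = {("C", "C"): (2, 2), ("C", "D"): (10, 0),
              ("D", "C"): (0, 10), ("D", "D"): (5, 5)}
    history_a, history_b = [], []
    total_a = total_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)  # each strategy sees the opponent's past moves
        move_b = strategy_b(history_a)
        years_a, years_b = payoff[(move_a, move_b)]
        total_a += years_a
        total_b += years_b
        history_a.append(move_a)
        history_b.append(move_b)
    return total_a, total_b

always_defect = lambda opp: "D"
always_cooperate = lambda opp: "C"
tit_for_tat = lambda opp: opp[-1] if opp else "C"  # open nice, then mirror

# Mutual cooperation beats mutual defection over ten rounds (20 vs 50 years),
# which is exactly the temptation-versus-trust tension described above.
assert play_iterated(always_cooperate, always_cooperate) == (20, 20)
assert play_iterated(always_defect, always_defect) == (50, 50)
```

Tit-for-tat is the classic compromise: it earns the cooperative total against a cooperator, but only gets burned once by a habitual defector before matching him.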

There are actually Game Theory games that are like mini-RPGs. I used to run one where groups of students played the US, China, India, Pakistan and Japan at the time that Pakistan was acquiring nuclear weapons. Each group had certain actions at its disposal (such as sanctions, bombardment of Pakistan's nuclear facilities, etc.). The game was played through deliberations and basically role-play until a group decided to take an action. The outcome of the game could be decided through deliberations or certain final actions.

More about Game Theory can be found, for example, at this place:
http://www.gametheory.net/

I believe that Game Theory can help understand how RPG systems will work out. Many players are rational self-interest maximizers in the way Game Theory describes; it all depends on the player's interest. Game Theory, obviously, applies directly to Gamist decisions. But it can also help indicate when a system has the wrong type of reward. For example, if player influence on the narrative in a supposedly Narrativist-oriented game is based on character abilities, and those are developed through reward mechanisms in the system, many players will make decisions based on accumulation of abilities instead of character concept or immediate narrative. This can lead to an endless pursuit of power for power's sake.
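The distortion can be made concrete with a toy reward model (entirely hypothetical numbers and action names, just to show how a maximizer drifts toward power for power's sake):

```python
# Toy model: each session a player picks one action; the system hands out
# ability points, and ability points buy narrative influence next session.
actions = {
    "pursue_character_concept": {"ability_points": 1},
    "grind_for_power":          {"ability_points": 3},
}

def rational_choice(actions):
    """A self-interest maximizer picks whatever yields the most ability points."""
    return max(actions, key=lambda a: actions[a]["ability_points"])

# Because narrative influence is purchased with ability points, the
# maximizing player always grinds, regardless of character concept.
assert rational_choice(actions) == "grind_for_power"
```

The fix, in these terms, is to make the reward currency the thing you actually want players pursuing, so that the maximizing choice and the thematically interesting choice coincide.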

Does this mean that the frequent attempt to achieve Narrativist goals through Gamist rewards is doomed from the start?

How else can we apply Game Theory to RPGs?

Can we always frame the issue of Rewards in RPGs in Game Theory terms?

Those are just a few starter questions. Feel free to comment on anything else that stands out.

Mike Holmes

I've already said on the site that I think that Game Theory is something that most designers should get to understand. So for the moment, I'm going to await some responses before adding my full comments.

I do have some substantial things to say about the topic, however.

Mike
Member of Indie Netgaming
-Get your indie game fix online.

Michael S. Miller

Quote from: xenopulse
Does this mean that the frequent attempt to achieve Narrativist goals through Gamist rewards is doomed from the start?

I've always thought it would be cool to design a Narrativist game wherein the reward mechanics are set up in such a way that a Gamist playing the game would make the same decisions as a Narrativist. I think With Great Power... comes close to this goal.
Serial Homicide Unit: Hunt down a killer!
Incarnadine Press--The Redder, the Better!

Tobias

If you hadn't looked at it before, also consider Drama Theory - the scuffling over in-game limits and goals that goes on before the game actually kicks off (since the game, in its most basic version, is a fully understood and inflexible network of options with 1 or more 'optimal' paths).

Drama theory is about re-defining the game before you enter it: the full process becomes an iteration of evaluating the expected outcome of the game (the Game part), then trying to re-define the rules in your favor (the Drama part) so that the expected outcome changes, and so on.
Tobias op den Brouw

- DitV misses dead gods in Augurann
- My GroupDesign .pdf.

CplFerro

Dear xenopulse,

Game Theory has something to say about RPGs in the same sense that Newtonian physics has something to say about the Universe.  That is, as overarching analyses, they're dead wrong.  "Rational self-interest maximisers" is the RPG equivalent of the Second Law of Thermodynamics—both apply only to closed, deductionist systems, not to the RPG or Universe itself.

The example you give of the US-Asia political RPG exemplifies this:  We may notice patterns insofar as the participants choose to act like economic robots, but what if they all decide to form a big peace treaty?  What if they decide to build a Pacific Rim railroad bridge?  What if they cooperate on developing nuclear-powered desalination plants to assist in solving the Middle East's water shortage problems, thus helping stabilise the region into a natural hub of world trade, thus affecting all the participants?  If these weren't options in the game, then the game is nothing but a Game Theory brainwashing exercise—a computer program, trying to imprint an "artificial intelligence" into the minds of the players.

Someone who plays the game /as a game/, is not playing an RPG, he is playing a /videogame/.  The rules in an RPG are always merely metaphors for the imagined events.  A "Gamist" who wants to play an RPG, must play the game, not play the rules, per se.  Playing the rules is a corruption, like expecting that any set of combat rules will literally simulate all possible combat situations, and so running war simulations off of them.  Game Theory can at best provide springboards for unusual moral dilemmas, and data on how people respond /statistically/ to them.  But no individual is a statistic!

I'll give you an example.  A friend's father recounted one time when he was in the military playing a war game in the field.  The group was divided into two sides who were sent out into the field to "make war."  My friend's father doubled back to the supply convoy and stole a handful of "thunderflashes," explosives designed to make a loud noise and a moderate flash, without hurting anyone.  Upon rejoining his group, they were attacked, so he hurled one, and scared the shit out of the attacking soldiers.  The Sergeant came up and demanded to know what was going on.  "I'm making use of commandeered materiel, sir!" was the reply.  "How many did you take?"  "Three, sir."  "Then give me the other two!"  He complied.  The exercise was continued, and this time the Sergeant was participating.  My friend's dad hucked another thunderflasher that he'd kept in reserve, this time right in front of the Sergeant.  When he recovered his wits, he stormed over to the soldier, "What the hell are you doing!  You said you only had three!"  "I lied, sir."

Characteristically, humans "cheat" in this way.  Animals follow rules.  One "Game Theory" example of this in terms of history was the Christmas celebrations between German and Allied soldiers in the trenches of World War I.  Groups of foes, near each other for sufficiently long times, realised that it wasn't in their interest to be killing their fellow humans.  So spontaneous armistices arose, where they would shoot to kill only if forced to, and they wouldn't bomb the enemy's mail and medic trucks and the like.  Was this "rational self-interest"?  Statistically, yes.  But individually, each man who participated was cheating at the game of war.  That's how the Thirty Years War was ended—when the two sides cheated by introducing the principle of the "advantage of the other" as the basis for the Treaty of Westphalia.  Humans advance when they /don't/ play by the rules, or rather, when they introduce new principles of natural law.

The point is, the whole game is meta-factors.  Even the systemic rewards are only metaphors for a certain in-context advancement.  My players, for instance, got quite fetishistic about getting new skill points, even though the system itself was so stingy that advancement required ungodly amounts of the things.  They understood that /playing the system/ was fun, but not the important thing.  This didn't stop them from dealing with Game Theory situations at all, they had great fun "gambling" at various times, but it tended to be a /human/ form of gambling, rather than a casino form.   I explained one of my historical techniques of scenario design to them one time:  "I don't design scenarios with solutions.  I design what would reasonably exist, and if there's no solution, then tough bananas."  One of them piped up in response, "That's why we have to cheat!"  To which I nodded, "Exactly."

Sincerely,



Cpl Ferro

Marco

Quote from: Michael S. Miller
Quote from: xenopulse
Does this mean that the frequent attempt to achieve Narrativist goals through Gamist rewards is doomed from the start?

I've always thought it would be cool to design a Narrativist game wherein the reward mechanics are set up in such a way that a Gamist playing the game would make the same decisions as a Narrativist. I think With Great Power... comes close to this goal.

This was my impression of Sorcerer, really. Nicotine Girls too.

-Marco
---------------------------------------------
JAGS (Just Another Gaming System)
a free, high-quality, universal system at:
http://www.jagsrpg.org
Just Released: JAGS Wonderland

Brendan

xenopulse, I for one would be interested in learning more about game theory for myself.  Do you have any recommendations of books that would be a good introduction to the subject for someone with experience in, say, computer science but not economics?  (For that matter, does anybody else?)

xenopulse

Dear Cpl Ferro,

Thanks for the long response.

Quote
as overarching analyses, they're dead wrong . . . both apply only to closed, deductionist systems, not to the RPG or Universe itself.

If we want to get into the truth of science, this is going to be a long discussion. IMO, theories are only true insofar as they are useful. I follow William James' Pragmatism here, and think that it's the only viable definition of truth that stands up to the scrutiny of skepticism. In addition, whatever scientific laws you believe to be "true," i.e., not "dead wrong," are simply the current paradigm and will most likely be replaced in the future (cf. Thomas Kuhn, "The Structure of Scientific Revolutions").

You cannot have theories outside of a closed system. Knowledge is the construction of differentiated objects and fuzzy patterns, based on the application of our sense of time-ness and space-ness. We cannot know the Whole, instead, we arbitrarily cut it into separate pieces through our cognitive apparatus and use fuzzy association to describe patterns among these arbitrarily created subsections of reality. In the end, we have a model of knowledge that can predict events within the model, but we'll never have direct knowledge of reality, as reality (the Thing-In-Itself, per Kant) is undifferentiated.

Given this view, I tend to focus on the utility of a theory. Even the GNS theory is not true per se, it seems to me. It's a tool to be used in game design, and its "truth" is determined through its usefulness in that task. Game Theory can be a similar tool, though with different applications.

Quote
If these weren't options in the game, then the game is nothing but a Game Theory brainwashing exercise—a computer program, trying to imprint an "artificial intelligence" into the minds of the players.

Actually, the players could deliberate any outcome they wanted, only limited by what the players perceived that the states could reasonably do. The pre-given actions triggered special events and could end the game, but the players could have agreed to increase humanitarian aid and build wells in exchange for Pakistan giving up its nuclear program, and that would have been just fine, even though there were no specific rules for that.

QuoteA "Gamist" who wants to play an RPG, must play the game, not play the rules, per se. Playing the rules is a corruption, like expecting that any set of combat rules will literally simulate all possible combat situations, and so running war simulations off of them.

I would be careful with normative statements on what is "right" roleplaying. If the players get fun out of playing the rules, what's the harm? I know many Gamist players who see the story of a game mostly as a way of interlinking combat scenarios or other challenges in which they can use the rules and their character's kickass abilities. Are they corrupt roleplayers? I tend not to judge them for how they like to play the game.

Quote
But no individual is a statistic!

I certainly agree with you here. I don't think Game Theory can somehow describe everything about a player's (or even human's) choices, but it can illuminate one of the ways in which people approach decision making. Whenever I personally make a decision, there is a rational self-interest maximizing part of me that calculates how I could best profit from the decision, in one of many ways. And there's also a moral part of me that tells me independent of the outcome what the deontologically right choice would be. Both weigh in, along with other factors such as the balancing of long-term versus short-term goals. But I can understand at least a part of this process in terms of Game Theory.

Quote"I lied, sir."

Hmm. I think you are mistaking my interest in Game Theory for an attempt to rigidly structure games with rules so that player choices will be limited. That's not my intention.

QuoteOne "Game Theory" example of this in terms of history was the Christmas celebrations between German and Allied soldiers in the trenches of World War I.

Some of them actually exchanged gifts. It seems to me, however, that Game Theory is perfectly applicable. "Players" on each side would benefit if they cooperated by not attacking medical units. But if one side defected, it would gain the advantage over the other. It's quite a typical Prisoner's Dilemma, really. Apparently, they managed to employ a signaling tactic, which involves cooperating even when the other side is still defecting, until the defecting side realizes that mutual cooperation is possible. Notice that the "self-interest" is not necessarily defined in an absolute way. It depends on what each side wants in any given scenario.
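That signaling tactic can itself be written down as a strategy in the iterated game (a hypothetical illustration with a made-up `patience` parameter, not anything from the historical record): keep cooperating through a limited run of defections, advertising that mutual cooperation is on offer, before finally retaliating.

```python
def make_signaler(patience=3):
    """Build a strategy that absorbs up to `patience` consecutive defections,
    signaling that cooperation is available, before retaliating."""
    def signaler(opponent_history):
        recent = opponent_history[-patience:]
        # Defect only once the opponent has defected `patience` times in a row.
        if len(recent) == patience and all(move == "D" for move in recent):
            return "D"
        return "C"
    return signaler

signaler = make_signaler(patience=3)
assert signaler([]) == "C"               # opens with cooperation
assert signaler(["D", "D"]) == "C"       # absorbs early defections as a signal
assert signaler(["D", "D", "D"]) == "D"  # eventually retaliates
assert signaler(["D", "D", "C"]) == "C"  # rewards any return to cooperation
```

Against a pure defector such generosity is costly, which is why the patience is bounded; against a wavering opponent it is exactly what lets mutual cooperation get started.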

I think the point I am getting to is that Game Theory is not focused on stringent rules. It's an analytical tool that measures rewards and shows how each side could go about earning the reward. If "cheating" is an option, that's what Game Theorists would simply consider the "defection" choice, whereas sticking to the rules would be the "cooperation" choice. Your players are confronted with a scenario you created. They can play by the rules—and earn no reward—or they can cheat, beat the scenario, and be rewarded. As was to be expected, they chose to cheat.

So I think that we're just working on a different definition of what Game Theory is about. I am not suggesting that we insert clear-cut, two-choice problems into RPGs and take away the very freedom that makes them great. I am saying that we can use a reward analysis to see whether players will stick to a certain system; whether players will cooperate with each other or screw the other characters if the reward is great enough; whether the GM should ignore the rules if it leads to a better game; etc.

xenopulse

Michael,

That's one of the things I am talking about. You need to be aware of the way rewards work when players make decisions. And with that awareness, you can direct which choices players will make.

Seeing that my question was whether Gamist rewards are doomed in their attempt to further Narrativism, I guess your answer is no :) Would you care to explain how it's possible to set up a reward structure that does not become the goal of the game itself? For example, if the player decides on the character's way of developing, isn't that setting it up for the player to create characters with easy rewards in mind?

Tobias,

Do you have a place to point at where I could read more on Drama Theory? It certainly sounds interesting.

Brendan,

There's lots of information on the net, for starters, such as the place to which I linked. I don't have any GT books from my graduate assistant time anymore, but I'll try and look around at what the current state of the written field is.

Roger

I think there are some things game theory can show us.

Game theory presupposes that we are dealing with rational agents.  In order for them to act rationally within the game, they need to understand the rules.

This is often difficult or impossible, in a couple of different ways:

1)  The rules may be so complex that it's very difficult to understand what the actual risks and rewards are.  Some players may consider this a feature of the system.

2)  The GM (or other player fulfilling this task) may be flat-out lying to the other players.  This makes it fundamentally impossible for the other players to act as rational agents.

This leads to a relatively common form of Gamist (or pseudo-Gamist) play in which the GM pretends to provide risk, but doesn't.  The players either authentically believe that there is risk, or pretend to.  There is a lot of Illusionism required to implement this sort of Step On Down game.

It's potentially not even all that dysfunctional, at least in the short term, but I think it eventually becomes unstable.  Either the players find it increasingly hard to keep up their suspension of disbelief, or the GM gets tired of faking it all the time, or the Gamists in the group decide they want actual Challenges rather than pale imitations.

I think this can also occur in Narrativist games, particularly those which are (theoretically) addressing the Premise: Can the underdog overcome terrible odds and triumph in the face of adversity?  This is often a loaded question -- a foregone conclusion -- and thus not a real addressing of the Premise at all.  In order to ensure that the Story has the "correct" outcome, the so-called underdogs need to become the top-seeded favourites.  


The concept of risk aversion is also interesting.  Many settings, including most versions of D&D, have a written or unwritten assumption that the characters are risk-seeking individuals.  Some go as far as to define heroism as being equivalent to risk-seeking.  However, not all players are risk-seeking -- some are fairly risk-averse.  I suspect it's one of those Dials that varies from person to person, but to some extent, also from group to group.  Too much risk-avoidance or too much risk-seeking compared to the rest of the group could easily lead to charges of "No no no -- you're roleplaying this all wrong!"




Cheers,
Roger

CplFerro

Dear xenopulse,

I must agree with you about GNS theory, for strictly speaking it is what is known as procedural analysis.  Instead of describing truth, it observes and catalogues procedures.

Getting at truth demands an hypothesis that, when proven, resolves a given ontological paradox.   An example I commonly use is the doubling of the square.  Linear action gives us the power to double any given increment of line, for instance.  But linear action is inadequate when we wish to generate a square exactly double the area of an original.  Applying our calculators, we end up with an irrational number, and so can never, in principle, generate the solution that way.  So, we have a paradox between our method, and our goal.  

The real method is constructive geometry.  We take planar action as a separate /species/ of action, a "higher power" that can do things linear action cannot.   Experimenting, we take two squares of the original size, as the total area desired, needing only to be rearranged.  As in Plato's Meno dialog, the solution is thus available to us: diagonal cuts, giving us the principle of doubling the square.

That is the nature of truth in germ.   Thus, proper science is not merely a series of "useful lies," but rather the discovery of truthful principles of action, /from which/ a pragmatic, deductive theorem-lattice is elaborated as if hereditarily.  Scientific revolution does not debunk any truthful principle; rather, it contributes new principles to resolve the paradoxes arising when the old hereditarily-derived theorem-lattice is applied to the Universe as a whole.

So, my approach to the matter of RPGs is that of hypothesis.  When I state that there are "right" and "wrong" modes of roleplaying, I don't mean the One True Way (yet).  Rather, I mean that certain kinds of "roleplaying" are not roleplaying at all, because they don't partake of the essential principle involved.  To use an ironic example, sweeping the checker pieces off the table onto the floor isn't "playing checkers."  Likewise, "playing an RPG" primarily in terms of the fixed rules, is much closer to playing a videogame than to playing a true RPG.  Tomb Raider and the like are not RPGs, they just resemble them.  Your political game by contrast appears to be an example of a true RPG.

This all said, I'm essentially giving a caveat.  Specifically, the system I use (Phoenix Command) is about as Gamist as one can get; I'm usually particular about applying the rules correctly.  So it naturally becomes my concern as to how this system affects the experience of the players.  That's where procedural analysis comes in, to figure out what's happening, and what tends to happen, given particular rules sets.  But "the experience of the players" is itself a function of their relative moral self-development; to use your Game Theory example, "self-interest" is a variable term (but not an arbitrary one).  True human self-interest converges with morality; divergence is non-human, strictly speaking.  And the result of applying humanity is novelty, the introduction of unpredictable new truth, which requires the "game" theorised about to be rewritten.  So, I agree that we don't wish to limit player choice, through rules (though we may limit it on principle, in the interest of fair play, etc.).  However, we should be able to see what is really going on, in principle, not just in practice.  

Thus, I'm interested in how you propose to apply your idea.  I doubt anyone here will disagree that form should ideally follow function.

Sincerely,



Cpl Ferro

xenopulse

Cpl Ferro,

I am afraid that if we get any deeper into this epistemological discussion, we'll be getting off track of the thread. :) Suffice it to say that I think no claim to absolute truth of any theory is tenable given the challenges of skepticism and the limits of perception and cognition. We have tools like mathematics that we construct to be logically consistent, but that does not mean they produce truth.

Quote
True human self-interest converges with morality

Here we have another topic for much philosophical discussion. :)

Quote
Thus, I'm interested in how you propose to apply your idea.

I assume you mean applying Game Theory to game design? I was merely humbly suggesting that one could predict player choices by analyzing what will be the reward for certain actions and how the player goes about reaping that reward, and then whether the process of striving for the reward supports or opposes the intended goal of the game.

---

Roger,

I think you make some good points. People who are used to making Game Theory matrices can more easily focus on decision points within complex rule systems and figure out whether they always favor a certain type of decision.

The problem of the GM "faking" risk is one I am familiar with. I used to think that was part of my GM job, in my early days, because players would be really upset if their characters died, but they wanted to face tough challenges as well. That is one instance where the group needs to figure out whether the Gamist path is really for them.

This ties in with risk aversion, IMO, as the more characters grow and develop, the less people want to lose them. They grow in power, relationships, history, personality, etc., so that all types of gamers grow fond of their characters. Therefore, the risk/reward balance shifts as play goes on, and it's definitely something the group should keep an eye on.

CplFerro

Dear xenopulse,

How would I know if my system was appropriate?  That is, how would I know if there was a problem waiting to happen?

This is partly why I study the idea of RPGs, to see what its functions are, and so what forms are needed.  Subsumed by this we have GNS et al describing special functions that imply the need for special forms.

Sincerely,



Cpl Ferro

John Kim

Quote from: xenopulse
I assume you mean applying Game Theory to game design? I was merely humbly suggesting that one could predict player choices by analyzing what will be the reward for certain actions and how the player goes about reaping that reward, and then whether the process of striving for the reward supports or opposes the intended goal of the game.
Game Theory will only help in predicting if the players can be shown to be trying to maximize a measurable quantity.  The issue is, what are they maximizing and how do we measure it?  

This is a tricky question.  Even for Gamist players, I don't think this is clear.  One difference between rgfa and The Forge is that The Forge tends to characterize Gamism as being about manipulation of rules.  On rgfa, Gamism was about challenge or test of skill, which could range from combat to solving mysteries to political intrigue.  While "XPs earned" is a clearly maximizable quantity, "mysteries solved" is not.
- John

Wormwood

I've written an article on this particular matter, here. The conclusion I came to was that the algorithms used to find solutions to game-theoretic problems are of the greatest utility to design, both when you want them to work poorly and when you want them to work well. Only some of these algorithms are optimal in nature. Rather than attempting to predict what players are doing in a bulk manner, it is more useful to ensure that a player attempting to enhance some outcome will be forced to follow particular algorithms for optimization, causing their actions to direct the game, rather than break it.

On the other hand, if you are attempting to identify a quantity which is optimized, many exist, some more apt than others. The real problem comes when you account for the complexity of the space (since multiple players simultaneously optimize potentially different quantities). In this situation the dynamics of sub-optimal versus optimal play can have a huge impact on outcome. This is why I suspect it is more valuable to investigate from the cognitive science side, and observe how people reason under uncertainty and make decisions in unpredictable games. Look for Kahneman and Tversky if you are interested in that side of things. A mathematical approach to game theory also fits this better, such as an undergraduate text like Peter Morris' Introduction to Game Theory. And eventually you'll need to dip into dynamical systems theory to really get a handle on it.

Game theory can be very helpful, but it's important to recognize where the assumptions of the theory lie, and in what contexts things break down. Whether it's uncomputable pay-offs or infinite dimensional decision spaces, RPGs can often find themselves drifting there, and then you need to re-evaluate things from the beginning.

I hope that helps,

   - Mendel S.