The Fourth Annual Game Design Think Tank
Project Horseshoe 2009

Group Report:
Choosing Between Right and Right: Creating Meaningful Ethical Dilemmas in Games

Participants: A.K.A. "Team HJ"

Ian Schreiber, Independent
Link Hughes, CCP Games
Coray Seifert, THQ/Kaos Studios
Bryan Cash, Schell Games
Carlos Pineda, Schell Games
Thom Robertson, Aggressive Game Designs
Jim Preston, Electronic Arts

Facilitator: Linda Law, Fat Labs

Mission statement: Providing a framework to understand and implement ethical dilemmas in games

         “A helpless, starving orphan girl stands before you on the street. Do you: (A) give her some money for a hot meal and escort her safely to the nearest shelter, or (B) take out your sword and stab her in the face?”
         This is, more or less, the state of the art in ethical decision-making in games today. Games that portray themselves as having an “ethical” component are often limited to giving the player obvious decisions where one choice is clearly good and the other is clearly evil. There is generally no middle ground, nor is there any ethical ambiguity.
         There is nothing inherently wrong with this approach. Games such as Fable, Knights of the Old Republic, and the aptly-named Black & White have been popular and well-received by players and reviewers. However, we believe that games as a medium are capable of doing more. This report is intended to give game designers additional tools by which they can make compelling games that connect with players on a deep emotional level and tackle ethical decisions that are at least as interesting as those found in the real world.

Motivation: Who Cares?
         If today’s games are good enough already, why bother doing more? Why should anyone take creative risks by including something that has not been proven to work in earlier games? We offer the following motivations:

  • As Sid Meier famously said, games are “a series of interesting decisions.” Ethical dilemmas and tradeoffs in the real world can be very interesting indeed. By bringing these decisions into games, designers have one more tool they can use to make their games more compelling.
  • Games have been fighting for cultural and artistic legitimacy for some time now. As an artistic medium, games have the opportunity to become more culturally relevant by addressing ethical issues.
  • Ethical dilemmas have not been heavily explored in games so far. This provides some “low-hanging fruit” that a modern game could explore—even if only in a rudimentary fashion—that could get a lot of positive press for the game.
  • Players bring their own ethical systems with them when they enter a play space. By highlighting this, we can give our players experiences that create emotionally powerful and moving moments that they will remember for years.

Vocabulary: What are we Talking About?
         The application of ethics is a complicated topic that has been theorized on and debated since ancient times. While we aim herein to provide a survey and analysis of this topic oriented towards practical application in game design, it is important that we first clarify some terms that we use heavily in this report.
         For our purposes, ethics refers to the study of systems of rules which are used to distinguish “right” actions—those which are ethical, moral, and valuable—from “wrong” actions. These systems are referred to as ethical systems, while specific rules that comprise an ethical system are called ethical principles. Note that for this article, we will use the terms morality and ethics interchangeably without regard to the subtle differences with which they are imbued in scholarly discussions of philosophy. Likewise, we do not make a distinction among normative ethics, applied ethics, and metaethics.
         An ethical dilemma, meanwhile, is an unavoidable choice between two or more competing and equally important ethical principles that are in direct conflict within an ethical system. The ability to reconcile or resolve ethical dilemmas is one metric by which ethical systems can be compared and valued relative to each other.

Three Layers of Ethics in Games
         As important as defining our terminology is clarifying where we will target our discussion of ethics and game design. There are three different arenas, or Layers, in which ethics can impact games. Addressing all of them is not within the scope of this report. However, as these distinctions are of some interest unto themselves, we will briefly elaborate upon them.
         The narrative layer describes the ethical language used to construct a game’s story and world context. In games with a strong sense of character, the protagonist or other non-player characters may be confronted by purely narrative ethical dilemmas, the outcome of which the player cannot affect. In Uncharted 2: Among Thieves, for example, the main character confronts and resolves moral dilemmas entirely outside of the player’s control: at one point he has to choose whether to help a wounded man escape (endangering himself and two other companions, with no guarantee that the wounded man lives) or abandon him (which guarantees his death, but gives the others a better chance of living). The player does not have any choice in the matter; the character acts as part of the narrative arc of the game. Despite the player’s inability to determine the flow of the story, the main character’s actions raise ethical issues within the game. Ethical dilemmas existing only at the narrative layer are a staple of compelling, non-interactive storytelling and are frequently employed in linear media such as television and books.
         Skillfully employing narrative layer ethics is partly a question of writing technique, and this aspect of the first layer is beyond the scope of our discussion. However, the narrative frame for dilemmas posed to the player can have a very significant impact upon the veracity and emotional resonance of those choices. In Ultima IV: Quest of the Avatar, the eight “Virtues,” bearing evocative names like “Humility” and “Valor,” partly elevate the game to a thought-provoking discussion of what sacrifices are necessary to live an exemplary and moral life. If, instead, they were abstracted into a single “Exemplar” stat, the game would have been far less engaging. Using the narrative in this way to flavor the perception of decisions made by the player is pertinent to our discussion in this article.
         Above the ethics woven into a game’s plot, setting, and action lie ethical choices which are posed directly to the player. These comprise the second layer, the interaction layer—the ethics of decisions that the player makes based on his/her own ethical system. As an example, in Star Control 2, the player is presented with the option to sell their crew to alien slavers in exchange for resources. While the framing for this decision is part of the narrative layer, the player’s chosen course of action and the game’s response all exist at the interaction layer. The interaction layer is the primary focus of this report.
         Finally, there is a third layer above even the level of the player’s interactions with the game: the design layer. This layer comprises the many decisions made by game designers in the creation of the game. Ethical concerns at this layer include the value statements with which designers imbue their products. If a game poses a dilemma to the player, the consequences which the game mechanics or narrative impose upon the player as a result must be decided by a designer. These consequences cannot help but be colored by the designer’s ethical system and world view. They can also be intentionally quite skewed, such as the glorification of criminal activity prominent in the Grand Theft Auto series. In short, whenever the designer chooses to reward the player for virtuous or vicious behavior, the game designer is making an ethical statement.
         Other concerns at the design layer include whether the mechanics employed by a game are ethical or not. Many social-media games, for instance, were (until recently) largely monetized by lead-generation scams. The designers were fully aware of the impact that these games had upon their player base. Choosing whether or not to employ these mechanics is an ethical choice that every designer creating games in that space must confront.
         The third layer is quite deep and merits significant analysis and discussion on its own. This article, however, is focused on the practical application of ethics and ethical dilemmas in games. As such, we will not be looking significantly into the concerns of the design layer.

Ethics 101 for Game Designers
         Few if any games have their morality based on any ethical theory. However, we feel that it is important that game designers have some sophisticated understanding of ethics because there is an unavoidable ethical component to all game design. Even those who claim that ethics play no role in game design are making an ethical judgment because they imply that designers ought to behave in certain ways. While it is true that the vast majority of games are amoral—that is, games like Tic-Tac-Toe have no ethical content at all—it is also true that designers by definition are creating rules that govern the behavior of players, and ethics is the discipline that specifically studies right and wrong behavior.
         Casuistry, as it is generally used in philosophical writings, is a negative term used to describe determining proper behavior on a case-by-case basis. Applied ethics must inevitably deal with actual ethical decision making, but the goal is to come to a deeper, more systematic understanding of the underlying ethical principles. Likewise, in this report we will discuss some examples of ethical play in video games, but the ultimate goal is to help designers better understand the ethical principles that are implicit in the games they are helping to create.

Three Major Western Ethical Theories

         What follows is a summary for designers within the Western philosophical framework. Note that non-Western philosophies have their own set of traditions to draw from, which are outside the scope of this report.
         The oldest of the three major ethical traditions, Virtue Theory was born in ancient Greece, and is best articulated in the dialogues of Plato and in Aristotle’s Nicomachean Ethics. These thinkers place an emphasis not on learning specific rules but on developing good habits of behavior (virtues) while simultaneously avoiding bad habits (vices). The Greeks typically held there to be four cardinal virtues (Courage, Justice, Wisdom, Temperance) as well as a host of other minor virtues that were all a part of living an ethical life.
         This is part of the teleological (goal-directed) tradition: namely, that humans have a certain function, and that function is best achieved by living virtuously. It is by practicing these virtues that one becomes more fully human. The major Western religions also make use of some types of Virtue Theory to codify proper behavior of their faithful.
         A second tradition, Duty Theory, has many variations but is most heavily influenced by the writings of Locke and Kant. This position holds that all people have natural rights and with those rights come corresponding obligations or duties. Because, for example, all people have a natural right to life, murder is wrong because it violates the duty we have to respect another’s right.
         Kant is generally considered the greatest of the deontological (duty-based) ethicists because he reduced many obligations down to a single duty, the categorical imperative. And while there are different expressions of this imperative, the one that is most easily understandable by all of us is: treat other people as ends and not as means. That is, treat people like people and not as tools to be exploited.
         One last position, Consequentialism (or Utilitarianism), is the mirror image of Duty Theory and holds that it is not the principle of an action that determines its rightness, but the consequences of that action. Bentham and Mill are most closely associated with this theory, which argues that if we do a simple cost analysis on an action, we can determine its moral correctness. If we can account for all the pleasure and pain caused by the action both to ourselves and to others, then we can determine whether the action is right or wrong.
         Consequentialism has broad appeal because of the practical nature of its position: pain is bad, pleasure is good. Of course, many challenges immediately arise, such as whether it is justifiable to torture a single person in order to prevent the possible pain to many people. But sophisticated theories have been developed to address these challenges.

Anatomy of an Ethical Decision: a Handy Cheat Sheet

         Suppose you want to include more interesting ethical decision making in your games. It may be handy to have a little checklist to help understand the many different aspects that are involved in almost all ethical decisions. The following is a simple guide to help actual working designers understand those aspects. This list, however, is fraught with enormous ethical assumptions and would be quite controversial among actual working ethicists. We beg forgiveness in the name of practicality, and ask designers to understand that this list (and indeed, this entire report) is a guideline, not a rigid and inflexible rubric.
         Suppose you wanted a character in your game to have the option to behave generously by giving to the needy. Consider these as the major components of this decision:

  • Goal – what is the behavior you want to do? (Example: you want to help the poor)
  • Method – how do you intend to accomplish the goal? (Example: you give money to the poor)
  • Degree – to what intensity do you perform the method? (Example: how much money do you give the poor?)
  • Knowledge – do you realize what you’re doing? (Example: you have a full understanding of the consequences, both positive and negative, of giving money to the poor)
  • Motivation – why do you make the choice? (Example: you’re doing this out of a genuine desire to help the poor, not to gain extra Experience Points or some other reward)
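
The five components above can also serve as an audit tool during design. Here is a minimal Python sketch of that idea; the class and field names are our own illustration, not part of any established framework:

```python
from dataclasses import dataclass

@dataclass
class EthicalDecision:
    """One option the player can take, broken into the five components."""
    goal: str        # the behavior you want (e.g. "help the poor")
    method: str      # how you accomplish it (e.g. "give money")
    degree: str      # the intensity of the method (e.g. "10 gold")
    knowledge: str   # what the actor understands about the consequences
    motivation: str  # why the choice is made (compassion vs. XP farming)

# Example from the text: behaving generously by giving to the needy.
give_alms = EthicalDecision(
    goal="help the poor",
    method="give money directly",
    degree="10 gold",
    knowledge="knows the money may be misspent",
    motivation="genuine compassion",
)
print(give_alms.goal)
```

Writing out each available choice this way makes it easier to compare options later, when asking whether a situation is a true dilemma.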

Classic Textbook Examples of Ethical Dilemmas

         Recall that an ethical dilemma is a situation where two competing ethical principles (both of which are “right”) come into conflict. While the rest of this report will focus primarily on ethical dilemmas in games, here are ten examples from the literature.

  1. Jean-Paul Sartre’s example of the French student whose brother is killed during the German invasion in 1940. He now has to decide between staying at home to care for his mother as he is her sole support, or joining the French resistance movement.
  2. Is it okay to kill an innocent homeless man to assuage the fears of an entire town? In other words, is it okay to violate the rights of an individual for the greater emotional good of the entire community?
  3. Survival Lottery: If you could prove that organ donation saved more lives than it would kill (e.g. one person is killed and their organs can save the lives of many), would it be ethical to create a lottery where one healthy person is randomly chosen to be killed and their organs harvested?
  4. Is it okay to sacrifice five innocent lives to potentially save twenty?
  5. Plank of Carneades: Two shipwrecked sailors see a plank that can only support one of them. One reaches the plank first. The other gets to the plank and pushes the other off in order to save himself. The first sailor drowns. The second sailor is later rescued. Can the surviving sailor be tried for murder, or was it self-defense?
  6. Violinist thought experiment: A famous violinist falls into a coma. Doctors determine that you are a perfect match for him, but the procedure that keeps him alive requires you to remain strapped to him, doing nothing else, for nine months. The violinist has great cultural value. Is it ethical for you to choose not to keep him alive, and unplug yourself?
  7. Bad Samaritan: You are in a car wreck. You’re in an extreme amount of pain but not in any life-threatening situation. The other person tries to help you but isn’t really good at it. In the process, he inadvertently causes permanent damage. Is he ethically responsible for that damage?
  8. Charity dilemma: Suppose that the act of helping people causes them to become dependent on your charity and ultimately incapable of taking care of themselves, so that withdrawing the charity is now immoral (instead of amoral). Do you help them in the first place?
  9. Concentration camp: A sadistic guard is about to hang your son for an escape attempt. He tells you to pull the chair out from underneath your son, killing him, and threatens that if you refuse he will kill your son and another innocent person. You believe he is sincere. Do you pull out the chair?
  10. Pregnant woman: You and a group are spelunking. One member of the group, a pregnant woman, gets stuck in the only exit out of the cave. A rising tide threatens to drown the group. Her head is above water, so she would be fine, but the rest of the group is doomed. However, you do have a stick of dynamite…

Ethical Game Design Patterns
         Creating game systems that cause the player to make interesting ethical decisions is not a simple task, and many ethicists would find our efforts insufficient. As designers, we must acknowledge the limits of what we can create. Ethical systems are complex, and our players will come to our games with perspectives that designers cannot always fully anticipate. However, we must remember that the ultimate goal is not to create a fully realized ethical system, but to make use of the field of ethics to provide interesting experiences for our players. This section provides a list of concepts that are used in games when adding ethical decisions.
         At the heart of all ethical dilemmas is a choice that the game presents to the player. The player then makes a decision based on the choices available. Finally, the game provides a result in response to the particular decision that the player makes. Ethical dilemmas can vary based on the kind of choice, the way the player makes a decision, and the different kinds of results the game can provide. The following patterns each relate to one of these elements.


Number of Choices
         One of the simplest patterns for an ethical dilemma is the A/B choice (or dichotomy). This is where the player is given two discrete choices and must choose between them, typically by selecting one or the other from a menu. These map quite easily to games because they are relatively simple to design and implement, as well as being easy for a player to parse.
         The A/B choice can be extended in several ways. Additional choices can be provided (an A/B/C or A/B/C/D choice, for example). More interesting is providing a continuum between A and B (instead of asking whether to kill one innocent to save five lives, ask how many lives would have to be saved). However, the more choices there are, the more difficult it becomes to implement.

Repeated Choices
         In some cases, the player is presented with the same or similar ethical choices over time. This can be an effective technique when the game initially gives the player limited information about the choice they're making, but expects them to learn as they repeatedly make decisions. Repeated choices are also useful when it is the collective weight of these decisions that determines the outcome (rather than any single decision). This pattern is often tracked by an ethical continuum (see below).

Player-Player and Player-System Choices
         Is it morally acceptable to kill one innocent to save the lives of many? What about if that one life is your friend’s character, and the many are computer-controlled NPCs? In this case, there is one level of player decision being made due to the mechanics of the game, but another kind of decision involving the real and ongoing social ties between real human beings outside of the game.
         Presenting purely mechanical ethical choices (i.e. choices that remain ethical even if the narrative is completely removed or abstracted) is much easier in multiplayer games, where the choices concern how you interact with other players rather than how you interact with the game’s systems.
         There is an unavoidable ethical component in all multiplayer games, because there is another ethical agent that you are interacting with. These components may or may not be particularly deep or interesting or meaningful (e.g. “do I behave as a griefer to the rest of my team?”) but they exist. By creating incentives for players to behave ethically and unethically, designers can strengthen this aspect of their game.


Making Decisions
         The simplest way to allow the player to make a decision is through an explicit menu option. The game presents the player with a set of choices in text, and the player decides on one of the available choices. For lack of a better term, we can call this a menu decision.
         An alternative is a verb decision, where the game tracks direct interaction through the player’s use of existing gameplay verbs. For example, suppose a greedy factory owner falls asleep with his wallet beside him. There is a homeless family nearby. If the player is presented with a “steal the wallet? (Y/N)” menu, this would be a menu decision. If instead the player has the ability to pick up objects and to give objects to NPCs, the player may implicitly make this decision through their use of available gameplay verbs.
         In verb decisions, the player is not given specific moments where they are forced to make an ethical decision, but instead have ethical decisions as part of the overall gameplay experience. Two examples of this are Ultima IV where your actions in combat affect your morality, and The Suffering where using a special power increases the chance that your character had killed his wife in the past.
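
To make the distinction concrete, here is a hypothetical sketch of a verb decision in Python. The game never asks “steal the wallet? (Y/N)”; the player’s ethics are inferred from ordinary use of generic `take` and `give` verbs. All names and point values here are invented for illustration:

```python
class World:
    """Toy game world that tracks ethics implicitly through gameplay verbs."""
    def __init__(self):
        self.karma = 0

    def take(self, item, owner=None):
        """The generic 'pick up' verb; taking an owned item is implicitly theft."""
        if owner is not None:
            self.karma -= 5
        return item

    def give(self, item, npc):
        """The generic 'give' verb; generosity is implicitly rewarded."""
        self.karma += 5

world = World()
wallet = world.take("wallet", owner="factory_owner")  # theft, no menu asked
world.give(wallet, npc="homeless_family")             # charity
print(world.karma)  # nets to 0: note how point systems let deeds "cancel out"
```

The final comment foreshadows a weakness discussed under ethical continuums below: simple point arithmetic implies that a theft can be erased by a donation.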

Judging the Player’s Decision
         How do we judge the player's decision? Do we judge based on the act that they committed or the result that occurred? Do we do both? Judging based on players’ acts follows the deontological approach to ethics and provides a very different experience from the result-based approach of utilitarianism.
         As an example, suppose the player lies to an old woman, telling her that her son died in a valiant attempt to save the life of another. In reality, the woman’s son was a bandit and had been killed by the player earlier. Does the game judge the act (noting that the player lied), or the result (noting that the player acted compassionately), or both? Said another way, does the game consider the ends or the means?
         Judging acts is relatively simple for both the designer and player. Designers know the available menu choices and verbs, and the player can quickly map how the game responds to their act. Judging results is trickier, partly because results are not always assured, and they can require the player to think ahead and consider the ramifications of their decision. In the example above, what if the old woman (having been lied to) now wants to seek out those responsible for her son’s death and punish them? While the player was trying to act compassionately, the eventual result was an old woman with thoughts of murder and vengeance. This type of potential complication can make decisions more interesting, but care must be taken to ensure the results never feel arbitrary to the player.
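
The act-versus-result split can be sketched as two judging functions, assuming a deliberately crude model of each tradition (a forbidden-acts set for the deontological judge, a summed utility for the consequentialist one; real ethical judgment is of course far richer):

```python
# Event from the text: the player lies to the old woman about how her son died.

def judge_by_act(action):
    """Deontological judge: looks only at what the player did."""
    forbidden = {"lie", "steal", "kill_innocent"}
    return "wrong" if action in forbidden else "right"

def judge_by_result(outcomes):
    """Consequentialist judge: sums pleasure (+) and pain (-); sign decides."""
    return "right" if sum(outcomes.values()) >= 0 else "wrong"

action = "lie"
outcomes = {"old_woman_comforted": +5, "truth_obscured": -1}
print(judge_by_act(action))       # act-based: the lie itself is wrong
print(judge_by_result(outcomes))  # result-based: net comfort makes it right
```

A game could run both judges on the same event and react to the disagreement, which is exactly the tension the old-woman example dramatizes.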


Ethical Continuums
         An ethical continuum is a finite line whose endpoints represent opposing ideal ethical stances. The player’s alignment usually starts in the very middle of the line, and their actions push and pull them along the bar. This is a simple way of representing ethical systems in games, but in some cases it can inhibit interesting ethical dilemmas.
         One example of an in-game ethical continuum is in Star Wars: Knights of the Old Republic where the continuum has “Light Side” on one end and “Dark Side” on the other. A player action might give “+3 Dark Side Points” which would move their point on the continuum 3 ticks towards the Dark Side. In practice, players often select whether they are doing a Light or Dark Side playthrough from the beginning, and then always pick the choices that match that philosophy.
         Note that this method has weaknesses. In the case of the Star Wars game, each action in the game is given a specific amount that it moves the player along that continuum. This leads to strange notions like how murdering an innocent could be “equalized” by donating to charity a couple of times. The characteristics of the two ethical systems (Light and Dark Side) and the means of judging a player’s individual in-game ethics make it difficult to have interesting ethical dilemmas. This is further exacerbated by the fact that different gameplay powers are available only to those near an end of the continuum, forcing a player to push their alignment one way or the other from the outset.
         One method of making the continuum a more interesting ethical game concept is to have the continuum span two aspects of one overall philosophy with movement along that axis relating to consistent action. Rather than one continuum in the game, there could be multiple continuums, each representing a different aspect of the character's overall philosophy. The continuum(s) could also be hidden from the player and thus not so easily min-maxed. Another consideration is how virtue theory represents a “virtue” existing between two extremes; a continuum could work such that rather than a player attempting to maximize their position on the line, they attempt to balance it in the middle. 
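
These variations can be sketched with a hypothetical `Continuum` class (all names and numbers are invented): movement is clamped to the line, multiple hidden axes can coexist, and a virtue-theory scoring rewards positions near the balanced middle rather than at an extreme:

```python
class Continuum:
    """A bounded ethical axis; extremes at lo/hi, balanced mean at the midpoint."""
    def __init__(self, lo=-100, hi=100):
        self.lo, self.hi, self.value = lo, hi, 0

    def shift(self, amount):
        """Clamp movement so actions cannot push past either extreme."""
        self.value = max(self.lo, min(self.hi, self.value + amount))

    def virtue_score(self):
        """Virtue-theory reading: 1.0 at the balanced middle, 0.0 at an extreme."""
        half_span = (self.hi - self.lo) / 2
        return 1.0 - abs(self.value - (self.hi + self.lo) / 2) / half_span

# Several hidden axes, each one aspect of the character's overall philosophy.
axes = {"courage": Continuum(), "justice": Continuum()}
axes["courage"].shift(-30)   # a cowardly act pulls toward one extreme
axes["justice"].shift(250)   # clamped at the extreme, not beyond
print(round(axes["courage"].virtue_score(), 2))  # 0.7: off-balance, not yet a vice
```

Because `virtue_score` peaks in the middle, a min-maxing player cannot simply pile points onto one end; the design pressure is toward moderation, as virtue theory suggests.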

Certainty of Results
         When creating player choice, one of the most important design concerns is the feedback provided. What is the result of the choice made? What feelings do we want the player to have? In the case where the player has incomplete information, this is quite important as we are asking the player to learn from the choices they are making.
         In a certain result, a known outcome is guaranteed to happen. With an uncertain result, a known outcome may or may not happen. For example, building factories in a game might be guaranteed to increase income which can be used to help the entire population (certain result), and they might also have a small chance of injuring the nearby population (uncertain result). This would lead to the ethical dilemma of whether it is okay to potentially harm the few for the definite good of the many.
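
The factory example can be sketched as follows. The numbers and names are invented; the point is the split between a guaranteed outcome and a chance-based one:

```python
import random

def build_factory(state, rng, injury_chance=0.1):
    """Certain result: income always rises. Uncertain result: small chance of injury."""
    state["income"] += 10               # guaranteed outcome, every time
    if rng.random() < injury_chance:    # outcome that may or may not happen
        state["injured"] += 1
    return state

rng = random.Random(42)                 # seeded so the sketch is repeatable
state = {"income": 0, "injured": 0}
for _ in range(100):
    build_factory(state, rng)
print(state["income"])   # always 1000: the certain result
print(state["injured"])  # varies with the seed: the uncertain result
```

Tuning `injury_chance` is itself a design-layer ethical decision: it sets how sharply the game weighs the definite good of the many against potential harm to the few.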
         Certain and uncertain results are recognizable as aspects of risk and reward. When framed as an ethical dilemma, however, their consequences become interesting in a new way. Rather than trying to figure out the most advantageous result, the player determines the most “right” one. 

Consistency in Results
         In the real world, people generally behave consistently according to their own values. If a person acts according to a certain value system in one situation, they will probably act the same in similar situations. One does not suddenly commit murder for no reason, just because they have stored up enough “good karma” from other deeds and feel that they can.
         If the player makes ethically inconsistent choices, the game can react to that (or not) in a number of ways. The simplest and most common way is to not alter the result at all. Rather, the game simply makes all choices available at each decision point, and each decision has the same relative result no matter what the player had done previously. Kill the beggar and get +2 Dark Side Points, or save them for +2 Light Side Points, period.
         One alternative to this is to only make certain choices available to the player if they have met certain preconditions. Maybe the option to kill the beggar does not appear unless the player has already accumulated a certain number of Dark Side Points, or unless the player has killed enough other beggars previously.
         Another alternative is to modify the results based on the player’s earlier decisions. If a “good” player kills the beggar, it might cause more of an alignment shift than if they had previously established themselves as primarily “evil.” If other NPCs in the game find out that a previously “good” player committed a heinously evil deed, they might see the PC not as “more neutral”… but rather as psychologically unstable and dangerous.
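
A minimal sketch of that last alternative, with an invented “out of character” rule: when an act’s moral sign opposes the player’s established alignment, the resulting shift is amplified:

```python
def alignment_shift(history_alignment, act_value):
    """Modify the raw shift for an act by the player's established reputation.

    An evil act by a "good" character (opposite signs) shocks the world and
    shifts alignment more; the same act by an "evil" one barely registers.
    """
    out_of_character = history_alignment * act_value < 0
    return act_value * (2.0 if out_of_character else 1.0)

print(alignment_shift(+50, -10))  # "good" player kills the beggar: amplified to -20.0
print(alignment_shift(-50, -10))  # established "evil" player: plain -10.0
```

The multiplier is arbitrary here; the same hook could instead flag the character as unstable to NPCs, as the text suggests, rather than touching alignment at all.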

Analyzing Ethical Dilemmas in Games
         For games that provide ethical decision-making at the interaction layer, how do we evaluate them? What makes for a compelling ethical decision? Within the context of the game experience, how can we judge the effectiveness of a decision? For this, we provide the following questions that should be asked of any ethical dilemma:

  1. Is this a true ethical dilemma?
  2. Do the dilemmas apply the same across different ethical systems?
  3. Is there a gameplay consequence to the ethical dilemma?
  4. Is the dilemma a “juicy” decision?
  5. What are the conditions in which the ethical dilemma is raised?
  6. How do players react to the ethical dilemmas in the game?

Let us examine each of these in detail.

Is this a true ethical dilemma?

         Remember that a dilemma involves two or more competing ethical principles, not merely a choice between “obviously right” and “obviously wrong.” As the decision is framed in the narrative, ask if it is an ethical dilemma at all. Does it have all of the required components?
         If this is unclear, one approach we use in our case studies is to analyze each of the available choices, given their goal, method, degree, knowledge, and motivation. By looking at these choices and comparing, it can sometimes be clear whether the choices provide any kind of ethical conflict.

Do the dilemmas apply the same across different ethical systems?

         Some ethical decisions are clear-cut in some systems but a difficult dilemma in others. If the situation is an ethical dilemma in many ethical systems, it may touch a wider audience. Is each situation an actual dilemma only in one major ethical system, or in many systems?
         Additionally, if your game has multiple ethical dilemmas, consider them as a whole. Do they all apply as dilemmas within the same ethical system(s), or do they challenge different systems?

Is there a gameplay consequence to the ethical dilemma?

         With any game action (ethical or not), the designer must decide on the outcome of that decision. For ethical dilemmas, are they framed within the game’s systems, or are they merely choices that affect the interactive narrative of the game? That is, does the player get additional money, items, or special abilities in the game as a direct outcome of their choice… or do they just get to see a different ending cinematic?
         There are potential hazards either way. If there is a gameplay consequence, this can reduce the emotional impact of the ethical decision, because the player may make the choice based on game mechanics (“min-maxing”) and not ethics. The player may no longer see an ethical decision (save the mayor or save the townsfolk) but as a gameplay decision (obtain the ultimate armor or get the powerful sword).
         On the other hand, if there is no gameplay consequence at all, that can also reduce the emotional impact, because the choice ultimately doesn’t change the game’s outcome. Ask yourself if either of these applies to the game you are considering.
         Whether there is a gameplay consequence or not, there is an implicit value statement of what is “right” or “more right” in any game. If there are gameplay consequences, the relative values of those consequences act as moral weights. If there are no gameplay consequences, that can be construed as a value statement that the ethical decision carries no weight and has no real impact. What is the implicit statement of values in the game, and does it match the designer’s intention?

Is the dilemma a “juicy” decision?

         Some decisions (including, unfortunately, most “ethical” decisions in games today) are fairly obvious. The player decides ahead of time to play either a “good” or an “evil” character, and then makes each decision accordingly. Such decisions require little analysis on the part of the player; mechanically speaking, they are not particularly interesting.
         A key component here is information. What knowledge does the player have? Does the player understand the ramifications of their actions? What does the player know at this moment? Is the player even aware that a decision is being made? If a player is forced to make an uninformed decision where they are not aware of the potential outcomes, it can reduce the impact of that ethical decision. The player has little knowledge and thus feels very little ethical responsibility for the outcome. This is a particular danger with verb-based dilemmas, where the player may be unaware that their routine gameplay actions have hidden consequences.
         For one-shot A/B choices, for the player to feel any responsibility, it is critical that they be given explicit information to inform their choice. This includes both an understanding of their choices, and the consequences of each choice. However, for ongoing cumulative decisions, this restriction can be relaxed. If the player makes many repeated decisions over time and is able to see the long-term results unfold slowly as they realize what they have been doing, this can be quite impactful over the course of the longer game.
         The tactic of intentionally hiding gameplay consequences from the player can allow the player to focus their decision-making on the narrative aspect of the dilemma while still feeling the repercussions of their decisions through gameplay. However, once players learn that each ethical dilemma has a gameplay consequence attached to it, they may begin to focus on the gameplay instead of the ethics. Additionally, this tactic can backfire by giving players the perception that the game isn’t playing fair. If a choice is framed ethically at the beginning, any gameplay reward or punishment later on can seem arbitrary, since the original choice was not made with the gameplay consequences in mind.

What are the conditions in which the ethical dilemma is raised?

         Suppose you have a game where an NPC dies during the opening credits. This has a different emotional impact than if that same NPC had been the player’s constant companion throughout the game, and was then killed just before the final boss. Changing the context of the NPC’s death makes for a very different player experience.
         Likewise, ethical decisions feel different based on where in the game they are placed. Consider the narrative and gameplay context in which the dilemma is framed. Did the dilemma just happen out of the blue, or was there a long narrative arc leading up to it? How does this impact the player experience, if at all?

How do players react to the ethical dilemmas in the game?

         At the end of the day, designers are creating an experience for the player. In our desire to craft a wonderful experience in theory, we must not forget to look at how real players interact with the game. Do players react emotionally to the ethical dilemmas raised by the game? If so, how? Different players may react differently, of course, but can you see trends?

Note: these questions should be used as ways to view and consider an ethical dilemma within a game, rather than presenting a single “right way” to present these situations. What is “right” in one game may be “wrong” in another, and it is up to the designer—and not this report—to make the best design decisions for each individual game.

Design Caveats


         Including ethical dilemmas creates a very different atmosphere than offering a less ambiguous good/evil dichotomy. When the player is given a choice to be entirely good or entirely evil, it offers the player the opportunity to explore an evil persona from the safety of a game. This is not interesting decision-making so much as escapism, and it can be fun at times to play an evil genius or a sadistic bastard, as much as it can be to play the virtuous hero.
         By contrast, strong ethical dilemmas put people in highly uncomfortable situations. As such, they are psychologically painful. They can make a game more meaningful, but they will not necessarily make a game more “fun.” In other words, ethical dilemmas do not always make for good gameplay. Ethical dilemmas are not a panacea, but rather a new tool that can be considered for those types of games that can best support it.
         Furthermore, as this is a largely unexplored aspect of gameplay, many unanswered questions remain. Under what circumstances is it preferable to tie ethical decisions to gameplay consequences, and when is it better to divorce the two? Is the impact of ethical decisions inherently blunted by the fact that the player is playing a game? Are there limitations on what kinds of ethical decisions can (or can’t) be modeled satisfactorily in games? Questions such as these will only be answered as more games are made.


         Designers should also be aware that to create more powerful ethical dilemmas, they may have to think about breaking certain game design rules. The following is a series of game design issues that you, the designer, may want to consider in the process of designing your game. Note that there are no clear-cut answers here. The answers will depend on what you think is most important in your game.
         One game design guideline is that player actions should always have in-game consequences, and that balanced choices should provide rewards of equal value. As mentioned earlier, when games have an ethical dimension to them, this may no longer be the case. In some cases, players may “game the system,” making choices based on the gameplay rather than ethical consequences. The designer may wish to deliberately “unbalance” the game’s choices in order to present the player with a specific ethical system that is reinforced by gameplay.
         Another general principle of game design is that games should be tolerant and forgiving of player mistakes. However, this works differently with ethical decisions. Making a choice to kill your best friend isn’t very painful when you know that you can bring him back to life at the end of the day. In the event that there are gameplay consequences to the player’s decisions, will some of these consequences result in the player being punished? Ask yourself if these consequences are in line with what your game is trying to convey.
         A corollary here is to consider how saved games are handled. Most games allow players to use multiple (often unlimited) save slots, so that players can revisit their favorite moments in the game. Most games also allow players to save frequently, so that players are never in danger of losing too much progress and never have to replay lengthy sections of the game that they have already conquered. However, in games that ask players to make difficult ethical decisions, the ability to save easily in multiple slots can dilute the impact: the player can simply change their mind about a decision if they don’t like the outcome. Limiting the player to a single save slot (especially if the game auto-saves frequently) can give player choices more permanence, but can be frustrating to some players. Consider whether there are better ways for players to revisit earlier parts of the game (such as a “quest/level select” screen or “replay cut-scene” feature) while still protecting against a forced replay if the power goes out or the player is defeated (such as an auto-save or quick-save system). Another consideration is whether the player’s choices have consequences in the short or long term; if a decision early in the game only manifests much later (as with many of the choices in Dragon Age: Origins), the choices carry more effective weight, because a player may not be willing to replay dozens of hours with the same character just to see both outcomes of an A/B choice.
         One final game design principle worth mentioning is consistency. For most game mechanics, it is important that the game provide the same results for the same player actions, so that the player can better understand the systems. This deserves a second look when the player interacts with ethical systems in the game. As a player makes choices in the game, they form an ethical system through their actions; if a player suddenly acts inconsistently with the system they seemed to be following, the game can make note of this and respond.
         For example, suppose the player has so far been a virtual saint in the game. They give out money freely, help others, and never accept a reward. In one play session, they rob a shopkeeper and this is detected by other NPCs. One possible design is that the NPCs could recognize the player’s past good behavior and the fact that this is a first offense, and the PC may be given a lighter punishment.
         This can provide the player with a feeling that their actions are having an impact on the world and their character. However, there is a danger that a player's actions may seem perfectly consistent to them, while not seeming that way to the game system. Care must be taken to provide information to the player about the game's concept of their character's ethics, while providing the player the means to respond.
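To make the shopkeeper example concrete, this kind of response could be driven by very simple bookkeeping. The sketch below (in Python, purely for illustration; every name and threshold here is our own invention, not taken from any shipped game) tracks the player’s record of good and bad acts and scales punishment by how out-of-character an offense is:

```python
# Illustrative sketch: track a player's demonstrated ethics so NPCs can
# respond to a first offense differently than to habitual wrongdoing.
# All names and thresholds are hypothetical, invented for this example.

class EthicsProfile:
    def __init__(self):
        self.good_acts = 0
        self.bad_acts = 0

    def record(self, is_good):
        if is_good:
            self.good_acts += 1
        else:
            self.bad_acts += 1

    def punishment_severity(self):
        """Scale punishment by how out-of-character the offense is."""
        total = self.good_acts + self.bad_acts
        if total == 0:
            return 1.0                      # no history: baseline punishment
        virtue_ratio = self.good_acts / total
        # A near-saintly record (high ratio) earns a lighter first-offense
        # penalty; a habitual offender gets the full weight.
        return max(0.25, 1.0 - virtue_ratio)

profile = EthicsProfile()
for _ in range(20):
    profile.record(is_good=True)            # twenty good deeds...
profile.record(is_good=False)               # ...then one robbery
print(profile.punishment_severity())        # well below 1.0: a lighter sentence
```

The key design choice is the floor value (here 0.25): even a virtual saint should feel some consequence, or the system itself becomes exploitable.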

Case Studies
         As a test to see if our framework is usable with real games, we selected games that are known in part for having some kind of ethical dimension to them. We then analyzed these games according to the six questions detailed earlier, and then saw if this suggested any ways to make the ethical decisions in these games more meaningful. Out of necessity, what follows may contain spoilers.

Bioshock (2K Games, 2007)
Image from: http://www.wired.com/images/article/magazine/1509/pl_bioshock3_f.jpg

         In Bioshock, the player is given a choice when first encountering the Little Sisters, young girls that have been brainwashed to become the player’s enemy. The player is given a repeated A/B choice of “saving” or “harvesting” these girls. Harvesting the girls kills them, but gives you a greater amount of ADAM which can be used to purchase the most important power-ups in the game. While the game does not tell the player right away, it turns out the player can actually get more ADAM in the long run by saving the Little Sisters, as they are awarded a large bonus at the end of each level if they save all of the Little Sisters on that level. This choice is unrelated to the player’s ultimate goal: to escape the underwater city of Rapture.
         Is this a true ethical dilemma? To answer this question, let us analyze the ethical dimensions of the choice made by the player:

         Saving the Little Sister:

    • Goal: save (cure) an innocent
    • Method: hit the Y button
    • Degree: no variation of degree; you either save her or you don’t
    • Knowledge: the player is told that the consequence of saving the girl is that she is cured but the player will get less ADAM
    • Motivation: the player either wants to do the “right” thing, or they want to see the good ending

         Harvesting the Little Sister:

    • Goal: get ADAM to survive and flee Rapture
    • Method: hit the X button
    • Degree: no variation of degree
    • Knowledge: the player is told this will produce more ADAM, but the girl will be killed in the process
    • Motivation: the player wants a better chance of survival, access to more powers/abilities, or wants to see the bad ending

         Immediately we see a problem: why is escape an ethical goal? Other things being equal, escaping a city does not help anyone other than the main character, so this is not much of an ethical issue. The chance of personal survival, on its own, is a weak ethical dimension.
         This means there is no ethical dilemma in this case. The choice is simply whether or not to commit murder for entirely selfish reasons, and few if any ethical systems would hold that the lives of many innocents are worth less than one person’s escape.
         Do the dilemmas apply across different ethical systems? No. The only system we found that could frame this as even a weak ethical argument would be extreme utilitarianism.
         Is there a gameplay consequence to the ethical dilemma? As mentioned before, there is. The player gets more ADAM by harvesting. If the player only uses the information that the game provides, this actually enhances the emotional impact of the decision; as a player you want more ADAM, but as an ethical human being you want to save the girls.
         However, the actual gameplay systems tell another story. You get more ADAM in the long run by saving the girls. In effect, the game rewards “good” behavior. This appears to be an intentional design decision, and is a clear value statement.
         Is the dilemma a “juicy” decision? Saving or harvesting is not a particularly difficult decision, especially once the player realizes they get more ADAM for doing the “right” thing. Once the player makes the decision the first time, repeated decisions carry no additional reasons for the player to change what they have been doing; they will just do what they did before.
         What are the conditions in which the ethical dilemma is raised? The narrative presents the decision to the player explicitly, with two NPCs giving the player the two choices and the apparent consequences. The decision is framed to the player with incomplete information, although in practice many players had complete information from reading a FAQ or hearing about the system from friends who were talking about it.
         How do players react to the ethical dilemmas in the game? Player reaction was mixed and controversial. Overall, however, this game won many awards and was well-loved. A lack of difficult ethical dilemmas did not prevent the game from receiving both critical and financial success.

         If we wanted to make the ethical decisions in Bioshock more interesting, what low-cost ways are available? The answers to the questions above provide some possibilities:

  • Take away the end-of-level awards for saving the Little Sisters, so that the player actually does get more ADAM from harvesting. This puts gameplay rewards in conflict with player ethics.
  • While the save/harvest choice is a repeated decision, the player has no reason to ever choose differently; it is the same decision every time. The decisions could be made distinct by making successive choices cumulatively more difficult. For example, suppose that the game’s difficulty scaled with the player’s ADAM powers, so that giving up ADAM made the game measurably harder… and the more the player gave up (through repeated Save choices), the more hazardous the game would become.
  • We could also “turn up the dials” on any or all of the dimensions of each decision:
    • Goals. The goal of the original game is player survival. How can this goal be given more emphasis? Player death could have more of a consequence. Or, narratively, the PC could be given social or cultural significance in the game world so that the death of the PC has more meaning than the death of an unnamed NPC.
    • Method and Degree. Instead of a simple A/B choice, we could add varying degrees. What if the player could harvest the girls, but in such a way that they die painlessly and mercifully? What if “saving” the girls doesn’t always work, or might backfire in some nasty way? What if the girls are all terminally ill and are going to die in a short period of time, whether you save them or not?
    • Knowledge. During the course of a level, you may get to know (through audio tapes) the backstory behind a particular little girl, adding weight to the decision of whether to save that one specifically.
    • Motivation. The PC’s motivation could be changed. What if the primary goal is not “escape” or “save my own skin” but rather “save the city”? What if the entire adult population of Rapture was trapped and the player was trying to rescue them, and must weigh the lives of these little girls against the lives of the remaining innocent population?
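The difficulty-scaling suggestion above could be sketched as a simple scaling rule. The Python below is our own illustrative model, not anything from the shipped game; the function name and all the numbers are invented assumptions:

```python
# Illustrative sketch of the proposed variant: the further the player falls
# behind the "expected" ADAM curve (because they kept choosing Save), the
# harder enemies become. All numbers here are invented for illustration.

def enemy_damage_multiplier(adam_spent, expected_adam):
    """Enemies hit harder the further the player is behind the expected
    ADAM curve for this point in the game."""
    deficit = max(0, expected_adam - adam_spent)
    return 1.0 + deficit / expected_adam    # up to 2x damage at zero ADAM

# A player who harvested everything stays at baseline difficulty...
print(enemy_damage_multiplier(adam_spent=400, expected_adam=400))  # 1.0
# ...while a player who saved every Little Sister faces tougher foes.
print(enemy_damage_multiplier(adam_spent=160, expected_adam=400))  # 1.6
```

Under a rule like this, each Save choice measurably raises the stakes of the next one, which is exactly the cumulative pressure the suggestion calls for.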

Ultima IV (ORIGIN Systems, 1985)

         Few games have as much of a reputation for ethics as the classic RPG Ultima IV. The player's character comes from the real world and is magically transported to the land of Britannia. Unlike most RPGs of the time, the goal was not to defeat a villain, but rather to become a hero. The player must quest to become the Avatar, the spiritual leader and pinnacle of the eight Virtues: Honesty, Compassion, Spirituality, Sacrifice, Honor, Valor, Justice, and Humility. Here we examine two of the game’s systems: character creation, and the Virtue system.

Character Creation
         At the very beginning of the game, the player travels to a Renaissance Fair where they are ushered into a Gypsy's wagon. Here, they are asked a series of questions that will determine what class their character will be. Each of the questions is a hypothetical ethical dilemma where each answer represents one of the eight Virtues. The questions narrow down until the player selects the one Virtue they hold higher than any other. The mechanic is a series of A/B choices. An example of a choice is: “Thee and thy friends have been routed and ordered to retreat. In defiance of thy orders, dost thou A) stop in Compassion to aid a wounded companion, or B) Sacrifice thyself to slow the pursuing enemy, so others may escape?”
         Is this a true ethical dilemma? Again, we analyze the ethical dimensions of the choice made by the player:

         Stop in Compassion to aid a wounded companion:

    • Goal: Assist the wounded. Note that this is an Uncertain Result. There is no guarantee you will be able to help the wounded.
    • Method: hit the A button
    • Degree: no variation of degree
    • Knowledge: The situation is presented as a hypothetical one. However, the player has no advance knowledge that their answer will play a part in determining their character class. The player is provided (in the form of documentation) a lengthy description of the world, the virtues, and the concept of the Avatar. Although the game is called 'Quest of the Avatar,' it is unlikely that a player would realize that this sequence is character creation. Unless the player is observant, they may not even notice that the series of questions eliminates virtues until only one remains.
    • Motivation: The player may feel it is the more virtuous option, or (if they have knowledge of the systems) they may want to play as a specific class.

         Sacrifice oneself to slow the pursuing enemy, so that others may escape:

    • Goal: Try to ensure your friends’ survival. Note that, as with the other choice, this is an Uncertain Result. You are not sure how many (if any) friends you will assist by slowing the enemy in this way.
    • Method: hit the B button
    • Degree, Knowledge, Motivation: same as the other choice

         Here the player is presented two choices that both can be seen as ethically good. Assisting others is ethical, and the hypothetical provides two ways in which a player may assist their friends. Therefore, this is indeed an ethical dilemma, and it is up to the player’s own ethical system to resolve the conflict.
         Do the dilemmas apply across different ethical systems? Yes, and they apply differently. In virtue theory, the player is trying to do good for its own sake, and must decide how to be the most good. With deontological theory, the player should universally be willing to risk danger to assist others, but must decide how best to do so. From a utilitarian viewpoint, this is not a dilemma; figuring out how to save the most people is more a matter of calculating risks and rewards.
         Is there a gameplay consequence to the ethical dilemma? Yes. How you answer will determine what class the character is and where they start in the world. Once the gameplay consequence is known, the emotional impact is mostly lost. The situations are hypothetical, and as such have no real meaning, while the gameplay consequences are huge and can affect the difficulty and play style of the rest of the game. A quick-and-dirty analysis shows that most classes are relatively balanced with one another (some are better in fighting, dodging, or magic use than others, with the best in one category lacking in others and the more well-rounded classes being mediocre in everything). The only major outlier would be the Shepherd class (representing the virtue of Humility) who is generally not good at anything. Based on the lore of the world, this is quite likely intentional, with the Humble having the hardest time being an Avatar.
         Is the dilemma a “juicy” decision? What are the conditions in which the ethical dilemma is raised? How do players react to the ethical dilemmas in the game? The decision is a fairly interesting one presenting two choices that can both be seen as good. Additionally, while the situations presented are hypothetical, because the player's view has been first person so far, the question seems posed to them personally. The decision is made with very little information and the player is not made aware of the ramifications of their choices. Once they do become aware, the decision loses a significant amount of its importance.

Becoming the Avatar
         Once in the game, the character must quest to become the Avatar by embodying the eight Virtues in all their actions. The player is not told how they should embody the Virtues, but instead must discover that through talking with NPCs, questing, and through experimentation. We will analyze a specific case of the player being in combat with some non-evil but still-dangerous creatures (snakes).
         This is a Verb Dilemma where the verb choices are to attack NPCs (the snakes in this case) or to run from battle. Each of the eight Virtues is an ethical continuum whose two endpoints represent “no virtue” and “Avatar-like virtue.” The player’s goal is to maximize all eight Virtues, through their actions.
         Is this a true ethical dilemma? There are essentially two choices here: attack the snake, or run away from battle. Here is an analysis of the two choices:

         Attack the snake:

    • Goal: You want to show your prowess in battle and are willing to die to protect others.
    • Method: Move close to the snake and order your character to attack.
    • Degree: No variation of degree.
    • Knowledge: The player is initially unaware of how this action will affect their Virtues. They must explore the game, learn from NPCs, try things, and visit in-game locations/NPCs that reveal their current alignment on each Virtue's ethical continuum. Mechanics-wise, the player may eventually find out, or read in a guide, that attacking a non-evil creature carries a penalty of -5 Compassion and -3 Justice, while getting killed in battle gives +1 Sacrifice.
    • Motivation: The player may feel it is the virtuous thing to do, and they want to embody the eight Virtues to win the game. The player may also simply be conditioned from other games of this genre that they should attack and kill everything that moves.

         Run away:

    • Goal: You do not want to attack a non-evil creature.
    • Method: Move to the edge of the combat screen, which triggers the action of escaping.
    • Degree: No variation of degree.
    • Knowledge: As above, the player is initially unaware of the systems. In this case, fleeing from combat against a non-evil creature grants the player +2 Compassion, +2 Justice, -2 Sacrifice and -2 Valor.
    • Motivation: The player may feel it’s the virtuous thing to do. Or, the player’s party may be weak and they may be fleeing to ensure survival.

         Since both goals seem potentially virtuous, this does present an ethical dilemma to the player.
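The Virtue bookkeeping described in the Knowledge bullets above can be sketched in a few lines. This is a Python illustration of the idea, not ORIGIN's actual implementation; the adjustment values follow the figures quoted in this case study, while the action names, starting values, and clamping range are our assumptions:

```python
# A minimal sketch of the Virtue system described above. The deltas match
# the figures quoted in the case study; the starting value of 50 and the
# [0, 100] clamping range are assumptions made for this illustration.

virtues = {"Compassion": 50, "Justice": 50, "Sacrifice": 50, "Valor": 50}

ACTION_EFFECTS = {
    "attack_nonevil": {"Compassion": -5, "Justice": -3},
    "die_in_battle":  {"Sacrifice": +1},
    "flee_nonevil":   {"Compassion": +2, "Justice": +2,
                       "Sacrifice": -2, "Valor": -2},
}

def apply_action(action):
    for virtue, delta in ACTION_EFFECTS[action].items():
        # Each Virtue is an independent continuum, clamped to [0, 100].
        virtues[virtue] = max(0, min(100, virtues[virtue] + delta))

apply_action("flee_nonevil")
print(virtues)   # Compassion/Justice rise, Sacrifice/Valor fall
```

Because every action touches several Virtues at once, with mixed signs, no single choice can be "best" on all axes, and that is precisely what makes the dilemma real.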
         Do the dilemmas apply across different ethical systems? In virtue theory, the player is trying to be good for its own sake, and must weigh the relative goodness of protecting friends from danger against not killing innocent creatures, making this a dilemma. In deontological theory, the player should not kill that which has no real malice; but, the player should be willing to fight and sacrifice against danger. In utilitarianism, again the ethical dilemma does not apply. The major thing being judged here is the act, rather than the result.
         Is there a gameplay consequence to the ethical dilemma? Yes. The player is actively trying to maximize their alignments on the ethical continuums to win the game. The fact that ethics are framed as a gameplay system does reduce the emotional impact of the decision here; the actions themselves are not that meaningful, and once the mechanics behind them are known, the dilemma is no longer ethically interesting. Because each Virtue is tracked independently of the others, choosing one particular action or another may be more “right” in a given case.
         Is the dilemma a “juicy” decision? The decision is most juicy when just a small amount of information is had. If the player is completely unaware of how their actions affect the Virtues, then they will feel no dilemma. If they understand completely how their actions affect the Virtues, they will be able to min-max the system and can make their decisions entirely based on how best to raise the Virtues. The most ideal state is when the player has an idea of how the Virtues are affected, but not a complete understanding of the mechanics.
         What are the conditions in which the ethical dilemma is raised? This situation is a relatively routine combat with no previous backstory connection to the NPC snakes, so the emotional connection between the player and the decision is low. The information that the player has to make the decision varies over time, depending on how long the player has been playing and how much of a grasp they have of the Virtue systems.

         There are two morality systems in Ultima IV: the character creation system and the Virtue system. While those systems were certainly advanced for their time (and even by today’s standards), there is room for improvement in both.
         The primary issue with the character creation system is linked to its gameplay effects. Players answered dilemmas, and through those answers selected their character class. The questions were at once incredibly meaningful (they determined your class) and yet meaningless (beyond your class, they had no bearing on the rest of the game). Choosing Honesty made you a Mage, but had no other real effect on your game; ultimately, it came down to "picking Honesty = spells and low health." This is not an interesting ethical decision.
         One potential change would be to de-emphasize class selection and emphasize ethical consistency instead. If the class differences were relatively minor, or could be shifted easily, the focus would move from the gameplay mechanics to the ethics. The questions posed during character creation were interesting because they forced the player to deal with conflicts of interest between the different Virtues. While the rest of the game featured this to a limited degree, it did not really emphasize the conflict. How different would it have been if the player, having answered ethical dilemmas within the comfort of a Gypsy's tent, then had to face similar situations in actual gameplay? The game could also notice if the player was choosing Honesty above the other Virtues, or if they encountered a moment that compromised that preference. The fighting/fleeing situation described above touches the surface of this, but more gameplay features that forced the player to balance the Virtues against one another would have made for interesting ethical decisions.
         As for the Virtue system, working out the most 'right' action during the gameplay of Ultima IV is very interesting. The player is introduced to an ethical system that they must try to live up to in the game. This involves investigation, study, analysis, and the player attempting to have the character act in a virtuous manner. The fact that actions can affect the Virtues differently (some directly opposing one another) is a delightfully rich interaction where a player must choose which 'good' they favor. Unfortunately, once the specific numbers are discovered, the experience becomes less interesting as the mechanics can more easily be gamed (such as stealing from shopkeepers and then ‘buying off’ the negative karma by giving to the poor). This degree of moral min-maxing could be reduced by modifying the systems. For example, you could remove the minimum of zero on each Virtue, and ensure that it costs more to buy off negative actions than you’d get from the positive ones. Limits could also be placed on positive actions, so that (for example) giving blood only gives a bonus to the Sacrifice virtue once (rather than a potentially infinite number of times).
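The min-max-resistant variant suggested here could work roughly as follows. This Python sketch is purely illustrative (all the specific point values are invented): negative acts cost more than positive acts recoup, repeatable good deeds grant their bonus only once, and there is no floor at zero:

```python
# Sketch of the anti-min-maxing tweaks proposed above: negative acts cost
# more than positive acts earn back, repeatable positives only count once,
# and virtues can go negative. All point values are invented assumptions.

virtue = 0                    # no floor at zero: moral debts persist
claimed_bonuses = set()       # one-time positive actions already used

def steal():
    global virtue
    virtue -= 10              # stealing costs more than charity earns back

def give_to_poor():
    global virtue
    virtue += 4               # "buying off" karma is a losing proposition

def give_blood():
    global virtue
    if "give_blood" not in claimed_bonuses:   # only counts the first time
        claimed_bonuses.add("give_blood")
        virtue += 5

steal(); give_to_poor(); give_to_poor()
print(virtue)        # -2: two charitable acts don't cover one theft
give_blood(); give_blood()
print(virtue)        # 3: the second blood donation granted nothing
```

The asymmetry (here -10 versus +4) is the whole trick: as long as wrongdoing is strictly more expensive than its cheapest offset, the steal-then-donate exploit stops being profitable.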

PeaceMaker (Impact Games, 2007)

         PeaceMaker is a turn-based strategy game simulating the Israeli-Palestinian conflict in the Middle East. The player acts as either the Israeli or Palestinian Prime Minister (PM) and must govern, negotiate, and manage their state until peace can be reached.
         The primary goal of the game is to raise two values to 100 points while ensuring that neither value reaches -50. For the Palestinian side, these values are National & World Approval. For the Israeli side, they are Israeli & Palestinian Approval. Players are given an extensive list of actions that are available to them, and every turn must select one of them. Actions are partitioned into three major categories: Security, Political, and Construction. Each of these categories has a tree of subcategories, eventually leading to a decision. (For example, Construction->Social Services->Provide Health Care->Funded by Palestinian Allies)
         Every turn, the player must choose their government’s action. They then see the results of the action, about a week in-game passes, and then the player sees what events have occurred during that time. The primary mechanic of the game is a series of repeated choices. Choices are made by menu selection, which is the primary interaction method of the game.

Achieving Peace
         ‘Peace’ is achieved in the game by raising the approval value of two specific groups while ensuring that neither drops too low. How these values are precisely calculated is hidden from the player. Players can raise these values by building up popularity with different factions in the game, as well as achieving various accomplishments (which themselves may require popular support from factions).
         While no major ethical dilemma is explicitly presented by the game, the player implicitly creates one through their interactions with the in-game factions and their attempts to reach eventual peace. The player has the following information:

  • Each faction’s Political Powers, Goals, Fears, and the Possible Actions they can perform, along with a ‘temperature’ bar representing how much that faction approves of the player’s actions.
  • Polling data about the opinions of their people.
  • Two advisors offering opposing perspectives on each decision being made.

         The types of advisors, the factions available, and the opinions being polled differ based on whether the player is the Israeli or Palestinian PM.
         For the purposes of the rest of the case study, we will be working from the side of the Palestinian PM.
         As the player interacts with the game, they come to realize that accomplishing anything requires carefully balancing the satisfaction of each of the different factions. A Palestinian PM knows that praising and thanking the Israeli government may make it easier to later push to regain water control rights, but doing so can potentially anger some of the more radical elements of their own population. This can lead to violence, which in turn can lead to more Israeli crackdowns.
         The different factions essentially represent opposing ethical systems, each with its own concept of what is good. The temperature stats are ethical continuums: the player’s position on each bar represents that faction’s approval, that is, how closely the player seems to match the goals, fears, and concerns of that faction’s ethical system. The goal of the PM is to balance these different systems while pursuing the ultimate goal of peace.
         The player begins the game with a sense of what is good but is immediately asked to reexamine it in light of the greater good. In attempting to satisfy the different groups, the player must occasionally perform actions that may seem immoral but lead toward the ultimate goal (and good) of peace.
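One way to picture the temperature system described above is that each faction scores the player’s actions against its own goals and fears, and the PM’s overall standing is a weighted aggregate of those per-faction temperatures. The sketch below is entirely our own invented model (the faction names, tags, and weights are hypothetical), not PeaceMaker’s actual code:

```python
# Invented illustration of per-faction "temperature" bars: each faction rates
# an action by how well it matches that faction's goals and fears, and overall
# approval is a weighted aggregate of those temperatures.

FACTIONS = {
    # name: (political weight, tags the faction approves of, tags it fears)
    "moderates": (0.4, {"diplomacy", "construction"}, {"violence"}),
    "militants": (0.3, {"defiance"},                  {"crackdown", "diplomacy"}),
    "israel":    (0.3, {"crackdown", "diplomacy"},    {"defiance", "violence"}),
}

def rate(action_tags, likes, fears):
    # +10 per liked tag, -10 per feared tag: a single action can please one
    # faction while angering another, which is the core of the dilemma.
    return 10 * len(action_tags & likes) - 10 * len(action_tags & fears)

def apply(temps, action_tags):
    for name, (weight, likes, fears) in FACTIONS.items():
        temps[name] += rate(action_tags, likes, fears)
    return sum(FACTIONS[n][0] * t for n, t in temps.items())

temps = {name: 0 for name in FACTIONS}
# Praising Israel: diplomacy pleases moderates and Israel but angers militants.
overall = apply(temps, {"diplomacy"})
print(temps, round(overall, 1))
```

Even in this toy model, no single action moves every temperature in the same direction, which is exactly the balancing act the report attributes to the PM role.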
         The dilemma we will be addressing is the Palestinian PM trying to get the Israeli government to trust them more.
         Is this a true ethical dilemma? Because of the large number of game verbs available, here is a simplified summary of the possible choices.

         Improve the mood of the Palestinians

    • Goal: Improve the mood of the Palestinians to promote better safety and security for all, which will lead to Israel feeling more confident in our government.
    • Method & Degree: Provide speeches (low degree), attempt construction of civil projects, or speak out vehemently against Israel (high degree).
    • Knowledge: You have a basic knowledge of what the factions think of you and what kinds of actions will make them happy. Initially, you have no real sense of how successful your actions will be, nor do you know exactly how the other factions will react. Through repeated play, you get a better sense of this. However, no action is ever guaranteed to succeed; there is almost always a chance of failure.
    • Motivation: Ensuring your population’s happiness will ensure the safety of your position. Long term happiness in your population will raise Israel’s confidence in your government.

         The different methods of achieving this goal have quite different effects, and their results are mostly uncertain. Your people may respond well to the speeches, or dismiss them as more hot air and lose confidence in their government. Civil projects will certainly encourage them, but that money could also be spent on security or economic concerns for greater long-term benefit. Additionally, civil projects require outside funding, which will leave you beholden to other groups. Speaking out violently against Israel will certainly anger that government and may cause violent outbursts among your people, but there is also a chance it will unify your people and keep 3rd party groups from committing violence that they believe you will not. That may give you enough time to accomplish something.

         Appeal to 3rd Party groups

    • Goal: Appeal to 3rd Party groups (Fatah, Hamas) in your population to help keep safety and security.
    • Method & Degree: Try to disband them (low degree), provide speeches, or offer financial support (high degree).
    • Knowledge: Same as previous.
    • Motivation: These groups are very popular among your population and could influence the undercurrent of popular opinion.

         These groups have been known to be violent, and a mistake in dealing with them could lead to even worse violence against Israel, which will then be blamed on you. On the other hand, getting them on your side could stop violence, which will make Israel happy. While giving them money will make them quite pleased with you, the money could breed corruption in their ranks and future troubles. Additionally, that money could go toward rebuilding schools or homes for your people.

         Reassure Israel

    • Goal: Assure Israel that we are controlling militants in our population.
    • Method & Degree: Increase police units (low degree), speak out against the violence, confiscate weapons, or even attempt to assassinate radical leaders (high degree).
    • Knowledge: Same as previous.
    • Motivation: Showing that you will not tolerate militant threats to Israel will get them to better recognize you as someone they can deal with.

         The Israelis have attacked Palestinian people, and attempting to get into their good graces will mostly involve harassing and fighting against your own people and the groups your people have learned to trust. You can confiscate weapons in your population, but won’t that leave them defenseless in the shadow of the perceived threat of Israel?

         Appeal to foreign powers

    • Goal: Appeal to foreign powers (USA & UN).
    • Method & Degree: Provide speeches (low degree), use finances on projects they would approve of, or crack down on militant groups (high degree).
    • Knowledge: Same as previous.
    • Motivation: Foreign powers can exert a great deal of influence on Israel.

         Foreign powers can provide great benefits to the Palestinians by influencing the opinion of Israel. However, appealing to them will anger the more militant groups like Fatah and Hamas which can lead to further instability and anger in your population.
         The player is attempting to create long term peace and prosperity for their people, which is certainly an ethical goal. Selecting between each of the different choices is no simple matter, and one would need to carefully consider the method of achieving each goal. The player must balance out long term and short term happiness, the happiness of different factions, and must occasionally act just to ensure the security of their governmental position so as to continue working towards peace. This is most certainly an ethical dilemma.
         Do the dilemmas apply across different ethical systems? Yes. On the deontological side, providing security for people is good, but the player must choose which group receives that security. For utilitarianism, the player must constantly weigh short-term negative effects against potential long-term benefits, as well as balance the good that accrues to different factions.
         Is there a gameplay consequence to the ethical dilemma? Very much so. The game is built around the ethical dilemmas that arise from balancing various ideas of what is good. Here, gameplay and ethics are intrinsically tied, which makes the experience quite compelling.
         Is the dilemma a “juicy” decision? The decisions are not easy, and the player is constantly attempting to use the bits of information they have been given to make the best decision they can. In many ways, the game capitalizes on the inherent juiciness of balancing out risk & reward, long term & short term benefits. With the primary interaction of the game being selection of items on a menu, the experience seems to actually recreate the dispassionate and distanced role of politics. However, the game punctuates the action with real photos and videos of the events going on in Israel, which makes the experience very compelling.
         What are the conditions in which the ethical dilemma is raised? The player is given a cursory awareness of the situation in Israel and Palestine and is provided news reports as events occur in the game. From the moment the game starts, they are faced with a violent act from the opposing side, and then must work to achieve peace.

         Most of our critiques would be minor ones, and it is sometimes difficult to judge whether certain aspects of the game are intentional. Occasionally it feels as though the different factions do not pay much attention to the player’s past actions (“Oh, I can crack down on militants this turn. I’ll give a few speeches, build a school, and they’ll have forgotten all about it quickly.”) Overall, the game does a fine job of presenting ethical dilemmas and incorporating them into the gameplay. The player really begins to grasp the difficulties, conflicts, and compromises that are necessary to achieve peace.

Appendix: Game Canon

This is a non-exhaustive list of games that have some kind of ethical dimension to them, or that have systems that could be co-opted for interesting ethical decision-making. Those interested in studying ethical dilemmas in games can use this list as a starting point. For the purposes of simplicity, the developer, publisher, and release date listed here refer to the first release of the game and do not include ports to other systems.

Games are listed in order of release date, so that those studying ethical decisions can see a general progression and trends over time.

Diplomacy (1959, developed by Allan B. Calhamer, published by Avalon Hill)
Mechanics: In this non-digital war game, the play is highly social.  The wargame aspect is simple, deterministic, and marginalized. Each turn consists of player negotiations, followed by a secret and simultaneous choice of each player deciding where their units are moving and what they are doing.
Gameplay consequences: Of particular note is the “support” action, which allows one player to interfere with a combat between two other players. Who to side with in such conflicts is a choice between players. During negotiations, players can make any deals or promises they wish, but none are binding. Players are therefore given every opportunity to stab one another in the back.

Hamurabi (1973, developed by David H. Ahl)
Mechanics: This is a vintage-era turn-based resource management game. Resources include population, land, food, and money.
Gameplay consequences: Players are presented with some hard choices that can lead to “people” “dying.”

Ultima IV (1985 Sep 16, developed by Origin Systems, published by Origin Systems)
Mechanics: On character creation, player is explicitly asked a series of ethical dilemmas to determine their character class (based on which of the eight Virtues they prefer over the others). During the game, the eight Virtues are all treated as stats, and different verb decisions affect the Virtues positively or negatively. To win the game, you must get all eight Virtues to sufficiently high levels.
Gameplay consequences: The initial decision of character class determines the main character’s capabilities as well as starting location; some classes are better than others. During gameplay, interestingly, the Virtues are the victory condition for the game (rather than the more conventional “beat the final boss”).
Additional note: Note the subtle mismatch between the player’s ethical motivations (do good for the external reward of beating the game) and the character’s motivations (do good for its own sake).

Ultima V (1988 Oct 5, developed by Origin Systems, published by Origin Systems)
Mechanics: Of note, there is a dungeon where a group of children attack the player’s party. This presents a verb decision where the player could kill them, walk around them and take a few hits, cast sleep on them to pass by nonviolently, etc.
Gameplay consequences: No gameplay consequence, but you as the player know what you did.
Additional note: In playtesting, Garriott’s entire family reportedly rebelled against including “child killing” in the game. But of course the game does not force child killing; it is merely one of many choices. The strength of that reaction convinced him that this was something that needed to be in the game after all.

SimCity (1989, developed by Maxis, published by Brøderbund and Maxis)
Mechanics: Power plants have consequences: pollution (coal plant) vs. meltdown (nuclear plant). Pollution is a certain negative result, while meltdowns are an uncertain (random) result.
Gameplay consequences: Coal plants are cheaper to build but they pollute the surrounding area, reducing health and property values. Nuke plants can occasionally explode, irradiating nearby tiles and making them useless forever.
Additional note: Note that the game’s design layer could be interpreted as making a political statement about the dangers of nuclear power.

Conflict: Middle-East (1990, developed by David J. Eastman, published by Virgin Interactive)
Mechanics: [From Wikipedia] The player can choose to deploy a brigade to police Gaza and the West Bank, which is supposed to help reduce discontent. Once a brigade has been deployed, the player can choose "soft" or "hard" tactics. In a separate decision, during the annual UN convention, an offer can be made to form a Palestinian homeland.
Gameplay consequences: Hard tactics presumably are more effective than soft, but cause international outcry, presumably reducing relations with the USA and so affecting their annual military grant. At the UN convention, forming a Palestinian homeland permanently removes this problem and improves relations with the USA; the only disadvantage seems to be that if you go to war with the country that offered the homeland territory, you find yourself at a territory loss, usually two bars on the war progress meter.
Additional note: While dry and strategic, this turn-based mid-east simulator introduces players to the complex moral choices needed when governing Israel.

Ultima VI (1990, developed by Origin Systems, published by Origin Systems)
Mechanics: “Karma” is a single-continuum trait affected by the player’s verb decisions. Of note, at one point in the game the main character experiences a shift between viewing Gargoyles as “monsters” to seeing them as sentients, which changes how your Karma trait changes when you interact with them.
Gameplay consequences: Maintaining a relatively high Karma is necessary to complete the game. As with the earlier Ultima IV, there is no final boss.

Civilization (1991, developed by MicroProse, published by MicroProse)
Mechanics: Pollution is a consequence of industrialization, which gives players a gameplay advantage but also induces global warming. Players can also build nuclear weapons.
Gameplay consequences: Pollution is a tradeoff of helping yourself while hurting everyone. Nuclear weapons are a Prisoner’s Dilemma mechanic.
Additional note: These choices also appeared in Civilization 2 (1996).

Star Control 2 (1992 Nov, developed by Toys for Bob, published by Accolade)
Mechanics: Complex dialog trees allow the player to make game-changing menu decisions. Some of these decisions are ethical in nature.
Gameplay consequences: Varies by decision. One example is a choice to sell some of your human crew into slavery, in order to gain resources and curry favor with an alien race.

Star Wars: X-Wing (1993 Feb 15, developed by Totally Games, published by LucasArts)
Mechanics: None.
Additional note: Gameplay is perfectly linear, but there are moral elements in the narrative layer.

Harvest Moon (1996 Aug 9, developed by Pack-In-Video, published by Nintendo)
Mechanics: A/B verb decisions presented to the player at various points during gameplay.
Gameplay consequences: Varies based on the decision. One example is the choice to stay with your wife during childbirth (which boosts your relationship), but if you do your cows get sick and you lose income for several days that follow.

Oddworld: Abe’s Oddysee (1997 Sep 19, developed by Oddworld Inhabitants, published by GT Interactive)
Mechanics: None.
Additional note: Traditional action game, but the characters are making decisions as part of the narrative. Player makes no decisions, but the game itself is making a moral (anti-Corporate) statement.

Planescape: Torment (1999 Dec 12, developed by Black Isle Studios, published by Interplay)
Mechanics: The premise of the game is that the main character makes choices about his own mortality, in an effort to avoid the consequences of his sins in the afterlife. Throughout the game, the player is given ethical dilemmas in the form of menu choices, representing the actions and dialogue of the PC. The PC possesses an alignment along two continuums (good/evil and lawful/chaotic), and decisions in the game can alter the player’s alignment along either or both.
Gameplay consequences: The decisions in the game enable the (amnesic) PC to “remember” who he was, affecting the story. The player’s alignment also opens and closes alternate story and gameplay paths.

Hitman: Codename 47 (2000 Nov 19, developed by IO Interactive, published by Eidos Interactive)
Mechanics: One of your "weapons" is a syringe and bottles of anesthetic which you can use to incapacitate "innocents" (servants, police, people out shopping for fruit, etc.) for a brief period instead of killing them. You can also choose to just avoid them completely (with difficulty).
Gameplay consequences: The game tracks enemies killed and "innocents" killed. It technically doesn't affect the game in any way (aside from keeping track and displaying it at the end of each mission).
Additional note: This system also appears in successive games in the Hitman series.

Dragon Warrior 7 (2000 Aug 26, developed by Heartbeat and ArtePiazza, published by Enix)
Mechanics: Menu A/B choices.
Gameplay consequences: Varies based on the choice. One example involves a man with a pet monster that he loves. You must decide either to kill it to protect the town, or to leave it alone because the owner says it is tame. You can change your mind midway, but that usually results in a bad outcome (the monster destroying the village).

Black & White (2001 Mar 25, developed by Lionhead Studios, published by EA Games)
Mechanics: This “god game” tracks the verb decisions of the player along a single good/evil continuum.
Gameplay consequences: The player’s alignment impacts how their people behave, how the player’s giant pet behaves and looks, and what godlike powers the player has.

Silent Hill 2 (2001 Sep 24, developed by Konami and Team Silent, published by Konami)
Mechanics: The game tracks the player’s verb decisions throughout the game, to choose an appropriate ending.
Gameplay consequences: Silent Hill 2 does not have a canonical ending (according to official statements from Konami), leading most to believe that the large number of endings is a critical aspect of the game. The different endings do portray the main character as being of different moral character.
Additional note: The game does not make the player aware of the mechanics at all.

Grand Theft Auto 3 (2001 Oct 22, developed by DMA Design, published by Rockstar Games)
Mechanics: Open world “sandbox” gameplay gives the player a feeling of complete freedom to engage in moral or immoral acts.
Gameplay consequences: If the player commits an illegal act in view of police, they gain a police shield. The more shields the player has, the more aggressive police will be in tracking the player and the more powerful equipment and vehicles the police will have. Shields decrease over time, and can decrease faster through other means.
Additional Note: Note that it is the results of the player’s behavior (and not the motivation behind it) that the game tracks, and then only if the player is caught.

Star Wars: Knights of the Old Republic (2003 Jul 15, developed by BioWare, published by LucasArts)
Mechanics: Series of menu A/B choices that are “light side” or “dark side” that move the player’s alignment along a single continuum.
Gameplay consequences: Different subplots/quests available based on alignment. Some gameplay benefits (bonuses and penalties to the cost of certain powers) based on alignment.

EVE Online (2003 May 6, developed by CCP Games, published by CCP Games and SSI)
Mechanics: Very little explicitly designed in the game. Players are given an open-ended dynamic system.
Gameplay consequences: From the open world, player morality systems have socially emerged over time.
Additional note: This design-layer morality was intentional as a way to foster PvP. Interestingly, EVE Online China has had very different social systems emerging that were unexpected: the players figured out they could work together to collectively come out ahead, making for surprisingly little PvP.

Beyond Good & Evil (2003 Nov 11, developed by Ubisoft Montpellier, published by Ubisoft)
Mechanics: None.
Additional note: Traditional action game, but the characters are making decisions as part of the narrative. Player makes no decisions, but the game itself is making a moral (anti-Corporate) statement.

The Suffering (2004 Mar 9, developed by Surreal Software, published by Midway Games)
Mechanics: Explicit verb decisions (you can choose to help, ignore, or kill NPCs). You can also turn into monster form as a combat action.
Gameplay consequences: Through the player’s decisions in the game, the backstory is filled in to explain what happened and what kinds of choices the PC made in the past. Using monster form helps greatly in combat, but using it increases the chance that the PC killed his wife in the past.

Fable (2004 Sep 14, developed by Lionhead Studios, published by Microsoft Game Studios)
Mechanics: Series of menu A/B choices that are good or evil that move the player’s alignment along a single continuum.
Gameplay consequences: Different subplots/quests available based on alignment. Some gameplay benefits (bonuses and penalties to certain powers) based on alignment.

Star Wars: Knights of the Old Republic 2 (2004 Dec 6, developed by Obsidian Entertainment, published by LucasArts)
Mechanics: Same as the original game in the series. Series of menu A/B choices that are “light side” or “dark side” that move the player’s alignment along a single continuum.
Gameplay consequences: As with the original game, there are different subplots/quests based on alignment. Additionally, each prestige class is only available to players at one extreme alignment or the other, and players must choose a prestige class to finish the game; the game cannot be finished with a middle-of-the-road character.

Jade Empire (2005 Apr 12, developed by BioWare, published by Microsoft Game Studios)
Mechanics: As with earlier BioWare games, a series of menu A/B choices that are good or evil that move the player’s alignment along a single continuum.
Gameplay consequences: Different subplots/quests available based on alignment. Some gameplay benefits (bonuses and penalties to certain powers) based on alignment.

Shadow of the Colossus (2005 Oct 18, developed by Team Ico, published by Sony Computer Entertainment)
Mechanics: None.
Additional note: The premise of the game is that the PC is on a mission to destroy 16 giant Colossi, to resurrect his beloved. The morality is entirely in the narrative layer; it is the character and not the player who chooses to attack the Colossi (who are, for the most part, innocent targets). However, since the player is the one defeating the Colossi through gameplay verbs, many players still feel a sense of guilt that taints the sense of achievement from slaying each Colossus.

Civilization 4 (2005 Oct 25, developed by Firaxis Games, published by 2K Games and Aspyr)
Mechanics: When occupying an enemy city, you can raze it or occupy it. Additionally, one of the technologies players can research is Slavery.
Gameplay consequences: Capturing a city gives control of that city (with all of the benefits and responsibilities of owning a new city), while razing a city gives the player immediate cash instead. Slavery has gameplay consequences as well: it lets you sacrifice population to build something faster. Aside from the direct effect of Slavery, if someone else declares Emancipation, your unhappiness increases.
Additional note: In both examples listed here, there are not only gameplay consequences but also moral ones.

PeaceMaker (2007 Feb 1, developed by ImpactGames)
Mechanics / gameplay consequences: The point is for the player to understand the complexity of the Israel-Palestine conflict through interacting with the game’s systems. In-game decisions and consequences are similar to the real world.

BioShock (2007 Aug 21, developed by 2K Boston / 2K Australia, published by 2K Games)
Mechanics: Repeated menu A/B choice to “save” or “harvest” the Little Sisters.
Gameplay consequences: Get more ADAM in the short term with the “evil” choice, but more in the long run with the “good” choice. Choice influences whether you see the “good” or “bad” ending.

Portal (2007 Oct 9, developed by Valve Corporation, published by Valve Corporation)
Mechanics: In one puzzle, a “Weighted Companion Cube” must be carried to safety, then thrown in the incinerator.
Gameplay consequences: Encouraged by the game’s content, many players become (illogically) emotionally attached to the “Weighted Companion Cube” and find themselves affected by its destruction. While this is not a decision (indeed, it is not even an ethical dilemma, since the player has no choice in the matter), many players nonetheless feel guilty, since they were the ones who pushed the in-game button that destroyed the Cube.

Mass Effect (2007 Nov 20, developed by BioWare, published by Microsoft Game Studios)
Mechanics: Series of menu A/B choices that are good or evil. Each decision increases one of two continuums, Paragon (“good”) or Renegade (“evil”), which are independent of one another.
Gameplay consequences: Increasing the Paragon meter lets the player build their Charm skill, which gives a discount in all stores. Increasing the Renegade meter lets the player build their Intimidate skill, which gives more money for selling items. There are also certain menu choices that become available for sufficiently high Charm or Intimidate skills.

Spore (2008 Sep 7, developed by Maxis, published by Electronic Arts)
Mechanics: Early evolution of your species leads to it being Carnivore, Herbivore or Omnivore. As it develops further, this becomes a tendency towards Commerce, War or Religion. The type is determined by a repeated verb decision that moves your alignment towards one extreme end of a single continuum. The continuum is interpreted as one of three results, based on whether you are at one end, the other end, or in the middle.
Gameplay consequences: Each type gives you explicit abilities, bonuses, and powers. These are unique for each type.

Fable 2 (2008 Oct 21, developed by Lionhead Studios, published by Microsoft Game Studios)
Mechanics: Two continuums, one good/evil and one for “purity.” Additionally, there is one notable choice in the game to either resurrect everyone affected by a genocide that you don’t care about, or only the few people you care about, or no one at all.
Gameplay consequences: Purity affects avatar appearance only. The other continuum offers different subplots/quests based on alignment, and offers bonuses and penalties to certain powers. For the resurrection choice, there is a gameplay consequence; resurrecting no one gives the player a lot of money, while resurrecting only those you care about gets you your dog back (which is a gameplay mechanic); resurrecting everyone you don’t care about nets no gameplay benefit at all.

Fallout 3 (2008 Oct 28, developed by Bethesda Game Studios, published by Bethesda Softworks and ZeniMax Media)
Mechanics: Through dialog trees and combat, the player can make both menu and verb decisions with radical impact on the game world.
Gameplay consequences: Varies based on the choice. As one example, early in the game, the player can choose to detonate or disarm a nuke in the middle of a town. That decision is one of many that continue to resonate throughout the story.

Train (2009, developed by Brenda Brathwaite)
Mechanics: In this non-digital race-to-the-end game, players load passengers on a train and try to get it from one end of the track to the other. When one player reaches the end, they are notified they have just delivered their passengers to Auschwitz.
Gameplay consequences: The ethical systems of this game are mostly in the design layer. It is making a statement about the brutally efficient systems in place in Nazi Germany. There are also subtle but important interaction-layer ethics. The most obvious is the player choice of whether (and how) to keep playing once the destination is revealed.
Additional note: Some of the rules are left intentionally vague to allow players to interpret them. One possible interpretation, for example, allows the players to work together to save all the Jews. In this respect, choosing how to play (or choosing to play at all, if an “experienced” player is playing for a second time with friends who have not seen the game before) is a player decision.

Lose/Lose (2009, developed by Zach Gage)
Mechanics: This is a simple 2d scrolling shooter game with no explicit morality.
Gameplay consequences: If you get hit, the game deletes itself from your hard drive. For every alien that you kill, it permanently deletes a random other file from your hard drive.
Additional note: The game’s design layer is an ethical statement on the morality of killing waves of aliens (as happens in many vintage-era arcade games). Note how different the game would be if the consequences were performed on someone else’s hard drive instead of yours!

inFAMOUS (2009 May 26, developed by Sucker Punch Productions, published by Sony Computer Entertainment)
Mechanics: Single hero/villain continuum. Player is given a series of verb decisions during certain points in the storyline which are explicitly called out during cut scenes (e.g. having the main character say to himself that he could fire into a crowd to keep a bunch of food for himself… and then the game waits for a few minutes to see if the player does this or not).
Gameplay consequences: Alignment directly affects which powers are unlocked, and also affects the story’s subplots and ending.

Cute Knight Kingdom (2009 Aug 18, developed by Hanako Games, published by iWin)
Mechanics: Complex “Sin” system. Sins are broken down into categories and you can only get so much sin for a certain act no matter how often you do it (diminishing returns on each type of action), so you cannot become an arch villain by repeatedly stealing apples. Killing anything with feelings (or injuring them, if they didn’t injure you first) is a sin. It is possible to avoid combat, or scare things off without killing them, to avoid the sin penalty. Once you accumulate sin, the only way to remove it is to pledge to never do it again, and if you break your pledge your sin all comes back.
Gameplay consequences: Sin mostly affects the game ending. Even a small numerical amount of sin can get a bad ending if it was for a serious crime (like murder), or if you’ve repeatedly stolen and then put yourself in a position of temptation. Other times, sin has no effect other than the emotional impact on the player knowing what they’ve done.

Dragon Age: Origins (2009 Nov 3, developed by BioWare, published by Electronic Arts)
Mechanics: Unlike previous BioWare games, moral alignment isn’t tracked directly. However, the main character has a relationship meter with each of their companions which acts as a continuum. Each companion has their own morality system, and when the player makes choices that a particular character feels strongly about, it influences their relationship with that character.
Gameplay consequences: Close relationships with a party member gives that character progressive stat bonuses based on how much you’ve maxed out their relationship meter; you can also unlock character-specific side quests with sufficiently high relationship values. Having a high enough relationship also opens up romantic/sexual options with certain characters, which have no effect on gameplay.
Additional note: Note that character relationships are not strictly based on ethical choices. You can also modify relationships with characters by giving them gift items and by completing certain quests. Also note that character relationships tend to be unchanged if the character is not currently in your party, so the player can do morally objectionable things as long as they head out with characters who don’t care.

Appendix: References and Further Reading

We always recommend reading primary texts whenever possible, although some can be challenging without a guide or teacher to help with context. Here is a short list of primary and secondary texts for designers. There are also several great philosophy websites that target the general reader.

Primary texts:

  • Plato’s Republic
  • Aristotle’s Nicomachean Ethics
  • David Hume’s A Treatise of Human Nature
  • Immanuel Kant’s Groundwork for the Metaphysics of Morals
  • J.S. Mill’s Utilitarianism
  • Friedrich Nietzsche’s Beyond Good and Evil

Secondary texts:

  • Alasdair MacIntyre’s After Virtue
  • Bernard Williams’s Morality: An Introduction to Ethics
  • Simon Blackburn’s Being Good: A Short Introduction to Ethics
  • Michael Sandel’s Harvard course on Justice, which can be found on the web here: http://www.justiceharvard.org/

An in-depth analysis of the mechanics of Ultima IV, showing (among other things) how a player can totally game the system by knowing the underlying effects of the “ethical” choices:
As an example, see http://lparchive.org/LetsPlay/Ultima%204-6/Update%206/index.html, where the author details how to gain Virtues by stealing from blind shopkeepers.
