Hard Hitting Combos

Posted in 2017 Workgroup Topic Proposals

How do you create a system of game mechanics that, used in combination, best creates the opportunity for interesting, emergent behavior?

Many games have interesting combo systems, and some are better than others: card combinations in Magic: The Gathering and Terraforming Mars, or the skill trees in Diablo III and World of Warcraft.

What are techniques for designing a set of mechanics that have interesting combos? What types of combos are there? How can a set of mechanics be evaluated for how interesting its combos are? Is it a systemizable problem? How can it be made fun and comprehensible? What is the best I could do procedurally?
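One way to start systemizing the evaluation question: simulate each mechanic alone and in pairs, and score a pair by how much its best ordering outperforms the sum of its parts. A minimal sketch follows; the mechanics, the payoff function, and the starting state are all invented for illustration.

```python
import itertools

# Hypothetical mechanics: each maps a game state to a modified state.
MECHANICS = {
    "draw_card":     lambda s: {**s, "cards": s["cards"] + 1},
    "double_draw":   lambda s: {**s, "cards": s["cards"] * 2},
    "gain_gold":     lambda s: {**s, "gold": s["gold"] + 3},
    "gold_to_cards": lambda s: {**s, "cards": s["cards"] + s["gold"] // 2},
}

def payoff(sequence, state=None):
    """Apply a sequence of mechanics to a starting state and score it."""
    state = state or {"cards": 1, "gold": 2}
    for name in sequence:
        state = MECHANICS[name](state)
    return state["cards"] + 0.5 * state["gold"]

def synergy(a, b):
    """How much the best ordering of (a, b) beats using each alone."""
    together = max(payoff([a, b]), payoff([b, a]))
    apart = payoff([a]) + payoff([b]) - payoff([])  # subtract baseline once
    return together - apart

# Rank all pairs: large positive scores are candidate 'combos'.
for a, b in sorted(itertools.combinations(MECHANICS, 2),
                   key=lambda p: -synergy(*p)):
    print(f"{a} + {b}: synergy {synergy(a, b):+.1f}")
```

Order dependence matters here: `draw_card` then `double_draw` is worth more than the reverse, which is exactly the kind of asymmetry a combo evaluator needs to surface.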

Building better social bonds in competitive games

Posted in 2016 Workgroup Topic Proposals

Playing with strangers in team-based competitive games can be a very hostile experience.  I’d like to discuss ways to build better player communities, including:

  • Matching players likely to form friendships early, based on personality traits we can infer or implicitly monitor, or on geographic location (a rough sketch follows this list)
  • Providing ways to align incentives to mentor (or at least not condemn) fellow players
  • Segregating those with negative behavior
  • Incentivizing positive behavior on forums
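For the matchmaking idea above, here is one rough sketch, assuming we already have per-player trait vectors from implicit monitoring: score affinity between strangers with cosine similarity and pair each player with their best match. Every player, trait, and number here is hypothetical.

```python
import math

# Hypothetical trait vectors inferred from chat sentiment, telemetry,
# and region; all names and values are placeholders.
players = {
    "ana": {"patience": 0.9, "chattiness": 0.7, "region": 0.1},
    "bo":  {"patience": 0.8, "chattiness": 0.6, "region": 0.1},
    "cam": {"patience": 0.2, "chattiness": 0.9, "region": 0.8},
}

def affinity(a, b):
    """Cosine similarity of trait vectors: a crude proxy for
    'likely to get along'."""
    dot = sum(a[k] * b[k] for k in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def best_partner(name):
    """Pick the stranger with the highest affinity score."""
    others = (p for p in players if p != name)
    return max(others, key=lambda p: affinity(players[name], players[p]))

print(best_partner("ana"))  # -> 'bo': closest traits and same region
```

In practice this score would be one term in a matchmaker alongside skill rating and queue time, not a replacement for them.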

Iteration with Metrics

Posted in 2015 Workgroup Topic Proposals

I’m interested in creating a formal process for incorporating player behavior into an economic or systems model.

For example, when creating an initial model there are several types of input variables: predictors of behavior (“20% of players will want this”), aesthetic settings (“score should be in 1000s, not 10s”), and balance knobs (“enemies should do 50 damage”).  Then you create your model, and it has several outputs: “gold earned per second”, “damage dealt per player”, “average HP for a level 20 enemy”.
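A minimal sketch of how those three kinds of inputs and the derived outputs might be organized; the formulas are stand-ins for a real systems model, and every default value is made up.

```python
from dataclasses import dataclass

@dataclass
class EconomyModel:
    # Behavior predictor: a guess about what players will do.
    pct_players_wanting_item: float = 0.20
    # Aesthetic setting: chosen for feel, not balance.
    score_scale: int = 1000
    # Balance knob: the value we actually tune.
    enemy_damage: float = 50.0

    def outputs(self, kills_per_minute: float) -> dict:
        """Derive observable outputs from the inputs; placeholder math."""
        return {
            "gold_per_second": kills_per_minute / 60 * self.score_scale * 0.01,
            "damage_dealt_per_player": self.enemy_damage * kills_per_minute,
        }

print(EconomyModel().outputs(kills_per_minute=3.0))
```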

When we balance systems, we tend to fix the inputs and outputs we want to set for aesthetic or pacing reasons, then solve for the remaining inputs and outputs through Monte Carlo simulation (or another method).  If our collected metrics fail to match the model, we figure out whether a behavior predictor was wrong or the model itself was wrong, and adjust accordingly.
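A toy version of that loop, assuming one uncertain behavior predictor (what fraction of players engage) and one output metric (gold per second); the simulation, the target, and the telemetry number are all made up.

```python
import random

def simulate(pct_engaged, n_players=10_000):
    """Monte Carlo over player behavior: mean gold per second.
    The behavior model is a deliberately toy stand-in."""
    total = 0.0
    for _ in range(n_players):
        engaged = random.random() < pct_engaged       # behavior predictor
        kills_per_min = max(random.gauss(3.0 if engaged else 1.0, 0.5), 0)
        total += kills_per_min / 60 * 10              # 10 gold per kill
    return total / n_players

# 1. Fix the pacing target, then search for the free input that hits it.
target_gold_per_sec = 0.45
best_pct = min(
    (pct / 100 for pct in range(101)),
    key=lambda p: abs(simulate(p) - target_gold_per_sec),
)
print(f"engagement needed to hit target: {best_pct:.0%}")

# 2. Compare the calibrated model against live telemetry; a large gap
#    means the behavior predictor (or the model itself) was wrong.
observed_gold_per_sec = 0.38  # hypothetical collected metric
print(f"model-vs-live gap in gold/sec: "
      f"{simulate(best_pct) - observed_gold_per_sec:+.2f}")
```

The grid search stands in for whatever solver fits the real model; the important part is the round trip from predictor to simulation to live metrics and back.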

We’ve never done this formally (although I am sure several horseshoers have), but it would be really useful to have a good way of setting up this iteration that could be applied to many game design problems.