Duration: 6 weeks

My Role: Designer & Researcher

Domain: Education, Game

 

See the stick figure's adventure below :)

 

About Math Run

Math Run is an educational game that aims to improve mental math skills. It was designed and developed by two CMU students as the final project for the course Design of Educational Games. The purpose of the game is to let people practice mental math in an interesting and exciting way, and to help them transfer those skills into daily life. The game is built on the well-known game design framework MDA: Mechanics, Dynamics, and Aesthetics. You can download the game here (currently Windows only).


Design Process

Theory support: MDA framework 

 
Reference: Hunicke, R., LeBlanc, M., & Zubek, R. (2004). MDA: A formal approach to game design and game research. In D. Fu, S. Henke, & J. Orkin (Eds.), Challenges in Game Artificial Intelligence: Papers from the 2004 AAAI Workshop (pp. 1-5). Menlo Park, CA: The AAAI Press.


The game is held together by the principles of MDA. Each principle works in concert with the others to create a game that maintains a consistent aesthetic while still achieving our desired educational outcomes. For the aesthetics of Math Run, we kept the art as basic as possible (no one on our team is an artist). We wanted Math Run to revolve around a stick-figure art style, so that it looks like it came out of a sketchbook. Everything in the game is black and white except for the coins, which were left in color to emphasize their importance. Sound is also an important part of the aesthetic: players hear a sound when they collect a coin and when they hit a barrier, and the vibrant music was enjoyed by everyone who played the game.

Design Brainstorm

 

We began the brainstorming process by trying to determine the best way to keep the game interesting while still helping players improve their mental math skills. Our first idea was very similar to Temple Run: the player would run along a track with obstacles in their path, and would have to quickly answer the prescribed questions to either dodge the obstacles or make a turn. In this version there was no life system; instead, a monster would chase the player from behind and get closer with every missed question.

After realizing that this version of the game would have to be done in 2D, we moved to a different idea. Instead of running on a single track, there would be three lanes on the screen, and the player would have to move between them in order to collect coins. Each lane contains obstacles that the player must either clear or avoid by switching lanes, and lane switches are only allowed when approaching an obstacle. In this version we removed the monster chasing the player and instead decided that the player can miss three questions before losing. We felt it would be too much for players to worry about a monster chasing them while simultaneously worrying about switching lanes and collecting coins.
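This rule set is small enough to sketch in code. The Python below is purely a hypothetical illustration of the rules described above, not the game's actual implementation; the names Player, MAX_MISSES, and resolve_obstacle are invented for the example.

    MAX_MISSES = 3   # the player may miss three questions before losing

    class Player:
        def __init__(self):
            self.lane = 1        # 0 = top, 1 = middle, 2 = bottom
            self.misses = 0
            self.coins = 0

        def resolve_obstacle(self, answered_correctly, target_lane):
            """Resolve one obstacle: a correct answer clears it and may
            switch lanes (switching is only allowed at obstacles); a wrong
            answer costs one miss. Returns False when the game is over."""
            if answered_correctly:
                self.lane = max(0, min(2, target_lane))
                return True
            self.misses += 1
            return self.misses < MAX_MISSES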

 
Paper prototype sketch version 1.0


Paper prototype version 2.0



Cognitive Task Analysis 

 

The Cognitive Task Analysis included two phases: a paper prototype and playtesting of the actual game. Both phases produced valuable insights that were integrated into the game design. The paper prototype CTA focused mainly on how to make the mechanics work in the actual game, while the game playtesting focused on improving the play experience and better meeting the game's educational objectives. In both phases we asked players to “think aloud” during the game and to give suggestions and feedback after playing.

 
We used a straw to simulate the stick ninja running on the lane. The user testing was super fun and our participants were very engaged!


 

Key findings from paper prototype CTA:

  • Playtesting this game in paper format is difficult, as it was impossible to really create the effect of the player moving.
  • Players frequently looked ahead at upcoming problems.
  • It is hard to move the paper character at a constant speed the way the real game can, and when the character approaches a hurdle it is awkward to count down three seconds and then resume moving the character.
  • If players take the shortcut, they will very likely keep running in the middle lane from beginning to end.
  • Jumping into the game straight from the instructions is difficult; the pure-text instructions are long and confusing.
 

Educational Objectives

Keeping a balance between learning and fun

 

Our ultimate educational objective with Math Run is to improve mental math performance. This is a simple educational objective, but mental math is a valuable skill in almost any situation. In Math Run, practice is driven by answering questions before the player runs into a barrier, which leaves players without enough time to write out the problem and solve it on paper.
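To make the time pressure concrete, here is a minimal sketch of the rule, assuming a fixed running speed; the speed value is an assumption for illustration, not a number taken from the game.

    RUN_SPEED = 5.0   # track units per second (assumed)

    def seconds_to_answer(distance_to_barrier):
        """Seconds the player has before colliding with the next barrier --
        short enough that writing the problem out is not an option."""
        return distance_to_barrier / RUN_SPEED

    print(seconds_to_answer(15.0))   # 3.0, echoing the paper prototype's
                                     # three-second countdown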

 
 

The mechanics are closely tied to the learning objectives: players keep practicing mental math problems to keep the stick figure running. The faster and more accurately they answer the questions, the farther they run and the more coins they collect. Another mechanic is changing lanes to avoid blocks (thick hurdles that cannot be jumped over) or to collect more coins. While designing the game, we placed coins in different lanes to nudge players into changing lanes frequently. Since people tend to answer the simplest question just to survive, the coins encourage them to answer harder questions in another lane. This intervention worked especially well for people who had played the game for a while: they began to emphasize collecting coins instead of mere survival, and they earned higher leaderboard scores because they collected more coins. Novice players, whose cognitive load was too high to spare attention for changing lanes, usually ended up lower on the leaderboard.
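The coin-placement idea can be sketched as follows. The 1-3 difficulty scale and the coins-per-difficulty ratio are assumptions made for this example, not the game's shipped tuning.

    import random

    def layout_round():
        """Assign one question difficulty per lane and scale each lane's
        coins with that difficulty, so the easiest lane is never the most
        rewarding one."""
        difficulties = [1, 2, 3]       # assumed 1-3 difficulty scale
        random.shuffle(difficulties)   # which lane gets which difficulty
        return [{"lane": lane, "difficulty": diff, "coins": diff * 2}
                for lane, diff in enumerate(difficulties)]

    for lane_setup in layout_round():
        print(lane_setup)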

Instructional Principles

We applied multiple instructional principles in Math Run. According to Koedinger, Booth, and Klahr (2013), instructional principles fall into three categories: memory/fluency, induction/refinement, and sense-making/understanding. Our game targets each category, with the scaffolding principle, the feedback timing principle, and the explanation principle respectively. For scaffolding, we provide step-by-step instruction at the beginning of the game and break each action into small segments for better understanding. The game also allows players to skip the instructions, but asks them to confirm the skip first. For feedback timing, the game provides immediate feedback on errors in both direct and indirect ways; the goal is to make players realize the consequences of their actions and adjust based on the feedback. For explanation, the game prompts players to self-explain instead of giving out the answer directly, which lets players find and fix errors themselves and triggers active thinking. We also applied the personalization principle by asking players to type in a username at the beginning of the game. The username appears on the leaderboard, which lets players track their achievement and progress; the leaderboard also serves as a strong motivator to keep playing and get a higher score or beat their friends.
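As a rough sketch of how the feedback timing and explanation principles combine in play (hypothetical code, not the game's actual implementation):

    def feedback(given, expected):
        """Immediate feedback on errors (feedback timing principle),
        delivered as a self-explanation prompt rather than the answer
        (explanation principle)."""
        if given == expected:
            return "Correct!"   # coin sound plays, the run continues
        return ("Not quite. Walk through the problem one step at a time "
                "and try again.")   # a prompt, never the answer itself

    print(feedback(13, 13))
    print(feedback(12, 13))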

 
 

Instructional Principles that are applied in this game

Reference: Koedinger, K. R., Booth, J. L., & Klahr, D. (2013). Instructional complexity and the science to constrain it. Science, 342(6161), 935-937.

 
 
 

Game Evaluation

 

The following data were tracked during game evaluation: the participant's name (their ID in the game), time spent on the pretest, pretest accuracy, time spent playing the game, highest leaderboard score, time spent on the posttest, and posttest accuracy.
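These fields map naturally onto a simple record. The sketch below is illustrative only; the field names are chosen here rather than taken from the game's logs.

    from dataclasses import dataclass

    @dataclass
    class EvaluationRecord:
        participant_id: str       # the player's in-game username
        pretest_time_s: float     # time spent on the pretest, in seconds
        pretest_accuracy: float   # fraction of pretest answers correct
        play_time_s: float        # time spent playing the game
        high_score: int           # best leaderboard score achieved
        posttest_time_s: float    # time spent on the posttest, in seconds
        posttest_accuracy: float  # fraction of posttest answers correct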

 
 
Game evaluation process


 
 

Results analysis for pretest and posttest: In this part, we asked participants to answer the questions as quickly as they could. These questions are more difficult than those in the game: the in-game questions are one-step math problems, such as 3+10 and 4*7, while the pretest and posttest questions need two steps to reach the answer, such as (10/5)+12 and (50/5)-15. We designed harder questions for the pre- and posttest because they can better show improvement (if any).

The results show dramatic differences among individuals, and since we only had four participants, it is somewhat difficult to read a trend from the data. The graphs for accuracy and time spent can be found in the appendix. In the accuracy graph, two participants decreased in accuracy, one increased, and one stayed the same, producing a big “cross” in the accuracy changes. There are several possible explanations: we only had 10 questions, so each wrong answer causes a dramatic accuracy drop; the questions were not difficult enough, so accuracy was high overall and differences before and after playing are hard to detect; and the sample size was too small to catch any general trend. The “time spent for pre- and posttest” graph shows a similar pattern: another big “cross”, meaning two participants spent much more (or less) time on the posttest than on the pretest, which is difficult to explain. Perhaps their focus changed, or they became more (or less) cautious while working the problems. Ideally, we would expect players to spend less time on the posttest and achieve higher accuracy.
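A toy generator contrasting the two problem formats is sketched below. The actual test items were authored by hand; the number ranges here are guesses chosen only to reproduce the flavor of the examples above.

    import random

    def one_step():
        """One-step in-game problem, e.g. 3+10 or 4*7."""
        a, b = random.randint(2, 12), random.randint(2, 12)
        return f"{a}{random.choice(['+', '*'])}{b}"

    def two_step():
        """Two-step pre/posttest problem, e.g. (10/5)+12 or (50/5)-15."""
        divisor = random.randint(2, 10)
        dividend = divisor * random.randint(2, 10)   # divides cleanly
        tail = random.randint(5, 20)
        return f"({dividend}/{divisor}){random.choice(['+', '-'])}{tail}"

    print(one_step(), two_step())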

 
Accuracy for pre and post test


Time spent for pre and post test


 

Results analysis for post-survey: this part measures players' engagement with and enjoyment of the game. The graphs (see appendix) show that people really enjoyed the game and felt very engaged while playing it. The average score for engagement is 4.5 out of 5, and for enjoyment 4 out of 5. Some participants explained that they did not give full marks for enjoyment because of bugs in the game: they got confused or frustrated when system errors occurred. For example, one participant said, “The screen is too small. It is hard to see the equations clearly in such a small screen.” The average difficulty rating is 3 out of 5, which met our expectation, since we wanted the game to be of medium difficulty: the literature indicates that high difficulty frustrates players, while low difficulty leads to boredom. The other graph shows players' ratings of how helpful they think the game is for improving their mental math skills. The average score is 4.25 out of 5, which is very promising for potential learning transfer. Since the players are all adults, we assume they have adequate metacognition, so their ratings can reflect how helpful the game actually was for them.

 
 
