Steel Seal Game Testing

This trimester, we were tasked with creating a commercial mobile game and releasing it to the Google Play Store. Because the game would be publicly available for anyone to download and play, a lot of testing went into making sure it was balanced.

There are a few different play testing techniques commonly used in the industry, and each has its pros and cons. I covered two of these in an earlier blog, which can be found here. In this blog I will focus on the technique I used for our mobile game, Steel Seal, and why I chose it.

For our game, we used vertical slice testing, which gives the player a fully functioning slice of gameplay. We chose this technique because of the way our "levels" were designed. Steel Seal is an endless runner of sorts, so it doesn't have definitive start and finish levels. Instead, the levels act as a difficulty scale, blending into each other as the game progresses and becomes more challenging. To test levels, we loaded only the difficulties we needed at the time: sometimes just one or two, other times several at once, depending on what we wanted to test. This allowed us to give testers specific difficulty levels to test and get feedback on them, rather than handing them the full game and hoping they reached the later difficulties.
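The load-on-demand idea can be sketched out in code. The snippet below is a minimal Python illustration (the actual game was built in Unity), and the distance thresholds, blend window, and function names are all hypothetical, just to show the shape of the logic:

```python
import bisect

# Hypothetical distances at which each difficulty tier begins.
# In the real project these would be tuned values.
TIER_STARTS = [0, 500, 1200, 2500, 4500]  # tiers 1-5

def active_tiers(distance, blend_window=200):
    """Return the difficulty tiers that should be loaded at this distance.

    Near a tier boundary, the current and next tiers overlap so their
    enemy patterns blend instead of switching abruptly.
    """
    tier = bisect.bisect_right(TIER_STARTS, distance)  # 1-based tier index
    tiers = {tier}
    # Blend in the next tier when close to its start threshold.
    if tier < len(TIER_STARTS) and TIER_STARTS[tier] - distance <= blend_window:
        tiers.add(tier + 1)
    return sorted(tiers)

def slice_test(tiers_to_test):
    """For a vertical slice test, force-load just the tiers under test."""
    return sorted(set(tiers_to_test))
```

With this kind of setup, a tester can be dropped straight into `slice_test([3])` instead of having to survive long enough to reach tier 3 naturally.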

With our vertical slice testing, we could give players a specific difficulty and get feedback on it. A common question I asked my testers was what difficulty they thought they were playing on, on a scale from 1 to 5. Their answers let me tweak the level to spawn more or less of something, depending on the feedback. This let us tune each difficulty tier individually, making the entire game feel more balanced and fair to play.
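As a rough illustration of how that 1-to-5 feedback could drive spawn tweaks, here is a minimal Python sketch. The function name, step size, and numbers are hypothetical, not the actual tuning code from the project:

```python
def adjust_spawn_rate(current_rate, target_tier, reported_tiers, step=0.1):
    """Nudge a tier's enemy spawn rate based on tester feedback.

    reported_tiers: the 1-5 ratings testers gave for how hard the tier
    felt. If it felt harder than intended, spawn fewer enemies; if it
    felt easier, spawn more.
    """
    avg = sum(reported_tiers) / len(reported_tiers)
    error = avg - target_tier
    # Reduce the spawn rate when the tier feels too hard,
    # increase it when the tier feels too easy.
    return max(0.0, current_rate * (1 - step * error))
```

So if tier 3 is consistently rated a 4, the rate drops a notch; if it is rated a 2, the rate climbs. Repeating this over several play tests walks each tier toward its intended difficulty.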

The way I ran full game tests was fairly straightforward, but it gave me enough information to improve the gameplay experience. Ideally, you don't want play testers who are close friends, because they tend to say more favourable things and sugarcoat feedback, which defeats the purpose of a play test. For the testing I did at home, I used a mixture of acquaintances and good friends, and compared feedback between the two groups to get the most accurate picture. The first test I ran was within Unity, using a mouse for control. I launched the game in full screen at the menu and let the testers take over.

The second play test I ran was on a friend's mobile device. I booted the game on his Samsung Galaxy Note 4 and got a few people, both acquaintances and friends, to play it. I received much the same feedback as from the computer play tests, with one addition: most of the play testers noticed performance issues, including stuttering and frame drops that made the game harder to play.

From these play tests, it was noted that the game was extremely difficult because of how fast the enemies moved and how many of them appeared. Another piece of feedback concerned the size of the enemies: they seemed a bit too large, and if more than one was on screen at once it was almost impossible to dodge. Both of these issues were fixed quickly, with the speed of both the Orca and the Shark reduced and both sprites scaled down slightly. As for the performance, the problem was traced to the animations and sprites. The sprites were resized to a smaller resolution and the animations had their frame rates cut, so less data had to be processed each frame. This improved performance greatly.
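To see why resizing sprites and cutting animation frames helps so much, here is a back-of-the-envelope Python sketch of uncompressed texture memory. The resolutions and frame counts below are hypothetical examples, not the actual Steel Seal assets:

```python
def animation_memory_bytes(width, height, frames, bytes_per_pixel=4):
    """Approximate uncompressed memory for an RGBA sprite animation."""
    return width * height * frames * bytes_per_pixel

# Hypothetical numbers: halving sprite resolution and the animation
# frame count, roughly the kind of change made for the enemy sprites.
before = animation_memory_bytes(1024, 1024, 24)
after = animation_memory_bytes(512, 512, 12)
# Halving the resolution quarters the per-frame cost, and halving the
# frame count halves it again, for roughly an 8x reduction overall.
```

On a 2014-era phone like the Galaxy Note 4, that kind of reduction in texture data is exactly the sort of change that turns stutter into a steady frame rate.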

