Deep Dive into Games User Research with Heather Desurvire: Part One of Two
This article is the first of a two-part interview with User Behavioristics Founder & Principal, Heather Desurvire. We are delighted to have spoken with Heather about software and games user research; her experience and stature in the field are highly regarded.
We talked about a wide range of topics, from playtesting best practices and analysis to the history of user research in games and software development. We spoke specifically about how user research can help game studios – large and small – make tangible improvements to first-time player experience and fundamental game mechanics, and reduce player churn.
Games User Research: A Two-Part Deep Dive
Part one of this two-part interview focuses on Heather's background and her experience as an expert in the field of games user research. We begin by discussing playtesting and how the knowledge gained from playtesting can offer invaluable insights, with a particular focus on playtesting mobile games.
The second part of our interview with Heather will be published on August 7th and will focus entirely on playtesting, analysis, and tips for those just getting started with games user research.
UPDATE: Check out part two of our interview with Heather now!
Having done this work now for a long while, I can say that I am such a firm believer in qualitative research. I've been involved in tens of thousands of hours of user testing sessions, and the most accurate and actionable way to know why there is a barrier to optimal player experience – and how to fix it – is by observing the user. It's how you get actionable results.
A Short Introduction to Heather Desurvire
Heather is one of the world's foremost specialists in user and player experience. She is the Founder & Principal of User Behavioristics and is also part of the faculty at USC School of Cinematic Arts, Department of Interactive Media & Games.
User Behavioristics is an industry leader in UX research. They have tested over one hundred video games with many industry-leading publishers, and over two hundred software products, websites, and apps.
Can you tell me a little about yourself and your background?
HD (Heather Desurvire): I've been doing usability and user research since the usability field began. I began testing various software and hardware products, including printers, voice recognition, and voice dialing.
When I got into this field, I was in the right place, at the right time, with the right skills. I was lucky to be around some of the formative people, places, and companies that were willing to investigate, and who set the stage for the whole usability and user research field.
I've been involved in both the academic and practitioner side of things and started early enough to have influenced the methods that are now the industry standard.
I think that being involved in both the theory and practice of user research has kept me challenged and interested in continuing to contribute to this field. I love being in service of the people, players, and companies who all work together and want to create great games – it takes all sides to create the ultimate user and player experience.
After working for these first champion companies of user research, I began working as a consultant for Microsoft. I was one of their principal consultants during that period and developed a lot of experience as a practitioner working with various high-profile companies. The work I did in conjunction with Microsoft helped to shape the field as it currently exists. I was able to do quite a lot of academic research during that period too, and the two combined to help develop the most important and effective methods for getting excellent data.
The data and analysis we get from users are actionable for developers and designers, which is great because they can take the information we collect and immediately improve their games by removing the barriers to the optimal experience we uncover.
Even though I got close to getting a Ph.D. in statistics, I found that I've done more qualitative work. Quantitative work is necessary, and we use it as well, but I believe that qualitative work like we do with PlaytestCloud is the most actionable. It helps us to understand not just that there's some issue, but also why there's an issue.
Having done this work now for a long while, I can say that I am such a firm believer in qualitative research. I've been involved in tens of thousands of hours of user testing sessions, and the most accurate and actionable way to know why there is a barrier to optimal player experience – and how to fix it – is by observing the user. It's how you get actionable results.
I got into testing and improving player experience in games while I was working in New York, back when games weren't considered as much of a viable consumer product. It's there that I teamed up with the people at Microsoft who first started doing usability studies on games – this was back in the days of the original Halo series.
Games are so rich, and there are exponentially more issues than with productivity software user research. Games run the whole gamut of usability research. You can never, ever, solve the puzzle in games user research. There's always something surprising and exciting because of the variance and pace involved – this is what makes them fun to work on! Everything from new player experience to immersion, engagement, emotional connection, and the differences in how people have fun playing games. Games are incredibly multilayered.
There's never a bad time to begin user research – knowledge is always power – and when you know what the barriers to optimal player experience are, you can figure out how to address them.
What's the most common misconception that you find about playtesting?
HD: There are still many people who don't understand what playtesting is and why it's of value. Awareness is growing, and the situation is better than it was even three years ago, but it's still not widely understood.
I would say the biggest misconception that people have is that playtesting means focus groups. Many people are more familiar with marketing research and focus groups. Focus groups are great for idea generation but not necessarily for improving the player experience. And it’s often the first question, "Oh, can we do a focus group?" and it’s like, "Well, sure, but first it'll be good to chat about what you want to learn from your research..."
Then you have to explain why focus groups aren't always necessarily the most effective for games user research. There is a time and place for focus groups, but for what we're doing with playtesting, and with PlaytestCloud, focus groups won't help us achieve our goals. The goal is not to understand what the best marketing message is or to generate ideas – it's to understand the player experience and why players had the problem they did. This leads directly to how to fix that barrier.
In a focus group, you get an idea of what might be a problem, but it's based on opinion, and people are terrible at reporting their own experience – unless it's in real-time.
When it comes to playing a game or using a product, we human beings aren't great at saying what we just did and how well we did it. Our experience is colored by our own image of how we think we did. Some people say they did worse than they actually did, some say they did better, and others just don't want to look bad in front of other people.
And then there's groupthink, where the group wants to agree with the alpha character in the room. So, there are a lot of things at play that aren't at play in the one-on-one observational studies we do with PlaytestCloud.
DC (Dillon Cleaver): I think the biggest problem is communicating that message you just summarized, that playtesting is quite different from a focus group.
HD: Absolutely, and it's surprising how many people still have the notion that a focus group is the best way to learn how to optimize the player experience. People still ask, "Can we do focus groups?" and sometimes you can be convincing and demonstrate the difference to offer them better, more actionable results.
I usually give an example of the general result you can get from a focus group and the specific result you can get from playtesting. Typically, that helps, but sometimes there are other powers that be.
Sometimes we have to go ahead with a focus group and sneak in a little playtesting too.
Being an expert in the field means that I can help to make suggestions and offer insights into why players don't understand the controls or the UI and what's needed to eliminate the barriers to a good experience. But the essence of the game is really up to whatever the developers and designers have created.
Is there a big difference between games user research and user research of other types of software?
HD: I would have said ten years ago that there's a huge difference. The main similarity is that the tools are very task- and utility-based. Elements like status, score, being able to exit easily, and ease of navigation are similar – everything beyond that is very different.
The expectation now is that all types of apps and software are enjoyable to use.
When I think and talk about user research, I always use the terms "useful, usable, and delightful." That delight factor is where the ground intersects a lot with player research, because when something is delightful, it means that you're charmed, immersed, excited, and engaged.
With games, there's a whole different set of elements we look at, that are to do with the game mechanics and how the game plays. With productivity software user research, the goal is to make the software easy to use, and easy to navigate without any hesitations.
Now, with a game, if it's too easy to play, that can be a problem. But we still want the tools to be easy to use. For example, if you're playing a console game, you want the controller layout to be easy to figure out: "Oh, I press X or I press Y now," et cetera.
DC: Literally, the usability of the control input?
HD: Right. The control input, maybe it's an Xbox controller or the HUD (head-up display) telling me what my health is or how many primary and secondary weapons I have.
DC: So, it's the UI as well?
HD: Yes, anything to do with utility. It includes UI and anything to do with the ability to access the tools needed to have the possibility of playing the game.
Once you're playing the game, it's no longer about usability at all; then it's about the playability.
DC: Playability? So far I've been thinking along the lines of usability, as in, "What can I see on the UI? Do I know how to use the controller?" You're saying that once usability is good, the question then becomes about how the in-game mechanics work and how they're playable, enjoyable, and engaging. Is that correct?
HD: Right, exactly. Another way I've been thinking about games lately is that essentially the goal is to get rid of all the barriers.
What we do with PlaytestCloud is try to eliminate all the barriers between the player and the essence of the game.
Being an expert in the field means that I can help to make suggestions and offer insights into why players don't understand the controls or the UI and what's needed to eliminate the barriers to a good experience. But the essence of the game is really up to whatever the developers and designers have created.
Our job is to remove any barriers that would prevent the essence of the game from being revealed to the players.
We can understand what the barriers are, and at times, even identify ways to enhance the game. There are a couple of games we've tested with PlaytestCloud where I think we made a huge difference in actually improving the gameplay itself.
(Note: Check out part two of our interview on August 7th for more about when playtesting has made a big difference in improving core gameplay mechanics.)
In your opinion, when is the best time to begin playtesting a game?
HD: There are various good times, and really, the earliest we can get in is optimal. Even just a vertical slice, because if we know what some of the main mechanics are, and if we can get in and look at those, that helps to eliminate foundational problems that are too hard to fix later on.
DC: Can you explain what a "vertical slice" is?
HD: In the game industry, a vertical slice is a later level that development teams will work on – usually not the first level or an early level – just to test out the basic mechanics and see how those work.
It's not the whole game. Imagine the whole game is horizontal and we just take a vertical slice of it. When we get a vertical slice, it's wonderful for us.
I'm advocating getting the vertical slice early because it gets more expensive and harder to fix as time goes on. Most of the time people come to us when they're pre-alpha or alpha, or beta or pre-beta – a little bit later on in the development cycle.
A lot of the time, especially with the games we work on with PlaytestCloud, patching the game later isn't as difficult because you can update as people are playing. But once the foundations are set, they become almost impossible to fix.
The optimal way is to iterate on the findings from a vertical slice, and then keep doing them, slice-by-slice, until you feel comfortable with what the player experiences and with those main mechanics.
The new player experience is also essential and has to be seamless. Ultimately, we never want a game to feel like you have to play a tutorial to get to play the game. It needs to feel like playing but you also need to be learning, and this is a huge problem that I've grappled with since I started doing games, which was back when games became a more viable consumer product.
I've developed a set of principles called PLAY (Playability Principles) and GAP (Game Accessibility Principles). I developed those as a way to help raise the bar with designers, and also with some of the clients and designers that Christian (PlaytestCloud Co-Founder, Christian Ress) and I work with, to help create more of a seamless new player experience.
There are some principles we know about how human beings learn, and if we include those in the early gameplay, we have a better chance of success and a good early experience.
Typically, people don't have a lot of time to spend figuring out how to play a game, especially on mobile. There's an expectation of immediate entry. People don't have a problem losing ninety-nine cents if they don't enjoy the first few moments of your game. They'll just move on to another game that's easier to get into.
You'd rather be the fly on the wall and be able to do something about it, than find out when it's really too late.
DC: Yeah, right. As a gamer, I feel like my threshold is pretty low. Like, if it's not fun pretty quick, I'm not going to stick around.
HD: Exactly, yeah. Absolutely.
DC: To summarize, it seems to me like the optimal time to test is early, before the foundations of the game are set. It also seems that it would be ideal if there is a "vertical slice" to playtest, and finally, that games should be developed with the GAP principles in mind from the beginning. Is that the perfect scenario?
HD: The reality of the situation is that many people and companies aren't ready to come in and test things that early. We'd love to change that, and sometimes some studios do abide by that, and I think we're going to get that support early on.
So, it's optimal, and it's great to do it early, and there's no problem with starting once you have something that's pretty playable.
DC: So, as soon as it's playable, it's time?
HD: Right, and definitely by the time it's alpha, pre-beta – it's great to get in there as early as possible. I think that's the message.
DC: Right, because you're going to get your user test eventually when the game is released.
HD: Right, you'd rather be the fly on the wall and be able to do something about it, than find out when it's really too late.
Ultimately, games will do better if studios pause, do the user research, ingest it, and then integrate the findings into their game's design.
Okay, so I think this has been answered already, but... Is there a point when it's too late?
HD: I think for mobile it's almost never too late. Just because of the ability to patch in updates. There is a diminishing return though.
It truly does get harder to make fundamental changes over time, because so much gets ingrained or embedded throughout the game code and the foundation of the game.
The point of diminishing returns is when the game has been established, and you have players who are used to the way the game's being played, and you already have a good user base. That situation makes it tough to make changes.
But, if you find that you're not reaching your target numbers, and you're not getting as many players as you expected, or you're getting players who churn, which is super common – we don't want to say to those people, "Well, you're too late, you should have done it earlier," because then people don't want to do it at all.
We can always help to identify potential changes. If you're a designer or a team, and you have churn – you're not getting the numbers that you expect – then we can help you to identify where the problems are.
We are excellent at figuring out where the problems are. We've been successful in identifying some significant issues that weren't clear from analytics alone. With PlaytestCloud and the analysis we've done with Christian, we can identify exactly what the problems are – it's been great.
But it's even better if you can do it earlier.
How should game developers integrate playtesting into the game development process?
HD: Optimally it's built into the design process.
I've been embedded inside design teams before, and I've consulted, also through PlaytestCloud, and I think it's still not common for people to plan for that much research. Usually, they're on really tight timelines, and the last thing people want to hear is, "Oh, we're going to get another week's delay."
Ultimately, games will do better if studios pause, do the user research, ingest it, and then integrate the findings into their game's design.
The reality is that most teams are still on their rapid train: they get the results and they're like, "Okay, maybe we can fit this result in… and okay, here's the most catastrophic issue, so let's fix this." A lot of the time they end up fixing some of the problems but not all of them.
What I want to make clear here is that it's not my job to ever step on designers' toes and say, "The way you're doing it is all wrong." But we grapple with this all the time. We're on a schedule that's set by people above – the money people, the VPs, et cetera – who say, "We need to have this done in this period of time." Unless you have an executive who understands user research and makes sure there's time for it, we're really trying to fit in as much as we can, as rapidly as we can, so that the developers and designers can make as many changes as possible.

So what we do then is help identify the priorities: what are the most important things to fix, and what are the things that are hard to fix and also a low priority? With limited time, we need to focus on identifying the showstoppers.
There's never a bad time to begin user research – knowledge is always power – and when you know what the problems are and why they're problems, you can figure out how to address them.
And when you find those issues earlier, they're easier to fix. They just are.
DC: In a way, this is just common sense, right?
HD: Yes, but I also never want to dissuade anyone. Even if it's later on, there are still opportunities to fix things.
The underlying message is that it's never too late, and the knowledge you gain from user research gives you the power to know what to do.
DC: Would you say it's best to build user research into your development roadmap and have checkpoints where you stop to make sure things are on the right track?
HD: Yes. Definitely. Planning on having checkpoints is perfect for doing user research, then it doesn't stop the design goal or the rapid design process. It's so great to have that knowledge gained from user research – whether people work in sprints, or however they work, the results can then be integrated into that portion of the design.
That's the optimal way, integrated into the design of the development plan.