PlaytestCloud runs more playtests and games research studies than any other platform, and nearly 80% of the highest grossing game companies in the world run their remote playtests with us.
As the scale of playtests on our platform surpassed 5,500 studies annually, we gave in to curiosity and set out to uncover trends in player insights, understand how the last few years have changed games research, and break down exactly what it is that successful studios do differently.
Once we compared the habits of our (more than 8,000!) Player Insights Platform™ users to wider industry statistics, we discovered that successful studios approach playtesting and games user research differently. We loosely defined “successful studios” as those who were publicly listed as reporting over $50M in revenue, and “other studios” as, well, everyone else.
Let’s pause here to state the obvious: studios who have more resources to invest in research invariably run more research. But publishing a report that says “spend more money on research” wouldn’t help anyone (and, luckily, that’s not the TLDR of this document).
The scale of your research is less important than the type of research you’re running. That means that game studios of any size and research budget can apply these methods to their practices and increase the return on investment from their research efforts.
Most studios know continuous playtesting is important, and not just a one-and-done chore to wrap up the development process (it pains us to even write that sentence). Studios of all stages and sizes are testing more frequently and at different phases of development.
But our data shows that successful studios continuously test the same games more than any other studios. They also iterate more rapidly, with less time between each test, and the pace only picks up as iterations accumulate.
Just how much more are top studios playtesting the same games? Our research showed that dozens of games were playtested more than 75 times each! Compared with less profitable studios, successful game studios were 1.5 times more likely to test the same game at least 4 times, and twice as likely to test the same game 10 or more times.
Among all video game studios, we saw a 62% increase in iterative playtesting between 2019 and 2024. In 2019, 831 individual games were tested at least twice on PlaytestCloud. In 2024, that number jumped to 1,345 individual games.
This trend is even more evident among games tested more than 10 times each: in 2019, 99 games crossed that threshold, whereas in 2024, 209 games did.
Successful studios test more often, and this trend solidifies with the number of iterations: the more they test the same game, the shorter the time between test iterations. The average gap between the first two playtests of the same game is approximately 25 days, whereas the average gap between the ninth and tenth playtests is 15 days.
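For readers who want to sanity-check the figures above, a quick back-of-the-envelope calculation (a sketch using only the numbers quoted in this report) reproduces the growth rates:

```python
# Games tested at least twice on PlaytestCloud, per the report's figures.
games_tested_twice = {2019: 831, 2024: 1345}
# Games tested more than ten times, per the report's figures.
games_tested_ten_plus = {2019: 99, 2024: 209}

# Relative growth in iterative testing: (1345 - 831) / 831
growth_twice = (games_tested_twice[2024] - games_tested_twice[2019]) / games_tested_twice[2019]

# Games tested 10+ times roughly doubled: 209 / 99
growth_ten_plus = games_tested_ten_plus[2024] / games_tested_ten_plus[2019]

print(f"{growth_twice:.0%}")      # ≈ 62%
print(f"{growth_ten_plus:.1f}x")  # ≈ 2.1x
```

The 62% figure in the text is simply this relative increase rounded to the nearest whole percent; the 10+-times cohort slightly more than doubled over the same period.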
While remote playtesting has grown steadily across the industry since 2020, adoption by successful studios has skyrocketed. Not only are they spending nine times as much on playtesting, research and player insights as other studios, but the sharpest increase has come during one of the most economically challenging periods the industry has faced.
With other budget areas being cut, increased investment here shows that player centricity and a polished user experience are critical to a game’s success. Prioritizing early and frequent validation of both the FTUE (first-time user experience) and long-term playability is the safest and most efficient path to building strong retention and, ultimately, higher revenue.
In 2020, successful studios were spending over three times more on playtesting than other studios. By 2024, that gap had grown to nine times. In 2024, successful studios doubled their research spending while average studios held steady, a clear sign that they see research and playtesting as the best strategy to de-risk game development.
Make smarter and faster decisions backed by player insights. The earlier you get concepts in front of your actual target players (not just friends or loyal community members), the easier it is to course-correct before development costs have been wasted.
Continuously validating small changes means faster, more iterative development cycles, and killing features (or even entire games) sooner means budgets stretch further while avoiding high-profile flops.
While most playtests for most studios are 15-60 minute unmoderated sessions (what we call a Single Session playtest), successful studios use a wider range of methods for playtests and studies, including longer playtests to understand engagement and retention.
These studios also conduct far more concept and prototype playtests, validating decisions and ideas much earlier with player insights before too much development budget has been spent.
Make sure your method is the right choice for your research question. Use moderated playtests to observe your player while they play, interject and ask questions, or interview them. Use longer playtests to understand retention issues, and use surveys to help refine your research questions for your next playtest.
While most playtests on our platform use basic audience targeting (players’ age range, gender, location, gamer type, gaming history, and device type), successful studios are far more likely to use custom targeting or their own screener to refine their audience even further.
Successful studios also use different audience groups to validate the same concepts with A/B tests, and are more likely to use moderated playtests and surveys to determine which audience is the ideal profile for the game.
Standard targeting (basic demographics, location and genre experience) is sufficient for most playtests. More advanced targeting will pay off when playtesting for retention, engagement, competitor analysis and skill level. And don’t forget about testing different versions of your own audience against each other—like when comparing a newcomer’s experience to a hardcore player’s.
Most studios run playtests to uncover gameplay issues and overall first impressions that could keep players from coming back. Successful studios continue to test returning player experiences beyond the first hour to understand retention, player motivations and long-term playability.
Longer playtests allow you to observe how the experience evolves over time, witness players establishing habits and routines, and see whether monetization levers are effective. It’s the best way to accurately simulate players becoming fans (or deciding to churn).
The insights from these longer studies support decisions to postpone launch and fix known problems (especially when run in parallel to soft launch for contextualizing in-game metrics), but can also help uncover new target audiences.
The cost of User Acquisition has increased significantly, so playtesting can be much more cost-effective than acquiring an unvalidated audience and relying on game metrics for the full picture. You’ll also have a much better understanding of player behavior and experience, while the game is still early enough in the production life cycle to make changes.
Longitudinal studies help accurately predict and mitigate player churn by answering these questions:
Does the game remain easy to understand? Are any later features confusing, or is there a difficulty spike?
How do pacing, difficulty, and economy balancing evolve?
Do social features work well? Are they engaging? Are the features well integrated with the core gameplay loop?
How does monetization factor in? Are the options appealing and understandable?
Now, this one is harder to copy: in playtests run by successful studios on our platform, players keep playing beyond the required time more often than in other studios’ tests.
Our player panel is compensated (to date, at the highest rate of all playtest platforms and in actual currency, not gift cards or points!) for their participation in playtests, but they are aware that they do not receive additional rewards for playing additional time. They are also compensated the same whether or not they enjoy the game, and our platform passes their unfiltered gameplay and responses directly to you.
We’ve seen playtest recordings where players don’t want to put the game down so they can “see how the story ends” or “just to beat that extra level!” They’re not paid for their overtime, and they’re using their own free time: a clear indication that the game is sticky.
To get your game into the coveted “golden overtime,” you need to know the point where the game becomes hard to put down. Identify which earlier FTUE issues might come between the player and a “play this forever” mentality by varying your testing methodologies and observing player behavior. Run competitor tests to see where players find the most enjoyment and pinpoint moments of stickiness in similar titles, and try longer playtests to understand how player behavior changes as the game goes on.
If there were only one lesson to take away from this entire report, it would be this: successful studios playtest their competitors’ games as much as—or even more than—their own games. In the last two years, the frequency of competitor testing among successful studios has increased even more.
Testing the competition provides a clear map of good practices you can copy, bad practices you should avoid, and the holy grail: common bad practices you can use to your advantage by cracking the code and making it actually work!
1. Monopoly Go!
2. Township
3. Family Island
4. Travel Town - Merge Adventure
5. Call of Duty: Warzone Mobile
Targeting your competitors’ players and testing their games (on their own, or against yours) is the most valuable batch of player insights a studio can get. It’s the best kept secret among successful studios (well, it used to be, before we published this report). Keep your friends close, and keep your enemies even closer.