Background. Game development is a constantly evolving multi-billion-dollar industry, and the demand for quality products is high. Testing games, however, is a time-consuming and tedious task, often coming down to repeating sequences until a requirement has been met. But what if some parts of it could be automated, handled by an artificial intelligence that can play the game day and night, producing statistics about the gameplay as well as reports about errors that occurred during the session?

Objectives. This thesis is done in cooperation with Fall Damage Studio AB and aims to find and implement a suitable artificial intelligence agent to perform automated tests on a game Fall Damage Studio AB is currently developing, Project Freedom. The objective is to identify potential problems, benefits, and use cases of such a technique. A secondary objective is to identify what a game needs to provide for this kind of technique to be useful.

Methods. To test the technique, a Monte-Carlo Tree Search algorithm was identified as the most suitable algorithm and implemented for use in two different types of experiments. The first evaluated how varying limits on the number of iterations and the search depth affected the results of the algorithm. This was done to determine whether these factors could be tuned to a point where an acceptable level of play is achieved and further increases yield limited improvement while increasing computation time. The second experiment evaluated what useful data can be extracted from a game, both gameplay-related data and error information from crashes. Project Freedom was only used for the second experiment, due to constraints that were out of scope for this thesis to repair.

Results. The thesis has identified several requirements a game must meet to use a technique such as this in a useful way. For Monte-Carlo Tree Search specifically, the game must have a game state that is quick to copy and a game simulation that can be run in a short time. The game must also be tested to find the iteration and depth settings beyond which the value of increasing them diminishes. More generally, the choice of algorithm must be part of the design process, and different games might require different kinds of algorithms. Adding this type of algorithm at a late stage in development, as was done for this thesis, may be possible if precautions are taken.

Conclusions. This thesis shows that using artificial intelligence agents for gameplay testing is possible, but it needs to be considered early in the development process, as no one-size-fits-all approach is likely to exist. Different games will have their own requirements, some more general for that type of game, and some unique to the specific game. Thus different algorithms will work better on certain types of games than on others, and they will need to be tuned to perform optimally on a specific game.
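
To make the requirements in the Results concrete, the following is a minimal sketch of a budgeted Monte-Carlo Tree Search in Python. The game-state interface (clone, legal_moves, apply, is_terminal, score) is hypothetical and not taken from Project Freedom; the sketch only illustrates why a cheap state copy and a fast simulation are required, and how the iteration and depth budgets cap the cost of the search.

```python
# Minimal sketch of a budgeted Monte-Carlo Tree Search (UCT variant).
# The GameState interface (clone, legal_moves, apply, is_terminal, score)
# is hypothetical; a real game would supply its own implementation.
import math
import random

class Node:
    def __init__(self, state, parent=None, move=None):
        self.state = state                  # game state this node represents
        self.parent = parent
        self.move = move                    # move that led here from the parent
        self.children = []
        self.untried = list(state.legal_moves())
        self.visits = 0
        self.value = 0.0

    def ucb1(self, c=1.4):
        # Standard UCT score balancing exploitation and exploration.
        return (self.value / self.visits
                + c * math.sqrt(math.log(self.parent.visits) / self.visits))

def mcts(root_state, iterations=1000, max_depth=50):
    root = Node(root_state.clone())
    for _ in range(iterations):             # iteration budget
        node = root
        # Selection: descend while the node is fully expanded.
        while not node.untried and node.children:
            node = max(node.children, key=Node.ucb1)
        # Expansion: try one untried move; a cheap clone is essential here,
        # since every iteration copies the game state at least twice.
        if node.untried:
            move = node.untried.pop(random.randrange(len(node.untried)))
            state = node.state.clone()
            state.apply(move)
            node.children.append(Node(state, parent=node, move=move))
            node = node.children[-1]
        # Simulation: random playout, capped by the depth budget, so the
        # game simulation must be fast enough to run thousands of times.
        state = node.state.clone()
        for _ in range(max_depth):
            if state.is_terminal():
                break
            state.apply(random.choice(state.legal_moves()))
        reward = state.score()
        # Backpropagation: update statistics along the selected path.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    # Return the most-visited move from the root.
    return max(root.children, key=lambda n: n.visits).move
```

Under these assumptions, the two budgets (iterations and max_depth) are exactly the factors varied in the first experiment: raising either improves play quality up to a point, after which the gain diminishes while the search time keeps growing.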