How to train your TRex — using Neural Network & OpenCV (Supervised method)

Abhishek Singh
4 min read · Dec 4, 2019

In my last post, I showed how we can use a simple CNN model along with the OpenCV library to do gesture recognition. At the end, I showcased its responsiveness by playing Chrome’s in-browser TRex/Dino game: Let’s play Chrome’s Dino game with gestures only!

What are we trying to achieve this time?

In this post, let me show you how to use a supervised machine learning technique to teach our beloved TRex to play, and then let it play the game by itself!

Keep in mind, I am not going after perfect logic that would enable our TRex to play forever (that would be cheating, right?).

What is Supervised Learning?

“Supervised learning is the machine learning task of inferring a function from labeled training data. The training data consist of a set of training examples. In supervised learning, each example is a pair consisting of an input object (typically a vector) and a desired output value (also called the supervisory signal). A supervised learning algorithm analyzes the training data and produces an inferred function, which can be used for mapping new examples. An optimal scenario will allow for the algorithm to correctly determine the class labels for unseen instances. This requires the learning algorithm to generalize from the training data to unseen situations in a ‘reasonable’ way.” — Wikipedia

The simplest analogy I can think of is the way we are taught at school, where the teacher teaches by first posing a problem and then providing its solution, e.g. 2 + 3 = 5.

How are we going to teach?

If you have played this game, then you already know that the TRex character can be controlled with the following actions:

- Jump (Up Arrow key or Space key)

- Crouch (Down Arrow key) or

- Just Run (default action)

So all we need to teach our TRex is when to jump and when not to jump (for simplicity we are ignoring the Crouch action; crouching is for noobs…).

For this, our TRex needs the following two capabilities:

- Eyes: the ability to see what we are teaching it. We could use the OpenCV library to capture the screen contents through an external camera, or directly read back the screen contents, more like taking continuous screenshots. I have used the second option.

- Brain: the ability to decide when to jump and when not to. We will use a Convolutional Neural Network for this. The images we capture above will be fed into the network, first to train it and later for predictions. Since we only have to make two decisions, Jump or No Jump, we will train this model on two sets of training images: one set of samples tells it when to Jump, the other when not to Jump (a minimal model sketch follows this list).
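To give a concrete picture of this “brain”, here is a minimal sketch of such a two-class CNN in Keras. The architecture, the input size (I am assuming a small 40x200 grayscale crop here), and the layer sizes are my illustrative assumptions, not necessarily the exact model used in the project:

# Minimal CNN sketch for the Jump / No Jump decision (illustrative, not the project's exact model)
from tensorflow.keras import layers, models

IMG_H, IMG_W = 40, 200          # assumed size of the grayscale crop around TRex

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(IMG_H, IMG_W, 1)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(2, activation='softmax'),   # two classes: Jump, No Jump
])

model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

Training it then simply means feeding it the two labeled sets of images described above.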

Note: We don’t need to capture the whole screen; just the game area of the browser where TRex is playing is sufficient for our needs.

Now let’s generate our training image sample data. For this, I would suggest you play the game yourself if you have not played it before. Just open Chrome, disconnect the Internet, and try to open any website. You should see the “There is no internet connection” message. Press the Up Arrow key to start the game.

Carefully monitor when you press the Up Arrow key to jump. You should easily notice that as soon as any obstacle comes within a certain distance of TRex, you make it jump over the obstacle. You don’t want to jump too early or too late. This also means we only care about the small region that extends from TRex to this “certain distance” on its right, so we will capture that region instead of the complete game area; less image processing means better efficiency.
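To make these “eyes” concrete, here is a minimal capture sketch. It assumes PIL’s ImageGrab plus OpenCV, and the bounding-box coordinates are placeholders you would adjust to wherever the strip to the right of TRex sits on your own screen; the actual project may grab the screen differently:

# Grab just the small strip to the right of TRex and preprocess it for the network.
# The bounding box below is a placeholder -- adjust it to where the game sits on your screen.
import cv2
import numpy as np
from PIL import ImageGrab

REGION = (80, 250, 480, 370)        # (left, top, right, bottom) in screen pixels

def grab_region():
    img = ImageGrab.grab(bbox=REGION)                     # screenshot of the game strip only
    gray = cv2.cvtColor(np.array(img), cv2.COLOR_RGB2GRAY)
    return cv2.resize(gray, (200, 40))                    # width x height expected by the CNN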

In the code, I have added the logic below to save the training image samples:

if I press the Up Arrow key:
    save the screen contents as a Jump sample image
else:
    save the screen contents as a No Jump sample image
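As a rough Python sketch of that collection loop (assuming the grab_region() helper from the capture sketch above, the pynput library for reading the Up Arrow key state, and hypothetical data/jump and data/nojump folders):

# Collect labeled training samples while a human plays the game (sketch only).
import os
import time

import cv2
from pynput import keyboard

up_pressed = False

def on_press(key):
    global up_pressed
    if key == keyboard.Key.up:
        up_pressed = True

def on_release(key):
    global up_pressed
    if key == keyboard.Key.up:
        up_pressed = False

# hypothetical output folders for the two sample sets
os.makedirs('data/jump', exist_ok=True)
os.makedirs('data/nojump', exist_ok=True)

keyboard.Listener(on_press=on_press, on_release=on_release).start()

count = 0
while count < 1000:                          # stop after ~1000 samples
    frame = grab_region()                    # current crop around TRex
    label = 'jump' if up_pressed else 'nojump'
    cv2.imwrite('data/%s/%05d.png' % (label, count), frame)
    count += 1
    time.sleep(0.05)                         # roughly 20 samples per second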

And here is how my training image samples look:

Jump sample set:

No Jump sample set:

Once I had trained the model on the above image samples, I used it to make predictions on a live game. The prediction outputs (Jump or No Jump) control TRex by being passed to the Chrome browser as key events: an Up Arrow key press for Jump, and no input for No Jump. And this is how it turned out.

YouTube link — https://youtu.be/ZZgvklkQrss
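In code, that live prediction loop is roughly the following sketch. It reuses the grab_region() helper and the model from the earlier sketches, assumes pyautogui for sending the Up Arrow key press, and the class index used for Jump is an assumption:

# Live play loop (sketch): predict on each captured frame and press Up when the model says Jump.
import numpy as np
import pyautogui

while True:
    frame = grab_region().astype('float32') / 255.0   # normalize like the training images
    frame = frame.reshape(1, 40, 200, 1)              # batch of one grayscale crop
    probs = model.predict(frame, verbose=0)[0]
    if np.argmax(probs) == 0:                         # assume class 0 is Jump
        pyautogui.press('up')                         # Jump
    # otherwise send nothing -- TRex just keeps running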

This is still not perfect. For instance, as the game progresses TRex’s speed increases, which also changes the moment at which you need to make it jump; the current image samples don’t handle this scenario. But like I said before, I am not going after a perfect model, maybe later. If you are interested, you could perfect it yourself; it would be a good exercise.

Here is the source code for this project:

https://github.com/asingh33/SupervisedChromeTrex

Please provide your feedback on this series: what you liked, what you didn’t, and any improvements or suggestions will be highly appreciated. For the next post, I am planning to implement the same project using Reinforcement Learning, where TRex learns on its own without labeled training examples.
