Cloud Gaming

I’ve been talking to a few people lately about the future of gaming and the cloud.  There are a few potential models for cloud gaming, so let’s lay them out:

  1. Remote Gaming – This is what services like OnLive and Gaikai offer.  The game runs on remote hardware; user input is collected and sent, the game processes it and generates graphics, and the result is compressed and sent back as a stream of video.
  2. Local Client, Remote Server – This is the model every MMO on the planet uses, and it also covers the majority of Facebook games.  The game graphics and input systems run in a local application or in a browser, while the game logic runs on remote hardware.  Game events pass back and forth between the server, which runs the game logic, and the client, which handles the user interface and rendering.
  3. Local Client, Remote Distribution and Storage – This model is closest to the traditional gaming model, with the addition of online storage and distribution.  The game input, graphics and logic all run locally.  The online component is a remote server that supplies the assets, which are then cached locally, and provides a location for save games.
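
To make model 2 concrete, here is a minimal sketch of the client/server split it describes: the client sends input events, the server runs the authoritative game logic, and the client only renders the updates it gets back.  The function names and message shapes here are illustrative assumptions, not a real protocol.

```python
# Sketch of model 2: local client, remote server.
# The server owns the game logic; the client owns input and rendering.

def server_step(state, event):
    """Authoritative game logic: apply one client input event to the state."""
    if event["type"] == "move":
        state["x"] += event["dx"]
        state["y"] += event["dy"]
    # Reply with a game-state event for the client to display.
    return {"type": "state", "x": state["x"], "y": state["y"]}

def client_render(update):
    """The client only handles the UI: turn a state update into output."""
    return f"player at ({update['x']}, {update['y']})"

# One round trip of the event loop, with the network elided.
state = {"x": 0, "y": 0}
update = server_step(state, {"type": "move", "dx": 3, "dy": -1})
print(client_render(update))  # player at (3, -1)
```

In a real MMO the network sits between the two functions, which is why these games go to such lengths to hide latency.
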
Let’s dig into each model and hit the high points.

Remote Gaming:

Remote gaming sounds great as a concept:

  • You don’t need a high-end computer or console to play a game.
  • You can start gaming “instantly”.
  • You can game from home, work or on the road.

But the reality is a little less spectacular once you dig into what is actually going on.  At some level, there is still a computer somewhere running the game you are about to play.  The “instant” part only saves you clicking an icon and sitting through an initial loading screen; the game still needs to load levels, load all the character models, start up, and so on.
Latency:  To me, latency is the elephant in the room for cloud gaming.  As a game developer I obsess over the user experience: the feel of the controls, the responsiveness of the character, the overall “feel” of the game.  Let’s look at what happens in a game running on your own computer.

If you look at the image, the black lines are what happens when a game loop runs normally (on a local machine).

The green lines indicate areas where extra work is performed.

The red lines indicate data transmission over the internet.


This doesn’t look too bad until you take into account the latency of each step.  The black steps are basically instant.  The stages take varying amounts of time, but for a game running at 60 frames per second, all of these stages happen 60 times per second, so together they need to fit into 1000 msec / 60 = 16.7 msec.
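
The budget arithmetic above can be sketched in a few lines.  The per-stage costs here are hypothetical numbers for illustration, not measurements; the point is only that the whole loop must fit inside one frame.

```python
# Per-frame time budget for a 60 fps game loop, with hypothetical
# stage costs for a locally-run frame (the "black lines").

TARGET_FPS = 60
FRAME_BUDGET_MS = 1000 / TARGET_FPS  # 1000 msec / 60 = ~16.7 msec

stages_ms = {
    "collect input": 0.1,
    "update game logic": 4.0,
    "render graphics": 10.0,
    "present frame": 1.0,
}

total = sum(stages_ms.values())
print(f"Frame budget: {FRAME_BUDGET_MS:.1f} ms, stages use {total:.1f} ms")
# Every stage runs 60 times per second, so the sum must fit the budget.
assert total <= FRAME_BUDGET_MS
```
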

The green lines require extra work, but video compression and decompression is a well-solved problem.

The red lines are where the latency kicks in.  We have latency both in collecting the input and in sending the video back.  Back in the day, I played a lot of Quake.  Anything above 120 msec of ping put you at a severe disadvantage.  Quake also did a lot of smart things to hide and compensate for the latency, as most multi-player games do.  Single-player games, however, do not.  What this means is that every time you move the mouse, the input must be collected, sent to the server and handled, and then the game calculates the result and sends back the video.  If you have any form of lag, it shows up in everything: mouse movements, clicking, typing, etc.  This is the Achilles’ heel of remote gaming: you are operating a computer miles away by remote control.  That remote control has lag between your input and the visible result, which causes the experience to degrade in quality.
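
The round trip described above can be totaled up step by step.  Every number below is an illustrative assumption (plausible one-way network and processing times), not a measurement of any real service; the point is that the red steps stack on top of the normal frame cost for every single input.

```python
# Hypothetical input-to-display latency for one remote-gaming frame:
# input travels up, the server processes and renders, video travels back.

pipeline_ms = {
    "send input upstream": 30,      # one-way network latency (assumed)
    "process input + run game": 16, # roughly one 60 fps frame
    "compress video": 10,           # assumed encode cost
    "send video downstream": 30,    # one-way network latency (assumed)
    "decompress + display": 5,      # assumed decode cost
}

input_to_display = sum(pipeline_ms.values())
print(f"Input-to-display latency: {input_to_display} ms")
# A local game responds within roughly one or two frames (~17-33 ms);
# here every mouse move, click and keystroke pays the full round trip.
```
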



As a game developer, my first concern is for the consumer of my game.  I want the experience to be the best possible, and I want to ensure they get the benefit of the hard work my team has put in.  Losing control of the “feel” of the game is, to me, a sub-optimal solution.

I’ve already spent too much time on this, so I’ll come back later and hit the other models.