Frank Puranik, Product Director for iTrinegy, identifies network conditions as the big “unknown quantity” for online gaming performance, and offers a solution.

I’ve spent decades looking at networks and their impact on application delivery across different industries, and none more so than network/online gaming. The global casino/gaming industry was reported to be worth over $125 billion last year, and with online gaming taking an increasingly significant piece of the pie, it’s essential that users are happy if operators are to continue growing revenue.

Networks are an essential part of the online gaming world, and with all the different hosting offerings available (e.g. cloud and virtualised data centre choices) it can be tricky to deliver a good online game experience to users. Doing so requires three things:

  • Software that’s nice to use, efficient and looks good
  • Servers that have enough capacity for peak demand, without going slow
  • Responsive network delivery to the users’ preferred device

Technically, we’d put all of these factors together and call it the user’s ‘Quality of Experience’ (QoE), with the servers and networks providing the essential delivery of the game, which we’d measure as Quality of Service (QoS). The server infrastructure can be strictly controlled: CPU provisioning, virtualisation level, users per virtual server and so on. The network, on the other hand, poses a real challenge, because we cannot control it: in online gaming we simply have to take what’s out there (a mix of home networks, corporate networks, mobile networks and so on), and from personal experience you know how variable and poor these networks can be at times.

For sound commercial reasons, the gaming industry hasn’t helped itself technically, setting up data centres in offshore locations for tax purposes which have often had poor network access. This is changing due to new (UK) legislation that taxes bets where the consumer is located, and no longer where the servers or the gaming corporations are based. That opens up the prospect of moving data centres to ‘better’ locations, but the definition of ‘better’ may itself be cost driven, freeing gaming companies to look at places such as Iceland, with its abundant geothermal energy and therefore low energy prices, once again without being primarily focused on network access.

The reason for this is the constant reporting on how bandwidth is cheap and getting cheaper, which leads the non-savvy to assume that the network can fundamentally be ignored because, like the server infrastructure, if you run out you just add more. Unlike the server infrastructure, however, this is absolutely not true: bandwidth is not abundant to all locations, it is not inexpensive to all locations, and it may come with large amounts of latency, packet loss and other network-related issues, which to the player appear as the dreaded LAG!

LAG is not to be trifled with. Apart from an uninspiring, poorly crafted game, LAG is the single most prevalent reason that a user will leave a game (either temporarily or permanently). Even if you have purchased and control the best MPLS/private network into your data centre, there are still issues surrounding the final delivery to the end user: the last mile (ADSL, the corporate network and its QoS policies, mobile cellular networks, poor Wi-Fi, etc.) and, in addition, the actual distance to the data centre (e.g. Iceland). All of these create delay and network latency, which translates directly into LAG.
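To put the distance element into numbers, here is a rough, back-of-envelope estimate (my own illustrative figures, not measurements from any particular deployment) of the physical latency floor between a UK player and an Icelandic data centre:

```python
# Rough, illustrative estimate of round-trip propagation delay to a remote
# data centre. Both figures below are approximate assumptions, not measurements.

SPEED_IN_FIBRE_KM_S = 200_000   # light in optical fibre travels at roughly 2/3 of c
LONDON_REYKJAVIK_KM = 1_900     # approximate great-circle distance

def min_rtt_ms(distance_km: float) -> float:
    """Best-case round-trip time from propagation delay alone, in milliseconds."""
    return 2 * distance_km / SPEED_IN_FIBRE_KM_S * 1000

if __name__ == "__main__":
    print(f"Best-case RTT London <-> Reykjavik: ~{min_rtt_ms(LONDON_REYKJAVIK_KM):.0f} ms")
    # Real paths add router hops, queuing and the last mile, so observed
    # latency is typically several times this physical floor.
```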

So what can we do?

Better Game Design, Development and Testing
Firstly, when building the game, the design must tolerate these unpredictable real-world networks and cope with the vagaries of the last mile. Ideally, this means having those networks available to us throughout the development and testing of the game.

The problem here is that developers often have access only to fixed and typically excellent networks (great WANs or LANs), because the development environment is highly controlled, so the game is never written to tolerate poor networks. The test environment is usually quite similar, perhaps with a few different network scenarios available.

Network Emulation has the answer here, providing the ability to create different networks on demand for the various last-mile network types (such as mobile, Wi-Fi, or ADSL with high latency and packet loss characteristics) as well as being able to simulate the MPLS/WAN into the data centre(s).
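As a concrete illustration of the principle (not of iTrinegy’s own tooling), the sketch below drives the Linux netem queuing discipline from Python to impose last-mile-like delay, jitter and loss on a test machine’s interface. The interface name and the profile figures are illustrative assumptions:

```python
# A minimal sketch of last-mile emulation using the Linux "tc netem" queuing
# discipline (a generic open-source facility, not a commercial emulator).
# Profile values are illustrative assumptions, not measured figures.
import subprocess

PROFILES = {
    # name:        (delay,   jitter,  loss)
    "3g_mobile":   ("120ms", "40ms",  "1.5%"),
    "busy_wifi":   ("30ms",  "20ms",  "2%"),
    "rural_adsl":  ("60ms",  "10ms",  "0.5%"),
}

def apply_profile(interface: str, name: str) -> None:
    """Impose the named network profile on an interface (requires root)."""
    delay, jitter, loss = PROFILES[name]
    subprocess.run(
        ["tc", "qdisc", "replace", "dev", interface, "root", "netem",
         "delay", delay, jitter, "loss", loss],
        check=True,
    )

def clear_profile(interface: str) -> None:
    """Remove the emulated impairment and restore the default qdisc."""
    subprocess.run(["tc", "qdisc", "del", "dev", interface, "root"], check=True)

if __name__ == "__main__":
    apply_profile("eth0", "3g_mobile")   # run the game client/tests under mobile-like conditions
    # ... play-test or run automated tests here ...
    clear_profile("eth0")
```

A dedicated emulation appliance goes much further than this (routed topologies, bandwidth shaping, per-flow profiles), but even a simple impairment like this exposes design assumptions that never surface on a pristine LAN.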

Our company, iTrinegy, has a proven role to play here. One recent example involved emulating real-world network conditions for a graphics-intensive multi-player online game: whilst all the testers were physically sat next to each other, in the virtual world we created network conditions that put them all over the globe, connected through all sorts of networks and networking technologies.

Monitoring and Measuring the Network Experience
Secondly, we must, where possible, measure and control those networks, ensuring that they do not become overloaded, as running out of bandwidth invariably creates additional latency and LAG. Major events need special consideration, as we need to cope with peak load, not just normal load.

To do this you need to measure the game’s performance across the network on a constant, ongoing basis, looking for increasing delays, loss of game data (which causes communications to be repeated) and bandwidth exhaustion (which leads to queuing). All of these ultimately translate into game LAG.
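As a minimal sketch of what such ongoing measurement might look like (an illustrative stand-in, not an Application Aware Network Profiling product), the script below probes a hypothetical game server endpoint, records connect latency and probe failures, and flags windows where 95th-percentile latency crosses an arbitrary threshold:

```python
# Minimal, illustrative network measurement loop. The endpoint, port, window
# size and threshold are assumptions; a real deployment would feed results
# into a profiling/monitoring tool rather than print to stdout.
import socket
import statistics
import time

GAME_SERVER = ("game.example.com", 443)   # hypothetical endpoint
SAMPLES_PER_WINDOW = 20
LAG_WARNING_MS = 150                      # illustrative threshold, tune per game

def connect_latency_ms(host: str, port: int, timeout: float = 2.0) -> float | None:
    """Return TCP connect time in milliseconds, or None if the probe failed."""
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.perf_counter() - start) * 1000
    except OSError:
        return None

while True:
    samples = [connect_latency_ms(*GAME_SERVER) for _ in range(SAMPLES_PER_WINDOW)]
    ok = [s for s in samples if s is not None]
    loss = 1 - len(ok) / len(samples)
    if len(ok) >= 2:
        p95 = statistics.quantiles(ok, n=20)[-1]          # 95th-percentile latency
        flag = "  <-- potential LAG" if p95 > LAG_WARNING_MS else ""
        print(f"p95={p95:.0f}ms  probe loss={loss:.0%}{flag}")
    else:
        print(f"probes failing (loss={loss:.0%})")
    time.sleep(10)                                        # one window every 10 seconds
```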

We also need to look at all of these for future capacity and ongoing game performance over the networks: good old-fashioned capacity planning, but this time with a strong network bias. That means watching ‘headroom’ for the big event as well as trends in increased network usage (likely caused by more players). The data for this is made available by an Application Aware Network Profiling tool.

Data Centre and Cloud Moves (Transformation)
Lastly, if moving a data centre is on the radar, then we really need to re-test (and re-certify) the game to ensure that it will play as well as, or better than, before from the new data centre, over the new networks to that location. The game is going to have to cope with a network that may have more latency, either because of the new location’s physical distance from users or simply because the networks to that location are not quite as good. This risk can be removed by emulating the new network in advance and testing game play over it, to ensure that the game still performs at least as well as it did on the original infrastructure. (It should be noted that this is the standard unwritten data centre SLA: the application should perform at least as well after any transformation as it did before. The big joke is that most don’t know how it performed before the transformation, hence measure first.) Either way, successfully measuring and emulating will de-risk any type of data centre or cloud move.

In summary, this implies the following process:

1) Measurement of the game’s performance today (through Network Profiling)
2) Accurate prediction of performance in the new environment (via Network Emulation)
3) Network Profiling in the new network to verify that all is as expected (a simple verification sketch follows below)
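The check in step 3 can be as simple as comparing latency distributions measured before the move (the baseline) with those measured over the emulated or new network. The sketch below uses made-up numbers and an assumed 10% tolerance, purely to illustrate the idea, not to prescribe a methodology:

```python
# Illustrative verification: does the new (or emulated) network perform at
# least as well as the baseline? Sample data and tolerance are assumptions.
import statistics

def p95(samples_ms: list[float]) -> float:
    """95th-percentile latency of a list of samples (milliseconds)."""
    return statistics.quantiles(samples_ms, n=20)[-1]

def no_regression(baseline_ms: list[float],
                  candidate_ms: list[float],
                  tolerance: float = 0.10) -> bool:
    """True if the candidate performs at least as well as the baseline,
    allowing a small tolerance for measurement noise."""
    return p95(candidate_ms) <= p95(baseline_ms) * (1 + tolerance)

# Made-up measurements (ms) for illustration only:
baseline = [42, 45, 44, 48, 51, 43, 46, 44, 47, 49]
after_move = [55, 58, 61, 57, 60, 59, 62, 56, 58, 63]

print("OK to proceed" if no_regression(baseline, after_move)
      else "Regression: the game will feel slower from the new location")
```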

Don’t think these points don’t apply if you are outsourcing or moving to a cloud provider. Outsourcing your infrastructure and network does not outsource your responsibility to provide a good gaming experience: in the end, the buck stops with you, not the outsourcer.

All of the technical points above are fundamentally focused on providing a good quality of service, which translates into winning and keeping more customers by giving them the best possible experience. Not doing this risks your customer retention and therefore your bottom line.
