The impact of server memory on data centre performance, capacity, power and costs

By Debbie Fowler of KingstonConsult

It’s not exactly front page news that virtualisation, big data and the cloud are major drivers changing the rules for flexibility and elasticity in data centres. We all know that data centre growth is escalating, costs are increasing and there is massive demand to support data that has to move quickly and efficiently.

Demand for data, driven by ubiquitous access to content from both consumers and businesses, is stretching resources.

We all know that managing the performance of systems in the data centre is a time- and resource-consuming priority, and that capacity, performance, power and cost are major considerations. You may be surprised to learn that the humble memory module can play a vital role in enabling your business and IT goals. Memory is often overlooked as a way to improve overall server performance, capacity and power efficiency; the immediate reaction is normally to add more servers without fully analysing how best to make use of the under-utilised servers already in place.

Unfortunately, knowing how to choose the right memory configuration to achieve the desired results and business goals is not straightforward today. Both memory and server technologies have evolved rapidly over the last five to ten years.
Balancing low power with capacity and performance requires an understanding of the role that server memory plays. Memory has evolved to become one of the most important components of the data centre. Server processors (often under-utilised) are able to process multiple threads over many cores to maximise the utilisation of a single server. Without sufficient or optimally configured memory, performance degrades and servers do not reach their full potential. Rather than automatically adding more servers to improve performance, in many cases additional memory will address the issues and reduce complexity and cost.

The following considerations and recommendations identify how additional server memory can help data centres improve overall performance efficiently, and how to ensure that new memory eases resource allocation without business disruption.

First, identify the role and goals for a given server or servers in the data centre. Prioritise the importance of better performance and speed, reduction of power consumption, or increased capacity. While not mutually exclusive, the prioritisation of these factors will dictate the optimal memory choices. Minimising memory power consumption can save between 5 percent and 10 percent or more of a server’s total power draw, which obviously gets multiplied across a data centre.
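
To see the scale involved, the short Python sketch below works through that arithmetic. The estate size, per-server power draw, saving percentage and electricity price are illustrative assumptions rather than figures from this article.

    # Back-of-the-envelope illustration of memory power savings across a
    # data centre. All of the input figures below are assumptions.
    servers = 1000              # assumed estate size
    draw_per_server_w = 400.0   # assumed average draw per server (watts)
    saving_fraction = 0.05      # assume 5% of total draw saved via memory choices
    price_per_kwh = 0.15        # assumed electricity price per kWh

    saving_per_server_w = draw_per_server_w * saving_fraction
    total_saving_kw = servers * saving_per_server_w / 1000.0
    kwh_per_year = total_saving_kw * 24 * 365

    print(f"Saving per server: {saving_per_server_w:.0f} W")
    print(f"Estate-wide saving: {total_saving_kw:.0f} kW continuous")
    print(f"Energy saved per year: {kwh_per_year:,.0f} kWh "
          f"(roughly {kwh_per_year * price_per_kwh:,.0f} in energy costs)")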

Although memory is considered a commodity today, with industry standards in place, that doesn’t mean every memory module or memory configuration will be supported by every server. There are many compatibility considerations related to the components on the memory module and your server. Fundamentally there is little difference between branded servers and white-box servers; there may, however, be subtle differences in motherboard design or system height that require the use of a specific memory type. An IBM server, for example, may have height restrictions and require memory with a very low-profile (VLP) design.

An HP ProLiant server, similarly, might have compatibility issues with specific register components or DRAM brands. It’s very important to select memory that is guaranteed to be compatible with your specific server system. It is also worth noting that older server systems may not be compatible with the latest memory module technologies or best-practice configurations.
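
Before ordering new modules, it helps to confirm exactly what is already installed. The Python sketch below is one way to do this on a Linux host; it assumes the standard dmidecode utility is available and run with root privileges, and is an illustration rather than a vendor tool.

    # List the memory modules a Linux server reports, so part numbers and
    # form factors can be checked against the server's compatibility list.
    # Assumes the 'dmidecode' utility is installed and run as root.
    import subprocess

    FIELDS = ("Size", "Form Factor", "Speed", "Manufacturer", "Part Number")

    output = subprocess.run(
        ["dmidecode", "--type", "memory"],
        capture_output=True, text=True, check=True,
    ).stdout

    for block in output.split("Memory Device"):
        details = {}
        for line in block.splitlines():
            key, _, value = line.partition(":")
            if key.strip() in FIELDS:
                details[key.strip()] = value.strip()
        # Skip the array header and any empty DIMM sockets.
        if details.get("Size") and details["Size"] != "No Module Installed":
            print(details)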

Ensure that new memory is installed correctly and follow the server’s channel architecture guidelines. For example, it has become the norm over the years to install memory modules in pairs. When triple- and quad-channel servers were introduced, many wrongly assumed that continuing to install in pairs was the correct way to go. In fact this is not the case and often leads to memory channels being incorrectly populated and the potential performance of the server being compromised. Memory incompatibility problems typically manifest themselves as system lockups, blue screens or error-correction logs. Memory performance, however, is not so easy to diagnose. To fully understand if the memory is performing as desired, it must be correctly benchmarked or closely monitored.
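
One simple way to sanity-check a planned configuration is to confirm that the DIMM count divides evenly across the available memory channels. The Python sketch below does exactly that; the channel counts in the examples are generic assumptions and are no substitute for the server vendor’s own population guidelines.

    # Check whether a planned DIMM count populates memory channels evenly.
    # Channel counts here are generic assumptions; always follow the server
    # vendor's population guidelines.
    def check_population(dimms, channels_per_cpu, cpus=1):
        total_channels = channels_per_cpu * cpus
        if dimms % total_channels == 0:
            print(f"{dimms} DIMMs over {total_channels} channels: balanced "
                  f"({dimms // total_channels} per channel)")
        else:
            print(f"{dimms} DIMMs over {total_channels} channels: unbalanced - "
                  f"some channels carry more DIMMs than others, which can "
                  f"reduce memory bandwidth")

    check_population(dimms=2, channels_per_cpu=2)           # pairs suit dual-channel
    check_population(dimms=2, channels_per_cpu=3)           # but not triple-channel
    check_population(dimms=8, channels_per_cpu=4, cpus=2)   # one DIMM per channel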

Choosing the cheapest solution may not be the wisest choice to meet long-term goals. For example, data centre managers, when evaluating new memory, may see that 8GB DIMMs are fairly inexpensive, and purchasing 16 of them for the server can achieve their capacity goal of 128GB. The other option would be to choose eight 16GB DIMMs instead, which may be more costly at the outset but will provide savings over the long term on energy consumption (fewer DIMMs drawing less power) and provide headroom (open sockets) to expand memory in the future.
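
The trade-off can be made concrete with a quick comparison. In the Python sketch below the per-DIMM prices, power figures and socket count are purely illustrative assumptions; the point is the shape of the calculation rather than the specific numbers.

    # Compare two illustrative ways of reaching 128GB in one server.
    # Prices, per-DIMM power figures and the 24-socket assumption are
    # placeholders, not quoted values.
    def option(name, dimm_gb, dimm_count, price_each, watts_each, sockets=24):
        capacity_gb = dimm_gb * dimm_count
        upfront = dimm_count * price_each
        kwh_per_year = dimm_count * watts_each * 24 * 365 / 1000.0
        print(f"{name}: {capacity_gb}GB, upfront ~{upfront}, "
              f"~{kwh_per_year:.0f} kWh/year for memory, "
              f"{sockets - dimm_count} sockets free for future expansion")

    option("16 x 8GB",  dimm_gb=8,  dimm_count=16, price_each=60,  watts_each=4)
    option(" 8 x 16GB", dimm_gb=16, dimm_count=8,  price_each=140, watts_each=5)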

This is where Kingston Technology, a company with 25 years’ experience of manufacturing computer memory, can be of use to your business. KingstonConsult offers you an independent opinion on whether the memory configuration you are currently using or are planning to use is balanced and optimised for your organisational goals and business needs.

KingstonConsult services are offered free of charge following qualification and do not require you to be a current Kingston customer.

The following service offerings are at the core of KingstonConsult:

  • KingstonConsult experts will look into your existing or planned server configuration and work with you to understand your individual business requirements. Based on our findings we will supply you with a tailored ‘Server Assessment Report’ which will address commercial and technical issues in order to demonstrate which memory would best support your specific business objectives.
  • KingstonConsult offers product evaluations to enable you to conduct a “proof of concept” review of our Server Assessment recommendations in your own real life environment.
  • KingstonConsult’s experts offer server configuration training designed to educate both MIS and business stakeholders regarding the technical and commercial challenges and benefits.

To find out more see: www.kingston.com/Think or email [email protected]
