Computing and data advances on the final frontier

Imagine a server strapped to a rocket, blasted more than 22,000 miles to geosynchronous orbit, and then living out its life bombarded by solar flares and radiation while being expected to perform flawlessly. This is precisely the challenge facing IT hardware manufacturers intending to compete in the emerging field of space-based data storage and processing.

An interesting new approach from Hewlett Packard Enterprise (HPE) is showing great promise for delivering both performance and longevity from standard, off-the-shelf equipment sent into space. This development could, in turn, help push enterprise IT in new directions.

From physical hardening to software protection

Until recently, the best way to help sensitive computer systems withstand the forces of takeoff and the environment of space has been so-called “hardening.” It’s shorthand for what amounts to creating a super-tough Toughbook. Manufacturers build specialised systems or retrofit commercial hardware, doing their best to insulate the machines physically from harm.

Hardening is expensive and time-consuming, and that’s a big reason why there isn’t much compute or storage happening in near-earth orbit or beyond. We simply don’t send true supercomputers outside our atmosphere today. Instead, data from the International Space Station, Mars rovers, and deep-space probes is beamed back to earth, where there are plenty of servers, storage arrays, and other IT hardware to process it.

There are downsides to this approach that will be familiar to any IT pro: limited bandwidth and high latency. And both are far worse than anything an enterprise faces. The roughly 300 Mbps available through the Space Network is a limited commodity, and the ISS and various satellites must share it to control systems, download images from the Hubble Space Telescope, and allow astronauts to talk to their families.
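For a sense of scale, here is a quick back-of-envelope calculation in Python. The distances and the 300 Mbps figure are public numbers; everything else is straightforward arithmetic.

```python
"""Back-of-envelope numbers behind the bandwidth and latency problem.
Distances and the 300 Mbps link rate come from public figures; real
throughput is lower still because the link is shared."""

C_KM_S = 299_792       # speed of light, km/s
GEO_KM = 35_786        # altitude of a geosynchronous relay, km
MARS_MIN_KM = 54.6e6   # Mars at closest approach, km
MARS_MAX_KM = 401e6    # Mars at farthest separation, km
LINK_BPS = 300e6       # Space Network downlink, bits/s

print(f"GEO relay, one-way light delay: {GEO_KM / C_KM_S * 1000:.0f} ms")
print(f"Mars, one-way light delay: {MARS_MIN_KM / C_KM_S / 60:.1f}"
      f" to {MARS_MAX_KM / C_KM_S / 60:.1f} minutes")

terabyte_bits = 1e12 * 8
print(f"1 TB at 300 Mbps: {terabyte_bits / LINK_BPS / 3600:.1f} hours,"
      " assuming exclusive use of the link")
```

Shipping a single terabyte of raw instrument data home would monopolise the shared link for the better part of a working day.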

As nations and private interests look to establish bases on the moon, put people on Mars, and explore the galaxy with increasingly powerful optics and sensors, there is good reason to take a more “edge-like” approach and move compute and storage closer to where the data is created. This might, for example, enable detailed analysis of onboard experiments, with only the results, not the raw data, transmitted to ground stations, cutting network traffic dramatically.
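As a toy illustration of that edge-style reduction, the sketch below simulates an onboard sensor stream and downlinks only per-frame summary statistics. The frame sizes and the choice of statistics are invented for the example, not drawn from any real mission.

```python
"""Toy sketch of onboard data reduction: analyse raw sensor frames in
orbit and downlink only compact results. Frame sizes and statistics are
invented for illustration, not drawn from any actual mission."""

import numpy as np

rng = np.random.default_rng(7)

# Simulate 100 raw frames from a 512x512, 16-bit sensor.
frames = rng.integers(0, 2**16, size=(100, 512, 512), dtype=np.uint16)
raw_bytes = frames.nbytes  # what we'd downlink without edge processing

# Onboard analysis: keep only a per-frame mean and peak reading.
stats = np.column_stack([
    frames.mean(axis=(1, 2)),
    frames.max(axis=(1, 2)),
]).astype(np.float32)
result_bytes = stats.nbytes

print(f"raw downlink:    {raw_bytes / 1e6:.1f} MB")
print(f"result downlink: {result_bytes} bytes "
      f"({raw_bytes / result_bytes:,.0f}x smaller)")
```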

There are also those who envision a future in which commercial data centres could be put into orbit, a public cloud operating above the clouds, so to speak. Such facilities would benefit from cool temperatures, the lack of humidity and weather extremes, and a zero-G environment that spinning drives particularly appreciate.

The HPE gambit & remaining barriers

To make all this possible, however, we need better ways to help expensive, cutting-edge computers survive in space. HPE scientists wondered if a software approach could replace hardware-based protections at a lower cost. Specifically, they hypothesised that slowing down processors during radiation events, such as solar flares, could prevent damage and data corruption.
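In Linux terms, the idea can be sketched as a small control loop: watch an error signal and lower the CPU frequency ceiling when it spikes. The sketch below uses the kernel’s standard EDAC corrected-error counters and cpufreq sysfs files, but the threshold and policy are purely illustrative assumptions, not HPE’s published algorithm.

```python
"""Toy control loop for software 'hardening': cap CPU frequency when
corrected memory errors spike. The sysfs paths are standard Linux (EDAC
and cpufreq), but the thresholds and policy are invented for
illustration; this is not HPE's actual algorithm. Requires root."""

import glob
import time

EDAC_CE = "/sys/devices/system/edac/mc/mc*/ce_count"  # corrected-error counters
SCALING_MAX = "/sys/devices/system/cpu/cpu*/cpufreq/scaling_max_freq"
HW_MIN = "/sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_min_freq"
HW_MAX = "/sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq"

ERRORS_PER_POLL = 5   # illustrative threshold
POLL_SECONDS = 10

def read_int(path: str) -> int:
    with open(path) as f:
        return int(f.read().strip())

def corrected_errors() -> int:
    # Sum corrected (recoverable) memory errors across all controllers.
    return sum(read_int(p) for p in glob.glob(EDAC_CE))

def cap_frequency(khz: int) -> None:
    # Lower every core's ceiling; the governor keeps clocks at or below it.
    for path in glob.glob(SCALING_MAX):
        with open(path, "w") as f:
            f.write(str(khz))

def main() -> None:
    slow, fast = read_int(HW_MIN), read_int(HW_MAX)
    last = corrected_errors()
    while True:
        time.sleep(POLL_SECONDS)
        now = corrected_errors()
        if now - last > ERRORS_PER_POLL:
            cap_frequency(slow)   # ride out the radiation event slowly
        else:
            cap_frequency(fast)   # conditions look nominal; restore speed
        last = now

if __name__ == "__main__":
    main()
```

Capping via scaling_max_freq, rather than pinning a governor, lets normal power management keep working within the reduced ceiling.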

To test this theory, HPE sent a Linux system to the ISS in 2017 and kept a nearly identical machine in a lab on earth for comparison. They hoped for a year of error-free operation in orbit. More than 550 days later and counting, everything is still running fine. It’s going so well, in fact, that the test machine hasn’t yet been booked a return ticket.

This success is great news, but hardly the final hurdle for space-based computing. After all, any data centre manager knows that IT equipment fails often enough without ever enduring 5-g acceleration at max q during launch or constant radiation bombardment once in place.

Fortunately, advances in machine learning are pushing remote monitoring and repair capabilities to space-friendly heights. Systems can already learn to proactively detect faults and diagnose the underlying causes of IT issues from a distance. Such capabilities are helping enterprises achieve near-constant uptime for banking, social media, and other technologies we all rely on. With further AI advances, the industry can expect remote hardware-management systems to become increasingly predictive and able to initiate fixes further in advance of critical failures.
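As a minimal sketch of what proactive fault detection can look like, the example below trains scikit-learn’s IsolationForest on simulated healthy telemetry and flags readings that drift from it. The telemetry fields and numbers are invented for illustration.

```python
"""Minimal sketch of proactive fault detection on hardware telemetry
using scikit-learn's IsolationForest. The telemetry fields and values
are invented; a real system would stream metrics from sensors and BMCs
and act on the flags (open a ticket, fail over, throttle)."""

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Train on healthy telemetry: CPU temp (C), fan speed (RPM), ECC errors/hour.
healthy = np.column_stack([
    rng.normal(55, 3, 5000),
    rng.normal(9000, 400, 5000),
    rng.poisson(1, 5000),
])
model = IsolationForest(contamination=0.01, random_state=0).fit(healthy)

# Score new readings; the second row is running hot with rising ECC errors.
readings = np.array([
    [56.0, 9100.0, 1.0],
    [78.0, 12000.0, 40.0],
])
for row, flag in zip(readings, model.predict(readings)):  # +1 normal, -1 anomaly
    status = "ANOMALY, investigate" if flag == -1 else "nominal"
    print(f"temp={row[0]:.1f}C fan={row[1]:.0f}rpm ecc={row[2]:.0f}/h -> {status}")
```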

Moving to more software-based functionality, as the IT industry is doing with software-defined networking and, eventually, the fully software-defined data centre, will enhance flexibility and remote upgradability. This will be a boon for space-based computing: adding such capabilities to a long-range space mission will allow for more post-launch adjustment and longer-lasting reliability.

The remaining challenge is that human engineers are still needed in data centres to swap out a failed drive or rack a replacement server, and we don’t yet have robotics that can take care of these jobs on the ground, let alone in space. Robotics is advancing quickly, however, so this shortcoming won’t remain a barrier forever.

Implications on earth

What does all this mean for the IT professional not working for NASA, SpaceX, or any of the other players in the new space race? As we found out with Tang, advances driven by space exploration are often adapted to broader purposes. Putting innovation dollars behind research into hardier IT equipment could bring the average enterprise more reliability and lower costs. Pushing the envelope of remote monitoring will transform IT hardware maintenance. And if we must have robots available to fix future generations of Google servers in orbit, data centre managers will one day install similar systems for truly lights-out operations closer to home.

These developments will also help sustain the computing we are doing in increasingly remote locations. Microsoft has installed an undersea data centre pod off the Orkney Islands for testing purposes. Green Mountain operates a commercial data centre inside a mountain beside a Norwegian fjord. And the ALMA correlator, a unique supercomputer, runs at 16,500 feet in the Chilean Andes. Computing in harsh, isolated environments and sending IT resources completely off the grid are becoming common. Advances made in space may help make these endeavours more successful and fuel a radically mobile lifestyle for earthlings.

In the meantime, for space geeks everywhere, the successful HPE experiment gives us one more piece of information to feed our interminable debates about what it will take to survive, thrive, and compute happily as we move into the final frontier.
