Think the Internet is big? It’s beyond big. There are more devices on the Internet than there are people on Earth.

There are now at least two billion PCs and a billion mobile phones, plus 4–5 billion servers, consoles, tablets, smart TVs and embedded devices, all joined together by enough cable to reach the Moon and back 1,000 times over.

All that hardware could fill every football stadium in Britain to the brim – and it uses more electricity than the whole of Japan. The Net has become a vast, multi-trillion-dollar planetary machine; the most expensive and complex thing that humans have ever made. And everything about it is growing.

Over 1.5 million smartphones are registered daily – five times the human birth rate. CPUs and GPUs are finding their way into everything from fridges to teddy bears, and Moore’s Law continues to hold: computing power keeps doubling every year or two, making today’s laptops quicker than yesterday’s supercomputers.

The sum total of the Internet’s home computing power is almost beyond imagination. It’s also over 1,000 times greater than all the supercomputers on Earth combined. And we’re wasting almost all of it.
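
For a sense of the arithmetic behind that multiple, here is a rough back-of-envelope in Python. Every figure in it – the device count, the per-device GFLOPS, the supercomputer aggregate – is an illustrative assumption, and the final ratio is only as good as those guesses:

```python
# Back-of-envelope sketch of the claim above. Every number is an
# illustrative assumption, not a measurement, and the final multiple
# is extremely sensitive to them.
devices = 5_000_000_000      # assumed Internet-connected consumer devices
gflops_each = 50             # assumed average peak GFLOPS per device

internet_exaflops = devices * gflops_each / 1e9   # GFLOPS -> exaFLOPS
top500_exaflops = 0.25       # assumed aggregate of the era's TOP500 list

print(f"Consumer devices: ~{internet_exaflops:.0f} exaFLOPS (assumed)")
print(f"Ratio vs supercomputers: ~{internet_exaflops / top500_exaflops:.0f}x")
# -> ~250 exaFLOPS and ~1000x under these assumptions
```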

Sure, all those billions of PCs, tablets and phones are “being used”. A third of all PCs are never switched off. But even though we use these devices constantly, at any given time the average CPU load across the Internet is less than 2%. Unless it is encoding video or playing the latest 3D game, the typical PC is almost completely idle. People don’t type at GHz speeds or view holiday photos at 60fps.
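
That 2% figure is the kind of thing anyone can sanity-check on their own machine. A minimal sketch, assuming the third-party psutil package is installed:

```python
# Sample this machine's own CPU utilisation for a minute, then report
# the average load and the idle capacity going spare.
# Requires: pip install psutil
import psutil

samples = [psutil.cpu_percent(interval=1) for _ in range(60)]
avg = sum(samples) / len(samples)

print(f"Average CPU load over {len(samples)}s: {avg:.1f}%")
print(f"Idle capacity going spare: {100 - avg:.1f}%")
```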

As processors keep getting faster and more numerous, the ratio of idle to used computing continues to increase. Almost everyone has more computing power than they need – almost everyone, that is, because some people can never have enough: physicists, biologists, engineers, climatologists, astronomers, chemists, geologists – pretty much anyone doing fundamental research.

Science and industry spend billions on ever-faster supercomputers for this reason: they have become indispensable to our modern way of life. Cars, planes, medicines, oil wells, solar cells, even the shape of Pringles crisps – all were designed by supercomputer. They are the most useful and productive research tools ever made.

But they don’t last. IBM’s Roadrunner, the world’s fastest computer from 2008 to 2009, was decommissioned in 2013 because it was obsolete. It cost over $100m, as will its successor. Its owner, Los Alamos National Laboratory, can use the same floor space and energy far more efficiently with new hardware. Like all supercomputers, though, Roadrunner’s limited shelf life was unavoidable. Computers do not age gracefully.

Contrast this with the billions of idle CPUs on the Internet, which are continually being replaced by their owners. Broken devices are repaired, old ones are upgraded and more are constantly added. The Internet is unique among machines: effectively self-healing, self-improving and permanently switched on. Parts of it may turn on and off every second but, considered as a single system, the Net has 100% uptime – and it always will.

Using the Internet as a computing platform has been happening for years. UC Berkeley’s BOINC software has enabled dozens of science projects to harness over $2 billion’s worth of donated CPU time over the last decade, from more than six million idle PCs. The concept of volunteer computing is technically proven; the only issue is persuading device owners to allow it. Considering that the Internet is wasting over $500m in unused computing per day, it is certainly an endeavour worth pursuing.
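
To make the volunteer-computing idea concrete, here is a toy sketch of the pull-based work-unit loop that platforms like BOINC popularised. The server URL, endpoints and task format are entirely hypothetical – real BOINC clients speak their own scheduler protocol – but the shape of the loop is the point: fetch a unit, crunch it while the machine is idle, report back.

```python
# Toy sketch of a volunteer-computing client. Everything about the
# server below is hypothetical; it illustrates the pull model only.
import json
import time
import urllib.request

SERVER = "https://example.org/api"   # hypothetical project server

def fetch_task():
    """Pull one self-contained work unit from the project server."""
    with urllib.request.urlopen(f"{SERVER}/task") as resp:
        return json.load(resp)       # e.g. {"id": 42, "payload": [...]}

def compute(payload):
    """Stand-in for the real science code a project would ship."""
    return sum(x * x for x in payload)

def report(task_id, result):
    """POST the result back; only tiny results cross the network."""
    data = json.dumps({"id": task_id, "result": result}).encode()
    urllib.request.urlopen(f"{SERVER}/result", data=data)

while True:
    task = fetch_task()              # in practice, only when the PC is idle
    report(task["id"], compute(task["payload"]))
    time.sleep(1)                    # back off between work units
```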

It is true that many HPC and cloud applications are not suited to heterogeneous WAN computing. Low latency is out of the question (even with the new hollow-core optical fibre that carries light at 99.7% of its vacuum speed), and highly secret data is unlikely ever to leave its owner’s building without military-grade homomorphic encryption. But there are millions of tasks that are perfectly suited: Monte Carlo simulations, parameter sweeps, climate modelling, rendering – anything that doesn’t need InfiniBand, shouldn’t be queuing for HPC time and will happily use public resources to gain 10–100x more computing for the same budget.
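
What makes these workloads such a good fit is that each work unit is completely independent. A small illustration, estimating pi by Monte Carlo as a stand-in for real science code:

```python
# Each chunk below is fully independent: chunks could run on machines
# anywhere, with no InfiniBand, no shared memory and no chatter between
# them. Only the tiny integer results would ever cross the network.
import random

def chunk(n, seed):
    """One self-contained work unit: n random darts at the unit square."""
    rng = random.Random(seed)
    return sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
               for _ in range(n))

# In a real deployment, each call here would be a task shipped to a
# remote volunteer device rather than run in a local loop.
results = [chunk(100_000, seed) for seed in range(20)]
pi_estimate = 4 * sum(results) / (20 * 100_000)
print(f"pi ~= {pi_estimate:.4f}")
```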

Using the Internet to compute also permits a whole new class of tasks which might prove to be the most interesting – and the most valuable – of all: those which can only be served by a global grid of millions of CPUs working together. For some applications, $100m supercomputers will always be just too small.
Originally published on Compare the Cloud 2 April 2013 by Mark McAndrew.
