Autonomous vehicles – Is the hype warranted?

So where is your self-driving car? Those of us waiting in hope must reckon with a daily news cycle in which tech companies big and small contribute to hype that obscures the real progress. Elon Musk has predicted several times that we would achieve ‘Level 5 autonomy’ within a year, but these predictions have failed to materialise.

Meanwhile, there are huge teams of engineers deep in the technical soup, working out which sensors to use, how much data to collect and how best to communicate with other vehicles, and tackling the dozens of interconnected technical challenges in between. The goal is full autonomy: a vehicle that drives itself, anywhere and everywhere, with the human driver entirely optional. And where are we today? Well, you could have your self-driving car tomorrow – just don’t expect to get more than a mile or two without “disengagement”.

It’s clear that we’re nowhere near our expectation of perfection when it comes to autonomous vehicles. We’d like all the skill and nuance of a human driver, without any of the flaws. Autonomous vehicles should be (very nearly) perfectly safe, all the time, irrespective of the world around them. But even at the current rate of technological progress, will that ever be possible?

Before we explore this question, it’s worth reminding ourselves of the industry’s agreed levels of autonomy (a short code sketch after the list shows one way to represent them):

Level 1 is ‘driver assistance’. This is where a single aspect of driving, such as steering or speed, is automated, but the driver is very much still in charge.

Level 2 is ‘partial automation’. This is where the system controls two or more elements of driving at once. In broad terms, this is where we are today, with vehicles intelligent enough to weave speed and steering systems together using multiple data sources.

Level 3 is ‘conditional automation’. This is where a vehicle can manage safety-critical functions. Although all aspects of driving can be done automatically, the driver must be on hand to intervene.

Level 4 is ‘high automation’. This is where vehicles will be fully autonomous in controlled areas. This will see vehicles drive in geofenced metropolitan areas, harnessing emerging technology in HD mapping, vehicle-to-vehicle communications, machine vision and advanced sensors.

And finally, Level 5. This is ‘fully autonomous’: the vehicle can drive anywhere, in all environmental conditions. The key difference between this and Level 4 is that the vehicle is no longer confined to controlled areas, so the human driver becomes truly optional.
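For readers who think in code, here is a minimal sketch of these levels as a simple data structure, in Python. It is purely illustrative: the names are mine, not part of any official taxonomy or API.

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """Driving automation levels as described above."""
    DRIVER_ASSISTANCE = 1       # one aspect of driving automated; driver in charge
    PARTIAL_AUTOMATION = 2      # two or more aspects automated together
    CONDITIONAL_AUTOMATION = 3  # vehicle manages safety-critical functions; driver on standby
    HIGH_AUTOMATION = 4         # fully autonomous, but only in geofenced areas
    FULL_AUTOMATION = 5         # autonomous anywhere, in all conditions

def human_driver_optional(level: AutonomyLevel) -> bool:
    """Per the descriptions above, only Level 5 does away with the driver."""
    return level == AutonomyLevel.FULL_AUTOMATION
```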

Software at its core

Investor extraordinaire Marc Andreessen told us back in 2011 that “software is eating the world”. That remains true. At its heart, creating autonomous vehicles is a problem of software, or in fact many complex pieces of interleaved software. The first Level 5 vehicle on the road will look very similar to the Level 2 vehicles of today: the body, sensors, data feeds and so on will look the same. The key differences will be the human interaction and the software, the lines of code that make sense of the world around the vehicle, making predictions and creating actions at lightning speed.

Software will consume sensor data and analyse the vehicle’s surroundings. Software will help us navigate difficult terrain, deciding which route to take and when to change course, avoiding dangerous routes when there is heavy rain. Software will detect objects, from cat’s eyes to lampposts. Software will track the motion of a child running after a football on the pavement, even going one step further, applying its grasp of physics to predict the risk that the ball’s trajectory will carry it onto the road.
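To make that last idea concrete, here is a minimal sketch of the kind of trajectory check described above. Everything here – the function name, the simple friction model, the parameters – is an illustrative assumption, not a production motion model:

```python
def ball_reaches_road(pos, vel, kerb_y, friction=0.98, dt=0.05, horizon_s=3.0):
    """Step a tracked ball forward on the ground plane with a crude
    friction decay, and report whether it crosses the kerb line within
    the prediction horizon. Coordinates are in metres; the road lies
    at y < kerb_y.
    """
    x, y = pos
    vx, vy = vel
    for _ in range(int(horizon_s / dt)):
        x, y = x + vx * dt, y + vy * dt        # integrate position
        vx, vy = vx * friction, vy * friction  # ball slows as it rolls
        if y < kerb_y:
            return True   # predicted to enter the road: treat as a hazard
    return False

# Example: a ball 2 m from the kerb, rolling towards the road at 3 m/s
print(ball_reaches_road(pos=(0.0, 2.0), vel=(0.5, -3.0), kerb_y=0.0))  # True
```

A real system would fuse many such predictions, with uncertainty attached, and slow the vehicle whenever the risk crosses a threshold – but the principle is the same: physics plus tracking yields a forecast the planner can act on.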

It all sounds great, right? But will these advances ever bring us to complete safety, or is it acceptable that they will simply be safer than human drivers?

This is where we enter an adjacent, fascinating field of study. There is something unique about how humans perceive technology. It’s my view that we’ll be extremely unforgiving about the mistakes of an autonomous vehicle. Statistical proof that they’re safer than human drivers won’t be enough when there’s something… unnerving, even appalling about a machine making a decision that causes a crash or ends a life. So better will never be enough. We’ll need to see a huge leap forward in autonomous vehicle safety for these vehicles to achieve mainstream adoption. The only wildcard I see is that insurers may provide big incentives to drivers who leave the driving to the car, but even then I can see reluctance continuing.

So this is a core challenge facing the industry: these vehicles must be almost absurdly safe as we move up the levels towards Level 5.

The solution is in the code

New vehicles can carry up to 150 million lines of code (more than modern fighter jets) and the role of software is only increasing with each new model that is rolled out. Despite the challenges the industry faces in moving up the autonomy levels, we are seeing rapid advances in the software, particularly where it’s powered by machine learning.

Machine learning is vital to the way an autonomous vehicle perceives its environment and will have a role in the decisions it makes about which actions to take, although some decisions are likely to be more rules-based. The challenge in having a vehicle perceive the world as a human does is that the road environment – particularly in dense urban settings – is very complex. It’s also subject to environmental conditions, such as rain, fog, smoke and dust, which make it even harder to understand what’s happening around the vehicle. This is one of the areas we’ve been looking at: before Christmas the Cambridge Consultants team demonstrated a technology called DeepRay, which removes this distortion from video in real time and would enable an autonomous vehicle to see more clearly in real-world conditions.
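DeepRay’s internals aren’t public, so as a purely hypothetical sketch, here is the shape of a frame-by-frame clean-up stage in a perception pipeline. The `restore_frame` callable stands in for a trained distortion-removal model; only the OpenCV capture API is real:

```python
import cv2  # pip install opencv-python

def restore_stream(source, restore_frame):
    """Yield cleaned frames from a video source.

    source:        path or camera index for cv2.VideoCapture
    restore_frame: callable mapping a raw BGR frame to a cleaned one,
                   standing in for a trained distortion-removal model
    """
    cap = cv2.VideoCapture(source)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # Downstream perception (detection, tracking) consumes the
            # cleaned frame instead of the raw, rain- or glare-distorted one.
            yield restore_frame(frame)
    finally:
        cap.release()
```

The design point is that restoration sits in front of detection and tracking, so every downstream component benefits without modification.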

These advances in the software must be complemented by historical datasets, which have given rise to the race for tracked miles. This data then feeds through to new software releases, providing reassurance that vehicles will respond to situations as expected and that they learn. None of this is fast: elapsed time means more learning and better software. Ultimately, it’s these advances in software, allied to robust and methodical testing practices, that will move the industry through the autonomy levels, all the while building the user trust that represents the biggest challenge to adoption.
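One way to picture those testing practices: every logged disengagement becomes a replayable regression test that each new software release must pass. The sketch below is an assumption about how such a harness might look, not any vendor’s actual tooling – `Scenario` and `run_in_simulator` are hypothetical names:

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str             # e.g. "child_chasing_ball_suburban"
    sensor_log: str       # path to the recorded sensor data to replay
    expected_action: str  # e.g. "brake", "yield", "lane_hold"

def regression_suite(scenarios, run_in_simulator):
    """Replay each recorded scenario against the new build and collect
    any case where the chosen action diverges from expectation."""
    failures = []
    for s in scenarios:
        action = run_in_simulator(s.sensor_log)  # hypothetical harness call
        if action != s.expected_action:
            failures.append((s.name, action))
    return failures
```

Each release is gated on an empty failure list, which is how ‘elapsed time means more learning and better software’ turns into a concrete engineering loop.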

And when will you have your self-driving car? Elon Musk’s most recent timeline suggests September this year, but with multiple caveats. Today we’re engaged in the big leap from Level 3 to Level 4, in which all safety-critical functions are performed by the vehicle. This is a viable vision for the next three to five years, so perhaps by 2022. The timeframe for these vehicles to be trusted by the mass market remains an open question.

Dr Sally Epstein is a Senior Machine Learning Engineer at Cambridge Consultants, where she drives core R&D into state-of-the-art AI. Based at the company’s Digital Greenhouse AI research lab, Sally is focused on developing novel approaches to deep learning and working with major clients to transform diverse organisations and markets across the world. Sally has significant experience in presenting these technologies to varying audiences, demystifying AI and illustrating the practical potential of deep learning. Sally holds a degree in Mathematics from the University of Oxford and a PhD in biomedical engineering from King’s College London.
