Driverless cars would need sci-fi AI

Science fact or fiction? | Photo: Mercedes

Post started 2016. Last updated 2025

A 2016 newspaper report said Volvo would test its driverless cars in the UK in 2018, with real families riding in fully ‘autonomous’ cars on public roads.

The report said the UK government hadn’t signed the international convention requiring a driver to be in the front seat of a car, but was working on its own regulation.

Perhaps the government should have considered the serious problems to be solved before such cars can safely run on public roads.

Volvo’s beguiling PR phrase, ‘autonomous driving’, tells us not to think ‘driverless’ but to think ‘independent’.

But PR can’t solve the problem of how a computer can ‘read’ the ‘map’ – the live, continuous, 360-degree, 3D digital model overlaid on a previously scanned model of the road and its surroundings. The models are built by integrating information from an array of cameras and sensors.

The cleverly-produced model is no use without the ability to meaningfully – and accurately – understand it.

Can the computer, travelling at 30 mph in poor visibility, distinguish between, say, a child standing still at the side of the road and something else of about the same size that wasn’t there during the pre-scan – as a driver could?

Such an ability would need a level of artificial intelligence – or rather artificial consciousness – found only in science fiction.
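
To make the gap concrete, here’s a minimal, made-up sketch – my own illustration in Python, not any manufacturer’s pipeline – of what the purely geometric part of that comparison can deliver: it can flag that something roughly child-sized now occupies space that was empty in the pre-scanned model, but naming that something falls to a statistical classifier whose confidence collapses in exactly the poor-visibility cases that matter.

    # Toy illustration only: all names and numbers are invented.
    # Real pipelines fuse lidar, radar and camera data into far richer models.
    import numpy as np

    CHILD_HEIGHT_RANGE = (0.8, 1.4)   # metres: plausible height of a small child

    def new_objects(prior_heights, live_heights, min_change_m=0.5):
        """Return grid cells where something tall has appeared since the pre-scan."""
        return np.argwhere(live_heights - prior_heights > min_change_m)

    def guess_label(height_m, detector_confidence):
        """The 'understanding' step: a statistical guess, not comprehension."""
        if CHILD_HEIGHT_RANGE[0] <= height_m <= CHILD_HEIGHT_RANGE[1]:
            # A bollard, a bin bag or a dog on its hind legs can land here too.
            return "child" if detector_confidence > 0.7 else "unknown object"
        return "unknown object"

    # The pre-scan said this strip of kerb was empty; one cell now reads 1.1 m tall.
    prior = np.zeros((4, 4))
    live = prior.copy()
    live[2, 1] = 1.1

    for y, x in new_objects(prior, live):
        # In fog at dusk the vision model's confidence drops, and the label
        # degrades to 'unknown object' -- the geometry alone cannot decide.
        print(guess_label(live[y, x], detector_confidence=0.4))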

This is yet another fine example of the media swallowing PR guff about driverless cars.


Postscript 1

The UK government has now promised to introduce legislation to enable driverless cars to be insured under ordinary policies. The transport minister said:

    ‘Driverless cars might seem like science fiction but the economic potential of the new technology is huge, and I am determined the UK gets maximum benefit.’

(£100m of taxpayers’ money was being wasted in pursuit of this illusory pot of gold.)


Postscript 2

I put this to some driverless car experts and computer vision academics. The only respondent, a driverless car expert, said there was no problem with image recognition.

Hmm. See Postscript 9, below, nine years later.


Postscript 3

In a TED talk, Google’s head of self-driving cars said of computer vision:

    ‘It’s really just numbers at the end of the day. How hard can it really be? It’s really a geometric understanding of the world.’

Really? Tw*t.


Postscript 4

Wikipedia’s article on computer vision, under the heading Autonomous vehicles, says:

    Several car manufacturers have demonstrated [vision] systems for autonomous driving of cars, but this technology has still not reached a level where it can be put on the market.

Quite. But the marketeers – and their useful idiots in media and government – can’t wait.


Postscript 5

December 2022: Unsurprisingly, four years on from Volvo’s ‘2018’, there’ve been no live tests in the UK.


Postscript 6

A September 2023 article on US financial news website TheStreet, Engineering whistleblower explains why safe Full Self-Driving can’t ever happen, says that according to engineer Michael DeKort (who exposed Lockheed Martin’s subpar safety practices in 2006):

    Artificial general intelligence (an AI with human-level intelligence and reasoning capabilities) does not exist. So the AI that makes self-driving cars work learns through extensive pattern recognition. Human drivers, he said, are scanning their environment all the time. When they see something, whether it be a group of people about to cross an intersection or a deer at the side of the road, they react, without needing to understand the details of a potential threat (color, for example).

    “The problem with these systems is they work from the pixels out. They have to hyperclassify,” DeKort told TheStreet. Pattern recognition, he added, is just not feasible, “because one, you have to stumble on all the variations. Two, you have to re-stumble on them hundreds if not thousands of times because the process is extremely inefficient. It doesn’t learn right away.”

Exactly. The article went on:

    Self-driving cars would have to clock billions to hundreds of billions of miles using their current methods to achieve a fatality rate in line with that of human drivers: one per 100 million miles, a 2016 study by Rand found…Tesla’s beta version of FSD [full self-driving], according to Elon Musk, has covered some 300 million miles; the company would have to scale up mileage by 100 to 1,000 times to create a system that is as good as human, according to Rand’s calculations.

Well, we can trust Musk to behave sensibly, can’t we? 🤪
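
The arithmetic behind those figures is worth spelling out. A back-of-the-envelope version in Python (my own simplification of the numbers quoted above, not Rand’s actual model):

    # Back-of-the-envelope check of the figures quoted above (illustrative only).

    HUMAN_FATALITY_RATE = 1 / 100_000_000   # benchmark: one fatality per 100 million miles
    FSD_MILES = 300_000_000                 # the FSD beta mileage Musk claims

    # At the human rate you would expect only ~3 fatal crashes over 300 million miles --
    # far too few events to demonstrate the system is safer with any statistical confidence.
    print(f"Expected fatalities at the human rate: {HUMAN_FATALITY_RATE * FSD_MILES:.0f}")

    # Scaling that mileage up by 100x to 1,000x, as the article says Rand's
    # calculations imply, means 30 to 300 billion miles of driving.
    for factor in (100, 1_000):
        print(f"x{factor}: {FSD_MILES * factor / 1e9:.0f} billion miles")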


Postscript 7

May 2024 – In spite of the obvious dangers, the UK government blithely ploughed ahead with its ill-informed, gung-ho promotion. A ‘news story’ on the government information website GOV.UK, Self-driving vehicles set to be on roads by 2026 as Automated Vehicles Act becomes law, said:

    Self-driving vehicles could be on British roads by 2026, after the government’s world-leading Automated Vehicles (AV) Act became law.

Jesus. In July 2024 a pseudo-pragmatic Labour government took over from the daft Tories. But the gung-ho self-driving bandwagon would probably self-drive on.


Postscript 8

But… what about robot taxis? Alphabet/Google’s fully autonomous Waymo Driver taxis are operating in some US cities within specific, pre-defined geographical areas, known as ‘geo-fenced areas’. In December 2024, Waymo analysed liability claims related to collisions from 25 million fully autonomous miles and claimed:

    The Waymo Driver demonstrated better safety performance when compared to human-driven vehicles, with an 88% reduction in property damage claims and 92% reduction in bodily injury claims. In real numbers, across 25.3 million miles, the Waymo Driver was involved in just nine property damage claims and two bodily injury claims. Both bodily injury claims are still open and described in the paper. For the same distance, human drivers would be expected to have 78 property damage and 26 bodily injury claims.
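
For what it’s worth, the headline percentages do follow from the raw counts Waymo gives – a quick check in Python of the company’s own numbers, nothing more:

    # Quick arithmetic check of the claim quoted above, using Waymo's own counts.

    MILES = 25_300_000                       # fully autonomous miles analysed
    waymo_property, waymo_injury = 9, 2      # claims against the Waymo Driver
    human_property, human_injury = 78, 26    # claims expected of human drivers over the same miles

    print(f"Over {MILES / 1e6:.1f} million miles:")
    print(f"  property damage reduction: {1 - waymo_property / human_property:.0%}")   # ~88%
    print(f"  bodily injury reduction:   {1 - waymo_injury / human_injury:.0%}")       # ~92%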

So Alphabet says Waymo is statistically safer than human-driven cars. But would you trust it? It was reported in May 2024 that the US NHTSA (National Highway Traffic Safety Administration) was investigating incidents in which autonomous taxis behaved erratically and sometimes disobeyed traffic safety rules or were involved in crashes.

The NHTSA learned of 22 incidents in which self-driving Waymo cars ‘exhibited driving behavior that potentially violated traffic safety laws’ (according to a document posted by NHTSA), including situations in which the vehicles ‘appeared to disobey traffic safety control devices.’ In some cases, the vehicles collided with stationary objects such as gates and chains. This sometimes happened after the vehicles ‘exhibited unexpected behaviors.’

The NHTSA also investigated Zoox, the autonomous technology subsidiary of Amazon. In two separate incidents, self-driving cars operated by Zoox braked suddenly and unexpectedly, and then were rear-ended by motorcyclists. In one case, a motorcyclist was injured. The NHTSA investigators were looking into the Zoox self-driving system’s ‘behavior in crosswalks around vulnerable road users, and in other similar rear-end collision scenarios.’

Er, no thanks.

In February 2025 the NHTSA, which is investigating the ‘full self-driving’ software in 2.4 million Tesla cars after four collisions, including a 2023 fatal crash, suffered a four-percent staff cut – coinciding with Elon Musk’s ‘DOGE’ cuts across the federal government.

Tesla is developing its own Cybercab robotaxi. How much easier with less regulation!


Postscript 9

Google search, AI Overview, March 2025:

    Moving Images: While AI is improving in analyzing moving images, it still struggles with complex, changing scenes compared to human perception.

You don’t say.

From the comments

Thanks for your comment, Colton. Yes, human perception is the so-called “hard problem” of understanding consciousness. As the Wikipedia entry shows, it’s complicated! Human drivers make mistakes and AI has greatly improved, but I wouldn’t trust an autonomous car.
