Driverless cars would need sci-fi AI

Image: Science fact or fiction? (Photo: Mercedes)

A 2016 newspaper report said that Volvo would test its driverless cars in the UK in 2018, using real families in fully ‘autonomous’ cars on public roads.

The report said that the UK government hadn’t signed the international convention requiring a driver to be in the front seat of a car, but was working on its own regulation.

Perhaps the government should have considered the serious problems to be solved before such cars can safely run on public roads.

Volvo’s beguiling PR phrase, ‘autonomous driving’, tells us: don’t think ‘driverless’ – think ‘independent’. But clever PR can’t solve the problem of how a computer can ‘read’ the ‘map’ – the live, continuous, 360-degree, 3D digital model, overlaid on a previously scanned model of the road and its surroundings. The models are made by interpreting information from an array of cameras and sensors.

But the cleverly produced model is no use without the ability to understand it meaningfully – and accurately.

Could the computer – travelling at 30 mph in poor visibility – distinguish between, say, a child standing still at the side of the road and something else of about the same size that wasn’t there during the pre-scanning, as a driver could?
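To be fair to the engineers, spotting that *something* new is there is the easy half. A toy sketch (the grids and values below are invented for illustration) can flag a cell that is occupied in the live scan but not in the pre-scanned model. What no amount of such arithmetic supplies is knowing whether the new object is a child or a bin bag:

```python
# Toy occupancy grids: 1 = something occupies the cell, 0 = empty.
# Values are invented; a real system would fuse camera/lidar/radar data.
pre_scan = [
    [0, 0, 0],
    [0, 0, 0],
    [1, 1, 1],  # kerb, present in the pre-scanned model
]
live_scan = [
    [0, 0, 0],
    [0, 1, 0],  # a new, roughly child-sized object
    [1, 1, 1],
]

def new_objects(before, after):
    """Return coordinates occupied in the live scan but not the pre-scan."""
    return [
        (r, c)
        for r, row in enumerate(after)
        for c, occupied in enumerate(row)
        if occupied and not before[r][c]
    ]

print(new_objects(pre_scan, live_scan))  # flags the new object at (1, 1)
```

The comparison says only ‘occupied where it wasn’t before’; everything that matters for safety – what the object is and what it might do next – lies beyond it.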

Such an ability would need a level of artificial intelligence – or rather artificial consciousness – found only in science fiction.

This is yet another fine example of the media swallowing PR guff about driverless cars.

Postscript 1: The UK government has now promised to introduce legislation to enable driverless cars to be insured under ordinary policies. The transport minister said: ‘Driverless cars … might seem like something science fiction [sic] but the economic potential of the new technology is huge, and I am determined the UK gets maximum benefit.’ (Millions of pounds of taxpayers’ money have already been wasted in pursuit of this illusory pot of gold.)

Postscript 2: I put this to some driverless car experts and computer vision academics. The only one kind enough to reply so far (a driverless car expert) thinks there’s no problem with image recognition.

Postscript 3: In a TED talk, Google’s head of self-driving cars said that the cars’ computer vision is ‘really just … numbers at the end of the day … how hard can it really be? … It’s really a geometric understanding of the world’. Really?
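The ‘just numbers’ claim is true as far as it goes – and that is precisely the problem. A minimal, hypothetical sketch (the labels and scores below are invented): a vision system’s final layer emits only numbers, and a softmax turns them into ‘confidence’ scores, but nothing in the arithmetic distinguishes a child from a bollard of similar size:

```python
import math

def softmax(logits):
    """Convert raw classifier scores ('just numbers') into probabilities."""
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical final-layer scores for an object ~1 m tall at the roadside.
labels = ["child", "bollard", "bin bag"]
logits = [2.1, 2.0, 0.3]  # invented values: the top two classes nearly tied

for label, p in zip(labels, softmax(logits)):
    print(f"{label}: {p:.2f}")  # child: 0.48, bollard: 0.44, bin bag: 0.08
```

A near-tie like this is just more numbers; the ‘geometric understanding’ stops exactly where the question ‘what should the car do about it?’ begins.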

Postscript 4: Wikipedia’s authoritative article on computer vision (under the heading Applications) says, ‘Several car manufacturers have demonstrated [vision] systems for autonomous driving of cars, but this technology has still not reached a level where it can be put on the market.’ Quite. But the marketeers – and their useful idiots in media and government – can’t wait.


Artificial stupidity again

Image: Futurama

A recent newspaper report on robot reporters – supposedly able to write financial reports and TV previews – gave an example:

‘We catch up with our dastardly group along the rippling waters of the Riverlands.’

If that’s a fair example, journalists’ jobs are safe. When will human journalists stop swallowing the latest ridiculous claim for AI?


AI fake news – Turing test (not) passed

Guardian letter 1 (June 2014)

(A Guardian report gullibly repeated a ridiculous claim that the famous test had been passed.)

Image: Alan Turing

The Turing test has not been ‘officially’ passed at all. Turing said that most of the interrogators had to be fooled, and that the conversation would have to take a long time. Plus, it’s a chatbot, not an artificial intelligence program; and pretending to be a child whose first language is not English is clearly a cheat.

AI is impossible for the foreseeable future. Intelligence, evolved over millions of years, is a highly complex phenomenon that is not understood and therefore cannot be reproduced by computer code.

Take humour, for instance. Take one aspect of humour: irony. We take it for granted, but the subtleties of its production and perception are a million miles from ‘AI’ capability.

The AI project has produced some wonderful and life-saving developments in analysis and robotics, but it’s misnamed – and the discussion about intelligent robots is ridiculous.

To begin to create AI, developers (and their funders) should drop the impossible and wildly hubristic current top-level project.

They should slow it down and try to reproduce the evolution of intelligence by analysing its basic but highly complex components: vision and the other senses – and their coordinated interpretation.

Turing was right to make language the test for AI. It’s an end-product of the evolution of social animals. Its structure can easily be ‘analysed’, but it can’t easily be reproduced. The chatbot simulations are just pathetic.
