Google’s Demis Hassabis – misuse of artificial intelligence ‘could do harm’

It is a technology so powerful that – on a distant day well into the future – it could mean computers that are able to advise on the best way to treat patients, tackle climate change or feed the poor.

With such potential power comes huge responsibility.

Demis Hassabis, the head of Google’s £400m machine learning business and one of the world’s leading authorities on the subject, has now called for a responsible debate about the role of ethics in the development of artificial intelligence.

“I think artificial intelligence is like any powerful new technology,” Mr Hassabis, DeepMind’s co-founder, told me.

“It has to be used responsibly. If it’s used irresponsibly it could do harm.

“I think we have to be aware of that and I think that people developing that – us and other companies and universities – need to realise and take seriously our responsibilities and to have ethical concerns at the top of our minds.

“We engage very actively with [the artificial intelligence] community – at MIT, at Cambridge, at Oxford – so there are a lot of academic institutes thinking about this and we engage with them very actively and openly with our research.

“I think there are valid concerns and they should be discussed and debated now, decades before there’s anything that’s actually of any potential consequence or power that we need to worry about, so we have the answers in place well ahead of time.”

Mr Hassabis was responding to concerns about the development of artificial intelligence raised by, among others, Elon Musk, the technology entrepreneur and DeepMind investor, and Professor Stephen Hawking.

Prof Hawking told my colleague Rory Cellan-Jones that artificial intelligence could “end mankind”.

Making machines smart

Mr Hassabis is not at the “robots” end of artificial intelligence.

His work focuses on learning machines which are able to sift huge amounts of data and support human understanding of the exponential rise of digitised information.

“Artificial intelligence is the science of making machines smart,” he said.

“If we’re able to imbue machines with intelligence then they might be able to help us as a society to solve all kinds of big problems that we would like to have a better mastery of – all the way from things like disease and healthcare, to big questions we have in science like climate change and physics, where having the ability for machines to understand and find insights in large amounts of data could be very helpful to the human scientists and doctors.”


His world is a long way from Hollywood’s take on artificial intelligence. The Terminator or the beguiling Ava in Ex Machina might make for “good entertainment”, but their worlds are fanciful.

Computers, Mr Hassabis says, are nowhere near being able to ape human behaviour or overtake human thinking.

“Terminator is one of those examples that is very iconic, but extremely unrealistic in a number of ways.

“Certainly that’s not what I worry about,” Mr Hassabis said.

“It’s more where there are unintended things – something you might have missed, rather than people intentionally building systems to control weapons and other things.”

And this touches on the knotty subject of regulation.

Who is overseeing the development of artificial intelligence, which, whatever its present limitations, has the potential to fundamentally change the way we live?

Some have described its development as being as significant as embryology research and the ability to manipulate DNA.

Mr Hassabis said that Google is setting up an ethics committee to look at the work his company is doing.

“General AI is still in its infancy,” he said.

“So I think for a very long time it will be a complementary tool that human scientists and human experts can use to help them with the things that humans are not naturally good at, freeing up the human mind to make the leaps in imagination that I think humans are particularly well suited to.”

He reveals that there is already a lot of “dialogue” with official bodies, including the UK government.

“I think it’s much too early to think about regulation,” Mr Hassabis said.

“We’re very early in this technology phase, so we don’t really know yet what the right things would be to regulate.

“It’s not as simple as something like embryology where there’s physical stuff where you say: ‘Do we want this?’

“It’s much more difficult to define, and actually I think we need to have a lot more empirical work to get a better understanding of how these goal systems should be built, what values should the machines have, which I think will come over the next decade.

“And then that will give us an idea of what sort of things we could put in a regulatory framework.”

‘Proud’

London is doing rather well in artificial intelligence. DeepMind is based in King’s Cross and has grown into a 150-strong company of mathematicians and computer scientists.

Mr Hassabis urged the UK not to squander its leading position in the developing sector.

“We’re proud to be a UK company,” he said.

“And although we’re owned by Google, our whole operation is here.

“And actually it goes well beyond DeepMind into all our universities.

“Cambridge, Oxford, University College London, Imperial have very strong machine learning departments.

“It’s something the UK is extremely strong in and I think it’s a great UK success story.

“But unlike in the past – where we were also there at the dawn of the computer age and yet Silicon Valley ended up doing all the innovation and reaping most of the commercial benefits – we should make sure that we stay at the forefront of what will be an incredibly important technology in the next 10 or 20 years.”

By then, society may well need an answer to the question: who controls the machines?

BBC
