Man and machine or man vs machine?
6 Oct 2016 by Evoluted New Media
Can artificial intelligence carry out scientific research?
'Using intelligence to formulate new, usable knowledge from complex information' – a perfectly reasonable description of a scientist I’d say. So why did it stop me dead in my tracks?
Because, while it is a description of a scientist, that scientist happens not to be a human. BenevolentAI, a British artificial intelligence company aiming to develop therapeutics, has begun using a purpose-built version of the world’s most advanced deep learning AI supercomputer. Now this thing isn’t simply a fast computer – it will use something BenevolentAI calls a ‘Judgment Augmented Cognition System’ to speed up the drug development process.
The aim, they say, is to augment rather than replace human scientists – and I’m sure that is the case. For now. The work of science has, by its very nature, always been in deep concert with mechanical and computational advances. As instruments and equipment incrementally improve, so does the insight we can glean from experimentation – indeed, even the types of experimentation we can perform are entirely reliant on technology.
Memory, calculations, physical work – all have been augmented by technology. But thinking – the ability to use an intellect to draw together the strands of our findings in order to improve understanding…well, that has largely remained within the boundary of the soft grey-ish matter inside our skulls.
However, as the various strands of computation develop and draw together, we get ever closer to what could realistically be called AI. Now, as it stands, BenevolentAI’s new super-toy could be considered a ‘narrow AI’ in many regards. It essentially performs information processing; it can’t actually conduct physical experiments. Though this by no means renders it scientifically impotent. The company say it has already enabled breakthroughs by analysing millions of scientific articles and hundreds of medical databases. As a result, scientists (...the human kind) are able to draw conclusions faster than they otherwise could.
OK – in so far as it is a partnership, this seems a perfect solution. Not only that, solutions like this are vital – science faces problems that will, in all likelihood, only be solved this way. But how long until the hypothesis-forming human scientist is considered inefficient? The slowest cog in a machine under constant pressure to run ever faster. How long before the human scientist is simply the caretaker of their computerised counterpart? After all, we already have technology which can, to varying degrees, perform automated hypothesis generation and reasoning.
And of course all this runs in parallel to generalised concerns regarding AI. I am conscious of scaremongering here – and I’m not a computer scientist – but many leaders in the AI field are deeply troubled by the idea that artificial intelligence will leave our own behind. You’ll remember, no doubt, the open letter written in January last year by many specialists and academics (including a certain Professor Stephen Hawking) urging us all to consider seriously the fact that there may be pitfalls to technology of this type.
I have heard it said that the most worrying AI isn’t the one that passes the Turing test, it’s the one that fails it intentionally. When – or indeed if – this kind of AI becomes a pressing matter is uncertain, but it would be prudent to consider now the medium-term worries over how we as a species responsibly use AI (and you can read Dr Michael Reinsborough on this very topic in our latest edition). In the meantime, for scientific discovery, AI of the type used by BenevolentAI is an incredible opportunity – indeed, technology like this is inevitable as computer science develops. But I can’t pretend not to be nervous at the idea that biological tissue is no longer required to understand biology.
We know AI plays better chess…will it soon do better science?
Phil Prime, Editor