A worthwhile doomsday
17 Jul 2013 by Evoluted New Media
Never quite found a doomsday scenario to suit your scientific tastes? Look no further than robotic researchers taking over the world, says Russ Swan...
As the veteran of several ends of the world, I have to say I'm beginning to feel a bit cheated. I survived the Mayan apocalypse, the Millennium Bug, and even Nostradamus's 1999 King of Terror event, but still the credit card bills drop through the letterbox with depressing regularity.
The good news is that there is no shortage of ends of the world to look forward to, and you can probably find one to suit your taste. Religious zealot? Simply take the words of your chosen holy book and apply a bit of dubious extrapolation to create your own personalised rapture. Eco-warrior? Select from the many excellent environmental nightmares available to you, from climate change to genetic engineering to the extinction of the honey bee, and join a campaign to Do Something About It Before It's Too Late. Just plain bonkers? Perhaps you could try aliens or asteroids or the incarnation of Godzilla to deliver your own individual armageddon.
Many of those, with perhaps the exception of the book-based scenarios mentioned first, are potentially real threats to our continued existence and deserve at least a watchful eye. But what if you are a scientist? Where does the rational, educated, intellectual reader of Lab News go for a worthwhile doomsday?
Dear friends, look no further than The Singularity.
The term singularity can mean different things to different people – mathematicians use it for a point where a function blows up or otherwise misbehaves, while astronomers might use it as shorthand for a black hole (which, whatever its actual properties, presumably has some measurable dimension. You can go first with the tape measure).
The technological singularity is the proposed culmination of advances in technology, which have been following a steady geometric progression since the dawn of the industrial age. Probably the best-known example of this is Moore's Law, which describes the doubling of transistor counts – and, loosely, computing power – every couple of years or so.
The theory goes that this progression will at some point undergo a step change, and the growth in advances will suddenly accelerate. The event that will cause this step change is the application of machine intelligence to the design of greater machine intelligence.
Processor development driven by the human mind has given us the roughly two-year doubling period that Gordon Moore first observed way back in 1965. What might change, and relatively soon, is that computers become intelligent enough to design the next generation of processors themselves, and in less time than a human could.
The point here is that the doubling period would not simply be shortened; it would itself shrink with every generation. Say the first machine-designed chip took just half the time to design its own successor, and that successor in turn took half that time to create the third. Computing power would double in two years, then one year, then six months, then three months…
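The arithmetic behind this is worth spelling out. A quick sketch (with hypothetical numbers: a two-year starting period and a halving factor, neither of which is a prediction) shows why a shrinking doubling period implies a singularity – the total time to reach any number of doublings converges to a finite limit:

```python
# Hypothetical model: each machine-designed generation takes
# `speedup` times as long as the previous one to design its successor.

def time_to_doubling(n, first_period=2.0, speedup=0.5):
    """Total years elapsed after n doublings of computing power."""
    return sum(first_period * speedup ** k for k in range(n))

for n in (1, 2, 3, 4, 10, 50):
    print(f"{n:2d} doublings after {time_to_doubling(n):.4f} years")
```

The series 2 + 1 + 0.5 + 0.25 + … converges to four years: an unbounded number of doublings packed into finite time, at least on paper. Bam indeed.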
Bam! Singularity.
About four years ago, a bold claim which many see as a potential first step towards the singularity was made in the journal Science. A lab robot called Adam at Aberystwyth University, it was claimed, had made an independent scientific discovery. The machine had not only conducted the experiment – a fairly innocuous test of yeast strains – but had actually learned from the results to devise the next round of procedures. If we are to believe the hype, the RoboScientist was born.
I was sceptical then, and I remain sceptical now. We have seen plenty of advances in lab automation in the intervening four years, including two further doublings of processor power. What we have not seen is a machine that can demonstrate actual intelligence beyond the very narrow confines of an iterative process involving simple microorganisms. The Turing Test remains unpassed more than 60 years after it was devised, and estimates for the date when a machine can convincingly imitate human conversation remain stubbornly 20 years in the future: it was 20 years away at the turn of the millennium, and it is 20 years away now. Twenty years means at least ten further doublings or, to put it another way, in raw computing power we are barely one-thousandth of the way from pre-digital technology to the first true machine intelligence.
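The back-of-envelope sums here are easy to check, assuming the steady two-year doubling period described above (the "thousandth" is really 1/1024, since 2^10 = 1024):

```python
# Check the column's arithmetic: a machine intelligence forever
# "20 years away", with computing power doubling every 2 years.
years_away = 20
doubling_period = 2

doublings = years_away // doubling_period   # doublings still to come
growth_needed = 2 ** doublings              # factor by which power must grow

print(doublings)      # 10 further doublings
print(growth_needed)  # today's power is 1/1024 of the target
```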
A review of Adam and his ilk earlier this year in the open-access journal h+ concluded vaguely that there will be a time when Robot Scientists will work together with human scientists to advance knowledge. Hardly time to start building barricades.
But let's not get complacent. The step-change scenario could make nonsense of these arguments – and if so we in the laboratory community may have the dubious pleasure of having been at the vanguard of the apocalypse thanks to an over-smart lab robot that started thinking for itself. Forget Skynet – will we survive Adam?