AI isn’t going anywhere, says Matthew Partridge, so let’s stop worrying and think more about how to make it work properly.
It is hard to explain just how fast technology has developed in modern times compared with the rest of history. For the vast majority of human history, the pinnacle of warfare was the bow and arrow. The first short bows and arrows appeared around 40,000 years ago; it then took until the 1300s for us to work out that longbow versions worked better.
Not a particularly big technological change for 38,700 years’ worth of research. By contrast, the various technologies required to go from the Wright brothers’ first flight to landing people on the moon were developed in 66 years. And from the invention of the internet in 1983 to the first Rickroll was an astonishingly short 23 years.
This dizzying rate has understandably created a general sense of unease about new technologies impacting jobs and work practices. Ironically, science, the source of many of these advances, has not been immune to the scepticism and worry caused by this startlingly rapid growth of technology. Nowhere is that more true than with the growth of AI.
If I had a penny for every researcher or research-adjacent person I’ve spoken to in the last year who has expressed worry about the use of AI in science, I would have a pile of really annoying change I can’t spend anywhere, as no one uses cash anymore.
The worry ranges from the simple ‘this is going to take my job’ to the slightly more paranoid ‘I can’t trust anything because it’s probably AI-generated’. Both are, to some extent, probably true. But to explore them properly I’d need an article far longer than the editor allows, and I don’t want a frowny editor, so instead of a well-cited argument I’m going to go with the following: AI is bad... but probably not as bad as you think.
First off, AI is a tool that requires human agency. Despite all the spooky things it does, it still needs a person to tell it what to do, whether that’s giving it simple instructions like ‘Write me a funny blog post about AI in science’ or training the AI in the first place. This gives AI a lot of scope for doing the wrong thing, but it also allows it to be helpful.
Secondly, inventing things that save humans from doing things is literally how humans work. We can’t go getting all twitchy every time we find a way of replacing labour with something smart. I have a little motor that rotates a chicken in my BBQ; I don’t think anyone is decrying the loss of the turnspit dog. I’m also fine with retiring the large-data-set-analysis cat I keep.
Lastly, we can’t put AI back in the box; we’ve thrown away the packaging and the bin men have already been. AI is here to stay, and we now need to work out how to make it work properly and stop making upsetting grunting noises.
That might mean enacting standards and laws to slow it down to a pace we can deal with, but not engaging with it at all is simply not an option. AI is going to do a great many things, and it will do them well or badly mostly depending on how it’s used. My advice: start understanding how to use AI well, or you’ll end up using it badly and become part of the reason you didn’t want to use it in the first place.
Besides, AI needs more sceptical people looking at it carefully and prodding it; sceptical prodding is the best way to make anything better.
Dr Matthew Partridge is a researcher, cartoonist and writer who runs the outreach blog errantscience.com and edits our sister title Lab Horizons