It may not be robots that researchers need to fear. Dr Matthew Partridge suggests, however, that the onward advance of modelling might mean we humans will soon need to look to our laurels.
History books report that the first simulation experiment was conducted by John von Neumann and Stanislaw Ulam, who wanted to understand the behaviour of neutrons but couldn't afford to run any of the real experiments. Rather than write a grant application, von Neumann and Ulam invented an entirely new scientific field, which seems extreme but, given the forms involved in grant applications, entirely understandable.
Now modelling and simulation are not only fields in their own right (well, actually a whole host of fields) but also a common toolkit that helps researchers with experimental design. In fact, whole grants are given over just to examining the use of modelling and simulation in new fields, which seems a little ironic given the reason modelling was invented in the first place.
It’s a fast-growing and improving field. Back when I first experimented with modelling, it was like playing with an early chatbot. The model could do very simple things and sort of convincingly give you back exactly what you expected, but the second you asked it a tricky question it stopped making sense. In the case of looking at chemical structures, I remember one cutting-edge model that seemed to think it was totally okay to have bonds that crossed through the middle of four carbons. Future versions, I assume, had a short piece of code explaining that you can’t drill holes through atoms.
In just a few decades the field has come on in leaps and bounds. Tellingly, modelling and simulation conferences are now focused not only on how to make better models but also on how the heck to manage the huge amount of fascinating data they are producing. I guess we’ve not managed to simulate a model that saves all its data in the standard researcher filing system, also known as a folder called ‘TEMP’ on the desktop.
Given advancements in lab automation and computer-controlled experimental systems, it feels likely that self-training, self-testing computer models can’t be far away. At the moment, models have to be improved using a feedback loop of real data and experiments, but how long until the computer model just includes a line like “IF model_fitness < 0.95: run(pipette) else: generate paper.pdf”?
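For the curious, here is a minimal sketch in Python of the sort of closed loop I mean. Everything in it is hypothetical: fit_model, run_robot_experiment and write_paper are invented stand-ins for the real steps, not any actual lab-automation API.

import random

# A purely hypothetical closed loop: refit the model, and if it still isn't
# good enough, send the (imaginary) robot off to collect more data.

def fit_model(data):
    """Pretend to refit a model; fitness creeps up as more data arrives."""
    fitness = min(1.0, 0.5 + 0.05 * len(data))
    return {"points": len(data)}, fitness

def run_robot_experiment(model):
    """Pretend the pipetting robot collects one more data point."""
    return [random.random()]

def write_paper(model, fitness):
    """Pretend to generate paper.pdf."""
    return f"paper.pdf written (fitness {fitness:.2f}, {model['points']} data points)"

def closed_loop(target_fitness=0.95, max_rounds=20):
    data = []
    for _ in range(max_rounds):
        model, fitness = fit_model(data)
        if fitness >= target_fitness:
            return write_paper(model, fitness)
        data.extend(run_robot_experiment(model))
    return "ran out of patience (or reagents)"

print(closed_loop())

Run as written, the pretend model reaches its target fitness after a handful of pretend experiments and declares the paper written, which is roughly the career trajectory I'm worried about.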
As an experimental scientist, I find this fast progression in experimental modelling a little unnerving. It is not helped by modelling scientists who say things like “modelling can’t replace real researchers, yet”, which would be a lot more reassuring without the ‘yet’. It doesn’t take a PhD in futurology to see that the writing is on the wall: I’m going to be replaced unless I can adapt.
Perhaps, before it’s too late, I can move my research into areas that are harder to model. I have no idea what those are, and I feel like the list is getting smaller every day. Or maybe the smart thing to do is to work at adding more of a human element to the research I already do, and make sure the values I bring as a human researcher are highlighted. Values like our ability to inject random ideas and mistakes into the scientific process. Some of the best discoveries happen through human mistakes or whims, so why not lean into that and become a proponent of research done by random accident?
Next time I go to check on my experiment, I’ll fiddle with one of the cables, maybe change a setting, spill a reagent and then go to lunch early without turning the data logger on. I’d like to see a fancy computer model simulate that!
Dr Matthew Partridge is a researcher, cartoonist and writer who runs the outreach blog errantscience.com