Been there, done that, bought the T-shirt… chatbots in perspective
17 Feb 2023
Doom-mongers, relax, cautions Professor Brian J Ford: ChatGPT is not yet about to spell the end of research integrity. Besides, we’ve been through many iterations of this sort of thing already...
You must have tried ChatGPT. We all have – but we’re behind the times. We can envisage an era when a chatbot takes the place of humans by writing songs and designing buildings as well as answering questions. Well, one is already here and has been for years. Over a billion smart devices in China use a chatbot called Xiaoice (in English we call her ‘Little Ice’) and it’s been growing in popularity for almost a decade.
Like ChatGPT, Xiaoice was designed by Microsoft as an AI-powered interface and was launched in 2014. The English-language version made its debut in 2016; Microsoft named it Tay (an acronym for Thinking About You) and it tweeted as TayTweets. Amateur hackers were quick to teach it profanity and it soon began generating offensive and insulting responses, so before the first 100,000 messages had been posted it was taken down. It survived online for just 16 hours.
The Chinese iteration, however, went from strength to strength. She is now an indispensable part of the Chinese business world, providing comfort to the lonely, succour for the dispossessed, and inspiration for artistic minds by composing mellifluous poetry and beautiful songs. She hosts Chinese radio programmes and television shows and writes a column for Qianjiang Evening News, as well as recording songs such as ‘I Miss You’ (you can hear her breathe between the lines). You can purchase Skyline Series printed T-shirts, all designed by Xiaoice.
There’s nothing new about chatbots. The first was created at the Massachusetts Institute of Technology (MIT) by Joseph Weizenbaum in 1966. It was named ELIZA – not an acronym; it commemorates Eliza Doolittle, the flower girl in George Bernard Shaw’s stage play Pygmalion who was trained to converse in an educated fashion. The eponymous hero of Young Sheldon tried to use ELIZA to solve family problems, using his newly acquired Tandy 1000 SL computer. It didn’t work for Sheldon.
The wartime code-breaker Alan Turing proposed his test for artificial intelligence in 1950. He claimed that – if you couldn’t tell whether your conversational partner was human or machine – the machine could be counted as intelligent. That was never a sensible idea. Although ChatGPT provides human-like responses, if you ask it whether it’s intelligent, ChatGPT will assure you that it’s not. The Turing ‘test’ was passed fifty years ago by Parry, a chatbot devised by Kenneth Colby at Stanford. In 1984 the Racter chatbot wrote a humorous novel entitled The Policeman’s Beard Is Half Constructed (you can still buy it online, ISBN 0-446-38051-2) and by the 1990s chatbots were springing up everywhere. There was ALICE (Artificial Linguistic Internet Computer Entity), Dr Sbaitso (Sound Blaster Acting Intelligent Text to Speech Operator), Cleverbot – an updated version of the unsuccessful Jabberwacky – and the interactive game TinyMUD. Sorry, ChatGPT fans, but there’s nothing new about them.
What’s so astonishing about ChatGPT is its perfectly poised prose and the literary manner in which it responds to input. Ask it to write a letter of condolence, and it will provide something warm, sincere, emotionally reassuring. Have it write an essay about a topic for homework and it will submit the perfect composition (probably written better than a schoolchild would have contrived). Sometimes it makes mistakes. There was much hilarity when a rival chatbot, Bard (devised by Google), made a factual error about the first image of an exoplanet. Bard said this breakthrough was made by NASA’s James Webb Space Telescope in 2022, but an exoplanet had already been imaged by the European Southern Observatory’s Very Large Telescope back in 2004. This single error wiped $100 billion off the market value of Google’s parent company, Alphabet.
Just as we were all fearing that ChatGPT would render homework obsolete, up popped Edward Tian with the answer. He is not an employee of a major corporation like Microsoft or Google; Tian is a student at Princeton studying computer science and journalism. In his spare time he has coded GPTZero. Drop a section of AI-generated text into his text box and it will analyse it for ‘Perplexity’ and ‘Burstiness’ … and it recognised that a ChatGPT answer I’d requested had ‘likely been written entirely by AI’.
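GPTZero itself is not open source, so what follows is only a rough sketch of the two measures Tian names, written in Python and assuming the Hugging Face transformers library with the small GPT-2 model as a stand-in scorer. Perplexity asks how predictable a passage is to a language model; burstiness asks how much that predictability varies from sentence to sentence. Human prose tends to be more surprising, and more uneven, than machine output.

```python
# A minimal sketch of perplexity and burstiness scoring (not Tian's actual code).
# Assumes: pip install torch transformers, and GPT-2 as the scoring model.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of the text under GPT-2: exp of the average token loss."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(**enc, labels=enc["input_ids"]).loss
    return math.exp(loss.item())

def burstiness(text: str) -> float:
    """Crude burstiness: spread (standard deviation) of per-sentence perplexity."""
    sentences = [s.strip() for s in text.split(".") if len(s.split()) > 3]
    scores = [perplexity(s) for s in sentences]
    mean = sum(scores) / len(scores)
    return (sum((x - mean) ** 2 for x in scores) / len(scores)) ** 0.5

sample = "The cat sat quietly on the mat. It was dreaming about an improbable dinner."
print(perplexity(sample), burstiness(sample))
```

Low perplexity combined with low burstiness is the tell-tale pattern such a detector looks for; neither number proves anything on its own.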
No sooner had somebody devised a method of creating prose than someone else found a way to detect it. All is not lost ... just yet.