A quick Google search will reveal that ChatGPT, the AI chatbot capable of producing intuitive, human-like responses to complex questions, is a threat to everything: our ability to think and to process knowledge, higher education, the essay, cybersecurity, integrity, accountability, and even Google itself.
The bad news is these headlines are probably right.
The good news is we’ve faced similar crises before and survived.
The first such crisis point came in ancient Greece, in the fifth century BC, with the spread of the written word. In the Phaedrus, Socrates, master dialectician, philosopher and champion of the spoken word, railed against writing, suggesting it would bring about mass forgetfulness and ignorance.
Why bother to learn or remember anything when you can just look it up in written form? Further, he suggested, the written word, which cannot be argued with, nor persuaded to change its mind even if out of date or incorrect, would produce generations of know-nothings who eschewed inquiry and intellectual rigour, preferring instead to unquestioningly rely on the written words of others.
For the ancients, writing did not destroy thinking
For Socrates, writing challenged the place of oral discourse and dialogue as the primary form of inquiry. If you don’t find things out for yourself through personal investigation and questioning, he maintained, how can you really claim to know anything?
Ironically, the only reason we can still discuss Socrates' aversion to writing is that someone (Plato) wrote down what he said.
A millennium or so later, we had long realised that writing was not the end of thinking – that what was written could be rewritten or rebutted, either orally or in the form of more written words. Instead of destroying our ability to create and process knowledge, writing, by the very fact that it freed us from having to remember everything we needed to know, allowed us to explore further and question more deeply.
Then, in the 15th century, along came Gutenberg and his moveable-type printing press, which gave us the first mass-produced commodity, the printed book: the ChatGPT-style crisis of its time.
The Catholic Church, then the gatekeeper of all knowledge and authority (in the Western world, anyway), warned that books imperilled the fabric of society. The printed word's ability to circulate ideas widely, cheaply and accurately threatened the all-knowing primacy of the church. A public able to turn to cheap, readily available texts, rather than the clergy, for answers to life's great questions was a recipe for heresy, sedition and revolution.
The church’s fears were well-founded.
Thanks to the printing press, the church's authority diminished. Heretical and revolutionary ideas did proliferate, fuelling the rise of humanism, the Reformation, the Industrial Revolution, mass literacy, secularism, public education and, eventually, the internet and now ChatGPT.
Writing and the printed word were feared because they challenged accepted modes of knowledge creation, husbandry and control. However, this led, in both cases, to a revitalisation of thinking and of the ways we engage with, create and share information.
Sure, there was initial uncertainty and chaos, but this was caused as much by those resisting change as by the change itself. It is very human to fear the new and unknown, and to fight to preserve what we believe.
Knowledge itself is no longer power
ChatGPT comes to us at a time in history when knowledge itself, the currency of writing and of the published word, has become so ubiquitous as to be, largely, worthless. Just about everything you might want to know is available somewhere on the internet, and for free.
Knowledge is no longer power. Power and worth now lie in the ability to synthesise information to form new and innovative ideas.
ChatGPT is perceived as a threat because it, and the improved AIs that will inevitably follow, have the potential to do that synthesis for us. That's a scary notion. Do we really want to place our trust in knowledge synthesised by AI? Surely the answer is no, at least not yet, and especially not by one as error-prone as ChatGPT.
But do we want access to tools that shortcut that synthesis, reducing the grunt work of research and offering us more time to innovate and create? To that I say yes. It is the nature of the most powerful inventions to arrive without a rule book. We grab such radically disruptive innovations and run with them, like children with knives, excited by the adventure, oblivious to the consequences.
AI, like writing and print, is not going to go away. We must embrace the chaos that ChatGPT will bring, let go of the ways of working and thinking that it makes redundant, and look for the ways in which, like writing and print, it can free our minds to do more and to reach further.
This may well be the end of the world as we know it (again), but if it is, I feel fine. Apologies to R.E.M.