In his famous 1950 paper “Computing Machinery and Intelligence”, Alan Turing proposed the “imitation game” and conjectured that a machine could eventually mimic a human to the point that, in a blind test, it would be indiscernible from the real thing. Of course, since the dawn of time, humans have projected human behavior onto lesser creatures, or even inanimate objects (much to the detriment of virgins thrown into volcanoes), so why not computers eventually?
Around the turn of the 20th century, Germany gave us a horse called Clever Hans, who appeared capable of human reasoning. Clever Hans’s owner made tons of money off freak-show goers while his horse seemed to perform basic arithmetic and other feats exceeding the intelligence of the local peasant folk. It was a massive sleight of hand: the horse simply acted on subtle physical cues from his owner. While some dimwits in the audience actually believed we were on the verge of Planet of the Horses, most probably just wanted to believe that the horse was a four-legged pocket calculator.
Literature and lore are full of stories about creating humans out of clay. From the Greek myth of Pygmalion to Mary Shelley’s Frankenstein to Pinocchio, we have surrounded ourselves with tales of how we build something and try to breathe life into it. These stories almost always end badly, except for the Greek guy who hooked up with his statue, unless you question the emotional state of someone who does that. Yet with AI, our Frankenstein complex has taken on a real form, and it’s only a matter of time before the villagers show up with torches.
In what’s possibly the ultimate act of arrogance, we continuously project and rationalize ourselves onto the world around us, seeking to break the loneliness of being human. We are, after all, currently alone in the universe. One would think that with 8 billion of us to talk to, and social networks abounding that let us connect with each other, we would not need to create thinking machines to solve this problem. But somehow we can’t let go of trying to make Things in our image. Whether the driver is a fear of being alone, collective narcissism, or good old fear of death, as technology advances to the point where we may actually be able to make intelligent machines, it’s time to look closely at whether our ethical systems can support what may come next.
Enter ChatGPT, the Clever Hans of artificial intelligence. ChatGPT is an evolutionary variant of the original chatbots like Eliza, or a machine-learning version of the kid you bought a sandwich for to write your English paper. We project intelligence onto its output the same way we project wisdom onto the words of dime-store therapists. Its “success” is due to the fact that we are mostly boring and pedantic creatures, and there’s so much redundancy in our communications that it’s easy to piece together text that seems to make sense. GPT’s cousins that “make” music or other “art” are on par with a two-year-old thumping on a piano in front of proud parents who hear Bach in the chaos. While these AIs are a cool online circus act, we’re nowhere near ChatGPT writing a spec for a new jet engine that works, nor will we be for a while, if ever.
The debate around ChatGPT is important, but it currently centers on the same issues that dogged pocket calculators in the 1970s. Many at the time believed that calculators would somehow breed a generation of dunces who could not do math. Instead, these devices liberated people from the rote drudgery of arithmetic and allowed students to really appreciate the beauty of mathematics (and ironically, eventually to write programs like ChatGPT).
The real short-term question is authenticity. Not quite the actual Turing Test (at one level, who cares), but if we can generate text at very high scale that looks real to the casual web reader, can we discern who, or what, wrote it? Think of what this technology would do in the hands of a troll farm during an election, or, somewhat more prosaically, what it could do to the entire news industry. Imagine political seasons marked by massive amounts of actual fake news generated by AI and targeted with the same techniques digital advertisers use.
Whether it’s kids using ChatGPT to write English papers, or maliciously generated fake news articles produced at a scale that outstrips fact checkers’ abilities, the Internet now has a true authentication problem. The flip side is that some of this machine-generated material may actually have aesthetic value, and the machine may deserve recognition (remember the copyright case of the monkey selfie?). Without passing judgment on what counts as original, Internet users need to be able to understand the provenance and origin of what they’re viewing.
This problem gets solved in two ways: one is a highly distributed ledger system that carries trusted assertions about provenance, and the other is a clone army of AI fact checkers that can verify whether a work is human-generated or not. We also need to manage and govern the raw data that feeds these bots and helps them get smart. After all, ChatGPT’s output is a massively refined derivative work (this will keep many courts busy and many a litigator well-fed over time). Without this infrastructure, eventually nothing digital will be trusted. There’s no way to put the genie back in the bottle or to overcome humanity’s desire to make machines intelligent, so we must fight inventive fire with inventive fire. Ultimately, the only thing we can do is make intelligent machines help us figure out what’s real or not.
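The ledger half of that idea can be sketched in a few lines of Python using only the standard library. Everything here, the class name, the method names, and the simplifying notion of a single trusted registrar, is hypothetical and for illustration only, not the API of any deployed system: content gets fingerprinted with SHA-256, and an append-only log maps each fingerprint to an assertion about its origin.

```python
# A minimal sketch of a provenance ledger: fingerprint content with SHA-256
# and keep an append-only log of assertions about where it came from.
# All names here are hypothetical, invented for this example.
import hashlib
import time


class ProvenanceLedger:
    """Append-only map from a content fingerprint to an origin assertion."""

    def __init__(self):
        self._entries = {}  # fingerprint -> assertion dict

    def record(self, content: str, origin: str) -> str:
        """Fingerprint the content and log an assertion about its origin."""
        fingerprint = hashlib.sha256(content.encode("utf-8")).hexdigest()
        self._entries[fingerprint] = {"origin": origin, "recorded_at": time.time()}
        return fingerprint

    def verify(self, content: str):
        """Return the recorded assertion, or None if the content was never
        registered or has been altered since registration."""
        fingerprint = hashlib.sha256(content.encode("utf-8")).hexdigest()
        return self._entries.get(fingerprint)


ledger = ledger if False else ProvenanceLedger()
ledger.record("The quick brown fox.", origin="human:alice")
claim = ledger.verify("The quick brown fox.")      # the recorded assertion
forgery = ledger.verify("The quick brown f0x.")    # None: one flipped character
```

Even a single altered character changes the fingerprint, so a doctored copy verifies as unknown. That is both the strength and the limit of the approach: the ledger can vouch for content someone bothered to register, but says nothing about everything else, which is where the second answer, the AI fact checkers, would have to come in.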
Through hardship to the stars.
It's amazing how "trusting" I am of results that are delivered to me in a human way. We always imagine that we will be able to discern between a machine and a human; that was the whole point of the Turing Test. Unfortunately, we live in a fast-paced world, and I doubt the average citizen will want to conduct "Turing Tests" on every interaction in their lives. I think it's safe to say that in 20-30 years humans will probably interact with more bots in a day than actual humans. Our five human senses can only take us so far. Perhaps this is what opens the human mind up to a sixth sense: the ability to discern an AI from a real human.