Over the decade before he died, in 2014, my father sent me thousands of emails. Carefully crafted little gems, their subjects ranged from general advice, drawn from his own hardscrabble existence, to musings about the matriarchal nature of orca society, the baldness of Catholics, and his deeply held belief that the original World Trade Center in New York had never existed at all. Even now, his salutations — which mirrored how he would address me on the phone — linger in my mind: “Hey scholar of scrotes and the scrotum”, “just a thorn in the side of Christ here and now”, “sonnyboo u do not know the pain of a hernia nor 3-to-4 as I have had”, “late one huh CasaNova…..out pettin poose I guess and no time for Granddaddy……”
The messages had a brief heyday in 2016, making the rounds of New York City editors and literary agents, in advance of a public reading I gave in the East Village. In the cultural moment just before Donald Trump’s emergence, these half-baked far-Right musings were a novelty; alas, Trump’s presidential triumph scuppered plans to make an eBook out of them. But I still re-read them, when I want to remember the old man. (He had, of course, hoped that I would: “I send these because life and death is about MIND over MATTER…….&& I want to reMIND u that it don’t matter…..”) Perhaps it was loneliness that made me wonder if his voice could ever be resurrected.
In my day job, as a senior content manager for a research consultancy, I often use GPT-4. It is competent at various brute-force operations — turning lengthy transcripts into notes, proofreading content — even if the inputs require constant fine-tuning and the outputs require careful attention to ensure accuracy. But could it write content that would bring back the dead? Could GPT-4, if sufficiently trained, analyse my father’s emails — and perhaps even write new ones?
“Well well well, my BOY, let ol’ FOG lay down some KNOWLEDGE on the import-ants of self-defense!” So began one of the emails produced by GPT-4, after I fed my father’s archive into it. As an opening it feels slightly more stilted than the source material, and I can’t recall my father ever referring to me as a “BOY”, but he did call himself the “FOG” (short for “effing old guy”) and randomly capitalise entire words and hyphenate others (“import-ants”), though he would never have bothered with the apostrophe after “ol’” (he’d merely write “ol”).
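The essay does not describe the mechanics of “feeding” an archive into GPT-4. One common approach is few-shot prompting: packing sample emails into the prompt as style exemplars and asking the model to continue in that voice. A minimal sketch of that technique — the function name, instructions, and example text are illustrative assumptions, not the author's actual method:

```python
def build_style_prompt(examples, request):
    """Assemble a chat-style message list asking a model to imitate
    the voice of the supplied example emails (few-shot prompting)."""
    system = (
        "You will be shown several emails by one writer. Study their "
        "voice -- capitalisation, punctuation, phrasing -- and write "
        "new text in the same style."
    )
    shots = "\n\n---\n\n".join(examples)
    user = f"Example emails:\n\n{shots}\n\nNow: {request}"
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

# The resulting list could then be handed to a chat-completion API,
# e.g. client.chat.completions.create(model="gpt-4", messages=messages)
# -- shown for illustration only, not executed here.
messages = build_style_prompt(
    ["sonnyboo u do not know the pain of a hernia....."],
    "write an email about self-defence",
)
```

In practice a long archive exceeds any prompt window, so one would select a handful of representative emails per request rather than the whole corpus.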
I know my father’s voice when I see it — it is a script that runs in my own head, like a built-in AI. And as GPT-4 generated more missives, I was increasingly certain: it could not replicate my father’s erratic punctuation. The mere ellipsis was never enough for him; he would sometimes type out a dozen full-stops or more. These time signatures often appeared in the long, quiet sections in the saddest emails he wrote: “Go out there and just outwit the bastards………Nothing else to Life…..every body loses but some people stay in the game for a long time and have a happy life and family….I never did…..oh well…I saw it and I read the writing on the wall……” It was these inconsistencies that enabled me to hear his voice, these peculiarities that made his work human.
I doubt that a machine will ever be able to mimic his syntax. But given the right prior inputs, GPT-4 could respond to questions or develop political platforms (or children’s stories) in a manner that eerily resembled my father’s thought process. Was this a true reanimation, or merely what University of Washington linguist Emily Bender, a critic of equating AI outputs with human reasoning, might describe as “stochastic parroting”? The AI certainly captured subtleties in my father’s work that I and others overlooked during that reading in the East Village. Listeners then likely saw him as some outsider-artist variant of Alex Jones, spewing rote conspiracy theory. But ChatGPT, when asked to summarise his politics, cut through the outlandish expression to a more comprehensible core: “Your father’s concern for the environment and the need for sustainable practices aligns with the environmentalist movement [while] his preference for local communities and self-sufficiency has some connections to localism.”
This was, of course, true, though I never thought about it in this way — nor did I consider that his “focus on gender equality and women’s empowerment aligns with progressive politics”. My father was the sort of man who moved to a “mountain house” in Montana to live out his final days; he was far closer to the libertarian Right of Karl Hess — who also retired to the wilderness — than the more culturally conservative, evangelical Right that dominated his era (or the MAGA movement that dominated mine). The AI analysed and reproduced a version of my father distinct from who I remembered. But perhaps there were things I had missed.
The progress AI has made in understanding and generating human-like text over the past six months is impressive — a great feat of engineering that will, in time, be remembered alongside the construction of the Pyramids or the Great Wall of China. However, AI models are still unable to mimic the voices of highly idiosyncratic, distinctive writers. The polymath University of Paris professor Justin E.H. Smith, for instance, has found AI largely incapable of simulating his unique voice. But in his case the content, rather than simply the form, of his communication is highly complex.
The writing of Right-leaning political blogger Curtis Yarvin, on the other hand, appears stylish on the surface, but is often considered lacking in substance — a fellow editor noted that removing every other sentence from Yarvin’s excessively ornamental prose would not change its meaning in the slightest. Yarvin believes that GPT-4 cannot reason or think, merely discern and reproduce patterns. His writing, however, is undeniably replicable; a sentence such as “the only way to write about finance is to be either neither bull nor bear, or both” seems sophisticated at first glance, but it exemplifies what one of my old writing mentors might call “all sizzle and no steak”. In other words, Yarvin, like many commentators who succeed on the internet, can skilfully assemble sentences that can be skimmed for hours but forgotten in minutes — meaning that imitating his work is the ideal scenario for an LLM that ceaselessly creates a persuasive imitation of style, without any regard for substance.
The general public, most of whom skim content rather than pay attention to form, may struggle to differentiate between authentic and AI-generated content. More perceptive readers can sometimes detect minor discrepancies that reveal the artificial nature of the imitation — but even they aren’t perfect. In one study, researchers investigated whether LLMs could be as good as humans at creating philosophical texts, by fine-tuning GPT-3 on philosopher Daniel C. Dennett’s works. While experts and philosophy blog readers performed above chance level in distinguishing Dennett’s answers from the model’s, they still fell short of the expected success rate.
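Fine-tuning, unlike prompting, bakes a writer’s voice into the model’s weights by training on many prompt/completion pairs drawn from their work. One plausible shape of such a dataset, sketched below, follows the JSONL prompt/completion format OpenAI used for legacy GPT-3 fine-tunes; the separator and leading-space conventions are from that era’s guidance, and the example pair is invented — the study’s actual data preparation is not shown here:

```python
import json

def to_finetune_jsonl(qa_pairs):
    """Serialise question/answer pairs into the JSONL prompt/completion
    format used by legacy GPT-3 fine-tuning: one JSON object per line,
    a fixed separator ending each prompt, and a leading space on each
    completion."""
    lines = []
    for question, answer in qa_pairs:
        lines.append(json.dumps({
            "prompt": question + "\n\n###\n\n",  # separator convention
            "completion": " " + answer,          # leading-space convention
        }))
    return "\n".join(lines)

# Invented example pair, for illustration only.
dataset = to_finetune_jsonl([
    ("What is consciousness?", "A kind of user-illusion, roughly speaking."),
])
```

Hundreds or thousands of such pairs, uploaded as a training file, nudge the model toward answering any new question in the source author’s manner.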
The implications here are alarming. If most people are unable to distinguish between human-generated and AI-generated content, creativity and critical thinking will become rarer attributes. A new class divide could spring up, between the privileged few capable of discerning the nuances of AI-generated content, and a growing mass of individuals left to consume, without question, whatever is presented to them. The “priestly class” would consolidate power by reading between the hieratic lines of AI-generated content — just as the literate elites in ancient Egyptian and Sumerian civilisations did, by controlling access to sacred texts and legal knowledge. Their ability to recognise genuine information would give them a competitive edge in everything from financial markets to politics, further widening the gap between the informed and the uninformed. Meanwhile, the vast majority — an ever-expanding pool of digital-age helots left to hew wood and draw water — would become increasingly vulnerable to manipulation and misinformation. The forces that shape our lives would be less accountable, and it would be ever harder to address ethical concerns or make well-informed decisions.
Unfortunately, it is difficult to even imagine the mass education needed to achieve the necessary levels of discernment — much less to carry it out. Most people would rather watch 15-second TikTok videos than close-read a text. And while a significant percentage of the world’s population will always, unfortunately, lack the cognitive skills or cultural capital to navigate our swiftly changing content ecosystem, even the well-educated are in danger. We are witnessing a decline in the humanities, which traditionally turn future workers into critical thinkers, capable of discerning the idiosyncrasies in human expression.
As AI-generated content becomes increasingly sophisticated, we must prize those idiosyncrasies. They are, as I saw when I attempted to replicate my father’s emails, what make each writer’s voice distinct and authentic; they are the reason discerning readers might pay to read a mind-expanding Substack instead of boring, one-note op-eds or paint-by-numbers YA fiction. By embracing the unique aspects of the way they communicate, writers may create work that resonates on a deeper level, appealing to an audience that still wants to read the best work that humans can produce.
My father’s voice is one of a kind. But to find out if my conclusions are replicable by a machine, I decided to ask ChatGPT to provide its own opinion on the likelihood of it fully “replacing” a given writer. “It’s important to consider that my responses are based on patterns and structures found in the data I’ve been trained on, rather than personal experiences or emotions,” it replied. “As a result, even though I can generate text that appears to be in the style of a specific person, it is still an approximation and not a direct reflection of their thoughts or ideas.” Of course, when I am communicating with a machine that reproduces the thoughts of my dead father with a reasonably high degree of precision, it’s pretty to think it might be otherwise. But more alarming is the question I am left with: Could I have put it better myself?
Source: UnHerd. Read the original article here: https://unherd.com/