
More Than Words
By John Warner
I've been thinking a lot about writing in the age of generative AI, and I found that this book overlaps a great deal with how I see things. AI doesn't think, and AI doesn't feel, so what it outputs cannot be considered writing.
Notes & Highlights
Ten days after ChatGPT’s arrival, writing in The Atlantic, veteran high school English teacher Daniel Herman declared that ChatGPT meant “the end of high school English.”
No person or company appears to be making significant revenue from a generative AI–enabled or –enhanced application. The AI gold rush is primarily confined to speculative investment in companies that are promising something big in the future.
It is frankly bizarre to me that many people find the outsourcing of their own humanity to AI attractive. It is akin to promising to automate our most intimate and meaningful experiences, like outsourcing the love you have for your family because enduring the times your loved ones try your spirit isn’t worth the trouble.
Because ChatGPT cannot write. Generating syntax is not the same thing as writing. Writing is an embodied act of thinking and feeling. Writing is communicating with intention. Yes, the existence of a product at the end of the process is an indicator that writing has happened, but by itself, it does not define what writing is or what it means to the writer or the audience for that writing.
In many ways, writing is the act of saying I, of imposing oneself upon other people, of saying listen to me, see it my way, change your mind. It’s an aggressive, even a hostile act. You can disguise its aggressiveness all you want with veils of subordinate clauses and qualifiers and tentative subjunctives, with ellipses and evasions—with the whole manner of intimating rather than claiming, of alluding rather than stating—but there’s no getting around the fact that setting words on paper is the tactic of a secret bully, an invasion, an imposition of the writer’s sensibility on the reader’s most private space. (Joan Didion, “Why I Write”)
What ChatGPT and other large language models are doing is not writing and shouldn’t be considered as such.
Writing is thinking. Writing involves both the expression and exploration of an idea, meaning that even as we’re trying to capture the idea on the page, the idea may change based on our attempts to capture it. Remove thinking from writing, and the act is no longer writing.
Writing is also feeling, a way for us to be invested and involved not only in our own lives but also in the lives of others and the world around us.
Reading and writing are inextricable, and outsourcing our reading to AI is essentially a choice to give up on being human.
If ChatGPT can produce an acceptable example of something, that thing is not worth doing by humans and quite probably isn’t worth doing at all.
Deep down, I believe that ChatGPT by itself cannot kill anything worth preserving. My concern is that out of convenience, or expedience, or through carelessness, we may allow these meaningful things to be lost or reduced to the province of a select few rather than being accessible to all.
Generative AI does not “review” anything. It has no capacity for consideration. It has no taste or worldview.
Generative AI does not “remember” anything. While it does have the capacity to fit future prompts to past responses as part of a chain, it is not working from memory rooted in experience as we understand it in humans.
Generative AI is not doing what Menand does when writing a poem. It has no capacity for working from intention in the way humans do as they write.
Large language models do not “write.” They generate syntax. They do not think, feel, or experience anything. They are fundamentally incapable of judging truth, accuracy, or veracity. Any actions that look like the exercise of judgment are illusory.
Bjarnason suggests that just as those who seek out psychic advice are likely to believe in paranormal connections to the beyond, those who go to large language models are predisposed to want to find intelligence in the tokens delivered in response to their queries. To begin, they have likely been exposed to some measure of hype about the capabilities of the technology. To test that intelligence, they ask about things they know, and if the answers reflect what the prompter knows and believes, a kind of kinship is established. The kicker is that even if something in the LLM’s reply is off, the eager seeker of intelligence will re-prompt, putting the LLM back on the right path, much as when a psychic says, “I’m seeing a dog, a Labrador,” the mark responds, “No, but we did have a chihuahua,” and the psychic replies, “Yes, high-energy dog. That’s what I was seeing.”
The things ChatGPT is “smarter” at—primarily the speed and efficiency of production—are relatively limited as compared to our human capacities for experience, reflection, analysis, and creativity, at least as long as we continue to value things like experience, reflection, analysis, and creativity.
We are people. Large language models will always be machines. To declare the machines superior means believing that what makes humans human is inherently inferior. I acknowledge that there are many people in the world who believe this is the case, that our fragile, frequently malfunctioning, inefficient meat sacks cause us all sorts of problems, but this does not mean we must view a possible cyborg future as some kind of “progress.”
Generative AI models are trained on what has happened in the past, enshrining that world as the basis for their syntactical assemblages. To consider how this is a potential problem at a basic level, imagine that ChatGPT were primed with writing that goes no further than 1955 and ask yourself how racist the output would be.
If a hostile foreign power detonated an EMP or three over us, wiping out our entire electronic infrastructure, we’d have a hard time figuring out the route to the nearest Starbucks and then tipping the barista, but we’d also have bigger problems to deal with under that scenario.
What I want to say about writing is that it is a fully embodied experience. When we do it, we are thinking and feeling. We are bringing our unique intelligences to the table and attempting to demonstrate them to the world, even when our intelligences don’t seem too intelligent.
Writing involves both the expression of an idea and the exploration of an idea—that is, when writing, you set out with an intention to say something, but as part of the attempt to capture an idea, the idea itself is altered through the thinking that happens as you consider your subject. Anyone who has written has experienced one of these mini-epiphanies, which are unique to the way humans write.
The synthetic text ChatGPT produces is convincing because we confuse those surface traits for genuine meaning, often imputing intelligence (particularly in education contexts) to text that is, by and large, as featureless and indistinct as possible, though “correct.” It’s interesting that this correctness is conflated with intelligence, perhaps because it is identifiable, explicable, and easy to compare between texts, but that doesn’t mean it is something we should necessarily value.
If an idea is the atom, the true building block of writing matter, consider the notion a subatomic particle, perhaps along with the “inkling,” “sense,” “suspicion,” and “hunch.”
Rebecca Solnit, author of more than twenty books, including Men Explain Things to Me and A Paradise Built in Hell, was asked for her feelings about ChatGPT and other LLMs after the revelation that her books had been part of a database of pirated texts that were used to train generative AI applications.
I’m a writer because I want to write. I don’t want a machine to do it for me. I’m a writer because the process of writing is creative in what I do with language, but also in how I understand the subject. I often feel that I don’t think hard enough about things until I have to write about them. Often my understanding changes in the process of writing. That’s exciting for me. That’s my own development, which, ideally, is somehow also something I can share with the readers.
I’m engaging in thinking, and what is the point of handing over the job of thinking itself, of understanding something more deeply, of seeing the pattern that underlies it? Why would I want to give up that profound experience?
We tend to view thinking as a solo activity, emblematized by Rodin’s famous statue The Thinker: hunched over, fist on chin, absorbed in thought. But with writing, at some point, the thinking ends, and we uncurl ourselves and present the product of our thoughts to an audience.
Writing is communication. Writers are responsible for the impact of their words on the community.
In terms of skills, writers must be able to conceive, draft, revise, and edit a piece of writing. They have to be able to make sentences that prove pleasing to the audience’s sensibilities. This skill set suggests they must also be able to analyze the needs of their audience, just as chefs think about the tastes of their diners. Like chefs, writers must be able to think deductively and inductively, to look at the material they have to work with and craft a message, as well as to look at the messages of others and understand how and why they work.
For knowledge, writers have two realms they must be concerned with: their knowledge of writing as a process—essentially the ways writing works—and their knowledge of the subject matter they are writing about.
In terms of attitudes, writers must be curious, open (but also skeptical), empathetic, and obsessive. They must be comfortable with ambiguity and complexity and oriented toward being accurate both in what they share of their own ideas and in how they convey the ideas of others.
I think the fact that our writing practices are hidden from audiences is one of the reasons so many people so readily came to accept what ChatGPT is doing as “writing” as opposed to automated text production.
The 10,000-Hour Rule has been debunked repeatedly, including by Anders Ericsson himself, who declared that Gladwell got the research wrong and that “there’s nothing magical or special about ten thousand hours.” A meta-analysis across a number of different activities found little relationship between the amount of practice and the resulting performance, with practice accounting for only 4 percent of the variance in educational activities and 1 percent in professional activities.
The 10,000-Hour Rule and Angela Duckworth’s grit theory are manifestations of a particularly American attitude toward self-improvement: the belief that a better life is right around the corner if you can simply identify and embrace “one key thing.” This attitude dominates fitness and wellness spaces as we’re informed of the optimum diets and workouts. Businesses chase one fad after another in the pursuit of increased employee productivity and profits. It’s not incidental that the business and self-help sections in the bookstore are virtually indistinguishable when it comes to the prevalence of books that promise to “unlock the keys to success” with this “one simple rule/tool/principle.”
Believing that there is “one key thing” and falling for the repeated promises of those who sell such remedies is a natural outgrowth of not wanting to deal with the inevitable complexity of operating in the world as it actually is.
So, according to the raft of research and examples Grant has mustered for his book, what does matter when it comes to improving our practices? We benefit from three big principles: making sure practice is purposeful, varied, and fun. Essentially, we develop best when we ignore that we’re trying to get better at something and instead just do a bunch of stuff that’s related to our big-picture goal. Our orientation should be toward finding the best fit for our interests rather than relying on grit, because that fit makes it much easier to be gritty.
I believe ChatGPT is viewed as a desirable alternative because, sadly, most people have not been given the chance to explore and play within the world of writing. We have taken something that is dynamic, useful, and uniquely human and turned it into a series of rote exercises with limited or even absent purpose. This is true whether we’re talking about school, work, or otherwise.
Writing is communication within a community, and the circle is closed at the moment of reading. Because we are unique individuals, the potential results of these joinings are infinite.
Reading is thinking and feeling in all the same ways as writing. Reading is a process that allows us to better understand the world and one another, sometimes even achieving something like virtual or alternate reality in our own minds as we join with the thoughts of others.
First, reading is not an innate biological function like speech. It is an adaptive behavior that cobbles together the frontal, temporal, and parietal regions of the left hemisphere, unlike speech, which is largely confined to a specific region.
Second, reading undergirds other aspects of our overall cognitive development around memory, critical thinking, and empathy, among other things. Reading invokes the brain’s “plasticity,” the ability to adapt to new challenges. Different kinds of reading develop different aspects of our reading brains.
Finally, and perhaps most importantly, digital texts are changing the way we read and appear to be threatening the skills of deep reading, the ability to be totally absorbed in a text. Our capacity to concentrate on a text is undermined by a culture in which we are expected to spend far more time skimming and assimilating large volumes of information than deeply considering the ideas and concepts in those texts.
Prior to being disgraced and convicted of fraud as a cryptocurrency Ponzi schemer, Sam Bankman-Fried told a journalist who had expressed his own love of books that Bankman-Fried would “never read a book.”
After the journalist reacted with surprise, Bankman-Fried elaborated, “I’m very skeptical of books. I don’t want to say no book is ever worth reading, but I actually do believe something pretty close to that. I think, if you wrote a book, you fucked up, and it should have been a six-paragraph blog post.”
Writer and critic Maris Kreizman calls this the “bulletpointification” of books and believes it is endemic to a tech culture that fetishizes optimization. “It seems to me that there is a fundamental discrepancy between the way readers interact with books and the way the hack-your-brain tech community does. A wide swath of the ruling class sees books as data-intake vehicles for optimizing knowledge rather than, you know, things to intellectually engage with.”
If the large language model is going to be useful in the realm of “reading,” perhaps it is as an assistant whose job is to monitor and sort digital texts and to bring forth the most relevant information responsive to my specific request on demand.
Khan is merely the latest in a long line of men—and they are all men—who believe that the “problem” of teaching can be solved with a teaching machine.
It is not coincidental that teaching was (and still is) a female-dominated profession, while the engineering boom of the 1950s and 1960s was almost exclusively the province of men. This disrespect for teaching rooted in mid-twentieth-century sexism continues to be manifested today as teachers are subjected to an ever-changing list of demands without being given the time and resources necessary to do the job.
One of the ideas we must renew is that we are not the sum total of our averages. When we reduce individuals to averages and then constrain their behaviors based on those averages, we are restricting freedoms. Generative AI content is, by definition, a great averaging of what’s in the world. An embrace of this output is a kind of capitulation to the machine, rather than staying true to our nature as creatures.
You can’t think, read, research, study, learn, or teach everything. To choose one thing is to choose against many things. To know some things well is to know other things not so well, or not at all. Knowledge is always surrounded by ignorance.
Our communities inevitably must contain both those with whom we agree and those with whom we differ. As long as they are willing to see themselves as members of the community with the well-being of the community in mind, they should be welcome.