This is a guest post by Rachel Jamison Webster.
This fall, students returned to school with AI software embedded in Google Docs and Microsoft Word. Students will use AI to do research and to generate poems and papers while instructors scramble to stay one step ahead of the game.
The speed of generative AI is thrilling and promises to streamline work even more dramatically than the invention of the steam engine. But not everything is about efficiency.
Psychologists, writers, readers, and educators can respond to AI by acknowledging what it can and cannot do. And we can trust that our work remains important because humans need connections with other human minds in order to learn and grow.
We are currently facing epidemics of loneliness, teen suicide, and mass shootings that are essentially problems of meaning, indicators that the human psyche is not being effectively reflected and nurtured by our society. Empathic listening, deep reading, and creative writing all counteract isolation by putting our thoughts and feelings into conversation with other human minds.
Educators can facilitate these meetings in the classroom and on the page. And we can get beyond questions of how to implement or penalize AI-based writing by teaching the processes of consciousness rather than simply the products. Works of writing are too often treated as products to be evaluated, graded, or rated. And in our culture we can come to think of ourselves as products too—more worthy if we accrue more degrees, get more likes, or garner more external markers of success. But this objectification of knowledge—and of ourselves—threatens to alienate us from the very processes that give life meaning.
I am an author who has taught creative writing at Northwestern University for 17 years, and I understand that generative AI has just ushered in a new age. As we grapple with what AI can and cannot do, we can reassert the value of human-to-human learning that happens through reading, writing, and research.
Human writing is a process that begins in the mind of the writer and continues in the mind of the reader, bringing information, intuition, thoughts, and feelings into new combinations. Reading writing by other humans inducts us into a method of integrating this “other”—whether it is a new idea, the inner life of a character, or a plot that we now know could happen. Studies by literacy scholar Maryanne Wolf have shown that “deep reading” accesses more of the human brain than tech-based reading. It results in self-reflection, unexpected insight, and empathic understandings that have wide social value.
The slowness of traditional reading is central to its power because it cultivates sustained attention. AI can summarize a book in a matter of seconds, and while this helps to identify plots and themes, it leaves no time for integrative thought, empathic development, reflection, or wisdom. And wisdom matters now more than ever because it teaches us how to stay conscious in hard times.
AI will change human consciousness by synthesizing information in new ways. But that expansion will exist primarily on the informational/material plane, while relegating humans to an increasingly passive role. Even when we are revising text generated by AI, we will act as prompters and managers—something like Amazon consumers or warehouse bosses—rather than creators. Sometimes this efficiency will be worth it, when our writing is task-oriented. But being a consumer is far less transformational than being a creator.
Human writing is transformational because it allows us to externalize and develop our thinking. We writers live for that surprising moment when we realize that we know more than we thought we knew, that we are more than we thought we were, that we really do “contain multitudes,” as poet Walt Whitman understood. Before we outsource this sense of discovery to technology, we deserve to cultivate the complexity within ourselves.
AI algorithms pluck facts, ideas, and phrases out of context without crediting their human sources. This disconnects us from our intellectual ancestors and disempowers us as humans, as it suggests that technology itself creates knowledge. Humans create knowledge, and confronting the shortcomings, particularities, and genius of other human beings is what inspires us in our own imperfect humanity. When we know what other humans before us have written and thought, we feel prodded to cultivate our own consciousness.
Because AI moves so rapidly, users may soon forget that the “data” it synthesizes is language first created by humans. This disconnection could cause more of us to give up integrative thinking, writing, and reading, or to do so in a way disconnected from human history. It is up to us—as humans—to cite and remember our human sources.
There are, of course, positive arguments for AI, most of which fall along utilitarian lines. But even as AI changes the nature of work, solves some societal ills, and shuffles and consolidates information, it cannot answer the most enduring questions of consciousness. Scientists note that while we have made great strides in understanding the workings of the human brain, we still do not know how consciousness arises. We do know, however, that meaning is created not just by gathering information, but by establishing connections between facts, ideas, experiences, and feelings. The human mind is still essential for creating such meaning, and for cultivating it in others.
Rachel Jamison Webster is Professor of Creative Writing at Northwestern University and author of Benjamin Banneker and Us: Eleven Generations of an American Family.