As I write, the stock market is taking a tumble due to the incestuous investing going on in the AI and technology space. It is a clear sign AI is not changing the world as quickly as investors had banked on. If the tech bubble bursts, we will be in for an extended period of economic hardship, which will only make the adoption of AI more painful for the people displaced from their AI-usurped jobs.
To soften the blow of this painful reality, we can look at the adoption of any other revolutionary technology throughout history and see how many jobs were actually displaced and how the market moved to accommodate that. In 1983, Nobel Prize-winning economist Wassily Leontief predicted that computers and automation would cause massive job loss. That same year, the New York Times reported that Leontief “thinks conditions facing today’s technologically unemployed are less like those of farm workers who moved to manufacturing and more like the plight of farm horses retired to pasture.” While many jobs were indeed displaced by computerization and the internet, the people unemployed by their advent soon found work in other fields. Jobs Leontief could never have predicted, like ‘social media manager’, were created to fill the gaps left by automation. To pretend AI is different from every economically disruptive invention of the past is naive. People will adapt, improve, and learn to do new things. The ceiling of human innovation has not been reached, and it never will be.
Artificial intelligence has been maturing for more than seventy years, turning from a mere idea into a self-correcting checkers player as early as 1952. Since then, AI has been slowly expanding into more and more industries. Even cute little Clippy was a precursor to the true AI agent, Microsoft Copilot. Now, it has seeped into every crack of our technological lives. Because of the breadth of the AI space, we need to define what we are talking about. As writers, we need to focus on understanding the impact of generative AI.
Generative AI is defined by IBM as ‘[AI] that can create original content such as text, images, video, audio or software code in response to a user’s prompt or request.’ This is not AI that gives you suggestions on how to correct your grammar or suggests a diagnosis to a doctor. It could most likely do those things, but gen AI can go much further and create things humans would consider works of imagination, even works of the spirit.
As authors, our main concern is obviously the writing that gen AI can produce. Already, many self-published books listed on Amazon have relied heavily on gen AI for huge parts of the work. Though many agents and publishers now have language in their contracts prohibiting the use of AI, there are most likely still authors using AI to do portions of the heavy lifting, like character or plot development, despite the deep ethical problems it causes.
All generative AI relies on its training. If its creators want it to write literature, it has to be trained on large datasets of existing literature. This is almost exclusively done without the permission of the authors whose work fills those datasets. That means that when a writer takes information about plot, character, or anything else from a generative AI model and uses it in their work, they are using purely derivative material that links directly back to the literature that was stolen by the AI companies.
As writers, we should be offended by the idea that any part of our work might be derivative and not from our own individual imagination. For some reason, in the general zeitgeist of culture, generative AI is getting a pass on copyright infringement. Something so fundamental to our livelihoods as copyright should be protected so that we can continue to write without fear of our work being ripped off by a soulless predictive model.
Fortunately, for those of us sitting on the sidelines waiting to see how all of this shakes out, there are authors who have gotten down in the weeds. There are currently more than 50 authors’ lawsuits against AI companies in the United States. None of those lawsuits have reached any definitive end, meaning we are still waiting to hear whether the courts will protect copyrighted creative expression. You can view all of their statuses here:
If these lawsuits lead to widespread regulation of generative AI, and AI companies are no longer able to train on copyrighted material, that will greatly limit generative AI’s ability to produce content that aligns with the current publishing market. Humans will continue to own their own work and will be able to create without fear.
If authors are not able to win, if we must bow to an unregulated environment where the minute we put out work we must begin to fear its corruption, there is still reason for hope. Many readers, especially those with an appreciation for well-written work, will demand human literature. Poetry and prose written by a spiritless, empty mechanism will be rejected by swaths of readers. Think of yourself. Would you be interested in reading something written solely by a computer? Would you walk into a bookstore and purchase a novel that said ‘Written by OpenAI’ on the cover? I certainly wouldn’t. When I pick up a novel, my deepest desire is not just to connect with the story inside the pages, but to discover that the spirit that wrote the novel is in some way like my own.
The traditional publishing industry is also currently a safeguard. Publishers refuse to publish AI-generated work, though if the potential earnings grow tempting enough, I suspect their qualms will fade. At least for the time being, we can lean on the larger publishers to ensure they reward real people only.
In the self-publishing space, where generative AI is already being used to produce whole manuscripts, AI will force the best authors forward and the worst authors out. If readers are unconcerned about the quality of the novel they are reading, they will most likely be unconcerned about the quality of its creator. What is the difference between a mediocre book by a human and one by a robot if your goal as a reader is simply to be entertained? I see this as a great thing for self-publishing. A call to excellence is exactly what the self-publishing space needs to gain legitimacy in the wider culture.
And now, I must address the elephant in the room. I use generative AI (namely Midjourney) to create images for my blog. It is an incredible tool for creating topic-specific images. Otherwise, I would be using stock images or terrible pictures I took myself. My blog’s aesthetic would be plainer, and I would have to work harder to procure images. So, I don’t want to suggest that AI has no place in any endeavor. People can use it to decrease the tedium of everyday or menial tasks and to add specificity to complicated questions. Even when its use brings these benefits, we need to be highly discerning and never pretend its products are our own work.
What I am saying is that AI should never be trained on copyrighted property, and we as a society should never let it replace the soulful expression of creativity. Authors and consumers have to draw the line of how far we are willing to let the creeping hand of technology wrap its grimy fingers around our minds. We must demand regulation for an industry that has the potential to disrupt what it means to be human. We have the power to speak up and act. It is up to us to decide what happens next.
Please let me know your thoughts on AI! I would love to hear what you agree or disagree with!