SEARCHING FOR THE SOUL OF AI

I have been vocally against AI/LLM chatbots “writing” fiction—“writing” anything—or making any sort of “art” since the first rumblings made themselves known, and I continue to rail against it as it accelerates into an uncertain future.

But something made me start to really think about this. Am I turning into that fusty old man ensconced entirely and forever in the Good Old Days? Am I reacting to AI like Woody Allen reacting to the computer?

If you’ve watched any Woody Allen movies in the last couple of decades, you’ll know what I mean. Here’s a bit of a scene from Blue Jasmine, in which Cate Blanchett plays a woman in the present day, a character her own age, forty-four when the movie was released in 2013, but speaking dialog written by a then seventy-eight-year-old Woody Allen, who remains under the impression that in order for Jasmine to take an online course in interior design, she must first take an in-person adult education course to “learn how to use a computer.”

In his book Apropos of Nothing, Allen restates his insistence on still writing on a typewriter, then doubles down on his Luddism by admitting that his wife has to change the ribbons for him. That’s a level of technological expertise still unavailable to him. This was the least of the cringeworthy moments in that book, which didn’t help me like the artist as much as I like his art. I know he’s old, but he is still alive in the world—the world of 2013, and the world of 2024. He is choosing not to participate.

In the period between closing up Alternative fiction & poetry and starting at TSR, I worked various “day jobs” while writing, and the one I liked the most was managing record stores. Once—and this was late 1994 or early 1995—an older woman came into the store and told me she wanted music for the car she’d just bought, apologetically adding that she liked “the older stuff, like Tony Bennett.” While walking her over to the Easy Listening section, I asked her if she needed a cassette or a CD. She looked at me as if I’d begun reading aloud from a paper on quantum mechanics, then shakily replied, “Um… it’s a Cadillac.” After walking with her back to her car and determining it was equipped with a cassette player, I sold her some Easy Listening tapes and she left happy. In that moment, though, I resolved to myself, at the age of thirty, never to be like that: whatever new technologies might arise to replace the cassette tape, I would never be so utterly oblivious to the world around me.

Okay then, fast-forward thirty years: in terms of AI, am I being that old lady who didn’t know the difference between a cassette and a CD? Will some youngster have to walk me out to my car to determine whether generative AI is a tool or an obscenity? Am I, as I approach the age of sixty, still as much a science fiction-raised technophile as I’ve always been, or am I growing incapable of rolling with real-world technological progress?

In her Big Think article “Why we reject new ideas,” Kristen French interviewed University of Utah business scholar Wayne Johnson, who said:

There’s a paper by Erik Dane from 2017 on something called cognitive entrenchment, which argues that the more expertise you have, the more kind of ossified your knowledge structures become, the less you expose yourself to other sources of information. And so it becomes really difficult to accept or value a new thing. We rely on experts to judge things, but novelty specifically by itself means that the expertise is less helpful. When something is new, nobody can be an expert on it. That’s why sometimes experts make such mistakes.

Am I cognitively entrenched when it comes to AI? I don’t believe I am, but would I be able to recognize my own cognitive entrenchment from within a state of cognitive entrenchment? And am I reacting to the reality of AI or the controversy around AI? As Johnson continues…

Disagreement makes people think the idea is risky. But I think the disagreement really tells us something about the evaluators: They’re using different criteria, they may lack the expertise for it. So the biggest thing that I would say is don’t give up when you see a lot of disagreement about an idea. Don’t just say, “Ooh, controversial, bad, we’re done.” So if you’re trying to see if ideas are good or not, come up with a really good evaluation checklist or criteria and have everyone use that.

Is there some version of an “evaluation checklist” for AI? The technology certainly seems to be built and developed entirely in the blind, with no consideration for the authors and artists it’s stealing from, with no consideration of the past, present, or future of human creative endeavor. It’s currently at a primitive enough state that it’s easy to see it struggling with the sense of coherence a human artist is innately capable of, but that bug having been identified, the tech companies—long running on “can we build it?” without a thought to “should we build it?”—are hard at work fixing it. When it inevitably, and by many accounts soon, crosses that invisible boundary, will I be rendered obsolete? Will any and every human author or artist be as important to the world as, say, a swordsmith is now? There are people who do make swords, as a hobby or to sell to a small community of collectors, but as a part of the economy or as a valued trade, they may as well not exist at all. That technology is a thing entirely of the past. Am I—are we all—the lost tradesmen of the near future? A wordsmith with an audience of dozens?

In his 1899 essay “On a Certain Blindness in Human Beings,” William James wrote:

We are practical beings, each of us with limited functions and duties to perform. Each is bound to feel intensely the importance of his own duties and the significance of the situations that call these forth. But this feeling is in each of us a vital secret, for sympathy with which we vainly look to others. The others are too much absorbed in their own vital secrets to take an interest in ours. Hence the stupidity and injustice of our opinions, so far as they deal with the significance of alien lives. Hence the falsity of our judgments, so far as they presume to decide in an absolute way on the value of other persons’ conditions or ideals.

Am I protecting my own domain by vilifying the field of generative AI? Writing, creative writing—fiction or non-fiction—is important to me; it’s sacred to me. And as William James said, “Wherever a process of life communicates an eagerness to him who lives it, there the life becomes genuinely significant.” This is my life. But is this new world of AI-generated art happening whether anyone, including me, likes it or not, and whether it turns out to be objectively good or bad? And in railing against it am I only closing myself off to “the new,” insisting on a typewriter instead of the current-gen Mac Mini I’m writing this on? As James said:

Yet we are but finite, and each one of us has some single specialized vocation of his own. And it seems as if energy in the service of its particular duties might be got only by hardening the heart toward everything unlike them. Our deadness toward all but one particular kind of joy would thus be the price we inevitably have to pay for being practical creatures. Only in some pitiful dreamer, some philosopher, poet, or romancer, or when the common practical man becomes a lover, does the hard externality give way, and a gleam of insight into the ejective world, as Clifford called it, the vast world of inner life beyond us, so different from that of outer seeming, illuminate our mind. Then the whole scheme of our customary values gets confounded, then our self is riven and its narrow interests fly to pieces, then a new centre and a new perspective must be found.

I consider myself a “pitiful dreamer, some philosopher, poet, or romancer,” and that’s exactly what’s closing me off to the “ejective world” of AI. After all, the atomic bomb created a new center, a new perspective: the Cold War and the threat of global annihilation. That sucked. The burning of fossil fuels brought about the new center, the new perspective of the Anthropocene. Technology is not always good, nor, even, is it always neutral. I can’t see generative AI or “AI art” as anything but a disaster in the making. But we survived—at least for now—the disasters of fossil fuels and nuclear fission, and in some ways are better for it. Can we outlast AI?

William James ends his essay with this, which might be a recipe for a sort of truce between human artists and the machines created to mimic them:

And now what is the result of all these considerations and quotations? It is negative in one sense, but positive in another. It absolutely forbids us to be forward in pronouncing on the meaninglessness of forms of existence other than our own; and it commands us to tolerate, respect, and indulge those whom we see harmlessly interested and happy in their own ways, however unintelligible these may be to us. Hands off: neither the whole of truth nor the whole of good is revealed to any single observer, although each observer gains a partial superiority of insight from the peculiar position in which he stands. Even prisons and sick-rooms have their special revelations. It is enough to ask of each of us that he should be faithful to his own opportunities and make the most of his own blessings, without presuming to regulate the rest of the vast field.

I can’t stop the tech industry from doing whatever they want to do, however clearly awful it might be to me, but will there be any way, ever, to get the same “hands off” from them? Is there a way to stop AI “art” from devouring the rest of the vast field of human creativity? Can we tame what Pandora has released, or at least peacefully coexist with it?

For now, at least, I have nothing to leave you with but questions, and this simple statement: Absolutely not one word of this post was in any way generated by any version of an “AI” or Large Language Model.

—Philip Athans

Fantasy Author’s Handbook is now on YouTube!

Did this post make you want to Buy Me A Coffee?

Follow me on Twitter/X @PhilAthans

Link up with me on LinkedIn

Join our group on GoodReads

And our group on Facebook

Find me at PublishersMarketplace

Check out my eBay store

Or contact me for editing, coaching, ghostwriting, and more at Athans & Associates Creative Consulting.

As an Amazon Associate I earn from qualifying purchases.

If you are a human author in need of a human editor…

Where Story Meets World™

Look to Athans & Associates Creative Consulting for story/line/developmental editing at 3¢ per word.

About Philip Athans

Philip Athans is the New York Times best-selling author of Annihilation and a dozen other books including The Guide to Writing Fantasy and Science Fiction, and Writing Monsters. His blog, Fantasy Author’s Handbook, (https://fantasyhandbook.wordpress.com/) is updated every Tuesday, and you can follow him on Twitter @PhilAthans.

7 Responses to SEARCHING FOR THE SOUL OF AI

  1. garyklinecc2014 says:

    Dear Philip:

    Thank you for your thoughts.

    So far generative AI makes a good show, but it can’t reason, at least not yet. I’ve been reluctant to even play with it, knowing it’s basically pretty noise for the most part.

    Your article makes some good points, though. Time marches on and all that, and even though we can’t stop tech from summoning another devil, it should at least be a devil we learn to know.

    Take care,

    Gary K.

  2. mjtedin says:

    You raise some very good questions here. Personally, I am not pro- or anti-AI. I see it as a tool we can use as we will. I don’t think it has the creative capacity to come up with something truly new. Could AI have written a post like yours that raises these philosophical questions? Probably not. On the other hand, I think it could be useful as a tool, much like we use spellcheck on our computers now. Of course, it is a sophisticated spellcheck, and even spellcheck misses errors. You can always tell when someone used spellcheck and didn’t proofread themselves or have a proofreader do it. In much the same way, we’ll likely be able to tell when something has been written by AI or by a creative human.

    • Philip Athans says:

      We can easily see the difference between a human author and an AI “author”… now. But what about next year? Ten years from now? Twenty? It’s not the primitive AI of today we have to worry about, but the thing that billions of dollars are being spent to push forward into… what? A machine that can generate a “Stephen King novel” that actually reads like a Stephen King novel, even after the real Stephen King is long dead?

      • mjtedin says:

        Maybe a more apt comparison would be between AI and CGI. In the 90s, we could tell when something was CGI due to the uncanny valley. By the 2020s, it’s hard to tell at all. Nevertheless, CGI hasn’t replaced actors, merely enhanced moviemaking. I think AI will end up being the same. It will be a tool to be used. It can and must be used with intentionality. It’s not going to replace authors, but it will make the writing process different, like computers did. We will probably still be able to tell when someone used AI to “create” content without artistry.

  3. mjtedin says:

    That being said, CGI is always best when used sparingly.

  4. Janetta Maclean says:

    Thanks, Phil. As always, you got me thinking. On AI, I have started to notice people using it on Google reviews, among other things. A very articulate and reasonable-sounding (late-19th-century “Oxford Academic” language) takedown of a harmless customer service agent doing their best to deal with an obnoxious customer (the reviewer) who didn’t have a receipt. The giveaway was his previous reviews, which were brutish and poorly constructed. What I fear is the immediate sympathy I feel for the well-written review, as well as my Anglophone bigotry in my reactions to the badly phrased ones. AI radically changes the way I accept the information given to me. If I can swallow something that easily, then who knows to what extent I can be manipulated?

    “I consider myself a ‘pitiful dreamer, some philosopher, poet, or romancer,’ and that’s exactly what’s closing me off to the ‘ejective world’ of AI.” I’ve been a big fan of William James since Uni, but ‘ejective’? It sounds vaguely obscene… what the heck does that mean??

    I would buy you a coffee but the exchange rate re Canada dollar is so pathetic it would cost me more than double, so I’ll share instead. Best wishes, Munirah

  5. Philip Athans says:

    That response was better than a “coffee” anyway. Thank you for this!

    I’m struggling with the existential dread, and it’s nice to know I’m not alone out here.
