AI and Why We Write in the First Place

— September 09, 2024

Recently, the organization behind National Novel Writing Month (which challenges writers to write 50,000 words in the month of November) officially condoned the use of generative AI and said anyone who didn't like it was classist and ableist.

People got mad about that.

So, let's talk about AI for a bit, what it can do, what it can't do, and whether it should have any place in the writing process.

What do we mean by "AI"?

As always, let's define terms first. This post is not about the AI that defines enemy behavior in Pac-Man, nor about the fictional, self-aware AIs of Terminator and I, Robot. We are specifically talking about generative AI or large language models (LLMs).

In a technical sense, generative AI is closer to Pac-Man than Skynet. In science fiction—including science fiction that I wrote!—AIs are self-aware and sentient, capable of complex and original thought. But that's not how any of our current technology works, not now nor in the foreseeable future.

What we call artificial intelligence today is not, in fact, intelligent. LLMs are very powerful, very structured predictive text generators. They are very good at putting together strings of words that sound good and are grammatically correct (i.e., modeling language), but they have no idea what any of it means. They don't even have a way to know.
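
If "predictive text generator" sounds abstract, here's a toy sketch of the idea in Python. This is my own illustration, not how any real LLM is built: real models use neural networks trained on billions of examples, while this uses simple word-pair counts. But the core loop is the same shape: predict the next token from statistics, append it, repeat. Nowhere in that loop does meaning enter the picture.

```python
import random
from collections import defaultdict

# A toy "predictive text generator": count which word follows which in the
# training text, then pick each next word from those counts. (Illustration
# only; real LLMs use neural networks over vast corpora, not bigram counts,
# but the generate-one-token-at-a-time loop is the same idea.)

training_text = (
    "the hero draws her sword . the hero charges the dragon . "
    "the dragon breathes fire . the hero falls ."
)

followers = defaultdict(list)  # word -> every word that followed it
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    followers[current].append(nxt)

def generate(start, length=10):
    """Emit words one at a time, each chosen purely by what statistically
    followed the previous word. No meaning is involved at any step."""
    out = [start]
    for _ in range(length):
        options = followers.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("the"))
# Possible output: "the hero charges the dragon breathes fire . the hero falls"
# Statistically plausible, grammatical-ish, and understood by no one.
```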

This is an important point, and we can't get anywhere in discussing the topic unless we agree on it.

So.


What can AI do?

In an ideal world (ideal technically, not ethically; we'll get to ethics in a sec), generative AI could, in theory, do a bunch of things for writers, like...

  • ...brainstorm a list of ideas.
  • ...edit text to be grammatically correct.
  • ...write a whole damn story.

And that sounds amazing, which is why the CEOs of the world have been throwing everything they have at this tech.

But there are some inherent and (because of the way LLMs fundamentally work) insurmountable problems.

What AI can't do

Remember that part about AI not understanding what anything means? Turns out, that causes some problems.

AI can't brainstorm a list of original ideas. They might sound original to you, but there is nothing AI can come up with that hasn't been thought of or remixed already. In fact, because LLMs are trained to produce something that sounds good rather than something that is unique, the list you get will be the most mediocre ideas you can pull from a quick Google search. Helpful perhaps, but never ground-breaking.

"But, Adam, didn't you say there are no ideas so original that they are unlike anything that has come before?"

I did! I also said that novelty doesn't come from original ideas but from combining them with your unique life, experience, voice, and story.

An AI doesn't have any of those things.

An AI editor can't ensure the author's voice or intended meaning is maintained. Again, this is because AI has no idea what words mean. It only knows which words statistically tend to appear together in sequences that humans consider "correct" (plus whatever extra guidelines and guardrails its programmers placed on top of it). Your text will sound correct, intelligent even, but it will also sound generic. You will no longer be in it.

(Note that if you are looking for a way to make your text great while maintaining your intended meaning and unique voice, that's exactly what I do.)

AI can write a whole damn story but not a story that's worth a damn. Sure, it'll sound smart. Statistical models (and a soupçon of plagiarism) ensure that. But it won't mean anything. Nothing connects. Nothing has a point, and nothing is being said, because the AI has nothing to say and isn't aware that "saying something" with your story is even a thing.

Should writers use AI at all then?

In a brighter timeline, I believe there are versions of us discussing how AI can be used to help with all the tedious stuff humans have to do so we can have more time to do something that matters—like make art. Or at the very least, we could discuss how AI can enhance our creativity rather than make it worse.

For example, brainstorming mediocre ideas isn't all that bad! I do that all the time with a Google search, and it helps me trigger new, unique ideas of my own. And helping a poverty-stricken, non-native English speaker edit their story into passable English seems like a good thing. Even writing a whole damn story could be...

Well okay, I don't think that one's any good.

I mean, if I'm just using AI to churn out a story—even if I do the work of revising that story to sound good—what am I even doing at that point? I'm not making money. (Statistically speaking, publishing books is a terrible way to make money!) And I'm not even writing. I'm just editing someone else's mediocre prose at a loss.

In any case, those discussions are for a brighter timeline, one in which AI is 100% free and ethical. In our timeline, AIs have some ethical wrinkles:
  • Big LLMs are trained on authors' writing without their permission.
    • And they do an excellent job plagiarizing that writing without telling you it's plagiarized... because they have no idea.
  • Corporations want LLMs to replace human writers and editors in order to increase profits for the already-rich.
    • And as these corporations discover LLMs suck at writing, they try to rehire those human writers and editors to fix the LLMs' work at a fraction of their worth.
  • By all accounts, training and using LLMs consumes a lot of power—like way more than it should considering what little we get out of it.

If we could get around those problems—if AI had consent for all the data it was trained on, if corporations used it to make creative lives better, if training and using one didn't consume as much electricity as a single Icelandic citizen uses in a year—then sure, maybe, AI might be useful for things like brainstorming or grammar checking.

But those are real problems, and personally, I can't get past them. (And AIs are only mediocre at brainstorming and grammar checking anyway.)

I've heard folks say the tech will get better, these problems are fixable, etc., etc. But coming from the computer science field myself and having studied language models back in the 20th century (GOOD GOD!), I'm unconvinced. The technology hasn't changed very much in that time, only the amount of data and server power available (and the billions of investment dollars to make it look like things are better).

So, I won't be using AI for the foreseeable future. Writing is hard, but not because humans are bad at it. We're actually the only beings on Earth that are any good at it! Making a computer write for me (and not very well) just makes me wonder: What am I buying with that time, when instead, I could be making something new?

That's me. I'm curious to hear your thoughts (but do be kind in the comments if you want them to stay there).

Enjoyed this post? Stay caught up on future posts by subscribing here.


2 comments:

  1. That was an awesome tour through the questionable land of AI. I tested it a few times, like asking it to write a song about Bigfoot and Elvis eloping in a UFO, and it did it surprisingly well. But it also took all the challenge out of it, and the fun as well.

    Now I want to get a t-shirt that says this on the front:
    AI can write a whole damn story
    And this on the back:
    But not a story that's worth a damn
