On the Importance of Democracy

— September 16, 2024


One of my kids told me they didn't really think they were gonna vote when they turned 18, and I felt like I failed as a father. I know that's a common feeling (am I right, fathers?), but it drove me to action. I don't want to fail them, and I don't want to fail you, so you get to be my temporary children for the next few minutes.

You gotta vote.

I don't mean that in a burdensome obligation kind of way, but in a "Hey, it's actually pretty cool we live in a time and place where our opinion has meaning!" kind of way.

It's a safe bet that you have lived your whole life in a democracy. I know I have. Because of that, it's easy to take voting for granted: (1) we assume we can always do it (that's what all countries do, right?), and (2) it doesn't feel like our vote does anything.

But here's the thing. If you live in a country without a democracy (or with a fake/failed democracy, like, say, Russia), your opinion is worthless—sometimes even dangerous. The people in charge of your country/state/city/school are chosen by other people for reasons you don't even get to know about. The law is whatever those leaders say it is. And there's nothing you can do to change it short of some sort of rebellion, which is notoriously difficult to organize and bad for the health of everyone involved (historically speaking).

Voting's easy though. Among other things, the organizing has been done for you, and most laws ensure a minimum of bloodshed. Most importantly, your voice matters.

Yeah, your voice doesn't make change alone—it's the collective voice of thousands or millions of people—but your voice is part of those millions. Change happens when we speak together.

Contrary to popular opinion, there are electable representatives who care about people and who will fight for change that serves all people. These candidates aren't always available at the highest levels of government, but guess what! The highest levels of government are not the ones that matter the most!

Sure, it'd be nice if the federal government finally ended Daylight Saving Time, raised the minimum wage, or did literally anything about 70% of the world's mass shootings. But state and local governments can and do make those kinds of changes all the time, and your vote carries orders of magnitude more weight in those elections. And when enough cities and states make a successful change, the federal government eventually just goes along with it.

And while you're there, vote for the highest levels of government too. It's just one extra dot.

Voting isn't the end-all fix to the world—nothing is. But so long as we live in a place where it's an option, voting is one of the easiest, most important ways to help.

I know there's a lot going on in the world right now. Hope is a hard thing to maintain, but hope is absolutely vital to life, liberty, and the pursuit of happiness.

Voting itself is a kind of hope, and you know what they say....





AI and Why We Write in the First Place

— September 09, 2024

Recently, the organization behind National Novel Writing Month (which challenges participants to write 50,000 words during the month of November) officially condoned the use of generative AI and said anyone who didn't like it was classist and ableist.

People got mad about that.

So, let's talk about AI for a bit: what it can do, what it can't do, and whether it should have any place in the writing process.

What do we mean by "AI"?

As always, let's define terms first. This post is not talking about the AI that defines enemy behavior in Pac-Man, nor about the fictional, self-aware AIs of Terminator and I, Robot. We are specifically talking about generative AI built on large language models (LLMs).

In a technical sense, generative AI is closer to Pac-Man than Skynet. In science fiction—including science fiction that I wrote!—AIs are self-aware and sentient, capable of complex and original thought. But that's not how any of our current technology works, not now, nor in the foreseeable future.

What we call artificial intelligence today is not, in fact, intelligent. LLMs are very powerful, very structured predictive text generators. They are very good at putting together strings of words that sound good and are grammatically correct (i.e., modeling language), but they have no idea what any of it means. They don't even have a way to know.
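To make that point concrete, here's a toy sketch in Python (my own illustration for this post, not code from any real system): a two-word Markov chain that "writes" by repeatedly picking a statistically plausible next word. Real LLMs are astronomically larger and use neural networks rather than a lookup table, but the core move is the same, and notice that nothing in this code has any concept of what a "ship" is.

    import random
    from collections import defaultdict

    # A deliberately tiny "predictive text generator": a two-word Markov chain.
    # (Hypothetical toy corpus; a real model trains on trillions of words.)
    corpus = "the ship sailed at dawn and the ship sank at dusk".split()

    # Record which words follow each word in the training text.
    follows = defaultdict(list)
    for current, nxt in zip(corpus, corpus[1:]):
        follows[current].append(nxt)

    # "Write" by repeatedly sampling a statistically likely next word.
    word = "the"
    output = [word]
    for _ in range(8):
        if word not in follows:
            break  # dead end: this word never appeared mid-text in training
        word = random.choice(follows[word])
        output.append(word)

    print(" ".join(output))  # e.g., "the ship sank at dawn and the ship"

Run it a few times and you'll get grammatical-ish remixes of the training text. That is, at an incomprehensibly larger scale, what the big models are doing.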

This is an important point, and we can't get anywhere in discussing the topic unless we agree on it.

So.


What can AI do?

In an ideal world (setting ethics aside—we'll get to that in a sec), generative AI could, in theory, do a bunch of things for writers, like...

  • ...brainstorm a list of ideas.
  • ...edit text to be grammatically correct.
  • ...write a whole damn story.
And that sounds amazing, which is why the CEOs of the world have been throwing everything they have at this tech.

But there are some inherent and (because of the way LLMs fundamentally work) insurmountable problems.

What AI can't do

Remember that part about AI not understanding what anything means? Turns out, that causes some problems.

AI can't brainstorm a list of original ideas. The ideas might sound original to you, but there is nothing AI can come up with that hasn't been thought of or remixed already. In fact, because LLMs are trained to produce something that sounds good rather than something that is unique, the list you get will be full of the most mediocre ideas you could pull from a quick Google search. Helpful perhaps, but never ground-breaking.

"But, Adam, didn't you say there are no ideas so original that they are unlike anything that has come before?"

I did! I also said that novelty doesn't come from original ideas but from combining them with your unique life, experience, voice, and story.

An AI doesn't have any of those things.

An AI editor can't ensure the author's voice or intended meaning is maintained. Again, this is because AI has no idea what words mean. It only knows which words statistically tend to appear together in sequences humans consider "correct" (plus whatever extra guidelines and guardrails its programmers placed on top of it). Your text will sound correct, intelligent even, but it will also sound generic. You will no longer be in it.

(Note that if you are looking for a way to make your text great while maintaining your intended meaning and unique voice, that's exactly what I do.)

AI can write a whole damn story but not a story that's worth a damn. Sure, it'll sound smart. Statistical models (and a soupçon of plagiarism) ensure that. But it won't mean anything. Nothing connects. Nothing has a point, and nothing is being said, because the AI has nothing to say and isn't aware that "saying something" with your story is even a thing.

Should writers use AI at all then?

In a brighter timeline, I believe there are versions of us discussing how AI can be used to help with all the tedious stuff humans have to do so we can have more time to do something that matters—like make art. Or at the very least, we could discuss how AI can enhance our creativity rather than make it worse.

For example, brainstorming mediocre ideas isn't all that bad! I do that all the time with a Google search, and it helps trigger new, unique ideas of my own. And helping a poverty-stricken, non-native English speaker edit their story into passable English seems like a good thing. Even writing a whole damn story could be...

Well okay, I don't think that one's any good.

I mean, if I'm just using AI to churn out a story—even if I do the work of revising that story to sound good—what am I even doing at that point? I'm not making money. (Statistically speaking, publishing books is a terrible way to make money!) And I'm not even writing. I'm just editing someone else's mediocre prose at a loss.

In any case, those discussions are for a brighter timeline, one in which AI is 100% free and ethical. In our timeline, AIs have some ethical wrinkles:
  • Big LLMs are trained on authors' writing without their permission.
    • And they do an excellent job plagiarizing that writing without telling you it's plagiarized... because they have no idea.
  • Corporations want LLMs to replace human writers and editors in order to increase profits for the already-rich.
    • And as these corporations discover LLMs suck at writing, they try to rehire those human writers and editors to fix the LLMs' work at a fraction of their worth.
  • By all accounts, training and using LLMs consumes a lot of power—like way more than it should considering what little we get out of it.
If we could get around those problems—if AI had consent for all the data it was trained on, if corporations used it to make creative lives better, if training and using one didn't consume as much electricity as a single Icelandic citizen uses in a year—then sure, maybe, AI might be useful for things like brainstorming or grammar checking.

But those are real problems, and personally, I can't get past them. (And AIs are only mediocre at brainstorming and grammar checking anyway.)

I've heard folks say the tech will get better, these problems are fixable, etc., etc. But coming from the computer science field myself and having studied language models back in the 20th century (GOOD GOD!), I'm unconvinced. The technology hasn't changed very much in that time, only the amount of data and server power available (and the billions of investment dollars to make it look like things are better).

So, I won't be using AI for the foreseeable future. Writing is hard, but not because humans are bad at it. We're actually the only beings on Earth that are any good at it! Making a computer write for me (and not very well) just makes me wonder: What am I buying with that time, when instead, I could be making something new?

That's me. I'm curious about your thoughts (but do be kind in the comments if you want them to stay there).



Grounding the Reader in the Scene

— September 03, 2024

In a first draft, we often write things as they occur to us. Maybe some dialogue first, an occasional gesture or action by one of the characters, throw in an emotion or two. The result might be something like this (for the purpose of illustration, I have hacked this passage from Leviathan by Scott Westerfeld):

"How long can we last without parts, Klopp?" Alek asked.

"Until someone lands a shell on us, young master."

"Until something breaks, you mean," Volger said.

Klopp shrugged. "A Cyklop Stormwalker is meant to be part of an army. We have no supply train, no tankers, no repair team."

Alek shifted the cans of kerosene in his grip. He felt like some vagabond carrying everything he owned.

A functional scene, but confusing for anyone other than the author. The reader only knows what you tell them, and the lines above don't say much by themselves.

Grounding a scene means imagining that you are painting a picture in the reader's head (because you basically are). Without any additional context, the reader has nothing in their mind, a white space with only the characters and objects you place in it as you name them.


By the end of the first line above, the reader knows there are two characters: Klopp and Alek. They might know something about these characters from previous scenes, but they don't know where the characters are or what they're doing now. All they have to imagine are two characters they know standing in empty space.

The third line adds another character: Volger. The reader now has to reimagine the scene, possibly even replaying the first two lines in their head to imagine Volger also being present. This slows the reader down as they have to rethink what they thought they knew.

The fourth line mentions a Cyklop Stormwalker, some kind of vehicle. Are they in this vehicle? Are they repairing it? Who knows? Not the reader, but they have to revise their mental image again. Finally, in the last paragraph, we get something visual. We know that Alek is carrying cans of kerosene, so maybe they're carrying these back to the Stormwalker, but where are they now? The author might know, but the reader doesn't.

The most straightforward way to fix this is to ground the reader in the scene. Start the scene with a description that answers the questions: Who is here? Where is here? What are they doing?

For example in the passage above, we could add the following paragraph before the dialogue:
Alek, Klopp, and Volger trudged along the streambed, the kerosene sloshing with every step, its fumes burning Alek's lungs. With each of them carrying two heavy cans, the trip back to the Stormwalker already seemed much farther than the walk to town this morning.
With just a couple of sentences, we now know who is in the scene (Alek, Klopp, and Volger), where the scene is (along a streambed), and what they are doing (carrying kerosene back to the Stormwalker). This simple addition makes it far easier for the reader to visualize the scene, and they don't have to revise that mental image with each new line of dialogue.

But what if the reader stopped reading at the last chapter and hasn't picked the book back up in months? Or what if they were distracted when reading the last chapter? Or what if they just don't remember the details—or at least the important details—of what happened in the previous scene? It is often useful to drop a hint of where this scene occurs in the plot as well as in time and space, something like this:
And yet, thanks to Alek, they'd left behind most of what they needed.
This serves as a quick, clean reminder without needing to do a full recap. The reader knows something bad happened, and the line above will be enough to remind most readers what that thing was.

It has the added benefit of implying what Alek feels in this scene, which is in some ways even more important.

Let's put it all together and add a little bit more of Alek's emotions to the scene (i.e., let me show you the full passage that I hacked apart for illustration):
Alek, Klopp, and Volger trudged along the streambed, the kerosene sloshing with every step, its fumes burning Alek's lungs. With each of them carrying two heavy cans, the trip back to the Stormwalker already seemed much farther than the walk to town this morning.

And yet, thanks to Alek, they'd left behind most of what they needed.

"How long can we last without parts, Klopp?" he asked.

"Until someone lands a shell on us, young master."

"Until something breaks, you mean," Volger said.

Klopp shrugged. "A Cyklop Stormwalker is meant to be part of an army. We have no supply train, no tankers, no repair team."

"Horses would have been better," Volger muttered.

Alek shifted the burden in his grip, the smell of kerosene mixing with the smoked sausages that hung around his neck. His pockets were stuffed with newspapers and fresh fruit. He felt like some vagabond carrying everything he owned.

"Master Klopp?" he said. "While the walker's still in fighting prime, why don't we take what we need?"

Now we have a scene that can be easily visualized, that doesn't require mental revision as the reader reads each new line, that reminds us what the characters are trying to accomplish, and that shows the character's emotions. In other words, we have a well-grounded scene.

Should this be what was written in the first draft? I mean, only if you already have a clear, clear idea of the scene from the start. For most of us, the first draft is essentially our pencil sketch of the story. Revision is where we make it read well, like I've done above.

I can't say that this is how Scott Westerfeld actually put this scene together, but it's how most of my scenes get put together and probably most of yours. Write what comes to mind first, then go back and make it look like you knew what you were doing all along.

And if you still need help, well, that's what editors are for.
