I make a living writing words, so naturally, everyone keeps asking me what I think of ChatGPT. It’s proven a surprisingly difficult question to answer, at least in a way that doesn’t sound GPT-generated.

I’ve read about it. I’ve used it. I’ve seen what it and its cousins in the “generative AI” space can do, and what some hope it will do (to cash in on what’ll be a $100B+ market by 2030).

Many have applauded the operational efficiencies this tech will (supposedly) produce. And many others have decried the devaluation of human production this tech will (potentially) inflict. Much talk goes into how AI will take jobs and displace workers. Or how it’ll augment humans’ abilities so they can do more, reach higher, dream bigger.

But to me, a question fundamental to generative AI remains unanswered:

Are we clear on the purpose of production?
I agree that generative AI will let human workers produce more. Human writers can write more stuff thanks to ChatGPT and similar systems.

But why generate more stuff? Is more production necessarily better? 

And are we as content producers — and business leaders — clear on the purpose of production?

The Writing Assembly Line

Think back to your early 20th century history classes, and you may remember Frederick Winslow Taylor. His management principles for manufacturing, colloquially dubbed “Taylorism,” laid the foundation for “scientific management”: a method to deconstruct projects into individually assignable subtasks and a system to assess and optimize productivity in factories.

Taylor also had much to say on paying workers higher wages and enabling effective employer-employee relations to incentivize productivity. But most folks in the U.S. probably remember his principles inspiring Henry Ford’s assembly line concept. 

Fordism took Taylorism to its ultimate rational application by unleashing tight managerial control and deep division of labor to separate “thinking” from “doing.” It was productivity for the sake of productivity: a worker should stamp their widget and think of nothing else. People and machines were synonymous.

If your history lesson stopped there, you might’ve missed out on the assembly line’s evolution. While Western companies squeezed productivity from their workers, Japanese automaker Toyota adopted more of Taylor’s pro-employee elements. Specifically, the idea that productivity (and high-quality productivity, at that) arose from workers sharing a common belief that their work mattered to the end result.

In other words, a purpose.

Peaking around the 1970s, Toyotism set purpose as the superstructure from which all other productive improvements flow. Realizing optimal gains meant workers should feel connected to their work and be encouraged to bring creativity and intellect into it.

Toyotism’s implementation waned throughout the ‘90s, but it laid the bedrock for the company’s modern-day results. It fostered a symbiosis between productivity and purpose, the same symbiosis that today’s job-satisfaction principles and programs strive to achieve.

Why the manufacturing history lesson?

I contend we’re seeing the same processes unfold with generative AI and how companies are applying it.

A tool like ChatGPT performs the same division of labor Taylorism pioneered. The data GPT ingests is tokenized (deconstructed into discrete units), then its predictive algorithm recomposes those tokens into a finished product (say, a blog post) based on user demands (the prompt). The AI operates as the “collective worker,” and its human operators are the factory floor managers demanding production.

Prompts take tinkering, and the system’s output beyond a few dozen words requires editing. But the idealized outcome is producing more stuff, faster. And, for all intents and purposes, GPT and its cousins accomplish exactly that. We can stamp content widgets more productively.

But GPT is a tool — generative AI is a doer, not a thinker. This tool will underpin the modern writing assembly line responsible for producing content in marketing, sales, PR, and many other industries. But humans will be responsible for thinking about that productive function and choosing what shape it’ll take.

So how will we guide this technology’s evolution? Should we use it to squeeze productivity from every dark corner of our businesses, even at the cost of our human workers? Or should we use it in service of the larger purpose that connects both human and machine workers?

The Law of Averages: Generative AI Edition

Right now, I’d say most generative AI marketing campaigns lean heavily into Fordism. You can “write novels with one click” (sorry, Writers’ Tears Whiskey, time to pack it in). AI will automate your sales funnels from top to bottom — no more writing awkward cold outreach emails yourself. Calls abound to 10x, 50x, 100x your team’s productivity. 

There’s that word again. But toward what end does “productivity” strive?

ChatGPT’s trick is anticipating and compiling content based on the data it was fed. It’s looking for the most common combinations of tokens and using that (with a little behind-the-scenes finagling) to produce your results. 

In horrifically reductionist terms, generative AI is designed to produce an average result. Or, perhaps, an average of averages. (And yes, AI is constantly evolving. But it all still relies on the same humongous pool of human-generated data and algorithmic limitations; we’re now tinkering with use cases.)
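To make that reductionist claim concrete, here’s a toy sketch of “pick the most common continuation.” This is nothing like a real large language model — GPT uses subword tokenizers, neural networks, and billions of documents — but the hypothetical bigram counter below illustrates the core tendency: when the system’s job is to pick the statistically likeliest next token, the output gravitates toward the average of its inputs.

```python
from collections import Counter, defaultdict

# Toy corpus, "tokenized" by naive whitespace splitting.
# (A hypothetical example; real models use learned subword tokenizers.)
corpus = [
    "our product saves you time",
    "our product saves you money",
    "our product saves you money",
    "our team loves a challenge",
]

# Count which token most often follows each token (a bigram model).
following = defaultdict(Counter)
for sentence in corpus:
    tokens = sentence.split()
    for current, nxt in zip(tokens, tokens[1:]):
        following[current][nxt] += 1

def predict_next(token):
    """Return the most common continuation -- the 'average' choice."""
    candidates = following.get(token)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("you"))  # "money" -- it outnumbers "time" in the corpus
```

The model can only ever echo the most frequent patterns it has seen; a rare, interesting continuation loses to the common one by design. Scaling the corpus and swapping frequency counts for a neural network changes the sophistication, not that basic pull toward the statistical middle.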

I think of the scene with the villain Syndrome in The Incredibles. He’s captured the superhero family and unveiled his master plan to use technology to grant superpowers to anybody who wants them. As he leaves the heroes at their lowest, he delivers this memorable line:

When everyone is super, no one will be.

Well, wouldn’t you know it: generative AI will give workers “superpowers” too! I’ll take my heat vision and power of flight, please.

My argument is not to gatekeep this technology — if people want to use this power, by all means, go ahead. (Does that mean I agree with some of Syndrome’s principles? Huh. Curse those complex villains.)

But using this tech should come with the understanding that you are accessing a well-packaged algorithm that will give you average results. I struggle to think of one company that wants its slogan to include, “We produce average results.”

When everyone deploys their “superpowers” to produce average results, what benefit will pure productivity offer? 

Let’s assert that, as writers (or content creators generally, since generative AI also produces images, voice, and video), our job is to sway our audiences through our content. If we further assert that an average piece of content causes zero sway in our audiences, then 100x of zero is…zero.

Productivity unmoored from purpose is the slippery slope that’ll dump us into the average content quagmire. You could have a knowledge base that would shame the Library of Alexandria, but if your audience can find that content literally anywhere else — if you don’t offer purpose-driven content that moves and convinces them — what benefit did all that work deliver?

Toyotism’s adherents believe purpose and productivity are deeply intertwined; they are separate yet inseparable. We need that same belief instilled in our content production, AI-powered or otherwise.

Where Does This Leave Today’s Content Writer?

I am by no means a Luddite on this generative AI stuff. Truly helpful use cases exist for the technology today. Personally, I’d love for a machine to spend one minute finding this article’s hyperlinked research so I could’ve spent 40 minutes lounging outside.

But that desire, like many other desires about generative AI, isn’t reality. We’re humans, so naturally, we project our hopes and dreams onto this collection of algorithms and databases. We want it to understand us. And to do the hard work for us.

But it can’t. Not yet — and maybe not ever.

And swaying audiences is hard work: confirmation bias is real, and most people are uncomfortable changing their minds. It’s why the creative work behind marketing (or any creative endeavor) is so damned challenging — and rewarding. That part cannot be so easily automated away.

That answers part of our question: Are we as content producers — and business leaders — clear on the purpose of production?

We are not — not yet. Many leaders still equate “number of pieces published” with “impact created in the marketplace of ideas.” They see the how and the why as one and the same, not as symbiotic partners.

The use of generative AI could exacerbate this issue. When we can write 100 pieces in the time it used to take to write one, we get so caught up in the how, we forget to even ask why anymore. And the ocean of average content swells. 

This power of increased production can’t function effectively without purpose. And leaders will struggle to find the symbiosis on their own. So as generative AI spreads to new content turf, we as creators must understand the technology’s limitations and advocate for purpose alongside productivity.

Where do we start?

  • We need original ideas from humans. As average content swells, new, interesting ideas will stand out all the more. We must have access to our company leaders’ ideas that might actually move people — and be brave enough to push against internal resistance to sharing those ideas.
  • We need to give creativity the space to thrive. As SpongeBob SquarePants taught us, one Krabby Patty made with love beats a hundred patties churned out by efficiency magic. While perfect is the enemy of done, we must be willing to forgo flooding our audiences with average content simply because we can. Creativity takes time and effort, and the gumption to fail. But creativity is worth the investment.
  • We need to champion the higher purpose we aspire to. Your company and your work have a purpose. You believe something, and you want your audience to believe it, too. The why behind our work has to matter as much as the how, and we must shout this truth from the rooftops every day.

Generative AI Can’t Give Us Purpose, But It Can Support It

“So, Alex, do you use ChatGPT?”

Of course I do. It helps me work through ideas. Occasionally, it unearths good research. And it can whip up a mighty fine average cold outreach email.

But I understand what it cannot do. GPT alone cannot connect my work to its higher purpose. Alone it cannot elevate my work above the sea of average. 

Alone it cannot sway me.

That’s why we pay human writers.

Even as the technology improves, it can only ever emulate these concepts. Generative AI can mimic what purpose means to humans, but it can never truly understand it.

That last bit waxed philosophical, but the answer to our question is as much philosophical as it is business-related. There is a compelling qualitative and quantitative business interest to write not-average content, bolstered by engagement metrics, MQLs, and the like. But purpose itself is ill-defined by KPIs — it’s an ethos that drives our work and inspires us to achieve meaning in the marketplace of ideas.

Generative AI and its iterations cannot and will not give us purpose. Instead, in (good) practice, this technology can help alleviate some burdens associated with productivity. Ideation, research, email admin — these elements of work offer fertile ground for AI disruption. 

Successful use of AI leaves room for creatives to focus on infusing purpose into our content. To be intentional with our efforts to sway people. To say things that might actually change someone’s mind and bring them onto our team.

But we cannot skip the gritty work behind that infusion for the convenience of pumping out more stuff. The writing assembly line is here to stay. But the symbiosis between productivity and purpose — the how and the why — is more vital than ever. So let’s let generative AI help us produce, while we shoulder the responsibility to infuse creativity, insight, and purpose into our work.