The generative AI conversation online typically reflects the two major market sentiments: greed and fear. You find either hype-filled praise (greed) or depressing doomspeak (fear), and it’s easy for the uninitiated to disappear into these extremes. 

Why? Two reasons:

  1. Most people are unaware of generative AI. A Reuters survey released at the end of May 2024 found that half of Americans have never heard of ChatGPT, and another 20% have heard of it but have never used it or have tried it only once.
  2. Most folks who consistently use AI happily leave the system’s internal workings locked in a black box, only tinkering with basic prompts and interactions.

Ignorance opens a window for hype and doom to intrude. “Here’s this magical tool that can do neato things. I don’t know how it works, but I’ve been told it’s initiating the Fourth Industrial Revolution/threatening to destroy humanity!”

Proponents of Clarke’s Third Law might agree that sufficiently advanced technology like GenAI seems capable of doing magical stuff (e.g. summoning marketing copy in seconds — alakazam!). But peek under the hood, and you’ll discover that beneath the hype, doom, and slick packaging, GenAI is just math. Still neato, but understandable.

And the power to use this technology wisely lies in understanding it.

My God, It’s All Math!

First, not all AI is created equal. Companies have co-opted “AI” in numerous ways because it’s buzzy and marketable. So, let’s set some terms:

  • Artificial intelligence (AI) is technology that simulates human intelligence and problem-solving capabilities. It can be applied broadly across many functional areas (read the linked IBM article for more).
  • Generative AI (GenAI) refers to models that generate new content (text, images, audio, video) based on patterns learned from existing data. All GenAI is AI, but not all AI is GenAI.
  • Large language models (LLMs) are a subset of GenAI models specifically targeting text generation. ChatGPT is a popular example.

For the sake of this email’s length and readability, I’ll also grossly oversimplify AI modeling (sorry, data geeks). In short, math — specifically linear algebra and multivariable calculus — powers the generative AI ecosystem. All AI-generated text, images, and videos result from mathematical equations. They’re many equations requiring massive computational resources, but equations nonetheless.
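
To make that concrete, here’s a toy sketch of next-word prediction as pure linear algebra. The vocabulary, weights, and function names are mine, invented for illustration; a real LLM does conceptually similar multiply-and-normalize steps, just with billions of learned parameters instead of a hand-written 4x4 grid.

```python
import math

# Toy "language model": predicting the next word is a matrix-vector
# product followed by a softmax. Nothing here resembles a real LLM's scale.
vocab = ["press", "release", "draft", "quote"]

# Hand-picked weights standing in for billions of trained parameters.
# weights[i][j] scores how likely vocab[i] is to follow vocab[j].
weights = [
    [0.1, 0.2, 0.1, 0.3],  # -> "press"
    [2.0, 0.1, 0.2, 0.1],  # -> "release" (scores high after "press")
    [0.3, 1.5, 0.1, 0.2],  # -> "draft"   (scores high after "release")
    [0.1, 0.2, 1.8, 0.1],  # -> "quote"   (scores high after "draft")
]

def predict_next(word):
    x = [1.0 if w == word else 0.0 for w in vocab]  # one-hot input vector
    scores = [sum(w * xi for w, xi in zip(row, x)) for row in weights]  # Wx
    exps = [math.exp(s) for s in scores]            # softmax turns raw scores
    probs = [e / sum(exps) for e in exps]           # into probabilities
    return vocab[probs.index(max(probs))]           # pick the likeliest word

print(predict_next("press"))  # -> release
```

Chain the calls (“press” then “release” then “draft”) and the toy model “writes” a phrase one multiplication at a time — the same basic loop, at absurdly larger scale, behind every ChatGPT response.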

Understanding that reality makes generative AI less scary. You probably even liked math at one point, even if you rarely use it today beyond calculator operations. While you don’t need to recreate ChatGPT from scratch, even grasping the higher-level concepts and mathematical intuitions lets you lead meaningful dialogue on GenAI’s use cases, benefits, limitations, and workflow capabilities.

For instance, understanding vectors (part of linear algebra) helps you understand how GenAI models map data and measure similarity between pieces of it. That’s an important component of Retrieval Augmented Generation (RAG) architecture, which merges LLMs (e.g. ChatGPT) with databases of business-specific information (e.g. website copy or product documentation PDFs) to generate more personalized outputs. If you ask your bank’s chatbot about a specific branch location’s hours of operation, RAG is likely powering the backend to deliver the specific response an LLM alone could not.
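
Here’s a minimal retrieval sketch showing that vector intuition at work. Everything in it is my invention for illustration: real RAG systems use learned embedding models and vector databases, while this toy “embedding” just counts letters so the example can run anywhere.

```python
import math

def embed(text):
    # Toy embedding: a 26-dimensional vector of letter counts. Real RAG
    # systems use learned embeddings that capture meaning, not spelling.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isascii() and ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine_similarity(a, b):
    # Angle-based similarity between two vectors: 1.0 means "pointing the
    # same way," near 0.0 means unrelated. This is the linear algebra part.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# The "database": snippets a bank might feed its chatbot.
docs = [
    "Branch hours: Mon-Fri 9am-5pm",
    "Mortgage rates and terms",
    "Credit card rewards program",
]

# Retrieval step: find the snippet most similar to the user's question;
# a RAG system would then hand that snippet to the LLM as context.
query = "branch hours today"
best = max(docs, key=lambda d: cosine_similarity(embed(query), embed(d)))
print(best)  # -> Branch hours: Mon-Fri 9am-5pm
```

Swap the letter-count trick for a proper embedding model and the snippet list for a vector store, and you have the skeleton of the system answering your branch-hours question.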

A working knowledge helps you analyze the noise around GenAI and determine what might be worth pursuing. That’s power, my friends.

Note: I am no data expert and am only scratching the surface. For those wanting to learn more, 3Blue1Brown, run by math and comp sci expert Grant Sanderson, explains these concepts brilliantly through awesome visual aids. It sometimes gets math-y, but he encourages you to “pause and ponder” to puzzle things out.

You should watch the complete series, but here’s the ChatGPT-focused video:

Build an AI Experiment on Your Terms

Despite the hullabaloo around generative AI, the bar for implementing this technology in PR workflows is still low. I think AI opportunities have gone largely unexplored due to… 

  a) the fear of math (and the fear of not knowing how AI applies it)
  b) the potential time/cost investment in understanding and tweaking models for use cases
  c) the need for tech-literate employees to champion opportunities to less-savvy bosses and leadership ranks

You can educate yourself out of Problem A. Problems B and C require basic system and process knowledge and a willingness to try and fail. Agencies have an advantage here because they’re typically less hierarchical than enterprises, lowering the bureaucratic barriers to experimentation.

If you’re ready to try, ChatGPT offers the easiest starting point with GPT Builder. It uses RAG to let you upload specific documentation and provide instructions and context to focus the LLM’s responses. For $20/month, you taste personalized power without needing knowledge of embedding models and vector stores.

If you feel more technically inclined, I’ve talked about NVIDIA Chat with RTX. It uses RAG to gather locally hosted data and build a chatbot running a personalized LLM to let you “talk” to your files. The demo dropped in February, and I’ve seen chatter debating its reliability. But if you’ve got the hardware, give it a go.

What Do You Do With Generative AI and RAG?

Ain’t that the $64,000 Question. RAG could support several data and content management use cases within agencies: 

  • Financial PR firm Gregory FCA used GPT Builder for its press release generation tool, Write Release, and offers other GPTs for tasks like media training or editing.
  • Edelman built Archie, a proprietary tool fed by the firm’s Trust Barometer data to track brands’ trust levels with the market and make real-time recommendations on improving trust. (This is custom-built; I wish I had Edelman’s budget).
  • I use a personalized GPT as an assistant for scoping and pricing projects (a big help with RFP prep).
  • Other firms have explored AI for media monitoring, deeper research, and supporting client interactions (just imagine auto-generating agendas).

 
I still caution you about what data you let LLMs ingest. Contrary to common perceptions, models like ChatGPT are static files after training, so they won’t “remember” everything you input (they’re mathematics and algorithms, not people). A model probably isn’t stealing your private data; however, AI companies’ secrecy and shifting T&Cs make it hard to be 100% certain (ask Adobe how that’s going).

For now, I would hold back potentially sensitive data from cloud-based LLMs. You can also follow OpenAI’s steps to opt your data out of potential future training.

If you’re nervous about manipulating client data, consider your internal information. For example, if you’re behind on marketing your agency (you are your own worst client, usually), you could feed your website, brand guides, and industry-specific content into GPT Builder. The resulting GPT can help you generate blog articles and social media posts that better represent your brand voice, or identify prospect knowledge gaps you can fill with fresh content. That’s a relatively safe and potentially valuable experiment.

Much discussion remains on GenAI’s efficacy, ethical standing, legal and regulatory framework, data security, and quality capabilities. That’s a bunch of caveats communicators should reckon with. But if you’re thoughtful in your experimental design and careful with data inputs, you can run successful AI experiments — and maybe roll out MVPs to trusted clients.

AI Isn’t Magic — Treat It Like Tech

Regardless, you should try something beyond a few simple prompts in ChatGPT. Ignorance about AI lets hype artists and doomsayers thrive.

As a writer, I research concepts and learn from experts to develop a working knowledge of company products, services, and industry trends. GenAI is no different. Screeching at shadows dancing on cave walls does not serve writers well. We must understand the technology behind the apparent magic.

Don’t fear AI’s black box. Crack it open and peek inside.

This article was first published in my newsletter, The Executive’s Guide to the Content Galaxy.


alex@sventeckis.com