Articles

Ned Ludd, Amara’s Law, and Protecting Your Career Against AI

26 June 2023

100 miles northwest of London in the heart of the English Midlands lies the land of Robin Hood, lace and industrial protest: Nottingham. In the early years of the nineteenth century, textile workers in the city began to lash out at their employers, who had taken to using weaving machines manned by untrained, unskilled workers to make shoddy products that would bring in a quick buck—or pound sterling, as it were. 

It wasn’t even the machines themselves that were the problem in their eyes. No, these skilled laborers who had apprenticed and trained and spent years of their lives learning their craft—which included making handy use of these machines—were being cut out and replaced. Manufacturers had become content to hire low-paid workers to make inferior products, rather than rely on the expertise of their craftsmen. 

So, as disgruntled workers throughout history have been known to do, they started causing a ruckus, sneaking in at night and bashing up these machines to stick it to The Man. They took as their patron saint a fellow machine-bashing malcontent from Leicester named Ned Ludd (he’s that handsome lad in the photo up there).

They called themselves the Luddites. And modern workers have a whole lot of common cause with them.

One of the most persistent reasons that workers, especially in creative fields, are wringing their hands over AI is the possibility that, given the opportunity, they’ll be replaced. If a Large Language Model (LLM) like ChatGPT can write blogs, why hire writers? If generative image AI platforms and built-in AI like Adobe’s Firefly can generate images with a few words of prompt or just a few clicks, why hire designers? Even a decade ago an Oxford study predicted that up to 47% of U.S. jobs could be at risk due to automation. Recent estimates from Goldman Sachs suggest that up to 300 million jobs worldwide could be affected: 

Using data on occupational tasks in both the US and Europe, we find that roughly two-thirds of current jobs are exposed to some degree of AI automation, and that generative AI could substitute up to one-fourth of current work. Extrapolating our estimates globally suggests that generative AI could expose the equivalent of 300mn full-time jobs to automation.

The Potentially Large Effects of Artificial Intelligence on Economic Growth, Goldman Sachs, 2023

As a writer, researcher and strategist who works alongside a cadre of insanely talented designers, developers, and other creative geniuses, quite frankly that scares the hell out of me. 

And when I get stressed and in my head, I like to listen to podcasts to shift my brain to something else that’s both edifying and distracting. Imagine my utter horror to turn on my favorite podcast, Stuff You Should Know, and hear this little chestnut:

So one of the astounding things about this that it really caught everybody off guard is that these large language models, the jobs they’re coming after are white-collar knowledge jobs. Yeah. They’re so good at things like writing. They’re good at researching. They’re good at analyzing photos now. And that’s a huge sea change from what it’s been like traditionally, right? Whenever we’ve automated things, it’s usually replaced manual labor. Now it’s the manual labor that’s safe in this generation of automation. It’s the white-collar knowledge jobs that are at risk. And not just white-collar jobs, but artists who have nothing to do with white-collar or jobs, they’re at risk as well.

Large Language Models and You, Stuff You Should Know, June 20, 2023

Et tu, podcasts?

It’s easy to spiral when so much of the cultural conversation seems to center on the inevitability that everyone who works at a computer will soon be rendered useless. Rather than speed headlong into catastrophizing, though, we’d do well to remember Amara’s Law, named for the late futurist Roy Amara:

We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.

Roy Amara

That makes it doubly important, then, to understand what AI is good at, what its limitations are, and how you can future-proof your career (as well as anyone can).

The Helping Hand(?) of AI

To understand what AI is good at, it helps to have a baseline understanding of how it produces content. Take the name ChatGPT. The “GPT” stands for Generative Pre-trained Transformer. ChatGPT and other LLMs are trained on vast amounts of existing content, which they use to essentially predict a sequence of words that make sense together. For example, if I type the word “thank,” an AI can predict that the next word I’m likely to type is “you,” because among the hundreds of billions of data inputs it has to work with, that’s usually what comes next. 

If I’m setting up a meeting with a member of my team and I type “Does that…” the AI can take a guess and finish out the phrase based on something that gets typed a million times a day. So it knows there’s a good statistical probability that what I’m about to say is “Does that time work for you?” That makes LLMs incredibly useful for spitting out a routine email at lightning speed, something I’m sure you’ve already made use of with Gmail’s Smart Compose feature.
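To make the “statistically likely next word” idea concrete, here’s a toy sketch in Python. It uses a simple bigram count (a drastic simplification of a real transformer, and the tiny corpus and function names are purely illustrative), but the core move is the same: tally which word tends to follow which, then predict the most frequent one.

```python
from collections import Counter, defaultdict

# A tiny stand-in for the billions of documents a real LLM is trained on.
corpus = (
    "thank you for your time . "
    "thank you for the update . "
    "does that time work for you ? "
    "does that work for everyone ? "
).split()

# Count how often each word follows each other word (a bigram model).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the most statistically likely next word, or None if unseen."""
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("thank"))  # -> "you", exactly the article's example
print(predict_next("does"))   # -> "that"
```

A real LLM replaces these raw counts with learned probabilities over entire contexts rather than single words, but the output is still, at bottom, a prediction of what usually comes next.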

Another area in which generative AI can be a huge benefit to creatives is ideation. Whereas I can put 10 ideas on a page in—depending on my caffeine intake—between 10 and 30 minutes, an AI can generate 10 ideas in three seconds. And one of them might be the spark that I need to do some deeper thinking, make some long-buried connections, and start considering something in a new way.

So, these platforms are remarkably helpful for doing low-level thinking for us. But when we rely too heavily on generative AI and LLMs to do too much of the heavy lifting, we get into some real trouble.

The Hilarious Ineptitude of AI

For all of our well-founded concerns about what AI could mean for the future of these knowledge jobs, there are some serious limitations to what it can do. And “limitations” is a generous term. We’ve already talked about how generative AI platforms are trained on extant data, and then assemble that data as a response to a prompt, in a way that is statistically likely. The biggest implication is that LLMs are incapable of novel thought. Well, they’re incapable of thought, really. They can only output what they can predict based on existing inputs. So you can forget about the works of Cormac McCarthy (RIP) or Jean-Michel Basquiat, or the off-the-wall menus assembled by the world’s most innovative chefs. 

ChatGPT was recently asked to describe the meeting between James Joyce and V.I. Lenin. It responded dutifully:

James Joyce and Vladimir Lenin met in Zurich, Switzerland in 1916. Both men were living in exile in Zurich during World War I. Joyce was a writer and Lenin was a revolutionary. They met at the Cafe Odéon, a popular gathering place for artists and intellectuals in Zurich.

ChatGPT

Imagine being there when two of the most influential men of the 20th century met each other. Well, you’ll have to imagine it. Because 1916 was more than 100 years ago, and also that never happened. ChatGPT totally made it up.

This is a persistent problem that plagues AI, known as hallucination. Hallucinations occur when a bot not only gets something wrong, but fabricates it completely. 

This New York Times article doesn’t exist.

Pinning too much of the legwork of researching, developing, and writing content on LLMs is unreliable, and it often leads to outputs that are poor quality at best and factually inaccurate at worst. When professionals outsource large parts of their work to AI, it can have disastrous results. New York lawyer Peter LoDuca found that out the hard way in his client’s personal injury case against Avianca airlines, when Avianca’s lawyers noted that they couldn’t find any information on eight cases that LoDuca had cited in his brief. LoDuca had enlisted ChatGPT to do the legwork for his legal brief, resulting in the platform fabricating eight legal decisions—and detailed background on those decisions—from whole cloth.

ChatGPT isn’t a search engine. It only knows what it’s been told. It then does its level best to reconstitute all that information into a series of words that make logical sense. If you can do better than that, you already have a leg up on generative AI. But, remember Amara’s Law? While we’re probably making too much of AI right now, there will come a day when we realize we’ve underestimated the impact of AI on our jobs. 

Three Ways to Protect Your Job Against AI

So what can we do? Here are some concrete things you can start doing that will make you irreplaceable. 

  1. Never Stop Learning
    AI can only automate tasks that require repetitive, predictable outputs. Stay curious, continually upskill, and focus on developing new approaches to complex, strategic problems. We talked about how susceptible AI bots are to hallucinations, so become the real-world expert in your job. Become a holder of the real, accurate knowledge that AI is trying to emulate.
  2. Learn Everything You Can About AI
    The more you understand about what AI can and can’t do, the more you can use it to your advantage. By understanding the menial tasks that AI can help you with, you can be more productive and less bogged down in low-level tasks. You’ll also be able to understand what parts of your work are susceptible to automation, and develop skills that AI can’t replicate that will make you indispensable.
  3. Work on Soft Skills
    As much as AI is sure to advance, one thing it will never be able to do is empathize. It will never be able to collaborate, be compassionate, lighten the mood, or pick up the load for a team member who’s having a tough time. Engage your team, build relationships, and become inimitably human.


Thanks for joining us as we continue to explore the ins and outs of AI and what it means for our advertising and marketing world. Next time we’ll dive into who owns, and gets credit for, content produced by AI.

*The content above was 100% written by a human being.