Is the Next Tina Fey a Bot?

Images of the author as conceived by Lensa AI.
Baratunde Thurston
December 11, 2022

The robots are coming. That’s what we’ve been hearing for generations, with the assumption that artificial intelligence would soon arrive to replace human labor, especially the physical sort. First outsourcing, then automation, would further erode blue-collar jobs in the developed world. 

In some ways, the machine revolution is already halfway complete. We don’t have elevator operators or telephone operators anymore. Many of us check ourselves out at grocery stores or rental car kiosks or airport newsstands. People can push their own buttons, and the folks who did those jobs have other gigs now, including working with technology to produce or oversee production of exponentially more output than was previously possible. As time has progressed, so has the speed of automated systems and the range of their possible application, so that our fears have grown beyond labor displacement in certain sectors to include wholesale displacement of physical human work writ large. 

With the shift toward rapid software development, cloud computing, and the ability of machines to build learning models based on vast troves of images and text, we are realizing that A.I. might be coming much more rapidly for the jobs we didn’t think were as easily replicable: artists, screenwriters, and even software developers themselves. The very people who pontificate about the impact of artificial intelligence might have their pontification taken over by artificial intelligence. Robotic poetic justice? 

The latest entry into the A.I. debate is ChatGPT, a chatbot trained on a massive dataset and capable of producing answers to questions and prompts that are indistinguishable from human-generated text. The chatbot was created by OpenAI, an artificial intelligence research firm largely funded by Microsoft, but also by Elon Musk, Peter Thiel, and others. ChatGPT ingests information from myriad sources and pattern-matches and correlates in such a way as to produce often-accurate answers delivered in a human-enough voice, but it will not and cannot ascribe sources to any parts of its answers (at least not yet). Beyond decent paragraphs, ChatGPT can generate full essays, poems, even software code. 

Of course, it’s probably too early for ChatGPT to replace screenwriters like Tina Fey, but it’s got a decent chance of putting all those WikiHow scribes out of work. “As a writer, I was initially excited that a robot could possibly take over the part of my job I hate the most, which is actually writing anything,” joked my friend Matt Klinman, a comedy writer and former colleague from our time at The Onion. “So I messed around with it and like everyone I was initially really impressed. But then I tried to use it to do anything useful for me and it turns out it’s mostly just a bullshitter that rips off Wikipedia articles without even donating. Plus I had to check its facts, so not only is my shitty employee trying to take my job, but now I also have to babysit it so it doesn’t get both of us canceled.” Still, Matt noted, while he doesn’t yet know what he’ll use ChatGPT for, he’s “having fun messing with it,” and foresees a world where it might soon take over the most annoying parts of his job, like organizing notes—something that an assistant or a freelancer might otherwise help with. (By the way, Matt has a history of “messing with” technology for the sake of comedy. Back in 2013, he helped build his own satirical text generator that manufactured conspiracy theories, and it’s worth a look at the hilarious horror that emerged.) It’s perhaps too early to say definitively whether ChatGPT is a good or bad thing, but it’s definitely a thing, and it’s definitely not the last we’ll be hearing about this kind of spooky disruption. 


Hard Choices

Back in the ’80s, my mother and I would plot our trips to campgrounds up and down the East Coast on another once-transformative technology: paper. Her AAA membership included access to “TripTiks”—printed, vertically-oriented maps that we’d receive in the mail, highlighting our routes and turns and exits, rest areas, and hotels. In the ’90s, I swapped the TripTik for Microsoft Streets and Trips, a stack of CD-ROMs pre-loaded with roadmaps of the United States that ran on my Windows laptop. I gave voiceover turn-by-turn directions to my mom as I rode shotgun. She called me “The Navigator,” and I loved that job. 

When Garmin and TomTom made GPS systems that any driver with some money could install themselves, I was excited about that too, opting for the upgrade to my rental cars and happily choosing the Snoop Dogg voice option whenever possible. But the pace of change came with a learning curve. As these dashboard GPS systems became widely available, people began getting into new kinds of accidents: not just with other cars, but with bridges or ramps or roads that weren’t finished. Drivers were so deferential to the “intelligent” technology, they failed to see what was actually in front of them. The GPS didn’t stutter or pause or deliver its instruction with any hesitation. The GPS told these people with absolute confidence to drive off the damn bridge, and some people did. They trusted the virtual display more than the real world—until they literally collided with the real world. (Another analogy might be what happened with media coverage of the 2016 election, where reporters over-relied on models and forecasts). 

The rise of ChatGPT, I think, is following a similar trajectory. It’s obviously innovative and possibly useful, but it also allows us to become lazy and lose touch with our surroundings—if not our physical geography, then our history, knowledge, and curiosity. ChatGPT confidently offers up false or even dangerous answers. There are scores of news stories documenting ChatGPT’s errors in identifying countries, inventing quotes by company executives, and worse. It’s a search engine that offers up only one answer and doesn’t reveal it might be wrong unless you already know the correct answer. 

At least with Google Search, you can scan multiple answers and compare them to arrive at a likely true response. A brave and enterprising searcher could even go to the second page of results! With a search engine you also have the ability to know the information source. Not so with ChatGPT. It has sucked in gobs of information, but literally can’t tell you how it generated the answer. I tried. I asked ChatGPT “Who is Fannie Lou Hamer?” and it wrote up a nice, accurate paragraph. Then I asked, “What sources did you use for that previous answer?” Its response:

As a large language model trained by OpenAI, I do not have access to external sources of information. My responses are generated based on the text that I have been trained on, which includes a vast amount of text from many different sources. In the case of your previous question, my response was generated based on my training and knowledge of the civil rights movement and the life of Fannie Lou Hamer. I do not have the ability to browse the internet or access other sources of information.

That would not be acceptable to any teacher I’ve ever had, yet students will use ChatGPT to generate all or parts of writing assignments, and even if they wanted to confess the use of the tool, the attribution would honestly have to read, “A machine that is designed in such a way as to be incapable of divulging its sources.” 


Industrial Content System

Technology, of course, is inextricably bound up with artistic and intellectual creation. Paintbrushes let us do things we can’t do with our bare fingers. As we’ve shifted into a world of ubiquitous software, we’ve increased the speed and scope of what automated systems can do, vastly expanding the volume of content we are capable of creating. But where does that lead? We already can’t process the content we create. 

Back in 2019, Tubefilter reported that YouTube users uploaded 500 hours of video every minute, or 720,000 hours per day. That’s over 82 years, longer than a human life. We take more photos than we have time to view. Just look at your own photo library if you need a reminder. There’s such a high volume of content to consume that we need machine help in consuming it. That’s part of why we get algorithmic recommendations on services like YouTube or Amazon. It’s why Apple and Google auto-generate photo albums on our behalf now. We need machines to help us process the information that machines are helping us create. The circular rub is that this processing assistance is strengthened by machines taking in the vast amounts of content we make with their help in the first place!
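If you want to sanity-check that math yourself, here is a minimal sketch in Python; the 500-hours-per-minute figure is Tubefilter’s, and the rest is just multiplication:

```python
# Rough check of the YouTube upload arithmetic cited above.
hours_uploaded_per_minute = 500  # Tubefilter's 2019 figure
hours_uploaded_per_day = hours_uploaded_per_minute * 60 * 24
years_of_video_per_day = hours_uploaded_per_day / (24 * 365)

print(f"{hours_uploaded_per_day:,} hours uploaded per day")          # 720,000
print(f"about {years_of_video_per_day:.0f} years of video per day")  # about 82
```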

When it comes to ChatGPT, one wonders if inventing digital ghostwriting will necessitate the invention of digital ghost-reading. What is the role of humans if software can write articles, poems, and code at a volume that only other software can manage to consume?

We’re not dead yet, and the generative A.I. era will have beautiful and useful contributions as well. I remember seeing a TED Talk by Garry Kasparov—did he write it, or a machine?—in which he made the case for human-machine collaboration. In the chess world, a machine alone couldn’t reliably defeat the best human player, nor could a human alone. But a human with a machine could win almost every time. My friend Cleo Abram has an excellent YouTube show called Huge If True, and in her A.I. art episode she comes to a similar conclusion: the “best” art comes as a result of humans working with machines. Disney researchers created a de-aging tool called FRAN that will save visual effects teams hours, if not months, of work. (The image at the top of this article was produced by allowing an A.I. model to train itself on photos of me, taken by fellow humans.)

But what else does it look like to collaborate with machines, and how do we integrate this technology without merely succumbing to demands to increase output and lower cost, leading us to industrialize art and creativity at a frightening scale? It sounds so treasonous when I use the word “collaborate,” but it’s simply a fascinating and genuine question we are continually trying to answer. My gut is that we should insist on bringing more democracy to the process, meaning more citizen involvement and collective consent. 

If your art or images or words are being used to train a supermind that will displace the need for someone to pay you for your art, that is extractive and abusive. The artist Molly Crabapple recently posted to her Instagram an image she painted that reads, “Fuck A.I. ‘ART.’” The caption includes her observation that “capitalism goes with what’s cheap and fast, and an algorithm is faster and cheaper than a human who needs to eat.” The comments beneath it are illuminating and intelligent, but what resonated most powerfully with me were those from artists objecting to their own art being used to train A.I. models that would then make art in their style for much lower cost and without compensating them.

My own mother was a computer programmer who was made to train her younger, whiter, male-r, human replacements. We shouldn’t aspire to scale this form of labor and intellectual abuse. So how do we create new forms of labor and intellectual property protection so we aren’t just serving as feedstock to machines? We’ve got to upgrade our consent models as fast as we’re upgrading our machine learning models. For people whose work is powering the machines, there’s an intellectual property layer to this, a sort of creative-rights layer, that may involve new models of attribution and compensation. For those of us subject to these systems, I want protections similar to those that prevent my data from being weaponized against me in credit checks or in my experience with social media or online advertising. We need meaningful consent, control, and compensation. 

We also need to design these systems with transparency and the ability to communicate the provenance of the information in human-digestible forms. “I can’t tell you,” should not be an acceptable response from any authority when asked about the sources of information involved in its decisions and declarations. We’ve got to be able to “show our work.” This, for once, seems like a job that machines can only achieve by collaborating with humans.

Finally, can we get more creative about the sort of problems we point these solutions toward? Can we use large language models to spot abuse or corruption in existing systems the way image processing models help radiologists spot cancer? What if, instead of unleashing tools on a workforce, we involved the workforce in developing the tools? What do customer service reps need, and how can automation help? Ask them! What do illustrators or legislative aides or copywriters or Amazon product pickers need? Too many of these solutions are developed and funded by people who lack experience in the worlds their tools will affect. Imagine if we flipped that order of operations and designed tools to serve us.

I’m still processing and experimenting with all these questions myself. I can confirm that I am a human who wrote this piece, and that my skills were mildly augmented by software assistance with grammar, spelling, and a handful of auto-fill word choices. I then handed this article over to human editors who completed the process, and they handed it over to a content management system and email service provider whose software and code delivered it to you. And probably, some automated system is reading this right now, which will help a Baratunde-bot, in the near future, write an article very much like this one.