Me, Myself, & A.I.

OpenAI C.E.O. Sam Altman announcing ChatGPT’s integration with Microsoft Bing in February. Photo: Jason Redmond / AFP
Baratunde Thurston
March 5, 2023

It’s been three months since I wrote about ChatGPT and the implications of generative A.I. models. In the intervening time, I’ve experimented, been impressed, been scammed, and now I’m back with updated observations on a trend that shows no sign of going away. Herewith, five thoughts on the present and future of A.I.


I. This Time Is Different

We’ve experienced a lot of technology hype over the past few years: crypto, Web3, the metaverse, and so on. In each case, proponents made bold claims about how the technology would change everything. And in some cases, it started to: venture capitalists invested billions in dubious startups; tech workers fled traditional tech companies for Web3 ventures; your relatives and rideshare drivers all started talking to you about crypto. 

But in all cases, the momentum waned. The government is coming for crypto after the collapse of various coins and exchanges. Web3 is still hard to define. Mark Zuckerberg changed the name of his entire company just to ride the trending topic of the metaverse, but in his latest earnings call he barely mentioned the technology, instead suggesting that 2023 would be “the year of efficiency.” I’m surprised he didn’t change the company name to Net Profits. 

As we find ourselves in another technological hype cycle, this time circling ChatGPT and other large language models (L.L.M.s), it feels fair to ask: Will this cycle be any different? Will anyone be investing in, talking about, or using chatbots two or three years from now? I’m betting the answer is yes.

ChatGPT is already having a meaningful impact on our culture. It reached 100 million active users only two months after its launch, making it, by some estimates, the fastest-growing consumer application in history. Microsoft deployed the thing in Bing (which got the entire world to realize that Bing is still a thing!) and Google and Meta are racing to catch up. Generative A.I. is dynamic and pervasive enough that it will find its way into multiple areas of our lives beyond what we already see—and we see it everywhere. Your filters on TikTok, Snapchat, and Instagram? That’s A.I. The autocomplete in Google Docs? A.I. The auto-framing feature in Adobe Premiere? A.I. That annoying chat with customer service? Probably A.I. This is just the beginning, not a fad.


II. Be Careful What You Ask For

Despite my own awareness of the limitations of large language models, and my explicit reference to the bullshit text they can generate, I was recently fooled. In my last piece, “The Black Liberation Paradox,” I wrote at length about the conundrum I’ve faced in defining what freedom for Black Americans really means. I had outlined the piece and worked on it for literally months. As I neared the end, I knew that I wanted to make a point about the power of fiction and imagination, and decided to try out ChatGPT as a research assistant. My prompt: “Please share examples of Black writers, artists, and intellectuals who believe in the value of imagination in the effort to achieve Black Liberation.” I was fishing for a reference or quote I didn’t already know; Octavia Butler always comes to mind, but ChatGPT “informed” me that James Baldwin had written about this very topic.

With the unabashed confidence of a university student who definitely didn’t do the reading, yet eagerly volunteers to answer the professor’s question, ChatGPT said, “In his essay The Creative Process, Baldwin wrote that ‘the imagination creates the space for us to dream beyond what is immediately visible, to be more than what our circumstances might suggest.’” This sounded great, but there was a catch: Baldwin did write an essay called “The Creative Process,” but he did not write those specific words in that essay or any essay. In fact, based on my online searches, no one wrote those words. ChatGPT invented them.

Thankfully, my still-human editors fact-checked the piece, saving us all some embarrassment, and some old-fashioned manual research led me to an alternative Baldwin reference which had the benefit of being real. When I shared this story with a software developer friend, he told me I can avoid this by instructing ChatGPT not to invent things. I wouldn’t have to say that to a human research assistant, but these bots need explicit guidance on the whole misinformation thing. And sure enough, I got better, accurate results after I resubmitted the request with this addendum: “Do not invent quotes. Provide only examples and quotes you can support with clear attribution.” I think I’m also going to start adding “…and please don’t kill all the humans” to each prompt from now on, just to be safe. You’re welcome. 

As more of these generative tools are tested publicly, it’s becoming clear that how you ask for a result is just as important as what you ask. I had joked on a recent episode of Puck’s podcast, The Powers That Be, that software engineers will soon be replaced by prompt engineers. Within hours of saying that, I learned there really is such a thing, and prompt engineering is a fast-growing field! There are online marketplaces where people share and even sell prompts, and leaders in A.I. are saying things like, “The hottest new programming language is English.” 
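For the technically curious, the guardrail above amounts to a reusable prompt template. Here is a minimal sketch in Python; the function name and chat-message format are generic stand-ins for illustration, not any particular vendor’s A.P.I.

```python
# A toy illustration of prompt engineering: wrap every research request with
# an explicit anti-fabrication instruction before it reaches the model.
# The guardrail text mirrors the addendum described above; build_messages and
# the message format are hypothetical stand-ins, not a specific vendor's A.P.I.

GUARDRAIL = (
    "Do not invent quotes. Provide only examples and quotes "
    "you can support with clear attribution."
)

def build_messages(request: str) -> list[dict]:
    """Return a chat-style message list with the guardrail as a standing rule."""
    return [
        {"role": "system", "content": GUARDRAIL},
        {"role": "user", "content": request},
    ]

messages = build_messages(
    "Share examples of Black writers who value imagination "
    "in the effort to achieve Black Liberation."
)
print(messages[0]["content"])  # the guardrail rides along with every request
```

The point is less the code than the habit: the instruction travels with every request, so you never have to remember to type it.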


III. A.I. Spammers and Scammers

Because my YouTube algorithm is quite good at surveilling me, I’ve been getting a bunch of video recommendations related to A.I., and one type in particular keeps coming up. It’s a man—always a man—explaining to me how some finite number of A.I. tools, always less than 10, can be used to grow my business. These A.I. evangelists aren’t interested in helping me be a better writer or artist or citizen. They ostensibly want to help me track down sales leads, and automate outbound messages. Essentially, they are next-gen spam artists. And the spam—and far more malicious uses for A.I.—is about to get turned up to 11. 

Imagine your parents or in-laws getting a call from someone who sounds like you, asking for their bank details or other personal information. Imagine our social media feeds primarily filled with A.I.-assisted or A.I.-generated content. In the near future, I could simply tell Instagram to post a video every day, in my voice, in which I comment on the weather and news headlines for that day. Industrial-scale content farming and constant growth hacking will follow. Bots will scrape LinkedIn for any possible sales leads, then bombard them with entreaties. 

To counter this, researchers are developing watermarks that can be embedded in the output of large language models, making it easier for us to distinguish synthetic text from text truly written by humans. This will help teachers, employers, and others, but we’ll need a lot more. We are in for an avalanche of bullshit not only in text, but in audio and video. Ironically, the best way to police fraudulent use of A.I. tools may be more A.I. This might be the true “A.I. arms race”: not one big tech company versus another, but A.I. truth detectors versus misinformation spreaders. As in traditional wars, the true winners will be those supplying the arms. 
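The watermarking idea can be sketched in a few lines of Python. In this toy version, word length stands in for the hidden “green list” rule that real proposals derive from a seeded hash, and a simple threshold stands in for a proper statistical test; everything here is illustrative.

```python
# A toy version of L.L.M. watermark detection: the generator secretly favors
# words from a hidden "green list," and a detector flags text whose share of
# green words is improbably high. Word length is an invented stand-in for the
# secret rule; real schemes derive the list from a seeded hash per token.

def is_green(word: str) -> bool:
    return len(word) % 2 == 0          # stand-in for a secret, seeded hash

def green_fraction(text: str) -> float:
    words = text.split()
    return sum(is_green(w) for w in words) / max(len(words), 1)

def looks_watermarked(text: str, threshold: float = 0.8) -> bool:
    return green_fraction(text) >= threshold

print(looks_watermarked("go to it once more"))   # True: every word is "green"
print(looks_watermarked("the cat sat on a mat")) # False: mostly odd lengths
```

A real detector reports a statistical confidence rather than a yes/no answer, but the intuition is the same: synthetic text carries a bias a human writer wouldn’t produce by accident.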


IV. You Might Need An “A.I. Strategy”

Every time a new wave of technology washes ashore, we are told we need to change how we do everything, to embrace it and integrate it into our lives. What’s your social media strategy? Short-form video strategy? Blockchain strategy? Microblogging strategy? I think the proliferation of automation and A.I. tools will launch us into the next round of these questions, and the impact on businesses is going to be tremendous. 

I’ve hosted two seasons of a branded series with Lenovo called Late Night I.T., where I do some serious nerding out with tech leaders about sustainability, hybrid work, diversity and inclusion, and more. In a recent episode, I spoke with a chief automation officer and a digital transformation leader, and both made the point that I.T. departments are understaffed and unable to support all the technology that employees use. Everyone is looking to increase efficiency, and the best people to design tech tools aren’t in the “tech” department; they’re the people actually using those tools day-to-day throughout the organization. That shift is already underway. Tools that help people quickly edit images, generate text, or create their own bots will be deployed deeper into organizations to help with everything from setting meeting agendas to measuring the financial return on marketing spend. There’s a good chance company org charts will expand to represent not only headcount but bot count. 

As I ponder this question for myself—do I have an A.I. strategy?—I start to dream up scenarios where it would be useful. I’ve made my own list of about 20 A.I. tools to investigate. Most are mere novelties to play with, but others are actually useful. Browse A.I. helps me build my own bot to monitor changes on websites—great for being first to know when out-of-stock products return to inventory, for tracking price changes, or for watching shifts in rhetoric by companies or politicians. RunwayML lets me remove unwanted objects from videos. And there are hundreds more at sites like Future Tools.
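Under the hood, a website-change monitor boils down to something simple: snapshot a page, fingerprint it, and compare fingerprints on the next visit. Here is a toy Python version with the page download left out; any resemblance to how Browse A.I. actually works internally is an assumption.

```python
import hashlib

# A toy sketch of website-change monitoring: hash a page snapshot, then
# compare hashes on the next visit. The page text is passed in as a string;
# a real monitor would download the page and usually extract just the region
# of interest (a price, a stock status) before fingerprinting it.

def fingerprint(page_text: str) -> str:
    return hashlib.sha256(page_text.encode("utf-8")).hexdigest()

def has_changed(old_fingerprint: str, page_text: str) -> bool:
    return fingerprint(page_text) != old_fingerprint

baseline = fingerprint("Status: OUT OF STOCK")
print(has_changed(baseline, "Status: OUT OF STOCK"))  # False
print(has_changed(baseline, "Status: IN STOCK"))      # True
```

Everything else in such a product, including scheduling, alerts, and scraping the right element, is plumbing around that comparison.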

One tantalizing possibility for media businesses and content creators is the ability to converse with a body of work, something I think of as conversations with data. Could I query junior-high-school-me via my old journals? Or transform my podcast, and the scores of conversations I’ve had over the past few years, into an interactive database? I raised this possibility with my software developer friend mentioned above. (Peter. His human name is Peter.) Within an hour, Peter had built a working prototype. He literally had a verbal conversation with a bot built around my How To Citizen library, and it accurately responded to him about the guests, their individual and overlapping themes, and my opinion about those themes. Now, imagine doing this for all of Puck. Or the Marvel Universe, or every newspaper article written in your town. 
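To demystify prototypes like Peter’s a bit: such systems typically chop an archive into chunks, find the chunks most relevant to a question, and hand those to the model as context. Here is a deliberately crude Python sketch that uses word overlap where a real system would use vector embeddings; the episode descriptions are invented for illustration.

```python
# A crude sketch of "conversations with data": break an archive into chunks,
# score each chunk against the question, and pass the best matches to a
# language model as context. Plain word overlap stands in for the vector
# embeddings a real retrieval system would use; the episodes are made up.

def score(question: str, chunk: str) -> int:
    return len(set(question.lower().split()) & set(chunk.lower().split()))

def top_chunks(question: str, chunks: list[str], k: int = 2) -> list[str]:
    return sorted(chunks, key=lambda c: score(question, c), reverse=True)[:k]

archive = [
    "Episode 12: a guest discusses voting rights and local organizing.",
    "Episode 30: a conversation about climate justice and citizenship.",
    "Episode 41: two guests debate technology and democracy.",
]
best = top_chunks("which episodes cover voting rights", archive)
print(best[0])  # the voting-rights episode ranks first
```

The retrieved chunks get prepended to the question in the model prompt, which is what lets the bot answer accurately about material it was never trained on.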

We’ll soon be able to interact this way with contracts and financial services, too. The financial app Truebill (now Rocket Money, part of Rocket Companies) gets close to my vision by auto-importing bank and credit card activity to identify subscription services and help me cancel them. But I want something more, for all my financial data. I want a service that will help me find financial products that actually serve me. Or help me understand the terms of service for a product I use. Or help me make sense of my health insurance plan. All of these examples share the common trait of being massively complicated, and needlessly so. My friend Ron J. Williams, an entrepreneur, investor, and partner at a venture studio, recently wrote about his belief that we’re entering an era of “Radical Comprehensibility” in business, as opposed to an era of hiding your terms or pricing and betting your customers won’t find out. As he put it, “in a generative A.I.-everywhere world, it will be tough to bury the implications of choices. Hiding where and how you make money will be impossible over time… because consumers won’t need a calculator and a lawyer to understand traps in the fine print.” Then he said something that stopped me in my tracks: “Imagine being able to ask your insurance company which health insurance policy you should select and actually getting really good answers based upon your prior year’s health activities and family planning goals?”

After seeing that, I recalled my own regular frustration with health insurance in the U.S. I tabbed over to ChatGPT and asked if it could read a PDF file. It essentially told me to bring it on. So I pointed it to the 180-page “Evidence of Coverage and Health Service Agreement” that describes, in impenetrable detail, what I get from my health insurance provider. I asked ChatGPT to help me understand the contents of that document (and stick to the facts only!), and it accurately dissected many points buried deep in the document that would have required multiple manual PDF searches on my part plus my own synthesis of the results. It did this in seconds. When I asked how my plan stacks up against others, it reminded me that “as an A.I. language model, I do not have access to a comprehensive database of all health insurance plans.” It should have added “yet” to the end of that. 

I want and need services like this, to help me make sense of insurance coverage, labor contracts, terms of service, and more. We all could benefit from something like this if we’re empowered to define the use case and customize the interpretation based on our own behavior and history. How about this one: Imagine being able to actually understand what’s in a legislative text! When the average citizen can understand what Congress is up to, the A.I. revolution will truly be here. That, or when Skynet nukes us all.


V. Culture Over Technology

Remember what a big deal it was for your friend group to have that one person who didn’t have a mobile phone? Whatever happened to that person? They got a phone, that’s what happened. Mobile phones are the price of entry in modern society. There are, of course, many tangible benefits to owning one, but we’ve also reached a point where those tangible benefits are overshadowed by powerful network effects: once a critical mass of people use something, to not use it is to exile oneself from the group. In an A.I.-dominated future, are you going to be the one friend physically sending your own text messages, the one employee fully writing all of your emails, the one analyst scanning spreadsheets instead of interacting with the data through an interface powered by a learning model? 

Once enough people demonstrate the ability to work better, faster, or both, these tools will become omnipresent. A.I. will be like a performance-enhancing drug, and we have to decide if we’re OK using it, and under what conditions. There will always be a niche community of resistant old-schoolers and landliners and vinyl fans, but the vast majority goes where the momentum is. That’s the tipping point ahead for A.I. 

Waves of backlash are inevitable, as we’ve already seen with artists voicing their loud objections to A.I. models imitating their work without compensating them. Just wait until you receive your first campaign robocall from an A.I.-generated Barack Obama. But based on the 100 million-and-growing users of the ChatGPT beta, I think we know the critics won’t hold back the tide. I’ve learned repeatedly with technology, and perhaps all human endeavors, that “good enough” most often wins the day. Cell phone quality wasn’t perfect when compared to landlines, but it was good enough to get the job done most of the time. A.I. will get things wrong, and devastatingly so, but I’m betting that with so much money on the line, most times it will be good enough to excuse the failures. And there will be failures, not just of technology but of culture. 

The fact that these learning models are built primarily on English, and with the perspectives of our white-dominant culture, will have a major impact as well, reinforcing Western thought in an automated fashion presented as fact. We’ve heard plenty about the new digital Berlin Wall dividing the Chinese internet and culture from the American and European one. But there are other culture wars brewing in which A.I. will play a role. Yet that built-in advantage won’t stop some from complaining that chatbots are “too woke” because they won’t write odes to Donald Trump and refuse to be baited into saying the n-word. Elon Musk, who helped establish OpenAI, has waded into these waters in a move that ChatGPT could have predicted, whining that the very company he funded is biased against conservatives. Now he’s backing an alternative A.I. project. The same fragmentation we see in the media will increasingly show up in our tech; soon we’ll all have A.I. systems and customizable algorithms to choose from. Are you ready for the future?