Welcome back to The Best & The Brightest. I’m Teddy Schleifer, taking the reins today for a story about OpenAI and Sam Altman, including some new dish on everything from how board member Helen Toner viewed the nonprofit charter, to the closed-door argument that Sam Altman got into with an OpenAI critic in a Stanford class just the other week. My partner Tina Nguyen will be back in the driver’s seat tomorrow.
If this email was forwarded to you, now is as good a time as any to sign up for Puck. You can do so here.
But first, some news on Biden’s re-elect from Tara Palmeri…
- Hillary to the rescue: The dynamic between Barack Obama and his former V.P., President Joe Biden, is a complex and teetering wonder to behold. I’ve heard, as The Washington Post also reported, that Obama’s failure to vocally back his former V.P., or even mention him when prompted on several occasions, has led to understandable tensions between the two camps, which are filled with many former colleagues. Obama offered a few perceived slights of Biden at his Democracy Forum in Chicago earlier this month. (The event was moderated by my Puck partner Baratunde Thurston.)
Absent Obama’s invisible hand, I’ve learned that Biden is getting some fundraising help from Hillary Clinton. The former secretary of state (and winner of the popular vote) is raising money for the Biden Victory Fund via an exclusive dinner and conversation at her home in Washington next Monday. The event is a blessing for high-level Democrats who keep repeating to themselves that they will raise $1 billion to deploy on advertising to buoy the president’s re-election campaign.

And now, the latest Capitol Hill dish from Abby Livingston…

The Johnson Effect & The Dems’ Retirement Paradox

- “Member Dues” Blues: The House G.O.P. vacation from basic governance this fall was not the full-blown fundraising catastrophe for the N.R.C.C. that many had anticipated. And now that campaign finance reports have officially been filed, it’s clear why.
The $5 million that House Republicans raised in October paled in comparison to the Democrats’ $8.1 million haul. But starting on October 24, the day Mike Johnson locked up enough support to become speaker, dozens of G.O.P. members cut big checks to the committee, totaling more than $1 million through the last week of the month. More than a few members wrote $10,000 checks, and several Republicans transferred six figures to the committee. One of the more interesting donations came from Mario Diaz-Balart, who gave $25,000 that week despite having previously been perceived as one of the most frustrated Republicans in the conference.
Of course, both parties expect members to make these kinds of donations from their campaign accounts, a tradition better known as “member dues.” At the outset of every congressional term, leadership assigns members a certain amount of money they’re expected to raise or donate to the House campaign committee, which is determined by committee assignments and party rank (the higher you climb, the higher your dues). But there’s no way to require members to pay, so this surge of donations helped the N.R.C.C. save (some) face in October and demonstrated early enthusiasm for the Johnson regime.
- Democratic Retirements: Another day, another big House retirement. This time, it’s Anna Eshoo, the longtime Energy and Commerce member from the Bay Area, known as perhaps Nancy Pelosi’s closest friend in Congress. Sure, it isn’t terribly surprising that a member might decide to step back after her best pal withdrew from leading the party. What is startling, however, is the rate of Democratic retirements.
Many of these recent exits can be explained by circumstance: Some Democrats are jumping ship to run for other offices—senator, governor, mayor—while others have personal matters forcing them to hang it up. But, in aggregate, Democratic departures are far outpacing Republican retirements, which no one expected when the House G.O.P. was mired in October’s ungovernable quagmire of screaming and crying behind closed doors.
Nevertheless, it’s still early in retirement season, and Republicans seem to be taking a wait-and-see approach to Johnson’s leadership style. But there’s increasing concern within the Democratic caucus that Republican chaos simply isn’t running off Republicans; instead, it’s running off Democrats—the kind who know how committees work and care about passing actual legislation.
Some of the most thoughtful, knowledgeable Democratic members—young and old, alike—have simply had their fill of Republican governance, and see the House of Representatives as a waste of time. It’s been expressed to me that for younger, upwardly ambitious Democrats—the Slotkins, Spanbergers, Porters, etcetera—the House has become such a miserable, frustrating place that running for higher office is an increasingly easy choice.
So far, most of these retirements have been in safe seats that should be easy holds for the respective parties, and opportunities for a younger person to rise to power. But should there be a crush of new blood in either party, it’s likely that anti-establishment sentiment within the House will creep closer to critical mass, and we will see more and more challenges to the House’s once-sacred social mores, comity and traditions.

Altman Alternative Facts

What Sam said at Stanford when confronted about ChatGPT. What people think of Sam’s effective-altruist board member, Helen Toner. And what the hell happened at OpenAI.

There are already multiple books being written about Sam Altman and OpenAI, and surely more in the works after this week, for good reason: On Friday, his board of directors, out of nowhere, forced him out; he staged a tremendously impressive counter-coup by mobilizing hundreds of his employees (including, incredibly, the original coup leader) and won the support of all his investors, including Microsoft, against the company itself. Then Altman promised to take his talents to Redmond, though that’s probably a head fake. In the process, much of OpenAI’s $86 billion valuation temporarily went up in smoke.
This is not only a business story but also a larger, more meaningful plot point in a high-minded debate that has riven ethicists and technologists for the last few years about the speed of artificial intelligence development, the most epic technology that we’ve seen in years. There’s plenty we don’t know—events are still unfolding rapidly, with Altman allegedly joining Microsoft, OpenAI employees threatening to resign en masse, and discussions underway as we speak to potentially reinstate Altman as C.E.O. In the meantime, I’ve been talking with plugged-in sources across Silicon Valley to compile a more textured view.

“Moratoriums… Or Whatever”

Whether or not he was really fired for developing and commercializing ChatGPT too quickly for the board’s liking, Altman made no secret of his occasional frustrations with the guardrails that doomsayers wanted to place on the technology. Sometimes he would betray his disdain for the OpenAI critics who felt he was prioritizing the race to build a hugely profitable superintelligence business over A.I. safety.
For instance, two weeks ago, I’ve learned, Altman found himself squabbling with an activist who had successfully pushed the Federal Trade Commission to investigate OpenAI. Altman had been speaking to a Stanford extension class under the Chatham House Rule, so this readout wasn’t supposed to make it to me. But Altman was clearly miffed with the first questioner who unmuted themself on Zoom. “You can go ahead and write all the complaints you want for, like, a moratorium on GPTs,” he told the activist, after praising OpenAI’s “unusual structure” and “profit cap” and the importance of building A.I. technology that creates maximum benefits and minimal downside. “I don’t think you actually mean that in a serious way, but that’s all right,” he said.
A little while later, when answering an unrelated question about the jobs of the future, he snuck in another shot at the activist. “The world does need some people to write silly letters calling for moratoriums or whatever, but I think most people really do want to create and push things forward. And with new tools, we’ll do that more and more.”
This was vintage Sam, as he’s known in Silicon Valley, where anyone with a third comma in their net worth gets the single-name treatment: well-rounded, at least compared to his three-comma peers, socially adept, but also someone who can struggle to hide his arrogance behind his boyish smile, especially when confronted with criticism he deems “silly.” And, as he told the activist, he was an absolute believer in his “unusual” business structure, which served him well… at least until Friday.

Is Effective Altruism “Done”?

The OpenAI saga has yielded many losers. One, without a doubt, is effective altruism, which over the past two years has turned from an ascendant philosophical force in Silicon Valley to a punchline. “OpenAI’s board members’ religion of ‘effective altruism’ and its misapplication could have set back the world’s path to the tremendous benefits of artificial intelligence,” wrote Vinod Khosla, the first outside investor in OpenAI, on Monday.
Khosla speaks for plenty of industry intellectuals who are totally fine with applying a more data-driven approach to philanthropy and decision-making—whether they want to call it E.A. or “rationalism” or good old-fashioned “utilitarianism.” Those are all terms that plenty of Silicon Valley leaders who read Scott Alexander and Tyler Cowen proudly adopted not too long ago. But these people now view E.A. as a value-destruction creed that has led to the annihilation of two multibillion-dollar startups (both led by people named Sam). “E.A. is done,” one Silicon Valley politico argued. “Can be reconstituted later maybe. But these people have zero political skill.”
I don’t know that E.A. is “done”—OpenAI’s hard-core E.A. rival, Anthropic, is doing fine—but the philosophy’s most famous practitioner, Sam Bankman-Fried, is now in prison after twisting E.A.’s ends-justify-the-means ethos. In interviews, S.B.F. spoke frequently about the importance of making as much money as possible in order to funnel those economic winnings into combatting “existential” risks such as pandemics. (Ironically, the E.A. crisis at OpenAI might boost FTX’s portfolio company, Anthropic, making FTX’s creditors more whole…)
At OpenAI, on the other hand, it appears that board members Helen Toner and Tasha McCauley—both card-carrying effective altruists—were genuinely concerned that Altman, for all his genuflections toward A.I. safety, was prioritizing the commercialization of ChatGPT over whatever danger it poses to humanity. At the end of the day, the board decided that it would be better to fire Sam, potentially sabotaging an $86 billion company, than risk some theoretical future apocalypse.
The effective altruist takeover of the OpenAI board occurred gradually, even accidentally, as more traditional board members withdrew from the nonprofit—Reid Hoffman, Will Hurd, Shivon Zilis—and were never replaced. That left Toner and McCauley as two of the three independents. Ilya Sutskever, the OpenAI co-founder and board member who was initially behind the Altman coup, isn’t an E.A. acolyte himself, but he is part of a veritable cult of existential-risk fanatics—called “Doomers” by critics—who like to debate their estimates for “p(doom),” or the probability that A.G.I. will lead to the destruction of society. You blink, and suddenly the Doomers control half the board.
Another, perhaps better, explanation for the chaos that unfolded over the weekend is that some of the board members were simply in over their heads. Toner, in particular, was described to me as probably lacking the experience to oversee one of the world’s leading technology companies. An Australian 30-something, process-oriented Washington policy type who was an early employee at GiveWell and Dustin Moskovitz’s philanthropy before joining a think tank at Georgetown, Toner is an E.A. but not part of the “religion,” as one person put it. “On the normal end of the E.A. spectrum for sure,” said one person who knows her. “Smart and sober, unusually savvy given her experience,” said another source. “But of course [she] has never navigated anything at this scale or complexity.” “This is almost like a Janet Yellen personality,” said a third, “not a Caroline Ellison.” This friend told me, based on their conversations, that they thought Toner had a “narrow reading of the charter” of OpenAI.
As for McCauley, the Bard-educated roboticist better known in some quarters as the wife of actor Joseph Gordon-Levitt? Someone who knows both her and Toner told me that they were more surprised by McCauley’s actions because she is “more startup friendly” and “very experienced.” (Though this person also said McCauley was a real effective altruist, having hung out in E.A. circles for about eight years.) Another friend of McCauley’s argued that she wouldn’t have made a “rash” decision without a very good reason. “I would not expect them to take it lightly to make an unprecedented move in A.I. without thinking very carefully.”
All this gets back to the bigger question that has mostly gotten lost amid the flurry of headlines: Were these smart, if inexperienced, people on OpenAI’s board onto something when they decided to fire Altman? We should allow for that possibility, and I fear we haven’t. The governance model of OpenAI, after all, deserves to be taken seriously. Some people have been aghast that Altman could be fired by people with no skin in the game. But of course, that was the entire point of the board—to adhere to a nonprofit charter that describes “humanity” as the company’s “primary fiduciary responsibility.” The governance model is unusual and certainly open to criticism, but OpenAI’s board being bad at capitalism is its raison d’être. This was a feature, not a bug. (One FTX executive, referencing Bankman-Fried’s well-known hatred of governance, joked in a text to me the other day: “S.B.F.’s ideas on boards looking less crazy.”)
To some extent, in fact, you could argue that Altman’s defenestration and the revolt that followed was a victory for the A.I. safety crowd—especially since we still don’t really know why the board fired him. For the most part, the tech media has been pretty dismissive of the board’s logic and has barely considered the possibility that there could be a there there. If the board really had reason to believe that ChatGPT’s capabilities were crossing over into dangerous territory, they would be wise to leak a few specific anecdotes to get their side of the story out.
Anyway, this is precisely why OpenAI placed its for-profit division under the control of the nonprofit entity in 2019, when it opened the door to big-money investors like Microsoft. And it appears to have worked precisely as intended, for better or for worse. Silicon Valley heavyweights like OpenAI investor Josh Kushner have tried to put their thumbs on the scale, deploying their charms of persuasion to get Sam rehired, but short of an explosive lawsuit against a portfolio company, there is no better option than to convince the board that it was wrong. Khosla was even on Twitter asking random followers to slide into the DMs of Emmett Shear, the new C.E.O., and gleefully sharing the best oppo he could get his hands on. Ron Conway was jumping from the top rope, comparing the situation to Steve Jobs’s firing from Apple and insisting the “coup” must be stopped. This is a new era of investor warfare.
After all, Sam and his deputy Greg Brockman can get new jobs and continue their A.I. research, but the investors in OpenAI risk being the true losers here. The equity in their expected $90 billion startup could collapse to zero. You could argue—and some people have to me, privately—that investors like Sequoia should have had the foresight never to agree to the structure in the first place, which gave them minimal control or information rights for their investment. It’s kind of their fault that they agreed to invest their limited partners’ money in a startup they couldn’t protect, no? At the time, they presumably justified getting involved as a coup. But they didn’t de-risk their downside—in case, you know, there was a real coup.
It is a little hyperbolic to compare the firing of Altman with that of Jobs, but the OpenAI board has obviously botched the public communications here, and Sam has played it perfectly. It is clear how much he has benefited from the general goodwill he has accumulated over the last two decades.
The reality, unspoken because no one wants to break the fourth wall, is that lots of reporters like Sam. I do! He makes time for us. He is a fun chat. He understands the game, our egos, our insecurities. He talks to you when he doesn’t have to, and is especially friendly at a time when many Silicon Valley personalities are viscerally anti-media. He wasn’t necessarily doing that with an eye on the moment, years down the road, when he would get shit-canned, but Sam, I think, understood that it wouldn’t hurt to cultivate the media even when there were better uses of his half-hour. You never know!
That practice, without a doubt, has paid dividends this week. Reporters came into the storyline with a sense that Sam is a reasonable, available, honest person, and the burden of proof, so to speak, was on OpenAI’s board to convince them otherwise. That was especially challenging given the relative silence of this largely anonymous, amorphous group. Sam, on the other hand, was probably going to be Time’s Person of the Year. He still might be…
And then there’s the tactics. Have you ever seen an executive use Twitter to rally his fans in such a wholesome, non-cringey way? Sam’s lowercase tweets—and the heart emojis that followed from hundreds of his employees—comprised one of the most effective social-media marketing campaigns I’ve ever seen. (That’s to say nothing of the angsty tweet featuring his guest pass at OpenAI HQ, which is headed straight for Corporate America’s Hall of Fame.)
Meanwhile, the board? They paid zero attention to the public narrative. The blog post on Friday evening, with its vague innuendo about Sam’s “candor” problem and no explanation or examples, seemed to be a bet that they could escape without mounting a public defense of their decision. Shocker: that didn’t work, especially when they appear to have done zero thinking about the legal and P.R. questions at play.
You could argue—and some have—that Sam’s impeccable reputation is already fading. It used to be that Sam’s critics would only argue privately that he never built a successful startup from scratch, or that he is a bit too addicted to the limelight, or is essentially a marketer with impeccable connections and the right social graces but not actual startup chops. Now for the first time, serious people are accusing him publicly of being dishonest—and quietly celebrating that the once-invincible boy king has been taken down a peg, if only temporarily.
I bet Sam ends up back at OpenAI at the end of this, but even then, everything won’t be hunky-dory. At the heart of OpenAI was a fundamental debate over how to commercialize a potentially dangerous technology. As an E.A.-aligned source close to all of this put it to me the other day, OpenAI as we know it was built on a contradiction between the objectives of its for-profit business and its nonprofit structure. Someone, eventually, had to be wrong. We just still don’t know who.
The last question Sam got at Stanford had to do with the so-called Eisenhower Matrix, or the tradeoff between solving “important” things, like safeguarding the world from A.I., and “urgent” things, like advancing it. Sam told the students they had to do both.
“Unfortunately I think both sides of that debate really devalue the other. And that has not been helpful,” he said. “You caring about something doesn’t mean you have to tell other people that they’re wrong or bad or whatever to care about the other side of it. This is, I think, a time where we can talk about the whole thing.”

FOUR STORIES WE’RE TALKING ABOUT

Haley Mary
Can Nikki Haley capitalize on her surging campaign?
PETER HAMBY

Puck is published by Heat Media LLC. 227 W 17th St New York, NY 10011.