Happy Monday, I’m Eriq Gardner.
Welcome back to The Rainmaker, my new private email focused on the litigators and dealmakers remaking Hollywood, Silicon Valley, Washington, and Wall Street. As a reminder, this newsletter will remain free for another couple weeks before being limited to Puck subscribers. If you haven’t yet subscribed and wish to do so, click here. And for those who have already signed up, thank you so much! You’re supporting quality journalism from a company that’s co-owned by its authors.
In today’s column, I’m profiling a lawyer who is forcing courts around the world to reckon with artificial intelligence. I’m also breaking some news about a lawsuit he’ll soon be filing that could impact the future of creativity.
But first…
Netflix’s Hollywood Red Line
I’ve been talking with a lot of transactional attorneys in Hollywood, and the No. 1 topic by far right now is what’s going on with Netflix. The streamer was, until very recently, the most profligate content machine in Hollywood, spending tens of billions of dollars every year on dozens upon dozens of shows, many of them low quality. Now, with its stock in the toilet and layoffs on the horizon, everyone in town is looking to Reed Hastings and Ted Sarandos for clues as to how Netflix might begin to pare back or restructure future deals.
So what can we learn from those who are sitting on the other side of the negotiating table from Netflix? The streamer is thinking a lot these days about new revenue opportunities. The company is pushing hard to capture ancillary rights including live stage adaptations, merchandising, and especially the ability to do video games. Perhaps a bit more surprising, Netflix wants the contractual ability to take content off platform. Why? Dealmakers can only speculate about syndication plans or even an eventual sale of the entire company. Also, while there hasn’t been much pullback just yet on new projects and renewals, talent lawyers are deeply concerned that Netflix’s free-spending days are over.
But the bigger question on the minds of many dealmakers is what the Great Netflix Correction could mean for how creatives get paid. In the past, Netflix dished out humongous nine-figure packages for the likes of Ryan Murphy, Shonda Rhimes, and Kenya Barris, but these deals were frontloaded with fixed fees rather than allowing creators to share in the financial success of whatever they created. So, no profit participation or stock options. The benefit is that Netflix could keep all the rewards for a hit like Bridgerton for itself plus keep viewership data secret and avoid those “Hollywood Accounting” fights. But, of course, Netflix also shouldered a lot of expense for shows and showrunners that bombed.
Now that Netflix is reconsidering quite a bit about its business model—including its previous hostility to advertising (which may necessitate more revelations of viewership data)—the streamer in theory could pursue an alternative approach that keeps costs light on the front end in return for sharing more riches on the back end. This might be appealing to talent, whose attorneys have been counseling clients that they should take the long view and push harder on this front. “Think about holograms,” they say.
But fuggedaboutit. At least for now, Netflix is not open to backend compensation for TV show creators, I hear. And the same really goes for many of the streamers that have been following Netflix’s lead and moving away from the profit participation model (HBO Max, for instance, is adopting fixed bonuses upon production and distribution milestones). Which, given the headspace devoted to everything from merchandising to video games (with NFTs, holograms, and the metaverse perhaps to come), could become ever more meaningful.
The Hollywood A.I.-I.P. Supernova
Will the robots replace us all one day? Who knows, but chances are they will eventually learn how to create a superhero movie. Ergo, the start of one of the great legal debates in Hollywood history.
The A.I. Wars are almost here. No, I’m not talking about Terminator or even a crackdown on Twitter bots. Instead, we’ll soon be witnessing a series of extraordinary test cases designed to force the American legal system to reconsider the concept of authorship as artificial intelligence begins to write short stories or pop songs. It may sound like a Zuckerbergian fever dream, but A.I. could soon be creating blockbuster movies and life-saving pharmaceuticals, too—multi-billion dollar products with no human creator.
The legal battle has already begun. Sometime in the next couple of weeks, I’ve learned, a lawsuit will be filed that challenges the U.S. Copyright Office’s recent decision to deny a registration listing an “author” identified as “Creativity Machine.” Then, a few weeks later, a federal appeals court will hear oral arguments in Thaler v. Hirshfeld, an under-the-radar but potentially blockbuster case concerning whether A.I. can be listed as the “inventor” in a patent application. Meanwhile, authorities in the European Union and 15 other countries are being asked to make similar determinations to properly credit the achievements of A.I.
The instigator for all this action is a Yale-trained lawyer named Ryan Abbott, a globetrotting, modern-day renaissance man who has recruited Dr. Stephen Thaler, a pioneer in the field of A.I. who developed the Creativity Machine and other modern engineering marvels, as his model client. Abbott, a partner at the Los Angeles-based firm Brown Neri Smith & Khan, also holds a medical degree from the University of California, San Diego, and is a prolific scholar. Besides practicing law in both the United States and England, he teaches at the University of Surrey School of Law and the David Geffen School of Medicine at UCLA.
Most importantly, Abbott is obsessed with our technological future. In writings including The Reasonable Robot, he examines whether and how the law should discriminate between A.I. and human behavior. For example, if businesses get taxed on the wages of their human workers but not their robot ones, he asks, do we incentivize automation? And if we hold the suppliers of autonomous driving software to a punishing tort standard (i.e., strict liability rather than ordinary negligence), will there come a time when we’re actually discouraging the adoption of technology that would prevent accidents on the road?
Currently, the topic that Abbott is pushing courts to clarify is the relationship between A.I. and intellectual property. In a nutshell: Must government agencies accept and properly record the ingenuity that’s coming from our increasingly sophisticated machines? Abbott argues yes and believes it’s important that this happens sooner rather than later. “People always say that technology lags behind the law,” he told me. “This is an opportunity for the law to play a role in the development of technology.”
Accordingly, Abbott will soon be filing a lawsuit in federal court that appeals the Copyright Office’s rejection of a registration for “A Recent Entrance to Paradise,” a two-dimensional artwork created by Dr. Thaler’s Creativity Machine. In a Feb. 14 determination (read here in full), the Copyright Review Board told Abbott that the work was ineligible for registration because “human authorship is a prerequisite to copyright protection in the United States.” What’s more, the Board threw cold water on the controversial proposition that the work-for-hire doctrine—under which a work created by an employee is deemed authored by the employer, which, as Abbott points out, can be a non-human entity like a corporation—opens a similar path for the owners of A.I. “The ‘Creativity Machine’ cannot enter into binding legal contracts,” stated the Copyright Board in an assessment that will now be reviewed. “[T]he work-for-hire doctrine only speaks to the identity of a work’s owner, not whether a work is protected by copyright.”
This new case focusing on copyright protection for A.I.-generated work could become meaningful for the creative industry as studios and filmmakers explore A.I.’s potential. In recent years, for example, Warner Bros. has used A.I. to guide its decision-making about which projects to pursue. In Japan, a new film about a boy’s dislike of tomatoes, based on a script by A.I., is now hitting the festival circuit. There’s now an A.I. tool that senses the tone of any video and recomposes music for a score. Sony, in fact, has tried to use A.I. to make new music that sounds like The Beatles, and Spotify is experimenting too. And as anyone who has seen the deepfake “Tom Cruise” knows, A.I. can do a pretty good job of replicating actors (something that’s of increasing concern to actor unions). Put it all together, and we’ll likely soon see A.I. acting as the auteur on a major motion picture. And not just for movies, either: A.I. is increasingly involved in video game development, too.
The implications are immense. For example, at least one scholar has pointed out that Spotify could use A.I. creation to develop tunes it would then own, which could diminish the pot of royalty payments for human musicians. Abbott is mindful of such talk. In one law review article, he writes, “If A.I. does not have rights and a human author is no longer in the picture, is there still a case for copyright if we are no longer concerned with protecting authors in this context? Do we still want to apply copyright the same way? Do we need human centric laws? Do the laws that currently apply to people also change once A.I. is in the picture?”
While this will be the first lawsuit that directly confronts a claim of A.I. authorship, courts have been exploring the derivation of creativity and deciding which works get protected for quite some time. Back in 1884, for instance, the Supreme Court tackled whether photographs exhibited sufficient originality or were mere mechanical reproductions of their subjects. A few decades later, a British justice had to wrestle with a copyright case that involved séances and spirit-guided “automatic writing.” Authorship by non-human spiritual beings would come up in U.S. courts, too. Not to mention supposed authorship by Jesus himself.
More recently, and rather famously, a court had to tackle whether a “monkey selfie” was eligible for copyright protection. In fact, when that case (Naruto v. Slater) was being decided a few years ago, the PETA lawyer who was ostensibly representing the monkey told the court that the issue of non-human authorship presented in the case would impact the future of artificial intelligence. In the end, a judge concluded that monkeys don’t have standing to go to court to enforce legal rights, leaving no definitive answer to the question of whether animals can be authors of copyrighted work. Hey, guess what? Abbott’s forthcoming case may settle that issue as well.
Scholars, sometimes at the behest of Congress, have been writing about and debating whether machines can truly be deemed “authors” and “inventors” for decades. For the imaginative academics who have engaged on this topic, the issue harks back to the original purpose of intellectual property—as the U.S. Constitution puts it, “To promote the progress of science and useful arts.” How should society properly align incentives to encourage innovation and creativity? Some scholars have argued this is a “bad penny of a question,” since there tends to be a programmer behind the algorithm. Technology is a tool. Maybe there’s no need to worry about authorship from computers.
From Abbott’s vantage point, there’s a practical purpose to properly crediting the source of creativity—and it has nothing to do with giving a boastful A.I. machine an ownership claim over the fruits of its labor. In fact, Abbott doesn’t want that. He wants the A.I.’s owner to get those benefits. But he worries that won’t be the outcome if there’s no integrity in the patent and copyright systems. As his court papers in the A.I. patent case put it, “Allowing a person to be listed as an inventor for an A.I.-Generated Invention would not be unfair to an A.I., which has no interest in being acknowledged, but allowing people to take credit for work they have not done would devalue human inventorship.”
Abbott gives an example. He asks me to imagine a pharmaceutical company directing an A.I. to find a cure for Covid. Let’s say the A.I. finds a way to create antibodies. The company then lists its top scientist as the inventor of this novel approach while taking ownership of the patent for itself. “The Patent Office doesn’t challenge when someone says they are an inventor,” he notes. “But a patent can be invalidated if you don’t follow the rules, and there’s certainly a scenario where the scientist is deposed in some patent infringement suit down the line and has to admit having an A.I. solve the critical problem.”
The notion that an important patent could be invalidated some day thanks to the role of A.I. in the development of the invention leads to a key question: Would that pharma company do everything it could to cure a disease if certain paths resulted in no patent protection?
With such uncertainty in the background, Abbott is doing what he can now to list one of Dr. Thaler’s machines, DABUS (“Device for the Autonomous Bootstrapping of Unified Sentience”), as the inventor of a beverage container with a design based on fractal geometry. The Patent Office has rejected the attempt. So, in June, Abbott will appear before the Federal Circuit to dispute the conclusion that an inventor must be a “natural person.” His primary argument is that nothing in the U.S. Constitution or the patent statutes expressly excludes artificial intelligence.
Au contraire, respond the nation’s patent authorities, pointing out that the Patent Act uses the word “individual,” plus “himself” and “herself.” Sharing gender pronouns? Here, it may be legally significant. As the government argues, “By using personal pronouns… Congress only strengthened the conclusion that it was referring to a ‘human being’ in referencing an ‘individual.’”
The appellate hearing will take place on June 6.
Few industries are as heavily unionized as entertainment, but there’s one huge exception: video games. Despite long hours, lagging compensation, and frequent talk over the years about the need for collective bargaining, game developers have remained largely non-unionized. Might that change? On April 22, National Labor Relations Board regional director Jennifer Hadsall issued a 27-page decision that allows 21 quality assurance testers at Activision Blizzard’s Raven Software (responsible for Call of Duty) in Wisconsin to hold a union election. Hadsall rejected Activision’s arguments that organizational changes at the studio required dismissal of a petition by the Communications Workers of America. Activision is weighing an appeal.
Sorry, Elon Musk. Having a babysitter who vets your tweets isn’t always a First Amendment violation… Speaking of free speech, ESPN anchor Sage Steele is suing her employer for allegedly punishing her for speaking out about a COVID-19 vaccine mandate. She’s suing in Connecticut, which extends First Amendment protections to the workplace, specifically by forbidding employers from disciplining employees on the basis of speech, religion, assembly, or the press (so long as such activity doesn’t interfere with job duties). Perhaps Musk should consider moving Twitter HQ to the Land of Steady Habits… A public defender gets to move forward in a lawsuit that aims to hold the federal judiciary accountable for alleged sexual harassment. It’s nice to know that appellate judges can exercise impartiality here… The Yankees fought hard in court to keep the contents of an M.L.B. letter sealed, but the Second Circuit made it public anyway. The result was the minor revelation that the Yankees were once fined $100K over an inappropriate but already known use of technology. Here’s how I feel about this… Warren Buffett is betting Biden’s antitrust cops will allow Microsoft to acquire Activision… TikTok doesn’t like how some law firms are soliciting opt-outs from a $92 million privacy settlement and is letting a judge know… Amber Heard is set to testify this week at Johnny Depp’s defamation trial… Following its acquisition of Bandcamp, Epic Games is now seeking an injunction to stop Google from removing the app from its Play Store. Epic says removal would mean irreparable harm to musicians… R.I.P. Senator Orrin Hatch, who, for decades, was a pretty underrated influence in entertainment and media thanks to his role championing intellectual property rights and shepherding key legislation on the subject.
For those who want something to listen to, I was on a couple of Puck’s podcasts last week, including Matt Belloni’s “The Town” and Peter Hamby’s “The Powers That Be,” discussing last week’s topics, plus the legal backstory behind the NFL Draft. (Hint: I may have mentioned the words “non-statutory labor exemption.”)
FOUR STORIES WE'RE TALKING ABOUT
A MAGA Casualty? |
Notes on Dr. Oz’s battle against beloved former hedge fund titan David McCormick for the hearts and minds of the Rust Belt.
TINA NGUYEN |
Netflix Player Haters |
Jon Kelly talks with Peter about the public market's media meltdown. Plus, Dylan Byers recaps the White House Correspondents Weekend. |
PETER HAMBY |
Elon's Eject Button |
Musk has essentially only bought a $1 billion call option to buy Twitter, but various market moves suggest things are lining up.
WILLIAM D. COHAN |
The PayPal Mafia |
Notes on the B.Y.U. alum running Musk’s family office and Thiel’s next moves. |
TEDDY SCHLEIFER |