Chris Lehane is among the best in the business at making bad news disappear. Al Gore's press secretary during the Clinton years, Airbnb's chief crisis manager through every regulatory nightmare from here to Brussels – Lehane knows how to spin. Now he's two years into what might be his most impossible gig yet: as OpenAI's VP of global policy, his job is to convince the world that OpenAI genuinely gives a damn about democratizing artificial intelligence while the company increasingly behaves like, well, every other tech giant that's ever claimed to be different.
I had 20 minutes with him on stage at the Elevate conference in Toronto earlier this week – 20 minutes to get past the talking points and into the real contradictions eating away at OpenAI's carefully constructed image. It wasn't easy or entirely successful. Lehane is genuinely good at his job. He's likable. He sounds reasonable. He admits uncertainty. He even talks about waking up at 3 a.m. worried about whether any of it will actually benefit humanity.
However good intentions don’t imply a lot when your organization is subpoenaing critics, draining economically depressed cities of water and electrical energy, and bringing lifeless celebrities again to life to say your market dominance.
The company's Sora problem is really at the root of everything else. The video generation tool launched last week with copyrighted material seemingly baked right into it. It was a bold move for a company already getting sued by The New York Times, the Toronto Star, and half the publishing industry. From a business and marketing standpoint, it was also smart. The invite-only app soared to the top of the App Store as people created digital versions of themselves; OpenAI CEO Sam Altman; characters like Pikachu and Cartman of "South Park"; and dead celebrities like Tupac Shakur.
Asked what drove OpenAI's decision to launch this latest version of Sora with these characters, Lehane offered that Sora is a "general purpose technology" like the printing press, democratizing creativity for people without talent or resources. Even he – a self-described creative zero – can make videos now, he said on stage.
What he danced around is that OpenAI originally "let" rights holders opt out of having their work used to train Sora, which isn't how copyright use typically works. Then, after OpenAI noticed that people really liked using copyrighted images, it "evolved" toward an opt-in model. That's not iterating. That's testing how much you can get away with. (By the way, though the Motion Picture Association made some noise last week about legal threats, OpenAI appears to have gotten away with quite a lot.)
Naturally, the situation brings to mind the aggravation of publishers who accuse OpenAI of training on their work without sharing the financial spoils. When I pressed Lehane about publishers getting cut out of the economics, he invoked fair use, the American legal doctrine that's supposed to balance creator rights against public access to knowledge. He called it the secret weapon of U.S. tech dominance.
Maybe. But I'd recently interviewed Al Gore – Lehane's old boss – and realized anyone could simply ask ChatGPT about it instead of reading my piece on TechCrunch. "It's 'iterative,'" I said, "but it's also a replacement."
Lehane listened and dropped his spiel. "We're all going to need to figure this out," he said. "It's really glib and easy to sit here on stage and say we need to figure out new economic revenue models. But I think we will." (We're making it up as we go, is what I heard.)
Then there's the infrastructure question nobody wants to answer honestly. OpenAI is already operating a data center campus in Abilene, Texas, and recently broke ground on a massive data center in Lordstown, Ohio, in partnership with Oracle and SoftBank. Lehane has likened the adoption of AI to the arrival of electricity – saying those who accessed it last are still playing catch-up – yet OpenAI's Stargate project is seemingly targeting some of those same economically challenged places to set up facilities with their attendant and massive appetites for water and electricity.
Asked during our sit-down whether these communities will benefit or merely foot the bill, Lehane went to gigawatts and geopolitics. OpenAI needs about a gigawatt of energy per week, he noted. China brought on 450 gigawatts last year, plus 33 nuclear facilities. If democracies want democratic AI, he said, they have to compete. "The optimist in me says this will modernize our energy systems," he said, painting a picture of a re-industrialized America with transformed power grids.
It was inspiring, but it was not an answer about whether people in Lordstown and Abilene are going to watch their utility bills spike while OpenAI generates videos of The Notorious B.I.G. It's very much worth noting that video generation is the most energy-intensive AI out there.
There's also a human cost, one made clearer the day before our interview, when Zelda Williams logged onto Instagram to beg strangers to stop sending her AI-generated videos of her late father, Robin Williams. "You're not making art," she wrote. "You're making disgusting, over-processed hotdogs out of the lives of human beings."
When I asked how the company reconciles this kind of intimate harm with its mission, Lehane answered by talking about processes, including responsible design, testing frameworks, and government partnerships. "There's no playbook for this stuff, right?"
Lehane showed vulnerability in some moments, saying he recognizes the "enormous responsibilities that come with" all that OpenAI does.
Whether or not those moments were designed for the audience, I believe him. Indeed, I left Toronto thinking I'd watched a master class in political messaging – Lehane threading an impossible needle while dodging questions about company decisions that, for all I know, he doesn't even agree with. Then news broke that complicated that already complicated picture.
Nathan Calvin, a lawyer who works on AI policy at a nonprofit advocacy group, Encode AI, revealed that at the same time I was talking with Lehane in Toronto, OpenAI had sent a sheriff's deputy to Calvin's house in Washington, D.C., during dinner to serve him a subpoena. They wanted his private messages with California legislators, college students, and former OpenAI employees.
Calvin says the move was part of OpenAI's intimidation tactics around a new piece of AI regulation, California's SB 53. He says the company weaponized its ongoing legal battle with Elon Musk as a pretext to target critics, implying Encode was secretly funded by Musk. Calvin added that he fought OpenAI's opposition to California's SB 53, an AI safety bill, and that when he saw OpenAI claim that it "worked to improve the bill," he "literally laughed out loud." In a social media thread, he went on to call Lehane, specifically, the "master of the political dark arts."
In Washington, that might be a compliment. At a company like OpenAI, whose mission is "to build AI that benefits all of humanity," it sounds like an indictment.
But what matters far more is that even OpenAI's own people are conflicted about what they're becoming.
As my colleague Max reported last week, numerous current and former employees took to social media after Sora 2 was released, expressing their misgivings. Among them was Boaz Barak, an OpenAI researcher and Harvard professor, who wrote of Sora 2 that it's "technically amazing but it's premature to congratulate ourselves on avoiding the pitfalls of other social media apps and deepfakes."
On Friday, Josh Achiam – OpenAI's head of mission alignment – tweeted something even more remarkable about Calvin's accusation. Prefacing his comments by saying they were "possibly a risk to my whole career," Achiam went on to write of OpenAI: "We can't be doing things that make us into a frightening power instead of a virtuous one. We have a duty to and a mission for all of humanity. The bar to pursue that duty is remarkably high."
It's worth pausing to consider that. An OpenAI executive publicly questioning whether his company is becoming "a frightening power instead of a virtuous one" isn't on a par with a competitor taking shots or a reporter asking questions. This is someone who chose to work at OpenAI, who believes in its mission, and who is now acknowledging a crisis of conscience despite the professional risk.
It's a crystallizing moment, one whose contradictions may only intensify as OpenAI races toward artificial general intelligence. It also has me thinking that the real question isn't whether Chris Lehane can sell OpenAI's mission. It's whether others – including, critically, the other people who work there – still believe in it.