Ex-Google CEO Praises Altman’s Handling of OpenAI Ouster
OpenAI CEO Sam Altman is “genuinely a hero,” ex-Google CEO Eric Schmidt said Tuesday while discussing Altman’s recent ouster and reinstatement at the company (see 2311170065).
“It’s pretty simple,” Schmidt said during Axios’ AI+ Summit. “The board tries to fire Sam. Sam fires the board. Don’t fire Steve Jobs. I mean come on, guys. I mean work it out. Sam is genuinely a hero.” There’s a “simple rule” in Silicon Valley, said Schmidt: “These founder-CEO types are unusual, they’re incredibly valuable, and they change the world.”
OpenAI’s board of directors announced Altman’s removal Nov. 17 without giving a specific reason. President and co-founder Greg Brockman quit in protest, and Altman loyalists threatened to follow them to Microsoft, which holds a 49% stake in OpenAI’s for-profit arm. OpenAI announced Altman’s return Nov. 22. He’s now in a much stronger position and will continue to be a key figure in the future of AI, said Schmidt.
A major concern with AI technology will be supercomputers’ ability to “set their own objectives,” thereby bypassing human decision-making, said Schmidt. Some believe computers will be able to do this within 10 years, he said. Two years ago, many thought that possibility was 20 years away, he said. Now some believe AI systems will gain the capability within two to four years, he said: “I’m going to say five to 10.”
This ability raises major concerns about access to dangerous weapons, he said. Humans theoretically might not know if the systems are being truthful about access, he said: “It’s all theoretical, but these are some of the concerns that the technical people talk about because the loss of human agency is a really big deal.” That loss is “possible,” and “many people think it’s probable,” he said. “I think it’s highly probable we’ll get to savants.”
One concern is the concentration of ownership over AI technology, Consumer Financial Protection Bureau Director Rohit Chopra said during a separate appearance. Rarely has the U.S. “prospered” by having a few individuals from the “corporate royalty dictating” the future, said Chopra: “You always want to be concerned when there’s any type of market structure that quickly goes to just a couple players. Ultimately, we won’t be able to unleash the progress of innovation really from that.” Sen. Josh Hawley, R-Mo., and others on Capitol Hill have raised concerns about the dominance of Microsoft, Meta and Google over AI (see 2307250063).
There are questions about two or three companies being able to “monetize” society’s collective data, Chopra added. This “winner-take-all” dynamic means a few individuals could have “enormous control over decisions” made all over the world, he said. AI technology could exacerbate issues related to fraud, national security, intellectual property, privacy and competition, and regulators should focus on how to use existing law to root out wrongdoing, he said.
Overregulation is a concern, said Sen. Todd Young, R-Ind., a leading member of an AI working group that Senate Majority Leader Chuck Schumer, D-N.Y., established. Heavy-handed regulation is a particular concern because foreign adversaries like China have no intention of “throttling” their domestic innovators and entrepreneurs, said Young. The leading AI companies should embody American values on privacy and bias, he said: That will force Chinese companies that want to “sell into a market” to meet certain standards.
Schumer plans to host the Senate’s seventh AI Insight Forum on Wednesday. The focus will be on transparency, explainability, intellectual property and copyright, Schumer’s office said Tuesday. Attendees will include NAB CEO Curtis LeGeyt, News Media Alliance CEO Danielle Coffey, SAG-AFTRA National Executive Director Duncan Crabtree-Ireland, MPA Senior Vice President Ben Sheffner and Dennis Kooker, Sony Music Entertainment president-global digital business & U.S. sales.