How OpenAI’s Sam Altman combines Steve Jobs and Elon Musk
If you asked AI to create the ultimate tech bro, it would probably come up with Sam Altman.
He models his style on Steve Jobs, vies with Elon Musk on his vision for humanity, has bio-hacking ambitions like Jeff Bezos and a property empire like Mark Zuckerberg, and is now entangled in more drama than all of them combined.
The 38-year-old wunderkind and admitted doomsday prepper, who keeps a secret lair stocked with guns and gold, is one of the key players in the artificial intelligence revolution, with a fortune north of $500 million, and is currently embroiled in Silicon Valley’s biggest soap opera.
Altman, a prodigy who began coding at 8 in his native St. Louis, launched his first startup at 19 and frequently draws parallels between himself and physicist J. Robert Oppenheimer, was CEO of OpenAI, the artificial intelligence research firm, until he was abruptly sacked Friday, prompting an extraordinary drama that appeared to have ended Tuesday night, at least temporarily.
Nobody, not even some of the top names in artificial intelligence, knows exactly why. But many say the mystery has to do with the pace at which artificial intelligence is progressing and how Altman is directing it.
OpenAI chief scientist Ilya Sutskever reportedly voiced concerns to OpenAI’s board about how fast Altman was moving without enough attention being paid to safety. The board fired Altman on Friday; Sutskever later voiced regret.
In six days of corporate scheming, Altman tried to mount a comeback from a “war room” in his San Francisco mansion, failed on the first attempt, took a job with OpenAI’s biggest investor, Microsoft, won the support of some 700 of OpenAI’s 770 workers and, late on Tuesday, was reinstated as CEO.
Altman co-founded OpenAI in 2015 with Elon Musk, the CEO of Tesla, SpaceX and X, formerly Twitter. Their original aim was to prevent artificial intelligence from accidentally wiping out humanity.
Musk has since cut ties with the company and has criticized how it’s being run.
Musk, for his part, apparently supported Altman’s ouster. “I am very worried,” Musk posted to X on Sunday, citing Ilya Sutskever’s reported concerns over Altman. “Ilya has a good moral compass and does not seek power. He would not take such drastic action unless he felt it was absolutely necessary.”
OpenAI is best known for unleashing ChatGPT, a form of narrow artificial intelligence that responds to one task at a time, on the world.
But what everyone is waiting for is AGI, or artificial general intelligence, in which machines basically will be able to do a lot of what humans do — just a billion times better and faster.
Though Altman himself and others say that mastery of AGI is still a ways off, perhaps not arriving until 2029 or 2030, others believe researchers may already be there.
“If AI is going to be a living god on earth, AGI is probably going to be 80 to 90 percent of that step,” one AI researcher who did not want to be publicly identified told The Post.
Ray Kurzweil, a principal engineer at Google and one of the world’s foremost experts on AI, told The Post he is on Altman’s side in the mess.
“His firing was very unusual, shocking really,” Kurzweil said. “I’ve never seen anything like this happen before. This happened out of the blue.
“It has to do with people’s concerns about whether the current process keeps the cutting edge of AI safe or not. But this is not the way to handle it. I think Altman was careful on that point.”
But the MIT-educated Gary Marcus, a leading expert on AI and the founder of Robust.AI and Geometric.AI, told The Post that he believes the OpenAI board probably did the right thing in kicking Altman out when they did.
“I think that the (OpenAI) board thinks that Sam was not candid enough about something that was material to the board,” Marcus said.
“The board is non-profit, they are not there to make money, they are there to make sure AI works to the benefit of humanity. I think the board’s actions, sticking to their guns even under enormous pressure, shows that they are genuinely concerned about this.
“I think they’re concerned that something that Sam was intending to do was going to have a significant negative consequence.”
Until last week, he was better known for his incredible work ethic and lifestyle than for life-and-death office politics.
Altman is known as a fiercely hard worker; as a Stanford dropout building his first startup, Loopt, he became so obsessive that he got scurvy, the vitamin C deficiency that used to take down sailors centuries ago. But he also lives a grand, even grandiose, life.
In 2014 he became president of Y Combinator, the Silicon Valley hothouse for startups that hatched Airbnb, Reddit and Instacart. There he sometimes appeared in stocking feet, cargo shorts and a gray hoodie, waving around a Bronze Age sword, and he rubbed some staffers the wrong way.
“Sam’s a little too focussed on glory—he puts his personal brand way out front,” a Y Combinator exec told The New Yorker.
“We had a family feel, and now it’s all institutional and aloof. Sam’s always managing up, but as the leader of the organization he needs to manage down.”
When Altman was asked about the critique, he said: “The missing circuit in my brain, the circuit that would make me care what people think about me, is a real gift.
“Most people want to be accepted, so they won’t take risks that could make them look crazy—which actually makes them wildly miscalculate risk.”
Altman owns a $27 million house on San Francisco’s Russian Hill, as well as a ranch in Napa Valley, and likes to fly (he is a qualified pilot) and race his fleet of prestige cars, which includes a $1 million Lexus LFA and more than one McLaren.
His own fortune is invested in start-up ventures with lofty aims that give clues to his world-changing ambition: hypersonic flight; biohacking to extend lifespan by a decade; nuclear fusion; personalized cell therapies; and a mind-computer interface in the form of Elon Musk’s Neuralink.
Recent months have seen him enter a new stratosphere, crisscrossing the planet to attend a White House dinner with partner Oliver Mulherin, speak one-on-one with France’s President Emmanuel Macron, front a British government-sponsored “AI Summit” and get mobbed for selfies at last week’s Asia-Pacific Economic Cooperation summit in San Francisco.
His friends include fashion designer Diane von Furstenberg, who once compared him to Einstein, and Peter Thiel, the billionaire entrepreneur and venture capitalist.
A vegetarian, Altman came out as gay years ago while still at school, to the surprise of his mom, who thought he was “unsexual,” and he credits his early life as a computer geek with helping him do so.
Altman is the oldest of four children in what he called a “middle-class Jewish family”; his mom is a dermatologist and his father a real estate broker. His two younger brothers have followed him to Silicon Valley, while he is estranged from his sister, the youngest of the siblings.
“Growing up gay in the Midwest in the two-thousands was not the most awesome thing,” he told The New Yorker. “And finding AOL chat rooms was transformative. Secrets are bad when you’re eleven or twelve.”
If Altman’s firing does turn out to involve fears over how far and how fast artificial intelligence has come, that would be in line with his own well-known paranoia about the future.
He’s expressed fears not only about the future of AI but also about all sorts of other things that could befall the human race.
Altman confessed as much while testifying before a Senate panel last May, declaring that his worst fear is that advanced AI will “cause significant harm to the world.”
“I try not to think about it too much,” Altman said in 2016. “But I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to.” Its location is secret.
He even has a backup plan for his backup plan. If things really hit the fan, Altman plans to fly with Thiel to the PayPal founder’s bolthole in New Zealand.
But he’s denied being reckless about the development of AI.
“I certainly don’t think I’m all gas, no brakes toward the future,” Altman said. “But I do think we should go to the future. And that probably is what differentiates me from most of the A.I. companies.
“I think A.I. is good. Like, I don’t secretly hate what I do all day. I think it’s going to be awesome. I want to see this get built. I want people to benefit from this.
“So all gas? No brakes? Certainly not. And I don’t even think most people who say it mean it.”