
When OpenAI’s CEO Sam Altman was sacked I thought it was corporate trivia. Here’s why it’s not


In recent days, there has been utter chaos at perhaps the most important company in the world. Sam Altman, the very rich CEO of OpenAI, was suddenly removed from his post. There were protests and resignations. An interim CEO was appointed. Then the interim CEO was un-appointed, and Altman – just 38 years old – was back.

Perhaps this sounds like corporate trivia to you; when Altman was sacked 10 days ago, it did to me too. But in trying to comprehend what went on, and why it mattered, I felt, for the first time, what I suspect will very soon become a common human experience: the fear of the potential of Artificial Intelligence to radically alter our world.

When Altman was sacked, I felt, for the first time, what I suspect will very soon become a common human experience. Credit: Joe Benke

Altman himself is open about the fact that risks exist, but is optimistic. In general, he comes across as sensible and sane. In an interview with Bloomberg’s Emily Chang in June, when asked about the unusual fact that he does not have a financial stake in the $86 billion company (technically, OpenAI is a not-for-profit with a for-profit arm), he said that most people struggled to grasp the concept of “enough money”.

He has, it should be noted, used some of that money to prep for catastrophe. In 2016, before he was heading up an AI project, he listed a pandemic, nuclear war, and AI “that attacks us” as possible disasters. He tried not to think about it too much, “But I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defence Force, and a big patch of land in Big Sur I can fly to.” This fact may affect how seriously you take his professed optimism.

In an interview with The Atlantic earlier this year, he downplayed this prepping as a hobby. But he said something more frightening still: that his company had built artificial intelligence that they would never release – because it was too dangerous.

That fact has re-emerged in recent days as a possible reason he was fired: Reuters reported on Thursday that researchers had written to the board warning them this new AI could threaten humanity. (I’m assuming it was the same AI. If not, the company has two separate AI models considered dangerous. I hope not; one seems like enough.)

This “dangerous AI” was one reason I went from skipping over articles about AI to taking warnings seriously. (And it was while reading that article, by Ross Andersen, that I first felt fear: some of the bits that scared me most are below.)

The second reason was the dramatic language employed by those in AI (who are working towards Artificial General Intelligence – something closer to human intelligence), even when they’re being positive. Altman told Chang that a lot of people spoke about AI as though it were the last technological revolution. “I suspect that from the other side it will look like the first.”

