
OpenAI saga shows the race for AI supremacy is no longer just between nations



NurPhoto/Getty Images

I had originally planned to use this post to discuss the battle between the US and China for AI prowess. A frenzied weekend has now changed the scope of that debate. Yet the underlying message remains the same, especially for governments still figuring out their role in an era that may be significantly shaped by these emerging technologies.

It’s been a jaw-dropping week at OpenAI, and it now appears an agreement has been reached for its ousted co-founder and CEO Sam Altman to return to the helm. The decision comes after days of popcorn-worthy developments, during which the generative AI powerhouse lost its CEO, watched him join Microsoft, replaced its first interim CEO with a second interim CEO, and faced a staff revolt. 

Also: Generative AI advancements will force companies to think big and move fast

OpenAI said in a statement announcing Altman’s reinstatement: “We have reached an agreement in principle for Sam Altman to return to OpenAI as CEO, with a new initial board of Bret Taylor (Chair), Larry Summers, and Adam D’Angelo. We are collaborating to figure out the details. Thank you so much for your patience through this.”

Reports remain unclear on whether Altman will now gain a seat on the board, one he didn’t have before. He previously noted in a June 2023 Bloomberg interview on AI trust: “No one person should be trusted here… The board can fire me. I think that’s important.”

Also, there is no word yet on whether OpenAI’s co-founder and chief scientist Ilya Sutskever will return. Sutskever sat on the previous board, alongside fellow board member Helen Toner, and both were speculated to have played a part in the decision to remove Altman. Sutskever, though, later expressed regret over his involvement.

Also: OpenAI aiming to create AI as smart as humans, helped by funds from Microsoft

Toner, who had stayed silent throughout the mayhem, finally said on X after it was revealed Altman would return: “And now, we all get some sleep.”

Toner, the director of strategy and foundational research grants at the Center for Security and Emerging Technology, had co-authored a research paper that Altman said was critical of OpenAI. The paper suggested that the company’s efforts to keep its AI development safe paled in comparison with Anthropic’s. Altman was upset enough to campaign for her removal from the board, according to a New York Times article.

In its original statement announcing Altman’s dismissal, OpenAI’s board had said it was no longer confident in his ability to lead the company. “Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.”

Also: AI safety and bias: Untangling the complex chain of AI training

The board further noted that OpenAI, founded as a non-profit in 2015, was “deliberately structured to advance our mission” of ensuring artificial general intelligence (AGI) would benefit all humanity. “The board remains fully committed to serving this mission…we believe new leadership is necessary as we move forward,” the statement read. 

The company was restructured in 2019 to allow for capital to be raised in pursuit of its mission, while “preserving” the non-profit’s governance and oversight. “While the company has experienced dramatic growth, it remains the fundamental governance responsibility of the board to advance OpenAI’s mission and preserve the principles of its Charter,” it said.

Despite citing Altman’s less-than-candid communications as the rationale for his ousting, the board gave no further details or specific examples that led to that conclusion.

With the board, including Toner and Sutskever (both said to be concerned about Altman’s focus on expansion over AI safety), choosing to remain largely silent on what drove the decision to fire Altman, speculation ran rife on social media.

Also: Companies aren’t spending big on AI. Here’s why that cautious approach makes sense

As more reports of tensions between Altman and the board emerged, it soon became clear to most observers that the conflict was very likely one between AI safety and profit. And herein lies the crux of the problem: these remain assumptions and speculation, because there simply isn’t enough information, if any at all, about what the concerns of OpenAI’s board really were.

What facts did Altman omit or lie about that led the board to determine he was no longer aligned with OpenAI’s mission that AGI must “benefit all of humanity”? Is OpenAI’s backroom research and development nearing AGI, and is the board unsure “all of humanity” is ready for it? Should this be something the general public and nations need to worry about, too?

If one thing has become clearer in the past week, it is that the world’s future with AI is very much in the hands of a small band of market players. The Big Tech collective has the deep pockets and resources to determine how it thinks AI should impact society at large. Yet this elite tech community represents just a minute fraction of the world’s population and demographics.

Also: Global players look to create baseline to evaluate generative AI applications

Within days, this tech elite was able to engineer Altman’s ousting, his hiring at Microsoft (albeit short-lived), the potential transfer of almost the entire OpenAI workforce to another major market player, and Altman’s eventual reinstatement. And it did all of this without any clear, concerted effort to explain why he was fired in the first place, or to verify, or refute, concerns that profit was being prioritized over AI safety.

There are suggestions that OpenAI’s new board will initiate an investigation into the motives behind Altman’s dismissal, but this review is said to be internal.

Practice what AI transparency preaches

Amid the chaotic week, one message is now even more apparent: transparency is crucial in the development and adoption of all AI, whether generative, AGI, or otherwise. Transparency is the foundation of the trust on which, most agree, AI must be built if it is to gain human acceptance.

Big Tech, too, has preached the importance of transparency in driving responsible and ethical AI. 

And when none is forthcoming, transparency must be driven by regulation. We need legislation that does not seek to inhibit market innovation in AI, but that focuses on mandating transparency in how that innovation is developed and advanced.

The whole OpenAI debacle should serve as a great learning opportunity for governments and societies on how AI development should move forward. We’ve now also witnessed the complexities of managing that development, even when it is tied to a non-profit corporate framework.

Also: AI is transforming organizations everywhere. How these 6 companies are leading the way

That a key employee had to resign so he could talk freely about the risks of AI clearly indicates market players are unlikely to be fully transparent about AI development, despite pledging to do so.

It underscores the need for strong governance to ensure they do, and the urgency with which such frameworks must be established. As we’ve already witnessed, the market, and Big Tech in particular, can move at incredible speed. And it will likely accelerate further now that the past week has put more scrutiny on AI.

Lawmakers will need to move quickly. The UK-led Bletchley Declaration on AI Safety is a great step forward, with 28 nations, including China, the US, Singapore, and the EU, agreeing to collaborate on identifying and managing potential risks from “frontier” AI. The multilateral agreement outlines the countries’ recognition of the “urgent need” to ensure AI is developed and deployed in a “safe, responsible way” for the benefit of the global community.

The United Nations also has laid out plans for an advisory team to look at the international governance of AI to mitigate potential risks, with a pledge to adopt a “globally inclusive” approach. 

I hope someone over there is taking notes from this past week’s OpenAI case study. Because the discussion now isn’t just about which nation will dominate the AI race, but whether Big Tech will take over the steering wheel without the necessary speed bumps in place. 




