
As OpenAI inches toward chip-building, the company loses a key cofounder




Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week here.

Big talk—and a big departure—at OpenAI

News broke Tuesday night that one of OpenAI’s original founding members, the decorated AI researcher Andrej Karpathy, has left the company to pursue “personal projects.”

Both Karpathy and OpenAI seemed to downplay the significance of the move Tuesday night. “First of all nothing ‘happened’ and it’s not a result of any particular event, issue or drama (but please keep the conspiracy theories coming as they are highly entertaining :)),” Karpathy wrote on X. OpenAI spokesperson Kayla Wood said in a statement to The Information, which broke the story, that a senior researcher who worked closely with Karpathy would take over his work. 

Karpathy is the first OpenAI executive to leave the company since CEO Sam Altman was fired—and then quickly reinstated—in November. While Karpathy’s exit is sure to raise some eyebrows within the company, many employees may take solace in knowing that Altman’s plans to scale OpenAI continue to grow.

Last week, the Wall Street Journal reported that Altman plans to try to raise as much as $7 trillion in new capital for an effort to produce more graphics processing units (GPUs), the chips commonly used to train and run advanced AI models. Should OpenAI succeed in raising the capital, the company would be operating at two major levels of the AI stack: the hardware and the foundation model. This would almost certainly draw renewed attention from the Federal Trade Commission (FTC), which is already investigating competition in the AI industry.

This might all fit into a common speed-versus-safety narrative around OpenAI. Altman’s chip aspirations suggest that he wants OpenAI’s rocket ride to go even faster. Remember, it was widely speculated in November that OpenAI chief scientist and board chairman Ilya Sutskever moved to oust Altman because of concerns the CEO was pushing the company to release more and more powerful AI models without allowing enough time to ensure their safety. Those same concerns could have played into Karpathy’s decision to leave, and he may not be the last. 

Bret Taylor and Clay Bavor launch new AI company

You may know Clay Bavor’s name from his work leading Google’s VR group, and Bret Taylor’s from his rapid rise to co-CEO at Salesforce and his high-profile role as chairman of OpenAI’s board. Now the two men, who first became friends while working at Google in the early 2000s, have a new company that uses conversational AI to help brands build smarter and more useful customer service agents. 

Sierra has already been testing its product with some well-known consumer brands, including WeightWatchers, SiriusXM, and Sonos. Now, backed by a $110 million investment, Sierra is announcing the general availability of its product.

So-called customer service bots are of course nothing new, but most people dislike the bots because of their rigidity and lack of knowledge. “If you asked 100 people if they like customer service chatbots, zero out of 100 would say that they do,” Taylor tells me. “However, if you ask those same 100 people if they like ChatGPT, my guess is that you’ll get close to 100 out of 100 saying yes.” He says customer service bots haven’t yet caught up with the full power of state-of-the-art conversational AI, and once they do people might actually enjoy dealing with them. 

Sierra’s agents don’t just provide information for simple requests; they can walk a customer through more complex tasks like changing a WeightWatchers subscription or resetting a SiriusXM receiver, Bavor says. Performing those more advanced tasks depends on the AI having access to more information. Sierra’s secret sauce may be managing the complex behind-the-scenes plumbing necessary to connect LLMs to a brand’s order management or customer relationship management systems, or any other system human agents might use to support customers. Sierra also has a very different way of charging for its service—it collects a fee only when a bot actually resolves a customer request or problem.
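For readers curious how that plumbing typically works, the usual pattern is tool (or function) calling: the model is shown descriptions of the backend actions it is allowed to take, picks one, and the application executes it against the brand's systems. Below is a minimal, hypothetical Python sketch of that pattern; the tool names, the stubbed model call, and the subscription-change scenario are illustrative assumptions, not Sierra's actual implementation.

import json

def change_subscription(customer_id: str, new_plan: str) -> dict:
    # Stand-in for a call to the brand's subscription or order-management system.
    return {"customer_id": customer_id, "plan": new_plan, "status": "updated"}

def reset_receiver(customer_id: str) -> dict:
    # Stand-in for a call that sends a reset signal to a customer's device.
    return {"customer_id": customer_id, "status": "reset_signal_sent"}

# Registry of backend actions the agent is allowed to invoke.
TOOLS = {
    "change_subscription": change_subscription,
    "reset_receiver": reset_receiver,
}

def call_model(user_message: str) -> dict:
    # Stub standing in for an LLM call: a real agent would send the message
    # plus tool schemas to a model and parse the structured tool call it returns.
    if "subscription" in user_message.lower():
        return {"tool": "change_subscription",
                "args": {"customer_id": "c-123", "new_plan": "core"}}
    return {"tool": "reset_receiver", "args": {"customer_id": "c-123"}}

def handle_request(user_message: str) -> str:
    decision = call_model(user_message)                    # model chooses an action
    result = TOOLS[decision["tool"]](**decision["args"])   # application executes it
    return json.dumps(result)

print(handle_request("Please switch my subscription to the core plan"))

In a production agent, the stubbed call_model function would hand the conversation and tool schemas to an LLM, and the dispatch step is where a vendor would enforce permissions and record whether the request was actually resolved, which matters for a pay-per-resolution pricing model like the one Sierra describes.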

“At the end of the day, we do think we have a technology advantage but most of our customers don’t understand the technology anyway, so we always anchor on customer success,” Taylor says.

Big tech companies will pledge to defend against political deepfakes

At Friday’s Munich Security Conference, some of the biggest tech companies will pledge (again) to make efforts to keep political deepfakes off their platforms and to prevent their tools from creating such content. The companies include Google, Meta, Microsoft, TikTok, Adobe, and OpenAI, Politico EU reports.

Much of the U.S. political world is already spooked by the prospect that misleading audio, images, or videos generated by new AI tools could profoundly distort the facts around races up and down the ballot in the November elections. Social networks, which typically act as the distribution platforms for deepfakes and other forms of misinformation, have scaled back the kinds of political ads they’ll allow. But they’ve also, so far, failed to create AI systems that detect deepfakes.

The pledge the companies will sign uses broad language, and is nonbinding and totally voluntary. “We will continue to build upon efforts we have collectively and individually deployed over the years to counter risks from the creation and dissemination of Deceptive AI Election Content … including developing technologies, standards, open-source tools, user information features, and more,” reads an early draft of the pledge obtained by Axios.  

Many of the signatories are also participating in a standards body that’s creating methods for inserting encrypted provenance information into the output of generative tools—a nice way to lessen the risk from well-known creation tools, but a feature that ultimately still does nothing to protect consumers from deepfakes created with open-source tools by unscrupulous developers.
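As a rough illustration of the provenance idea (a simplified sketch, not the actual standard the signatories are implementing), a generator can attach a signed manifest that binds a file's hash to a claim about its origin, and a platform can later verify both the signature and that the file hasn't changed since. The manifest fields, key handling, and function names below are assumptions made for this example.

import hashlib
import hmac
import json

# Demo key only; a real system would use asymmetric signatures and proper key management.
SIGNING_KEY = b"demo-key-not-for-real-use"

def make_manifest(media_bytes: bytes, generator: str) -> dict:
    # Bind the media's hash to a claim about how it was generated, then sign that claim.
    manifest = {"generator": generator, "sha256": hashlib.sha256(media_bytes).hexdigest()}
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(media_bytes: bytes, manifest: dict) -> bool:
    # Valid only if the signature checks out and the media still matches the signed hash.
    claimed = dict(manifest)
    signature = claimed.pop("signature", "")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and hashlib.sha256(media_bytes).hexdigest() == claimed.get("sha256"))

image = b"...generated image bytes..."
manifest = make_manifest(image, generator="example-image-model")
print(verify_manifest(image, manifest))              # True: intact and signed
print(verify_manifest(image + b"edited", manifest))  # False: content changed after signing

The limitation the draft pledge admits to falls out directly from this scheme: verification only flags files that carry a manifest or that were altered after signing, while a deepfake made with a tool that never attaches provenance data simply presents nothing to check.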

And the companies acknowledge as much in the draft: “We recognize that no individual solution or combination of solutions, including those described below, such as metadata, watermarking, classifiers . . . can fully mitigate risks related to deceptive AI election content, and that accordingly it behooves all parts of society to help educate the public on these challenges.”

Yes, teaching media literacy is a crucial part of mitigating the harms of political disinformation. But tech companies have a natural responsibility to build safety guardrails and risk-mitigation features into their tools from the earliest stages of their development. It means little to make public pledges after the tool is already in the wild.

This all smacks of PR-approved safety theater. Most importantly, social networks have a responsibility to quickly detect and label or delete political deepfakes—and the pledge doesn’t directly address that.





