OpenAI and Elon Musk trading barbs. Meanwhile, trust in AI is fading



Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week here.

OpenAI fires back at Elon Musk over lawsuit

In Elon Musk’s breach of contract lawsuit filed late last month against OpenAI, the billionaire raises a fair question: Why does OpenAI, a nonprofit entity, act so much like a for-profit one?  

Since the public launch of ChatGPT—and the ensuing mania around the tech—OpenAI has raced to release a stream of improvements to its large language models (LLMs). The company has amped up its lobbying efforts in Washington and doubled the size of its PR operation over the past year. Musk is particularly concerned about OpenAI’s practice of treating its research as intellectual property to be hidden away as a business asset, including from the wider research community.

OpenAI started out as a nonprofit and later adopted an unusual corporate structure in which a nonprofit board was granted oversight of its for-profit business. Despite the turmoil around CEO Sam Altman’s firing and rehiring in November and the growing calls for the company to dissolve the nonprofit, that structure has remained in place. 

“Imagine donating to a non-profit whose asserted mission is to protect the Amazon rainforest, but then the non-profit creates a for-profit Amazonian logging company that uses the fruits of the donations to clear the rainforest. That is the story of OpenAI, Inc.,” the lawsuit says. 

Key to the lawsuit—and to OpenAI’s arguments in favor of its for-profit arm—is the company’s pursuit of artificial general intelligence (AGI), or AI models that surpass human intelligence across a broad range of tasks.

OpenAI countered in a blog post published Tuesday that its for-profit entity is needed to raise enough capital to pursue AGI. “In early 2017, we came to the realization that building AGI will require vast quantities of compute,” company executives wrote. “We all understood we were going to need a lot more capital to succeed at our mission—billions of dollars per year, which was far more than any of us, especially Elon, thought we’d be able to raise as the non-profit.”

But Musk, in his lawsuit, says AGI is itself a dangerous goal. “[W]here some like Mr. Musk see an existential threat in AGI, others see AGI as a source of profit and power,” the lawsuit states. 

OpenAI, for its part, claims Musk knew that restricting access to the models was part of the plan. “Elon understood the mission did not imply open-sourcing AGI,” the blog post reads. “As Ilya told Elon: ‘As we get closer to building AI, it will make sense to start being less open. The Open in openAI means that everyone should benefit from the fruits of AI after it’s built, but it’s totally OK to not share the science…’, to which Elon replied: ‘Yup.’” OpenAI says it has stayed true to its mission of letting the many, not the few, benefit from AI by putting tools like ChatGPT into the hands of consumers.

Promoters of open-source believe that the best way to understand and manage the risks (including bias) in large frontier models is by giving the research community access to the models’ blueprints. OpenAI says in the blog that it will “move to dismiss all of Elon’s claims” in court.

Trust in AI companies is fading fast

The results of a new Edelman study of consumers in 28 countries reveal surprisingly negative sentiment about AI and AI companies. Edelman’s researchers say in the report that their work reveals a new paradox: “Rapid innovation offers the promise of a new era of prosperity, but instead risks exacerbating trust issues, leading to further societal instability and political polarization.” Here are the main findings that relate directly to AI:

  • Three quarters of the people surveyed say they trust the tech industry, but only half say they trust AI.
  • Globally, trust has declined in AI companies over the past five years from 61% to 53%. In the U.S., there has been a 15-point drop from 50% to 35%.
  • Democrats’ trust in AI companies is 38%, compared to Independents’ 25% and Republicans’ 24%. There is a roughly 30-point gap between trust in tech companies and trust in AI companies for both Democrats and Republicans (66% versus 38% for Democrats; 55% versus 24% for Republicans).
  • By a three-to-one margin, respondents in France, Canada, Ireland, the U.K., the U.S., Germany, Australia, the Netherlands, and Sweden reject the growing use of AI. That contrasts with developing markets such as Saudi Arabia, India, China, Kenya, Nigeria, and Thailand, where respondents accept the growing use of AI by two- or three-to-one margins.
  • Only 19% of respondents are afraid of AI’s impact on job security. Bigger worries include privacy (39%), AI devaluing what it means to be human (38%), and AI being harmful to people (37%).
  • U.S. respondents report much higher levels of concern about potential harm to society (61%), compromised privacy (52%), and inadequately tested or evaluated AI (54%).

The Snowflake and Mistral CEOs on why they partnered

The red-hot AI startup Mistral AI said Tuesday that it will make its large language models available through the Snowflake data cloud. This includes the company’s most recent Mistral Large and Mistral Medium models, but also the two open-source language models it released last year, Mistral 7B and Mixtral 8x7B. 
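For readers who want to poke at the open-source side of that lineup, the open-weights models can be pulled directly from the Hugging Face Hub. Below is a minimal sketch, assuming the transformers and torch packages are installed and using the model ID Mistral published; it’s an illustration of the open-weights release, not part of the Snowflake integration.

```python
# A minimal sketch: loading Mistral's open-weights 7B model from the
# Hugging Face Hub with the `transformers` library. Assumes `transformers`
# and `torch` are installed and enough memory is available; the model ID
# below is the one Mistral published.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Open-weights models let researchers"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```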

Snowflake believes that by offering a state-of-the-art LLM within the same cloud where enterprise data resides, customers will get better data privacy and security. Snowflake also said its venture arm is participating in Paris-based Mistral’s Series A funding round, but didn’t disclose the amount or the size of its stake.

“Most of the interesting use cases of AI are leveraging the reasoning capacities of large language models like Mistral’s, and some appropriate type of data like that Snowflake is hosting,” Mistral CEO Arthur Mensch tells me. “There’s really some interesting synergy there.”

Mistral has billed itself as an open-source LLM provider since its launch in June 2023, but its Mistral Large and Mistral Medium models are both closed and proprietary, meaning Snowflake data customers must pay for access. “Obviously, since we are building a business we have premium offerings like Mistral Large, Mistral Medium,” Mensch says. “And those are things that we are deploying on Snowflake, so even though it isn’t open-source, but through the partnership with Snowflake it is available where no LLM was available before.”

Snowflake CEO Sridhar Ramaswamy says that his company has made smaller language models available to its data customers, and it also hosts Meta’s Llama 2 open-source model, but the Mistral deal marks the first time it can offer a state-of-the-art model like Mistral Large to its customers. Ramaswamy, who once ran the ads business at Google, led Snowflake’s AI strategy from his hiring last year until he was named CEO a week ago.

“The beautiful thing about accessing Mistral through Snowflake is that there’s literally no work,” he says. “There’s not even an API—an API requires a programmer . . . By being able to call a language model with a (commonly used) programming language like SQL, there’s no work.” 

Snowflake’s customers will be able to quickly build apps on top of Mistral models, which will be served from Snowflake’s Cortex platform. Cortex also provides security, privacy, compliance, and governance features (including controlling who has access to what data through the model) around the data and the LLMs.
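To make Ramaswamy’s “no work” claim concrete, here’s a hedged sketch of what calling a Mistral model from Snowflake might look like. It assumes the snowflake-connector-python package and a Cortex-style SQL function such as SNOWFLAKE.CORTEX.COMPLETE; the credentials and the “mistral-large” model identifier are placeholders, not details from the announcement.

```python
# A hedged sketch: invoking an LLM through a SQL query via Snowflake's
# Python connector. The SNOWFLAKE.CORTEX.COMPLETE function and the
# "mistral-large" model name follow Cortex's documented pattern but are
# assumptions here, not details confirmed in the article.
import snowflake.connector

conn = snowflake.connector.connect(
    account="YOUR_ACCOUNT",      # placeholder credentials
    user="YOUR_USER",
    password="YOUR_PASSWORD",
    warehouse="YOUR_WAREHOUSE",
)

cur = conn.cursor()
# The "app" here is just a parameterized SQL query -- no model-serving code.
cur.execute(
    "SELECT SNOWFLAKE.CORTEX.COMPLETE(%s, %s)",
    ("mistral-large", "Summarize the key risks in our Q4 sales notes."),
)
print(cur.fetchone()[0])
conn.close()
```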

More AI coverage from Fast Company: 

From around the web: