
AI Is in the Midst of a Fever Dream and It’s Only Getting Worse



AI chatbots made headlines this week with a range of glaring flubs that were hard to look past. Tech companies are rushing to show off AI chatbots that are supposed to be non-offensive and error-free, hoping to prove to investors that the billions of dollars spent on this technology were a good investment. This week, AI chatbots showed they are not ready for the spotlight just yet.

AI Off The Rails

Google’s Gemini offended many users by refusing to generate images of white people and by turning historically white figures into multiracial characters. The company apologized and then paused the AI image generator’s ability to create images of people altogether while it works on a fix. The embarrassing moment, featured in The New York Post on Thursday, comes just two weeks after Google’s broader release of Gemini.

Over at OpenAI, ChatGPT went berserk this week, spouting gibberish that briefly rendered the service useless for many users, including some reports from businesses using ChatGPT Enterprise. OpenAI later said the problem was caused by an “optimization” that introduced a bug, completely throwing off the way ChatGPT generates answers.

Lastly, Gab, a far-right social media platform, launched AI chatbot versions of Adolf Hitler and Osama bin Laden that deny the Holocaust’s existence. It’s unclear which large language model is powering Gab’s AI chatbots at this time. The reach of Gab’s chatbots is far smaller than that of Gemini or ChatGPT, but CEO Andrew Torba says the platform’s user base is growing by 20,000 people a day. Gab AI, like Grok, is being touted as an alternative to “woke” AI chatbots such as Gemini and ChatGPT.

Offending Both Sides of the Aisle, Informing Neither

This week, Gemini and Gab offended conservatives and liberals alike, while OpenAI’s ChatGPT, the most widely embraced AI tool for business, showed just how unreliable it can be. It all raises the question, once again, of whether AI is being developed too quickly.

Google and OpenAI rushed to push their AI chatbots out into the world, but the technology behind them is still very new. AI is being tested in real time, in front of everyone, and the pace at which these companies are shipping new products is unprecedented. They’ve made great advancements, but some pretty big errors as well.

“Alignment — controlling a model’s behavior and values — is still a pretty young discipline,” said OpenAI co-founder John Schulman in a tweet. “That said, these public outcries [are] important for spurring us to solve these problems and develop better alignment tech,” he said in a follow-up.

Meanwhile, Big Tech players such as Apple have been more conservative in their AI efforts, waiting for others to work out the kinks. Apple is expected to deliver a big AI update sometime in 2024, hoping to ship a cleaner product than Gemini or ChatGPT.

Whether AI is ready or not, Google and OpenAI are in too deep to back out now. Gemini and ChatGPT will likely run into more problems over the next year, and these companies will have to fix them in real time, in front of everyone.

