
Google’s AI isn’t too woke. It’s too rushed




Google’s rushed, faulty AI isn’t alone. Microsoft’s Bing chatbot wasn’t just inaccurate; it was unhinged, telling a New York Times columnist soon after its release that it was in love with him and wanted to destroy things. Google has said that responsible AI is a top priority, and that it was “continuing to invest in the teams” that apply its AI principles to products.


OpenAI, which kick-started Big Tech’s race for a foothold in generative AI, normalised the rationale for treating us all like guinea pigs with new AI tools. Its website describes an “iterative deployment” philosophy, under which it releases products like ChatGPT quickly to study their safety and impact and to prepare us for more powerful AI in the future. Google’s Pichai now says much the same. By releasing half-baked AI tools, he’s giving us “time to adapt” for when AI becomes super powerful, according to comments he made in a 60 Minutes interview last year.

When asked what keeps him up at night, Pichai said, with no trace of irony, that it was knowing that AI could be “very harmful if deployed wrongly.” So what was his solution? Pichai didn’t mention investing more in the researchers who make AI safe, accurate and ethical, but pointed to greater regulation, a solution that lay outside his control.

“There have to be consequences for creating deepfake videos which cause harm to society,” he said, referring to AI videos that could spread misinformation. “Anybody who has worked with AI for a while, you know, you realise this is something so different and so deep that we would need societal regulations to think about how to adapt.”

This is a bit like the chef of a restaurant saying, “Making people sick with salmonella is bad, and we need more food inspectors to check our raw food,” when they know full well there are no food inspectors to speak of, and there won’t be for years. It gives them licence to continue dishing out tainted meat or fish. The same is true in AI.


With regulations in the distant future, Pichai knows the onus is on his company to build AI systems that are fair and safe. But now that he is caught up in the race to put generative AI into everything quickly, there’s little incentive to ensure that they are.

We know about Gemini’s diversity bug because of all the tweets on X, but the AI model may have other problems we don’t know about — issues that may not trigger Elon Musk but are no less insidious. The female popes and Black founding fathers are products of a deeper, years-long problem of putting growth and market dominance before safety.

Expect our role as guinea pigs to continue until that changes.

Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is the author of We Are Anonymous.

Bloomberg
