
Sam Altman explained the real danger of AI that can read everything you say



We’re in the early days of generative AI software, and it is already starting to change how we use technology and how we go about being productive. Generative AI like ChatGPT and its rivals might change the world for the better. But success isn’t guaranteed, and it’s not straightforward. Many people worry that AI could end the world as we know it, including some of the very folks who developed the AI tech that got us here.

While I don’t necessarily share those doom-and-gloom opinions about AI, I’m aware that such outcomes are possible. When AI reaches AGI (artificial general intelligence), we might not even notice, and a rogue AI could lead to world-ending events. Before that happens, though, I’m more worried about a different danger that current AI models might pose to society: manipulating public opinion.

It turns out that OpenAI CEO Sam Altman also thinks that an AI that sees everything you write online could craft messages that manipulate you more effectively than any previous algorithm could. He said as much in a new interview about the recent events at OpenAI, where he also shared his thoughts about the future of AI.

Sam Altman still won’t explain his firing

OpenAI fired and rehired Altman about a month ago, all in the course of a week. We haven’t learned the real reason for the drama, and Altman isn’t talking. He said in a previous interview that an independent review of those events would provide explanations. Altman stayed away from answering similar questions during his interview with Time, though he did address the episode.

“We always said that some moment like this would come,” said Altman. “I didn’t think it was going to come so soon, but I think we are stronger for having gone through it.”

He also steered the conversation to AGI and acknowledged that the world is right to question OpenAI’s ability to operate during such drama. Some speculated that OpenAI might have made some sort of AGI breakthrough, which could have been a factor in Altman’s firing.

“It’s been extremely painful for me personally, but I just think it’s been great for OpenAI. We’ve never been more unified,” he said. “As we get closer to artificial general intelligence, as the stakes increase here, the ability for the OpenAI team to operate in uncertainty and stressful times should be of interest to the world.”

OpenAI DevDay keynote: ChatGPT usage this year. Image source: YouTube

AGI is a big talking point for the OpenAI CEO

The CEO also acknowledged that “everybody involved in this, as we get closer and closer to super intelligence, gets more stressed and more anxious.” He also addressed the changes OpenAI needs to make to its board to ensure the development of safe AGI. Altman again seemed to connect AGI to his firing.

“We have to make changes. We always said that we didn’t want AGI to be controlled by a small set of people, we want it to be democratized. And we clearly got that wrong,” Altman said. “So I think if we don’t improve our governance structure, if we don’t improve the way we interact with the world, people shouldn’t [trust OpenAI]. But we’re very motivated to improve that.”

AGI was clearly a major talking point for Altman during the interview, and that’s understandable given what has just happened in the world of AI.

In the past few days, Google released Gemini, its first real ChatGPT rival. Then French startup Mistral closed another funding round and made its Mixtral AI model freely available to developers. Meta, meanwhile, announced that Meta AI will hear what you say and see what you see via its Ray-Ban smart glasses.

Before all of that, Amazon came out with its own generative AI model that targets businesses and developers.

Google’s Gemini AI system revealed at Google I/O 2023. Image source: Google

Everyone involved in generative AI development is also pushing towards AGI, and there’s no telling who will get there first. Altman believes AGI “will be the most powerful technology humanity has yet invented,” teasing we’ll soon experience “the world that sci-fi has promised us for a long time.”

But the “incredible new things” AGI could deliver won’t come without perils. Altman warned that “there are going to be real downsides.” That’s where he singled out AI-powered disinformation, which we might start to see particularly in a year like 2024, with its major election cycles.

“A thing that I’m more concerned about is what happens if an AI reads everything you’ve ever written online … and then right at the exact moment, sends you one message customized for you that really changes the way you think about the world,” Altman said.

If that sounds disturbing, it’s because it is. The scenario above might happen without anyone realizing it. Remember that anybody with enough resources can develop generative AI like ChatGPT right now, including nation-states that aren’t exactly democratic, the kind of countries that would want to weaponize AI to manipulate public opinion.

I’m just speculating here, but I’ve already shown you the incredible things Gemini can do behind closed doors. A report from a few days ago detailed Google’s internal work on Gemini features that would let the chatbot infer things about you simply by accessing all of your information. At the time, I highlighted the dangers of a company like Google using AI to profile users for advertising purposes.

As for Altman’s scenario, which is entirely plausible, there’s another big downside. If anybody builds such AI and deploys it online to aid entities that want to manipulate public opinion, we’ll never see it coming. Unlike troll farms, which are eventually discovered, bad AI could run such manipulation campaigns without us ever being able to prove it.

In this photo illustration, the ChatGPT (OpenAI) logo is displayed on a smartphone screen. Image source: Rafael Henrique/SOPA Images/LightRocket via Getty Images

The AI future is bright, despite the worries

On that note, there’s no proof that ChatGPT rivals with the powers Altman describes are in use anywhere in the world right now. I’m talking about AI software without guardrails; commercial generative AI products like ChatGPT have safeguards in place to prevent such abuse. But a lot can happen in a very short time frame when it comes to AI. That’s what 2023 has proven so far.

Still, Altman is hopeful about the future of AI, which is what you’d expect from the CEO of one of the most important AI companies in the world right now. He said that safe and responsible AI, like the kind OpenAI is developing, has the potential to create a “path where the world gets much more abundant and much better every year.”

Beyond the AI geniuses charting our path to AGI, strong AI regulation will hopefully help with that. Legislators in the US and the European Union are already drawing up guidelines for the development of safe AI. These should hopefully prevent manipulation scenarios like the one Altman described, or at least delay them.

Altman’s interview with Time is available in full at this link.

