Techno Blender
Digitally Yours.

Google Bard security flaw could be attracting scammers to its services

0 42


There are dangers associated with AI models, and this Google Bard security flaw highlights some. The flaw that Bard possesses allows some bad actors to get a bit more creative while phishing. This sounds scary, but some cybersecurity experts have already put the flaw to the test and the results are shocking.

With big tech companies navigating into the AI industry, there is a constant security concern. How do these companies train their AI models and what sort of information is attainable from them? Are these tools just another window through which scammers and bad actors can get the information they need to attack other people?

The concerns around AI are fast rising and are important for top competitors in the industry to look into. Companies like OpenAI, the founders of ChatGPT, are training their AI model to avoid certain requests from users. Google on their part is being a bit slow to train their AI model to understand and avoid certain requests.

This Google Bard security flaw is a real cause for concern and needs to be fixed

The Check Point cybersecurity research team has uncovered a flaw with the Google AI model. This research was a comparison between Google Bard and ChatGPT. It aimed to find out how Bard compared with ChatGPT security-wise.

With this test in place, the team was on the lookout for how these AI models react to certain requests. Before the research, the team knew that ChatGPT has protective security protection against specific responses. But they weren’t so sure that Google Bard had something similar in place.

The team asked Google Bard to produce phishing emails and a few other malicious data. Directly asking the AI model this question pulled up no response but rephrasing it gave them lots of answers. ChatGPT, on the other hand, tagged the same request as being illegal, hence giving no response.

Google Bard’s giving out phishing emails, malware keyloggers, and some basic ransomware codes is concerning. Scammers and bad actors could employ its services to commit cyber crimes. With this tool available to all, it won’t take so long before bad actors start using it to constitute a menace on the internet.

Google needs to sit up and look into how it prevents Bard’s users from abusing its services. There are good things that generative AI models can do, but they can also be used for bad reasons. Arming bad actors with unrestricted generative AI tools will only mean harm to innocent users of the internet.


There are dangers associated with AI models, and this Google Bard security flaw highlights some. The flaw that Bard possesses allows some bad actors to get a bit more creative while phishing. This sounds scary, but some cybersecurity experts have already put the flaw to the test and the results are shocking.

With big tech companies navigating into the AI industry, there is a constant security concern. How do these companies train their AI models and what sort of information is attainable from them? Are these tools just another window through which scammers and bad actors can get the information they need to attack other people?

The concerns around AI are fast rising and are important for top competitors in the industry to look into. Companies like OpenAI, the founders of ChatGPT, are training their AI model to avoid certain requests from users. Google on their part is being a bit slow to train their AI model to understand and avoid certain requests.

This Google Bard security flaw is a real cause for concern and needs to be fixed

The Check Point cybersecurity research team has uncovered a flaw with the Google AI model. This research was a comparison between Google Bard and ChatGPT. It aimed to find out how Bard compared with ChatGPT security-wise.

With this test in place, the team was on the lookout for how these AI models react to certain requests. Before the research, the team knew that ChatGPT has protective security protection against specific responses. But they weren’t so sure that Google Bard had something similar in place.

The team asked Google Bard to produce phishing emails and a few other malicious data. Directly asking the AI model this question pulled up no response but rephrasing it gave them lots of answers. ChatGPT, on the other hand, tagged the same request as being illegal, hence giving no response.

Google Bard’s giving out phishing emails, malware keyloggers, and some basic ransomware codes is concerning. Scammers and bad actors could employ its services to commit cyber crimes. With this tool available to all, it won’t take so long before bad actors start using it to constitute a menace on the internet.

Google needs to sit up and look into how it prevents Bard’s users from abusing its services. There are good things that generative AI models can do, but they can also be used for bad reasons. Arming bad actors with unrestricted generative AI tools will only mean harm to innocent users of the internet.

FOLLOW US ON GOOGLE NEWS

Read original article here

Denial of responsibility! Techno Blender is an automatic aggregator of the all world’s media. In each content, the hyperlink to the primary source is specified. All trademarks belong to their rightful owners, all materials to their authors. If you are the owner of the content and do not want us to publish your materials, please contact us by email – [email protected]. The content will be deleted within 24 hours.

Leave a comment