Techno Blender
Digitally Yours.

Malware worm targets AI models to steal data and spam users

0 24


As worrisome as it might be that generative AI models such as ChatGPT and Gemini might one day become sentient or take our jobs, there are far more pressing concerns. For instance, three security researchers from the US and Israel recently created a malware worm which specifically targets generative AI services in order to perform malicious activities such as extracting private data, spreading propaganda, or performing phishing attacks.

The good news is that the researchers developed this worm — which they called Morris II after the 1988 Morris worm — “as a whistleblower to the possibility of creating GenAI worms in order to prevent their appearance.” In other words, you’re not in danger of being attacked by Morris II. The goal here is to warn tech companies of potential threats.

That said, the AI malware this team developed is still rather terrifying.

You can read more about the study in this paper published by the researchers, but the gist here is that an attacker can use a similar computer worm to target generative AI services by inserting adversarial self-replicating prompts into inputs that the models process and replicate as output, at which point they can be used to engage in malicious activity.

In the study, the researchers demonstrated the application of their malware by targeting AI-powered email assistants. In one case, they were able to weaponize an image attachment in an email to spam end users. In another, they used a text in an email to “poison” the database of an email app client, jailbreak ChatGPT and Gemini, and exfiltrate sensitive data.

“This work is not intended to argue against the development, deployment, and integration of GenAI capabilities in the wild. Nor is it intended to create needed panic regarding a threat that will doubt the adoption of GenAI,” the researchers explain in their study. “The objective of this paper is to present a threat that should be taken into account when designing GenAI ecosystems and its risk should be assessed concerning the specific deployment of a GenAI ecosystem (the usecase, the outcomes, the practicality, etc.).”

If you want to learn more about the AI malware worm, watch the video below:


As worrisome as it might be that generative AI models such as ChatGPT and Gemini might one day become sentient or take our jobs, there are far more pressing concerns. For instance, three security researchers from the US and Israel recently created a malware worm which specifically targets generative AI services in order to perform malicious activities such as extracting private data, spreading propaganda, or performing phishing attacks.

The good news is that the researchers developed this worm — which they called Morris II after the 1988 Morris worm — “as a whistleblower to the possibility of creating GenAI worms in order to prevent their appearance.” In other words, you’re not in danger of being attacked by Morris II. The goal here is to warn tech companies of potential threats.

That said, the AI malware this team developed is still rather terrifying.

You can read more about the study in this paper published by the researchers, but the gist here is that an attacker can use a similar computer worm to target generative AI services by inserting adversarial self-replicating prompts into inputs that the models process and replicate as output, at which point they can be used to engage in malicious activity.

In the study, the researchers demonstrated the application of their malware by targeting AI-powered email assistants. In one case, they were able to weaponize an image attachment in an email to spam end users. In another, they used a text in an email to “poison” the database of an email app client, jailbreak ChatGPT and Gemini, and exfiltrate sensitive data.

“This work is not intended to argue against the development, deployment, and integration of GenAI capabilities in the wild. Nor is it intended to create needed panic regarding a threat that will doubt the adoption of GenAI,” the researchers explain in their study. “The objective of this paper is to present a threat that should be taken into account when designing GenAI ecosystems and its risk should be assessed concerning the specific deployment of a GenAI ecosystem (the usecase, the outcomes, the practicality, etc.).”

If you want to learn more about the AI malware worm, watch the video below:

FOLLOW US ON GOOGLE NEWS

Read original article here

Denial of responsibility! Techno Blender is an automatic aggregator of the all world’s media. In each content, the hyperlink to the primary source is specified. All trademarks belong to their rightful owners, all materials to their authors. If you are the owner of the content and do not want us to publish your materials, please contact us by email – [email protected]. The content will be deleted within 24 hours.

Leave a comment