The ChatGPT-powered cyber threats you should absolutely know about


The ChatGPT craze is sweeping the mainstream, with celebrities and even politicians using the technology in their daily lives. However, among the everyday folks taking advantage of cutting-edge generative artificial intelligence (AI) tools, there’s a darker, more nefarious subset abusing the technology: hackers.

While hackers haven’t yet made great strides in the relatively new field of generative AI, it pays to stay aware of how they might leverage the technology. A new Android malware strain that presents itself as ChatGPT has emerged, according to a blog post from American cybersecurity giant Palo Alto Networks. The malware appeared just after OpenAI released GPT-3.5 and then, in March 2023, GPT-4, targeting users keen to try the ChatGPT tool.

According to the blog, the malware includes a Meterpreter Trojan masked as a “SuperGPT” app. Once installed and executed, it gives attackers remote access to the infected Android device.

The digital code-signing certificate used in the malware samples is linked to an attacker calling itself “Hax4Us”, and the same certificate has already been used across several other malware samples. A related cluster of samples, disguised as ChatGPT-themed apps, sends SMS messages to premium-rate numbers in Thailand, with the charges landing on the victims.
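Shared indicators like these are exactly what defenders match downloads against. As a minimal Python sketch (the hash below is a placeholder standing in for real indicators of compromise published in a vendor’s threat report), a script can hash candidate APKs and compare them to a known-bad list:

```python
import hashlib
from pathlib import Path

# Placeholder SHA-256 value standing in for real IOC hashes published by a
# security vendor; substitute the actual indicators from a threat report.
KNOWN_BAD_SHA256 = {
    "0000000000000000000000000000000000000000000000000000000000000000",
}

def sha256_of(path: Path) -> str:
    """Compute a file's SHA-256 hash without loading it all into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan_apks(directory: str) -> None:
    """Flag any APK in `directory` whose hash matches a known-bad indicator."""
    for apk in Path(directory).glob("*.apk"):
        verdict = "MATCH (known-malicious)" if sha256_of(apk) in KNOWN_BAD_SHA256 else "no match"
        print(f"{verdict}: {apk}")

scan_apks("downloads")
```

Hash matching only catches samples that have already been reported, which is why certificate reuse by actors like “Hax4Us” is valuable to researchers: one pivot point links many otherwise unrelated files.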

The risk for Android users stems from the fact that the official Google Play store isn’t the only place they can download applications, which means unvetted apps can find their way onto Android phones.

The rise of advanced technologies such as OpenAI’s GPT-3.5 and GPT-4 has inadvertently facilitated the creation of new AI-powered threats. The 2023 ThreatLabz Phishing Report by Zscaler, Inc. emphasizes that these cutting-edge models have empowered cybercriminals to generate malicious code, launch Business Email Compromise (BEC) attacks, and develop polymorphic malware that evades detection. Furthermore, malicious actors are capitalizing on the InterPlanetary File System (IPFS), using its decentralized network to host phishing pages that are harder to take down.
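The IPFS angle is easier to picture with an example. Content on IPFS is usually reached through public HTTP gateways whose URLs follow recognizable shapes, so a mail filter can at least surface such links for review. Here is a deliberately naive Python sketch (the patterns are simplifying assumptions, not an exhaustive detector):

```python
import re

# IPFS content is commonly served over HTTP gateways in two URL shapes:
#   path style:      https://<gateway>/ipfs/<CID>
#   subdomain style: https://<CID>.ipfs.<gateway>/
# Matching these is a crude heuristic; legitimate IPFS links exist too.
IPFS_PATH_RE = re.compile(r"https?://[^\s/]+/ipfs/[A-Za-z0-9]{20,}")
IPFS_SUBDOMAIN_RE = re.compile(r"https?://[a-z0-9]{20,}\.ipfs\.[^\s/]+")

def find_ipfs_links(text: str) -> list[str]:
    """Return all IPFS-gateway-style URLs found in `text`."""
    return IPFS_PATH_RE.findall(text) + IPFS_SUBDOMAIN_RE.findall(text)

email_body = (
    "Your mailbox is almost full. Restore access here: "
    "https://ipfs.io/ipfs/QmYwAPJzv5CZsnAzt8auVZRnDWdmCkZ9Mhmjd9LXCpSKe1"
)
for link in find_ipfs_links(email_body):
    print("IPFS link worth inspecting:", link)
```

The takedown problem the report describes follows from the same property: because the page is addressed by its content hash rather than hosted on a single server, blocking one gateway does not remove it from the network.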

Phishing with ChatGPT

Notably, the impact of AI tools like ChatGPT extends beyond this particular malware. Phishing campaigns targeting prominent brands such as Microsoft, Binance, Netflix, Facebook, and Adobe have proliferated, with ChatGPT and off-the-shelf phishing kits lowering the technical barrier for criminals and saving them time and resources.
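A recurring trick in these campaigns is a lookalike domain that embeds the target brand’s name. As a rough illustration only (the brand list and allowlist below are assumptions for the example; production systems would use proper registered-domain parsing, e.g. a library like tldextract), a Python check might look like this:

```python
from urllib.parse import urlparse

# Illustrative allowlist: brands mapped to their legitimate domains.
OFFICIAL_DOMAINS = {
    "microsoft": {"microsoft.com"},
    "netflix": {"netflix.com"},
    "binance": {"binance.com"},
}

def looks_like_brand_phish(url: str) -> bool:
    """Flag URLs whose hostname mentions a brand but isn't on its domain."""
    host = (urlparse(url).hostname or "").lower()
    for brand, domains in OFFICIAL_DOMAINS.items():
        on_official = any(host == d or host.endswith("." + d) for d in domains)
        if brand in host and not on_official:
            return True
    return False

print(looks_like_brand_phish("https://netflix-account-verify.example.com"))  # True
print(looks_like_brand_phish("https://www.netflix.com/login"))               # False
```

Checks like this are cheap for defenders, which is partly why attackers now lean on kits and LLM-written copy: the text of the lure, not the infrastructure, is where generative AI saves them the most effort.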

In April, Facebook parent Meta said in a report that malware posing as ChatGPT was on the rise across its platforms. Since March 2023, the tech giant’s security teams have found 10 malware families that use ChatGPT and similar themes to deliver malicious software to users’ devices.

The consequences are far-reaching, as unsuspecting users fall victim to these increasingly sophisticated attacks.

Even ChatGPT itself has had its vulnerabilities, exemplified by a recent bug that exposed some users’ conversation history and payment details. The incident, traced to a bug in an open-source library, served as a reminder that open-source components can become an unintended gateway for security breaches.

Chatbot Popularity Attracts Hackers

Large language model (LLM) based chatbots aren’t going anywhere. In fact, they have a bright future when it comes to popularity, especially in Asia. According to a Juniper Research report, Asia Pacific will account for 85% of global retail spend on chatbots, even though the region represents only 53% of the global population. Messaging apps such as WeChat, LINE, and Kakao have been tying up with a wide range of online retailers.

These partnerships have already resulted in high levels of confidence in chatbots as a retail channel. Naturally, then, hackers are eyeing this medium to make a fast buck on the sly or simply to harvest valuable personal data.

Mike Starr, CEO and Founder of trackd, a vulnerability and software patch management platform, told HT Tech, “The tried and true methods of compromise that have brought the bad guys success for years are still working exceptionally well for them: exploitation of unpatched vulnerabilities, credential theft, and the installation of malicious software often via phishing.” According to Starr, the mechanisms that underpin these three compromise categories may evolve, but the “foundational elements remain the same.”

How It Impacts Consumers

The cybersecurity threats associated with LLMs can affect regular consumers at home in several ways, whether they are students looking for homework assistance or someone seeking advice on running a small business. Without appropriate security measures in place, LLMs that process personal data, such as chat logs or user-generated content, are just one breach away from exposing that data. Unauthorized access to sensitive information or data leakage can have severe consequences for consumers, including identity theft or the misuse of personal data.
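One precaution users can take themselves is to strip obvious personal identifiers from text before pasting it into a chatbot. The Python sketch below is a minimal illustration of that idea; the regex patterns are simplistic assumptions that will both miss real PII and occasionally over-match, so treat it as a starting point rather than a safeguard:

```python
import re

# Naive patterns for common identifiers; illustrative only.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "CARD":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace likely personal identifiers with placeholder tokens."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Email me at jane.doe@example.com or call +1 (555) 123-4567."
print(redact(prompt))
# -> Email me at [EMAIL] or call [PHONE].
```

The broader point stands regardless of tooling: anything typed into a chatbot should be assumed to persist somewhere, and shared accordingly.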

Does this mean that hackers could hijack our digital lives one day via chatbots? Not quite, says Starr.

“If it ain’t broke, don’t fix it, even for cyber threat actors. AI will likely enhance the efficiency of existing cyber criminals and may make it easier for the wanna-be or less-technical hacker to get into the business, but predictions of an AI-driven cyber apocalypse are more the figment of the imagination of Hollywood writers than they are objective reality,” he says.

So, it’s not time to panic, but remaining aware is a good idea.

“While none of these activities have risen to the seriousness of impact of ransomware, data extortion, denial-of-service, cyberterrorism, and so on — these attack vectors remain future possibilities,” said a report from Recorded Future, another US-based cybersecurity firm.

To mitigate these impacts, it is always wise to be critical of the information LLMs generate, to fact-check when necessary, and to stay alert to potential biases or manipulations.

Cyber Measures Needed

The emergence of the ChatGPT malware threat highlights the critical need for robust cybersecurity measures. Since this malware disguises itself as a trusted application, users are vulnerable to unknowingly installing malicious software on their devices. The remote access capabilities of the malware pose a significant risk, potentially compromising sensitive data and exposing users to various forms of cybercrime.

To combat this threat, individuals and organizations must prioritize cybersecurity practices such as regularly updating software, utilizing reliable antivirus software, and exercising caution when downloading applications from unofficial sources.
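On the “exercising caution” point, one concrete habit is to verify a downloaded installer against the checksum its publisher lists, when one is provided. Unlike the known-bad hash matching sketched earlier, this compares against a known-good value. A small Python helper (the file name and expected hash below are placeholders) might look like this:

```python
import hashlib

def verify_download(path: str, expected_sha256: str) -> bool:
    """Check a downloaded file's SHA-256 against the publisher's value."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest().lower() == expected_sha256.lower()

# Placeholder values: substitute the real file and the hash the publisher lists.
if verify_download("app.apk", "0" * 64):
    print("Checksum matches: file is the one the publisher released.")
else:
    print("Checksum mismatch: do not install.")
```

A matching checksum doesn’t prove an app is safe, only that it is the file the publisher actually released, which is still enough to catch tampered or impostor downloads like the fake ChatGPT apps described above.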

Additionally, raising awareness about the existence of such threats and promoting cybersecurity education can empower users to identify and mitigate potential risks associated with ChatGPT malware and other evolving cyber threats.

By Navanwita Sachdev, The Tech Panda

