
Employees input sensitive data into generative AI tools despite the risks

Lock with wheels turning inside. Image: Andriy Onufriyenko/Getty Images

Employees might recognize the potential leak of sensitive data as a top risk, but some still input such information into publicly available generative artificial intelligence (AI) tools. 

This sensitive data includes customer information, sales figures, financial data, and personally identifiable information, such as email addresses and phone numbers. Employees also lack clear policies or guidance on the use of these tools in the workplace, according to research released by Veritas Technologies. 
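
Guardrails against this kind of leakage can be prototyped in a few lines. Below is a minimal, hypothetical Python sketch of the pre-submission redaction a data loss prevention (DLP) layer might apply before a prompt reaches a public tool; the regex patterns, the scrub helper, and the sample prompt are illustrative assumptions, not anything described in the Veritas research.

```python
import re

# Illustrative patterns only; production DLP tools use far more robust detection.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"(?<!\w)(?:\+?\d{1,3}[ .-]?)?(?:\(\d{3}\)|\d{3})[ .-]?\d{3}[ .-]?\d{4}\b"),
}

def scrub(prompt: str) -> str:
    """Replace likely PII in a prompt with typed placeholders before the
    text is sent to a public generative AI tool."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Follow up with jane.doe@example.com or call (555) 123-4567 about Q3 sales."
    print(scrub(raw))
    # Follow up with [EMAIL REDACTED] or call [PHONE REDACTED] about Q3 sales.
```

Even a crude filter like this shows where a policy checkpoint could sit; in practice, organizations would pair such redaction with the formal usage policies the survey found lacking.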

Also: Five ways to use AI responsibly

Conducted by market researcher 3Gem in December 2023, the study polled 11,500 employees worldwide, including workers in Australia, China, Japan, Singapore, South Korea, France, Germany, the UK, and the US. 

Asked about the risks to their organization from using public generative AI tools, 39% of respondents pointed to the potential leak of sensitive data, while 38% said these tools could produce incorrect, inaccurate, or unhelpful information. Another 37% of respondents cited compliance risks and 19% noted the technology could negatively impact productivity. 

Some 57% of employees used public generative AI tools in the office at least once weekly, with 22.3% using the technology daily. About 28% of people said they did not use such tools at all. 

Also: The best AI chatbots: ChatGPT and other noteworthy alternatives

Some 42% of respondents said they used the tools for research and analysis, while 41% turned to generative AI to write email messages and memos, and 40% used it to improve their writing. 

As for the types of data that could provide business value when entered into public generative AI tools, 30% of employees pointed to customer information, such as references, bank details, and addresses. Some 29% cited sales figures, while 28% highlighted financial information, and 25% pointed to personally identifiable information. Another 22% of workers referred to confidential HR data and 17% cited confidential company information. 

Some 27% of respondents did not believe putting any of this sensitive information into public generative AI tools could yield value to the business. 

Almost a third (31%) of employees acknowledged having entered such sensitive data into these tools, while 5% were not sure if they had done so. Close to two-thirds (64%) said they did not input any sensitive data into public generative AI tools.

Also: Today’s AI boom will amplify social problems if we don’t act now

Asked about the benefits to their organization, however, 48% of respondents said the emerging technology could provide faster access to information. Forty percent cited higher productivity, 39% said generative AI could replace mundane tasks, and 34% believed it helped generate new ideas. 

Interestingly, 53% of employees considered a colleague's use of generative AI tools an unfair advantage, and 40% believed those who did so should be required to teach the rest of their team. Another 29% said colleagues who used such tools should be reported to their line manager, while 27% believed they should face disciplinary action.

In terms of formal guidance and policies on the use of public generative AI tools at work, 36% of respondents said none was available. Some 24% said their organization had mandatory policies on such use, while 21% said guidelines were voluntary in their workplace. Another 12% said their organization had banned the use of generative AI tools at work. 

A large majority (90%) of respondents believed it was important to have guidelines and policies on the use of emerging technology, with 68% noting the need for everyone to know the “right way” to adopt generative AI.

Risks will escalate as GenAI use climbs

It’s likely that, as the adoption of generative AI increases, the associated security risks will also grow. 

Key platforms could see large-scale attacks once a single generative AI technology approaches a 50% market share, or when the market consolidates to no more than three technologies, according to IBM’s X-Force Threat Intelligence Index 2024.

Also: Train AI models with your own data to mitigate risks

The study is based on the tech vendor’s monitoring of more than 150 billion security events per day across more than 130 countries, as well as data insights from within IBM, including its managed security services unit and Red Hat.

Cyber criminals target technologies that are ubiquitous across organizations globally to see returns from their campaigns, IBM noted. This approach will extend to AI once generative AI gains market dominance, triggering the maturity of AI as an attack surface and motivating cyber criminals to invest in new tools.

It is, therefore, critical that businesses secure their AI models before threat actors scale their activities, IBM warned. In 2023, there were more than 800,000 posts on AI and GPT across dark web forums, it noted, adding that identity-based threats will continue to grow as adversaries tap the technology to optimize their attacks.

Describing generative AI as the next big frontier to secure, the tech vendor said: “Enterprises should also recognize their existing underlying infrastructure is a gateway to their AI models that doesn’t require novel tactics from attackers to target — highlighting the need for a holistic approach to security in the age of generative AI.”

Also: These are my 5 favorite AI tools for work

Charles Henderson, IBM Consulting’s global managing partner and head of IBM X-Force, said: “While ‘security fundamentals’ doesn’t get as many head turns as ‘AI-engineered attacks,’ it remains that enterprises’ biggest security problem boils down to the basic and known — not the novel and unknown. Identity is being used against enterprises time and time again, a problem that will worsen as adversaries invest in AI to optimize the tactic.”

In addition, exploiting valid accounts has become the path of least resistance for cyber criminals. The IBM Threat Intelligence Index saw a 266% increase in attacks involving malware designed to steal personally identifiable information, including social media and messaging app credentials, banking details, and crypto wallet data.

Europe in 2023 was the most targeted region, accounting for 32% of incidents IBM’s X-Force responded to around the world, including 26% of ransomware attacks globally. Such attacks contributed to 44% of all incidents Europe experienced, which partly fueled the region’s climb to top position last year. Europe’s high use of cloud platforms might also have expanded its attack surface, compared to its global counterparts, according to IBM. 

Asia-Pacific, which was the most targeted region in 2021 and 2022, was the third-most impacted, taking on 23% of global incidents, while North America accounted for 26%.

Also: Have 10 hours? IBM will train you in AI fundamentals – for free

Globally, almost 70% of attacks were against critical infrastructure organizations. In nearly 85% of those incidents, attackers gained access by exploiting public-facing applications, sending phishing emails, or using valid accounts. 

IBM noted that in 85% of attacks on critical sectors, the compromise could have been mitigated with patching, multi-factor authentication, or least-privilege principles.
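
Those fundamentals lend themselves to simple automation. The sketch below is a purely illustrative Python audit pass over a hypothetical account inventory, flagging the two controls IBM highlights: missing multi-factor authentication, and privileges that are granted but never exercised and are therefore candidates for least-privilege trimming. The Account structure and sample data are invented for this example.

```python
from dataclasses import dataclass

@dataclass
class Account:
    name: str
    mfa_enabled: bool
    granted: set[str]  # permissions the account holds
    used: set[str]     # permissions actually exercised in the audit window

# Hypothetical inventory; in practice this would come from an identity
# provider or a cloud security posture export.
ACCOUNTS = [
    Account("svc-backup", mfa_enabled=False,
            granted={"read", "write", "admin"}, used={"read"}),
    Account("j.smith", mfa_enabled=True, granted={"read"}, used={"read"}),
]

def audit(accounts: list[Account]) -> None:
    """Flag missing MFA and unused privileges, the basics IBM says would
    have mitigated most compromises of critical sectors."""
    for acct in accounts:
        if not acct.mfa_enabled:
            print(f"{acct.name}: MFA disabled; enforce a second factor")
        unused = acct.granted - acct.used
        if unused:
            print(f"{acct.name}: unused privileges {sorted(unused)}; consider revoking")

if __name__ == "__main__":
    audit(ACCOUNTS)
```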



