ChatGPT Helped Win a Hackathon



The ChatGPT AI bot has spurred speculation about how hackers might use it and similar tools to attack faster and more effectively, though so far the more damaging exploits have been confined to laboratories.

In its current form, the ChatGPT bot from OpenAI, an artificial-intelligence startup backed by billions of dollars from Microsoft Corp., is mainly trained to digest and generate text. For security chiefs, that means bot-written phishing emails might be more convincing than, for example, messages from a hacker whose first language isn’t English. 

Today’s ChatGPT is too unpredictable and susceptible to errors to be a reliable weapon itself, said Dustin Childs, head of threat awareness at Trend Micro Inc.’s Zero Day Initiative, the cybersecurity company’s software vulnerability-hunting program. “We’re years away from AI finding vulnerabilities and doing exploits all on its own,” Mr. Childs said.

Still, that won’t always be the case, he said. 

Two security researchers from cybersecurity company Claroty Ltd. said ChatGPT helped them win the Zero Day Initiative’s hackathon in Miami last month.

Noam Moshe, a vulnerability researcher at Claroty, said the approach he and his partner took shows how a determined hacker can employ an AI bot. Generative AI—algorithms that create realistic text or images built on the training data they have consumed—can supplement hackers’ know-how, he said.

The goal of the three-day event, known as Pwn2Own, was to disrupt, break into and take over Internet of Things devices and industrial systems. Before arriving, contestants chose targets from Pwn2Own’s list and then prepared tactics.

Mr. Moshe and his partner found several potential weak points in their selected systems. They used ChatGPT to help write code to chain the bugs together, he said, saving hours of manual development. No single bug would have allowed the team to get very far, he said, but exploiting them in sequence would. At the contest, Mr. Moshe and his partner succeeded all 10 times they tried, winning $123,000.

“A vulnerability on its own isn’t interesting, but when we look at the bigger picture and collect vulnerabilities, we can rebuild the chain to take over the system,” he said.  
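The glue code he describes is typically a short script that fires each exploit step in order and feeds whatever foothold one step yields into the next. A minimal sketch of that pattern in Python, with entirely hypothetical bugs and function names (not the researchers’ actual code or targets), is below.

```python
# Illustrative sketch of "chaining" vulnerabilities: each step alone achieves
# little, but each step's output becomes the next step's input. All function
# names, bugs, and values here are hypothetical placeholders.

def bypass_auth(target: str) -> str:
    """Hypothetical bug 1: obtain a session token without credentials."""
    return f"unauthenticated-session-for-{target}"

def leak_config(target: str, session: str) -> dict:
    """Hypothetical bug 2: use that session to read internal settings."""
    return {"service_account": "admin", "debug_port": 9000}

def gain_control(target: str, config: dict) -> bool:
    """Hypothetical bug 3: use the leaked settings to take over the device."""
    return config.get("debug_port") is not None

def run_chain(target: str) -> bool:
    # The glue logic: each link only matters because the previous one succeeded.
    session = bypass_auth(target)
    config = leak_config(target, session)
    return gain_control(target, config)

if __name__ == "__main__":
    # 192.0.2.10 is a documentation-reserved address, used here as a stand-in.
    print("chain succeeded:", run_chain("192.0.2.10"))
```

Writing this kind of sequencing-and-plumbing code is routine but time-consuming, which is why, per Mr. Moshe’s account, handing it to a bot saved hours.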

OpenAI and other companies with generative AI bots are adding controls and filters to prevent abuse, such as blocking racist or sexist outputs.

Some bad actors will likely try to get around any cybersecurity boundaries the bots are taught, said Christopher Whyte, an assistant professor of cybersecurity and homeland security at Virginia Commonwealth University.

Rather than directly instructing a bot to write code that takes data from a computer without the user knowing, a hacker could try to trick it into writing malicious code by formulating the request without obvious triggers, Mr. Whyte said.

It is similar to when a scammer uses persuasion to trick an office worker into revealing credentials or wiring money to fraudulent accounts, he said. “You steer the conversation to get the target to bypass controls,” he said.

Write to Kim S. Nash at kim.nash@wsj.com

Copyright ©2022 Dow Jones & Company, Inc. All Rights Reserved.


