
ChatGPT and the Future (Present) We’re Facing
By Alberto Romero | February 2023



Opinion

2023 will be much more intense and overwhelming than 2022, so fasten your seatbelts

Credit: Midjourney

Until ChatGPT stops being the most important news in AI, I guess we’re stuck talking about it… Just kidding; I’ll make sure to interweave other topics, or else we may burn out.

There’s still a lot to say about ChatGPT’s immediate and long-term implications. I’ve written about what ChatGPT is and how to get the most out of it, about the challenge of identifying its outputs, and about the threat it poses to Google and traditional search engines, but I’ve yet to touch on how the risks and harms that some foresaw are already taking shape in the real world.

Two months after its release, we can all agree that ChatGPT has reached the mainstream and has taken AI as a field with it. As an anecdote, a friend who knows nothing about AI came to me talking about ChatGPT before I had told him about it. That was a first for me — and I’m not the only one.

That’s the reason why it’s urgent to talk about the consequences of AI: ChatGPT has reached people much faster than any resources on how to use it well or how it definitely shouldn’t be used. The number of people using AI tools today is larger than ever before (not only ChatGPT; Midjourney has almost 10M members in its Discord server), which implies that more people than ever before will misuse them.

In contrast to my predictive/speculative essays, this one isn’t about things that could happen but about things that are happening. I’ll zoom in on ChatGPT because it’s what the world is talking about, but most of what follows could apply, with adequate translation, to other types of generative AI.

This article is a selection from The Algorithmic Bridge, an educational newsletter whose purpose is to bridge the gap between AI, algorithms, and people. It will help you understand the impact AI has on your life and develop the tools to better navigate the future.

On January 6, security research group Check Point Research (CPR) published a terrifying article titled “OpwnAI: Cybercriminals Starting to Use ChatGPT.” Although it wasn’t surprising, I wasn’t expecting it so soon.

CPR had previously studied how malicious hackers, scammers, and cybercriminals could exploit ChatGPT. They demonstrated how the chatbot can “create a full infection flow, from spear-phishing to running a reverse shell” and how it can generate scripts to run dynamically, adapting to the environment.

Despite OpenAI’s guardrails, which appeared as an orange warning notification when CPR forced ChatGPT to do something against the usage policy, the research group had no problem generating a simple phishing email. “Complicated attack processes can also be automated as well, using the LLMs APIs to generate other malicious artifacts,” they concluded.

Basic phishing email generated by ChatGPT. Credit: CPR (with permission)

CPR researchers weren’t satisfied with proof that ChatGPT could do this hypothetically (one of the common criticisms skeptics receive is that the potential risks they warn about never materialize into real-world harm). They wanted to find real instances of people misusing it in similar ways. And they found them.

CPR analyzed “several major underground hacker communities” and found at least three concrete examples of cybercriminals using ChatGPT in ways that not only violate the ToS but could also cause harm in a direct and measurable way.

First, an info stealer. In a thread entitled “ChatGPT — Benefits of Malware,” a user shared experiments where he “recreated many malware strains.” As CPR noted, the OP’s other posts revealed that “this individual [aims] to show less technically capable cybercriminals how to utilize ChatGPT for malicious purposes.”

“Cybercriminal showing how he created infostealer using ChatGPT.” Credit: CPR (with permission)

Second, an encryption tool. A user by the name “USDoD” published a Python script with “encryption and decryption functions.” CPR concluded that the “script can easily be modified to encrypt someone’s machine completely without any user interaction.” While USDoD has “limited technical skills,” he is “engaged in a variety of illicit activities.”

“Cybercriminal dubbed USDoD posts multi-layer encryption tool.” Credit: CPR (with permission)

The last example is fraud activity. The title of the post is quite telling: “Abusing ChatGPT to create Dark Web Marketplaces scripts.” CPR writes: “The cybercriminals published a piece of code that uses third-party API to get up-to-date cryptocurrency … prices as part of the Dark Web market payment system.”

“Threat actor using ChatGPT to create DarkWeb Market scripts.” Credit: CPR (with permission)

It’s clear that ChatGPT being free to use and highly intuitive makes it a magnet for cybercriminals, including those with low technical skills. As Sergey Shykevich, Threat Intelligence Group Manager at Check Point, explains:

“Just as ChatGPT can be used for good to assist developers in writing code, it can also be used for malicious purposes. Although the tools that we analyze in this report are pretty basic, it’s only a matter of time until more sophisticated threat actors enhance the way they use AI-based tools.”

ChatGPT being a driver of security issues online isn’t a hypothesis exaggerated by fearmongers but a reality that’s hard to deny. For those who use the argument that this was possible before ChatGPT, two things: First, ChatGPT can bridge the technical gap. Second, scale matters a lot here — ChatGPT can write a script automatically in seconds.

Cybersecurity, disinformation, plagiarism… Many people have repeatedly warned about the problems ChatGPT-like AIs can cause. Now malicious users are starting to abound.

Someone could still try to make the case in favor of ChatGPT. Maybe it’s not that problematic — the upsides can compensate for the downsides — but maybe it is. And a “maybe” should suffice for us to think twice. OpenAI lowered its guard when GPT-2 turned out to be “harmless” (they saw “no strong evidence of misuse so far”), and they never raised it again.

I agree with Scott Alexander that “perhaps it is a bad thing that the world’s leading AI companies cannot control their AIs.” Perhaps reinforcement learning from human feedback isn’t good enough. Perhaps companies should find better ways to exert control over their models if they’re going to unleash them into the wild. Perhaps GPT-2 wasn’t so dangerous, but a couple of iterations later we’ve got something to worry about. And if not, we’ll have it in a couple more.

I’m not saying OpenAI hasn’t tried — they have (they’ve even been criticized for being too conservative). What I’m arguing is that, if we perpetuate this mindset of “I’ve tried to make it right so I now have the green light to release my AI” into the short-term future, we’ll encounter more and more downsides that no upside would make up for.

One question has been bothering me for a few weeks: If OpenAI is so worried about doing things right, why didn’t they set up the watermarking scheme to identify ChatGPT’s outputs before releasing the model to the public? Scott Aaronson is still trying to make it work — a month after the model went completely viral.


I don’t think a watermark would’ve solved the fundamental problems this technology entails, but it would have helped by buying time. Time for people to adapt, for scientists to find solutions to the most pressing issues, and for regulators to come up with relevant legislation.
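Aaronson hasn’t published the details of OpenAI’s scheme, but the general idea behind statistical watermarks is simple enough to sketch. Below is a toy illustration of that general idea (my own simplification, not OpenAI’s method; the vocabulary and helper names are made up for the example): a hash of the previous token pseudorandomly marks half the vocabulary as “green,” generation is biased toward green tokens, and a detector flags text whose fraction of green tokens is suspiciously high.

```python
# Toy sketch of a statistical text watermark. Illustrative only: OpenAI's
# actual scheme (Scott Aaronson's work) is unpublished and differs in detail.
import hashlib
import random

VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "on", "under", "mat", "tree"]

def green_set(prev_token: str) -> set[str]:
    """Deterministically mark half the vocabulary as 'green' based on the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, len(VOCAB) // 2))

def green_fraction(tokens: list[str]) -> float:
    """Fraction of tokens that land in their green set: roughly 0.5 for ordinary
    text, noticeably higher for text generated with a bias toward green tokens."""
    if len(tokens) < 2:
        return 0.0
    hits = sum(tok in green_set(prev) for prev, tok in zip(tokens, tokens[1:]))
    return hits / (len(tokens) - 1)

print(green_fraction("the cat sat on the mat".split()))
```

A generator that consistently prefers green tokens leaves a statistical signal that can survive light edits, which is exactly the breathing room for detection tools, educators, and regulators that a watermark would have bought.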

Due to OpenAI’s inaction, we’re left with timid attempts at building GPT detectors that could provide people with a means to avoid AI disinformation, scams, or phishing attacks. Some have tried to repurpose a 3-year-old GPT-2 detector for ChatGPT, but it doesn’t work. Others, like Edward Tian, a CS and journalism senior at Princeton University, have developed systems from the ground up, specifically targeting ChatGPT.


As of now, 10,000+ people have tested GPTZero, myself included (here’s the demo; Tian is building a product that 3K+ teachers have already signed up for). I confess that I’ve managed to fool it just once (and only because ChatGPT misspelled a word), but I haven’t tried too hard either.

The detector is quite simple; it evaluates the “perplexity” and “burstiness” of a chunk of text. Perplexity measures how much a sentence “surprises” the detector (i.e. to what degree the distribution of output words deviates from what a language model would predict), and burstiness measures how much perplexity varies from sentence to sentence. Simply put, GPTZero leverages the fact that humans tend to write much more weirdly than AIs — which becomes apparent as soon as you read a page of AI-generated text. It’s so dull…
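To make those two notions concrete, here’s a minimal sketch of how a GPTZero-style heuristic could be computed (my own illustration under stated assumptions, not Tian’s actual code): perplexity is taken as the exponential of a language model’s average loss on the text, and burstiness as the spread of per-sentence perplexities. It assumes the Hugging Face transformers and torch packages and the public gpt2 checkpoint.

```python
# Minimal sketch of a GPTZero-style heuristic: perplexity under GPT-2 plus a
# simple "burstiness" score (variation of perplexity across sentences).
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2: exp of the mean token-level loss."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return math.exp(out.loss.item())

def burstiness(sentences: list[str]) -> float:
    """Standard deviation of per-sentence perplexity; human writing tends to score higher."""
    scores = [perplexity(s) for s in sentences if s.strip()]
    if not scores:
        return 0.0
    mean = sum(scores) / len(scores)
    return (sum((s - mean) ** 2 for s in scores) / len(scores)) ** 0.5

sample = ["The weather was fine that morning.", "Then everything went sideways, fast."]
print(perplexity(" ".join(sample)), burstiness(sample))
```

Low perplexity together with low burstiness points toward machine-generated text; high values of either push the verdict toward “human.” The hard part is choosing the thresholds, and that’s where a tool like GPTZero earns (or loses) its accuracy.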

At a <2% false positive rate, GPTZero is the best detector out there. Tian is proud: “Humans deserve to know when the writing isn’t human,” he told the Daily Beast. I agree — even if ChatGPT doesn’t plagiarize, it’s morally wrong for people to claim they’re authors of something ChatGPT wrote.

But I know it isn’t infallible. A few changes to the output (e.g. misspelling a word or interleaving words of your own) may be enough to trick the system. Asking ChatGPT to avoid repeating words works just fine, as Yennie Jun shows here. And finally, GPTZero may become obsolete soon because new language models appear every few weeks — Anthropic has unofficially announced Claude, which, as evidenced by Riley Goodside’s analyses, is better than ChatGPT.

And GPT-4 is around the corner.

This is a cat-and-mouse game, as some people like to call it — and the mouse is always one step ahead.

If detectors worked just fine, many people would get angry. Most want to use ChatGPT without barriers. Students, for instance, wouldn’t be able to cheat on written essays because an AI-savvy professor might run them through a detector (it has already happened). The fact that 3K+ teachers have signed up for Tian’s upcoming product says it all.

But, because detectors aren’t sufficiently reliable, those who don’t want to face the uncertainty of guessing whether a written deliverable is or isn’t ChatGPT’s product have opted for the most conservative solution: banning ChatGPT.

The Guardian reported on Friday that “New York City schools have banned ChatGPT.” Jenna Lyle, a department spokesperson, cites “concerns about negative impacts on student learning, and concerns regarding the safety and accuracy of contents” as the reasons for the decision. Although I understand the teachers’ point of view, I don’t think this is a wise approach — it may be the easier choice, but it isn’t the right one.

Stability.ai’s David Ha tweeted his disagreement when the news came out.


I acknowledge (and have done so before) the problems schools face (e.g. widespread undetectable plagiarism), but I have to agree with Ha.

Here’s the dilemma: This technology isn’t going away. It’s a part of the future — a big part, probably — and it’s super important that students (and you, me, and everyone else) learn about it. Banning ChatGPT from schools isn’t a solution. As Ha’s tweet implies, it could be more harmful to ban it than to allow it.

Yet students who use it to cheat on exams or essays would waste their teachers’ time and effort and, without realizing it, hinder their own development. As Lyle says, ChatGPT may prevent students from learning “critical-thinking and problem-solving skills.”

What’s the solution that I (and many others) foresee? The education system will have to adapt. Although harder, this is the better solution. Given how broken the schooling system is, it may very well be a win-win situation for students and teachers. Of course, until that happens it’s better that teachers have access to a reliable detector — but let’s not use that as an excuse to avoid adapting education to these changing times.

The education system has a lot of room for improvement. If it hasn’t changed in so many years, it’s because there weren’t strong enough incentives to do so. ChatGPT gives us a reason to reimagine education; the only piece missing from this puzzle is the willingness of those who decide.

It really does feel like the world is changing. Some have compared AI to fire or electricity, but those inventions were integrated slowly into society and are too far back in time; we don’t know how that felt. AI is more like the internet: it’s going to transform the world, and very fast.

I’ve tried to capture in this essay a future that’s already more of a present than a future. It’s one thing for AIs like GPT-3 or DALL-E to exist, and a very different thing for everyone in the world to be aware of them. Those hypotheticals (e.g. disinformation, cyber hacking, plagiarism) are hypothetical no longer. They’re happening here and now, and we are going to see more desperate measures to stop them (e.g. building scrappy detectors or banning AI).

We have to assume some things will change forever. But, in some cases, we may have to defend our position (like artists are doing with text-to-image models, or minorities have done before with classification systems). Regardless of who you are, AI will get to you in one way or another. You’d better be ready.




