Techno Blender
Digitally Yours.

These engineers are being hired to get the most out of AI tools without coding

0 63


Day 68:43As AI chatbots proliferate, so does demand for prompt engineers turned AI whisperers

The arrival of artificial intelligence software like OpenAI’s ChatGPT has created both intrigue and alarm about how the technology will shape everything from the future homework, security and even the very framework of capitalism.

The sharp uptick in available AI tools is driving demand for a growing field called prompt engineering.

According to Simon Willison, a developer and researcher who has studied prompt engineers, they’re being sought out as the experts in “communicating with these things.”

Prompt engineers don’t really use coding languages, but specialize in crafting detailed prompts to get better outputs from AI tools. They’re being hired by companies to improve the results from their AI tools, and there are even freelance marketplaces for prompts

Willison thinks the field is here to stay because expertise will continue to be necessary to get the most out of increasingly intricate AI models. For example, just this week, OpenAI released GPT-4, which is the latest, scaled-up version of the large-language model that runs ChatGPT. It can read the content of images, as well as text, and OpenAI claims it can even pass a simulated bar exam.

Simon Willison, a British programmer who has studied prompt engineering, says prompt engineers are a lot like computer programmers, except their work is “weird and different.” (Natalie Downe)

There are also prompt-based interactions with AI that are intentionally malicious. In one recent high-profile example, Stanford University student Kevin Liu tricked Microsoft Bing’s AI-powered chatbot using a “prompt injection attack” to get the AI to spill its secrets, leaving it to declare itself “violated and exposed.” 

Willison spoke with Day 6 guest host Peter Armstrong about whether skepticism is warranted over how much control prompt engineers have — and whether it’s a real science, or just learned intuition. Here’s part of that conversation.

You’ve said in some of your writing that it’s important for prompt engineers to resist what you call superstitious thinking. What do you mean by that? 

It’s very easy when talking to one of these things to think that it’s an AI out of science fiction, to think that it’s like the Star Trek computer, and it can understand and do anything. And that’s very much not the case.

These systems are extremely good at pretending to be all powerful or knowing things, but they have massive, massive flaws in them. So it’s very easy to become superstitious to think, “Oh, wow, I asked it to read this web page. I gave you a link to an article and it read it.” It didn’t read it.

A lot of the time it will invent things that look like it did what you asked it to. But actually it’s really just sort of imitating what it thought you might … but really it’s sort of imitating what would look like a good answer to the question that you asked it.

I’m not used to working with computers that might say no to me.– Simon Willison

We already have people calling these the AI whisperers. How much of this is, you know, magic as opposed to science? 

It really can feel like you’re a sort of magician. You sort of cast spells at [the AI]. You don’t fully understand what they’re going to do, and it reacts sometimes well, and sometimes it reacts poorly.

I’ve talked to AI practitioners who kind of talk about collecting spells for their spell book, but it’s also a very dangerous comparison to make because magic is, by its nature, impossible for people to understand and can do anything. These models are absolutely not fundamentally that. They’re mathematics.

WATCH | ChatGPT is capable of writing poems and even computer code: 

ChatGPT software highlights advances, limitations of modern artificial intelligence

ChatGPT is artificial intelligence chatbot software capable of writing poems, college-level essays and even computer code. Experts say the software highlights how far AI has come in just a few years, while still spotlighting concerns around accuracy.

How much control do you think these prompt engineers actually have? 

One of the frustrations of working with these systems is that you do feel a total lack of control. I’m a computer programmer. I’m used to programing computers where they do exactly what you tell them to do, and these systems don’t do that.

Often they’ll do what you ask. Sometimes they’ll even refuse you on ethical grounds. They’ll say, “No, I’m not comfortable completing that operation.” 

I’m not used to working with computers that might say no to me.

Should we have some ethical [concerns] about prompt engineering within that world?

I’m not worried about the sort of science fiction scenario where the AI breaks out of my laptop and takes over the world.

But there are many very harmful things you can do with a machine that can imitate human beings and that can produce realistic human text. The opportunities for spam and for scamming people and automating things, like romance scams, are very real and very concerning to me.

And does that get more complex as we get better and more effective at using this? 

I think it does. I think people who are people with malicious intent who learn to do this stuff will be able to scale up that malicious intent. They’ll be able to operate at much higher scales. And meanwhile, there are people who are trying to … help fight disinformation and help them help spot sort of influence campaigns.

So there are all sorts of different applications of this. Some are definitely bad, some are definitely good.

WATCH | What if scammers could use AI to create highly personalized scam emails? 

Scammer’s paradise: How AI makes money

Feb. 28, 2023 | What if scammers could use artificial intelligence to create highly personalized scam emails? Andrew sits down with About That producer Keiran Oudshoorn to discuss how scammers are manipulating AI to make money and how you can protect yourself.

Do prompt engineers have a future, or are we all just going to eventually be able to catch up with them and use this AI more effectively? 

Many people in their professional and personal lives are going to learn to use these tools. But I also think there’s going to be space for expertise.

There will always be a level at which it’s worth investing full-time experience in solving some of these problems, especially for companies that are building entire products around these engines under the hood.


Radio segment by Mickie Edwards. Q&A edited for length and clarity.


Day 68:43As AI chatbots proliferate, so does demand for prompt engineers turned AI whisperers

The arrival of artificial intelligence software like OpenAI’s ChatGPT has created both intrigue and alarm about how the technology will shape everything from the future homework, security and even the very framework of capitalism.

The sharp uptick in available AI tools is driving demand for a growing field called prompt engineering.

According to Simon Willison, a developer and researcher who has studied prompt engineers, they’re being sought out as the experts in “communicating with these things.”

Prompt engineers don’t really use coding languages, but specialize in crafting detailed prompts to get better outputs from AI tools. They’re being hired by companies to improve the results from their AI tools, and there are even freelance marketplaces for prompts

Willison thinks the field is here to stay because expertise will continue to be necessary to get the most out of increasingly intricate AI models. For example, just this week, OpenAI released GPT-4, which is the latest, scaled-up version of the large-language model that runs ChatGPT. It can read the content of images, as well as text, and OpenAI claims it can even pass a simulated bar exam.

Man with glasses stands against a brick wall.
Simon Willison, a British programmer who has studied prompt engineering, says prompt engineers are a lot like computer programmers, except their work is “weird and different.” (Natalie Downe)

There are also prompt-based interactions with AI that are intentionally malicious. In one recent high-profile example, Stanford University student Kevin Liu tricked Microsoft Bing’s AI-powered chatbot using a “prompt injection attack” to get the AI to spill its secrets, leaving it to declare itself “violated and exposed.” 

Willison spoke with Day 6 guest host Peter Armstrong about whether skepticism is warranted over how much control prompt engineers have — and whether it’s a real science, or just learned intuition. Here’s part of that conversation.

You’ve said in some of your writing that it’s important for prompt engineers to resist what you call superstitious thinking. What do you mean by that? 

It’s very easy when talking to one of these things to think that it’s an AI out of science fiction, to think that it’s like the Star Trek computer, and it can understand and do anything. And that’s very much not the case.

These systems are extremely good at pretending to be all powerful or knowing things, but they have massive, massive flaws in them. So it’s very easy to become superstitious to think, “Oh, wow, I asked it to read this web page. I gave you a link to an article and it read it.” It didn’t read it.

A lot of the time it will invent things that look like it did what you asked it to. But actually it’s really just sort of imitating what it thought you might … but really it’s sort of imitating what would look like a good answer to the question that you asked it.

I’m not used to working with computers that might say no to me.– Simon Willison

We already have people calling these the AI whisperers. How much of this is, you know, magic as opposed to science? 

It really can feel like you’re a sort of magician. You sort of cast spells at [the AI]. You don’t fully understand what they’re going to do, and it reacts sometimes well, and sometimes it reacts poorly.

I’ve talked to AI practitioners who kind of talk about collecting spells for their spell book, but it’s also a very dangerous comparison to make because magic is, by its nature, impossible for people to understand and can do anything. These models are absolutely not fundamentally that. They’re mathematics.

WATCH | ChatGPT is capable of writing poems and even computer code: 

ChatGPT software highlights advances, limitations of modern artificial intelligence

ChatGPT is artificial intelligence chatbot software capable of writing poems, college-level essays and even computer code. Experts say the software highlights how far AI has come in just a few years, while still spotlighting concerns around accuracy.

How much control do you think these prompt engineers actually have? 

One of the frustrations of working with these systems is that you do feel a total lack of control. I’m a computer programmer. I’m used to programing computers where they do exactly what you tell them to do, and these systems don’t do that.

Often they’ll do what you ask. Sometimes they’ll even refuse you on ethical grounds. They’ll say, “No, I’m not comfortable completing that operation.” 

I’m not used to working with computers that might say no to me.

Should we have some ethical [concerns] about prompt engineering within that world?

I’m not worried about the sort of science fiction scenario where the AI breaks out of my laptop and takes over the world.

But there are many very harmful things you can do with a machine that can imitate human beings and that can produce realistic human text. The opportunities for spam and for scamming people and automating things, like romance scams, are very real and very concerning to me.

And does that get more complex as we get better and more effective at using this? 

I think it does. I think people who are people with malicious intent who learn to do this stuff will be able to scale up that malicious intent. They’ll be able to operate at much higher scales. And meanwhile, there are people who are trying to … help fight disinformation and help them help spot sort of influence campaigns.

So there are all sorts of different applications of this. Some are definitely bad, some are definitely good.

WATCH | What if scammers could use AI to create highly personalized scam emails? 

Scammer’s paradise: How AI makes money

Feb. 28, 2023 | What if scammers could use artificial intelligence to create highly personalized scam emails? Andrew sits down with About That producer Keiran Oudshoorn to discuss how scammers are manipulating AI to make money and how you can protect yourself.

Do prompt engineers have a future, or are we all just going to eventually be able to catch up with them and use this AI more effectively? 

Many people in their professional and personal lives are going to learn to use these tools. But I also think there’s going to be space for expertise.

There will always be a level at which it’s worth investing full-time experience in solving some of these problems, especially for companies that are building entire products around these engines under the hood.


Radio segment by Mickie Edwards. Q&A edited for length and clarity.

FOLLOW US ON GOOGLE NEWS

Read original article here

Denial of responsibility! Techno Blender is an automatic aggregator of the all world’s media. In each content, the hyperlink to the primary source is specified. All trademarks belong to their rightful owners, all materials to their authors. If you are the owner of the content and do not want us to publish your materials, please contact us by email – [email protected]. The content will be deleted within 24 hours.
Leave a comment