
Beware the prophets of p(doom), but don’t ignore them



Why, exactly? “Firstly, AI has many potential benefits in areas I don’t fully understand, like diagnostic health care,” he tells me. “But secondly, when you’re looking to secure something, panicking people about it is a really bad idea.

“We know this from cybersecurity. In 2010, The Economist magazine had a big front page, a cover of a city skyline falling in flames à la 9/11,” supposedly a vision of future cyberwar.

Cybersecurity expert Ciaran Martin: “Cybersecurity is about hacking code … but it tends not to kill people.” Credit: James Alcock

“Cyberwar, the threat from the internet, that’s not the way cybersecurity works,” Martin explains. “Cybersecurity is about hacking code. Second-order effects can be highly disruptive, extremely costly and very intimidating; it does all sorts of harm, but it tends not to kill people.

“And it’s almost impossible to detonate some sort of explosion that would bring down a skyscraper via cyberspace. Why does that matter? It matters in two ways. One is it scares people and makes them feel powerless.”

Martin, a professor of management of public organisations at Oxford, also serves as an adviser to the Australian cybersecurity firm CyberCX. He says he has to remind clients that they are not powerless; they do have agency: “We could talk about Ukraine, showing that when a defender really sets its mind to it, you know, all sorts of things are possible. So you don’t want to infantilise people in cybersecurity. And the second thing is that it may send you chasing after the wrong problem.”

He urges governments and companies instead to break down potential AI problems into specific parts and deal with each.

Joe Biden met with leaders from major artificial intelligence firms at the White House on Friday. Credit: Bloomberg

For instance, on Friday a group of leading US tech firms struck a deal with the White House to stamp a watermark on any content produced by AI, an effort to keep deepfakes distinguishable from reality.

US President Joe Biden said that it was a step towards responsible use of AI, a technology he described as “astounding”. The watermarking deal is voluntary. The signatories – Google, Amazon, Meta, Microsoft, OpenAI, Anthropic and Inflection AI – said they’d apply it to text, images, audio and video. But many other companies refused to sign, including Midjourney, whose product was used to make a fake clip of Donald Trump being arrested.

Other countries are trying different ways to regulate AI. More sweepingly, the Chinese Communist Party last week issued guidelines for China’s AI developers that stipulate the use of “socialist values” in any new products. This is, in effect, a demand that all AI programs permitted access to the Chinese internet must promote Beijing’s authoritarian world view.

But overarching all the detail is a fundamental strategic principle. The creation of the nuclear bomb is, as Christopher Nolan says, a cautionary tale of humanity’s capacity for self-destruction. Yet only two have been used in anger, and none in the 78 years since.


This is an instructive tale of the logic of mutually assured destruction, with one country’s atomic arsenal held in check by the threat posed by others’. And Ciaran Martin says that a similar principle applies to the online world, the logic of technological equilibrium.

“Can AI be used to scale malevolent software? Yes. But can the same techniques be used to scale the defence against that? Yes. And furthermore, can it be used to expand cybersecurity solutions? Absolutely.

“For as long as our equilibrium holds – what can be done for attack can be done for defence – we stay in a good place.” Of course, that equilibrium could be broken by the next revolution in computing: quantum. In truth, there is always something to panic about if we go looking for it.

Martin says that we like to think that “because something can happen, it will happen”. Yet the Doomsday Clock has never struck midnight.

So “is there a robot that could be programmed to go and pour petrol in your toaster and then turn it on? Probably, but I think you’re more likely to face targeted extortion or financial theft from organised cyber criminals.” In other words, p(rip-off) is more likely than p(doom).

Peter Hartcher is international editor.

