
Why I Signed the “Pause Giant AI Experiments” Petition | by Rafe Brena, PhD | April 2023



Photo by Álvaro Serrano on Unsplash
The letter makes several points that I find right on target:

  1. GenAI systems are “powerful digital minds that no one — not even their creators — can understand, predict, or reliably control.” They are “unpredictable black-box models with emergent capabilities.” This explains why they are intrinsically dangerous systems: “emergent capabilities” means that when GenAI systems get large enough, new behaviors, like hallucinations, appear out of thin air. Emergent behaviors are not engineered or programmed; they simply appear.
  2. AI labs are “locked in an out-of-control race to develop and deploy ever more powerful digital minds.” This non-stop race can be understood in terms of market-share domination for the companies, but what about the societal consequences? They say they care about them, but the relentless pace suggests otherwise.
  3. Instead of letting this reckless race continue, we should “develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts.”
  4. Another good point is that the letter does not try to stop AI research or innovation altogether: “This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.” Further, a reorientation of tech efforts is proposed: “AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.”
  5. Finally, an emphasis on policymaking is proposed as the way to go: “AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should, at a minimum, include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.”

That said, the letter is far from perfect. Here is what I don’t like about it:

  1. The references are not authoritative enough. Oral declarations are not objective evidence, and even the Bubeck et al. reference is not really a scientific paper, because it was never peer-reviewed. You know, papers published in prestigious journals go through a review process with several anonymous reviewers (I myself review more than a dozen papers each year). If the Bubeck paper were submitted to a peer-reviewed journal, it surely wouldn’t be accepted as it is, because it uses subjective language (what about “Sparks of Artificial General Intelligence”?).
  2. Some claims in the letter are plain ridiculous: it starts with “AI systems with human-competitive intelligence…”, but as I explained in a previous post, current AI systems are not at all human-competitive, and most human vs. GenAI comparisons are misleading. The reference supporting machine competitiveness is bogus, as I explained in the previous point.
  3. The letter implies claims of Artificial General Intelligence (AGI), as in “Contemporary AI systems are now becoming human-competitive at general tasks,” but I’m in the camp of those who place AGI in the very distant future and don’t even see GPT-4 as a substantial step toward it.
  4. The dangers to the job market are not well framed: “Should we automate away all the jobs, including the fulfilling ones?” Come on: AI is not coming for most jobs, and the way it is taking some of them (like graphic-design capabilities built by scraping thousands of images without any monetary compensation to their human authors) could be addressed not by a moratorium, but by taxing big tech and supporting graphic-designer communities.
  5. Sorry, but almost every question the letter asks is poorly worded: “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?” This is a “humans vs. machines” scenario, which is not only ridiculous but also fuels the wrong kind of hype about AI systems, as Arvind Narayanan (@random_walker) points out on Twitter. Terminator-like scenarios are not the real danger here.
  6. To wrap up the letter’s nonsensical questions, let’s look at this one: “Should we risk loss of control of our civilization?” This is wrong on so many levels that it’s hard to comment on. For starters, do we currently have control of our civilization? Please tell me who controls our civilization besides the rich and the heads of state. Then, who is “we”? The humans? If so, we are back to the humans vs. machines mindset, which is fundamentally wrong. The real danger is the use of AI tools by some humans to dominate other humans.
  7. The proposed “remedy” (a “pause” on the development of Large Language Models more capable than GPT-4) is both unrealistic and misplaced. It’s unrealistic because it’s addressed to AI labs, which are mostly controlled by big tech companies with specific financial interests, one of which is increasing their market share. What do you think they’ll do: what the Future of Life Institute proposes, or what their bosses want? You’re right. It’s also misplaced because the pause wouldn’t address the looting of human authors’ work already taking place, or the damage already being done by misinformation spread by human actors using tools that don’t need to be more powerful than GPT-4.
  8. Finally, some of the people signing the letter, and Elon Musk in particular, cannot be held up as examples of ethical AI behavior. Musk has misled Tesla customers by branding as “Full Self-Driving” capabilities that not only fail to meet Level 5 of the standard proposed by the Society of Automotive Engineers, but also fail to meet Level 4 and barely fit into Level 3. On top of that, Tesla has released potentially deadly machines to the public well before ensuring their safety, and Tesla cars in autonomous mode have actually killed people. What moral authority does Elon Musk have to ask for “safe, interpretable, transparent, robust, aligned, trustworthy, and loyal” AI systems when he hasn’t put those principles into practice in his own company?




