
An Alliance Calling For More Open AI Should Heed Their Own Call




Recently, Facebook’s parent company, Meta, along with IBM and over 50 other founding members, announced an AI Alliance to “advance open, safe, responsible AI.” The group would be committed to “open science and open technologies,” promoting standards and benchmarks to reduce the risk of harm that advanced models might cause.

These are critically important goals, as many tech companies, driven by the breakneck AI arms race, have come out with products that could upend the lives and livelihoods of many and that may pose an existential threat to humanity as a whole. Given the near-absolute corporate dominance of the U.S. tech sector, federal support for alternative AI pipelines and nonproprietary forms of knowledge is key to diversifying that sector, using that diversity as democratic guardrails for a dangerous technology.

The lineup of the alliance is impressive: NASA and the National Science Foundation; CERN and the Cleveland Clinic; and a deliberately eclectic group of universities, including Yale, the University of California, Berkeley, the University of Texas at Austin and the University of Illinois, but also the University of Tokyo, the Indian Institute of Technology, the Hebrew University of Jerusalem and the Abdus Salam International Center for Theoretical Physics. Given the range of institutions represented and their diversity in goals and methods, the alliance could begin by laying a shared foundation of AI literacy, initiating a public conversation about the different kinds of models that could be developed, the different uses to which they could be put, and the degree of openness needed to ensure that developers and the people affected by their uses would have input into their designs and operations.


In general, the word “open” in computing means that source code or the base technology is freely available to examine, and indeed to use and expand upon. But for makers of the recent AI models, operating under intense market competition, that word has become an oxymoron. This is especially glaring in the case of OpenAI, the maker of ChatGPT. The company has been anything but open about just what is entailed in creating its products. Large language models (LLMs) like ChatGPT need to be trained on petabytes of data scraped from the Internet. Since the Internet is awash with racism, extremism and misogyny, a key part of AI training involves labeling the toxic material clearly to prevent it from fostering similarly toxic outputs. Time magazine has revealed that this labor-intensive and often traumatizing work was done by Kenyan workers whom OpenAI paid less than $2 an hour. This hidden layer of inadmissible labor practices is integral to the product’s aura of glamor and tech wizardry.

Another hidden layer of inadmissible practices has come to light in a number of lawsuits alleging copyright infringement and the undisclosed, uncompensated use of human creations in LLM training: most recently by the New York Times against OpenAI and Microsoft, and earlier by authors including John Grisham and George R.R. Martin.

OpenAI’s success has been a function of its withholding of information, not its transparency. A return to open source now would tell us things that we need to know about the current models, including their carbon footprint and environmental impact. Openness would also put labor relations and the ethics of fair compensation squarely on the table.

In heading the alliance, Meta and IBM claim to be on the moral high ground, but they aren’t exactly aboveboard on this front. As the recent lawsuit by Kenyan content moderators makes clear, Meta too has been subjecting its foreign workers to unacceptable pay and working conditions. Its “Year of Efficiency” resulted in 10,600 job cuts in the first five months of 2023, following over 11,000 layoffs in November 2022, while IBM froze hiring for thousands of “back office” workers with the understanding that these jobs would be taken over by AI. Still, the two might have a point in distancing themselves from their rivals—OpenAI, Microsoft and Google—and those companies’ fanatical proprietary secrecy over their products.

With the push for more openness, however, new risks have arisen. Yann LeCun, Meta’s chief AI scientist, maintains that there’s nothing for OpenAI to be secretive about: the technology underlying ChatGPT is common property, already developed by “half a dozen startups,” and owned by no one in particular. Meta has released a series of open-source models, Llama, on that premise, making the code available to a community of developers to use as they see fit. OpenAI and Google have fought back, in turn accusing Meta of putting powerful software in dangerous hands. Within days of its release, Llama was leaked onto 4chan, a website where extremists congregate.

There’s no question that open source could be risky. The risk might be more than matched, however, by a new danger that has become unmistakable in recent months, unfolding in the boardroom of OpenAI. The dramatic ouster and prompt reinstatement of CEO Sam Altman, along with the dismissal of the watchdog board, suggest that the nonprofit OpenAI might now be playing second fiddle to its for-profit arm, putting unannounced market acceleration ahead of any existential threats a superintelligent AI might pose. Llama, released under a non-commercial license and open to inspection by a million eyeballs, might turn out to be the less unsafe of the two.

Other recent developments contextualize the AI Alliance as well. Within one day of its launch, Meta announced a new text-to-image generator, Imagine, trained up front on 1.1 billion Facebook and Instagram photos. “Openness” here could mean undisguised exploitation of users’ privacy and property. Meanwhile, a provisional clause in the landmark EU AI Act, which exempts certain open-source models from the strict regulations governing commercial products, puts yet another spin on the alliance’s lofty principles. Still, the distinction that the EU makes between “open-source” and “commercial” could be a goal to work towards, a road map to a more diverse, more democratic tech ecosystem. Only when there is a countervailing force to corporate dominance can there be guardrails representing humanity as a whole. And only when there is broad AI literacy can the word “open” be functional rather than ornamental.

This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.



