Techno Blender
Digitally Yours.

Amazon to warn customers on limitations of its AI

Amazon.com Inc is planning to roll out warning cards for software sold by its cloud-computing division, in light of ongoing concern that artificially intelligent systems can discriminate against different groups, the company said.

Akin to lengthy nutrition labels, Amazon’s so-called AI Service Cards will be public so its business customers can see the limitations of certain cloud services, such as facial recognition and audio transcription. The goal would be to prevent mistaken use of its technology, explain how its systems work and manage privacy, Amazon said.

The company is not the first to publish such warnings. International Business Machines Corp, a smaller player in the cloud, did so years ago, and Alphabet Inc’s Google, the No. 3 cloud provider, has published even more detail on the datasets it has used to train some of its AI.

Yet Amazon’s decision to release its first three service cards on Wednesday reflects the industry leader’s attempt to change its image after a public spat with civil liberties critics years ago left an impression that it cared less about AI ethics than its peers did. The move will coincide with the company’s annual cloud conference in Las Vegas.

Michael Kearns, a University of Pennsylvania professor and since 2020 a scholar at Amazon, said the decision to issue the cards followed privacy and fairness audits of the company’s software. The cards would address AI ethics concerns publicly at a time when tech regulation was on the horizon, said Kearns.

“The biggest thing about this launch is the commitment to do this on an ongoing basis and an expanded basis,” he said.

Amazon chose software touching on sensitive demographic issues as a start for its service cards, which Kearns expects to grow in detail over time.

SKIN TONES

One such service is called “Rekognition.” In 2019, Amazon contested a study saying the technology struggled to identify the gender of individuals with darker skin tones. But after the 2020 murder of George Floyd, an unarmed Black man, during an arrest, the company issued a moratorium on police use of its facial recognition software.

Now, Amazon says in a service card seen by Reuters that Rekognition does not support matching “images that are too blurry and grainy for the face to be recognized by a human, or that have large portions of the face occluded by hair, hands, and other objects.” It also warns against matching faces in cartoons and other “nonhuman entities.”

In another warning card seen by Reuters, on audio transcription, Amazon states, “Inconsistently modifying audio inputs could result in unfair outcomes for different demographic groups.” Kearns said accurately transcribing the wide range of regional accents and dialects in North America alone was a challenge Amazon had worked to address.

Jessica Newman, director of the AI Security Initiative at the University of California at Berkeley, said technology companies were increasingly publishing such disclosures as a signal of responsible AI practices, though they had a way to go.

“We shouldn’t be dependent upon the goodwill of companies to provide basic details of systems that can have enormous influence on people’s lives,” she said, calling for more industry standards.

Tech giants have wrestled with making such documents short enough that people will read them, yet detailed and current enough to reflect frequent software tweaks, said a person who worked on such labels at two major companies.
