
We Need a Consumer-First AI Approach, Consumer Reports CEO Says

A.I. marketing is at a fever pitch, promising that this new wave of generative A.I. tools, powered by large language models, can help us do everything from navigating legal contracts to saving hundreds on our phone bills. When a leader like Alphabet CEO Sundar Pichai calls A.I. “more profound than fire or electricity,” it’s hard not to be excited by the potential. But as the CEO of Consumer Reports, I know some of the shiniest objects on the market don’t always live up to their hype. The ravenous appetite for quarterly profits is too often the driver behind today’s society-changing generative A.I., and consumers will have to fight for a fair shake. Only when the A.I. revolution centers transparency, accuracy, and fairness can we ensure that it lives up to its true potential for us, everyday consumers, not just corporate shareholders.

A consumer-first approach is never guaranteed. When Consumer Reports was founded in 1936, there was little information publicly available to help Americans assess the safety and performance of products. It was an era of unfettered advertising claims, rapid technological progress, and patchwork regulations. Sound familiar? Today’s transformative products don’t have a physical nature the way they did in 1936 — or even 1996 — but the rigor with which they must be tested, and with which companies must be held accountable, remains the same.

We only need to look to the recent past to understand what problems we can expect from the A.I. revolution we’re experiencing. The onslaught of social media and the digital transformation of online marketplaces made many of the same promises that the A.I. revolution makes today — instant connection, increased speed and accuracy of information, the democratization of power. Yet these tools also spawned new variants of manipulation and discrimination, which we are still fighting to fully address, with mixed results.

The fundamental problems are not new, but they are supercharged by A.I. For years, scammers have used the internet to take advantage of consumers. Now they’re using A.I. to mimic loved ones’ voices, tricking grandparents into “helping” by sending money or providing sensitive information. Companies already use search engines to blur the line between answers to our questions and advertisements to sell us products. With generative A.I. search tools, consumers could be pitted against a supercomputer, powered by their own personal data, that prioritizes a company’s profits, not honest answers or their best interests.

Arguably the most insidious problem to uproot is the biased data potentially powering this innovative technology. Even before the recent explosion of generative A.I., experts had already exposed how the algorithmic tech powering our world today can discriminate. For example, a joint investigation by Consumer Reports and ProPublica found that some auto insurance companies may have used an algorithm that charged premiums that were, on average, 30% higher in ZIP codes with mostly minority residents than in whiter neighborhoods with similar accident costs. While generative A.I. is a newer field, there are already examples of it perpetuating biases, like giving advice on how to spread antisemitism online. What will happen if even more of the marketplace – of our everyday lives – is handled by opaque technologies promoting systemic unfairness throughout our society?

We must ask a crucial question: are we creating a world where this technology serves us, or one where we serve this technology? As we embrace the benefits of A.I., we must ensure that innovation in this space is driven by consumer-first values.

Generative A.I. must be transparent – that is the key to accountability. For advocates and regulators to review for potential harms, they need insight into the data used to inform any A.I., with models open to third-party researchers for testing. Many model providers are partnering with research institutions to evaluate and de-risk – but we cannot rely on self-regulation and voluntary disclosure alone when profits drive company interests. For individual consumers, transparency means it should be crystal clear whether someone is paying for the information we’re getting and whether we’re engaging with a real person or an artificial one.

Generative A.I. models also need to ensure accuracy by design. Consumers should be able to assume that the information they’re getting is true and accurate, not empty words or advertising in disguise. This requires due diligence and oversight from companies, as well as a process for people to correct or contest the information A.I. provides.

And fairness should be at the heart of A.I., which must be developed and deployed with equity in mind. That means reviewing for biases in the input data, during design, and throughout the life of the product, ensuring the benefits of this technology are enjoyed by all communities.

The A.I. pitch promises so much – and I believe in the potential. But with the unleashing of this new class of technology comes a new class of responsibility, transparency, and accountability. Consumer protections are worth fighting for, and together, we can ensure this new era of A.I. is guided by a new era of consumer rights.

Marta Tellado has been the CEO of Consumer Reports since 2014. 

Want to know more about AI, chatbots, and the future of machine learning? Check out our full coverage of artificial intelligence, or browse our guides to The Best Free AI Art Generators and Everything We Know About OpenAI’s ChatGPT.

