
ChatGPT Calls for AI Regulation! Can AI Be Tamed Ever?



Thierry Breton, an EU official, has proposed specific rules regulating products like ChatGPT to mitigate risks

ChatGPT, the versatile AI chatbot, has become one of the fastest-growing consumer applications and, at the same time, has raised serious concerns about the lack of regulation, for ChatGPT in particular and AI in general. ChatGPT is a generative AI model, and most of its problems arise from shortcomings in the data sets used to train it. Biases, inconsistencies, and inaccuracies in that data account for harmful content. On the other hand, though data filtering is possible, too much of it over-constrains the model and dulls its usefulness in the first place. Issues with inconsistent data are not new to OpenAI, which grappled with them for years while developing DALL-E, its text-to-image generation tool. When the team found explicit imagery in conflict with general societal values, it added filters, only for the app to start avoiding images of women altogether. In the case of ChatGPT, the testing has largely been on point, but some early users found ways to breach the guardrails built into the design; as a result, it could be prompted to generate racially biased text. No wonder a chatbot as popular as ChatGPT draws calls for AI regulation.

Do Tech Companies Feel the Responsibility?

Most tech companies either take a reactive approach or avoid making such tools accessible to the general public altogether. Amazon pulling its AI recruitment tool, and the way Microsoft’s Tay, Meta’s BlenderBot, and Galactica were taken down, are good examples. Tech giants like Google, Microsoft, and Meta are openly seeking government regulation, even if that openness is rooted in the fear of an outright government ban on their products, a kind of safety valve against knee-jerk reactions by governments. They project review processes, at both the development and post-development stages, as a priority. Anna Makanju, OpenAI’s head of public policy, says the company steers clear of controversial areas by preventing the application from speaking about those topics. However, she warns that generative language models will eventually reach a phase where users customize them according to their personal worldviews, a clear red flag for tech companies. That demands certain checks, which external bodies like governments and regulatory agencies will be well placed to impose.

ChatGPT Under Fire

The recently concluded US-EU agreement on regulating artificial intelligence aims to address larger issues than privacy concerns alone and has significant consequences for startups as well as governments. In a first move pointing at the loopholes ChatGPT carries, EU official Thierry Breton proposed specific rules to mitigate risks from products like ChatGPT. “As showcased by ChatGPT, AI solutions can offer great opportunities for businesses and citizens, but can also pose risks. This is why we need a solid regulatory framework to ensure trustworthy AI based on high-quality data,” he told Reuters in an exclusive interview. He also said it is important for people to understand that they are communicating with a bot, not a human being, and that it can never think like one. The EU’s draft rules categorize ChatGPT as a general-purpose AI system, one that can be used for multiple purposes, including high-risk tasks such as recruitment. Breton said the European Commission, the EU Council, and the European Parliament are working closely to further clarify how the AI Act’s rules apply to general-purpose AI systems. He asked OpenAI to cooperate with developers of high-risk AI systems so that compliance with the proposed AI Act becomes easier.

The post ChatGPT Calls for AI Regulation! Can AI Be Tamed Ever? appeared first on Analytics Insight.

