
AI holds massive potential for malicious use, but who will be held accountable?



While generative AI is advancing rapidly, it also raises concerns about its potential for malicious use. Generative artificial intelligence (AI) models may be susceptible to bias, as they learn patterns and generate outputs or predictions from the data they are trained on. If the training data is biased or incomplete, the model’s output can be incorrect or biased as well. Moreover, because AI language models can generate human-like text and can be trained to impersonate a person’s writing style, there are serious concerns about their potential misuse for spreading fake news.

Another interesting question is whether Generative AI platforms, as intermediaries, can claim a safe harbour for the content published on their platforms. It is important to note that, unlike search engines, which only provide links to webpages and content available on the internet, Generative AI processes available data and generates an independent output. Hence, it may be difficult for all Generative AI platforms to be categorized as intermediaries under the law. Further, since varied parties are involved in the ChatGPT / Generative AI (GAI) ecosystem (third-party data owners, GAI companies, platform providers, and users), there could be multiple IP claimants, and the ownership of rights in the output generated by such systems is therefore highly contentious.

Moreover, there is limited guidance on, or obligation regarding, the accountability of a GAI system and the way its output is arrived at, which could lead to issues of bias, accountability, and explainability. Additionally, the protection of user data and user rights is complex: it may not be possible to seek user consent when data is scraped from the internet. In such scenarios, implementing user rights to correction, erasure, and portability, among others, becomes challenging.

Lastly, with the advent and wider acceptance of GAI in our daily lives, human-generated content could become a scarce commodity, and hence more valuable.

By Huzefa Tavawalla, Head, Disruptive Technologies Practice Group, Nishith Desai Associates

NOTE: The views expressed are those of the author and do not necessarily reflect the opinions of HT Tech.

