
Writing the new rules for AI



The launch of ChatGPT in November 2022 heralded a new era in democratising the use of artificial intelligence (AI). Since then, the use of AI has quickly expanded across many sectors, including healthcare, education, financial services and public safety.

However, the rapidly advancing capabilities of AI have also brought to the fore the critical importance of the safe and ethical use of these technologies.


At the Global Partnership on Artificial Intelligence summit in New Delhi in 2023, Prime Minister Narendra Modi stressed the importance of creating a global framework for the ethical use of AI, including a protocol for testing and deploying high-risk and frontier AI tools. Earlier, at the first global AI Safety Summit 2023 at Bletchley Park, 28 countries called for international cooperation to manage the challenges and risks of AI.

How can a global framework for the safe and ethical use of AI be developed? Several countries have initiated efforts to regulate and govern AI. The US government issued an executive order in October 2023, focusing on the safe, secure and trustworthy development and use of AI.

It seeks to address several critical areas, including national security, consumer protection and privacy, and requires AI developers to share safety test results with the US government. The EU's AI Act adopts a risk-based regulatory approach, with stricter oversight for AI systems that pose higher levels of risk.

At a fundamental level, a global framework for the governance of AI must address the key concerns regarding the development, deployment and use of AI. These include dealing with machine-learning biases and potential discrimination, misinformation, deepfakes, concerns about privacy and access to personal data, copyright protection and potential job losses, as well as ensuring the safety, transparency and explainability of AI algorithms.

The goal of AI governance should be to promote innovation and ensure safe, fair and ethical applications of the technology in promising sectors. To address the concerns noted above, the framework for the governance of AI must be based on certain core principles, enumerated below.

Innovation: The governance framework must promote innovation and competition in AI technologies to continuously improve them. This would require, for example, facilitating startups' access to large anonymised datasets for developing and training AI applications in various domains.

The National Data Governance Policy of GoI is an excellent initiative in this direction.

Infrastructure: The framework must also support expanding access to compute infrastructure and AI models to promote competition and encourage innovation. This would be particularly helpful to startups in this domain.

Capacity building and engagement: A sustained focus on capacity building holds the key to involving more stakeholders in the development and deployment of AI across multiple sectors, which can significantly help in managing and reducing risks. Engaging with stakeholders would also help address any potential job losses and worker displacement due to the deployment of AI.

Safety and risk management: This would involve developing standards and ensuring that AI models are tested and assessed for safety and risk. Appropriate risk-management strategies must be put in place to address any likely harms.

This would include ensuring transparency, fairness and explainability across the AI development lifecycle through the selection of proper training datasets, the removal of biases and attention to cybersecurity issues.

Privacy protection: AI models must incorporate privacy-preserving technologies to protect personal data. This would help build trust in these models and enhance their beneficial impact.

International cooperation: For any global framework to succeed, international collaboration and partnerships built on a shared vision and common goals are essential.

To be effective, a global framework on AI must build on evidence from this rapidly evolving technology and promote collaboration across all countries.

As a global leader in technology, India can play a proactive role in developing a global framework for the governance of AI.

We also need to focus on developing AI applications trained on Indian datasets in various domains, such as agriculture, education, healthcare, transportation and public safety.

The author is a senior IAS officer and is currently the DG, ESIC. Views expressed are personal.

