
4 Principles of Responsible AI & Best Practices to Adopt Them



The use of artificial intelligence is transforming industries, helping businesses solve real-world problems, and improving the daily lives of people around the world. It is expected that its usage will become more widespread in the near future: 90% of commercial apps will use AI by 2025, and the AI industry could be worth more than $15 trillion by 2030.

However, the possibilities of AI bring with them great responsibilities, and many organizations are struggling to address the risks associated with AI adoption. According to Accenture, about 65% of risk leaders believe they are not fully capable of assessing the risks of AI.

Developing and scaling AI applications with responsibility, trustworthiness, and ethical practices in mind is essential to build AI that works for everyone. In this article, we’ll explore four principles of responsible AI design and recommend best practices to achieve them.

Fairness

AI models are increasingly being used in various decision-making processes such as hiring, lending, and medical diagnosis. Biases introduced in these decision-making systems can have far-reaching effects on the public and contribute to discrimination against different groups of people.

Here are two examples of AI bias in real-world applications:

  • In 2019, there were multiple claims (including from Apple co-founder Steve Wozniak) that Apple’s credit card algorithm discriminated against women, offering different credit limits based on gender.
  • According to a recent report by Harvard Business School and Accenture, 27 million workers in the US are filtered out by automated and AI-based hiring systems and unable to find a job. These “hidden workers” include immigrants, refugees, and people with physical disabilities.

These biased decisions can result from project design or from datasets that reflect real-world biases. It is critical to eliminate these biases to create AI systems that are inclusive to all.

Best practices to achieve fairness

  • Examine whether the dataset is a fair representation of the population.
  • Analyze subpopulations of the dataset to determine whether the model performs equally well across different groups.
  • Design models with fairness in mind and consult social scientists and other subject matter experts.
  • Monitor the model continuously after deployment; models drift over time, so biases can enter the system long after launch.
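The subgroup analysis above can be sketched in a few lines. This is a minimal, illustrative audit on toy data; the group labels, predictions, and the choice of demographic-parity difference as the metric are assumptions for the example, not something the article prescribes:

```python
from collections import defaultdict

def group_metrics(groups, y_true, y_pred):
    """Per-group accuracy and selection rate (share of positive predictions)."""
    stats = defaultdict(lambda: {"n": 0, "correct": 0, "positive": 0})
    for g, t, p in zip(groups, y_true, y_pred):
        s = stats[g]
        s["n"] += 1
        s["correct"] += int(t == p)
        s["positive"] += int(p == 1)
    return {g: {"accuracy": s["correct"] / s["n"],
                "selection_rate": s["positive"] / s["n"]}
            for g, s in stats.items()}

# Toy data: two demographic groups with identical true outcomes
# but different predictions.
groups = ["a", "a", "a", "b", "b", "b"]
y_true = [1, 0, 1, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 0]

metrics = group_metrics(groups, y_true, y_pred)
# Demographic parity difference: gap between the highest and lowest
# selection rates across groups.
rates = [m["selection_rate"] for m in metrics.values()]
parity_gap = max(rates) - min(rates)
```

A large gap in selection rates or accuracy between groups is a signal to revisit the data and the model design, not proof of unfairness on its own.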

We have a comprehensive article on AI bias and how to fix it; feel free to check it out.

Privacy

AI systems often use large datasets, and these datasets can contain sensitive information about individuals. This makes AI systems susceptible to data breaches and attacks from malicious parties that want to obtain sensitive information:

  • According to the Identity Theft Resource Center, there were 1,862 data breaches in 2021, 23% more than the previous all-time high set in 2017.
  • The average cost of a data breach was $4.24 million in 2021.
  • In some cases, adversaries can obtain information about training data through model outcomes.

Data breaches cause financial loss as well as reputational damage to businesses and can put individuals whose sensitive information is revealed at risk.

Best practices to ensure privacy 

  • Assess and classify data according to its sensitivity, and monitor sensitive data.
  • Develop a data access and usage policy within the organization. Implement the principle of least privilege, which grants users the minimum level of access needed to perform their jobs.
  • Leverage privacy-enhancing technologies (PETs) to protect both your data and your model. PETs include differential privacy, federated learning, homomorphic encryption, and secure multi-party computation.
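One PET worth a concrete sketch is differential privacy: noise calibrated to a query’s sensitivity is added before a result is released, so no single individual’s record can be inferred from the output. The Laplace mechanism below is a textbook illustration; the dataset, epsilon value, and helper names are hypothetical:

```python
import math
import random

def private_count(records, predicate, epsilon, rng):
    """Release a count with Laplace noise; the sensitivity of a count is 1."""
    true_count = sum(1 for r in records if predicate(r))
    # Inverse-CDF sampling of a Laplace(0, 1/epsilon) variate.
    u = rng.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

rng = random.Random(42)  # fixed seed so the sketch is reproducible
records = [{"age": a} for a in (23, 35, 41, 52, 29, 64)]
# Noisy answer to "how many records have age >= 40?" (true count: 3)
noisy = private_count(records, lambda r: r["age"] >= 40, epsilon=1.0, rng=rng)
```

Lower epsilon means stronger privacy but noisier answers; choosing it is a policy decision as much as an engineering one.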

Security

The security of an AI system is critical to prevent attackers from interfering with the system and changing its intended behavior. The increasing use of AI in particularly critical areas of society can introduce vulnerabilities that can have a significant impact on public safety. Consider the following examples:

  • Researchers have shown that they can get a self-driving car to drive in the opposite lane by placing small stickers on the road.
  • By introducing adversarial input that is imperceptible to humans into the data, researchers were able to make a highly accurate medical ML algorithm classify a benign mole as malignant (see Figure 1).

Figure 1. Misleading a medical AI system with an adversarial attack.

These adversarial attacks can involve, among other techniques:

  • Data poisoning: injecting misleading data into training datasets.
  • Model poisoning: accessing and manipulating the models themselves.

Such attacks cause the AI model to act in unintended ways. As AI technology evolves, attackers will find new attack methods, and new ways to defend AI systems will need to be developed.
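An evasion attack like the mole example can be illustrated with a toy fast-gradient-sign perturbation. This is a deliberately simplified stand-in (the real attack targeted a deep image model; here a two-feature logistic classifier with made-up weights plays its part):

```python
import math

# A fixed "trained" linear model: score = w . x + b
w = [2.0, -1.5]
b = 0.1

def predict(x):
    """Probability of the positive (e.g. 'malignant') class."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-score))

def fgsm(x, eps):
    """Nudge each feature by eps in the sign of the score gradient (which is w)."""
    return [xi + eps * math.copysign(1.0, wi) for xi, wi in zip(x, w)]

x = [0.2, 0.9]            # input the model classifies as negative
p_before = predict(x)     # below 0.5
x_adv = fgsm(x, eps=0.6)  # small, structured perturbation
p_after = predict(x_adv)  # above 0.5: the predicted label flips
```

In high-dimensional inputs such as images, the per-pixel perturbation can be far smaller and still flip the label, which is why these attacks can be imperceptible to humans.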

Best practices to achieve security

  • Assess whether an adversary would have an incentive to attack the system and the potential consequences of such an attack.
  • Create a red team within your organization that will act as an adversary to test the system.
  • Follow new developments in AI attacks and defenses; AI security is an active research area, so keeping up with it is essential.

Transparency

Transparency (also called interpretability or explainability) is a must in industries such as healthcare and insurance, where businesses have to comply with industry standards or government regulations. But for any business, being able to interpret why an AI model produces a specific result is key to understanding and trusting the system.

A transparent AI system can help businesses:

  • Explain and defend business-critical decisions,
  • Run “what-if” scenarios,
  • Ensure that the models work as intended,
  • Ensure accountability in case of unintended results.

Best practices to ensure transparency

  • Use the smallest set of inputs necessary for the desired model performance. Fewer variables make it easier to pinpoint which correlations or causal relationships drive the model’s outputs.
  • Give explainable AI methods priority over models that are hard to interpret.
  • Discuss the required level of interpretability with domain experts.
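One model-agnostic way to act on these practices is permutation importance: shuffle one feature’s values and measure how much performance drops. The sketch below is an assumed illustration, not a method the article specifies; the model and features are toy examples:

```python
import random

def accuracy(model, X, y):
    return sum(int(model(row) == t) for row, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature, rng):
    """Drop in accuracy after shuffling one feature's values across rows."""
    base = accuracy(model, X, y)
    shuffled = [row[feature] for row in X]
    rng.shuffle(shuffled)
    X_perm = [{**row, feature: v} for row, v in zip(X, shuffled)]
    return base - accuracy(model, X_perm, y)

# Toy model that only looks at "income"; shuffling the ignored "zip"
# feature can never change its predictions, so its importance is zero.
model = lambda row: int(row["income"] > 50)
rng = random.Random(0)
X = [{"income": i, "zip": z} for i, z in [(20, 1), (80, 2), (30, 3), (90, 4)]]
y = [0, 1, 0, 1]

imp_income = permutation_importance(model, X, y, "income", rng)
imp_zip = permutation_importance(model, X, y, "zip", rng)
```

A feature with near-zero importance is a candidate for removal, which also serves the first practice above: fewer inputs, easier explanations.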

Sponsored

If you don’t know where to start, you can benefit from Positronic’s AI consultancy services to build responsible AI systems. They’re experienced in developing AI systems in industries such as banking, finance, healthcare, and retail. 

Also, feel free to check our data-driven lists of AI consultants and data science consultants.

If you have other questions about responsible AI and how to adopt it in your business, we can help:

Let us find the right vendor for your business


