Why Is It Crucial for Businesses to Adopt a Framework to Address AI-Related Concerns? | by Pieter Steyn | Jun, 2022


Will responsible AI frameworks be enough, or do we need to regulate the industry through policy?

Credit: Besjunior on Envato Elements

Consider your surroundings: it’s almost guaranteed that some form of Artificial Intelligence (AI) is already present. You may have had daily interactions with AI without realizing it. AI is currently in a highly developed state, revolutionizing our lives and business practices in ways we could never have imagined.

With the global market for artificial intelligence expected to reach $648.3 billion by 2028, it is safe to say that AI is rapidly disrupting our lives. The expansion of AI also suggests that the technology is gaining widespread acceptance, with nearly every industry adopting it.

For some, AI adoption is all about productivity and inspires excitement. For many others, however, the acronym inspires fear. With artificial intelligence commonly defined as any machine that can perform tasks a human brain could perform (sometimes even better), concerns continue to grow. AI's mainstream implementation raises several legitimate concerns, including workforce displacement, security issues, and loss of privacy.

While it seems inevitable that organizations will increase their use of AI, organizational leaders will need to be mindful of their approach to ensure compliance. To design and develop AI that empowers business and the workplace while treating customers and society fairly, a Responsible AI framework is required.

What is AI?

Before understanding what Responsible AI is, let’s quickly review “Artificial Intelligence.” AI is a broad term that refers to any computer software that mimics human behavior, including learning, critical thinking, and planning.

But AI is a broad subject; a single definition cannot capture the whole field. Machine learning — a subset of AI — is currently the most prevalent form of implementation in business processes. Machine learning is the capacity to process vast quantities of data autonomously; it consists of algorithms that keep improving as they are exposed to more data.

Credit: Besjunior on Envato Elements

Today, Machine Learning is one of the most prevalent AI applications. From manufacturing to retail and banking to bakeries, businesses are expanding the scope of machine learning’s advantages. According to a survey conducted by Deloitte in 2020, 67 percent of businesses are currently utilizing machine learning, and 97 percent plan to do so in the coming years.

You’ve probably interacted with Machine Learning as well: the predictive text on your keyboard, Netflix recommendations, Amazon shopping suggestions, and the alignment of social media posts in your feed are all examples of machine learning.

On the business end, machine learning can rapidly analyze data, identifying patterns and anomalies. If there is a discrepancy in production output, for example, the algorithm can notify the person responsible for maintaining the system.
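As a minimal sketch of the idea above, the following Python snippet flags a production reading that strays too far from its recent history using a simple z-score rule. The function name, the data, and the threshold are all illustrative assumptions, not something from a real production system:

```python
import statistics

def check_output(history, latest, threshold=3.0):
    """Flag a reading that deviates more than `threshold`
    standard deviations from the historical mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return latest != mean
    z = abs(latest - mean) / stdev
    return z > threshold

# Hypothetical daily output, hovering around 100 units.
history = [98, 101, 99, 100, 102, 97, 100, 103, 99, 101]
print(check_output(history, 100))  # within normal range -> False
print(check_output(history, 60))   # a large drop -> True
```

In practice the alert would feed a notification system rather than a print statement, but the core logic — compare new data against a learned baseline — is the same.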

The case of Responsible AI

Machine learning's capabilities are vast. Where humans can sustain productivity for perhaps five hours per day, machine learning can maintain the same level of productivity around the clock. Unlike previous technologies, AI can automatically decide what to recommend to whom, and even prioritize customers based on data.

With this level of capability, AI can rapidly replace variable costs dependent on people with fixed-cost software.

As C-suite executives, we are obligated to minimize losses and act in the best interests of our shareholders. But does that obligation mean we will replace humans with AI-driven algorithms?

As the impact of AI on our lives continues to grow, corporate leaders bear a greater responsibility for managing the ethical and technical repercussions AI may have. Without a clear strategy, that growing impact can lead to real problems, so businesses must outline a straightforward approach to AI. This is where Responsible AI comes into play.

Responsible AI is a process that highlights the need to design, develop, and deploy cognitive systems according to ethical, effective, and trustworthy standards and protocols. It must be integrated into every step of the AI development and deployment process.

As AI supercharges business and society, the onus is now on CEOs to ensure that AI is implemented responsibly and ethically within their respective organizations. Hundreds of press articles on AI bias, privacy violations, data breaches, and discrimination circulate on the internet, placing business leaders in a difficult position when it comes to the deployment of AI.

Responsible AI is supported by three primary pillars:

Accountability

  • This is the need to explain and justify decisions and actions to the partners and other stakeholders with whom the system interacts. Accountability in AI is only fulfilled when the conclusions of the decision-making algorithms can be traced and explained.

Responsibility

  • This refers to the role of people, and the capability of AI systems, to answer for decisions and to identify errors or unexpected results. As the chain of responsibility grows, means are needed to link an AI system's decisions to the fair use of data and to the actions of the stakeholders involved in those decisions.

Transparency

  • This refers to the requirement to describe, inspect, and reproduce the mechanisms through which AI systems make decisions, learn to adapt to their environment, and govern the data they use. Current AI algorithms are often referred to as black boxes; we need methods to inspect these algorithms and the results they produce.
Credit: MegiasD on Envato Elements

To ensure that the data used to train algorithms and guide decision-making is collected and managed fairly, transparent data governance is also required. This reduces bias and helps ensure privacy and security.
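One widely used, model-agnostic way to peek inside a black box is permutation importance: shuffle one input column and measure how much the model's predictions change. The sketch below is illustrative only; the `predict` function is a toy stand-in for a real model, and all names and numbers are assumptions:

```python
import random

# A stand-in "black box": in practice we can only call predict(),
# not read its internal logic.
def predict(income, debt):
    return 1 if income - 2 * debt > 50 else 0

def permutation_importance(rows, feature_idx, trials=100):
    """Estimate how often predictions flip when one input
    column is shuffled -- a simple transparency probe."""
    rng = random.Random(0)  # seeded for reproducibility
    baseline = [predict(*r) for r in rows]
    flips = 0
    for _ in range(trials):
        col = [r[feature_idx] for r in rows]
        rng.shuffle(col)
        shuffled = [
            (col[i], r[1]) if feature_idx == 0 else (r[0], col[i])
            for i, r in enumerate(rows)
        ]
        flips += sum(
            p != b for p, b in zip((predict(*r) for r in shuffled), baseline)
        )
    return flips / (trials * len(rows))

rows = [(120, 10), (40, 5), (90, 30), (200, 20), (55, 40), (70, 5)]
print("income importance:", permutation_importance(rows, 0))
print("debt importance:", permutation_importance(rows, 1))
```

A feature whose shuffling flips many predictions is one the model leans on heavily — useful evidence when auditors ask how a system reaches its conclusions.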

Advantages of Responsible AI

With AI having a direct bearing on people's lives, the ethics of implementation should be the top priority.

Here are five key advantages of Responsible AI (based on Accenture AI's research).

Minimizes unintentional bias

  • When you build responsibility into your AI, you ensure that your algorithms, and the data supporting them, are unbiased and represent the entire audience without singling anyone out.
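A first bias check many teams run is demographic parity: compare approval rates across groups and flag a large gap. The sketch below is purely illustrative; the group labels and decisions are made up:

```python
from collections import defaultdict

def approval_rates(decisions):
    """Compute per-group approval rates from (group, outcome)
    pairs; a large gap is a simple red flag for unintended bias."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        approved[group] += outcome
    return {g: approved[g] / totals[g] for g in totals}

decisions = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
             ("B", 0), ("B", 1), ("B", 0), ("B", 0)]
rates = approval_rates(decisions)
print(rates)  # {'A': 0.75, 'B': 0.25}
gap = max(rates.values()) - min(rates.values())
print("parity gap:", gap)  # 0.5 -> worth investigating
```

Parity gaps alone do not prove unfairness, but a check like this makes disparities visible early, before a system reaches production.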

Ensures AI transparency

  • One of the pillars to build trust is to bring clarity to AI practices. The existence of explainable AI will help employees and customers understand and perceive the system better.

Opens new opportunities for employees

  • Empower individuals in your organization to raise concerns about AI systems; that feedback will improve development without hindering innovation.

Protects privacy and ensures data security

  • At a time when data security and privacy are pressing priorities, responsible AI practices help ensure that sensitive data is not used unethically.

Added benefits to clients and markets

  • By creating ethical AI practices, you reduce your risk factor and establish systems that benefit each stakeholder that interacts with the business.

Responsible AI isn’t about ticking the boxes!

Credit: kenishirotie on Envato Elements

Responsible AI is much more than simply complying with regulations by checking off boxes. In addition, it is not a single-user journey, but rather one that requires the participation of all stakeholders.

Researchers and developers must be educated about their responsibilities when creating AI systems with direct societal impact. Regulators must understand how liability is governed. A good example is determining who is at fault when a self-driving car accidentally hits a pedestrian.

Is it the manufacturer of the hardware (the sensor and camera makers)?
The software developer? Or the regulator who approved the vehicle for the road?

All of these questions, and more, must inform the regulations that societies enact for the responsible use of AI systems — and answering them requires everyone's participation.

Companies are now expected to self-regulate their AI, which entails developing and implementing their own guidelines and Responsible AI practices.

Companies such as Google, IBM, and Microsoft have documented process guidelines. The primary issue with this, however, is that Responsible AI principles can be inconsistent; what one organization applies may be entirely different for another. Smaller businesses would lack even the means to create their own policies.

Introducing a universal guideline for Responsible AI is one workaround. The European Commission's Ethics Guidelines for Trustworthy AI could serve as a suitable starting point; the guide outlines seven essential requirements an AI application must meet to be considered trustworthy.

However, these guidelines only apply in Europe. Although tech giants such as Google, Facebook, and Microsoft are pushing for additional regulation, little progress has been made so far. Only time will tell.

Sample Responsible AI Frameworks to investigate

Google | Microsoft | IBM | European Commission

Responsible AI is crucial not only for businesses but also for nations and the global community. Elon Musk has weighed in on AI and its regulation:

“I am not normally an advocate of regulation and oversight…I think one should generally err on the side of minimizing those things…but this is a case where you have a very serious danger to the public.”-Elon Musk

Courses on Responsible AI

There are many online courses on artificial intelligence, but fewer cover its responsible application, including topics such as ethics and bias in applied AI.

I highly recommend the short course “Data Ethics, AI and Responsible Innovation” presented through edX by the University of Edinburgh, in Scotland. This intermediate course is aimed primarily at professionals working in a related field.

Short online courses (ed.ac.uk)

Resources

If you are interested in learning more about what companies and organizations are doing with respect to ethics and responsibility in artificial intelligence, I have compiled a few resources for you.

Responsible Use of Technology: The IBM Case Study | World Economic Forum (weforum.org)

High-level expert group on artificial intelligence | Shaping Europe’s digital future (europa.eu)

Tech Ethics Lab | University of Notre Dame (nd.edu)

CODAIT — Open Source (ibm.com)

The call | Rome Call

AI Ethics | IBM

