
Implementing AI is like buying and driving a car (but different) | by Koen Peters | Jan, 2023



Common myths and pitfalls on the road to implementing AI — by Babette Huisman

By now, most companies have jumped on the AI bandwagon. Yet only a limited number of initiatives end in a successful implementation. According to Gartner’s latest AI survey, 54% of AI projects make it to production [1]. In my personal experience as a data science consultant, far fewer projects made it that far. Given that the Gartner report also mentions that 40% of the companies surveyed have thousands of models deployed, the survey doesn’t seem representative of companies that are just starting with AI. Those companies suffer from insufficient knowledge about AI throughout all layers of their organization, including the decision makers. This lack of knowledge results in poor decisions, solutions that don’t meet expectations and underestimated effort. To help decision makers ask the right questions, and ultimately make good and informed decisions, I presented an analogy that explains some intricacies, myths and pitfalls of implementing AI. By sharing the analogy here, I hope to help others as well. Let’s dive in!

Implementing AI is like buying and driving a car (but different)

Pitfall: great sales pitch

No matter how great a sales pitch is or how fancy a car looks, there are a couple of things you should check beforehand:

  • Does the car fit your driveway or garage? In a similar manner, you should check whether AI is a fitting solution for the problem you are trying to solve. Although it may seem obvious, all too often stakeholders push for a solution for the sake of using a specific technology. Yes, AI has been the buzzword of the last decade, and mountains of gold have been expected by those who master its implementation first. But if your solution doesn’t fit your problem, that expectation will never be met.

Tip: Before starting with AI, try simpler methods first, like adding more business logic or changing or automating part of the work process.

  • How well does the car handle different terrain? Can it drive on grassland, a dirt road or off-road? Similarly, you want to know how robust and fair your AI is. Does it perform equally well across different distributions of your data, say, for different age groups or for a category like gender? Even though you may expect the AI to perform well across the board, recent history has shown us that this unfortunately isn’t always the case [2]. This can have severe consequences for your business and for the people affected by the AI’s outcome. It’s important to determine what your model is going to be used for and, hence, what its requirements are.
  • The horsepower of a car tells you something about how fast it drives. However, without sufficient torque your car won’t be able to accelerate and get up to speed. For an AI, accuracy tells you something about its performance. However, when only 1 in 100 cases is a fraud or a sick person, an AI can ‘predict’ that no one is a fraud/sick and still be 99% accurate. Such a model would be practically useless. Hence, it’s important to look at other performance metrics as well to obtain a solid understanding of how your AI will perform. For example, ‘recall’ indicates the percentage of actual fraud/sick persons identified correctly [3]. In this case, recall would have been zero, indicating the AI model isn’t performing as desired.

Tip: Data scientists are trained to interpret these metrics and should be able to advise you about an AI’s performance.
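The accuracy-versus-recall trap from the 1-in-100 example above can be made concrete with a short sketch (plain Python; the labels are hypothetical, with 1 meaning fraud/sick and 0 meaning fine):

```python
# 100 cases, exactly one of which is a fraud/sick person (label 1).
y_true = [1] + [0] * 99

# A lazy model that 'predicts' no one is a fraud/sick.
y_pred = [0] * 100

# Accuracy: fraction of all cases labelled correctly.
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Recall: fraction of the actual frauds/sick persons the model caught.
true_positives = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
recall = true_positives / sum(y_true)

print(accuracy)  # 0.99 -- looks impressive
print(recall)    # 0.0  -- catches no one it should identify
```

The 99% accuracy hides that the one case that mattered was missed, which is exactly what recall exposes.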

One in 100 who should be identified — by Babette Huisman

To summarize, if you hear about a great solution, dig a little deeper and find out if it is really going to solve your problem.

Pitfall: cost vs benefit

No matter how great the AI solution is, if there’s no baseline measurement of the process you’re trying to improve you can’t tell if the benefits outweigh the cost.

Speedometer meets KPI gauge — by Babette Huisman
  • Before you buy a car you want to check both the odometer (distance traveled in kilometers/miles) and the navigation system. You probably want to track how many kilometers you drive over certain time periods and whether you’re approaching your goal or destination. If you drive very little, the benefit of owning a car might not outweigh the cost. All too often, projects fail because the added benefit versus the cost couldn’t be quantified, leaving decision makers guessing and sometimes pulling the plug on an otherwise great project. So, before implementing an AI, you need a good understanding of the process, how that process currently performs, and a goal to strive towards in terms of process improvement. You’ll gather data and define, calculate and track process KPIs, so that after implementing an AI solution you can monitor how much the process has improved or estimate when you’ll reach a certain goal. These metrics in turn let you calculate whether the investment was worth it and let decision makers make an informed decision about (dis)continuing the project.
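As a minimal sketch of this kind of baseline measurement (the KPI and all the numbers below are made up for illustration), comparing the process before and after the AI could look like:

```python
# Hypothetical process KPI: cases handled per employee per day.
baseline_kpi = 40.0  # measured before introducing the AI (assumed)
current_kpi = 52.0   # measured after introducing the AI (assumed)
target_kpi = 60.0    # goal agreed with decision makers (assumed)

# Relative improvement over the baseline measurement.
improvement = (current_kpi - baseline_kpi) / baseline_kpi

# How far along we are from baseline towards the agreed goal.
progress_to_goal = (current_kpi - baseline_kpi) / (target_kpi - baseline_kpi)

print(f"Improvement over baseline: {improvement:.0%}")  # 30%
print(f"Progress towards goal: {progress_to_goal:.0%}")  # 60%
```

Without the baseline measurement, neither number can be computed, and the investment question stays a matter of opinion.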

Being able to communicate results, costs and benefits is crucial to make decisions about the project.

Myth: AI is fire and forget

A common misconception among people less versed in AI is that once you’ve implemented the solution, you’re done: it works, and there’s no need to check on it every once in a while.

Alerting for when your AI doesn’t perform as expected — by Babette Huisman
  • Just like a car, an AI needs maintenance. However, unlike a car, an AI often doesn’t come prepackaged with a dashboard full of warning lights, and it doesn’t start to make funny noises either. If you do want to be notified (and you should), your data team needs to build its own dashboard and monitoring, sending out alerts when your AI starts showing signs of performance degradation. You could even implement something along the lines of a ‘periodic vehicle inspection’ your AI needs to pass (or adjustments that need to be made for it to pass inspection) to be allowed to keep running in production. In addition, AI is a piece of software and, like any other software product, you may want to update it with new features and security measures whenever they’re available.
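A home-built ‘warning light’ can be as simple as raising an alert when a rolling performance metric dips below a threshold. A minimal sketch (the window size, threshold and scores are hypothetical choices):

```python
from collections import deque


def make_monitor(window: int, threshold: float):
    """Return a function that records a per-batch accuracy score and
    reports whether the rolling average has degraded below the threshold."""
    history = deque(maxlen=window)

    def record(score: float) -> bool:
        history.append(score)
        rolling = sum(history) / len(history)
        return rolling < threshold  # True -> raise an alert

    return record


record = make_monitor(window=3, threshold=0.9)
print(record(0.95))  # False: rolling average still healthy
print(record(0.93))  # False
print(record(0.80))  # True: rolling average dropped below 0.9, warning light on
```

In a real setup the alert would go to a pager or chat channel rather than a print statement, but the principle is the same: the warning lights have to be built, they don’t come with the car.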

Keeping an AI running successfully in production requires ongoing effort from your data team.

Myth: AI is plug and play

Unfortunately, you can’t just get an AI, turn it on and expect it to work.

Gas pump vs. your own data pipeline — by Babette Huisman
  • With a car you can drive to a gas station, fill it up with the correct type of gasoline and you’re ready to go (or, for an electric car, you can quite literally plug it in). An AI solution needs fuel too, in the form of data. However, unlike with a car, there aren’t many publicly available ‘gas stations’ where you can get correctly processed data for your AI. You have to collect/mine the data yourself and then build your own data-refinement factories (ETL processes, data pipelines, etc.). The effort required to get usable data is often underestimated. Bigger companies have whole teams focused on gathering data, cleaning data, improving data quality (preferably at the data-collection level) and making sure the data represents reality. Most importantly, data changes over time, so you need to continuously monitor your data pipeline and adjust your data processing or AI engine to ensure it keeps receiving ‘fuel’ it can operate on.
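A tiny sketch of such a ‘fuel quality’ check that a pipeline could run before feeding data to the AI (the field names and rules are hypothetical):

```python
def check_fuel(records, required_fields=("age", "amount")):
    """Return a list of problems found in incoming records, so the
    pipeline can refuse bad 'fuel' before it reaches the AI."""
    problems = []
    for i, rec in enumerate(records):
        # Schema check: every required field must be present and filled in.
        for field in required_fields:
            if rec.get(field) is None:
                problems.append(f"record {i}: missing '{field}'")
        # Range check: a transaction amount should never be negative.
        amount = rec.get("amount")
        if amount is not None and amount < 0:
            problems.append(f"record {i}: negative amount {amount}")
    return problems


batch = [
    {"age": 34, "amount": 120.0},
    {"age": None, "amount": -5.0},  # bad record: missing age, negative amount
]
print(check_fuel(batch))  # reports two problems with record 1
```

Real pipelines would add distribution checks as well (has the average ‘amount’ drifted since training?), which is how the ongoing monitoring mentioned above catches data changing over time.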

Getting the correct type and quality of data also requires ongoing effort from your data team.

Pitfall and myth: AI replaces people or makes people redundant

This is another common misconception, and one which can have negative long-term effects for the company if not handled correctly.

  • Today’s cars help make driving easier for you, but the fact that fancy cars have lane assist or self-driving features doesn’t mean the driver no longer needs to be behind the wheel (at least for now)! Similarly, having an AI solution doesn’t mean you no longer need employees; you should employ them in a different way. Part of their work could consist of activities that add higher value for the company, like strategizing about future goals, investing in customer relations or solving cases too complex for the AI. Another part of their work should consist of checking random samples of the work done by the AI. This is called ‘human in the loop’ [4], and it not only provides valuable feedback the AI can learn from, making it more accurate; it’s also another layer of ‘security’, allowing employees to signal odd behavior that might be missed by your monitoring dashboard. On top of that, it increases trust and adoption if employees can supervise their AI colleague and continuously improve it by correcting its behavior. Inter-observer reliability is key here to achieve the best results.
Human in the loop — by Babette Huisman
  • Much like you still need to sit behind the wheel, you also still need to know how to drive. It would be detrimental for your company if, for whatever reason, you cannot use the AI and there’s no one left in the company who could do the work manually. You want the expert knowledge of these business processes to remain in your company, and the best way to do that is by having your employees ‘supervise’ the AI or solve the more complex cases as described above. In addition, whenever one of your monitoring dashboards highlights a problem, having the expert knowledge within your company to determine and implement the required fix is key to keeping your AI running in production.
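The random-sample review described above can be sketched in a few lines (the 5% sample rate and the record fields are hypothetical choices):

```python
import random


def sample_for_review(predictions, rate=0.05, seed=42):
    """Randomly route a fraction of AI decisions to a human reviewer
    ('human in the loop'); the human's corrections can later be fed
    back to the AI as training data."""
    rng = random.Random(seed)  # fixed seed keeps the sample reproducible
    return [p for p in predictions if rng.random() < rate]


predictions = [{"case_id": i, "ai_decision": "approve"} for i in range(1000)]
review_queue = sample_for_review(predictions, rate=0.05)
print(len(review_queue))  # size of the human review queue (~5% expected)
```

For high-risk decisions you might sample at a higher rate, or always route certain categories (e.g. low-confidence predictions) to a human rather than sampling uniformly.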

You generally don’t want to lose the expert knowledge within your company because that knowledge can help steer company activities.


