
No-Code ML Platforms: Boon or Bane? | by Jojo John Moolayil | Jan, 2023



Photo by Scott Graham on Unsplash

In recent years, we have seen a wave of no-code ML and data science platforms launched by large enterprises and thriving startups alike. Today, most leading cloud providers have at least one no-code/low-code ML offering; Microsoft’s Azure ML Studio, Amazon’s SageMaker Canvas, and Google’s AutoML are a few examples. Look closer and the underlying mission is common: democratizing AI/ML/DS. For the longest time, I firmly believed that no-code/low-code would not be an effective way to democratize ML. More recently, however, I changed my opinion, and the reason is probably not what you would guess. Let me explain.

Back in 2015, when I explored Azure ML Studio, I was genuinely impressed. For its time, the platform was mature and offered rich features for solving ML problems. The entire journey of data onboarding, exploratory data analysis, model building, hyperparameter tuning, and deployment could be accomplished with drag-and-drop tools. It was one of the first tools in this category that I used, and it felt complete. It let me achieve the objective I was testing at the time: deploying a model into production without a single line of code (albeit a toy model for testing). By late 2016, I was convinced there was a huge market for this category of services and that no-code tools would soon see mass adoption for ML problems.
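For readers who haven't lived that journey in code, here is a rough sketch of the same onboarding-to-evaluation pipeline in plain scikit-learn. The dataset and model choices are purely illustrative, not what any no-code platform actually runs under the hood:

```python
# The journey a drag-and-drop canvas abstracts away, sketched in
# plain scikit-learn. Dataset and model choices are illustrative only.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Data onboarding
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Model building, with preprocessing chained in front
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])

# Hyperparameter tuning via cross-validated grid search
grid = GridSearchCV(pipe, {"clf__C": [0.1, 1.0, 10.0]}, cv=5)
grid.fit(X_tr, y_tr)

# Hold-out evaluation before any deployment step
accuracy = grid.score(X_te, y_te)
print(f"test accuracy: {accuracy:.3f}")
```

Every box you drag onto a canvas corresponds to one of these stages; the drag-and-drop tool simply hides the glue code between them.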

However, as the years passed, I barely saw these tools adopted within the community I primarily engaged with. Some of them were flashy, with great demos, but in most cases they made little sense to me. Slowly, I came to see them as superfluous for democratizing AI. My reasons were simple: serious ML use cases, the ones that mattered for the business and were eventually deployed into production, were never suited to tools that traded away control in favor of a UI. Moreover, data engineering and data wrangling make up a gargantuan share of the effort in serious ML use cases, and that sheer volume and complexity of engineering could never fit into an over-simplified no-code tool. To me, no-code/low-code platforms had become glorified tools that served mainly as great marketing.

Recently, I started looking at these tools from a different perspective and wondered whether my opinion was biased. It was quite likely: I mostly interacted with data scientists who were already comfortable with some form of coding or were seasoned professionals in the field. I also mostly worked in environments where we collaborated closely with software engineers who translated research prototypes into production pipelines. It was therefore key for us to establish a research workflow that minimized the effort of moving between research prototypes and production artifacts, so we defaulted to Pythonic ecosystems supported by big data tools on established cloud platforms. In that setting, it’s natural to rule out no-code solutions.

To see the situation through a wider lens and a different user base, I reached out to folks outside my existing network to understand the changes in their tech stacks and their adoption of no-code tools. After talking to a fairly diverse audience, I came away with a few learnings that finally changed my opinion.

To start, I took a fresh look at how organizations structure their science practices. Though the field of ML has matured, it’s still quite common to see organizations with little to no science function. Most organizations struggle: they start small with ML, usually with a largely understaffed team. Though the potential for science problems within these organizations may be large, it’s hard to zero in on the big bets from inception. Discovering value from ML problems and realizing their business impact is a slow, iterative process that requires the stomach to learn from big failures. There is no perfect science path that takes you from identifying problems to generating business value as a simple point-A-to-point-B exercise; the journey is usually arduous and iterative. That got me thinking: what tools are adopted across organizations with varying maturity in their science function?

In reality, not all organizations can afford, or would want, to invest in expensive science skills at scale from inception. The process is often an undefined path. The following visual illustrates a simplified path from problem discovery to a product-driven science solution. [Of course, each step has its own set of iterations, but you get the larger picture.]

[Image by Author] – Illustrative productization path for ML use cases.

The grey area represents the frequency of iterations at each milestone. Naturally, a large number of ideas are weeded out before a basic prototype is implemented; these are pruned further before serious prototypes are committed to, and the list finally narrows to a few refined candidates for an end product.
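The shape of that funnel can be made concrete with some back-of-the-envelope numbers. The pass-through rates below are purely hypothetical, chosen only to illustrate how quickly the pool narrows:

```python
# A hypothetical funnel with assumed pass-through rates at each gate.
# The rates are invented for illustration, not taken from any study.
stages = ["ideas", "basic prototypes", "serious prototypes", "end products"]
pass_through = [1.0, 0.3, 0.3, 0.3]  # fraction surviving into each stage

count = 100  # start with 100 candidate ideas
for stage, rate in zip(stages, pass_through):
    count = round(count * rate)
    print(f"{stage}: {count}")
```

Even with generous survival rates, 100 ideas collapse to a handful of end products, which is exactly why the early, high-churn stages need cheap and fast tooling.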

For the longest time, I looked at these products through a different lens and criticized the value of no-code platforms unreasonably. My key question was: how valuable is this solution for serious business? Somehow, it seemed superfluous for the use cases that mattered. But then I realized I was judging from the viewpoint of a workplace with no dearth of ML skills and engineering resources. That isn’t the case everywhere. Most organizations won’t have the resources and teams to support science use-case validation at scale, and may not have a mature science function to support it either.

The following visual illustrates the thought process with a no-code platform’s effectiveness across the life stages of a business problem.

[Image by Author] — Illustration of no-code tool effectiveness across problem life stages

My bias stemmed from an inclination toward the more mature phases of a problem, but that is one specific and narrow view. Each organization, depending on its science maturity, will have different tools at its disposal. If we generalize the problem-solving process across most organizations, we need to accept that not all ideas are productionized. The ratio of ideas to prototypes to MVPs to final products looks like dominos falling in reverse order, and therefore each life stage of a problem needs to be supported differently, with different tools. The following table dives deeper into these problem life stages.

[Image by Author]

As shown above, if we dissect a problem’s life cycle into smaller milestones, we can see how the needs for skills and resources vary across stages. Dedicated science teams are by no means frugal resources; they are usually on par with, or higher in cost than, engineering teams, so it’s common for smaller organizations not to have them. How, then, can teams without the capacity for dedicated science staff cycle through this process faster, without major trade-offs?

This is when I started seeing new value from no-code platforms.

Does it make sense to have a one-size-fits-all solution across the solution journey? Heck, no! So what changes as a problem progresses? To make data science and ML ubiquitous, we need an ecosystem that helps us move faster in the areas with a very high frequency of iterations combined with high failure rates. For the ideation phase, the best tools already exist and thrive: whiteboards, slide decks, docs, write-ups, and so on. But for basic and serious prototypes, do we have anything that speeds things up? Some argue that Python is so well democratized that it can fill this role. That is only partially true: not every analyst is fluent in Python, or even SQL. There is a gap here waiting to be filled.

This is exactly where I strongly feel no-code solutions can thrive.

Essentially, a no-code ML platform significantly lowers the barrier for the layperson to embrace data science. It does this by neatly abstracting complex science components into modular building blocks that support the journey from ideation through experimentation and validation, with additional room for customization. These tools ship robust defaults that let the majority of tasks move forward with little to no customization input from the user. They thus accelerate idea validation by simplifying the data engineering and model-building tasks. Further, they simplify consuming the results and support broader go/no-go decisions with sizeable experiments. For small organizations or new teams embracing ML for the first time, these tools offer phenomenal value: they let teams take their first steps confidently, at affordable and effective price points.

No-code tools are by no means a replacement for large, serious solutions. They are not a permanent toolset for carrying a problem from prototype to production. As the business problem is validated for value and starts scaling, the value of no-code tools diminishes, signaling the need for more fine-grained control. No-code tools will lack the sophistication needed to run large production workloads at web scale.

The iterative and experimental nature of ML and data science use cases makes them resource-hungry initiatives. Enterprises that are growing in tech, or have recently adopted ML for the business, need time to validate ideas before doubling down. The toolset we have today may not be the friendliest, easiest entry point for new teams embracing data science; it is certainly robust, but less than ideal for beginners. This is where democratized AI/ML tooling plays a pivotal role. Can an organization start its journey with a data science investment as small as one employee and no upfront costs? Can ideas be validated with no serious engineering effort and limited science maturity? Can a promising idea be scaled slowly until the team is confident enough to invest big? A definite yes to all of these is not always easy with the existing Pythonic universe of ML; we need tools that offer more. For problems that demand quick validation and an effective means to iterate toward maturity, no-code ML solutions hit the sweet spot.

When we democratize AI and ML tools, we equip the ecosystem to nurture ideas the way you raise a newborn until kindergarten. Once in kindergarten, well, maybe it’s time for better tools. But until then, no-code platforms are your best friends.

In general, quality production systems should not be delivered through over-simplified tools. But the iterative and experimental nature of science use cases also makes them a poor fit for resource-hungry engineering from inception. Different stages of a problem, and different levels of organizational science maturity, call for different tools along the science journey. No-code/low-code solutions offer a great start and effectively lower the barrier for organizations to explore whether the field offers value to their business. Only when the organization gets serious is there a need to migrate to tools and services with more granular control. Until then, no-code tools are a great buddy for your team.

Hello there, thanks for reading! If you would like to be updated with my upcoming blogs, please follow me on Twitter to be notified of new posts right away. Thanks again!




