The Missing Features in Your Data Product
by Chad Isenberg | February 2023

I lead a monthly data discussion group at Zendesk, where I’m fortunate to get to hear a variety of thoughts and perspectives from smart, diverse, and talented people. We cover topics ranging from the technical to the procedural, and we regularly tie these back to our work streams. Online discussion groups, local meetups, and conferences are terrific, but I feel there’s even more value in getting the people in your organization together to discuss industry trends, as it can provide direction for your teams and their projects.

One of the themes we’ve been returning to is the idea of our team’s value and how we can measure it. In an earlier article, I talked about how data teams generate value, but this isn’t the only way. Regardless of whether you’re providing business support, enabling automation through reverse ETL, or feeding data to advanced analytical and machine learning use cases, there’s always a lingering question: did we save or generate more than we spent?

As one of our attendees said, our customers frequently measure our impact “based on vibes.” This shouldn’t be taken as a slight, but rather a concession to the difficulty in creating firm metrics around impact and value. Additionally, embedded in this idea is that there is subjective and perhaps non-quantifiable value in analytical outputs, which we’ll discuss in a moment.

Building an attribution model for data products is a notoriously tough nut to crack, and I won’t try to tackle all of the elements here. While I believe we could benefit from a broad attribution framework, I don’t know if that’s even possible, given the number of ways data can reduce costs and increase revenue. Instead, I want to focus on a few specific components that I feel are neglected, in particular:

  1. The psychosocial value of data
  2. Customer delight and satisfaction
  3. Data quality as a core deliverable

This first observation is less about how we should be “doing data products” and more about what the table stakes are. During our discussion, I brought up the idea that we don’t necessarily know why data-driven companies succeed, but research indicates that they do. One person responded with this insight: maybe the dashboards aren’t “doing anything,” at least not in the sense of driving a specific, deliberate action. What if instead of (or in addition to) actionable insights, their value is in motivating and aligning our customers?

Cheerleading team tossing a member in the air. Photo by Rojan Maharjan on Unsplash.

At one point in my career, I was working as a BI developer for a wholesale company, and we had reporting that provided our sales team with summaries of their sales metrics. This report saw reasonable usage throughout the month, but it skyrocketed at month-end, with some users refreshing several times an hour. I remember being completely bewildered by this trend.

I’m sure there were several operational use cases behind this behavior, but in hindsight, I believe one of the most valuable outputs of the report was motivation. Seeing their shipments go out in near real-time made our salespeople excited; it gave them the extra “oomph” to make calls and close deals as they rushed to meet or exceed quota. This reporting was doing very real work and driving a business process, even if it wasn’t in the way that we would typically expect.

It’s also worth noting that there’s something inherently social about many of these data artifacts. A dashboard is a shared object and can serve as a common thread of conversation between colleagues. Even if the underlying data isn’t remarkable or particularly insightful, the dialogues and shared narratives sparked by them can be valuable. Alignment is a powerful tool for an organization, and data can drive alignment.

As “objective” people, many data professionals may be reluctant to accept that “inspiration” is a valid or valuable output; however, we’re missing something if we don’t use data to motivate us. I resisted this idea initially, but reflecting on my career, I have many stories and anecdotes like the above, and I can’t help but feel that this qualitative impact is significant. Perhaps the most important lesson I took away from this experience is that just because I couldn’t perceive a mechanism for generating value doesn’t mean that it didn’t exist.

Finally, the narrative of data revealing some hidden insight or a contrarian opinion is a trope at this point, but there’s something powerful that happens when data affirms our beliefs and reassures us that we’re on the right track. The fact that data often supports the status quo rather than upending it should be celebrated, not feared.

The Colosseum. Photo by Mathew Schwartz on Unsplash.

In the same vein, I believe it’s important to consider whether or not our customers like our data products. This could range from providing ergonomic naming conventions and easily consumable tables for analysts and data scientists to delivering aesthetically pleasing dashboards for business users.

What does this look like? Should we build feedback mechanisms into our data products? Should we run surveys? I would caution that we don’t want to be too heavy-handed; as with all internal products, we’re at risk of having a tenuous relationship with the actual value chain (i.e., how we get paid).

Still, I think there’s room for providing lightweight systems to drive feedback in the data product lifecycle. Which features are loved? Which are hated? When is it safe for a data product to go into “maintenance mode,” and when do we sunset it? In my experience, these questions are almost always raised in an ad-hoc fashion, and by that point in the product’s lifecycle, even identifying active users can be complicated by process automation that has long since been forgotten.
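
To make this concrete, here’s a minimal sketch of what one such lightweight feedback mechanism might look like: a single record type and function for capturing a thumbs-up or thumbs-down against a data product and feature, which could sit behind a button on a dashboard. All of the names here (the product, the feature, the in-memory sink) are hypothetical placeholders, not a reference to any particular tool.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class FeedbackEvent:
    product: str   # hypothetical name, e.g. "sales_summary_dashboard"
    feature: str   # hypothetical name, e.g. "month_end_shipments"
    user: str
    liked: bool
    comment: str = ""
    ts: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def record_feedback(event: FeedbackEvent, sink: list) -> None:
    """Append a feedback event to a sink. In practice the sink would be
    a warehouse table or an event stream, not an in-memory list."""
    sink.append(asdict(event))

# Usage: wire this to a thumbs-up/down control on a dashboard.
feedback_log: list = []
record_feedback(
    FeedbackEvent("sales_summary_dashboard", "month_end_shipments",
                  user="some_analyst", liked=True),
    feedback_log,
)
```

The point isn’t the implementation; it’s that a uniform, low-friction event like this makes questions such as “which features are loved?” answerable with a query instead of a meeting.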

As we were closing our monthly meeting, Niral Patel, one of the data engineering managers on my team, brought up this cogent point: why is quality an afterthought when it’s a critical component to satisfy our customers’ demands?

To some extent, this is a matter of resources. Data teams are frequently scrappy and asked to take on far more work than they’re capable of delivering. They’re mired in operational woes and unclear requirements. When pressed for time, delivering something incomplete can seem better than delivering nothing.

But like standardized tests where you’re penalized for wrong answers, poor data quality damages trust. If your team consistently cranks out dashboards that are inaccurate, stale, and inconsistent, your customers will lose trust in you. And the damage isn’t limited to you and your team; you’re contributing to a culture of mistrust that can require years of work and huge turnover to repair.

We have a responsibility to our stakeholders to promote data quality as a key feature of our products, and we need to involve them in the tradeoffs. Since data quality has many dimensions, our customers have to be involved in prioritizing: how should we handle unusual data volumes? What should we do when we encounter a schema change? Is it more important to deliver incomplete data on time or complete data late? While there’s a sense of what “poor” data quality is, managing the tradeoffs in delivering “good” data is much more complicated and context-sensitive.
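
One way to keep those tradeoffs explicit, rather than buried in pipeline code, is to record each product’s agreed-upon policies somewhere both the team and its stakeholders can see them. Here’s a rough sketch of what that might look like; the field names and policy values are illustrative assumptions, not an established standard.

```python
# Per-product quality "contracts": the policies stakeholders agreed to.
# Field names and values here are illustrative assumptions.
QUALITY_CONTRACTS = {
    "sales_summary_dashboard": {
        "on_schema_change": "halt_and_alert",  # vs. "coerce_and_continue"
        "volume_anomaly_pct": 30,              # alert if volume shifts >30%
        "prefer": "on_time_partial",           # vs. "complete_but_late"
        "max_staleness_hours": 24,
    },
}

def volume_is_anomalous(product: str, rows_today: int, rows_typical: int) -> bool:
    """Flag unusual volumes against the product's agreed threshold."""
    threshold = QUALITY_CONTRACTS[product]["volume_anomaly_pct"] / 100
    return abs(rows_today - rows_typical) > threshold * rows_typical

print(volume_is_anomalous("sales_summary_dashboard", 500, 1000))  # True
```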

Important work is being done in the data governance space, and it’s time to bring it to bear on our data products, from the start, every time. Quality is an essential feature, every bit as important as the design, interfaces, and use cases enabled by our products.

While there are products and services that can provide us with usage data, that data by itself isn’t enough; we need metrics and associated outcomes. I believe the following metrics (and the related monitoring and observability) are critical components of a successful data product; a rough sketch of how they might fit together follows the list:

  1. A way of measuring usage, ideally differentiating automated from ad-hoc use
  2. A way of measuring satisfaction, ideally at the feature level
  3. A way of measuring data quality across all of its key dimensions
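
As promised, here’s a rough sketch of how these three measurements might hang together, assuming usage events tagged with an account type (say, derived from service-account naming conventions in the warehouse query log), feedback records shaped like the FeedbackEvent sketch earlier, and a set of pass/fail quality checks. Everything here is an illustrative assumption rather than a reference implementation.

```python
from collections import Counter

def usage_breakdown(events: list[dict]) -> Counter:
    """Metric 1: usage counts, split automated vs. ad-hoc.
    Assumes each event carries an 'account_type' tag."""
    return Counter(
        "automated" if e["account_type"] == "service" else "ad_hoc"
        for e in events
    )

def satisfaction_by_feature(feedback: list[dict]) -> dict[str, float]:
    """Metric 2: share of positive feedback per feature."""
    totals, likes = Counter(), Counter()
    for f in feedback:
        totals[f["feature"]] += 1
        likes[f["feature"]] += f["liked"]
    return {feat: likes[feat] / totals[feat] for feat in totals}

def quality_score(checks: dict[str, bool]) -> float:
    """Metric 3: fraction of passing checks across quality dimensions
    (freshness, completeness, consistency, and so on)."""
    return sum(checks.values()) / len(checks)

# Example: two usage events, one feedback record, three quality checks.
print(usage_breakdown([{"account_type": "service"},
                       {"account_type": "human"}]))
print(satisfaction_by_feature([{"feature": "month_end_shipments", "liked": True}]))
print(quality_score({"fresh": True, "complete": True, "consistent": False}))
```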

These components are far from sufficient, but I believe they are necessary for data teams to adequately judge whether or not their work is having an impact. Yes, dollar or FTE-equivalent attribution is the gold standard, but in cases where that’s not available, knowing that we’re delivering high-quality data to engaged, satisfied users is a good proxy. These metrics also allow us to capture those unintended benefits of a data product; perhaps our users are applying the outputs to different processes altogether.

One salient point that Niral raised in a follow-up conversation is that data products should be actionable; an automation or dashboard should drive some action that generates value. It’s critical to make the distinction between creating data products with the primary goal of psychosocial impact and considering psychosocial impact as a dimension of value. I wouldn’t suggest that, in the planning phase, we make a product with the sole goal of improving morale or team cohesion, as that value proposition is tenuous at best. Rather, if our product fails to drive its intended action but is still wildly popular, we should have the framework in place to investigate why. Essentially, we don’t want to throw away something valuable just because it’s not “working as intended.”

