You’ve adopted an AI tool – or maybe a dozen. You’ve got big targets from the C-suite for how AI is supposed to improve productivity and reduce costs. But you don’t have any clear data on whether the tools are working. Worse, you don’t know how to get that data in time to change course, if necessary.
Welcome to the age of AI anxiety.
Once the initial excitement about what generative AI could do settled, boards and executive teams set ambitious targets for bringing its benefits to their organizations. And business leaders from around the org leapt at the opportunity.
But generative AI is unlike any other major technology introduced in the last several decades. Adoption depends fully on user willingness. If the tools aren’t making users’ lives better, they won’t use them – and the organization won’t reap any benefits.
In this piece, I’ll explain how to assess the performance of your AI tools and how to identify what is and isn’t working so you can focus your time and resources on things that will deliver the greatest ROI. First, though, let’s take a look at why AI is such a different beast than previous technologies.
AI Adoption Is Bottom-Up
Many major digital transformations of the last few decades were top-down: if you wanted to switch from on-prem servers to the cloud, you could make the command decision to do so and it would happen. Ditto if you wanted to reengineer your software’s backend to be modular. These were decisions that executives could impose on employees.
AI is different. AI tools are all about automating specific moments of work to improve productivity. If they automate one thing but then create three or four extra things a worker has to do, the worker will stop using them. And there goes your budgeted productivity increase.
Because adoption is bottom-up, AI tools will only increase an organization’s productivity if they make work easier for employees. And the only way to assess whether they’re doing that is to measure specific moments of work.
How to Assess the ROI of Your AI Tools
To assess whether an AI tool is leading to a positive return on investment, you have to look at the specific work moment in which the tool is used. For example, imagine a financial services company that implements an AI agent for its IT team. The goal is to increase development productivity by 25 percent.
But a month in, productivity is flat, despite adoption being at target. To figure out what’s wrong, the company can…
- Gather data on specific moments when the AI tool is used: to generate new code, for example, or to gather documentation from the codebase.
- Identify what impact the AI tool is having in each of those moments for various worker groups, compared with what the process was like before.
- Identify moments of high work friction – i.e., places where the AI tool is making a process worse than it was.
- Assess which high-friction moments have the biggest impact on overall productivity.
- Tackle high-friction moments in order of impact (see the sketch after this list).
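To make that last step concrete, here’s a minimal sketch of how you might rank moments once you have the data. The moment names, friction scores, and time estimates are illustrative assumptions, not real benchmarks or FOUNT’s actual model.

```python
# Minimal sketch: rank high-friction work moments by estimated impact.
# All names and numbers below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class WorkMoment:
    name: str
    friction_delta: float      # change vs. pre-AI baseline (positive = worse)
    occurrences_per_week: int  # how often a worker hits this moment
    minutes_lost: float        # average time lost per high-friction occurrence

    @property
    def weekly_impact(self) -> float:
        """Estimated minutes lost per worker per week; improvements count as zero."""
        return max(self.friction_delta, 0.0) * self.occurrences_per_week * self.minutes_lost

moments = [
    WorkMoment("generate new code", -0.2, 40, 5.0),
    WorkMoment("gather documentation from the codebase", 0.6, 15, 12.0),
    WorkMoment("review AI-suggested changes", 0.4, 30, 8.0),
]

# Tackle high-friction moments in order of impact.
for m in sorted(moments, key=lambda m: m.weekly_impact, reverse=True):
    print(f"{m.name}: ~{m.weekly_impact:.0f} min/worker/week")
```

In this toy example, documentation gathering, not code generation, turns out to be the first moment to fix, even though the tool was bought to speed up coding.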
In other words, the key to assessing the ROI of an AI tool is to gather first-person data about how it impacts the work of the people using it.
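And once you know how many minutes each moment gives back (or takes away), converting that into an ROI figure is back-of-envelope arithmetic. The headcount, hourly cost, and tool cost below are illustrative assumptions.

```python
# Back-of-envelope ROI from time saved; all figures are illustrative.
net_minutes_saved_per_worker_per_week = 60  # net of any new friction the tool creates
workers = 500
loaded_hourly_cost = 75.0                   # fully loaded cost per worker-hour
weeks_per_year = 48
annual_tool_cost = 300_000.0                # licenses, rollout, and support

annual_value = (net_minutes_saved_per_worker_per_week / 60) \
    * loaded_hourly_cost * workers * weeks_per_year
roi = (annual_value - annual_tool_cost) / annual_tool_cost
print(f"annual value: ${annual_value:,.0f}, ROI: {roi:.0%}")  # $1,800,000, 500%
```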
One thing that’s important here: your method of data collection has to be scalable. Focus groups, surveys, and interviews can deliver a lot of information, but they aren’t scalable, and for large organizations, scale is key. Without scalable data, all you have is anecdote, which isn’t enough to tell you which moments of work friction are having the biggest impact on productivity, and therefore which ones to address first.
As you may have guessed, you can also use scalable first-person data to prioritize future AI investments. Let’s take a look at how.
How to Prioritize Future AI Investments
Which AI investments will you prioritize next?
For many organizations, the answer comes from the top down: the call center is an important part of the business, so we’ll send AI resources to the call center.
But remember: AI is a bottom-up technology. A top-down approach is not likely to lead to a positive ROI.
Instead, organizations can start from the level of the worker by looking at something we like to call the user experience of work. Many orgs are familiar with UX when it comes to customer-facing products and services: where do leads drop out of a funnel? Which features do customers never use? Which ones do they use inefficiently?
Bad UX leads to lost customers. Similarly, bad UX of work leads to disengagement and ultimately attrition.
Applying UX principles to employee work can uncover areas of high work friction – and therefore prime candidates for AI intervention. When we give workers tools that alleviate their biggest pain points – and those tools work – they’re likely to use them as intended. This means the impact on the organization will likely be close to what the tool’s vendor projected.
Evaluating AI Impact Starts with First-Person Worker Data
Right now, many leaders are experiencing AI anxiety driven by two questions:
- What are the best AI use cases?
- How do I know if an AI implementation worked?
Because AI is a bottom-up technology, the only way to answer these questions confidently is by examining first-person work data. FOUNT is the only solution that takes that approach. We conduct short surveys of employees about their moment-to-moment work, then contextualize and analyze the data we gather with the help of more than seven million other data points on work friction.
If you’re ready to ease your AI anxiety, get some clear answers about how your current AI investments are performing, or identify which AI investments you should prioritize next, let’s talk.