You Just Deployed a New AI Tool. How Soon Can You Know if It’s Working?

You’ve rolled out a new AI tool – now what? The pressure is on to show results, but how soon can you tell if it’s working? Many AI projects fail not because of poor technology, but due to low employee adoption. Work friction data serves as a leading indicator of AI effectiveness, offering early insights into whether your investment is driving productivity or creating new challenges. Learn how measuring employee experience can help you course-correct before it’s too late.


You’ve deployed a new AI tool in your organization.

Congratulations!

Now the pressure’s off, right?

Wrong.

The pressure has simply shifted from the whirlwind of implementation to the expectation of tangible results. Whether the goal was to boost productivity, increase efficiency, or reduce costs, your AI investment is on the clock.

And that clock is ticking. While it typically takes a year or longer to see the intended impact of an AI investment, half of CFOs will cut funding for an AI project within those first 12 months if they’re not seeing positive ROI. In other words, if things aren’t working early, you may not get a chance to fix them before the plug gets pulled. That’s pressure.

Wouldn’t it be great if AI projects came with some kind of early warning system? In this piece, we’ll explain how having the right data can give you timely insight into what’s working, what’s not, and, most importantly, what you can do to course-correct a flagging AI project before it’s too late.      

Is Your AI Pulling Its Weight?

You invested in AI to solve some problem in your organization. Maybe you had a software development team that was getting bogged down in rote tasks. Or a call center that was too overloaded with simple requests to quickly respond to customers who had more complex issues.

So you deployed AI to relieve some of the stress on your employees and thereby increase their productivity and efficiency. For the development team, an AI tool is now checking software specs to give your developers more time to create new code. And in the call center, an AI chatbot is handling some of the more routine calls, freeing up employees to deal with callers who have more complex issues.

Both of these sound like great AI use cases. But are they actually working for your employees and your organization? Are your developers writing more code thanks to the extra time the AI tool is affording them? Are your call center agents helping more customers and cutting hold times now that AI is handling the routine calls? Are you seeing the ROI you expected?

AI Success Is Dependent on Your Employees

The answer to all of these questions lies in something called work friction – those moments of difficulty or struggle that employees are dealing with in their day-to-day work. 

If the developers in the above scenario find, for example, that they can’t rely on the specs the AI tool is giving them, they won’t use it. The AI in this case not only hasn’t made their work easier, but has forced them to go back and double-check its output for accuracy – it’s actually created more friction.

Likewise, if customers are having issues with the call center AI chatbot, they’re likely going to call back and wait to speak to a human being. Now those employees will not only be fielding the calls that AI was supposed to cover, but they’ll also be handling callers who are upset about that bad previous experience. Here too, they’re dealing with even more friction.

In both cases, if the employees have a choice, they’ll likely disable the AI agent or cut it out of their workflow.

Measure AI Effectiveness Using Worker Data  

AI is user-dependent technology. If employees don’t see AI helping in the specific areas where they’re encountering work friction, they won’t adopt the tool and you won’t see the results you’re hoping for. This is how many AI failures unfold – it’s not that the technology wasn’t good, but that it wasn’t a good fit for employees in the specific use case it was supposed to support.

That’s why work friction is a key leading indicator of AI effectiveness. By getting to the heart of where your employees are experiencing friction, you can gauge whether the tool you’ve deployed is helping to ease their burden. Is your AI solution addressing the specific touchpoints or processes that are slowing down your employees? Is it making their work easier? Is it freeing them from routine tasks to focus on higher-value work? 

If the answer to any of these questions is yes, you’ll likely see the productivity gains you expected from AI. If not, work friction analysis can provide valuable data you can use to adjust your deployment and try again.

For example, we recently worked with a financial services firm that rolled out AI for its development team in a situation very much like the one described above. But the company didn’t have a clear idea of how the tool would impact the developers’ biggest problem areas. Perhaps unsurprisingly, those employees found it didn’t help much, so they stopped using it. The company had a failed AI investment on its hands.

With a detailed work friction analysis, however, the firm was able to see that developers were running into issues reviewing pull requests, finding answers about the code base, and writing technical documentation. 

Now the company had a roadmap for adjusting the tool based on user feedback and redeploying it. When the firm implemented GitHub Copilot to help with documentation and code review, the development team embraced it, which reduced their work friction.

And the company tallied $5.4 million in annual savings.

The Pressure Is On for AI to Deliver – Make Sure You Meet the Moment

The AI revolution forges ahead. A recent Wharton study found that while only 37 percent of large firms used AI weekly in 2023, 72 percent did in 2024. And the momentum doesn’t seem likely to slow in 2025.

Yet even as the pressure to deploy AI continues apace, the greater onus on leaders will now be to show results from those investments. That can take time, of course, but one way to get an early read on just how well (or how poorly) an AI experiment is going is to see how your employees are reacting to it.

Work friction data can be that leading indicator. Get in touch to find out how we can help provide insight into whether your AI investment is headed for success – and, if it’s not, what you can do to fix it before it’s too late.