There’s a good chance you’re experimenting with generative AI right now. You might even be testing a few tools at once: a support assistant for call center agents, a coding copilot for software devs, and so on.
Each experiment has the potential to help your team work faster and smarter, saving you money in the process. But an estimated 80 percent of AI experiments fail – and the cost of error can be disastrous for your workers and your bottom line.
In this piece, we’ll break down this error cost to understand where it comes from, how it manifests, and how you can optimize each AI experiment for success.
The Source: AI Experiments That Create Work Friction
Like any new software, generative AI changes the way employees work – in good and bad ways.
In call centers, for instance, we’ve found that generative AI chatbots can be incredibly helpful for experienced agents. If a complex issue comes up on a support call, they can query the chatbot for help. The bot will return a few potential solutions, and the agent can choose the best option for the customer’s needs.
But new agents tend to have a harder time using generative AI. They might not have the know-how to craft effective prompts. And they rarely have the experience to choose the best AI-generated solution. As a result, generative AI can become a stressor on already-stressful calls: new agents may agonize over support decisions and start to freeze up.
Experiences like this are a fixture of failed AI experiments. Even if the technology benefits some workers, it may not help – and could actively hurt – others. In other words, generative AI can create work friction. In the next section, we’ll explain how that translates to error cost.
The Cost: Wasted Work, Wasted Money, and Employee Attrition
High-friction AI experiments tend to cost organizations in a few important ways.
Consider the call center example: a new agent might use generative AI to help solve a complex customer problem. But maybe their initial prompt isn’t concrete enough to return a useful set of support options. So they waste time editing their query. And they waste more time stressing over the best solution.
Apart from extending the call itself, all that wasted time adds up to lost productivity that snowballs as the day goes on. Meanwhile, the agent is still under pressure to hit their call quota.
Together with other stressors (angry callers, absent managers, etc.), the added frustration of unhelpful AI might be enough to tank their job satisfaction. And if they remain dissatisfied, they'll likely start looking for a new job.
What does this mean for the organization? Wasted work, higher turnover, and a failed AI investment. And that’s just for one experiment. Without a strategy to keep generative AI from causing work friction, it’s easy to see how so many experiments flop – racking up huge costs in the process.
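To see how quickly that adds up, here's a back-of-envelope sketch in Python. Every input – the number of agents, minutes wasted per AI-assisted call, loaded hourly cost – is a purely illustrative assumption, not a benchmark; swap in your own figures:

```python
# Rough estimate of what AI work friction costs a call center per year.
# All inputs are illustrative assumptions -- replace them with your own data.

agents = 100                    # agents using the AI assistant
ai_assisted_calls_per_day = 20  # calls per agent, per day, where the bot is queried
minutes_wasted_per_call = 3     # time lost re-prompting and second-guessing
loaded_cost_per_hour = 30.0     # fully loaded hourly cost per agent (USD)
working_days_per_year = 250

hours_wasted_per_year = (
    agents
    * ai_assisted_calls_per_day
    * minutes_wasted_per_call
    * working_days_per_year
    / 60
)
annual_friction_cost = hours_wasted_per_year * loaded_cost_per_hour

print(f"Hours lost per year: {hours_wasted_per_year:,.0f}")           # 25,000
print(f"Annual cost of work friction: ${annual_friction_cost:,.0f}")  # $750,000
```

And that figure only counts wasted minutes – it ignores longer calls, missed quotas, and the turnover costs described above.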
To Slash Your Error Cost, Uncover AI Work Friction with Data
The good news is that there is a way to boost the success of your generative AI experiments. The key is to continuously monitor AI’s impact on workers and tweak your technology or rollout to better meet their needs.
With targeted surveys – aimed at a small sample of employees – you can quickly understand:
- How generative AI affects employees (i.e., whether it creates or removes work friction).
- How generative AI experiences vary (e.g., between new and tenured workers, or younger and older employees), as illustrated in the sketch after this list.
- Whether there are other factors affecting the AI user experience (like inadequate job training).
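To make the second point concrete, here's a minimal sketch of the kind of analysis a pulse survey enables. The responses, the five-point friction scale, and the 12-month cutoff are all hypothetical stand-ins for whatever instrument and segments you actually use:

```python
# Hypothetical pulse-survey responses: each row is one agent's answer to
# "How often does the AI assistant slow you down?" (1 = never, 5 = constantly).
responses = [
    {"tenure_months": 2,  "friction": 5},
    {"tenure_months": 4,  "friction": 4},
    {"tenure_months": 7,  "friction": 4},
    {"tenure_months": 14, "friction": 2},
    {"tenure_months": 26, "friction": 1},
    {"tenure_months": 31, "friction": 2},
]

def mean_friction(rows):
    """Average friction score for a group of respondents."""
    return sum(r["friction"] for r in rows) / len(rows)

# Segment by tenure; the 12-month cutoff is itself an assumption worth testing.
new_hires = [r for r in responses if r["tenure_months"] < 12]
tenured = [r for r in responses if r["tenure_months"] >= 12]

print(f"New hires:      {mean_friction(new_hires):.1f} / 5")  # 4.3
print(f"Tenured agents: {mean_friction(tenured):.1f} / 5")    # 1.7
```

A gap that wide suggests the problem isn't the tool itself but each group's readiness to use it – exactly the kind of finding the next two adjustments act on.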
In the case of call center AI, maybe you learn that AI-generated support options are too complex for new employees. So you test a model that uses plain language and offers only a single recommendation per query.
Or perhaps you discover that the AI user experience becomes almost frictionless after a year on the job. So you limit regular AI use to more experienced employees and gradually onboard new hires to the tool at the nine-month mark.
No matter your approach, work friction data can help you adjust your AI roadmap in an evidence-based way to maximize the technology’s value. And if an experiment creates too much friction, you can pull the plug before the error cost skyrockets.
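Pulling the plug is easiest when the kill criterion is defined before the experiment launches. Here's a minimal sketch of one such rule – the threshold, grace period, and weekly scores below are illustrative assumptions, not recommendations:

```python
# A pre-registered "pull the plug" rule for an AI pilot.
# Threshold and grace period are illustrative assumptions -- set yours before launch.

FRICTION_THRESHOLD = 3.5  # mean friction score (1-5 scale) we won't tolerate
GRACE_PERIOD_WEEKS = 4    # early weeks where some friction is expected

weekly_friction = [4.2, 4.0, 3.9, 3.8, 3.9, 4.1]  # hypothetical pilot data

def should_pull_the_plug(scores, threshold, grace_weeks):
    """Stop the pilot if friction stays above threshold after the grace period."""
    post_grace = scores[grace_weeks:]
    return bool(post_grace) and all(s > threshold for s in post_grace)

if should_pull_the_plug(weekly_friction, FRICTION_THRESHOLD, GRACE_PERIOD_WEEKS):
    print("Friction isn't improving -- pause the experiment and investigate.")
else:
    print("Within tolerance -- keep monitoring.")
```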
Experiment with Confidence
Generative AI experiments are expensive by default. To ensure that money is well spent, let work friction data guide your implementation. When you do, you'll shut down bad experiments sooner and capture the productivity gains of successful ones faster – all with minimal impact on frontline workers.
If you’re ready to gather the data you need to experiment with confidence, FOUNT can help. Let’s start a conversation.