This article is part of a series called EXposed: EX practices uncovered. It was co-authored by Stephanie Denino and Timo Tischer from TI People. It originally appeared on LinkedIn.
Why does this matter?
For teams focused on improving employees’ experiences, there are three key questions they must be able to answer to ensure they drive measurable impact:
- Which experiences should we improve?
- What about those experiences can we improve?
- Did our efforts improve the experience?
Every step of the way, those driving this work need data to help inform their answers. So, they are going out looking for data. On one hand, they hear: “surely we have enough data!”, as organizations are overflowing with data. On the other hand, these practitioners often feel that they don’t quite have what they need.
In fact, this latter perspective was confirmed in the State of EX 2022: although over 90% of respondents run employee surveys at least once a year, 70% say the employee data they collect is inadequate for their needs. More on this below:
EX leaders and practitioners are determined and resourceful; they will do their very best to make the most of what they have, and when they still do not have what they need, they will go out and get it.
But what should they go get? First, let’s lay out the core types of data that are relevant to answering the three key questions:
- Moment and touchpoint data captures people’s feedback about specific experiences at work (e.g. your experience in specific moments like learning to perform your role, resolving a customer issue, taking leave, or pursuing a new role, as you interact with human, physical, and digital touchpoints)
- Aggregate sentiment data captures people’s feelings about their cumulative experience at work (e.g. as a result of your experience across many moments, you feel engaged, included, a sense of purpose, etc.)
- Behavioral data captures people’s actions at work (e.g. attrition/retention, individual productivity, absences, leave requests)
With this context, we can turn our attention to which type of data is most relevant to answer each of the three key questions.
What are we seeing?
Let’s look through the lens of a recent experience improvement project with a US-headquartered financial services institution of roughly 60,000 people.
Which experiences should we improve?
The team driving this work first considered behavioral data and noticed patterns in attrition, but realized they couldn’t quite pinpoint why people were leaving.
They then naturally looked to aggregate sentiment data, in this case data from their yearly survey and quarterly pulses, and found that a key pillar of their EVP focused on ‘growth’ was showing a noticeable dip in scores. Even with tens of thousands of open-ended comments at the end of their survey asking, “what can we do better?”, the data did not clarify which of the many growth-related challenges people raised were most important to address.
This led them to collect moment and touchpoint data using the FOUNT product to understand, across 4 key talent segments (e.g. digital roles, call center agent roles), which moments were both most important (based on their correlation with overall engagement and likelihood to recommend working at the company) and not meeting their people’s satisfaction threshold.
One moment stood out as highly important across talent segments, based on thousands of responses: pursuing a new role internally. The moment’s 60% CSAT meant that 40% of those who had experienced this moment were not satisfied. Without internal mobility and access to new opportunities, it was difficult for people in this organization to feel they could truly grow the way they hoped to. (Note: we know a role change or promotion is not the only way for employees to grow, but it certainly revealed itself as highly important to employees in this organization.)
What about this experience can we improve?
Within this moment of pursuing a new role, it was necessary to uncover the critical pain points.
In the past, the team explored experience challenges with research methods such as interviews and observation. However, conducting interviews was time-consuming, which often meant limiting their number. And though they uncovered many issues through this exploration, it was hard to tell which pain points were biggest, given the limited volume of data and its unstructured nature. This often led the team and their leaders to wonder: are we solving the right problem?
This time, thanks to the collection of moment and touchpoint data, it was possible to identify which touchpoints were most problematic. For example, job postings received a 34% CSAT, confirming that this part of the experience needed attention more than other, higher-scoring touchpoints.
Because free-text data tied to that specific moment was also collected, its analysis revealed that job postings were too generic and unclear, making the work of finding a new role confusing and overwhelming.
The project team then engaged employees in deep dive sessions to better understand what felt generic and unclear, and worked with employees as well as internal recruiters, to find ways to improve the job postings.
This is just one example of many specific changes made to improve the experience of pursuing a new role.
Did our efforts improve the experience?
Once changes and interventions are implemented, the same data types become relevant, but in reverse order. First, re-measuring moment and touchpoint data allows the organization to see whether touchpoint performance has improved and whether the overall moment score has improved with it.
Over time, the team will look to aggregate sentiment data via items in their quarterly pulse and yearly survey to monitor higher-level improvement (e.g. increases in ‘growth’ related items such as “Do you feel you have good career opportunities?”). They will also monitor behavioral data – more specifically internal role application data and attrition data – to observe trends (e.g. are we seeing declining attrition from people who have signaled they are open to new roles and/or who have applied to internal roles?).
What do we recommend?
Given the sea of data that exists, it is difficult to know what type of data is most relevant to experience improvement efforts. Teams get the best insights from moment and touchpoint data because it provides what we call middle signals: data that is not too high-level or too low-level, which not only confirms that there is a problem requiring focus, but also indicates what to do about it.
However, we commonly see people spend too much time trying to make sense of macro signals or micro signals. Macro signals, like aggregate sentiment data, behavioral data and social listening data, offer high-level findings that can indicate a moment needs fixing, but are often not specific enough to show what needs fixing.
Micro signals, like system usage data or process metrics, suggest the possibility of a problem, but often lack the broader context needed to decide whether and how to act. Indeed, we have not discussed micro signals until now because, without context, this type of data is not immediately insightful.
See below a summary of macro, middle, and micro signal data:
As organizations mature in their collection, integration, and use of different types of data to improve people’s experiences, macro, middle and micro signals might all play a role. But to begin, middle signals provided by the collection of moment and touchpoint data are most valuable.
When collected quantitatively to begin with, this type of data allows organizations to:
- Focus improvement efforts on the most important and most broken moments. In this case, improving the moment ‘pursue new role’ was a higher priority according to people’s feedback than ‘learn in my role’, which might also relate to the organization’s growth pillar
- Understand which touchpoints within those moments require the most attention. While there were other areas to improve, the team needed to start somewhere, and this helped with prioritization. It then becomes relevant to involve employees in making deeper sense of the specific findings and to co-create solutions that address the pain points
- Re-measure to quantify the impact of the work. In this case, pre-intervention CSAT was 60%; post-intervention CSAT jumped to 78%. With this kind of data in hand, teams can credibly demonstrate the impact of their work.
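The re-measurement arithmetic is simple but worth making explicit: CSAT is the share of satisfied responses out of all responses, and impact is the point change between waves. The response counts below are hypothetical; only the 60% and 78% figures come from the case described above.

```python
# Illustrative sketch of CSAT re-measurement. Response counts are
# hypothetical; only the 60% -> 78% scores come from the article.

def csat(satisfied, total):
    """CSAT as a percentage: satisfied responses / all responses."""
    return 100 * satisfied / total

pre = csat(600, 1000)    # hypothetical counts giving the pre-intervention score
post = csat(780, 1000)   # hypothetical counts giving the post-intervention score

print(f"CSAT moved from {pre:.0f}% to {post:.0f}% ({post - pre:+.0f} points)")
# → CSAT moved from 60% to 78% (+18 points)
```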