Data Fluency Series #2: How to Get Reliable Ratings Data
November 07, 2017
by Marcus Buckingham
This is the second episode in our series on Data Fluency. You can also watch the first episode here.
In the first episode on Data Fluency, we talked about the three most important words when it comes to data: Reliability, Variation, and Validity. This week we’re discussing the first of those words: how do we know that the people data we’re collecting is reliable? How do we know that it’s measuring what we say it’s measuring? And can you really measure something like performance in another person?
There are three ways to measure things. You can count it (e.g., how many inches tall are you?). You can rate it (how tall do I think you are?). And you can rank it (list the people on your team from tallest to shortest).
Ranking is the least helpful measurement tool. A ranking may tell you something within one group, but it means nothing in any other context. HR data is all about comparing people across the organization, and ranking can’t support that, because one group’s rankings say nothing about another group’s.
The best, most reliable way to measure anything is to count it, because counting has inter-rater reliability. That means that no matter who is counting, the number will always be the same. Countable people data like payroll, time and attendance, and length of service will always be good, reliable data.
Unfortunately, when it comes to people data, many of the things we want to measure aren’t countable. We can’t count your performance or your leadership skills, your strategic thinking or your engagement. So to measure these things, we have to rate them.
Most of the people data we have is ratings data – and most of the ratings data is, unfortunately, bad. It’s not reliable. It doesn’t measure what we say it measures. And most of that is due to rater unreliability.
Maybe you’ve heard of the Idiosyncratic Rater Effect, but as a quick refresher, it means that humans are unreliable raters of anything besides their own experiences and intentions. If you are asked to rate another person on something, your rating will reflect much more about you as a rater than about the other person. In fact, 61% of a rating is a result of Unconscious Rater Bias. That means that your performance rating reflects your manager, and not you. And that’s a problem! We pay you, train you, promote you, fire you, as though it reflects you… and it doesn’t!
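To make the Idiosyncratic Rater Effect concrete, here is a minimal simulation sketch in Python. Everything in it is an assumption made for illustration (the weights are chosen so the rater accounts for roughly 60% of rating variance, echoing the figure above); it is not ADP’s model or the underlying research.

```python
import numpy as np

# Hypothetical model: each rating mixes a little signal about the ratee
# with a lot of rater-specific leniency plus random noise. All weights
# below are assumptions chosen for illustration only.
rng = np.random.default_rng(0)
n_raters, n_ratees = 50, 50

true_score = rng.normal(0.0, 1.0, n_ratees)   # what we want to measure
leniency   = rng.normal(0.0, 1.0, n_raters)   # each rater's personal baseline

# ratings[i, j] = rater i's score for ratee j
ratings = (0.6 * true_score[None, :]          # weak signal about the ratee
           + leniency[:, None]                # strong signal about the rater
           + rng.normal(0.0, 0.5, (n_raters, n_ratees)))

total_var = ratings.var()
rater_var = ratings.mean(axis=1).var()        # variance tied to who is rating
ratee_var = ratings.mean(axis=0).var()        # variance tied to who is rated

print(f"share of variance from the rater: {rater_var / total_var:.0%}")  # ~60%
print(f"share of variance from the ratee: {ratee_var / total_var:.0%}")  # ~20%
```

Run it, and roughly three-fifths of the spread in the ratings traces back to who did the rating, not who was rated, which is exactly the problem the 61% figure describes.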
And maybe you’re thinking, “Well, if I just get more people to rate Marcus then our idiosyncrasies will be averaged out!” Hate to break it to you, but bad data piled on top of bad data doesn’t suddenly make it good data. It means you have an even bigger pile of bad data.
Bad Data + Bad Data does not equal Good Data.
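Here’s a quick way to see why. If every rater’s error were purely random, averaging really would wash it out. But rating errors aren’t purely random. The hedged sketch below (all numbers assumed for illustration) gives each rating a distortion that every rater shares about a person, such as a likability halo, on top of rater-specific noise. Averaging more raters cancels the noise but leaves the shared distortion fully intact.

```python
import numpy as np

# Assumed model: rating = weak signal + shared distortion (halo) + noise.
rng = np.random.default_rng(1)
n_ratees  = 200
true_perf = rng.normal(0.0, 1.0, n_ratees)  # what we wish we were measuring
halo      = rng.normal(0.0, 1.0, n_ratees)  # distortion all raters share

def avg_rating(n_raters):
    # Each rater adds their own idiosyncratic noise; averaging removes
    # only this term, never the shared halo.
    noise = rng.normal(0.0, 1.0, (n_raters, n_ratees))
    return (0.5 * true_perf + 0.8 * halo + noise).mean(axis=0)

for n in (1, 5, 25, 125):
    r = np.corrcoef(avg_rating(n), true_perf)[0, 1]
    print(f"{n:>3} raters -> correlation with true performance: {r:.2f}")
```

The correlation climbs a little as raters are added, then plateaus well short of 1.0: the average of many bad ratings is a very precise estimate of the wrong thing.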
The other reason ratings are rarely reliable is called Data Insufficiency. When you’re rating a person, you rarely have enough data to rate them reliably. If your manager’s manager is rating you and they see you once a week, how is that rating going to be good data?
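Data Insufficiency is, at bottom, a sampling problem, and you can put rough numbers on it. In the sketch below (all figures are assumptions for illustration), a skip-level manager who catches one work moment a week is estimating a year of performance from about 50 noisy samples, while a teammate who sees several moments a day has well over a thousand; the skip-level’s estimate carries several times the sampling error.

```python
import numpy as np

# Assumed numbers: moment-to-moment performance varies around a true level.
rng = np.random.default_rng(2)
true_level = 0.7
moments = rng.normal(true_level, 1.0, 10_000)  # a year of work moments

for label, n_obs in [("skip-level manager, 1x/week", 50),
                     ("teammate, ~5x/day", 1250)]:
    sample = rng.choice(moments, size=n_obs, replace=False)
    std_err = 1.0 / n_obs ** 0.5  # standard error of the mean (sigma = 1)
    print(f"{label:<28} estimate = {sample.mean():.2f}  (+/- ~{std_err:.2f})")
```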
So, how do we solve for that? Well, a person can only reliably rate their own experiences and intentions. Whenever you see a ratings system that asks you to rate someone else on some quality or competency, it will produce bad data. When a survey tool asks you to rate your own experience or your own intentions, it produces good, reliable data. Ask me to rate your leadership potential and you get my biases; ask me whether I intend to stay at this company and you get something I can actually report reliably. It’s important to know the difference, so you can tell whether a given rating reflects the person being rated or merely the person doing the rating.
A person can only reliably rate their own experiences and intentions.
Watch the next Data Fluency Series episode for more information on how to become data fluent, and the impact of bad data on our businesses.
Note: The views expressed on this blog are those of the blog author(s), and not necessarily those of ADP. This blog does not provide legal, financial, accounting, or tax advice. The content on this blog is “as is” and carries no warranties. ADP does not warrant or guarantee the accuracy, reliability, and completeness of the content on this blog.