
Performance Calibrations that Work

Anyone who has been through the calibration process knows it can get messy…fast. Learn how to improve these meetings so that they lead to a better performance review process.

Jan 17, 2024

Last updated on Aug 21, 2023

In a previous blog, I wrote about building performance reviews that are, among other things, designed to mitigate bias. This includes defining clear and consistent expectations against which all employees are assessed and limiting open-text questions, which are proven to introduce bias. By doing this, we can feel more confident in the objectivity and consistency of the outcomes of our review process.


Unfortunately, even the most perfect performance review system will not make the human beings writing the reviews any less human. They will naturally bring their deeply rooted biases into the process. Everyone needs support to ensure these biases are identified and addressed. This is the goal of a performance calibration.


Anyone who has been through the calibration process knows it can get messy…fast. Hours-long meetings, complicated spreadsheets (or, worse, no data at all) and plenty of rabbit holes have given calibration meetings a bad reputation and unimpressive results. Too often, these marathon sessions, which cost the company tens of thousands of dollars in leaders’ time, produce few changes to review cycle outcomes.


How might we improve calibration meetings so that they lead to a better performance review process? How might we make the time spent truly valuable? I have run hundreds of calibration sessions over the past 20 years and have seen the good, the bad and the ugly. Below is a summary of what I have learned about how to make performance calibration worthwhile for employees, managers and leaders.


PREPARING FOR CALIBRATION

Talk Openly About Calibration

Calibration is famously opaque. Twice a year, managers are locked in conference rooms with the (now virtual) blinds drawn. Employees know managers are talking about them, but don’t know what is happening or whether it is helping or harming their chances for the performance or promotion outcomes they are hoping for.


As you talk to your employees about the performance review process, I encourage you to be transparent about why and how you calibrate. Talk about the outcomes related to fairness and consistency that you are trying to achieve. Consider sharing something like this:


“After reviews are written and before they are finalized, we conduct calibration meetings with managers. We use this time to align on how we are thinking about expectations and ensure we are consistent in how we are assessing performance. We also take time to talk about and help each other identify any biases that may have unconsciously entered the process. The goal with these calibration sessions is to ensure a more fair and consistent outcome for you.”


Not only will this transparency instill a great sense of trust and confidence in employees, it will also ensure managers take their role in calibration seriously.


Think about Sequencing

Top down? Bottom up? The best way to sequence your calibration meetings depends on your goals.


Top-down calibration means starting with your senior leadership team, then calibrating the leaders who report to them, and finally moving to functional-level sessions. The benefit of top-down calibration is alignment: your leadership team will leave their own calibration session aligned on expectations and bring those expectations to their functional calibrations, driving more consistency. Top-down calibration is great for:

  • Newly formed leadership teams
  • Leaders with less experience calibrating
  • Leaders who will be expected to run their own calibrations


Bottom-up calibration means conducting functional-level calibrations before calibrating with the senior leadership team. The benefit of bottom-up calibration is visibility: your leaders will be able to review where the rest of the organization landed and consider overall rating and promotion rates as part of the session. Bottom-up calibration is great for:

  • Mature leadership teams who are aligned on performance expectations
  • Leadership teams highly interested in an organizational view of performance
  • Organizations for which a forced distribution is important


Whichever you choose, consider how you might reap the benefits of the other. For example, if you calibrate top down, find ways to report back on organizational performance once calibration is complete. If you calibrate bottom up, consider a conversation ahead of time to align on performance expectations.


Determine Who Will Be Discussed

One of the biggest mistakes made in calibration is failing to focus the conversation on the right people. When we aim to talk about everyone, we usually run out of time. When we talk only about the top and bottom performers, we miss the opportunity to identify middle performers who may be miscategorized.


Rather than identifying certain categories of performance to guide the discussion, I recommend focusing on the employees for whom there are questions. In other words, surface conflict. Spend less time in your calibrations agreeing on how great Joe is and more time wrestling with whether Sam is truly exceeding expectations when their peers find it hard to work with them.


You can do this by sending ratings and promotion recommendations ahead of time and asking the calibration participants to “flag” those they want to discuss. They may flag someone because they:

  • Are surprised or disagree with the recommendation
  • Aren’t familiar with the employee’s work and want to learn more
  • Have specific questions about the employee’s performance
  • Want feedback on their own direct report

It’s ok to be fluid and add other employees to the list along the way. Just be clear about why you are spending time on each employee as part of the conversation.


Know Your Goals: To Force or Not to Force

As you enter your calibration session, be clear with managers about whether there will be an expectation to meet some sort of forced distribution of ratings. There are several approaches to managing the distribution of ratings in a performance review cycle:

  • A forced distribution requires a percentage of employees to fall into each rating category. This may be enforced at a team, department or company level. For calibration, alignment to the required distribution becomes a focus.
  • A recommended distribution offers an ideal distribution of ratings without forcing it. For calibration, this offers guidance, or a “gut check,” as to whether each rating is being used as intended. For example, if the recommendation is for 15% of employees to land in an Exceeds Expectations bucket but 30% do, that is a signal that the bar for the rating is not being held as intended (a simple check like the sketch after this list can surface this).
  • Alternatively, you may decide not to focus on distribution at all and instead look only at individual performance. For calibration, this places more emphasis on reviewing each employee and assigning the correct rating, rather than zooming out and looking at the group.
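
If you work with a recommended (or forced) distribution, the gut check described above is simple arithmetic: compare the share of employees at each rating to the target share. Here is a minimal sketch of what that comparison might look like in Python, assuming a hypothetical 15/70/15 recommendation, hypothetical rating labels and a handful of illustrative ratings exported from your review tool; the 10-point tolerance is arbitrary and only illustrates the idea.

    # Minimal sketch (hypothetical data): compare the actual spread of ratings
    # to a recommended distribution and flag ratings that drift too far from it.
    from collections import Counter

    RECOMMENDED = {  # recommended share of employees at each rating
        "Exceeds Expectations": 0.15,
        "Meets Expectations": 0.70,
        "Below Expectations": 0.15,
    }

    # One entry per employee, exported from your review tool (illustrative only)
    ratings = [
        "Exceeds Expectations", "Meets Expectations", "Exceeds Expectations",
        "Meets Expectations", "Below Expectations", "Exceeds Expectations",
    ]

    counts = Counter(ratings)
    total = len(ratings)

    for rating, target in RECOMMENDED.items():
        actual = counts.get(rating, 0) / total
        drifted = abs(actual - target) > 0.10
        note = "  <-- check where the bar is being held" if drifted else ""
        print(f"{rating}: actual {actual:.0%} vs. recommended {target:.0%}{note}")

In practice, you might run a check like this at the team, department and company level before the session, so the group walks in knowing where the distribution already deviates from the guidance.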


Regardless of how you handle rating distributions, it is useful to consider whether the distribution of ratings accurately reflects the business outcomes of a team, department or company. For example, if a department missed all or most of its goals but its ratings skewed heavily toward meeting or exceeding expectations, that may be an important signal that the bar is not being held as high as intended.


FACILITATING CALIBRATION

Bust Biases

The goal of calibration is to ensure consistency and mitigate biases. But those biases will not bust themselves. Be intentional about identifying the kinds of biases you are looking for and empowering participants to take an active role in calling them out. Here are three simple ways to integrate bias busting into your calibration conversation:


  1. Share Common Biases: Simply naming the common biases that enter performance conversations, working from a tried-and-true list, is a great first step. Spend time at the beginning of the session educating participants on these biases - some will be more familiar than others.
  2. Have Participants Ask for Help: After reviewing the biases, have each participant choose one bias they would like the group to help them mitigate. This not only asks each participant to reflect on their own biases, but also gives others permission to call out biases when they see them.
  3. Create a Visual Cue: Sometimes something physical or visual makes it easier to call out bias in the moment. If calibrating in person, print a card with tips and reminders about the biases you are on the lookout for. If calibrating virtually, agree on an emoji or other visual cue everyone can use; this helps participants feel more comfortable calling out their peers.


If resources allow, it can be effective to have an objective third party (usually a member of the HR team) act as a dedicated “bias buster” in the room. Their job is to listen carefully for biased language, call out inconsistencies in how expectations and ratings are being understood, and ask questions like, “Tell me more about what it looks like to be a ‘superstar’?”


Use Data

One important way to make calibration more objective is to rely on data, rather than narrative, to review and assess individual performance across the team. Ideally, this includes any data that can be pulled directly from the review. For example, if you have rated questions in your review, along with an overall rating, you can assess them for alignment and consistency. A review of the following two employees demonstrates inconsistencies in how review data were interpreted when determining an overall rating.

[Image: side-by-side review data for two employees, illustrating the inconsistency described above]

Additional useful data include past review ratings, tenure in role and any other performance data, such as goal or target results, that may not be captured in the review form.
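
One simple way to surface this kind of inconsistency, assuming your review questions and overall rating both use a numeric scale (say, 1-5), is to compare each employee’s average question rating to their overall rating and flag large gaps for discussion. The sketch below is illustrative only: the employees, scale and threshold are hypothetical, and a flagged gap is a prompt for conversation, not a verdict.

    # Minimal sketch (hypothetical data): flag employees whose overall rating is
    # out of line with the ratings on their individual review questions.
    employees = [
        # ratings on individual review questions (1-5) and an overall rating (1-5)
        {"name": "Employee A", "question_ratings": [4, 4, 5, 4], "overall": 3},
        {"name": "Employee B", "question_ratings": [3, 3, 2, 3], "overall": 4},
    ]

    THRESHOLD = 1.0  # how far the overall rating may drift from the question average

    for e in employees:
        avg = sum(e["question_ratings"]) / len(e["question_ratings"])
        if abs(e["overall"] - avg) >= THRESHOLD:
            print(f"{e['name']}: question average {avg:.1f}, "
                  f"overall {e['overall']} -> flag for calibration")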


Be a Master Facilitator

The calibration conversation is perhaps more prone to rabbit holes, sidebars and circular logic than any other. The goal of calibration is to surface disagreement and use it as a way to align on performance expectations. Guiding the conversation this way takes active facilitation. Here are a few tools to keep the conversation focused and productive.

  • Limit Agreement, Encourage Debate: Have participants move agreement to chat (+1 all you want) or create a visual cue like a thumbs up if others agree. Reinforce that you will move the conversation quickly through agreement in order to surface productive debate. Ask questions like, “does anyone disagree?” rather than “does anyone else have something to add?”
  • Dig out of Rabbit Holes Quickly: Have a keyword or phrase that anyone can use to flag a conversation that has gone on too long or is no longer productive. I like the phrase E.L.M.O. (“Enough, let’s move on”). Anyone can drop it into the chat, or, for in-person meetings, consider making cards that participants can flash when it is time to push ahead.
  • Keep Track of Your Progress: Keep coming back to the list of employees you need to review and check in on your progress. While it may feel tedious, this consistent progress check helps push the meeting forward and keeps everyone aware of time. The prompt may sound something like this:

“We have reviewed all level 1s and 2s and talked through promotions to level 2. We will move on to our large group of level 3s now, calibrate their ratings first, then review promotions from level 2 to 3. Given the time, we should spend about 30 minutes on this next group.”

  • Agree on Follow-Ups: When you do your job well, you surface disagreement and give managers more to think about before they finalize their review decisions. Do not try to make every decision in the room; instead, agree to and track follow-ups. This keeps the conversation moving and ensures any changes that need to be made are captured. In these moments, you can ask the group whether they are comfortable with the manager in question making the final decision. This prompt may sound something like this:

“Sandy, it sounds like you got some really good feedback about Joe that may lead to an adjustment in his rating. I would suggest you and Katie take this offline and let me know by the end of the week what you decide. [To the group] Is this group comfortable with Sandy making this decision, based on the feedback you shared?”


Conclusion

Calibrations are a huge time investment. With thoughtful planning, preparation and facilitation, however, these meetings have the potential to take your performance review process and outcomes to a new level. It won’t be perfect the first time, but over time your organization will build its calibration muscle. Ultimately, leaders and managers will gain more clarity and alignment about expectations and the bar for performance. They will open lines of communication for future feedback. And most importantly, the outcomes of your performance reviews will be less biased and more consistent and fair.
