In an earlier post, I talked about Job Descriptions and Roles. In another, I talked about Vertical Hierarchy and Titles. It’s time to talk about Performance Management.
One of the most hated systems in many organizations is the performance review system. Every few years the griping rises to a level where the Talent Management professionals (as I’ve explained in prior posts, I refuse to use the word “resource” in relation to humans; if you don’t like “Talent Management,” then try “People Operations,” or the humble, unpretentious “Personnel”) develop a new performance review system, which folks seem to hate about as much as they hated the old one.
It is important to understand that most performance review systems have nothing to do with managing performance. The primary objective of most performance review systems is to establish a legal paper trail in the event of future litigation. Performance management is a secondary objective.
From my chair, sports teams are way ahead of business teams when it comes to performance management. In baseball, when a pitcher is struggling on the mound, a co-worker, usually the catcher, will be the first to talk to him—players (peers) are the first to tend to an individual’s performance problem. If the troubles persist, but aren’t too serious, the pitcher will be visited next by the pitching coach. If the problem is serious or the struggles continue, then the pitcher will be visited by the team manager. None of these people––the co-worker catcher, the first-line supervisor pitching coach, or the big-boss manager––will necessarily wait until an inning ends to talk to the pitcher. If the pitcher is struggling, they intervene right away—they can’t afford to wait. They’ll stop the action to visit the pitcher in the middle of an inning, and, if necessary, the manager will pull the pitcher right off of the mound and out of the game—the pitcher isn’t fired; he is just pulled from that game (think sprint), and the pitching coach works more specifically with him. If the pitcher’s struggles continue into succeeding games, then the team will either send him down to a lower league to work on his game, or trade him to another team, or release him outright. What the manager doesn’t do is wait until the end of the season to ask the pitcher what he was thinking, in the forty-third game of a 162-game season, when he threw that hanging curveball to the other team’s number-nine hitter, who had a one-and-two count in the eighth inning of a one-run game, with one out and runners on first and third.
Other sports are no different. In American football, a coach cannot go out onto the field, but he can call a timeout and have a player come to him. There are assistant coaches sitting in skyboxes at the stadium recording the game and critiquing both their own team’s and the opposing team’s performance in real time. Between possessions, when the offense or defense is off the field, the coaches in the skyboxes will transmit photos to the bench and talk to a player about something they saw in the previous series of plays. Feedback in sports is effective because it is immediate, specific, and frequent.
Another way in which sports teams are more advanced than most business teams is that sports teams know very specifically the behaviors they want from their players, whereas most business teams have only a vague idea. I make this point in the classes and retreats that I conduct with executives on their roles in leading organizational transformations, built around the BHG Framework that I have developed over a career of creating 9 new organizations, renovating 4 under-performing ones, and redesigning 2 others that were doing OK but were tooled for the past, not the future: all organizational transformation begins with leaders defining, in specific terms, desired behaviors—the behaviors they want to see more of, and the behaviors they want to see less of. This is a daunting task, but it is the job of a leader, and it is the leader’s first task in an organizational transformation. The good news is that if “organizational agility” is your goal (i.e., agility in the ways and means with which an organization is led, managed, and operated), then a large part of the behavior-definition task is already done for you. I coach executives to consult the Agile Manifesto. From the Manifesto’s core beliefs to its 12 principles to its value tradeoffs, it is almost entirely behavioral—behaviors defined at a level of specificity such that they can be taught, they can be measured, and they can be rewarded and/or corrected.
We need to remember that the annual performance review is all about documentation and has little to do with good performance management. Good performance management with employees at work (and children at home) is governed by the same principles as in sports, and, frankly, all the rest of life––what makes feedback effective is its immediacy, its specificity, and its frequency. The key, therefore, is to provide feedback in as near real time as possible, and again at the completion of a sprint or a release––certainly more frequently than at the end of the fiscal year or on the employee’s service anniversary. This is not just a managerial obligation—I expect the senior team members to play a leadership role and provide their peers with feedback just like the catcher does with the pitcher.
That feedback needs to include not only coaching, but also rewards, and those rewards can’t come only at the end of the fiscal year or on the employee’s anniversary date either. What is your equivalent of the game balls that many sports teams award to players following a game? Why don’t we pay out profit sharing and/or bonuses to employees at the end of the quarter, at the same time we reward stockholders with their dividends, rather than waiting until the end of the year?
If you drop a sugar cube into a rat’s cage right after it rings a bell, you will train the rat to ring the bell. But if the rat rings the bell and a sugar cube doesn’t drop until the end of the fiscal year or on the rat’s anniversary date, then no learning occurs. Same with housebreaking a puppy: if the puppy has an accident in your home, but hours pass between the accident and any action on your part, no learning occurs. You may say, “Yeah, but people are different from rats and dogs.” And I will tell you that there is very little evidence that people learn any differently. What makes feedback effective is its immediacy, its specificity, and its frequency. It’s the same for raising children, the same for correcting misbehavior in society, and the same for coaching adult members of any kind of team.
There’s an enlightened trend of companies doing away with annual performance reviews. However, if you are with one of the laggards, and need to bring all of the performance feedback together once a year and document it in the individual’s annual performance review, this will be easy if you’ve been giving feedback all along. Otherwise, the annual performance review process becomes an unnecessarily complex and overly sophisticated managerial burden of little to no value in regard to an employee’s actual performance.
If I were Talent Management King-For-A-Day, performance reviews would begin with the job description, which comprises the five roles that I described in a prior post, Job Descriptions and Roles: Producer, Talent & Resource Manager, Innovator & Entrepreneur, Personal & Team Developer, and Friend & Citizen. The expectations for each role would vary with the position and its level: apprentice (i.e., still learning); journeyman (i.e., self-sufficient); and master craftsman (i.e., teaches others). Performance review periods would be defined around sprints or releases, not fiscal periods or anniversary dates. Prior to or during a sprint or release, the employee and their team would agree on the one thing the employee will do to advance themselves in each role––one objective per role. This discussion would necessarily include the things the team will do to help the employee achieve those objectives (e.g., pairing assignments to develop a particular skill, training classes, mentoring assignments, etc.). When it came time to write the performance review, I would ask each employee to comment––principally in prose––on his or her own performance in each role.
When it comes to assigning actual performance ratings, I would simplify the ratings and employ a triangulation process. Companies differ in the rating scales they use (some use numeric values, such as 1–5 or 1–10; others use category labels, such as below expectations, meets, exceeds, or far exceeds), but whether numerical or categorical, they try to delineate performance at a level of granularity that is too fine. Again, if I were Talent Management King-For-A-Day, I would have only three, self-explanatory, rating categories:
- Needs Coaching.
- Doing Just Fine.
- Super Star.
I think you could take a random poll of team members, customers, and suppliers, and just about everyone would agree on who the super stars are and who needs coaching; everyone else is doing just fine. Certainly, those who “Need Coaching” and those who are “Super Stars” are the outliers, as they should be. The bulk of your bell curve will be those who are “Doing Just Fine,” and I just don’t see the value in trying to parse that group into smaller subsets. Performance ratings are inherently subjective, and attempting a finer level of granularity creates a false sense of precision; it doesn’t add any useful value.
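The false-precision point can be illustrated with a toy simulation (the scoring model and noise level here are made-up assumptions, not data from any real review process): give two raters the same noisy view of each employee’s true performance and measure how often they land in the same rating bin. Coarse scales absorb rater noise; fine scales manufacture disagreement.

```python
import random

random.seed(42)

def category(score, n_bins):
    # Map a continuous score in [0, 1] into one of n_bins equal-width bins.
    return min(int(score * n_bins), n_bins - 1)

def rater_view(true_score, noise):
    # A rater sees true performance plus Gaussian noise, clamped to [0, 1].
    return min(max(true_score + random.gauss(0, noise), 0.0), 1.0)

def agreement(n_bins, n_employees=10_000, noise=0.15):
    """Fraction of employees on whom two independent noisy raters
    assign the same rating bin."""
    same = 0
    for _ in range(n_employees):
        true_score = random.random()
        a = category(rater_view(true_score, noise), n_bins)
        b = category(rater_view(true_score, noise), n_bins)
        same += (a == b)
    return same / n_employees

# A 3-category scale yields far more inter-rater agreement than a
# 10-point scale built from the very same underlying judgments.
print(f"3 categories: {agreement(3):.0%} agreement")
print(f"10 categories: {agreement(10):.0%} agreement")
```

The extra gradations on the 10-point scale don’t capture extra information about the employee; they only slice the same noisy judgment more thinly, which is exactly the false precision described above.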
As subjective as they are, ratings are particularly important because they usually drive merit increases and promotions. The best way that I know for bringing objectivity to an inherently subjective process is through triangulation––spotting the person’s position based on multiple points of view. By this I don’t necessarily mean soliciting 360° feedback.
My experience with 360° feedback is that if you use it on an exception basis it can be a valuable tool. However, if you use it as a normal part of your process then it’s not worth the trouble. This is especially true if your company has the policy of performing all performance reviews at the same time (e.g., Q1), because people are then inundated with requests for 360° feedback, and what might otherwise be a useful tool is turned into a burden. The times I have solicited 360° feedback as a regular part of the performance review process, the feedback I received was of little value––frankly, it was mostly crap, shallow in substance, done with obvious haste by people probably overwhelmed with requests to provide 360° feedback on others. However, I would call for 360° feedback on an exception basis: whenever I sensed or discovered that my appraisal of a person’s performance and their self-appraisal were at odds.
During my career, I have used a different kind of triangulation process with my leadership team to assign performance ratings to their employees; I have used this as the Director of a 50-person, single-site organization and as a VP/CIO of a 300-person global organization:
- I would call a meeting of my directors, their managers, and our Talent Management representative(s). These meetings were organized by location, and would usually last a full day (sometimes two days at the locations with larger organizations).
- At these meetings, each manager would walk through each of their employees stating the performance rating they thought that employee had earned (obviously, this required advance preparation by the managers). The manager did not have to elaborate on any employee they were rating as “Doing Just Fine,” or its equivalent. However, if the manager anticipated rating an employee anything other than this, whether it was “Needs Coaching” or “Super Star”, then that manager also had to provide concrete examples of that employee’s performance that would warrant the outlier rating.
- At any point in a manager’s presentation of their employees, other managers and directors could (and were expected to) speak up if they had experience with that employee that either supported or challenged the rating the employee’s manager had planned. It went both ways. Often another manager would present examples that challenged the employee’s manager’s rating as too high. Just as often, other managers would present examples that challenged the employee’s manager’s rating as too low.
- An important ground rule in these meetings was that a person’s rank (e.g., manager, director, VP, etc.) did not give more weight to whatever argument they might make. The weight of someone’s argument was a function of the argument’s merits, not the rank of the person making it.
- Another important ground rule was that because an employee’s manager is the one who is ultimately responsible for their employee, the employee’s manager is the one who makes the final decision regarding that employee’s rating. The group could not usurp the employee’s manager’s decision.
The discussion that would ensue was rich and healthy. The directors, the Talent Management representative and I were there not only to participate in the process and keep it on track, but to ensure consistent calibration among the managers across my entire organization.
I do not subscribe to the practice of force-fitting ratings to a normal curve. I know that when you have a population of sufficient size the ratings distribution should approximate a normal curve. However, the practice of force-fitting the distribution of performance ratings to any kind of curve is ridiculous. First, force-fitting a result is not a statistically valid thing to do. Second, it is a win/lose methodology: in order to give someone an “A” you have to give another person an “F,” and it just doesn’t always work out that way. Finally, and most importantly, it is unfair––people should get the rating they have earned, not the rating you need to fit your distribution to a curve. I understand that some companies adopted this practice because they saw performance ratings creep up over time. If that is the case, then the correct answer is to train, coach, measure, and reward/punish managers on their measurement of their employees’ performance. But to address ratings creep by enacting a broad policy that force-fits ratings to a normal curve is just managerial laziness.
At the end, when we had gone through this rating process for all employees at all locations and consolidated the results to create a population sufficient in size, the final ratings did generally fit a normal distribution without having to be forced. There were some exceptions, of course, but given the rigor of the process, they were defensible.
All-in-all, a more meaningful, impactful, and certainly more Agile approach to performance management.