Thursday, January 30, 2025

Don't work against each other

Last week I explained what's wrong with using quantitative targets to measure your employees. There's one other risk you might run, but I didn't discuss it there because (unlike the others) it doesn't really derive from a flaw in applying the metrics themselves.

Say you offer a bonus to the highest achiever in a certain area—the one who packages and ships the most widgets, or something. This is probably a number that is easy to calculate; and if you are in the business of packing or shipping widgets, there's no obvious hidden bias. The problem, rather, is that you risk setting your people at odds with each other, because they will compete internally to get whatever reward you offer to The Winner. 

We are so used to thinking of competition as a spur to great achievement that it's easy to forget how it can also shred your organization's esprit de corps. If Fred and Max are both likely contenders for the prize, they might both lose sight of the bigger picture in the rush to win. In the extreme case, if it looks like Fred is going to pack more than Max, Max might sabotage Fred's work so that he falls behind. In a sense this is just an instance of "unhelpful optimization," which we have discussed before. What makes it especially poisonous is that the members of an organization have to be able to rely on each other or the organization falls apart. This is not just a mathematical misunderstanding that drives suboptimal behavior. It can strike at the heart of the organization itself.*
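To see the incentive problem in miniature, here is a small Python sketch. The payoff numbers are pure invention on my part, chosen only to illustrate the trap: each packer chooses whether to work straight or to sabotage his rival, the prize goes to whoever packs more, and the firm cares about total output.

```python
# Hypothetical payoffs for two rival packers, Fred and Max.
# The numbers are invented for illustration; each pair is
# (Fred's payoff, Max's payoff) for one combination of choices.
PAYOFFS = {
    ("cooperate", "cooperate"): (10, 10),  # both pack at full speed
    ("cooperate", "sabotage"):  (2, 12),   # Max wins the prize cheaply
    ("sabotage",  "cooperate"): (12, 2),   # Fred wins the prize cheaply
    ("sabotage",  "sabotage"):  (4, 4),    # both waste time undermining each other
}

ACTIONS = ("cooperate", "sabotage")

def best_response(player: int, other_action: str) -> str:
    """The action that maximizes this player's own payoff,
    holding the other player's action fixed."""
    if player == 0:  # Fred
        return max(ACTIONS, key=lambda a: PAYOFFS[(a, other_action)][0])
    return max(ACTIONS, key=lambda a: PAYOFFS[(other_action, a)][1])

# Sabotage is each player's best answer to anything the rival does...
for other in ACTIONS:
    assert best_response(0, other) == "sabotage"
    assert best_response(1, other) == "sabotage"

# ...yet mutual sabotage gives the firm its lowest total output.
totals = {choices: sum(payoff) for choices, payoff in PAYOFFS.items()}
print(min(totals, key=totals.get))  # ('sabotage', 'sabotage')
```

Under these invented numbers, mutual sabotage is the stable outcome even though it is the worst one for the firm as a whole. That, in a nutshell, is what a big prize can do.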

Of course your mileage may vary. If wages are generally fair, and if the prize is mostly symbolic—a stuffed plush toy, say, or a loaf of zucchini bread—then people will likely treat the competition with a sense of fun. Last year Fred won, this year Max won, next year maybe it will be Ermentrude, but in any event the whole team can go out for pizza afterwards with a good heart. But if your people do not feel that wages are fair, and if the prize is really substantial, then the effects are going to be corrosive.

You already have to compete against other firms in the marketplace. At least inside your own walls, your people should feel safe.

This clip from the movie "A Beautiful Mind" sums up the issue elegantly.**


__________

* To be clear, this is hardly a revolutionary insight. The earliest thinkers to address social organization were already aware of it. See, in particular, Plato's discussion of exactly this topic in Laws, near the beginning of Book 1, Stephanus pages 625D-628E.

** Comments on YouTube seem divided over whether this clip correctly explains the concept of Nash equilibrium. I'm not an economist, so I have no idea whether it does. But it is a great scene.  

       

Thursday, January 23, 2025

The mismeasure of performance

We've been talking for the last couple of months about metrics: how to make them work for you, and what common issues to avoid. And there's no question but that they can be very powerful. Rightly applied, quantitative metrics can give you unparalleled insight into the behavior of a process, allowing you to fine-tune it to improve efficiency and eliminate errors.

If metrics can tell us so much about our processes and machines, it is only natural to want to apply them to our people as well. After all, if Fred is more productive than Max, isn't it only fair for us to know this, and to reward Fred accordingly? It seems like an obvious line of thought, and in fact many companies have some kind of performance appraisal system in place.

So it's interesting that a number of high-profile voices have spoken out against the whole concept of annual performance reviews. Jack Dorsey—co-founder of Twitter and CEO of Block—is famously against them. Perhaps more seriously, so was W. Edwards Deming, who identified annual performance reviews as one of his "Seven Deadly Diseases of Management."

What's wrong with reviews?

If metrics are generally so useful, and if using metrics to evaluate human performance sounds so plausible, what's wrong with it? Why are Deming and Dorsey—and others, to be sure*—so opposed to annual reviews?

Photo: Jack Dorsey, by cellanr, CC BY-SA 2.0.

Their main objection is empirical: Deming and Dorsey both observe that no system of annual performance reviews ever seems to deliver the benefits that it promises. On the whole, performance doesn't improve. (Indeed, over time it may degrade.) People go through all the motions, but no one's heart is in it. And in the end the organization is left doing no better than it would have done without these reviews.

That doesn't mean that either Deming or Dorsey wants to do away with feedback between manager and employee. Not at all. They just argue that asking employees to hit numerical targets is an unreliable way to generate feedback, and that talking only once a year is far too little.

  • More precisely, Deming estimates that 90-95% of an employee's performance is determined by the system and not by the employee's individual initiative, so improvement efforts have to focus on making the system better, not simply on making the employees more zealous. He agrees that it is important for employees to feel motivated, but argues that the most effective motivation comes from the work itself, and from feeling fully engaged by the organization's leadership.
  • For his part, Dorsey recommends ongoing, real-time conversation between employees and management on how well the work is getting done, along with focused coaching where needed to improve an employee's skills in the moment. Why wait another eleven months if you can address the topic now and get past it?  

How can this be?

This is all logical, so where did our original line of thought go wrong? What's wrong with applying the basic logic behind metrics to measuring human performance?

The error came in forgetting the cautions we have already observed about the use of metrics, because every single one of them applies to this use case. 

All in all, managing human behavior with a numerical dashboard is likely to miss all the things you really need to know. 

So yes, measure your processes. Measure your tools. But when it comes to your people, talk to them instead. You'll learn a lot more that way. 

Maybe I can end with a definition I saw recently, posted online by the reliably entertaining Ralph Aboujaoude Diaz:


__________

* See, purely as an example, this recent post on LinkedIn.

           

Thursday, January 16, 2025

Calibration in wine-tasting

A couple months ago, I ran across an article about wine-tasting that I promptly lost and have not been able to find again. But it made some interesting points about calibration, so—as part of the current series on measurement—I'll try to reproduce the gist of it here. Since I can't find the article I can't give you a footnote to substantiate the factual claims I make about wine; but I think you'll agree that they are mostly common sense.

We all know that there is a difference between Good Wine and Bad Wine, and also that Good Wine generally costs more. But this article suggests that we recognize at least three levels: Terrible Wine, Good-Enough Wine, and Great Wine. And the differences between these levels are revealing.

As you climb from Terrible Wine to Good-Enough Wine, the price goes up by a bit but generally not by a lot. At the same time, the overall quality improves dramatically. Most wine drinkers can tell the difference between Terrible Wine and Good-Enough Wine.

But when you then climb from Good-Enough Wine to Great Wine, the variables shift. With this step the price may shoot up much higher. The wine gets a lot better too, but what is interesting is that not all wine-drinkers can taste the difference. More precisely, anyone can tell that the Great Wine doesn't taste quite the same as the Good-Enough Wine. But unless you have a trained palate, you may not be able to distinguish the subtleties that make this bottle worth ten times as much as that bottle. Those subtleties really do exist; it just takes a trained palate to recognize them.

What does this have to do with calibration? Everything.

In wine-tasting, your palate is the measuring instrument; the wine is the object to be measured; and its quality is the dimension in question. And the point is that the measuring instrument—your palate—has to be calibrated to meet the requirements of the measurement. But this calibration is of two kinds. 

  • On the one hand, you want to make sure no one is leaning on the scale; or in other words, that the measuring instrument reads zero when the inputs are (in fact) zero. 
  • On the other hand, you want to make sure that your measuring instrument is capable of the readings you need. If you need nanometer precision, don't use a yardstick. But if you are measuring carpet, don't use a nanomeasuring machine. (A rough sketch of both checks, in code, follows this list.)
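For readers who like to see such things in code, here is a minimal sketch of the two checks. Everything in it is my own illustration: the function names, the numbers, and the 10-to-1 rule of thumb for resolution, which is a common metrology convention but not something my lost wine article specified.

```python
def zero_is_honest(readings_at_zero: list[float], allowed_offset: float) -> bool:
    """First check: with zero input, the instrument should read zero
    (no one leaning on the scale). 'allowed_offset' is how much
    residual bias we are willing to tolerate."""
    average = sum(readings_at_zero) / len(readings_at_zero)
    return abs(average) <= allowed_offset

def resolution_is_adequate(instrument_resolution: float,
                           feature_size: float,
                           ratio: float = 10.0) -> bool:
    """Second check: the instrument should resolve details much finer
    than the thing being measured. The 10:1 default is a common rule
    of thumb, used here purely for illustration."""
    return feature_size / instrument_resolution >= ratio

# A yardstick marked in sixteenths of an inch is fine for carpet...
assert resolution_is_adequate(instrument_resolution=0.0625, feature_size=12.0)
# ...but hopeless for a feature a few thousandths of an inch across.
assert not resolution_is_adequate(instrument_resolution=0.0625, feature_size=0.003)
```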

These principles apply exactly to the measurement of wine. 

  • The first requirement—that your palate should read zero when you aren't tasting anything—means that you shouldn't be distracted by other flavors. You can achieve this by taking a bite of something with a neutral flavor before sipping your wine.* 
  • The second requirement means that your palate has to be trained to match the use case you have in mind. 
    • If all you need is a table wine that will complement your hamburger or your Halloween candy,** then it's enough to be able to tell the difference between Terrible Wine and Good-Enough Wine. For that use case, any greater sensitivity might be wasted.
    • On the other hand, if you are judging premium wines at the highest level—or if you are trying to re-create Alexandre Dumas's experience drinking Montrachet***—well, for that you need both sensitivity and training.

Once again, as always, what you need all depends on what you are trying to do.   

__________

* Note, for example, the care with which the Comte de Rueil offered his guests olives between each course to cleanse their palates before tasting the wine, in Dorothy Sayers, "The Bibulous Business of a Matter of Taste," in Lord Peter (New York: Harper & Row), pp. 154-167.

** Yes, this is really a thing! See for example this blog post from October 2022.

*** Dumas once declared that Montrachet should be drunk only "on bended knee, with head bared." It is supposed to be the best white wine in the world, or at least one of them.

   

Tuesday, January 14, 2025

FMEAs—Reducing the risk of failure

This morning, Manufacturing Tomorrow published my article, "FMEAs—Reducing the risk of failure." It's their article now so I won't post the text of it here, but you can find it by following the link. I hope you find it useful!


 

Thursday, January 9, 2025

Working with metrics that don't tell you much

We spent the whole month of December talking about metrics: how to create them, and how to avoid some common pitfalls associated with their use. Before we leave the subject, I want to address one more topic: What about when your metrics don't give you all that much information?

The first thing is to check how much information you really need. If your metric obviously tells you less than you need, that's almost like having no metric at all, and maybe you need to create a better one, using whatever tools you have available. But not so fast: sometimes even just a little data can be enough.

My son, Graham Mills, is a soil scientist, and recently he and I were talking about the kinds of measurements commonly used to classify Western rangeland. The Bureau of Land Management (BLM) has defined a strategy called Assessment, Inventory, and Monitoring (AIM). The standard source for this methodology is this publication,* which explains that:

Core methods generate indicators which represent the minimum information necessary to describe three key ecosystem attributes: soil and site stability, watershed function, and biotic integrity …. Nearly everything we value about ecosystems depends on these attributes. These core methods can also be used to generate many additional indicators that directly inform multiple management objectives, such as maintaining wildlife habitat, biodiversity conservation, producing forage, and supporting watershed health. Modifications to the core methods are discouraged as they limit the ability to combine and compare datasets, and thus describe ecosystem attributes at multiple scales.**

So far, so good. The catch, as Graham explained it to me, is that the actual measurements of soil health represent such a small fraction of the total characteristics of the soil that they are still maddeningly vague. It is, for example, not really possible to develop a solid theoretical understanding of the changes that have taken place over the years on a particular stretch of rangeland. To anyone with a scientific background, this limitation is frustrating.

Frustrating but not immobilizing. It turns out that soil scientists can still work with the AIM results.

The key is that the range of possible actions for restoring damaged or depleted rangeland is so very narrow. BLM scientists understand that rangeland is a biological system, and that systems—by definition!—are self-organizing and therefore unpredictable. So there are only a very few interventions permitted at all; and all of them are familiar and well understood. Plant this kind of bush here. Plant that kind of ground cover over there. If there is human garbage clogging a freshwater spring, remove the garbage. And so on. 

The list of approved actions is very short. And therefore a complete quantification of all possible soil characteristics is not needed. If the soil is seriously damaged, do this; if mildly damaged, do that; if already thriving, do a third thing—or maybe nothing at all. It turns out that that's enough to cover it.
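In code, the whole decision procedure might look something like the sketch below. To be clear, the ratings and interventions here are stand-ins I invented for illustration, not the actual BLM guidance; the point is only how short the table is, and therefore how little the metric has to resolve.

```python
# Hypothetical mapping from a coarse soil-health rating to a short
# menu of approved interventions. Ratings and actions are invented
# stand-ins, not actual BLM/AIM guidance.
APPROVED_ACTIONS = {
    "severely_damaged": ["plant sagebrush", "plant ground cover",
                         "clear debris from springs"],
    "mildly_damaged":   ["plant ground cover"],
    "thriving":         [],  # or maybe nothing at all
}

def plan_restoration(rating: str) -> list[str]:
    """Map a coarse AIM-style rating onto the permitted interventions.
    The metric only needs to be fine-grained enough to choose among
    these few options."""
    if rating not in APPROVED_ACTIONS:
        raise ValueError(f"unknown rating: {rating!r}")
    return APPROVED_ACTIONS[rating]

print(plan_restoration("mildly_damaged"))  # ['plant ground cover']
```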

As in so many other cases, the first step is to understand what you really need and want. Only then can you set about getting it.     

__________

* Herrick, J.E., Jornada Experimental Range, 2018. Monitoring manual for grassland, shrubland, and savanna ecosystems. USDA-ARS Jornada Experimental Range, Las Cruces, NM.

** Ibid., p.1.

Photo credits: G. Mills, May 2021.      

Thursday, January 2, 2025

Why are you an auditor?—REDUX!

It's the New Year,* and I hope 2025 will be good for us all! In case any readers are still coping with a surfeit of champagne, let's start with something light.

A few months ago, I published a post asking "Why are you an auditor?" I told the story of Amir, who took a job as an auditor in order to learn how to manage a business. His plan was that, once he had learned how to run a business, he was going to quit auditing for entrepreneurship.

After writing this article, I posted notices pointing to it in a number of venues. One of these venues was the myASQ community, where Anthony DeMarinis of AJD Quality Solutions responded with the answers he gives his own students to the same question. I've wanted to share them with you ever since I read them, and this looks like a fine time to do so.

Top 10 Reasons to Become an Auditor

10  Get to see the big picture with exposure to Top Management

 9  Benchmark other areas and promote Out-Of-Box Thinking

 8  Good way to increase your personal knowledge and keep your job

 7  Paid time off and a diversion from your regular job

 6  Acquire transferable skills to prepare for the next layoff

 5  Experience with conflict resolution (which may come in handy at home)

 4  Free dinner at fancy restaurants with the Auditee

 3  Opportunity to network and look for another job

 2  You're not the Auditee and get to hassle someone else for a change

 1  Unlimited POWER!!!

Once again, I wish you all a very Happy New Year! Let's do good things in 2025.

  

__________

* OK, technically that was yesterday. But it's only been 2025 for a scant 32 hours so far.     
