Thursday, May 14, 2026

"Hardening in the field"

Years ago, I worked for a small tech startup. We were scrappy and energetic, and we hadn't quite decided how finished a new product had to be before we could ship it. 

  • Did it have to be bug-free? There were always more bugs. 
  • What if it had no serious bugs? That sounds nice, but what counts as "serious"?

In all these discussions, our head of engineering usually wanted to ship now and not later. (Of course he also saw the financial statements, and knew that we needed the revenue!) His argument was that if the product was basically good enough, then what we needed was for it to operate in a real-world environment so that we could identify which remaining defects really mattered. Then when we fixed those, the product would be ready. Other nominal defects might exist, but they would be merely cosmetic. He called this process "hardening in the field."

"Let's see: eggs, cheese, filling. I guess it's ready to serve!"
Or maybe not.
One of our project managers remarked that shipping a product with the hope that it will get better at the customer site is like a restaurant serving up raw ingredients and then hoping that the meal will get fully cooked once it's at the table. But the idea isn't quite as bad as that. In fact, this line of reasoning is exactly why the tech industry introduced the concept of beta testing. Admittedly it is a dirty trick to ship beta-quality product to a paying customer who expects something finished. But companies frequently do need to see their products operate in a real-world environment, and some customers are so eager for new technology that they will accept the risk that the beta product might fail unpredictably. Once my startup matured enough to establish regular beta programs for our new releases, we stopped talking about "hardening in the field."  

"But wait—this is OK?"
So companies developing new products face competing demands. The need for real-world data pushes them to release sooner; customer expectations about those products may push them to wait until the basics are solid. I assume that nobody will release a beta-version automobile whose brakes don't work yet. (Though I might be wrong about that. See for example the discussion in this post, and the linked news articles.) Likewise most restaurants won't serve uncooked food, unless the customer ordered sashimi or carpaccio. But the high tech market is more confusing, because the expectations conflict.

Rapid innovation is a more or less constant feature of the high tech market landscape. Everybody knows that brand-new implementations of new technology are usually full of bugs; stable, reliable implementations take longer. So what do you do? Partly it depends on the inherent risks of the exact product you are designing. Is it a car or a rocket that can hurt people if it fails? Or is it a toy, where failure will just disappoint them? How easy is it to recover from a failure? And what does the regulatory environment look like? Obviously you have to take account of all these factors.

Beyond those factors, though, you may just have to decide where you want your organization to fit in the ecosystem of high-tech products: do you want to be first to market with innovative technology, or are you willing to trade speed of innovation for product reliability?

And then, if possible, you would like to design your Quality Management System so that it supports your decision—so that it nudges you into being the kind of company you want to be.

If it is important for you to be first to market, you should measure your development process with KPIs that track (among other things) how fast new releases reach the field. Since your initial releases are likely to be buggy, your customer support process should monitor KPIs that track the speed with which customer issues are resolved. You may wish to implement an Agile development model, or offer customers the opportunity to work with you as partners in exchange for providing their feedback as active members of the development process.
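To make the speed-oriented KPIs concrete, here is a minimal sketch of how you might compute two of them: average release lead time and average customer-issue resolution time. The record structure and field names are illustrative assumptions, not any particular tracking tool's API.

```python
from datetime import date

# Hypothetical release and support-ticket records; the field names
# here are illustrative, not taken from any real tracking system.
releases = [
    {"started": date(2026, 1, 5), "shipped": date(2026, 2, 2)},
    {"started": date(2026, 2, 3), "shipped": date(2026, 2, 24)},
]
tickets = [
    {"opened": date(2026, 2, 25), "resolved": date(2026, 2, 26)},
    {"opened": date(2026, 3, 1), "resolved": date(2026, 3, 5)},
]

def avg_days(records, start_key, end_key):
    """Average elapsed days between two dated events."""
    spans = [(r[end_key] - r[start_key]).days for r in records]
    return sum(spans) / len(spans)

# Speed-to-field KPI: how quickly releases reach customers.
release_lead_time = avg_days(releases, "started", "shipped")
# Support-speed KPI: how quickly reported issues get resolved.
mean_resolution_time = avg_days(tickets, "opened", "resolved")

print(release_lead_time, mean_resolution_time)  # 24.5 2.5
```

In a fast-shipping organization, the goal would be to drive both of these numbers down over time, accepting that some bugs will surface in the field.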

Conversely, if it is more important to you that your products be fully reliable before they reach a customer, then you should not measure speed of delivery as one of your development KPIs. What you measure is what you optimize; if you are willing to sacrifice speed for reliability, don’t measure speed. In this case, you are more likely to set metrics around the extent and comprehensiveness of testing, and the number of known bugs at time of release. You might also choose to use a waterfall development model (instead of an Agile one) so that testing is done on one version at a time, thus reducing the number of variables in the development process and, presumably, some of the risk.
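A reliability-first process might express those metrics as a release gate. The sketch below is hypothetical: the threshold values and the function's parameters are assumptions chosen for illustration, and notably there is no speed metric anywhere in it.

```python
# Hypothetical release-readiness gate for a reliability-first process.
# Thresholds and parameters are illustrative assumptions.

def ready_to_release(tests_passed, tests_total, known_bugs,
                     min_pass_rate=0.98, max_known_bugs=0):
    """Gate a release on test pass rate and open known bugs.
    Deliberately measures nothing about speed of delivery."""
    pass_rate = tests_passed / tests_total
    return pass_rate >= min_pass_rate and known_bugs <= max_known_bugs

print(ready_to_release(990, 1000, 3))  # False: pass rate ok, but 3 known bugs
print(ready_to_release(995, 1000, 0))  # True: clears both thresholds
```

The point of encoding the gate this way is the one the paragraph makes: the KPIs you build into the process determine what the organization optimizes for.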

It's interesting to realize that "Quality" doesn't always mean the same thing—or rather, that it can mean two different things (in this case both speed of innovation and reliability of performance) which are incompatible, and which you have to choose between. And that single choice can have ripple effects across your metrics, your processes, and your strategy.


