Thursday, October 30, 2025

Quality in the checkout line

Writing this column once a week, I find myself looking for Quality aspects in many of the things I do each day. From one perspective, this makes sense: I have argued, after all, that Quality isn't really a set of rules or procedures (though it uses both those things), but rather is an awareness of what it takes here and now to do this task correctly. If that's the case, then Quality might just be applicable to anything you do.

Anything? Pretty close. 

The other evening I went out to get groceries. Everything was fine until I got home, when I found that they had failed to scan a dozen eggs and a package of butter. In effect, they undercharged me by $9.98. It was too late to go back that night, but I went back the next morning to tell them about the mistake and to make up the difference. They thanked me for letting them know and said they would update their inventory; other than that, they sent me on my way: "Call it our gift to you."

What does this story have to do with Quality? I see two principles worth remembering.

Don't disrupt a working process

Why did the mistake happen in the first place? The checker was in the middle of scanning my groceries when a second checker came up and offered to help. I think the second checker moved things around in the cart; the first checker then concluded that since the eggs and butter were here instead of there, they must already have been scanned. It's a simple mistake to make, but the point is important: if somebody is in the middle of a procedure and it's working for them, don't interrupt without making very sure of what you are doing. It's easy to throw them off their count, or mix up their stacks, and then the whole procedure can be ruined.

Of course if you are using formal, written procedures and interacting with machines that require fixed inputs, it's harder to get mixed up that way. But often you aren't, especially in small or medium-sized enterprises, or in service functions (as distinct from manufacturing). 

Keep corrective actions proportional to the problem

When I went back the next day, the store thanked me for my report but didn't take my money. Of course I appreciated their kindness, but they aren't in business to be kind. How can they afford this?

The answer is that it doesn't happen often, so they had no procedure or mechanism to make the correction. What's more, any method they could have improvised to take my money would have cost them more than $9.98 to implement. It was—quite literally—not worth it to them to correct this particular shortfall.

Sometimes that happens, and it's good to recognize when it does. Focus your work where it matters.
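Just to make the arithmetic concrete, here is a minimal sketch of the trade-off. Apart from the $9.98, every figure is invented for illustration; the point is only that an improvised correction can easily cost more than the loss it corrects.

```python
# Back-of-the-envelope check: is a one-off correction worth improvising?
# All figures except the shortfall are invented for illustration.

shortfall = 9.98                 # dollars the store lost on the unscanned items
staff_cost_per_hour = 25.00      # rough loaded labor cost per person
people_involved = 2              # say, a cashier and a manager
minutes_to_improvise = 15        # ringing up a manual sale, adjusting inventory, etc.

cost_of_fix = people_involved * staff_cost_per_hour * (minutes_to_improvise / 60)

if cost_of_fix > shortfall:
    print(f"Correction costs ${cost_of_fix:.2f} to recover ${shortfall:.2f}: let it go.")
else:
    print(f"Correction costs ${cost_of_fix:.2f} to recover ${shortfall:.2f}: worth doing.")
```

With these made-up numbers the correction costs $12.50 to recover $9.98, which is roughly the calculation the store made on the spot.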


So yes, there are Quality aspects even to a simple mistake in the checkout line. The awareness of the task—including awareness of what might go wrong, how to avoid it, and when not to bother—is always the most important part.  

    

Thursday, October 23, 2025

Hierarchy of hazard controls

When I find that I've interrupted myself—twice in a row!—to make a disclaimer that's not part of the main post, maybe I need to pay attention. Maybe it's time to discuss the topic on its own, to get it settled, rather than pushing it off into footnotes.

My last two posts—last week and three weeks ago—were about how to use written procedures. In both articles, I explained that written procedures should be regularly enhanced with the lessons learned from mistakes or disasters, so that the organization learns from those mistakes and doesn't repeat them. And both times I had to include a little caveat, to the effect that updating procedures is often not the best way to prevent safety problems.

Why did I bother to say this—especially twice? Also, what is the best way to prevent safety problems?

For the first question: I bothered to say it because updating procedures is probably the easiest way to address safety problems. Typically it costs less than any other approach, and it usually takes less time. But it is also one of the least effective ways to address safety problems, because people forget what they read, or decide to ignore it, or never get around to reading it in the first place.

For the second question ... well, it depends. Classically there are five options, but not all of them are available in every case, so you have to see what you can do in each specific situation.

Image: the hierarchy of hazard controls. Original version by NIOSH; vector version by Michael Pittman. https://commons.wikimedia.org/w/index.php?curid=90190143

Elimination 

The most effective way to control a hazard is to eliminate it completely, but this isn't always possible. If your workplace has extension cords stretched across walking areas, those constitute a trip hazard. Get rid of the extension cords, perhaps by installing power outlets where you need them or by rearranging your workstations, and you have eliminated the trip hazard. If some work is being done high above the ground, there is a falling hazard. If you can relocate the work to ground level, you have eliminated the falling hazard. Again, this is the most effective approach—the hazard is gone, after all!—but sometimes it is not practical.

Substitution

The next-most-effective approach is to substitute something less dangerous for the original hazard. A common use case for substitution involves hazardous chemicals, because sometimes a less hazardous chemical will do the same job. Some operations have replaced the solvent benzene, a carcinogen, with toluene; others have replaced lead-based solder with lead-free solder. These substitutions generally cannot be made overnight: lead-free solder melts at a different temperature than the lead-based original, so converting a printed circuit board to lead-free solder requires sourcing new components and reworking the board layout. Still, it can be done.

Engineering controls

Engineering controls do not remove the hazard; they isolate it. The simplest examples are a guard rail or shielded enclosure that keeps fingers out of machinery, or a ventilation hood that keeps people from breathing noxious gases. Lockout-tagout mechanisms serve a similar purpose by ensuring that a machine cannot be serviced until it has been powered off and disconnected. In all these cases the hazard still exists, so if someone went out of his way to override the engineering controls there is a theoretical chance he could be injured. But he would have to go out of his way. In normal operation, engineering controls should keep people from getting hurt.

Administrative controls

This is where we talk about updating your procedures! Administrative controls are all the measures that rely on telling people not to do things that can hurt them: they include written procedures, but also training, signs, and warning labels. Other administrative controls could include job rotation or work schedules, to reduce the exposure of each individual worker to a certain hazard; preventive maintenance programs, so that the equipment functions properly; scheduling certain tasks during off-peak hours, when fewer workers are present; or restricting access to hazardous areas. All of these measures are important, and they certainly have a place alongside more effective measures. It may also happen, because of special circumstances at your workplace, that sometimes these are the best you can do. But they all rely on human compliance. And as we have seen, human compliance is not always reliable. That's why administrative controls rank so low on the effectiveness scale.

Personal protective equipment (PPE)

Finally, sometimes you just have to walk in and grab the hazard in both hands. After analyzing it every possible way, you find that you can't eliminate the hazard and can't substitute anything safer for it; and because the work requires direct human action at that point, engineering and administrative controls are beside the point (both of those are designed to keep you away from the hazard). Fair enough. Do what you have to do. But at least wear gloves. Or a breathing filter. Or a hazmat suit. Or whatever the right PPE is for this particular hazard. PPE is rated as the least effective form of hazard abatement, because the only time you use it is when you are getting up close and personal with the hazard itself. But sometimes that's what you've got to do, and PPE is just what you need.

Once upon a time, years ago, I was talking to the management team for a mine. (They were mining diatomaceous earth, not coal or gold, but I bet the principles are the same.) I asked them if their employees tended to suffer from emphysema or other lung ailments. They said that back before the 1950s, yes, that was a big problem. But in the 1950s someone invented a breathing filter that screened out the tiny particles of diatomaceous earth and other rock products, and after that they'd never had any trouble. I asked about enforcement, and they said:

"Oh, that's easy. We painted a white stripe across the road into the mine. Then we announced that anybody who was found on the other side of the stripe without his breathing filter in place and working would be fired. On the spot. No questions asked. No excuses. No matter who.

"And you know? We haven't had a single problem since then."* 

PPE may be ranked as "least effective," but sometimes it's exactly what you need.
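If it helps to see the hierarchy as a structure rather than prose, here is a minimal sketch in Python. The ranking is the standard one; the example hazards, and the question of which controls are "feasible" for each, are invented for illustration.

```python
# A minimal sketch of the hierarchy of hazard controls: rank the control types
# from most to least effective, then pick the most effective one that is
# actually feasible for a given hazard. The hazards below are invented examples.

HIERARCHY = [
    "elimination",
    "substitution",
    "engineering controls",
    "administrative controls",
    "PPE",
]

def best_feasible_control(feasible):
    """Return the most effective control type that is feasible for this hazard."""
    for control in HIERARCHY:
        if control in feasible:
            return control
    return None  # nothing feasible: rethink whether the task should be done at all

hazards = {
    "extension cords across walkways": {"elimination", "administrative controls"},
    "benzene used as a cleaning solvent": {"substitution", "administrative controls", "PPE"},
    "hands-on work with a corrosive chemical": {"PPE"},
}

for hazard, feasible in hazards.items():
    print(f"{hazard}: {best_feasible_control(feasible)}")
```

The logic is nothing more than "work down the list and take the first option you can actually use"—which is the whole hierarchy in one sentence.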



Anyway, that's the hierarchy of hazard controls. That's what's behind the little disclaimers in my last two articles. I hope it helps.

__________

* Technically this means they used PPE, reinforced by administrative controls (the white stripe).

       

Thursday, October 16, 2025

Chesterton's fence

For the last couple of weeks (well, with one brief exception) we've been talking about written procedures: how they help avoid failure, and how to use them to capture the right lessons in case failure comes anyway. Specifically, I argued two weeks ago that when something goes badly wrong with one of your processes, it's good to analyze the failure to find a root cause; then, if the root cause was that someone acted a certain way, update your procedure so that he won't do the same thing next time.*   

But wait—what if you inherit a procedure, instead of writing it yourself? I spent a lot of my career working for small companies acquired by large ones, so that's the case I have in mind. The Home Office says to follow a procedure, but that procedure calls out forms you've never seen, and involves roles you've never heard of. 

Let's make this concrete. The Whizzbang Project is running late, but finally they think they can start testing. The team has met for a review. You have the official Test Readiness questionnaire from headquarters. The first few questions are easy. Then suddenly you read:

Question 17. Has the Whitzinframmer Report been duly refrangulated by the Junior Executive Pooh-Bah?

What are you supposed to do with that? Your office doesn't use that report. In fact you've never seen one. And the nearest person executing that role is across the ocean. Everyone in the meeting is staring at you. Now what?

The temptation is enormous just to skip it. But after all the discussion two weeks and three weeks ago about "procedures written in blood," you know that's not the best answer. On the other hand, you can't answer it as written. What you need to find out is: what risk was this question written to avoid?

The key is that there aren't that many different ways to manage a project, or to fly a plane. Project managers around the world face exactly the same risks, and mostly use the same pool of solutions. Pilots around the world face the same laws of physics to keep their airplanes aloft. I guarantee that if modern project managers and civil engineers could sit down with the people who built the Pyramids, they'd be fast friends before they ran out of beer.**

So when you call somebody at the Home Office to ask about the Whitzinframmer Report,*** you don't need to reproduce every single field. But make sure you understand its purpose. Once you get past the window-dressing, it's sure to be a tool they use in the Home Office to handle some very normal project management risk. Getting that report "duly refrangulated" is how they check that you have enough budget for the next phase of the project ... or maybe it verifies that the test equipment is all working correctly, or something like that. In any event it will be something very normal. Then, instead of asking the question literally, as written, ask whether the risk has been addressed.

This means you say, "Question 17. Do we know if all our test equipment works?"
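If you keep a crib sheet for inherited questions like this, it doesn't need to be elaborate. Here's a sketch of the kind of mapping I mean; the entries (including the second question) are invented, in the same spirit as the fictional example above.

```python
# A crib sheet for inherited checklist questions: what each question is really
# guarding against, and how we check the same risk locally. Entries are invented.

checklist_translations = {
    "Q17: Has the Whitzinframmer Report been duly refrangulated?": {
        "underlying_risk": "starting tests with unverified test equipment",
        "ask_instead": "Do we know if all our test equipment works?",
    },
    "Q23: Has the Phase Budget Form been countersigned?": {
        "underlying_risk": "entering the next phase without enough budget",
        "ask_instead": "Does the remaining budget cover the test phase, with contingency?",
    },
}

for question, info in checklist_translations.items():
    print(question)
    print("  risk:        " + info["underlying_risk"])
    print("  ask instead: " + info["ask_instead"])
```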

As a quick aside, I am not a pilot. If you are flying an unfamiliar plane, and if you find that you don't understand some of the instructions in the flight manual, I do not advise you to substitute free interpretations instead. The laws of physics are unforgiving. Also, it is a consistent theme in this blog that your level of effort should be proportional to the risk you face, and flying an unfamiliar plane involves a lot of risk. So it is worth the effort to know what you are doing.

But in more forgiving environments, there is more latitude to apply procedures in ways that make them useful. And the key is always to understand that the procedure itself is a tool for minimizing risk. So if you find that the procedure cannot be implemented as written, make sure you understand the risk that has to be managed. If you can neutralize the risk, that's ultimately the goal you are trying to achieve anyway.   

By the way, the approach that I recommend here is a special case of a principle called Chesterton's fence. Briefly, the idea is that if you find someone has put up a fence in an unlikely place, and you can't for the life of you think why, don't tear it down! They must have had a reason. It might have been a bad reason, or the reason might no longer apply. But until you know what the reason was, you had better leave the fence in place. "Written in blood" is a more dramatic way to say it, but the idea is the same.****



__________

* The current article is mostly about procedures and not safety, but note that procedural controls are not always the best way to address safety problems. I'll talk about this more next week. 

** The ancient Egyptians did brew beer, and each worker on the Pyramids got a daily ration of four to five liters, for both nutrition and refreshment. See Wikipedia, "History of beer" for more information. 

*** You should do this before the meeting!  

**** The full description of this principle comes from the author G. K. Chesterton, and is much more colorful: "In the matter of reforming things, as distinct from deforming them, there is one plain and simple principle; a principle which will probably be called a paradox. There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, 'I don’t see the use of this; let us clear it away.' To which the more intelligent type of reformer will do well to answer: 'If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.'" From G. K. Chesterton, The Thing (London: Sheed & Ward, 1946), p. 29.

      

Thursday, October 9, 2025

Podcast with Quality Magazine!

We've been talking lately about how formal processes can avoid catastrophic mistakes and I've got more to say on the subject. But this is a timely interruption. 

A while ago, I sat down with Michelle Bangert of Quality Magazine, after they published my article about the Seven Quality Management Principles. Originally, we were just going to talk about the article itself, and maybe recap it for people who prefer podcasts to blog posts. But the conversation unwound itself according to its own internal rules, the way any good conversation does. After forty minutes we had discussed at least a dozen topics; in some ways it felt like we had been talking all day, and in other ways it felt like we were just beginning to scratch the surface. Among other things, our conversation touched on topics like the following:

  • How I came to write the article on the Seven Quality Management Principles.
  • When to expect the upcoming changes to ISO 9000 and ISO 9001.
  • How blogging is different from kvetching.
  • How to use blogging as a branding tool.
  • Why I am delighted when people argue with things I've written. 
    • (As a bonus, I describe two different times I've had to retract something I'd written because feedback from readers showed me I was wrong: I mean here and here.)
  • How lessons from parenting also apply to Quality.
  • How my career in Quality started, and why you shouldn't imitate me.
  • Career highlights and stories for audit nerds.
  • Comparing Stuttgart with Santa Barbara as wine countries.
  • The hidden message of German architecture.
  • Why do I anonymize my stories, and when do I not?
  • Where is the other column that I write, and what is it about?
  • What is the difference between rules in young/small organizations, and rules in old/large ones?

Anyway, two days ago the podcast was published. You can find it here. (Here is an alternate link.)

So take a listen, and let me know what you think. 

  • If you think I'm wrong about anything (or everything!), please let me know: like I say above, I'm always thrilled when someone argues with me.
  • And if you like it, contact Michelle Bangert at Quality Magazine to ask her to have me on again! 😀



    

Thursday, October 2, 2025

Procedures written in blood

Last week I wrote about the Challenger disaster, and about how to avoid the "normalization of deviance" that made it possible. One of the critical topics was to stick to the defined procedures, and I quoted the Air Force maxim that "The flight manual is written in blood." In other words, many of the flight regulations were created only after someone did something else one day, ... and then crashed.

Stories like these are a gruesome way to make the point, but wrapped inside this advice is an important principle on how to write and manage formal procedures:

  • If something goes wrong—and especially if somebody gets hurt—analyze the accident to find the root cause.
  • Then if the root cause is something that could have been avoided if only the agent or operator had acted differently, update the written procedure to require future operators to do the safe thing. 

Way back in the first year of this blog, I wrote a post about how to write procedure documents, which alluded to this issue but didn't go into details. What I said at the time was just, "If something is a safety guideline, spell it out." What I neglected to say was that often you learn the relevant safety guidelines by studying accidents and figuring out how to avoid them next time.

What is more, this advice isn't limited to safety risks. Any time you see a predictable failure mode that can be avoided by taking preventive action ahead of time, you should consider writing it into your procedures. Do you remember back when I wrote that all of Quality is built on the practice of Lessons Learned analysis? This is what I meant.

Don't go crazy, of course. Sometimes the risk is negligible, and it would take a lot of work to prevent it; in a case like that, maybe it's better to accept the risk and get on with things. But when the risk is substantial or even lethal, updating your procedures is a small price to pay for prevention.

I once worked in an office where we developed a checklist like this very organically. We were a small office that had recently been acquired by a much larger company, and the larger company had insisted we implement stage gate questionnaires to monitor and control our product development process. (I explain project stage gates in this post and this one.) But our administrative and IT landscapes were different from those in the home office, so we used some forms they didn't have, and vice versa. To account for our local forms, I created a local questionnaire with three or four questions on it.

To my surprise, the local questionnaire caught on. One of our projects did something ill-advised that set them months behind and wasted a bunch of money; we called a Lessons Learned meeting to figure out what went wrong. One of the outputs was that the Project Manager had failed to check for this-or-that condition at an early stage of the project. The PM's answer was, "How was I supposed to know we needed that?" And right away another team member said, "It's crazy that we forgot to check for that! Michael, can you put that on your checklist—that the Project Manager has to check for this point at that stage-gate review?"

Sure, I could do that. And over the years, the checklist grew.        
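A checklist like that needs almost no infrastructure. Ours was just a questionnaire, but if I were sketching the idea as data today it might look something like this; every entry below is an invented example, and the only point is the structure.

```python
# A sketch of a local stage-gate checklist that grows out of Lessons Learned
# meetings. Every entry is an invented example; the point is the structure:
# each question records the gate it applies to and the lesson that prompted it.

local_checklist = [
    {
        "gate": "Test Readiness Review",
        "question": "Have the local IT request forms been submitted and approved?",
        "added_because": "original local questionnaire (our forms, not headquarters')",
    },
    {
        "gate": "Design Review",
        "question": "Has the PM confirmed availability of long-lead parts?",
        "added_because": "Lessons Learned: a missed early check set one project back months",
    },
]

def questions_for_gate(gate):
    """Return the questions the Project Manager must answer at a given stage gate."""
    return [item["question"] for item in local_checklist if item["gate"] == gate]

print(questions_for_gate("Design Review"))
```

Recording why each question was added is the part that matters most: it keeps the checklist from turning into a pile of fences nobody remembers the reason for.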

To be clear, updating procedures isn't the only way to prevent accidents. Depending on the risk, sometimes it's not the most effective. If you need to keep people from sticking their fingers into a dangerous machine while it's running, you'll have more success by installing a guard rail or a plastic shield than by writing a procedure that says "Don't stick your fingers in the machine."

But for other operations—flying an airplane, say, or managing a project—we depend on human action. And in those cases, regularly updated procedures are invaluable as a way to learn from the mistakes of the past. As one humorist wrote, "It's a wise man who profits by his own experience, but it's a good deal wiser one who lets the rattlesnake bite the other fellow."


      

Thursday, September 25, 2025

Normalization of deviance

A year and a half ago, I wrote about disasters—and about how hard it can be to see them coming. I made the point that when we analyze a disaster retrospectively, we are likely to be led astray because we know how it's all going to turn out. Because of Hindsight Bias, in particular, we think it should have been obvious to everyone that a disaster was imminent, when in reality it might not have been clear at all. It is important to remember this bias when we try to understand a disaster, so we can look at events with the eyes of those who participated in them, to derive working lessons for the future.

But not all disasters are like this. Sometimes the risk of a disaster really is obvious to the people involved at the time, according to data they already have—they see the data, they understand the risk, and then somebody decides just to go ahead and do it anyway.


The Challenger disaster

"On January 28, 1986, Space Shuttle Challenger broke apart 73 seconds into its flight, killing all seven crew members aboard. The spacecraft disintegrated 46,000 feet (14 km) above the Atlantic Ocean, off the coast of Cape Canaveral, Florida, at 16:39:13 UTC (11:39:13 a.m. EST, local time at the launch site). It was the first fatal accident involving an American spacecraft while in flight."*

What happened? It was a cold day, and the rubber O-rings that sealed a joint in the right Space Shuttle Solid Rocket Booster were stiff. So they didn't seal the joint adequately. Shortly after liftoff, gases from within the rocket booster leaked out and started to burn through the larger structure. Now, the space shuttle and all its launch framework were made out of steel—of course. But the rocket boosters burned at 5600℉; and at that temperature, steel boils!** (Not melts: boils.) So naturally the whole assembly burst apart.

But how much of this could we have predicted ahead of time? It turns out the answer is, All of it. Recently I ran across a lecture on YouTube that breaks it down.*** (This lecture is saved in four parts, of which the first two discuss the Challenger disaster from a Quality perspective. The other two parts give valuable advice for managing your career and your life, but I won't focus on them here. You can scroll to the bottom of this post to find links to the lecture itself.)

In summary, the speaker (Mike Mullane) explains the sequence of events. 

  • During the initial design reviews, the O-rings were designated as "Criticality 1," meaning that a failure could entail the destruction of the vehicle and the loss of life. "Criticality 1" also meant that any damage to the O-rings constituted adequate cause to abort the mission and redesign the shuttle. 
  • Sure enough, after the shuttle's second flight (years before Challenger), the team recovered the parts and detected damage on the O-rings. 
  • But for this and that reason the team decided to go ahead with a third launch, and the third flight was fine.
  • In future flights, sometimes the O-rings were damaged and sometimes they weren't.
  • After-action reports regularly called out the risk posed by damage to the O-rings. Multiple memos, over a period of two years or more, described the O-ring issue as "urgent." 
  • But each flight was successful. So the project got the idea that the O-ring problem wasn't that big a deal. Every time the issue was raised, it was granted a standing waiver.
  • Until, of course, one day it was a big deal after all ....  

What is the normalization of deviance?

Mullane explains that the "normalization of deviance" stems from nothing more than the natural human tendency to take shortcuts under pressure. We know what the "right" way to do a job is, and when we are relaxed we are happy to follow it. But then time runs short, or money runs short, or something else happens—it could be anything, really—and we get under pressure. So we take a shortcut, to make the job easier.

And most of the time, after we take that shortcut ... nothing happens! The job gets finished with no problem. So the next time we are under pressure, we remember that shortcut and do it again. And then again. Pretty soon, the "shortcut" has become the normal way of working. The "deviance" (a deviation from the defined and approved method) has become "normalized."

We've seen this before. Last year, when I was writing about Boeing, I explained how their cost-cutting drive led them to gut what used to be a robust safety management system. One of the factors at work was exactly this dynamic. I wrote:

They [Boeing management] found, empirically, that they could eliminate one Quality inspection, save a few dollars, and no planes fell out of the sky. OK, good. How about eliminating two inspections? Three? Four? Where do we stop? You can see how, in the absence of visible negative feedback (like an increased accident rate), this could get out of hand quickly.

That's what happened with Challenger. Word for word.

How do you protect against it?

Fine, how do we avoid this?

The short answer is almost too simple: Don't do that! But that sounds obvious, and yet this dynamic continues to afflict people every single day. So really, what do we do?

Mullane lists four points that he thinks are critical:

  1. Recognize your vulnerability: Everybody thinks, It won't happen to me. I know all about this problem, so that makes me immune. I watched a video on YouTube. I read a blog post in Pragmatic Quality. I know better than to fall into this trap. Nice try. But the other people, those ones who did fall into this trap? They were plenty smart too. All of them "knew better." But when they felt pressured, their brains reacted automatically. It can happen to you too, exactly the same way. So watch for it.
  2. Execute to meet standards: This is the core of it. Plan the work, and then work the plan. Mullane explains the Air Force has a saying, "The flight manual is written in blood." In other words, every instruction in the flight manual was put there because one day somebody did something different and it turned out badly. Don't let the next one be you. If the manual says, "Abort the mission when the red light flashes," and then the red light flashes, ... abort the mission. Simple as that. 
  3. Trust your instincts: Mullane makes a big point of saying that we often know more than we understand consciously, and that our instincts are there to keep us alive. So if something just feels ... off, somehow ... wrong, but you can't put your finger on quite why ... trust that feeling. Probably the thing really is wrong, and at some level you even know why. It just hasn't percolated up into your consciousness yet, but it will.
  4. Archive and review near-misses and disasters: Learn from other people's experience, so you don't have to go through the same thing. Look at the disasters—or the near-misses, where things came out fine but almost didn't—that your own team has experienced. But then try to find out about other teams as well. Look for the big disasters (or near-misses) in your industry, the ones that make the news. Read everything you can, and then flow down to your team what you have learned.

And then, if we do those four things, are we home free?

I'm pretty sure nobody can promise that. But if you do these things you'll be miles ahead. And you will have reduced the odds of normalizing deviance as far as you can.

If you want more details, Mullane's lecture is a good one.

Mike Mullane's lecture, part 1/4: What is normalization of deviance?


Mike Mullane's lecture, part 2/4: How do you protect against normalization of deviance?


Mike Mullane's lecture, part 3/4: Responsibility: https://www.youtube.com/watch?v=Wuk_DoX-rz8

Mike Mullane's lecture, part 4/4: Courageous self-leadership: https://www.youtube.com/watch?v=DABsxJtNcYg

__________

* Quoted from Wikipedia, "Space Shuttle Challenger disaster." I have used this article for basic information about the disaster.  

** For specifics see this flyer from Northrop-Grumman on the Five-Segment Booster, especially the "Booster facts" on page 1. 

*** The lecture was posted to YouTube about ten years ago, but I don't know when it was given. The speaker is Mike Mullane (website, Wikipedia), an engineer, weapon systems officer, retired USAF officer, and former astronaut. He was talking to the International Association of Firefighters (IAFF) about the "Normalization of deviance."      

        

Thursday, September 18, 2025

Why logistics matter

If you are in the business of making and selling things—I mean physical objects, like shoes or handbags or computers or cars—what part of your organization needs Quality? We all know that we are supposed to say "All of it," but in practice where does the attention go? I spent most of my career working with design engineers, so I know there's a lot of Quality attention on design. And many of the basic Quality tools were first developed in the manufacturing environment, so clearly there's a focus on manufacturing. But after you've designed and built the product, what's left? Toss it in a box and call UPS? How hard can that be?

Not so fast.


Last week, on September 9 at about 8:45 am, the container ship Mississippi, sailing under a Portuguese flag, docked at Pier G of the Port of Long Beach, two weeks after departing the port of Yantian in Shenzhen, China.* Everything seemed fine until the crew started to release the straps holding the containers down. But at that point some of the containers began to slide, crashing into others like a row of dominoes and falling into the water. No injuries were reported at the time, though the next day one worker reported a sprained ankle. In all, sixty-seven containers fell into the water or onto the dock.

So far, I have not been able to find any story that identifies a root cause for the failure. But it might have been something very small. I can imagine that one container wasn't aligned quite right, or that a piece of debris kept it from settling snugly into position. Then the containers stacked atop it would have been similarly out of kilter. I'm certain that the port where the ship was loaded has strict procedures to prevent misalignment of containers; but I also know that when the forces are that large—each of these containers weighs from two to four metric tons even when empty—it doesn't take much. The slightest mismatch or error can bring about catastrophic collapse.

And the consequences are out of all proportion to what must have been a small, subtle root cause.** 

  • Sixty-seven containers fell into the water or on the deck. Presumably the goods inside those containers—goods bound for retail stores across America—are all ruined. 
  • But the ship isn't empty. There are still plenty of other containers on-board, only many of them are now leaning at a funny angle so that they can't be offloaded with the normal equipment. 
  • The Coast Guard has secured a 500-yard safety zone around the Mississippi, so that other ships don't collide with it or with any of the floating containers. 
  • And Pier G can't be used for any new vessels as long as the Mississippi is docked there. How long will that be? Officials say it could take weeks to finish clearing up the site. So this accident has a follow-on effect on the operation of all Long Beach Harbor.

Just for perspective, Long Beach Harbor is one of the nation's busiest. Forty percent of all shipping containers that arrive in the United States travel through either Long Beach or the immediately adjacent Port of Los Angeles (in San Pedro). Disrupting its scheduled operations even partially will trigger delays that cascade far downstream.

So yes, Quality matters just as much for your logistics as for any other part of the operation—especially now, when supply chains reach around the world. After all, the products you make won't do much good if you can't get them to your customers. And even tiny errors can cost you dearly.

YouTube has multiple videos with news of the disaster. Here's one, as an example:  

__________

* I used the following news articles as source material for this post:

** I say the cause "must have been" small because otherwise somebody would have caught it and corrected it!

            
