Thursday, July 31, 2025

Does AI have a Quality problem?

We've all seen articles about the incredible power and potential of Artificial Intelligence (AI). Whole industries are being restructured to make use of AI's capabilities. One article from last year—that I chose almost at random—lists the use cases for Large Language Models (LLMs) as follows:

  • Coding: "LLMs are employed in coding tasks, where they assist developers by generating code snippets or providing explanations for programming concepts."
  • Content generation: "They excel in creative writing and automated content creation. LLMs can produce human-like text for various purposes, from generating news articles to crafting marketing copy." [Gosh, are they any good at writing niche products like Quality blogs? 😀 Just kidding, of course!]
  • Content summarization: "LLMs excel in summarizing lengthy text content, extracting key information, and providing concise summaries."
  • Language translation: "LLMs have a pivotal role in machine translation. They can break down language barriers by providing more accurate and context-aware translations between languages."
  • Information retrieval: "LLMs are indispensable for information retrieval tasks. They can swiftly sift through extensive text corpora to retrieve relevant information, making them vital for search engines and recommendation systems."

And so on. The article lists eight more use cases before summarizing with a list of half a dozen general benefits of LLMs. (I found myself wanting to ask if the author has an LLM in the family, perhaps as a favorite cousin or an in-law.) In short, LLMs can do quite a lot.

But LLMs hallucinate! 

We are discovering, though, that it is not safe to rely on LLMs for an accurate description of what is out there. When LLMs summarize content or retrieve information, sometimes they report things that aren't true. The first time I saw such a story, it was in this post from LinkedIn back in 2023, where Marcus Hutchins posted a conversation he had with the Bing AI chatbot. The bot claimed that it was still 2022, insisting "I know the date because I have access to the Internet and the World Clock"—even though it was verifiably already 2023!

Then more stories started rolling in. To my mind the most dramatic has been the recent legal case SHAHID v. ESAAM (2025), Docket No: A25A0196, decided on June 30, 2025 by the Court of Appeals of Georgia. The summary description of this case makes for delightful reading, and I enclose a selection below in an extended footnote.* But the gist is that one party's pleading must have been generated by an LLM tool. No human lawyer could have written it. The pleading rested almost entirely on bogus case law: either cases that never happened, or cases that had no relation to the point at stake. This is the kind of mistake that junior paralegals get fired for. Even worse, the initial trial court accepted it without blinking. The bogus citations were caught only by the Court of Appeals.

So employing LLMs comes with a risk. You can't just blindly trust whatever they tell you without cross-checking it, because they fabricate content so effortlessly. Looking back at that list of use cases at the top of this post, I have to qualify the claim that you can use them in writing or summarizing: maybe LLMs can suggest an interesting idea you didn't think of before, but they can't do your work for you. Some people, though (like Sufyan Esaam's attorney) want to use them for just that.

It's a problem.

What about Quality?

But is it a Quality problem? Here the answer is not so clear, because it depends on how exactly you define Quality. You remember that I prefer to define Quality as "getting what you want"; and in that sense—especially if "you" means the end-user of the AI tools—AI hallucinations constitute a big Quality problem. When AI hallucinates a false answer to my question, I'm not getting what I want.

But there is another definition, which says that "Quality is conformance to requirements." And with that definition the situation is rather different ... because the LLM programs are doing exactly what they have been told to do! 

Jason Bell of Digitalis.io made this argument in a recent LinkedIn post. The point is that the LLM tool is not programmed to see what's really there. It is not programmed to perceive reality, and it is not programmed to tell the truth. Its only programming is to say something that sounds good, subject to certain parameters that define what it takes for something to "sound good." But perceiving reality and telling the truth are never part of that definition, because AI has no mechanism or equipment to allow it to carry out those tasks.
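
To see why, it helps to picture, in deliberately toy form, what a language model does at each step: it picks a next word in proportion to how plausible that word sounds in context. The sketch below is not how any real model is implemented (the vocabulary and probabilities here are invented for illustration), but it shows the key point: nothing in the loop ever consults a clock, a calendar, or any other source of ground truth.

```python
import random

# A toy "model": for one context, a made-up probability for each possible next word.
# A real LLM learns billions of such weights from text, but the selection principle
# is the same: words that sound more plausible are more likely to be chosen.
TOY_MODEL = {
    "The current year is": {"2022": 0.6, "2023": 0.3, "1999": 0.1},
}

def next_word(context: str) -> str:
    """Pick the next word by plausibility alone.

    Note what is missing: no call to a clock, a calendar, or any source of
    ground truth. "Sounding good" is the only criterion.
    """
    dist = TOY_MODEL[context]
    return random.choices(list(dist), weights=list(dist.values()), k=1)[0]

print("The current year is", next_word("The current year is"))
```

Most of the time the plausible answer and the true answer happen to coincide, which is exactly what makes the failures so easy to miss.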

In a sense, then, the problem is not with AI itself, but with user expectations. It's like if I use a hammer to comb my hair: the results are pretty sketchy, but it's not the hammer's fault.  

Of course I have no idea how long AI will be a big deal, or how much of an impact it will have on our work and our lives. But as long as it is here, it will be useful for us to be clear on its capabilities and limitations, so that we can distinguish reality from science fiction. In its current form, AI has no cognitive component, and therefore cannot observe reality or distinguish truth from falsehood. But it is very good at sifting through piles of words according to defined rules.

And honestly? That's enough for now. Let's use it for the things it really can do, and not try to make it comb our hair or understand the world. After all, if AI ever develops a cognitive component, that addition will doubtless bring new problems of its own. 

HAL 9000, of course.

__________

[Emphasis is mine, in all cases.] 

After the trial court entered a final judgment and decree of divorce, Nimat Shahid (“Wife”) filed a petition to reopen the case and set aside the final judgment, arguing that service by publication was improper. The trial court denied the motion, using an order that relied upon non-existent case law. For the reasons discussed below, we vacate the order and remand for the trial court to hold a new hearing on Wife's petition. We also levy a frivolous motion penalty against Diana Lynch, the attorney for Appellee Sufyan Esaam (“Husband”)....

Wife points out in her brief that the trial court relied on two fictitious cases in its order denying her petition, and she argues that the order is therefore, “void on its face.”

In his Appellee's Brief, Husband does not respond to Wife's assertion that the trial court's order relied on bogus case law. Husband's attorney, Diana Lynch, relies on four cases in this division, two of which appear to be fictitious, possibly “hallucinations” made up by generative-artificial intelligence (“AI”), and the other two have nothing to do with the proposition stated in the Brief.

Undeterred by Wife's argument that the order (which appears to have been prepared by Husband's attorney, Diana Lynch) is “void on its face” because it relies on two non-existent cases, Husband cites to 11 additional cites in response that are either hallucinated or have nothing to do with the propositions for which they are cited. Appellee's Brief further adds insult to injury by requesting “Attorney's Fees on Appeal” and supports this “request” with one of the new hallucinated cases.

We are troubled by the citation of bogus cases in the trial court's order. As the reviewing court, we make no findings of fact as to how this impropriety occurred, observing only that the order purports to have been prepared by Husband's attorney, Diana Lynch. We further note that Lynch had cited the two fictitious cases that made it into the trial court's order in Husband's response to the petition to reopen, and she cited additional fake cases both in that Response and in the Appellee's Brief filed in this Court.

As noted above, the irregularities in these filings suggest that they were drafted using generative AI....

Thursday, July 24, 2025

What KIND of Customer Focus?

What is it with the airline industry?

Last year it was Boeing that kept making headlines, with a series of spectacular equipment failures, leadership changes, criminal investigations, and the apparent suicide of a whistleblower just before he was scheduled to give testimony. (See this post for a quick list of links.) In the twenty-three weeks from January 25 to July 4 last year, I wrote eleven posts focused on the rolling disaster that Boeing had become in the public eye. And in fact, Boeing has had at least one more highly publicized crash this year, but I can't tell whether it reflects any fundamentally new root causes.

Not to be outdone, Delta recently announced a plan to move to individualized pricing, managed by Artificial Intelligence.* Delta president Glen Hauenstein explained to investors, “This is a full reengineering of how we price and how we will be pricing in the future.... [Eventually,] we will have a price that’s available on that flight, on that time, to you, the individual.”

What is "individualized pricing"?

Delta president Glen Hauenstein
What factors will go into determining that price? Airlines already charge different people different prices for the same trip based on such factors as how far in advance they book, or how they buy the ticket (for instance, whether they shop through a travel agent or a price-comparison website). In the last few days before a flight is to leave, an airline might drop the price if they need to fill seats, or might raise the price if it's a popular route or date. Everyone knows that flying the day before Thanksgiving will cost you a lot.

But individualized pricing goes farther than that. If you are logged into a site while buying your tickets, Delta knows who you are. They have access to your flying history, so they know what you have paid for tickets in the past—and therefore they can guess what you plan to pay for this one. They know not just whether you are on a bargain-ticket website today, but how often you shop through bargain-ticket websites in general. They can watch your behavior on the website to determine whether you are casually checking prices, or in a hurry; whether you might change your mind, or whether you need to get that ticket right now and can't be bothered to waste time over a couple of dollars. I'm sure there are more factors that I'm not devious enough to think of. And all of these will affect the individualized price they offer you. 

To be very clear, the AI tool will use all these inputs to calculate the maximum price you are willing to pay for this flight on this day, and then that's what you'll be charged. This is why Delta expects individualized pricing to give a huge boost to their profitability.
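
To make the mechanics concrete, here is a deliberately simplified sketch of what such a pricing rule might look like. Everything in it is hypothetical: the signals, the weights, and the formula are my own invention, not Delta's, and the real system is surely far more elaborate. The point is only that every input describes the buyer.

```python
from dataclasses import dataclass

@dataclass
class ShopperProfile:
    # Hypothetical signals about the buyer; a real system would use many more.
    avg_past_fare: float       # what this customer has paid for tickets before
    bargain_site_visits: int   # how often they shop through discount sites
    searches_today: int        # repeated searches for the same route suggest urgency
    days_until_departure: int

def individualized_price(base_fare: float, p: ShopperProfile) -> float:
    """Guess the most this shopper will tolerate and charge close to it.

    Illustrative only; this is not Delta's actual algorithm.
    """
    price = max(base_fare, p.avg_past_fare)   # anchor on what they've paid before
    if p.searches_today >= 3:                 # looks urgent: push the price up
        price *= 1.15
    if p.bargain_site_visits > 10:            # habitual bargain-hunter: ease off a bit
        price *= 0.95
    if p.days_until_departure <= 2:           # last-minute travel
        price *= 1.25
    return round(price, 2)

# Two shoppers, same flight, same day: the one who looks desperate pays more.
print(individualized_price(300.0, ShopperProfile(420.0, 2, 4, 1)))    # 603.75
print(individualized_price(300.0, ShopperProfile(250.0, 15, 1, 30)))  # 285.0
```

Notice that the airline's own costs never enter the calculation at any point; I will come back to that below.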

Is that Quality?

When I first read about this plan, ... well, as a customer I was worried. Sorry, but I cannot imagine this ever working out to my benefit. But as a Quality professional, I was actually intrigued. Can this possibly count as Quality? I'm pretty sure no customer ever dreamed of asking for such a thing, but how does it line up with the letter of the law?

At a first reading, I don't see any violation of the rules in ISO 9001. Those rules are mostly procedural: if you do wicked things but you plan them carefully and do them in the right way, ISO 9001 mostly won't stop you. (See also this post, for comparison.) But ISO 9000 spells out the quality management principles underlying those rules, principles that we examined in some detail earlier this year. Is there any conflict with those?

The very first quality management principle is Customer Focus. And right away I can imagine someone from Delta arguing, Of course this pricing plan involves Customer Focus! We focus on the customer with a microscope. We know his habits, his likes and dislikes—everything about him. We know how much he'll be willing to pay even before he knows it himself. That's the beauty of the whole scheme, after all.

And yes, that's all true. But that's not what the ISO means by "customer focus." Let's look at the text of ISO 9000:2015. Clause 2.3.1.1 states:

The primary focus of quality management is to meet customer requirements and to strive to exceed customer expectations.

I guarantee no customer has ever required or expected a pricing mechanism that studies him personally to squeeze out the maximum he will pay. Clause 2.3.1.2 continues:

Sustained success is achieved when an organization attracts and retains the confidence of customers and other relevant interested parties.

Roger Dooley, writing in Forbes, argues that this pricing tool will undercut exactly that confidence. The same clause continues:

Every aspect of customer interaction provides an opportunity to create more value for the customer.

Note that it says "create more value for the customer," and not "extract more value from the customer"!

I could go on, but you get the idea. The rest of clause 2.3.1 reads the same way. It calls, in short, for an approach that has little in common with Delta's individualized pricing. Unfortunately, it is not possible to write nonconformities against ISO 9000.

The death of Adam Smith

From a Quality perspective, that's pretty much it: individualized pricing does not contradict the procedural requirements of ISO 9001, but it seems to sit awkwardly with the spirit of the law. Nonetheless, there is a wider significance to this innovation.

Ever since the time of Adam Smith, the strongest moral justification for laissez-faire capitalism has always been that it benefits customers—that it provides better goods at cheaper prices than any other economic system, and that it prevents the accumulation of vast wealth by sellers. After all, the argument goes, if one baker sells bread so dear that he can afford to dress in silk, then another baker who is content to dress in wool or cotton will sell his bread cheaper and take away all the first one's business. Therefore the "invisible hand" is supposed to keep business profits to a minimum.

Historically it hasn't worked out that way, for a number of reasons. Innovators are given a legal monopoly on their innovations for a period of years, so they have an opportunity to get rich until their patents expire. Some individuals collude with the government to restrict their competition. And there are plenty of other reasons as well. But behind the abundant list of instances where it has failed in practice, the principle has remained that buyers and sellers meet on a level playing field; then sellers offer their goods at the best price which will still keep them in business, buyers select from among the choices, and the winners are those who can meet their (more or less) similar expenses most efficiently in order to sell for the lowest price.

That principle ends now. With the introduction of AI-powered individual pricing, Delta will seemingly no longer set prices based on any calculation of their costs, nor yet of what margins they need to stay in business. There will be no way to measure Delta's efficiency at serving their customers, over against competitors like United, American, or Lufthansa. Prices will no longer be set, in fact, with any clear relation to Delta's own operations at all. Instead, prices will be targeted exclusively on the customer: just low enough to lure him in, but otherwise high enough to get the maximum benefit out of his trade. This looks like predation. And it may be the face of the future: the articles that I link in the footnote suggest that plenty of other industries are watching Delta with an eye to following them. But if we ever reach the point where a majority of customers feel their routine commercial relationships to be predatory, I fear the response may not be pretty.

Perhaps even now there is time to back away from this threshold and reconsider.

__________

* See, for example, articles like this one and this one that explain it. See also multiple discussions through social media online, e.g. here and here.


Thursday, July 17, 2025

Priorities in repair

There are costs when you develop a product. One of the easiest to forget is the cost of repairing it later. I don't mean when the customer breaks something by misusing it; you can charge for repairs like that. I'm thinking of the ordinary cost of warranty repair. What happens when the product breaks down through normal wear and tear, inside the warranty period? Or in a product powered by software, what happens when the customer has an unexpected use case and discovers a brand-new bug?

You fix it, of course. All reputable companies do. But that costs time and money. In the high-tech case, it may require the attention of very talented engineers.

Long ago—and yes, it seems like it was in a galaxy far, far away—I worked for a startup that had a problem. We were developing a NewProduct™ that was late. It relied on a very specialized technology. The whole work rested on just one or two of our engineers.

To make things worse, we had recently released an OldProduct™ that relied on similar technology. When problems were reported, they were routed to the same engineers to fix. But every hour those engineers spent fixing a bug in the OldProduct was one more hour they weren't working on the NewProduct. So the NewProduct continued to get later.

One day, our President had had enough. In a moment of frustration, he fired off an email that said, "Effective immediately we will stop fixing bugs on OldProduct until NewProduct is released!" Clearly he hoped that this message would set the right priorities for the company, so that we could finally make some progress.

By coincidence, our annual ISO 9001 surveillance audit was scheduled for just a month later. And you can probably guess what happened. At one point the auditor asked me, "How do you respond to customer complaints?" In the course of the discussion, I mentioned the recent email about OldProduct.

Our auditor took a very dim view of this story. Had we really stopped fixing all bugs on OldProduct? He reminded me that clause 8.5.2 of ISO 9001:2000* specifically requires the organization to review nonconformities, including customer complaints. If we really had stopped fixing all reported bugs, he would have to write that up as a Major process nonconformity in the audit. 

I hadn't worked closely with this product, so I called the Customer Service Department. They explained that no, we hadn't stopped fixing bugs. When he sent that email, our President was just blowing off steam. What we had done in reality was to set up a filtering system to prioritize bug-fixing.

  • If OldProduct genuinely didn't work, we fixed the problem.
  • But if OldProduct actually did work, just not the way the customer wanted it to work—this was by far the majority of the cases!—we took the customer's improvement suggestions and put them all in a big pool to be addressed when we had time, which in practice meant after NewProduct finally got released. (The filter is sketched below.)
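
Reduced to code, the whole "filtering system" would have looked something like this. It is my own reconstruction years after the fact (we certainly never wrote it as a program), and the routing flag is hypothetical, but it captures how small the logic really was.

```python
def triage(works_as_designed: bool) -> str:
    """Route an OldProduct issue report.

    Genuine defects are always fixed; requests to make the product behave
    differently wait until NewProduct ships. (Hypothetical reconstruction,
    not actual company code.)
    """
    if not works_as_designed:
        return "fix now"                      # real defect: warranty-style repair
    return "defer until NewProduct releases"  # works, just not how the customer wants

# A crash report gets fixed immediately; an enhancement request waits.
print(triage(works_as_designed=False))
print(triage(works_as_designed=True))
```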

The Customer Service folks brought objective evidence to prove that yes, one or two bugs really had been addressed since the President's email, so the filtering system was working. Our auditor downgraded his finding to a suggestion that it would be helpful if everyone in the organization had the same understanding of this policy, and the rest of the audit was uneventful.

This all took place long ago, but I remember it because the distinction is important. 

  • On the one hand, there are things—like repairing genuine flaws under warranty—that you simply have to do: all responsible companies do them, and they are required by external standards like ISO 9001. 
  • But on the other hand, you often have broad latitude to prioritize your work in a way that makes sense based on current conditions.

This flexibility can make all the difference. 

__________

* Yes, we were still working to the 2000 edition of the standard. Remember this was many years ago!   


Thursday, July 10, 2025

What is "ISO thinking"?

A while ago, I ran across a thin little book called Why Adopt ISO Thinking? I use the word thin advisedly, as the whole work is less than 50 pages long. But I was curious about the title. I thought to myself, I'm sure I can think of reasons to adopt it. But what exactly does the author mean by "ISO thinking" anyway?

I should add that the author doesn't approach the question the way I would, which is part of why I wanted to understand his answer. The author, by the way, is Robbie Sheerin of DV Die Cutting in Danvers, Massachusetts; Quality Manager by day, and fiction writer by night. (You can find his personal website here.) As he explains in the foreword, he came to the ISO 9001 standard from a career in welding, machining, and aluminum dip brazing. His perspective is resolutely practical rather than theoretical. He shows no interest in taking the reader on a guided tour of the seven Quality Management Principles, as I did this spring.

What does he find important? He doesn't break it out this way, but I think the three critical points for him are these:

  • Standardization
  • Root-cause analysis
  • Process approach

Over and over, he explains how these simple points can make all the difference in your business, if you take them seriously.

His description of standardization starts with a hair-raising example: the Great Baltimore Fire of 1904.

Aftermath of the Great Baltimore Fire
"With 1231 firefighters, 57 fire engines, nine trucks, two hose companies, one fire boat and one police boat, the fire still raged on for thirty-plus hours. 1526 buildings were destroyed. A total of 2500 businesses were lost. More than 30,000 people were left unemployed, and a staggering $150 million in damages, which in today's money is into the billions. Incredibly, only one person died. Why was the destruction and cost so large? Threads!! Firefighters could not get enough water because of the variety of threads on the hydrants and the hoses." [page 4.]

Sheerin then explains that standardization does more than save lives. Even when lives aren't at stake, it saves money by simplifying operations.

When he sketches out what the clauses of ISO 9001 represent, he gives multiple examples of the right and wrong ways to correct problems or to address customer complaints. He makes it clear that if you address only surface causes then you will have to keep solving the same problem over and over again. This means that the costs for solving that problem never stop adding up. On the other hand, if you can find and address the root cause, you only have to correct it once. So in the long run, doing it right is far cheaper than doing it fast.

As for the process approach, Sheerin introduces this along with root-cause analysis. If you understand your work in terms of processes, you can tell where to act so that you have the most leverage—whether you are fixing a problem or introducing an improvement. And he gives a concrete example which sounds like it might really have happened (or something much like it).

Let's say the customer wants a brass connector. But every time these are produced, there is a small burr .... It has always been removed by hand .... One day the customer complains that the burr is not completely removed or that there are scratches where the burr was removed. A CAR (Corrective Action Report) is created .... During the investigation, it is discovered that the operator does not have adequate hand tools for this job. It also takes the operator 4 hours to rework 500 parts by hand .... [but] by adding a 5-second pass of the part in the machine, the burr can be removed. Now you have removed the rework op and gone from 4 hours to 41 minutes (extra machine time). [page 19.]
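
Incidentally, the arithmetic checks out if, as I read it, the 5-second pass is per part: 500 parts × 5 seconds = 2,500 seconds, or a little under 42 minutes of added machine time in place of 4 hours of hand rework.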

Look at the whole process—study it as a process—and you find ways to make the work both easier and more reliable.

Sheerin insists that you can benefit from adopting one or two clauses of ISO 9001, even if you don't bother to comply with the whole thing. When I first picked up the book, that idea sounded odd to me. But this example illustrates what he means. Any improvement is an improvement. If you can't afford to take on the whole standard all at once, something is better than nothing. A bit of standardization here, a bit of root-cause analysis there, and an overall awareness of your work as processes—step by step these can help you improve.

It's a pragmatic approach, and an encouraging one.    


Thursday, July 3, 2025

Climate change and bad audits

A while ago—six weeks, now that I check—I wrote about how to audit the recent Climate Amendment to ISO 9001. I based my advice on a guidance document issued by the ISO 9001 Auditing Practices Group, entitled (unimaginatively enough) ISO 9001 Auditing Practices Group Guidance on: Auditing Climate Change issues in ISO 9001. At the time, my basic conclusion was that the requirements were not heavy or onerous.

A couple of weeks later, Kyle Chambers and Caleb Adcock of Texas Quality Assurance released a podcast on the exact same topic. (See the YouTube link below.) I usually appreciate Kyle's pragmatic approach to the ISO standard, so I assumed he would see this guidance the same way I did; but it was a while before I had time to listen to his podcast. It turns out, though, that his take is almost the opposite of mine! He summarizes the APG document as "a whole guidance document of gotchas!"

How is this possible? After listening to the rest of Kyle's podcast, I think I see what happened. Like so much in the ISO world, it all comes down to context and interpretation. Also, it seems like some of Kyle's clients have been saddled with really poor auditors. I'll explain what I mean as we go on.

Agreements

Let me start with the parts where we agree. Kyle's basic advice to his clients (around 21:00) is that you shouldn't need to do anything you aren't already doing to manage your business. But you may have to document what you are doing, and wrap it up in the language of risk management. Caleb makes the same point in different words (around 22:10) by spelling out that if you have determined that climate change has a significant impact on your business, then naturally you are going to implement steps to manage that impact so that you can keep your doors open. You'll do this with or without ISO. So just document it.

If you believe that climate change has no impact on your business at all, Kyle suggests (24:48) that you should still keep an eye on the legal and regulatory aspects affecting you, in case they change.

And Caleb makes the critical point (13:53) that "The auditor is not the risk police." In other words, when a business makes this or that determination about the relevance of climate change to their operations, it is not up to the auditor to overrule them.

I agree with all of this. 

Bad audits

Where Kyle and I part company is over what to expect in your audits. When I read the guidance document, I approach it as an auditor and I think about what I would do. Practically speaking, an auditor is constrained by the clock: you've got to get through the whole organization, and you've got only a few hours to cover it all. So mostly you don't have time to add any new topics, because the old topics will already keep you busy enough. (I discuss some of the auditor's mindset in this post here.) Therefore, if I were the auditor, I would scan through the ten-page document to look for two or three extra questions I could ask. Then, depending on the answers, I'd see where the trails led from there. Obviously this guidance document includes a lot more than two or three questions—it's ten pages long, after all!—but there are lots of different companies out there to audit. Questions that work for one won't work for another. So the document includes a lot of examples to choose from. 

But some of Kyle's clients have had auditors who took a very different approach. He describes (at around 8:00 and following) having long arguments with auditors who insisted that the client organizations had to answer every single question in the document, or at least a couple of questions from every section—regardless of whether the questions made any sense for the business! Kyle kept saying, "The company doesn't do that," and the auditor kept insisting, "My Certification Body gave me a form to fill out and I have to put a piece of evidence in every blank. So I don't care if it's irrelevant. Give me an answer anyway!"

This argument went on for a long time.

In a sense I feel bad for the auditor. If the CB really did give him a form like that, and if they really did insist that he had to fill out every blank, then they set him up for failure. But whoever is ultimately responsible, this is bad auditing! How much time did the auditor waste arguing over these points? Whatever it was, he didn't get it back later. The more time he spent barking up the wrong tree on climate change, the less time he had to check calibration, operational controls, handling of nonconforming material, documents and records, internal audits, or management review. In fact, if a company were unethical, they could use arguments over climate change evidence to run out the clock, so they never had to confront hard questions in other parts of the operation. And any auditor who lets himself be played like this isn't good at his job.

I wish I could say this will never happen, but obviously I'd be wrong because it has. All I can say is that no experienced auditor should let himself get tangled up like this, and I'm sorry that Kyle's clients got stuck with someone who did. 

Other topics

Podcasts are conversations, and conversations meander. In the course of this discussion, Kyle and Caleb raise a number of interesting or tantalizing side topics. 

  • At about 15:30, Caleb asks if there's a difference between addressing risks from climate change and addressing risks from natural disasters. (Answer: It depends.) 
  • At 18:50, Kyle asks how it is possible for a really big company to get certified to ISO 9001, because it has so many parts that they can't all play together. (Answer: I've worked for global companies that were committed to ISO 9001 certification, so it's possible. But you're right that it ain't easy.) 
  • And a little later, at 19:28, they ask, "What about the company who says 'We have only 100 employees, so how can we save the polar bears?'" (Answer: I've talked to small companies too, and yes, that's a concern. But some of the answers are interesting.) 

Each of these deserves a longer answer, along with plenty of other questions that they raise but don't have time to pursue. I won't take the time here. But if you'd like me to address one or another of them, leave a comment to let me know.

Meanwhile here is a link to their podcast on YouTube.




Thursday, June 26, 2025

Lying on your resume

A week ago or so, I saw a story reposted from somewhere online and I've been mulling it ever since. It seems like it should be easy for me to make up my mind about it, but no such luck. I'll repost it here so that we are all on the same page. 

Read it. Then decide what you think about the author and what he did.

In case the graphic doesn't show up for you, it's a story about someone who lied on his resume by inventing a fictitious degree, to get a job. Since then he's done well at the job and been promoted twice.

What's my dilemma? That's easy. I'm torn between two principles, both of which I believe strongly.

On the one hand: Integrity is non-negotiable. I've discussed this topic before under multiple headings, including (just for example) why not to accept bribes, why not to lie to your auditor, and why ethics are too important to put in a standard.* Briefly, if you can't trust what people tell you, you can't work with them. Lying dissolves all the trust that binds an organization into a whole. If you have to work together, it's poisonous.

On the other hand: One of the consistent themes of this blog is that good work is more important than paper certificates. Of course the paper certificates have their place. In a world of strangers they serve as a common language and as a proxy for reputation, since most likely you will never know a stranger's real reputation. But it's only a proxy. A company can have a quality management system and still fail.** 

From the first perspective, I think that this man lying on his resume is a deal-killer. From the second perspective, I think that his good work during the next four years should be all that matters. I wish I could settle on one of these opinions and not hold both.

I've had to deal with this issue only once in real life. I hired a candidate who—just like the fellow above—claimed a degree. Our HR department was relentless about checking qualifications, so a couple of weeks after he started they let me know his degree was fictitious. They also reminded me that the employment application was a legal contract, and that it stated clearly "I understand that I can be dismissed for any false statement on this form."

So I called him in. I told him what I knew and asked why. Again, his story was just like the one above: his resume had gotten no interviews without a degree, so he added one to make himself more attractive. I explained that legally I could fire him for the falsification. But then I went on.

ME: Look, you don't need a degree for this job, but integrity is non-negotiable. So tell me the truth. That school you claimed the degree from—did you ever go there at all?

HIM: Yes, for a couple of years. But I didn't graduate.

ME: OK, here's what I'm going to do. First, new hires routinely have a three-month probationary period; I'm extending yours to six. Second, bring me some kind of proof that you really attended this school so I know your current story is true. I'll put a copy in your file, along with this agreement, and we'll call it good enough.


We documented the agreement. He brought me a copy of his old student body card. And he was a good employee.

But that's just an isolated case. I'd hate to build it into a general rule.

So leave me a comment. What do you think about the story that I started with, up at the top of the post? What about people who lie on their resumes?

__________

* You can find others by clicking the tag "lying" in the right-hand margin of this blog.

** See for example this post and my other posts about Boeing. It is true that Boeing's QMS is not certified; but they still have one, for all that.     


Thursday, June 19, 2025

How to overcome "Cartesian anxiety"

A few months ago, James Pomeroy of the Arup consultancy group published an article on LinkedIn about what he called "Cartesian anxiety"—the fear that, without proper planning and metrics, we are all lost. And yet, Pomeroy continues, planning and metrics can never prepare us for every eventuality.

Pomeroy's argument should sound familiar to regular readers of this blog. He begins by describing a mindset that he finds to be common among professionals in Quality, safety, and environmental management. He calls this "a PDCA mindset": in essence, the conviction that work can be planned in advance, measured objectively, and kept on track through cause-and-effect analysis.

We've discussed many of these topics before. (See the embedded links for some relevant posts.)

But then Pomeroy goes on to point out just how fragile these assumptions are. In normal operations, of course they are fine; in fact, in normal operations these principles more or less define how to function best. But "in situations of significant uncertainty, high levels of complexity or a continually emerging environment, deterministic methods such as PDCA become problematic." These methods break down because they rely on certain preconditions to operate.

  • In order to plan, you have to know what the default future will look like (before you act), so that you can assess what to do. 
  • In order to measure, you have to know what to measure and you need a way to observe it without disturbing it. 
  • In order to form any kind of cause-and-effect analysis, you need enough data to understand what interacts with what, and you need a clear understanding of how they interact—an understanding, so to speak, of which direction the causal arrows point. 

And under conditions of serious uncertainty, high complexity, or rapid change, none of those preconditions obtain.

Does that mean that when things get crazy, then all is lost? Not at all, says Pomeroy. But at that point the organization has to rely on other tools besides planning and measuring. He tells the famous story of the Hungarian soldiers lost in the Alps, who were saved by following the wrong map.* And he argues that in times of crisis it is better for the organization to do something, see the outcome, and react promptly—"feeling" its way through the tumult—rather than to wait for things to settle down far enough that the planning process can engage. He concludes that "by embracing [this kind of] agility ... we can use trial and error to 'feel' our way through complex situations and navigate uncertainty. This is the focus on doing over planning, trialling things and seeing what works, and adapting to an ever-changing situation."

It's a reasonable argument, and in fact we have seen something like it before. Nearly four years ago, I wrote a series of posts** drawing on a talk by Jeff Griffiths about "People Before Process." Over the course of these posts, I talked about the difference between organizations that have a process focus and those that have a competence focus. Of course in real life, any organization needs both. But I concluded that while in many ways leaning into a process focus scales and replicates faster than leaning into a competence focus, it is also more fragile. A competence focus, by contrast, is more resilient when things go wrong. (See especially the long discussion in Part 3.) The reason is precisely the one Pomeroy highlights: in a crisis, you don't have enough time or data to use the conventional tools of the process focus. You have to be nimble and improvise. And the higher the overall competence of your people, the more capably and creatively you can improvise.

In fairness, I have to make one other point. Pomeroy is not arguing that any organization in crisis should throw its rational tools out the window to navigate purely on vibes. If you look at it, the approach he promotes is topologically identical to the PDCA cycle: decide to do something, do it, see what happens, and react. The difference is in the time-scale. Organizations in crisis don't have time for lengthy data collection or analysis, and often may not even know which data are relevant. So the selection, the analysis, and the decision all have to be done by informal methods. But those informal methods themselves rely on the competence, expertise, and intelligence of the executives making the decisions. Nobody's going to suggest leaving the decisions to the toss of a coin or to ChatGPT.

It's a good article—by all means go read it—and it supports the basic point I always try to make here. All of the Quality principles are sound, as principles. But the point is to get results, not to follow the rules. That's pragmatic.     

__________

* This story derives from a poem by Miroslav Holub, recounting a story told by Albert Szent-Györgyi about a scouting troop of Hungarian soldiers in World War One. You can find the poem here. Briefly, the troops got lost in the snow and expected to die. Then one soldier found he was carrying a map of the mountains. After the troop used the map to return to base, they realized it was not a map of the area where they had been! 


** Here are the links: Part 1, Part 2, Part 3.    

