Thursday, June 30, 2022

Quality in public service and private enterprise

In last week's post, I talked about some of the topics that come up when you start to evaluate the performance of government institutions through the lens of Quality. And in fact the International Organization for Standardization has issued a guidance standard—ISO 18091—to support the application of ISO 9001 in a government context. But when you try to apply Quality principles to governments, you find right away that there are important differences between public and private organizations. Here are a few that I can see; please feel free to add more in the Comments.

Shortly after I started this post, in a fortuitous synchronicity, the Excellence Connection channel posted a discussion on YouTube, in which Tracey Polowich of Organizational Excellence Specialists interviews Dan Batista of the Institute for Citizen-Centred Service (ICCS) on the subject of customer orientation in public service. I use some elements of that discussion in what follows.

What's different about the public-sector environment?

Several things. For example, ...

Financial constraints are different

There's a stereotype in some corners of the private sector that governments operate without any financial constraints at all, because a government can always raise taxes or else print more money. Of course this can't be true in any literal sense: raising taxes isn't as easy as it sounds, and creating fiat money out of thin air can have bad long-term consequences. Still, nobody worries that the government might actually go broke and have to shut its doors. (By the same token, if the lines are too long when you go to get your driver's license, you can't just take your business to a competitor instead.) 

On the other side of the ledger, projects can't be justified by Return on Investment, because government activity is not profit-generating. If you fill the potholes on Main Street and replace some of those stop signs with traffic lights you'll make traffic safer and residents happier, but it won't bring you any increased revenues. The absence of a quantitative ROI does not, however, mean you shouldn't fix the potholes or install the traffic lights! Batista makes the point in his interview that the relevant consequences of good or bad service are (respectively) enhancement or erosion of the overall public trust in government, rather than a bottom line quantifiable in dollars.

Customers can want different things

In last week's post we talked about how hard it can be to identify who the real customers are for a governmental organization. But identifying them is only the first problem. Next comes getting them to agree on what they want from you.  

If you work for a private business that makes widgets, you know what your customers want: widgets that are good quality for the price, delivered on time, don't break often, and are backed up by prompt service when they do. In some government agencies, the demands are almost as tangible: your constituents want to see potholes filled, they want to know their tap water is drinkable, and they want their kids to learn how to read and write.

But issues of policy can rapidly become controversial, partly (at a regional or national level) because different localities have different needs. A policy that benefits Ontario might not be good for Alberta; a policy that's popular in Los Angeles or San Francisco might be rejected in Iowa or Arkansas. There is a defined process for deciding among these different needs—namely the political process—but sometimes that process leaves the losing constituency unsatisfied and unreconciled. This is a problem that Quality expertise by itself cannot fix. As I argued in the first weeks of this blog, Quality means getting what you want; so if you can't come to an agreement on what you want, Quality methods will struggle to find a foothold.

There are limits to what you can change

Back at the widget factory, let's say that the preventive maintenance procedure is failing, and the equipment is falling apart. You pull an 8D team together, analyze the failure in detail, and find the gap that's causing the problem; then you rewrite and republish the procedure, and the problem is solved. Done and dusted. Sure, the details might be complex; but conceptually the hard part is the root-cause analysis: after that, fixing the problem is more or less straightforward.

But if you find a flaw in the operating procedures of a public agency, it might turn out that the procedure was defined that way by legislation, and that it's not so easy to change. Or the situation might be more extreme than that. As an American, I regularly have to explain to colleagues from other countries the process by which we elect our Presidents. Usually they tell me that it sounds complicated; and they ask why, in all these years, no one has ever thought to simplify it. The answer is that people have suggested simplifications from time to time; but the method is written into Article II of the Constitution, and it can be very difficult to change the Constitution. This is another instance of the situation above, where the Quality process runs smack dab into the political process. When that happens, it is safest to assume that the political process wins.

So what's the same about the public-sector environment?

At an operational level, all the basic principles of Quality management apply equally well to the public and private sectors. In his interview, Batista points out that in order to achieve operational excellence, public organizations require:

  • Commitment by senior leadership to working in a citizen-centric way.
  • A clear and compelling vision of excellent service, understood by everyone in the organization.
  • Employee empowerment, so the people on the front lines can make decisions on the spot (within the boundaries of legislation) to solve the problems of their clients who need help.
  • Regular training and attention to supplies, so that the employees of each agency have the physical and procedural tools to do their jobs.
  • Regular attention to the Voice of the Customer, collecting feedback from constituents through as many channels as possible in order to facilitate, finally, ...
  • Continuous improvement, every day. 

Does this list look familiar? It should. And the upshot is that even though many of the boundary conditions around the QMS—in essence, the Context of the Organization—are very different in public organizations compared to private ones, the day-to-day work of Quality is broadly the same. 


Thursday, June 23, 2022

What is Quality in government?

A couple of days ago, I started a conversation with Dawn Ringrose on LinkedIn, about the role of Quality in government. Dawn, you may remember, is the founder and principal of Organizational Excellence Specialists. She and her team of experts study and teach a set of concrete behaviors that improve business performance, and I have referenced her work in earlier posts, for example here and here.

Anyway, our discussion began when she recommended a book (Democracy in Canada) by Donald J. Savoie highlighting issues in some of Canada's governmental institutions and proposing remedies. At the same time, she referenced an hourlong talk of his, available on YouTube, which summarizes some of the themes in his book. I haven't read the book yet, but I watched the speech. What I found fascinating was the way that Savoie's speech and my subsequent discussion with Dawn kept echoing topics we have already discussed here in other contexts.

One point that Dawn made early on was the following: "In my experience, I have found the most difficult question for people working in government to answer is 'Who Is Your Customer?' To me, this speaks volumes." And of course it is clear that if you don't know who your real customers are, you can't possibly understand what it means to satisfy them. In that case, how can you tell whether you are doing a good job?

But I think it is equally clear why the question might be baffling for someone working in government service. Think of all the different people who might have a legitimate claim to the title of "customer":

  • The member of the public standing right in front of you, who has brought you a problem and is asking for help. This might seem to be your "customer" in the sense of service, but he's not the one paying for the service.
  • The Taxpaying Public as a collective whole. These are the people paying for the services you offer, but it's hard to ask them all what they actually want from you. What is more, they likely don't agree with each other. (That's part of why we have governments in the first place.)
  • The elected government* who are constitutionally presumed to speak on behalf of the Taxpaying Public. This means the Prime Minister and the Cabinet, at any rate until there is a vote of confidence or another general election.
  • Your immediate superior in the civil service bureaucracy. This is the one person that you clearly have to satisfy to avoid getting the sack. But then who is your boss's customer?

We've seen this exact problem before, in our discussion of whether you can ISO-certify a university. At that time we examined several contenders for the title "customer of the university," including: the students themselves, their parents, their future employers, regional governments (to the extent that they subsidize some or all of the educational process), and society-at-large. All of them get some benefit from universities, but the alignment between "Who gets the benefits?" and "Who pays the costs?" is pretty rough. At the time we concluded that maybe the best solution is to use the language of ISO 9001:2015 and call them all "interested parties" (i.e., stakeholders) without trying to get more precise. And maybe that's the right answer when talking about governments too.

But it doesn't stop there. One point that Savoie makes at some length in his speech is that there have been multiple initiatives** to encourage public institutions to learn from the management of private ones, and that the main consequence of these initiatives has been a proliferation of metrics and reports; but (he goes on) the metrics don't measure anything useful, the reports go unread, and the only practical consequence is a slow degradation of the organization's ability to perform.

Does this sound familiar? It should. Whenever you implement a system of monitoring and measurement, there is always a risk of measuring the wrong things. And my recent exchange with Scotlyn on parasitic certifications included a discussion of her charge that jobs in monitoring and certification will progressively drive out jobs in production [or service, as the case may be] until Quality eats the world. In theory there is some kind of brake on this behavior in for-profit organizations, because when the Quality overhead gets too large it starts to affect the bottom line; and at that point the organization presumably cuts back. I won't claim that there are no such brakes in government service, because departments are given budgets and expected to adhere to them. But in any event the braking function must look very different.

Can governments benefit from the application of Quality expertise? Of course they can. We in the Quality field—and in Quality-adjacent fields like Excellence—specialize in understanding how organizations can go wrong and what it takes to set them right. In this sense our work applies to any kind of organization whatever. But we have to apply our expertise pragmatically, and we have to be aware of the ways in which public service is different from private enterprise. Next week I'll look at a few of those differences.   

__________

* Using the word in a parliamentary sense.  
** Savoie is specifically talking about the Canadian federal government, though I have no doubt the same thing has been done elsewhere too.         

Thursday, June 16, 2022

Quality management in a learning organization

I mentioned before that the American Society for Quality held their World Conference on Quality and Improvement last month. As with any such conference, there were keynote speakers and then multiple breakout sessions to present special topics. One of the side effects of this format was that different topics built on each other in unexpected ways.

And so early on the second day I found myself wondering, We talk a lot about "learning organizations." If we want to create such an organization, how should we design the management system in order to support it? I don't claim to have an answer that is final or complete, but I started to collect some notes. Also, I hope you will join the discussion with ideas of your own. With luck, your ideas will be better than mine, and I will learn something.

I said this topic occurred to me early on the second day. The inputs that started me thinking in this direction included the following:

  • Ugwem Eneyo, CEO of Shyft, discussing in her keynote address how technical development in Africa is leapfrogging over the stages familiar in Europe and North America to move in new and unexpected directions
  • Charles Cox, on "QMS for data driven decision making and growth"
  • Jane Keathley, on "Aligning organizational structure with Quality 4.0"
  • Jeremiah Genest, on "Enabling the process owner to drive improvement"
  • Heather McGowan, opening the second day with a keynote on the rapid rate of change in the modern economy
And my initial thoughts, in no particular order, included the following.

A Quality Management System includes the following elements:

  • analysis of the context of the organization
  • policies, procedures, and records
  • roles and responsibilities
  • competence management
  • tools for problem-solving
  • internal audit
  • management review
Regarding documentation and records, Cox made the point that, if you don't know what you need, all documentation systems look equally good. But in fact, different systems are good for different things. A learning organization needs the ability to create testable hypotheses and then record the results correlated with those tests, in order to evaluate them for future planning. In addition, a learning organization has to get feedback as soon as possible; so whenever possible, long-lagging indicators should be replaced with short-lagging ones, and short-lagging indicators should be replaced with leading ones. It's not always clear how to define KPIs in this way; but whenever it's possible, it should be done.
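To make that concrete, here is a minimal sketch in Python of what a hypothesis-driven record might look like. The class and field names are my own invention, not anything Cox presented, so treat it as one illustration of "testable hypotheses with results recorded against them" rather than a prescription.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Hypothesis:
    """One testable claim, written down before the data comes in."""
    statement: str                  # e.g. "A second proofreader will halve errors per 100 pages"
    metric: str                     # the indicator that will confirm or refute the claim
    target: float                   # the value we predict the indicator will reach
    higher_is_better: bool          # direction of improvement for this indicator
    review_date: date               # when we have promised to look at the result
    result: Optional[float] = None  # recorded once, at review time

    def confirmed(self) -> Optional[bool]:
        """None until a result is recorded; afterward, whether the prediction held."""
        if self.result is None:
            return None
        if self.higher_is_better:
            return self.result >= self.target
        return self.result <= self.target
```

Whether the chosen indicator is leading or lagging then shows up as the distance between the action being tested and the review_date: the shorter that gap, the faster the organization learns.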

If the organization truly is learning how to do things in new and better ways, roles and responsibilities are unlikely to remain fixed. Keathley described a networked organizational structure as one in which organizational charts are temporary, designed around particular initiatives; and such a structure sounds like it should be appropriate for a learning organization. There may be constraints imposed by the technology you have to deploy: the structure in a manufacturing plant is likely less fluid than that in a service organization, because using the machines correctly requires specific technical skills that not everyone shares. But again, so far as a networked structure is possible, it is probably the way to go.

In a learning organization, there is always the risk that a specific set of skills, or a specific job classification, might become obsolete—a risk that McGowan explained is inherent in the modern economy anyway. In this context, competence management means more than just training people to do a job. Rather, for each role the organization should identify the needed competence profiles, and then forecast what jobs with similar profiles are likely to be needed in the future. The people working in today's jobs should then be offered continual learning opportunities to expand the number of useful roles they can fill. 
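As a toy illustration of that forecasting step, you could compare competence profiles as sets of skills. Everything in the sketch below (the profiles, the skills, the 0.5 threshold) is invented for the example; a real competence framework would be far richer.

```python
def profile_overlap(current: set[str], future: set[str]) -> float:
    """Jaccard similarity of two competence profiles: 0 = disjoint, 1 = identical."""
    return len(current & future) / len(current | future)

# Hypothetical profiles, purely for illustration.
machinist = {"blueprint reading", "CNC setup", "SPC charting", "metrology"}
inspector = {"blueprint reading", "SPC charting", "metrology", "GD&T"}

if profile_overlap(machinist, inspector) > 0.5:
    print("Machinists are natural candidates for a learning path into inspection.")
```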

By the way, it is critical that these opportunities be communicated in the right way. The message must never be, Your job is going away so you better learn something else instead. In the first place, that message is dispiriting; in the second place, it could be false. A "learning organization" is one that doesn't know the future yet; so the expectations for the future might not come to pass. A better message is, You are already good at A; we think that in the future we might need someone who knows how to do B, which is similar; and you already know us and our organization, so you would be a natural for this new role if it opens up. Would you like to learn more about it? Then even if it turns out that you don't replace role A with B, your organization will be stronger for investing in employee competence. And your employees will know that they get something out of it too.

In a learning organization, internal audits take on a new role because they are no longer simply tools for enforcement. Yes, of course an auditor has to check whether the defined procedures are being followed. But the auditor also has to check what results the organization is achieving. If the procedures are not strictly followed but the results are still good (as in my story half a year ago about the plating bath), this may be evidence that the organization can rewrite the procedures to make them looser or more generous, or to allow alternative ways to get the job done. If the procedures are followed perfectly but the results are poor, this too may be evidence that it's time to rewrite the procedures, because obviously they are no longer effective. Either way, the organization has to treat the audit as a fact-finding event, and then has to evaluate what the information really means.
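The decision logic in that paragraph is compact enough to write down. Here is a sketch in Python; the wording of each recommendation is mine, and the fourth case (procedures not followed, results poor) is my own extrapolation, since the paragraph doesn't address it directly.

```python
def audit_follow_up(procedures_followed: bool, results_good: bool) -> str:
    """Map what the audit found onto the question the organization should ask next."""
    if procedures_followed and results_good:
        return "System is working; keep looking for the next improvement."
    if results_good:
        # Good results despite deviations from the written procedure.
        return "Consider loosening the procedure or allowing alternative methods."
    if procedures_followed:
        # Perfect compliance, poor results: the procedure itself is the problem.
        return "The procedure is no longer effective; rewrite it."
    # Not in the original paragraph: my extrapolation for the remaining case.
    return "Neither compliant nor effective; find the root cause first."
```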

Management review may change most of all. During management review, you check all your process metrics, to assess how things have gone in the last quarter or the last year. But in a learning organization, the point—oddly enough—isn't to meet your targets. The point is to learn how to do better. Targets in a learning organization are set at least partly as scientific experiments, to test hypotheses. If the numbers come back green, that shows the hypotheses were correct; but that's not necessarily a good thing. It depends where you want to go next. On the other hand, if the numbers come back red, that shows the hypotheses were incorrect; but again, that's not necessarily a bad thing. Even if your first guess turned out wrong, you now have more information than you had before. You have learned something. And you can do better next year.

There's another element to learning, too, and management review is where it happens. Peter Drucker used to make the point that companies must regularly re-think what business they are really in, because the business landscape is always changing. Who are your real customers? What do those customers really want? These answers are never fixed, but too many companies treat them as immutable truths, laws of nature that never have to be reconsidered ... until one day the company goes broke because its old market ceased to exist while nobody was looking. It is exactly these questions to which a learning organization must learn and then relearn the answers, evaluating all the data from the whole Quality Management System in order to keep a step ahead of obsolescence. But to do this, top management has to use management review to ask different questions than it would in most companies. And it can never settle for easy answers.


All this is just a sketch, just a beginning. There is far more to work out. So now it's your turn. Leave me comments on this post to tell me where I'm wrong, and what I've missed. Figuring this out will be a collaborative effort, and I know I need your help.

What is more, our organizations need to know the answers so they can become better and more resilient. So please, give me your feedback.


Thursday, June 9, 2022

Steve Jobs on Joseph Juran

Last month, the American Society for Quality held their annual World Conference on Quality and Improvement; and by a couple of twists of good fortune I was able to attend. The conference had a lot of speakers with a lot of good information, and I hope that my notes are thorough enough to let me write a few posts drawn from it.

But this first one will be easy. One of the speakers remarked, as almost a sidebar to his main topic, "There's an interview out on YouTube that you have to see. It's a 20-minute video from 1990, where Steve Jobs talks about Joseph Juran. If you've never seen it before—and I bet you haven't—go watch it. Jobs makes it clear that even back then he clearly got what Juran was saying; and of course we all know how it went after that."

I'd never even heard of this interview before, which I guess made the speaker right about his first point. And when I looked it up, I found he was also right that it is a really good interview. Steve Jobs is still a very young man in this piece, and he introduces himself as the CEO of NeXT. He keeps the focus strictly on Juran. And while he pauses for a moment to collect his ideas after each question is asked, his answers are always clear, fluid, unbroken, and deeply thoughtful. He explains the new idea (new then, of course, not now) of breaking down your work into discrete processes so you can measure them, analyze them, and make them better. And at one point he rejects the whole premise behind the interviewer's question—kindly, but very pointedly—because he says that the Quality approach that NeXT was learning from Juran had them working in such a way that the question made no sense.

I won't try to summarize the talk. Far better to let Jobs speak for himself. Here is the interview:



Thursday, June 2, 2022

What's so great about PEARs?

Back in March* I took an online class from Christopher Paris on "Process-Based Internal Auditing." Paris is always an interesting guy, so I was sure he would have some new insights or approaches I hadn't run across before. And sure enough, he absolutely did. I won't list them all here — for that you should send him a note and ask him to give the class again — but I wanted to mention one template he described that can be used for a quick overall status summary for any high-level business process.

[Photo by Steven Giacomelli, posted on Pixabay.]
In any management system audit, we always look at business processes. And we have to evaluate them from two directions. 

  • On the one hand, is the process implemented? In other words, are people on the floor actually doing whatever the documents say they are supposed to do?
  • On the other hand, is the process effective? In other words, is the process itself accomplishing whatever the system says it is supposed to accomplish?
Both points are critical. If nobody is following the process, then your process documentation is just waste paper and you might not even know what's really going on during the day. And if the process isn't achieving its goals, the business itself is going to suffer: you picked those goals because they mattered to you, and now you are failing to hit them. If your business is to survive, your management system has to be effective as well as compliant; and that's why management system auditors have to look for both.

At the end of the audit you report your findings to Senior Management, and right away you face the problem of explaining the critical information you have uncovered while not losing them in all the details. In order to make improvements they have to see the whole forest, and not get lost in all those trees. And this is where this template comes in.

It's called a Process Effectiveness Assessment Report, or PEAR. The one Paris showed us in the class is his work, but it is based closely on the official PEAR form from the AS9101 report template. As he remarked, "Mine is just tweaked for internal use." Paris provides this form as part of an Internal Audit Report template that he makes available for free on his website: you can download the single report template from this page here, or get the entire template kit as a single ZIP file here.

In essence, the PEAR form is a simple 3x3 matrix.



So for each high-level process, ...

  • Vertically, you measure process implementation, which means you look at your list of findings. Are there Majors? Are there Minors? Or do you just have a handful of Opportunities for Improvement? Based on this list, you can easily decide whether the actual working practice matches the written process documentation, partially matches it, or does not match it.
  • Horizontally, you measure process effectiveness, which means you look at the current process metrics. Are they green? Are they red, but with corrective actions planned and under way? Or are they red and ignored? Based on this information, you can easily choose which column of the matrix applies to this process.
  • Once you've got your row and your column identified, mark an X in the right cell, copy the corresponding score into the space at the bottom, and you're done!
Of course this matrix doesn't take the place of a full report. What it does is to tell Senior Management at a glance where their risks are. And it relies entirely on information you have already collected during the course of the audit. In principle it doesn't add any new information to what you are already reporting. But I like the visual presentation, and the way it derives so easily from the audit data.          
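For anyone who wants to see the mechanics spelled out, here is a rough sketch of the PEAR logic in Python. The rows and columns follow the bullets above, but the numeric scores in the matrix are placeholders of my own; Paris's form (and the AS9101 original it is based on) may label and score the cells differently.

```python
# Rows: implementation, judged from the findings list.
# Columns: effectiveness, judged from the process metrics.
# The scores themselves are placeholder values, not Paris's or AS9101's.
SCORES = [
    # red, ignored   red, actions under way   green
    [1,              1,                       2],   # does not match documentation (Majors)
    [1,              2,                       3],   # partially matches (Minors)
    [2,              3,                       4],   # matches (OFIs at most)
]

def implementation_row(majors: int, minors: int) -> int:
    """Pick the row of the matrix from the audit findings."""
    if majors > 0:
        return 0
    if minors > 0:
        return 1
    return 2

def pear_score(majors: int, minors: int, effectiveness_col: int) -> int:
    """Combine the findings-based row with the metrics-based column into one score."""
    return SCORES[implementation_row(majors, minors)][effectiveness_col]

# Example: no majors, two minors, metrics red but corrective actions under way.
print(pear_score(majors=0, minors=2, effectiveness_col=1))   # -> 2
```

One number per process, derived mechanically from data the audit has already produced; that is exactly what makes it work as an executive summary.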

__________

* Has it been so long already? Time flies when you're having fun.       
