
Top Ten Problems Reported by Process Improvement Leaders

Error-proof Your Deployment

January 24, 2018

Looking back on our experience working with clients over the last 18 years, we've collected a short list of tips for avoiding common issues in process improvement deployments:

1. Leadership cannot be delegated.
Successful and durable process improvement efforts depend on senior leadership engagement. Leaders should be active teachers. "Engaged" means process improvement activities are on their calendar and on their "to do" lists - not an initiative that is assigned to others.

2. This is not an "organic" exercise at the beginning.
A certain amount of authoritarianism is required to get things started. It may seem counterintuitive when your goal is to build an empowered workforce, but you need a strong directive to re-orient people to enterprise processes. Most important projects will cross functional boundaries, so leadership will need to enforce value-stream thinking that puts customers ahead of departmental priorities.

3. The "M" in DMAIC does not stand for Months.
Don't let people get hung up on playing with tools at the expense of getting things done.

4. Don't take on projects that have massive scope.
It is better to execute a series of smaller, tightly-focused projects that get done.

5. Remember the "3APs":
Go to the Actual Place (Gemba) where the work is done, observe the Actual Process as it is performed, and talk to the Actual People who perform the process. Beware of Gembaphobia (the fear of going to where the work is actually performed) - tough problems can't be solved from a conference room.

6. Don't automatically pick the most available people to become project leaders (Black Belts and Green Belts).
There might be a reason why those people are available, and they may not be the most knowledgeable or the best at getting things done. Make the functional leaders cough up their best people. Those people will get more done with the right attitude and good people skills than with a mastery of advanced technical methods.

7. Avoid establishing a "Caste System" or "Expert Culture" where only experts can solve problems.
Everyone can use these tools and this thinking in their daily work. Waiting for an "expert" can become a convenient excuse.

8. Don't operate in secret.
Over-communicate to offset the natural fear of change and suspicion.

9. Don't forget middle management.
The layer of clay in between senior leadership and front-line leaders requires extra attention to penetrate. Middle managers must get on board for the approach to have legs. If leaders lead, middle managers will follow.

10. Don't train without projects!
It's a total waste of time and money. Don't over-train in advance, in batches; pull training as it is needed. Most improvement is accomplished with the simplest tools (one such tool is sketched below). The discipline to recognize problems from a customer perspective and address them head-on is more important than technical skills.
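As an illustration of how simple those tools can be, here is a minimal sketch of a Pareto analysis in Python; the complaint categories and counts are made-up placeholders, not data from any client.

    # A minimal sketch of a Pareto analysis, one of the "simple tools" above.
    # The complaint categories and counts are illustrative placeholders.
    from collections import Counter

    # Hypothetical tally of customer-reported problems over one month
    complaints = Counter({
        "late delivery": 48,
        "wrong item shipped": 21,
        "damaged packaging": 12,
        "billing error": 9,
        "missing documentation": 5,
    })

    total = sum(complaints.values())
    running = 0
    print(f"{'Category':<24}{'Count':>6}{'Cum %':>8}")
    for category, count in complaints.most_common():
        running += count
        print(f"{category:<24}{count:>6}{running / total:>8.0%}")

Sorting by frequency and watching the cumulative percentage usually shows that two or three categories account for most of the pain, which is where the first tightly-scoped projects (see point 4) belong.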

NOTE: Thanks to Jim Womack for coining the term "Gembaphobia".

Comments (1)

1. The first one, about Senior Leadership. My experience suggests just the opposite, although I only have a handful of companies I've worked for and worked with. It is helpful to have senior leadership allocate money to an LSS program and provide some structure, training, etc. But I worked in organizations for almost 30 years without this, and they were, in many respects, MORE successful than ones where Senior Leadership pushed down its desires. What you need are one or more middle managers who are sold on process improvement and champion it. That is where action takes place.

In a company where I worked for 15 years, without any senior leadership commitment, there were two. One was my boss, who ensured a headcount was there for me to do my work, and the other was an engineering manager who wouldn't sign off on any engineering change without my approval. Within a year or two, every other manager in the organization saw the value I brought to their organizations through the discipline of collecting and analyzing data as well as designing experiments. It truly bubbled from the bottom up, where the day-to-day actions and decisions live. No one kept track of savings or had a scorecard. The next company I went to, same thing: bottom up vs. top down. When that company changed hands and decided it wanted an LSS program, things actually got worse in many respects. It was a program rather than a way of life. The top-down approach squelched the proliferation of projects and the use of the tools, the culture shifted to a few people rather than many (one of the other points on which I agree), and it doesn't really reap the benefits they wish to achieve long-term. Which maybe is a new item for discussion... :)

2. "M": I love the humor on this. I totally agree that if there isn't a significant MSA involved, "M" should take days, perhaps even hours. Where we take forever is the confusion we create around the MSA itself, as well as the fanaticism about needing the data to be normally distributed. I have fought with some consultants over their insistence on making sure the data is normal so that the sanctity of the process capability is preserved. If I'm sitting in the CEO's seat, I couldn't care less whether you get it "correct". All I know is my process is bad enough that I asked you to work on it. Stop trying to figure out an exact measure of bad and go fix the problem! If we are getting to root cause, this is one of them (at least in my experience).

Another is the MSA itself. I'm a firm believer in confirming whether or not the data is "good" before you get going. However, a lot of creativity is needed here, as many of our processes don't fit the nice MSA format of measuring the same specimen twice and having countless people measure something. If it fits, great. If it doesn't (the usual case), get creative really quickly so you can confirm or deny validity. My litmus test has always been this: go to the gemba and ask the people if they trust the data. Ask them whether they use it to make decisions and how confident they are that it is right. You find out really quickly how much effort to expend on this. Now, I've seen one example that always keeps me honest in doing this. One Belt dug into the data and realized that all of it had been incorrect for over a decade. Every decision ever made had been made with bad data, and she spent 2-3 months rectifying the entire system. That was a huge problem to uncover.

But most times the data is either good enough or really bad, and too many consultants force the Belt to check the box by doing something artificial and meaningless. And even if the data is bad, depending on what "bad" means, we may still be able to see relative improvement. Get going on improvement rather than fixing the measurement system, unless it really matters (and in some cases it does, but not many).

January 29, 2018

Kevin Keller
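For readers unfamiliar with the capability calculation the comment refers to, here is a minimal sketch of a Cp/Cpk computation and a quick normality check, assuming Python with numpy and scipy; the specification limits and sample data are illustrative placeholders, not figures from the post or the comment.

    # A minimal sketch of the process-capability and normality check discussed
    # in the comment above. The spec limits and data are illustrative placeholders.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(seed=1)
    measurements = rng.normal(loc=10.2, scale=0.15, size=120)  # stand-in process data
    lsl, usl = 9.7, 10.6                                       # hypothetical spec limits

    mean, std = measurements.mean(), measurements.std(ddof=1)
    cp = (usl - lsl) / (6 * std)                    # potential capability
    cpk = min(usl - mean, mean - lsl) / (3 * std)   # capability allowing for off-center mean

    # Shapiro-Wilk normality test: a low p-value flags non-normal data, which
    # affects the exact Cp/Cpk figures but rarely changes the basic verdict
    # that an obviously bad process needs improvement.
    w_stat, p_value = stats.shapiro(measurements)

    print(f"Cp  = {cp:.2f}")
    print(f"Cpk = {cpk:.2f}")
    print(f"Shapiro-Wilk p = {p_value:.3f}")

A Cpk well below the commonly used 1.33 benchmark already says the process needs work; refining the measurement system or agonizing over normality only pays off when, as in the commenter's decade-of-bad-data example, the data itself is the problem.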