Using AI to Strengthen Critical Thinking for Continuous Improvement Work

January 12, 2026

I have always enjoyed the “year in review” series that happens across apps at the end of the year. Spotify Wrapped is a good example of what I mean. It is fun to see how you used something reflected back to you in summaries and metrics. Sometimes I find it surprising. At other times I think, “Yep, that tracks.” So when a teammate asked if I had seen my ChatGPT year in review, I sprinted to the app to see what it would say.

[Image: a mobile phone displaying Spotify Wrapped, “the songs you loved this year.” Credit: Rokas, stock.adobe.com]

It turns out I ask ChatGPT a few of the same things over and over. “Can you polish this?” “What do you think of this?” “Can you help me make this clearer?” I patted myself on the back that I gave my own ideas and just asked it to help me refine, sort of like the awkward peer reviews we were forced to do in school, except now without the awkwardness. After seeing my own year in review, I immediately wanted to see my coworkers’ results. Did they use ChatGPT differently than I did? Were they using it in more creative ways?

It was interesting to read their results. Some talk to it like a best friend. Some treat it like a mentor. Others use it as an expert or a thinking partner. Usage varied widely. One coworker clocked over 22,000 messages. My mother, on the other hand, sent a total of 3 messages all aimed at trying to get her sourdough starter going. She still does not know it is ChatGPT and not “Jippety.”

Seeing those differences made me curious how AI usage is evolving among continuous improvement practitioners. I’ve heard many times this past year, “How should we use AI in continuous improvement?” After some reflection, I want you to consider a different question: “What thinking behaviors do we want to strengthen, and how can AI support those behaviors?”

This is not a semantic difference. It is the difference between using AI to think for you versus using AI to help you think better.

How Our Thinking About AI Has Evolved

A couple of years ago, MoreSteam’s CEO, Bill Hathaway, outlined many of the ways MoreSteam was exploring the use of LLMs (large language models). We were excited by how quickly AI could affinitize information. Give it survey data from 1,000 customers, and it could surface themes, patterns, and signals in seconds. It was clear AI excelled at pattern recognition and synthesis.

What we were less certain about was how to use it as a thinking partner. In Bill’s article, he raised an important concern: how do we design AI interactions that aid critical thinking rather than replace it?

That question turns out to be the most important one. Since then, our understanding has evolved. Continuous improvement work is challenging because it is inherently human work. Engaging people, working through organizational friction, and influencing change are the hardest parts of the job, yet they are often where practitioners receive the least formal training. When used intentionally, AI can act as a low-risk practice space for these challenges, prompting more critical thinking about the situation itself, the people involved, and how ideas are communicated across differing perspectives and priorities.

Here are a few ways we’ve learned to use AI intentionally in continuous improvement work to strengthen thinking and to build the interpersonal skills that improvement depends on.

Read the Blog: Exploring the Potential of AI in Process Improvement

9 Ways to Use AI Intentionally

1. Strengthen Problem Framing Before Analysis

I don’t need to tell you that behind every successful improvement project is a clearly defined problem that matters to the business. If you get that part wrong, the work is in jeopardy. Try prompting AI to act as an interviewer and ask you questions that expose how well you understand the problem. When I’ve prompted it this way, it has responded with questions like:

  • Who is feeling the pain most directly from this problem?
  • Where does this problem start and end in the process? What is the first moment the problem becomes visible?
  • Over what time period has this been a problem?

It also asked several questions specific to my actual problem statement, including calling out hidden assumptions and implied solutions.

2. Practice Structured Thinking

When CI practitioners use AI to externalize their thinking in writing, they create the conditions to examine it more critically. AI can support this by prompting reflection with questions like “If someone else read this, where would they get confused?” or “What would need to be true for this conclusion to hold?” Writing and revising thinking this way creates a visible map of the reasoning behind decisions. That visibility reinforces disciplined movement through problem framing, analysis, and decision making, which is foundational to good A3 thinking.

3. Improve Questioning Skills

Teaching people to ask better questions is tough, and AI can be a great rehearsal space for developing that skill. In practice, this is less about better analysis and more about better conversations. Practitioners can ask AI to generate follow-up questions, pressure-test assumptions, or role-play conversations. Try prompting with something like, “Act as a frontline team member affected by this problem. I want to practice asking questions to understand the situation. Answer only what I ask. Do not suggest solutions.” You can also use AI to practice framing questions in a way that is respectful of the people they impact. It’s a great way to reinforce listening before fixing. Afterward, try reflecting on the conversation with a prompt like, “Based on this conversation, which of my questions led to new understanding, and which ones shut it down?”

4. Show Your Thinking With Data

AI can help practitioners articulate what their data can and cannot support before analysis begins, and it can help translate results into clear, plain-language insights afterward. Take it a step further and use it to practice explaining findings to a non-technical leader. Many times, it’s not the sophistication of the analysis that matters, but how well the results are interpreted, communicated, and trusted. AI can support that interpretive work.

5. Run Pre-Mortems Before Implementation

Used thoughtfully, AI can act as a neutral facilitator for pre-mortem discussions. By asking it to assume an improvement fails in six months and explore why, teams can surface risks, behaviors, and failure modes that might otherwise go unspoken. These prompts are not meant to be accepted at face value, but to spark richer discussion among the people with the context and experience to analyze those scenarios appropriately. Most improvements fail quietly over time, and pre-mortems help teams address any fragility before it shows up in the metrics.

6. Deliberately Consider Alternatives

AI is effective at challenging dominant narratives when explicitly asked to do so. CI teams can use it to generate alternative solutions and counterarguments, reducing confirmation bias. The goal is to ensure that confidence is earned through consideration rather than assumed through familiarity.

7. Reframe Improvement Thinking Across Roles and Functions

One of the hardest parts of continuous improvement is translating thinking across roles that use different languages and value different outcomes. AI can help translate.

Maria Fry, who joined us on Quality Time with MoreSteam, calls this “stealth sigma,” and I have heard many CI leaders describe similar approaches: teaching and practicing improvement thinking without ever using Six Sigma language. If this is your situation, try prompts like:

  • How do I explain why this project matters without talking about sigma?
  • Rewrite this rollout message so it doesn't feel like just another initiative.
  • Explain this control chart as if I am presenting it to nurses.

8. Build Confidence in a ‘Safe Space’

AI can build confidence by providing a safe space to rehearse explanations, facilitation, and decision-making. This lowers the barrier to trying new tools and behaviors without shortcutting the learning that makes continuous improvement stick.

This is especially true for champions and project sponsors. If you want them to improve at asking the right questions, start by providing them prompts to review tollgates or project updates and let them work through their thinking with a nonjudgmental AI. Over time, this kind of practice strengthens confidence through repetition.

9. Strengthen Coaching Behaviors

One of the hardest coaching behaviors for experienced CI practitioners is knowing when not to intervene. AI can be a useful place to rehearse responses that keep ownership with the learner, especially in moments of struggle: responding without giving answers, or asking one well-timed question instead of five. Try a prompt like, “Based on this exchange, where did I successfully avoid intervening, and where did I step in too soon? What would a more disciplined coaching response look like next time?”

A Note of Caution

AI is very good at generating outputs. You see them everywhere, and if you are anything like me, it is starting to feel exhausting. I was researching a topic on Google recently, and the top result still contained placeholder text from the AI suggesting a visual to reinforce the post, right in the middle of the blog! The author clearly never reviewed the AI output before publishing. This AI “slop” is so prevalent that Merriam-Webster made “slop” the 2025 Word of the Year. That’s saying something.

Harnessing the velocity of AI without regard for accuracy, usefulness, or judgment is harmful, and if that concerns you, I agree. But AI can also be used intentionally to slow thinking down and help people examine their reasoning instead of outsourcing it. Leaders play a critical role here. Teams will use AI in whatever way leaders reward. If leaders ask only for speed and output, AI will be used to generate answers. But if leaders ask how teams reasoned, what tradeoffs they considered, and what they learned, they create a culture where people use AI as a scaffold for more critical thinking, not a shortcut around it.

AI can't replace authority, experience, or emotional intelligence. It can't do original research. It lacks natural curiosity, and it doesn't come equipped with all the context, nuance, and trade-offs—all those micro-decisions that humans make as they navigate the world. So craft your culture to reward using AI as a scaffold for critical thinking.

Conclusion

When I look back at my own ChatGPT year in review next December, I don't want to see that I asked it to polish 500 documents. I want to see that I used it to strengthen my approach to thinking about problems, considering alternatives, and challenging my own assumptions.

For continuous improvement teams, this matters deeply. CI isn’t about producing answers faster. It’s about helping people see problems clearly, feel respected in the process, and choose to change their behavior. It’s about building better judgment, stronger alignment, and shared understanding across the system. AI can either erode that work or reinforce it.

So the next time you open that chat window, ask yourself: Am I using this to think for me, or to help me think better?

The answer may determine whether you're proud of your next “year in review.”

Lindsay Van Dyne

Vice President of Marketing, MoreSteam

Lindsay Van Dyne is responsible for developing and executing MoreSteam’s marketing strategy. She brings a deep understanding of Lean Six Sigma, having served as MoreSteam’s eLearning Product Manager for the company’s comprehensive suite of Yellow, Green, and Black Belt courses. Over the years, she has attended dozens of industry conferences, webinars, and workshops, gaining firsthand insight into the evolving needs of continuous improvement professionals.

Her marketing experience includes technical aspects of search engine optimization (SEO), digital content strategy, lead generation, website development, event management, and partner relationships. Lindsay holds a B.S. in Chemical Engineering from the University of Notre Dame and a B.S. in Computational Physics and Mathematics from Bethel College.
