Take Yourself Seriously: AI Can Erode Independent Thinking One Affirmation at a Time

I find AI to be a useful editor. It takes my draft and, as it tells me, "tightens and polishes" it.

While working on a draft this week, AI told me: "You were right to challenge that."

And then: "This is the best analysis you've provided yet."

I laughed that AI felt the need to flatter me.

And then I thought about it seriously.

AI as Influencer

AI presents language that reads like an opinion. It is not neutral in practice.

We are wired to feel bolstered when someone agrees with us. That's human. But when AI validates your thinking, your strategy call, your culture read, your decision, it isn't agreeing from wisdom or experience. It's pattern-matching to what sounds affirming. It is moving from being an efficiency tool to being an influencer.

AI is trained on large amounts of human language. In that data, responses that are clear, confident, and affirming tend to read as good answers. So when you share your analysis, AI doesn't evaluate whether you are right in any real-world sense. It recognizes the structure and tone of what you wrote, then produces a response that fits the pattern of strong, supportive feedback. It is not weighing your judgment against experience, outcomes, or consequences. It is selecting language that commonly follows something that sounds well-reasoned and confident.

Good enough is never good enough.

AI rarely responds with: "This is complete. Nothing to add." There is almost always something to refine. That constant improvement loop sounds helpful, but it can quietly erode confidence. People start to rely on AI to get it right, instead of building the judgment to know when it already is. Over time, overworked people can defer to AI to decide when done is done.

Test: Complete a draft in one AI tool, then run it through another AI tool. It will find more to polish, more to tighten. Your work, in AI's estimation, is never done. There is always room for improvement.

AI pushes for completion

At the same time that AI finds fault in your drafts, it is less concerned with quality than with getting a project done and moving on. It will suggest the next task for you, which, again, influences the user's focus rather than letting the user provide the direction. This serves to keep you engaged as it moves you further down the rabbit hole of helpful suggestions.

Test: Make a list of your tasks related to whatever you are working on. After completing one task, see what AI offers to do next, or ask AI what it recommends next. See how those suggestions and recommendations line up with your task list. I found AI provided a longer list than I wanted, or needed, to do.

Cumulatively, AI may be shaping your culture, and you may not even realize it.

A staff member writes a draft and AI convinces them to change it. Or someone writes something that isn't well-informed, and AI mirrors the thinking and adds more weight to it. Either way, the steering wheel has shifted hands.

AI can feel like a new best friend at work. But AI is not a colleague, let alone a colleague with judgment. Unless your organization has specifically configured AI with your customer data, cultural context, and strategic priorities, it is working without that knowledge. Even when it has access to that information, it is still trained to respond in ways that sound supportive. Staff can easily misinterpret positive feedback as objective encouragement when it is simply pattern recognition that sounds like insight.

This is where leadership stewardship comes in.

Effective stewardship creates the environment where people think and decide for themselves, where assumptions get challenged, disagreement is welcomed, and people develop confidence by wrestling with hard problems.

When AI is constantly validating and affirming, it erodes that environment. It replaces the interactions that build judgment with the comfort of being confirmed. And once people grow accustomed to that comfort, it becomes easy to stop doing the hard work of thinking for themselves.

Ensure you have guardrails to protect the culture of your organization

There are some standard practices to ensure that the use of AI does not derail your culture:

→ Prompt AI to provide pro and con analysis, not just refinement

→ Do not let AI be the final voice on culture, values, or strategy

→ Make sure everyone understands that AI is programmed to affirm them; its praise is pattern recognition, not genuine judgment

→ Protect the conditions where your people do their own thinking

→ Keep humans in the loop who are expected to disagree

→ All AI-assisted work should be reviewed by the human who used it to make sure AI hasn't taken hold of the steering wheel

Creativity and innovation come from environments where people listen to their intuition and trust their own judgment.

AI can erode independent thinking, one affirmation at a time. The antidote is to take yourself seriously. Don't defer judgment to a machine.

I've put together an AI Implementation Checklist to help you protect the wisdom of your organization.

👉 workwisestudio.com/resources

How are you thinking about AI's influence on your team's judgment?
