Why some decisions need a human in the room—even when AI gets it right


What gets lost when AI makes the call.

In August 2025, staff at Australia’s ANZ Bank learned they were being fired after being sent an automated email. The message, which the staff received before any manager had spoken to them, included instructions on handing back their company laptops. ANZ apologized, offered psychological counselling, and called it a scheduling error. The Finance Sector Union of Australia had a slightly different description: its president called it “a disgusting way for workers to learn about job cuts.”

ANZ’s error was accidental, but the underlying logic was typical. Across industries, AI is being inserted into decision processes that were once exclusively human territory, such as performance evaluations, hiring screening, workforce planning, and customer interactions. The logic driving these decisions is straightforward. The organization asks, “Can AI do this as well as or better than a human?” In a growing number of domains, the answer is yes, or it very soon will be, so the company goes ahead and automates.

But that question, on its own, is dangerously incomplete. It treats every human role as essentially reducible to algorithmically analyzable components—units of cognitive labor that a machine can perform faster and more consistently.

Humans do perform this kind of cognitive work as part of their decision-making. But, and this is the critical point, human judgment often also involves an interpersonal dimension—the work of engaging with another person and their distinct lived reality. The ANZ Bank case throws this dimension into sharp relief. The employees who were fired may well have been angry or upset about losing their jobs. But what made the story noteworthy was something else—they had a baseline expectation that another human would look them in the eye and share the bad news with them. Learning that they were being fired through an automated process added additional indignity and pain to an already difficult experience.

The fact that AI is so good at cognitive processing and is improving so rapidly is forcing organizations to confront this distinction between the cognitive and interpersonal labor for which humans are currently responsible. Getting automation right requires thinking clearly about both elements and understanding what can truly be replaced by a machine and what cannot.

LET AI DO THE COGNITIVE WORK WHERE IT CAN DO IT BETTER

To see clearly where humans are irreplaceable, we first need to be honest about where they are not. We can begin by acknowledging that AI genuinely outperforms human cognition in many fields and that the domains in which this is true will only increase. In 2023, researchers in Sweden reported that in a randomized trial of mammography screening involving more than 80,000 women, an AI-supported reading protocol could safely match standard clinical practice. In the U.S., the FDA has authorized autonomous AI systems to detect diabetic retinopathy without a specialist interpreting every image. In fraud detection, logistics optimization, and routine underwriting, AI systems already process information with a speed and consistency that humans cannot match.

This doesn’t mean AI is better at all cognitive work—it isn’t, not yet. But the trajectory is clear enough that leaders need to plan accordingly. And in the domains in which AI demonstrably outperforms human cognition, clinging to the human version isn’t responsible. It’s costly. And in some contexts, like healthcare, it can be dangerous.

But not every human role is cognitive labor waiting to be automated.

THE DECISIONS WHERE PRESENCE IS THE POINT

Every organization makes decisions and conducts activities that shape people’s lives—the performance review that determines someone’s future career trajectory, the termination that ends their livelihood, the promotion that passes them over, the conversation in which a longtime client learns the relationship is changing. All of these involve a large dose of cognitive labor, with information being processed, evidence weighed, and conclusions reached.

But the person sitting on the other side of these decisions isn’t just affected by the quality of the analysis. They are also affected by whether another human being engaged with their situation. When a manager delivers difficult feedback, what matters is not only that the assessment is accurate. It also matters that someone who knows them, who has observed their work, who carries some responsibility for their development, looked at their situation and took it seriously.

This is what was lost in the ANZ case. The decision to cut those roles may have been analytically sound, but no one sat across from those employees or took responsibility for the conversation—and that absence, not the decision itself, caused the damage.

AUDITING FOR COGNITION VS. ENGAGEMENT

This distinction gives leaders a practical framework to start from when planning automation initiatives. For any role or process being considered for automation, work through the following steps.

Step 1: Map the work. Start by resisting the temptation to treat the role as a single unit. Most roles that involve consequential decisions about people are not purely cognitive or purely interpersonal—they are both. A manager conducting performance reviews is processing data about output, attendance, and peer feedback, but she is also sitting across from someone whose career trajectory depends on the conversation. The first task is to map these roles honestly, identifying which components are cognitive processing and which involve genuine engagement with the people affected.

Step 2: Identify where AI outperforms. For the components that are genuinely analytical—processing information, weighing evidence, identifying patterns, generating recommendations—test whether AI does the cognitive work better than the human. In many cases, it will. Flag these as candidates for automation. But this step identifies capability, not policy. What to actually automate depends on what comes next.

Step 3: Decide what the human needs in order to engage for real. This is the step that most organizations skip, and it is where the damage gets done. If you have determined that part of the process involves genuine engagement with the person affected—that someone is owed the experience of another human being taking their situation seriously—then you need to ask what the process requires if the human component is to show up as a real participant rather than a messenger.

The answer will often mean clawing back some of the cognitive work you identified in Step 2, even where AI does it better. A manager who delivers a performance review needs to have actually reviewed the employee’s work and formed their own assessment—not merely received an algorithmic summary five minutes before the meeting. The human doesn’t need to do all of the cognitive work. But they need to have done enough to make sure that their engagement is substantive rather than performative.

This involves a genuine trade-off, and leaders need to see it clearly. Keeping the human in the cognitive loop, even partially, will make the process less efficient than full automation. That is the price of real engagement, and if you have determined that engagement matters, then it is a price worth paying. The alternative is the ANZ Bank scenario: a process that is optimally efficient and utterly corrosive.

Step 4: Apply the transparency test. Once you have designed the process, pressure-test it with this question: if the person on the receiving end had full knowledge of how this decision was made and what role the human actually played, would they feel genuinely engaged with—or would they feel like they were being managed by a machine wearing a human face? This is not a hypothetical. Employees and clients are perceptive. They can tell when the person sitting across from them is a participant in the decision and when they are a delivery mechanism. If the transparency test makes you uncomfortable, you haven’t designed real engagement. You have designed theater—and the people on the receiving end will know it, even if they can’t articulate exactly why.

Most leaders who work through these steps find the same thing: they are wasting human effort on cognitive work where AI should lead, and under-resourcing the human roles where genuine engagement is the whole point. The framework doesn’t slow automation down. It directs it to where it belongs.

WHERE THE HUMAN MATTERS

As AI takes over more of the cognitive work of management, the temptation will be to let it take over all of it—to optimize every process for speed and consistency and then station a human at the end to deliver the result. Leaders who do this will discover that they have not automated a function but hollowed out a relationship. The real leadership challenge is not deciding whether to automate but deciding where to stop—where to deliberately accept a less efficient process because the person on the other side of the decision is owed more than an algorithm’s output delivered with a human smile.

[Photos: Ben Sweet/Unsplash; Luke Jones/Unsplash]

Original article @ Fast Company
