The looming AI risk: Automating middle management destroys critical ethical layer

by Faisal Hoque, Pranay Sanklecha, and Paul Scade

Companies seeking to automate middle management risk eliminating capabilities that algorithms cannot replace. Leaders must identify tasks requiring practical and ethical judgment.

In 2023, an investigation by ProPublica revealed that Cigna, one of America’s largest health insurers, had built a system that denied certain insurance claims without any human review of the patient files. First, an algorithm flagged mismatches between patient diagnoses and a list of approved procedures. When a patient’s doctor had ordered a procedure that was not on the approved list, the claim was routed for denial. Cigna’s medical directors – physicians employed specifically to exercise their clinical judgment by reviewing claims – signed off on the algorithm’s decisions in batches. One doctor denied over 60,000 claims in a single month. On average, physicians spent just 1.2 seconds on each case. “We literally click and submit,” one former Cigna doctor told ProPublica. “It takes all of 10 seconds to do 50 at a time.”
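
The core of the system ProPublica described is a simple membership check. The sketch below is an illustration of that kind of rule-based matching only, not Cigna’s actual code: all names, codes, and data structures here are hypothetical. What the sketch makes visible is exactly what the article criticizes – nothing in the logic ever opens the patient file.

```python
# Illustrative sketch only: a minimal reconstruction of the *kind* of
# rule-based check described in the article, not any insurer's real system.
# All names and codes (APPROVED_PROCEDURES, Claim, etc.) are hypothetical.

from dataclasses import dataclass

# Hypothetical mapping: diagnosis code -> procedure codes the insurer
# considers appropriate for that diagnosis.
APPROVED_PROCEDURES = {
    "J45.20": {"94010", "94060"},   # e.g. asthma -> spirometry tests
    "E11.9": {"82947", "83036"},    # e.g. diabetes -> glucose, HbA1c
}

@dataclass
class Claim:
    claim_id: str
    diagnosis_code: str
    procedure_code: str

def flag_for_denial(claim: Claim) -> bool:
    """Flag any claim whose procedure is not on the approved list for its
    diagnosis. Note what is absent: no reading of the patient file, no
    clinical context, no mechanism for exceptions."""
    approved = APPROVED_PROCEDURES.get(claim.diagnosis_code, set())
    return claim.procedure_code not in approved

claims = [
    Claim("A1", "J45.20", "94010"),  # on the list -> passes
    Claim("A2", "J45.20", "93000"),  # not on the list -> denial queue
]
denial_queue = [c.claim_id for c in claims if flag_for_denial(c)]
print(denial_queue)  # ['A2']
```

Everything that makes a claim hard – the ordering physician’s reasoning, the patient’s history – is simply outside the function’s inputs, which is why batch sign-off adds nothing.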

The goal of the system was clear. Paying humans to carefully assess and judge claims is expensive. Allowing an algorithm to make the decision instead is much faster and brings with it significant cost savings. Once the company believed the algorithm could do the job effectively, the only reason for retaining humans in the decision-making loop was regulatory compliance.

The logic behind Cigna’s system is one that many companies would happily apply to their entire middle management layer: replace the expensive humans with machines that can make the same rule-based judgments at a far lower cost. This is not a hypothetical threat. Gartner estimates that half of middle management positions could disappear at many companies by the end of this year as AI is deployed more widely. Many companies are now looking to flatten their hierarchies as they explore AI-driven automation, and middle management roles already account for a growing share of white-collar layoffs.

The reason middle managers seem so vulnerable to algorithmic replacement is because their roles are often viewed through a mechanistic lens. There is a long-standing distinction in management thinking between leadership and management. On this view, leaders make choices that drive change: they set direction, define strategy, and determine what the organization should become. Managers, by contrast, are responsible for the systematic execution of those choices. Their job is to take what leadership has decided and make it happen – translating strategy into operations, coordinating resources, ensuring compliance.

However controversial the distinction between leadership and management may be, there is no doubt that many management functions are conceived of as essentially mechanical. It is easy to think of the ideal manager as pursuing the perfectly optimal path toward a pre-determined goal. And if this is indeed how the management function works, then the conclusion seems inescapable: to the extent that algorithmic systems can execute tasks more quickly, consistently, and cheaply than humans, the human manager becomes redundant.

This view is profoundly misguided – and not merely as a matter of theory. The reduction of management to the optimally efficient performance of mechanical tasks is ethically dangerous. Moreover, it is frequently bad for business. The middle management role often involves making the kind of judgment that cannot be performed by an algorithm. These judgments carry distinctive ethical responsibilities, and leaders who fail to recognize this risk hollowing out both the effectiveness and the moral integrity of their organizations.

Why management cannot be reduced to an algorithm

Many managerial decisions are not optimization problems; they are judgment problems. The view that management is a mechanical activity – one that can be performed by algorithmic systems without meaningful loss – rests on an assumption that is false. This is the assumption that the optimal execution of strategy is a fully determined process that can, in principle, be specified completely in advance. There are at least two reasons why this assumption fails, one practical and one ethical.

The practical reason is that many management decisions involve weighing considerations that cannot be measured precisely or objectively and that cannot be compared to one another using some universal scoring system. In these situations, humans use their faculty of judgment. When a manager decides how to deliver critical feedback to an employee, they are balancing a highly complex web of interrelated considerations: the employee’s need to hear the truth, their emotional state that day, the relationship the manager hopes to maintain, the signal the conversation sends to the rest of the team, and a whole host of other factors. There is no formula for making such a decision. It is not a measurement problem that better data could solve; it is intrinsic to the situation that such factors must be weighed through judgment, not calculation.

The ethical reason stems from a basic fact: management decisions frequently have human consequences. When a customer service policy affects whether a vulnerable person receives help, when a staffing decision determines whether someone keeps their job, or when an algorithmic recommendation is applied to a real human case – these are not merely operational matters. They affect the lives and welfare of real human beings.

As such, these decisions fall into the realm of ethics – and ethical choices cannot and should not be outsourced to algorithms. They must remain the privilege and the responsibility of human beings. As an IBM training manual put it nearly 50 years ago: “A computer can never be held accountable, therefore a computer must never make a management decision.” This principle extends to any decision that affects human lives.

This point applies even to those who believe, as Milton Friedman did, that a corporation should be bound only by legal constraints and some minimal, poorly defined notion of ‘ethical custom.’ Ultimately, it doesn’t matter whether you think your company has moral obligations that extend beyond the most basic – what matters is that your customers, employees, and regulators do, and they will act accordingly if you fail to meet them. The algorithm cannot anticipate every situation in which applying its logic will provoke public outrage, regulatory scrutiny, or legal liability. Human judgment is required not only to do the right thing, but also to recognize when doing the wrong thing will be costly – and those two considerations are not always distinguishable in practice.

The middle manager as ethical champion

The fact that middle management has an unavoidably ethical dimension means that it is crucial to understand the ethical obligations that come with the role.

Some of the ethical obligations that middle managers have apply to them because they apply to all humans: don’t lie, don’t steal, don’t harm others unnecessarily. These are clearly important, but they are also relatively straightforward to specify. We will simply note that middle managers have these duties, and then turn to the more interesting question of the special duties middle managers have by virtue of the position they occupy.

What are special duties, and how do they differ from the general duties of all human beings? Consider the example of a judge. While in the courtroom, she has a duty to be impartial; once she is at home, her role as a mother demands partiality and preference for the interests of her children. The special duties flow from the position itself: the role creates the duty.

Just like judges or parents, middle managers have certain special duties that arise from their specific roles. We can think of those duties in terms of three categories:

  • Ownership of implementation. When a middle manager applies a policy or acts on a system recommendation, they must own that choice. They cannot hide behind “I was following strategy” or “The algorithm told me to do it.” Cigna medical directors cannot escape moral responsibility by pointing to the system that generated the denials. The decision to click “approve” on a batch of 50 cases without review was their decision, and the consequences were consequences they helped bring about.
  • The duty of judgment. Not every decision requires agonizing deliberation. Often the job is to keep the machinery running by faithfully executing the strategy that the organization’s leaders have settled on. The ethical skill lies in knowing when to execute and when to stop – recognizing when something is a routine case the policy was designed for and when it is an edge case the policy’s authors never anticipated.
  • Serving as the organization’s conscience. Middle managers are positioned to see what strategy and systems do to people – employees, customers, and communities. They are direct witnesses to the gap between intention and impact. The middle manager who sees a problem and speaks up makes it possible for the organization to respond. The middle manager who stays silent – whether from fear, convenience, or failure to recognize the significance of what they are witnessing – allows harm to continue. They become complicit in what the organization is permitted to ignore.

The thread that connects the special duties of middle management may be summed up in one word: agency. Much more than the traditional view realizes, and perhaps much more than even many middle managers realize, effective and ethical middle management requires independent thought, judgment, and action – even when the easier option would be to do nothing.

Leaders as moral architects

Leaders must recognize that middle managers are not mechanical executors of strategy. They are co-creators of it. Middle management is the layer where abstract principles become concrete action, and where the organization’s conscience resides. If leaders design systems that treat managers as button-pushers – optimizing for speed and mechanical obedience to rules above all else – they will hollow out both the effectiveness and the ethical integrity of the organization. Instead, leaders must design systems that support and enhance the agency of middle management.

Concretely, this means:

  • Evaluating judgment, not just efficiency. Assess middle managers on the quality of their decisions and their willingness to surface problems, not just on throughput and compliance metrics.
  • Granting real authority to override. Give middle managers genuine power to question and override algorithmic recommendations, and make it clear that exercising this authority appropriately will be valued, not punished.
  • Protecting time for deliberation. Judgment requires time. Systems that eliminate time eliminate judgment. Build space for middle managers to pause and think, rather than designing workflows that reward only speed.
  • Designing systems that support judgment rather than bypassing it. When implementing algorithmic tools, ask whether the system preserves the space for human judgment or engineers that space away. If middle managers are only there to click “approve,” they are not really in the loop.
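
The last two points can be made concrete in system design. The sketch below is one hypothetical way to keep an algorithmic recommendation advisory rather than final – the names and structure are assumptions for illustration, not a description of any real system. The recommendation carries its rationale so the human can evaluate it, and an override is a first-class, logged outcome rather than an exception path.

```python
# Illustrative sketch: keeping the algorithm's output advisory.
# All names (Recommendation, Decision, decide) are hypothetical.

from dataclasses import dataclass

@dataclass
class Recommendation:
    case_id: str
    action: str          # what the algorithm suggests, e.g. "deny"
    rationale: str       # why, so the human can actually evaluate it

@dataclass
class Decision:
    case_id: str
    action: str
    overridden: bool
    reason: str

def decide(rec: Recommendation, manager_action: str, reason: str) -> Decision:
    """The manager's choice is the decision of record. Overrides are
    recorded with their reason, not blocked or discouraged; a decision
    with no stated reason is rejected, forcing deliberation."""
    if not reason:
        raise ValueError("every decision needs a recorded reason")
    return Decision(
        case_id=rec.case_id,
        action=manager_action,
        overridden=(manager_action != rec.action),
        reason=reason,
    )

rec = Recommendation("C7", "deny", "procedure not on approved list")
d = decide(rec, "approve", "edge case: procedure clinically justified")
print(d.overridden)  # True
```

The design choice worth noting is that agreement and override pass through the same function with the same required fields – the system cannot tell the manager to “just click approve,” because every path demands a reason.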

When leaders fail to treat middle management in this way, change efforts are highly unlikely to succeed. Organizational change requires middle managers to do more than implement new processes – it requires them to interpret the changes needed and apply them in a thousand particular situations, to adapt the new processes to local realities as they emerge minute by minute, and to bring their teams along with them. Middle managers who are treated as cogs lose the adaptive capacity that change requires.

The pressure to automate the middle layer will only intensify as AI systems grow more capable – which makes it all the more important that leaders understand what is at stake.

Retaining judgment and ethics

AI will transform middle management – and some of that transformation is overdue. There is no reason to preserve human involvement in tasks that are genuinely mechanical, and no virtue in retaining inefficiency in the name of job preservation. But leaders must make hard-nosed distinctions between the parts of the middle management function that can be handed to algorithms and the parts that cannot. As machines take over the routine – the scheduling, the processing, the pattern-matching – what remains is precisely what matters most: the judgment that completes strategy, the contextual sensitivity that reads shifting situations, the conscience that asks whether what can be done should be done.

Every organization deploying AI must ask: are we building systems that preserve the space for judgment, or are we engineering it out? Are our middle managers genuinely in the loop, or are they simply there to absorb blame when something goes wrong? The organizations that answer these questions honestly – and design accordingly – will be the ones that harness AI’s power without losing their moral compass. The ones that don’t will learn, as Cigna did, that humans in the loop are only as good as the loop allows them to be.

Original article @ IMD.
