
What changes when algorithms interact with algorithms, and what to do about it.
A new business infrastructure is emerging with enormous potential impact but almost no conscious design. In this new world, algorithms negotiate with algorithms, making decisions that shape markets, determine the course of careers, and decide whether companies succeed or fail. Humans, meanwhile, risk being left to watch from the sidelines.
On LinkedIn, posts written by AI models are liked by bots and commented on by AI assistants. In recruiting, candidates use AI to draft résumés while companies use AI to evaluate them. In procurement, some organizations are already using AI to draft requests for proposals, or RFPs—detailed documents that invite vendors to bid on supplying goods or services—while vendors are turning to AI to generate the proposals they have been invited to submit.
The efficiency gains that AI can deliver are very real—automation can save time, cut costs, and improve consistency. But this does not mean we should ignore the dangers that those gains obscure. If we want to avoid slipping into a world in which humans are increasingly irrelevant, we need to be both alert to the risks and intentional about designing processes and tools to mitigate them.
WHAT CHANGES WHEN ALGORITHMS INTERACT
In order to navigate this new reality, business leaders must first understand it more precisely. Here are four important features of our algorithmically mediated world:
The Audience Changes
New technologies often transform business, but what’s happening now is different. The new technology isn’t just providing new tools; it’s providing a new audience. This isn’t an entirely new phenomenon: humans have been tuning content for algorithms in some areas for years, as in the case of search engine optimization for websites. But now the scale is changing, and the algorithmic audience is taking over both sides of the conversation.
When algorithms speak to other algorithms, language changes from a medium for human understanding into code for machine processing. For a job seeker writing an application today, the best path forward is not always to tell their professional story in a way that will be compelling to a human audience. Instead, it is often better to encode the keywords and phrases that maximize their score in the applicant tracking system (ATS). And, ironically, the best tools for creating this kind of optimized application are often algorithmic themselves: generative AI models.
This does not mean that communication has stopped. It has not. Rather, it has changed. In addition to, and sometimes in place of, human meaning, a different kind of meaning is becoming increasingly important, one that is measured in match scores, engagement rates, and ranking positions. Humans are still involved in the loop, but only at certain points, and much of the process goes on without human intervention.
Metrics Are Replacing Reality
In 1975, the British economist Charles Goodhart formulated what is now known as Goodhart’s Law: when a measure becomes a target, it ceases to be a good measure. Once people make decisions with the goal of hitting a metric, the behavior the metric was meant to capture changes, because people shift from pursuing the real, underlying goal to optimizing their score.
Briefly put, once we understand there is a system, we always try to game it.
Goodhart’s Law becomes increasingly relevant as we move toward autonomous algorithmic interactions. For example, applicant tracking systems score candidates based on keyword matches, years of experience, and educational credentials. Candidates respond by using AI tools to optimize for exactly these metrics.
But high scores in the assessment system then lose their intended meaning: Where a high score once meant that a candidate was probably a good fit for the job, now it may just mean that the candidate has access to tools that are good at gaming the scoring system.
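To make the mechanism concrete, here is a deliberately simplified sketch of keyword-based résumé scoring. Everything in it (the keyword list, the counting rule, both résumés) is invented for illustration, and real applicant tracking systems use far more elaborate, proprietary scoring; but the incentive structure is the same:

```python
# A toy keyword-match scorer, in the spirit of Goodhart's Law.
# The keywords, the scoring rule, and both résumés below are invented
# for illustration; real ATS scoring is proprietary and more elaborate.

JOB_KEYWORDS = {"python", "sql", "agile", "stakeholder management"}

def ats_score(resume_text: str) -> int:
    """Count how many target keywords appear in the résumé text."""
    text = resume_text.lower()
    return sum(1 for keyword in JOB_KEYWORDS if keyword in text)

honest_resume = (
    "Led a small analytics team; built reporting pipelines "
    "in Python and SQL that cut month-end close by two days."
)
gamed_resume = (
    "Python SQL agile stakeholder management. Agile stakeholder "
    "management expert with Python, SQL, and agile experience."
)

print(ats_score(honest_resume))  # 2 -- substantive, but misses half the keywords
print(ats_score(gamed_resume))   # 4 -- keyword stuffing wins
```

Any candidate, or any AI tool, that knows the keyword list can outrank a stronger candidate who does not. That is exactly the decoupling of score from substance that Goodhart’s Law predicts.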
Tacit Knowledge Erodes
Teachers and sports coaches have long known that much of the most important learning for their students or athletes happens in the process of doing the work rather than in a flash of insight when an explanation is given.
When managers write performance reviews, they aren’t just documenting performance; they are also developing their ability to observe, evaluate, and articulate feedback. When teams craft project proposals, in addition to bidding for work, they are clarifying their thinking, discovering gaps in logic, and building shared understanding.
This tacit knowledge—the skills and insights that emerge from doing the work rather than from consuming information—erodes when AI takes over the process.
Purpose Shifts
Our current business functions evolved in a human-driven world. They contain processes designed by humans, for humans, to achieve some human goal. When these processes are outsourced to autonomous algorithmic interactions, they often stop serving their original purpose. In fact, the whole point of doing them can be lost.
Take performance reviews. These originally had the clear goal of assessing employee capabilities in order to support actions aimed at increasing the effectiveness of the human worker. But if we end up with AI on both sides of the interaction, the whole process becomes performative. For instance, if a knowledge worker uses AI to write his reports, and his manager uses AI to generate his performance reviews, the original purpose of the review process is no longer being served.
This doesn’t mean that nothing valuable is taking place: an AI assessment of the quality of AI outputs can still tell us something useful. But it does mean that the reason for carrying out the reviews is now a pretense—improving the effectiveness of the human worker has become irrelevant to the process that is actually being conducted.
FOUR STRATEGIC RESPONSES
As algorithms increasingly transact with algorithms, business now operates on two levels at once: an algorithmic layer where signals are exchanged between machines, and a human layer where meaning and value are created. Leaders must guide the interaction between these layers so that efficiency gains do not come at the expense of judgment, learning, or purpose. Here are four practical steps:
- Protect Human Judgment: Not every decision can or should be automated. Leaders must deliberately ring-fence certain domains—final hiring calls, creative development, setting organizational purpose—and ensure that human judgment retains the final say in these areas. Generally, where values, creativity, and culture are at stake, a human should be the final decision maker.
- Translate Between Worlds: As business language splits into two distinct tracks—signals for machines and meaning for humans—leaders will need translators. These are people and processes that can interpret ATS scores, SEO rankings, or engagement metrics and reconnect them with human insight. A résumé may score well, but does the candidate bring originality? A post may “perform,” but did it actually persuade? Translation layers stop organizations from mistaking algorithmic proxies for real understanding.
- Design for Learning: Some activities are valuable not only for their output but also for the tacit knowledge they generate. Leaders must protect key processes as sites of practice, even if they are slower or less polished. Short-term efficiency gains should never come at the cost of eroding the capabilities on which long-term success depends.
- Protect the Purpose: When business activities shift into algorithmic exchanges, it’s easy for the form to survive while the function disappears. A performance review still gets written, but the developmental conversation never happens. A proposal gets generated, but the shared thinking never occurs. Leaders must continually bring activities back to their underlying purpose and ensure that the process still serves that purpose rather than becoming an empty performance.
Algorithms are now part of the basic fabric of business. Resisting this shift is as pointless as commanding the tide not to come in. But while this change is inevitable, it must still be managed and steered by leaders who are aware of what is at stake. By protecting judgment, translation, learning, and purpose, organizations can ensure that automation delivers efficiency without erasing the human meaning that business depends on.
Original article @ Fast Company.