To Understand AI We Must Understand Ourselves


The core value of humanity is our ability to make choices about what matters to us.

AI will have all manner of effects over the course of its development.

Faced with this, it is important to ask a simple question: should we care about all of those effects?

In short, the answer is no.

But how do we get there?

Let’s imagine that one of the effects of AI is that autocomplete functionality increases the usage of the letter “s” in English-language text and chat messaging by 8%. Let’s stipulate that this change is completely irrelevant to the interests of human beings or any other living beings or any systems that affect sentient beings. Let’s further stipulate that the fact that the letter “s” appears 8% more frequently also has no effect on anything relevant to the interests of sentient beings.

Now, from the perspective of human beings, the only reasonable response to The Great S Revolution is: So what? Who cares? The frequency of the use of this letter just doesn’t matter. We simply end up residing in a slightly more sibilant environment.

This thought experiment shows something that is almost banal but that is nonetheless profoundly important in figuring out how to respond to AI. AI will have a bunch of effects, and not all of them will matter to human beings. And even among the effects that do matter, there will be differences in how much they matter. Any wise response to AI must take both these things into account.

To Understand AI We Must Understand Ourselves

Put simply, if we want to understand what matters about AI, we first need to gain some clarity about what matters for humans and about how much different things matter. Or, to put it another way, we need to start trying to understand what is valuable to human beings and about human beings. If we don’t do this, it is almost inevitable that we will waste a great deal of time, energy, and resources on actions and policies that do little or nothing to unlock human potential. Worse, we will likely fail to protect what does matter about humanity and to humanity.

To start ourselves down this essential road, we need to engage with even more fundamental philosophical questions:

    • What is a human being?
    • What does it mean to be human?
    • What is the proper aim of human life?
    • What should human beings be striving towards?

Our answers to these questions will have an enormous effect on what we think we need to do about AI.

The ancient Greek philosopher Aristotle famously defined man as a rational animal. Let’s say, then, that this is the “essence” of human beings and that what is special about human beings is our capacity for rational thought and our ability to act based on rationality rather than impulse. If this is what we believe, then when we think about how to respond to AI, we will place enormous importance on what we need to do to make sure that human beings continue to exercise and cultivate those capacities. For instance, we may argue that it is necessary to keep AI companions out of educational settings, because it is important to make sure that children develop the capacities for rational thought that are the essence of humanity rather than always and immediately turning to an AI assistant for answers.

On the other hand, let’s say that instead of thinking of human beings as rational animals, we believe that the secret sauce of being human is our ability to feel. Further, let’s say we narrow this down to our ability to feel love. If this is what (we believe) is the most special and important thing about human beings, if this is the feature of human beings that we need to protect, then we may not care at all about AI assistants helping children with mathematical calculations. On the humans-as-lovers model, the ability to manipulate symbols according to certain conventions would be seen as irrelevant to the value of humanity. We would instead see the opportunities and threats of AI through the lens of what it is about AI that can help and harm human beings in developing and exercising their capacity to love.

In order to respond well to AI:

    • we need to understand what to respond to,
    • we need to understand what effects of AI will be relevant, and
    • we need to understand their relative importance.

This requires understanding more than AI. It requires understanding human beings and the things that matter to human beings. And to understand this, we need to go back one step further and ask: What does it mean to be human?

We often talk about AI as if it is “Coming soon to a reality near you!” Just around the corner is tech that will improve the accuracy of cancer diagnoses, power self-driving cars, control bionic limbs, and provide the backbone for smart, adaptive work environments. In a couple of years, we can expect AI to be writing readable books, planning corporate strategies, and providing all sorts of direct assistance to individuals. But it’s not just a coming attraction. The truth is, it’s already here. AI hasn’t up-ended society just yet or fundamentally altered our experience of what it is to be human, but it is hard at work in our daily lives.

Most of us have been carrying some basic AI functionality around in our pockets for years now. Facial recognition technology to unlock our phones, the autocomplete tool in messaging apps, and the algorithms that suggest the next song or video on a streaming platform all rely on some form of machine learning. For the most part, the way AI has moved into our lives so far seems pretty innocuous. It simply enhances or extends existing human capabilities, letting us do things a little faster or a little more effectively. These changes may raise some ethical issues here and there, but there’s nothing particularly dramatic that we need to worry about. No one is really concerned about losing their humanity to Google Maps or Netflix’s algorithm, right? It’s the big-picture changes that should worry us: the conscious machines enslaving us or the intelligent nanobots reducing the planet to a fine paste as they replicate endlessly.

But if we think a little harder about the apparently trivial ways in which AI intersects with our lives, it quickly becomes clear that there are bigger issues at play. Take autocomplete, one of the most ubiquitous and apparently innocuous functions of all. Who could possibly object to their phone suggesting the most likely word to use next in a sentence, saving the user dozens of taps on their screen for every message they send? Not us.

We like the convenience as much as the next person. But we do want to point out something important here. Autocomplete makes life a little bit easier. And that’s great. But in making things easier, it also creates a motivating force that feeds into our decision-making processes. It is easier to use this functionality than to not use it, and this convenience gives us a reason to accept the word offered by the algorithm rather than expressing ourselves in a more detailed, more nuanced, or less statistically commonplace way.

Sometimes we’ll be laser-focused on using a certain phrase, so we’ll ignore the suggestion that comes up. If we’re really punctilious about our prose, we might have several goes at typing out a word even if the spelling is challenging. But often … very often … we’ll accept the easy option rather than spending ten or twenty times as long to get our preferred word just right. Without making any conscious decision, and without anything being forced on us, we find our use of language constrained, variety and precision sacrificed on the altar of convenience.
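To make the “statistically commonplace” point concrete, here is a minimal sketch of the idea behind next-word suggestion, assuming a toy bigram-frequency model. (Real keyboards use far more sophisticated neural language models; the corpus and names here are purely illustrative.) The suggestion engine simply offers whatever word most often followed the previous one in its training data:

```python
from collections import Counter, defaultdict

def build_model(corpus: str) -> dict:
    """Count, for each word, how often each next word follows it."""
    words = corpus.lower().split()
    following = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1
    return following

def suggest(model: dict, prev_word: str) -> str | None:
    """Offer the single most frequent continuation, or None if unseen."""
    counts = model.get(prev_word.lower())
    if not counts:
        return None
    return counts.most_common(1)[0][0]

# A hypothetical toy corpus; any body of text would do.
corpus = "i am happy to see you . i am glad you came . i am happy today"
model = build_model(corpus)

# "happy" follows "am" twice, "glad" only once, so the commonplace
# word wins -- nuance loses to frequency by construction.
print(suggest(model, "am"))  # -> happy
```

The point of the sketch is structural: whatever we were about to type, the model’s offer is, by design, the most ordinary continuation, and accepting it is always the path of least resistance.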

And while this is only a minor instance, it points the way to how scaled-up interactions with AI systems could lead to much more significant constraints. Once we begin thinking down this path, we quickly find ourselves confronted with questions about the value of freedom versus efficiency, about what counts as help versus what counts as control, about which uses of AI are truly enhancements of human beings and which will ultimately end up harming us.

As soon as we start thinking in any serious way about AI in even its most trivial forms, we immediately become embroiled in some very deep questions – philosophical questions – about human beings, about values, and about why we make the choices we do. We will argue that these philosophical questions are both foundational and deeply practical. They are foundational because our philosophical views regarding the nature and value of humanity are the basis from which we will think about how to respond to AI. And they are practical simply because it is necessary to think about them when deciding how to respond well to the emergence of AI.

We will not argue that there is something essential to humanity or something unique about it that needs to be protected or enhanced – that may be the case, but it is not a line we think useful to pursue. Instead, we will make a much more basic claim. We will claim that the core value of humanity is our ability to make choices about what matters to us. When AI enhances this ability, we should actively pursue it; but when AI detracts from this ability, we should flee as fast and as far as we can.

Most of the capabilities AI is likely to develop will fall into a more neutral middle ground. Sometimes, these capacities won’t matter at all to the question of what it means to be human in the age of AI. And sometimes humans will make them matter even though there is nothing intrinsic to them which demands that they must. Ultimately, it will turn out that understanding the importance of AI – and understanding its potential for good and ill – will depend first and foremost on improving our understanding of ourselves.

From my book TRANSCEND.

Adapted/published with permission from ‘TRANSCEND’ by Faisal Hoque (Post Hill Press, March 25, 2025). Copyright 2024 Faisal Hoque. All rights reserved.

Look for my latest book TRANSCEND, available for pre-order now.
