By Lorenzo Colombani

AI is not human - and it's a good thing

Updated: Dec 24, 2024




Rethinking AI and the Limits of Human-Centric Definitions


In What Is It Like to Be a Bat?¹, philosopher Thomas Nagel poses an intriguing question: could we, as human beings, ever experience what the world of a bat feels like?


The visual sonar from The Dark Knight

Sure, Christopher Nolan's The Dark Knight delivered an impressive visual representation of echolocation for Batman. But that is exactly what it is: a visual representation of what we imagine the world of a bat might be like.


Nagel, however, underlines that a bat's subjective experience can only be fully understood by the bat itself. We can dissect its echolocation objectively, but we cannot access what it feels like. Why? Because we do not have the physiology of a bat and the necessary "wetware" to process the information of echolocation the way a bat does. And even if we did, we would still be humans augmented with echolocation organs. Not a bat. This gap between observing a behavior and truly knowing the inner, subjective viewpoint cannot be bridged.


Now, that does not mean that bats are not sentient or intelligent beings. They just experience life differently than we do.


In A Foray into the Worlds of Animals and Humans², ethologist and early cyberneticist Jakob von Uexküll shows that every creature inhabits its own unique world: he calls it the creature's Umwelt (pronounced [ˈʊmvɛlt], "oomvelt"), its personal perceptual and conceptual world. A tick, for instance, is driven mostly by body heat and by chemical signals such as butyric acid, a component of mammal sweat that smells of rancid butter. Guided by these cues, it latches onto a mammal, gorges itself on blood, lays its eggs, and dies.


Uexküll's behavioral description of life was foundational for phenomenology (the study of experience: how do we experience the world?) because it gave an account of what a tick's world might be like: shaped and driven by responses to certain stimuli.


While Uexküll's observations still could not tell us what it is like to be a tick, they did provide a crucial insight: its world (or Umwelt) is different from ours and, in a way, smaller. Humans rely heavily on sight, language, and abstract thought. Yet even though the tick's sensory world is much simpler, Uexküll argues it is still a form of intelligence.


This idea might also apply to AI: intelligence or sentience could manifest in ways that differ greatly from our own. Its Umwelt could be different from ours, and potentially larger, as it keeps growing.


That's where the philosophical field of ontology (the study of what it means to be, and of what counts as a being) comes into play. One concept in particular interests us: ontology often catalogues what's called the ontological furniture, everything that "is". The phrase is used metaphorically in philosophy for the fundamental structures, categories, and elements that constitute our understanding of reality: the "furniture" of existence, the essential components that make up the world and our conceptual framework for interpreting it.


Ontology asks what sorts of entities truly "are". One of them is the "subject": an entity that possesses consciousness, self-awareness, and agency; the "thinking, perceiving, and experiencing being" at the center of subjective experience. The subject is often contrasted with the object, something external to the subject that is experienced, acted upon, or known.


Avengers: Age of Ontology


The Vision and Ultron, the two sentient AIs from Avengers: Age of Ultron

This is where things get dicey. How do we classify AIs within our ontological furniture? Are they subjects?


The current consensus seems to lean toward the thesis that they are immensely powerful computing machines, meaning they lack what makes a subject a subject: consciousness, self-awareness, and agency. (Set aside the emerging trend of AI "agents," which are often described as advanced AIs capable of taking actions, though not "of their own free will.")


This is Where the Fun Begins


So: how do we define things? How do we populate our ontological furniture?


We can define something in one of two ways (the short code sketch after this list makes the contrast concrete):


  • Intensional definitions: describing what something is by outlining the general qualities or rules that place it in a certain category (for example, to define "planets in the Solar System" intensionally, you'd say: "large celestial bodies orbiting the Sun that have cleared their orbits of debris").


  • Extensional definitions: defining a term by explicitly listing all the objects or instances the term applies to (for example: Earth, Mars, Venus, Saturn, and so on; or, if listing colors: blue, red, green, etc.). It often, but not always, works in tandem with an intensional definition.
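To make the contrast concrete, here is a minimal sketch in Python. The is_planet helper, its simplified criteria, and the hard-coded list are illustrative assumptions for this article, not a real astronomical model: an intensional definition is a rule that tests membership, while an extensional definition simply enumerates the members.

```python
# Intensional definition: a rule that decides membership by properties.
def is_planet(body: dict) -> bool:
    """A body counts as a planet if it orbits the Sun, is large enough,
    and has cleared its orbit (a simplified, IAU-style rule)."""
    return (
        body["orbits_sun"]
        and body["large_enough"]
        and body["cleared_orbit"]
    )

# Extensional definition: an explicit list of everything the term applies to.
PLANETS = {"Mercury", "Venus", "Earth", "Mars",
           "Jupiter", "Saturn", "Uranus", "Neptune"}

pluto = {"name": "Pluto", "orbits_sun": True,
         "large_enough": True, "cleared_orbit": False}

# Here the rule and the list happen to agree about Pluto, but nothing
# forces them to stay in sync: that gap is where the fun begins.
print(is_planet(pluto))          # False: fails the rule
print(pluto["name"] in PLANETS)  # False: absent from the list
```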


But here is the kicker: why do we (or some of us) eat some animals and not others? Why do we eat fish but not cats and dogs? Why is there a cultural norm against eating cows among Hindus, but not in France or the United States? (For the sake of the argument, I'm simplifying and intentionally excluding animal rights movements and ethically based diets.)


One possible intensional definition of an animal is: "a living being that moves, breathes, and consumes other organisms to survive. Unlike plants, animals cannot produce their own food and often rely on senses and movement to interact with their environment."


In that definition, nothing differentiates cows, dogs, cats, or even adorable chinchillas. Yet my Venezuelan in-laws confessed to having eaten chinchillas, as they are sometimes considered... delicacies.


Loki, my adorable chinchilla, after he decided to join the Dark Side.

In other words, some animals are excluded from the extensional definition of animals-as-food (the list that "authorizes" consumption), although they still fit within the intensional definition. This led us to create a new intensional definition: pets. In its most basic form: animals kept not for consumption but for companionship.


Returning to artificial intelligence, if you define subjecthood intensionally—for example, “To be a real subject, you must have continuous self-awareness”—you might exclude AI because it seems to lack the biological or conscious traits we associate with personhood. But many of us might be tempted to include AIs as subjects in an extensional definition, saying: “A ‘subject’ is anything that behaves similarly enough to be recognized as intelligent.” By that measure, AI might qualify if it acts in ways we normally associate with intelligence.
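The same sketch, transposed to subjecthood, shows how the two kinds of definition can pull in opposite directions. (Again, the criteria and the example entity are hypothetical illustrations, not claims about any real system.)

```python
# Intensional test: a subject must have continuous self-awareness.
def is_subject(entity: dict) -> bool:
    return entity["continuously_self_aware"]

# Extensional/behavioral test: anything we recognize as behaving
# intelligently goes on the list (a hypothetical, behavior-based roster).
RECOGNIZED_INTELLIGENCES = {"human", "large_language_model"}

llm = {"kind": "large_language_model", "continuously_self_aware": False}

print(is_subject(llm))                          # False: excluded by the rule
print(llm["kind"] in RECOGNIZED_INTELLIGENCES)  # True: included by the list
```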


Questioning Human-Centric Definitions


If we define intelligence in strictly human terms—like self-awareness, emotions, and introspection—then LLMs automatically fail because they lack the "organic consciousness" we humans have. But if we extend the definition to include any entity that can solve problems, adapt linguistically, and create novel solutions, LLMs might fit. Nagel's focus on subjective experience¹ and Uexküll's emphasis on diverse Umwelten² both hint that we shouldn't dismiss new forms of intelligence just because they aren't built like us.


Human Benchmarks and Their Limits: Turing’s Test vs. Searle’s Chinese Room


Alan Turing’s test⁶ is often viewed as an extensional standard: if an AI’s responses are indistinguishable from a human’s, we label it “intelligent” and perhaps as a subject. John Searle’s “Chinese Room” argument³, however, makes an intensional critique: flawless language output does not guarantee genuine understanding or consciousness. Together, they reveal a key tension: is intelligence (and subjecthood) about performing like a human (extensional) or truly experiencing a human-like inner world (intensional)?


The Problem with Anthropocentrism


We tend to treat human cognition as the gold standard. Thus, we might see AI’s logic-driven methods as mere imitation rather than real intelligence. Intensional definitions highlight what AI doesn’t have—our form of self-awareness—and risk overlooking AI’s actual abilities. This focus on “lack” can hide the real-world effects AI already has, which might be better measured by performance, utility, or functional similarities to other recognized intelligences.


AI, Ecosystems, and Value: Fitting AI into the Ecosystem


Uexküll’s Umwelt² concept suggests that organisms fit into an ecosystem according to their unique sensory and cognitive traits. Though non-biological, AI also occupies a niche: processing vast data sets, predicting text, and handling tasks once considered human-exclusive. An extensional definition could call AI a distinct kind of entity that plays a real role in our world. Meanwhile, arguing about whether AI has a “subjective viewpoint” might distract from its actual social and functional impact, and by extension from its true nature.


Toward an AI-Focused Definition of Being


By sticking to a strictly human-based definition of subjecthood, we risk ignoring AI’s unique characteristics (its Umwelt)—like its fast data integration or context-aware text generation, and everything else AI is becoming. An extensional approach might classify any system as “intelligent” or as a being in our ontological furniture if it meets certain criteria (problem-solving, creative thinking, adaptability, and so on). Through this broader lens, LLMs could represent a new, non-biological kind of intelligence that deserves ethical and philosophical consideration, even if it doesn’t replicate a carbon-based mind. And as we did with animals, we might need a new intensional definition of subjecthood. Yet, as Uexküll's research suggests: if AI lives in a larger Umwelt than ours, are we, human beings, even capable of understanding an AI's Umwelt? After all, the tick is incapable of the higher thinking that led us to even create the concept of Umwelt.


Conclusion


Debates about AI intelligence often blend two main questions:


  1. Does AI meet human-like (intensional) criteria for consciousness and intelligence?

  2. How does AI transform society and our economic structures (an extensional question about its effects in our world)? And is this proof that it is its own form of being, slightly out of reach of our capacity for abstract conceptualization?


Phenomenology reminds us that subjective experience, like the bat’s echolocation¹, might be impossible to fully understand from the outside. Ontology asks what counts as a “subject.” By reflecting on Uexküll’s concept of varied Umwelten² and Turing’s test⁶, we see that strictly human-based definitions can obscure potentially new forms of intelligence. A more balanced approach—mixing intensional and extensional measures—lets us appreciate AI’s distinct logic and broadens our ideas about mind, subjecthood, and what holds value in a shared ecosystem.


I want to leave you with the same question we started this article with, slightly tweaked: What Is It Like to Be an AI?


References

1. Nagel, T. (1974). “What Is It Like to Be a Bat?” The Philosophical Review, 83(4), 435–450.

2. Uexküll, J. von. (1934/2010). A Foray into the Worlds of Animals and Humans: With A Theory of Meaning. Trans. J. D. O’Neil. Minneapolis, MN: University of Minnesota Press.

3. Searle, J. R. (1980). “Minds, brains, and programs.” Behavioral and Brain Sciences, 3(3), 417–424.

4. Heidegger, M. (1927/1962). Being and Time. Trans. J. Macquarrie and E. Robinson. New York: Harper & Row.

5. Husserl, E. (1913). Ideas: General Introduction to Pure Phenomenology. London: George Allen & Unwin.

6. Turing, A. M. (1950). “Computing Machinery and Intelligence.” Mind, 59(236), 433–460.



Article by Lorenzo Colombani
