Author: Kevin Zhang, Independent Member of UHHC from Huizhou, president of Zeitgeist: An Open Quarterly for Ethics and Human Values, a UHHC partner organization
Introduction
In 2022, Google engineer Blake Lemoine was placed on leave for claiming that LaMDA, an AI chatbot, had developed the ability to feel emotions.[1] Examples like this have led many to anticipate a future in which intelligent machines could, like us, sustain rich emotional lives.
I argue that this intuition is justified. Specifically, letting M be the thesis that human-like machines can experience emotions as we do, I contend that M is true, assuming that we will eventually create such machines. For simplicity, I will refer to human-like machines collectively as artificial general intelligence, or AGI.
My argument comes in three parts. First, I define what I mean by “AGI” and “emotion.” Second, I show that if we were to meet Martians who differ from us only in their material constitution, we should believe that they can feel emotions. Third, I argue that if this conditional holds, then the relevant similarities between AGI and Martians give us sufficient reason to accept M.[2]
Definitions
To begin, we must clarify two concepts in my argument: AGI and emotion.
AGI: I define artificial general intelligence (AGI) as machines that behave and function like a typical human. By “behave,” I mean that AGI can perform behavioural tasks (e.g., driving a car) at least as well as a trained human. By “function,” I mean that AGI has an internal functional organisation that abstractly resembles human neural networks.[3]
Two things should be noted at the outset. First, the concept of AGI used in this paper is couched in purely behavioural and functional terms, which leaves open the question of whether AGI is conscious, self-aware, or has emotions. Second, my argument for M (that machines can experience emotions) assumes the eventual creation of AGI. For brevity, I will not consider the technical details here, but we should have at least some confidence that this assumption is true, as most AI researchers are optimistic about creating AGI.[4]
Emotion: Although emotion resists a definition that is both precise and workable, most of us have an intuitive notion of what an emotion is. We take emotions such as joy, fear, and hope to be essentially composed of three aspects: behaviour, intention, and phenomenal experience.[5] The third component is especially crucial to the debate over artificial emotions, as it requires that AI be able to feel the “what it is like” aspect of emotions.[6]
Now, the ability to feel emotions presumably isn’t a random or haphazard affair. That is to say, we can expect there to be reliable regularities in nature, such that it is nomologically necessary that, if a physical system meets some condition, it experiences emotions. Given these clarifications, M roughly amounts to the hypothesis that AGI will satisfy some nomologically sufficient condition for experiencing emotions.[7]
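Put schematically (the notation is my own shorthand for the informal statement above), let $Ex$ abbreviate “x experiences emotions” and let $\Box_N$ express nomological necessity. Then M amounts to the claim that there is some condition $C$ such that

\[ \Box_N\,\forall x\,(Cx \rightarrow Ex)\ \wedge\ C(\mathrm{AGI}). \]

That is, satisfying $C$ nomologically suffices for experiencing emotions, and AGI satisfies $C$.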
The argument from analogy
Imagine this: Exploring the Red Planet, we encounter a bustling village of Martians. These Martians fret, grumble, cheer for their friends, and tremble when in danger. Suppose we also learn that their internal functional organisation resembles ours in all the relevant ways: they have complex nervous systems that process and store information in ways that abstractly resemble human neural processing. However, closer study reveals that Martians are composed of an exotic sulphuric goo. Despite this, I think most of us would happily declare that we have discovered fellow beings with emotional capacities and welcome our Martian friends with open arms. This leads to the first premise of my argument:
Premise 1: Martians who differ from us in their material constitution, but resemble us in high-level functional respects and exhibit seemingly emotional behaviour, can (probably) experience emotions.
Premise 1 is motivated by its intuitive appeal. Admittedly, if one thinks that a carbon-based substrate is necessary for experiencing emotions, one should reject Premise 1. But there is no reason to think carbon is more nomologically relevant to emotions than, say, sulphur or silicon.[8] Perhaps non-carbon biochemistry cannot sustain life, but this doesn't contradict the claim that if such aliens existed (beings who, by stipulation, meet the functionally defined conditions for life), they could probably feel emotions. Indeed, I think much of the debate will hinge on my second premise:
Premise 2: If Premise 1 is true, then a sufficiently behaviourally and functionally human-like machine (i.e., AGI) can experience emotions. That is, M is true, assuming the eventual creation of AGI.
We should accept Premise 2 because Martians and AGI are closely analogous in the respects relevant to satisfying a nomologically sufficient condition for emotions. In both cases, we have a being that closely resembles a typical human in behaviour and function but differs from us in material substrate. If a difference in material composition isn't a good reason to deny that Martians can feel emotions, it isn't a good reason to reject M.
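In standard form, writing $P_1$ for Premise 1 and recalling that M is already conditional on AGI's creation, the argument is a simple modus ponens:

\[ P_1; \qquad P_1 \rightarrow M; \qquad \therefore\ M. \]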
Objections and replies
Those who want to reject Premise 2 must point out a relevant difference between Martians and AGI that justifies believing that Martians can feel emotions but AGI cannot. In particular, one would need to identify some property F such that (i) F is nomologically necessary for having any emotional capacity (so we have F),[9] (ii) Martians have F (so their emotional capacity is not ruled out), but (iii) AGI lacks F (so it cannot feel emotions). Any F that fails to meet condition (i) wouldn't be relevant, and any F that fails to meet conditions (ii) and (iii) wouldn't be a difference between AGI and Martians.
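Schematically, in the shorthand introduced earlier, the objector needs some property $F$ satisfying

\[ \Box_N\,\forall x\,(Ex \rightarrow Fx), \qquad F(\mathrm{Martians}), \qquad \neg F(\mathrm{AGI}). \]

The first clause secures relevance, the second keeps Martian emotions intact, and the third excludes AGI.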
Finding a suitable F is difficult. On the one hand, F couldn't concern low-level material constitution, or else it would separate humans from Martians.[10] On the other hand, F couldn't concern high-level behavioural or functional capacity, or else it couldn't distinguish humans from AGI (which, by stipulation, can be made arbitrarily similar to humans in behaviour and function).[11] Despite this difficulty, some candidates for F are worth considering.
* F = being created via the right sort of causal history. This objection argues that while humans and Martians arose from purposeless natural evolution, AGI would be intelligently designed by humans for a purpose. And since the capacity for emotions can only form via a natural process, AGI is relevantly different from Martians. This objection fails, however, because we shouldn't think that only beings created by purposeless processes can feel emotions. Suppose we discovered that we are merely products of a planetary experiment by some higher civilisation. While shocking, this discovery shouldn't cause us to doubt our capacity to experience emotions. But even if evolution turns out to be necessary for emotional capacity, AGI could still be created via evolutionary algorithms (and thus possess the right sort of evolutionary history).[12] Hence, causal history is not a relevant difference.
* F = being brought up in a social environment. This objection appeals to our intuition that social interactions (with parents, elders, peers, etc.) during stages of growth are essential to cultivating a robust capacity for emotions.[13] While humans and Martians are nurtured in complex societies through such interactions, AGI lacks this social developmental experience. My answer is twofold. On the one hand, if we interpret “social” in a weak sense that does not require social interactions to involve emotional connection or engagement, then I claim that AGI can be raised in a social environment, since it can perform the cognitive and behavioural tasks necessary to partake in such interactions. On the other hand, if we interpret “social” in a strong sense that requires social interactions to have an emotional dimension, then the objection begs the question: it assumes the falsity of M (that machines can experience emotions) in the very attempt to argue against M.[14] Either way, the objection fails.
Finally, one can object that even if machines can experience emotions, they can never experience them in the same way we do. This is a legitimate worry, as the specific social, functional, and psychological mechanisms involved in emotions would presumably differ across biological and computational systems. However, recall that emotion has three aspects, and AGI can be engineered to approximate humans arbitrarily closely in the behavioural and intentional dimensions. For instance, we could create AGI with the same physical fragility, cognitive limitations, and moral weaknesses as a typical human. When we do, we should expect the third, phenomenal aspect to approximate ours as well, given the aspective unity of emotions observed in humans.[15]
Conclusion
In this essay, I argued for M, the thesis that machines can experience emotions as we do, given our eventual creation of a sophisticated, human-like machine, which I call artificial general intelligence (AGI). My argument rests on an analogy between AGI and Martians, stipulated beings serving as an intuition pump.[16] Specifically, I argued that if we encountered Martians who behave and function like us, we should believe that they can feel emotions. By parity of reasoning, if we create machines sufficiently similar to us behaviourally and functionally, we should likewise think that these machines can feel emotions. Therefore, I conclude that with the advent of increasingly human-like AGI, machines can experience emotions as we do.
References
[1] Ruiz (2022).
[2] This argument is largely adapted from Brian Cutter's argument from analogy for AI ensoulment (Cutter, forthcoming).
[3] The resemblance may only be abstract, since AGI, being a machine, is built on a computational architecture and an inorganic substrate.
[4] Specifically, the most comprehensive survey of AI researchers found a median credence of 0.5 in the creation, by 2061, of AI that can outperform humans on every task (Grace et al., 2018).
[5] Pace Vaidya (2024).
[6] Misselhorn et al. (2023).
[7] This modification allows us to make claims about phenomenal states without committing ourselves to a specific theory of consciousness. Cutter (forthcoming) suggested this strategy.
[8] The multiple realizability of emotions across biochemical substrates is debated (Tahko, 2020). Those who deny non-carbon-based emotions usually do so on the grounds that non-carbon systems cannot exhibit certain behavioural-functional properties, which amounts to denying the possibility of AGI. However, in this paper, I take the prospect of AGI for granted on expert testimonial evidence, so I will not engage with such objections.
[9] Or, more precisely, that F belongs to a set of properties S such that any nomologically necessary condition for emotions involves some element or combination of elements of S. This is because a physical system that lacks a property in S is prima facie less likely to experience emotions than a system that has it.
[10] Supposing that the property is not ad hoc.
[11] The structure of this part follows Cutter (forthcoming).
[12] See Gent (2020).
[13] For example, some psychopathologists argue that the acquisition of affective capacities requires “scaffolding,” or support, in the form of bodily, social, and material resources (Krueger, 2018).
[14] In other words, for this objection (so interpreted) to work, one must already have some independent reason to think that M is false, since the thought that AGI cannot engage in emotion-laden social interactions presumably gains its plausibility from the prior notion that AGI cannot feel emotions.
[15] For example, see Goffin and Viera (2024).
[16] See Dennett (2013).

