anthropomorphization of algorithms

June 13, 2021 · humans/not

motivation

Daily human life is becoming increasingly intertwined with algorithms. The people who develop these algorithms (and, to a lesser extent, the implicit users of these algorithms — the general public) can only use their native languages to describe algorithmic nature and activity. E.g., “Netflix is recommending this movie to me”, “my Tesla is looking ahead to find obstructions blocking our path”, or “the software loading wheel indicates the computer is thinking”.1 Though, I wonder about the difference between a Netflix algorithm making a recommendation and a human doing the same. When a Tesla car looks, is it doing the same thing a human would do when they look? More importantly, what are the consequences of conflating algorithm activity with human activity?

In Why AI is Harder Than We Think, Melanie Mitchell credits computer scientist Drew McDermott (1976) with coining the phrase “wishful mnemonic” to describe the common use of words like “understand” or “goal” when talking about purely algorithmic processes such as loops. I think the following quote from McDermott sums up the question we ask here quite well:

“A major source of simple-mindedness in AI programs is the use of mnemonics like “UNDERSTAND” or “GOAL” to refer to programs and data structures. […] If a researcher […] calls the main loop of his program “UNDERSTAND,” he is (until proven innocent) merely begging the question. He may mislead a lot of people, most prominently himself. […] What he should do instead is refer to this main loop as “G0034,” and see if he can convince himself or anyone else that G0034 implements some part of understanding. […] Many instructive examples of wishful mnemonics by AI researchers come to mind once you see the point.”

Think of this as an extension of the discussion Mitchell provides in the “Fallacy 3” section of the aforementioned paper. Here, I will shy away from using the term “wishful”, preferring a more “anthropomorphic” standpoint, since I think these terms are “wishful” precisely because they are anthropomorphizing. Each of these terms is purposed for a very human thing, and when applied elsewhere, it becomes wishful until the user satisfies the burden of proof that the application is appropriate. We conflate human things with non-human things, and that is the concern of this essay.

Though a call for perfectly distinct terms is farfetched to say the least, being conscious of the issue and being selective in our word choice may lead us to use terms which are more appropriate than others. As Mitchell says so well in her paper: “Indeed, the way we talk about machine abilities influences our conceptions of how general those abilities really are.”

the words we use

It is hard to talk about (let alone imagine) an algorithm doing something very animal without using anthropomorphizing language. But, it does not follow that this is appropriate. Further, until there is an explicit and linguistically represented difference between animal and algorithm,2 we will continue to conflate the actions of one with those of the other in our words, e.g., a dog “reasoning”, though different, will be described in the same way as an algorithm “reasoning”. This conflation may set us up for particular consequences related to misinterpretation or misattribution.

I offer a few potentially familiar phrases for conversation:

  1. Your iPhone is listening to your conversation
  2. Facebook is recommending a product to you
  3. Computer vision models are looking for specific objects in images

In any of these cases, we could have (and, as we’ll see later, should have) used less anthropomorphic language: your iPhone is recording your conversation; Facebook assigns a product to everyone, and any such assignment can be interpreted as a recommendation; computer vision models scan the pixels of images and process the resulting patterns during modeling. True, even with these less anthropomorphic translations we may still succumb to human-esque imagery, but to a lesser extent than with their analogous counterparts. By being selective about word choice, we can do a better job of distinguishing between a human doing a thing and an algorithm doing the same thing in a different way.
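To make the last of those translations concrete, here is a toy sketch of what “scanning the pixels of images” can amount to. It is purely illustrative (it is not how any real vision system is built, and the function names and numbers are mine): a small numeric template slides across an array, and we record where the multiply-and-add score is high.

```python
# Illustrative only: a toy "object detector" that slides a small template
# over a grayscale image and records where the match score is high.
# Nothing here "looks" in the human sense; it is arithmetic over arrays.
import numpy as np

def scan_for_pattern(image: np.ndarray, template: np.ndarray, threshold: float):
    """Return (row, col) positions where the template correlates strongly."""
    ih, iw = image.shape
    th, tw = template.shape
    hits = []
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            patch = image[r:r + th, c:c + tw]
            score = float(np.sum(patch * template))  # plain multiply-and-add
            if score > threshold:
                hits.append((r, c))
    return hits

# A tiny image with a bright 2x2 square, and a template for that square.
image = np.zeros((6, 6))
image[2:4, 3:5] = 1.0
template = np.ones((2, 2))
print(scan_for_pattern(image, template, threshold=3.5))  # -> [(2, 3)]
```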

The words we use are a representation of how we perceive the world, and conversely, our perceived reality is necessarily defined in terms of language. Therefore, using phrases like the ones above must indicate how we view things like iPhones, Facebook, and machine learning models. In the same way we like to attribute human traits and actions to non-human (animal) beings (e.g., “my dog is talking to me”, or “this cat loves you”), we are led to do the same for algorithmic processes. But, animals and algorithms seem different enough to merit a discussion on the appropriateness of this language. In the coming sections, I’ll use the three above phrases as devices to understand our intentions, our interpretations, and some potential consequences thereof, when it comes to anthropomorphizing algorithms.

listening

Listening seems like a relatively active thing, especially compared to its more passive analog, hearing. With listening comes the connotation of internalization, and some kind of personal association. For example, when I listen to someone telling me a story, I picture in my mind’s eye the characters of the story, and create for them a personalized version of what I think they ought to be. Or, when I listen to the sounds of the outdoors, I pick out particular attributes of what I’m hearing (e.g., the leaves, the highest pitched bird tweet) and transiently assign an emotional value to each of these. If there’s a sound (or part of a story) that I find to be annoying or unsettling, there will be a feeling associated with that thing, and my overall interpretation of what I’m listening to will be affected. Thereafter, if I’m asked to describe that sound (or that story), my description will be necessarily biased by my history, my character, and my essential/existential being. Now, when we use the term listening to describe algorithmic activity, how can we be sure the attribution of characteristics similar to those above is consistent?

The first phrase in the preceding section is

“Your iPhone is listening to your conversation”

I am not privy to the exact mechanics of the Apple voice recognition process, but I am confident that it involves a set of algorithms, and it is not solely a human listening to and acting on vocal input. So, what does it mean for the algorithm to be listening? And, what is so wrong with us saying that an algorithm is “listening”, as we do? The answer to the first question must be dependent on the algorithm, and requires an understanding of the audio processing methods used, as well as the modeling pipeline through which this processed audio percolates. I.e., we define “listening” to be whatever the algorithm is doing during that time. I walk through this sort of reasoning a few sections later. As for the second question, I’ll present a thought experiment as a tool for us to better understand what is going on.
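Before getting to that thought experiment, it may help to see a rough sketch of the kind of pipeline the word “listening” might be standing in for. This is emphatically not Apple’s process (again, I am not privy to it); it is a made-up keyword-spotting toy, with invented function names, meant only to show that “listening” can be described as recording, transforming, and comparing numbers.

```python
# A purely hypothetical sketch of what "listening" might reduce to in code:
# buffer audio samples, turn them into features, and score those features
# against a stored reference. None of this is any real product's pipeline;
# the function and parameter names are invented for illustration.
import numpy as np

def frame_energy(samples: np.ndarray, frame_size: int = 400) -> np.ndarray:
    """Chop the signal into frames and compute the energy of each frame."""
    n_frames = len(samples) // frame_size
    frames = samples[: n_frames * frame_size].reshape(n_frames, frame_size)
    return np.sum(frames ** 2, axis=1)

def wake_score(samples: np.ndarray, reference: np.ndarray) -> float:
    """Correlate the energy profile of the input with a stored reference profile."""
    e = frame_energy(samples)
    n = min(len(e), len(reference))
    if n == 0:
        return 0.0
    a, b = e[:n], reference[:n]
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom > 0 else 0.0

# "Listening", under this description, is just: record, transform, compare.
rng = np.random.default_rng(0)
audio = rng.normal(size=16000)            # one second of fake audio at 16 kHz
reference_profile = frame_energy(audio)   # pretend this is a stored keyword profile
print(wake_score(audio, reference_profile) > 0.9)  # -> True (it matches itself)
```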

For the sake of convenience and familiarity, suppose we all carry an electronic handheld device. Freely, we can interact with this device, and it will, in turn, execute some corresponding reaction. E.g., we can speak in close proximity to this device and verbalize a question or a statement, and the device will then play (or stream) words in some familiar language.3 Now, imagine that for every device, exactly one of two scenarios is true: either $a$, the device is assigned to a human being somewhere, listening to the audio coming through, managing the non-verbal digital cues initiated, and every reaction made by the device is purely the doing of this human at the other end (it is essentially a smartphone with “desktop sharing” capability), or $b$, the user is the only human involved in the interchange, and any reaction made by the device is purely the result of algorithmic causes (e.g., prerecorded audio is selected and played via machine learning models based on the input). Finally, suppose the user of the device has no way of telling whether it is an $a$ device or a $b$ device.

If there is no difference in our experience between $a$ or $b$, then there should be no difference in the language we use to describe it. Put in another context, if two patients in a hospital present exactly the same symptoms, it would not be a farfetched conclusion to say they are suffering from the same ailment. In a way, we are inclined to believe that similar effects imply similar causes, but this is a fallacy. In the same way, we can be easily fooled into thinking $a$ and $b$ are the same device when the only real difference between the two (i.e., their source) lies behind a Veil of Perception. Consequently, there comes a point where the language we use requires a more refined sense of precision.

Suppose I notice my device randomly taking pictures of me, and I say “this device is spying on me.” The act of spying is more than the interaction between the spy and the spied. A photographer can take candid pictures of me, and I’ll have no problem with it, as long as the cause is just and we agreed upon the interaction ahead of time. I’d even be fine with having my photos taken due to a computer glitch. But, if the pictures are intentionally taken against my will, there must be a subversive cause which precedes the action. To “spy”, then, requires a subversive intention on the part of the agent behind the device, concealed from the user. In other words, for us to be able to properly use the word “spy” to describe an action made by our device, it must be an $a$ device. If it were a $b$ device, we’d imply that intention can be achieved by a set of algorithms. Today, there is no reason to believe this is possible, but if this changes in the future, then maybe our analysis of appropriate terminology will change in turn.

In general, we must think about the causal faculties involved in and required for certain activities before we attribute them to an agent. We should get into the habit of determining whether a word requires strictly human faculties before we attribute it to a non-human. This could mean asking questions like, “is it scanning the page, or is it analyzing the page?”, or “is it listening to our conversation, or is it recording our conversation?” Some anthropomorphizing words (e.g., “jump”) do not require strictly human faculties. A human can love and they can touch, but a machine (at least, as of yet) is only capable of the latter. In short:

Proposition A: When describing algorithmic processes, use words which imply faculties attainable by the algorithm, and which do not require faculties unattainable by it.

This evaluation can be calibrated to match the apparent capability of algorithmic progress.4

recommending

It may be that the faculties required to make a recommendation are not strictly human. Namely, taking stock of a person’s preferences and using (say) a well-suited probabilistic function to offer up something similar could be interpreted as a recommendation. Or, conversely, for a recommender to make a recommendation, we only require that it highlight an option which is especially catered to the user; this doesn’t seem strictly human. So, a set of algorithms seems perfectly capable of recommendation without any misattribution of the word. Though, even after we follow this reasoning to establish that a word like recommendation can be safely applied to a set of algorithms in deed, the question still remains of how this attribution manifests post hoc.
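As a concrete illustration of that kind of mechanism, here is a minimal sketch of a “recommender”. It is my own toy, with made-up genre vectors, and a similarity score standing in for the probabilistic function mentioned above: it scores each item against a preference vector and surfaces the best match, and that surfacing is what the word “recommendation” ends up labeling.

```python
# A minimal sketch of the kind of mechanism the word "recommendation" might
# label: score each item against a user's preference vector and surface the
# highest-scoring one. The genre names and vectors are entirely made up.
import numpy as np

items = {
    "space documentary": np.array([0.9, 0.1, 0.0]),   # (science, drama, comedy)
    "courtroom drama":   np.array([0.1, 0.9, 0.2]),
    "sitcom":            np.array([0.0, 0.3, 0.9]),
}

def recommend(user_preferences: np.ndarray) -> str:
    """Return the item whose feature vector best matches the preference vector."""
    def score(vec):
        denom = np.linalg.norm(vec) * np.linalg.norm(user_preferences)
        return np.dot(vec, user_preferences) / denom
    return max(items, key=lambda name: score(items[name]))

# A user who mostly watches science content "gets a recommendation":
print(recommend(np.array([0.8, 0.2, 0.1])))  # -> "space documentary"
```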

In an attempt to understand the post hoc reaction to word attribution, suppose you’re driving in a two-car caravan, and your GPS recommends an alternate route. Say you take this detour, but your friend in the other car does not, and your friend arrives first. (Call this scenario $g$.) What is your reaction? Now, take the same scenario, but this time it is a human passenger in your car who recommends the same alternate route, your friend stays the course, and again arrives first. (Call this scenario $h$.) What is your reaction now? It seems to me that at the very least, our feelings will be different between the two scenarios. For example, when I imagine this thought experiment, I feel a sense of wanting to assign blame in both hypotheticals, but it manifests in different ways. Namely, when I have a human to blame, I will do just that: in scenario $g$, I will tend to blame myself before the GPS, and in $h$, I will blame the passenger before myself. In short, the way we feel about a recommendation from a human is different from the way we feel about the same recommendation from a non-human. Implicitly, though, in the same way we may feel differently about a recommendation from a lawyer than about one from a liar and yet call both a “recommendation”, the use of the word in the preceding sentence seems appropriate: the same action through differing mediums may be called the same thing. A rose by any other name would smell as sweet, and thus, Proposition A still maintains its sufficiency for our use of anthropomorphizing terms.

Even though it is possible that some terminology can be appropriately interchangeable between human and non-human agents,5 the point still stands that our feelings about any action will differ depending on the actor. For example, when a human is behind the wheel of a vehicle, we say that the human is driving the vehicle. Suppose after some analysis, we allow by Proposition A that when there is no human behind the wheel of an autonomous car, the algorithm set is then said to be driving the car. If this car commits a hit-and-run (maybe hitting a human, and driving off), who is to blame? When we refer back to the event, and use the same terminology to describe the experience, we feel led to ask first “who/what was in control of the car at the time?”, or “who/what was driving the car at the time?” And in this way, post hoc, we assign some element of conscious intent to the word “driving” which might not have been there upon initial analysis. This calls into question the viability of simply relying on Proposition A, or something like it.

In 2018, a self-driving Uber vehicle struck and killed a woman. Two years later, it was reported that any criminal consequences for this action should fall on the vehicle’s “backup” driver who was “responsible for monitoring the car’s movements.” Was the backup driver driving, or were they monitoring while the car was driving? Does the burden of responsibility fall on the monitor or the driver? When a student driver drove into a building during their driver’s license exam, injuring the person administering the test (the “monitor” in this case), no charges were filed. This illustrates a crucial, albeit subtle, difference between an algorithm “driving” and a human “driving”. Namely, with driving comes some legal responsibility which must be attributable to the driver. If that legal responsibility is a faculty which cannot be attained by an algorithmic entity, then Proposition A does not hold, and the word is not appropriate.6 Since this phenomenon only revealed itself post hoc, we are best to caveat Proposition A:

Proposition B: A term passing Proposition A may be rendered inappropriate if the algorithm cannot maintain the appropriate legal responsibility assigned to that term.

looking

Of course, until now, I have implicitly asked the reader to take Proposition A at face value, similarly accepting Proposition B as a relatively “simple fix” to one particular issue with the former. In truth, what it means for a faculty to be attainable is an extremely difficult question. For example, what do we require for some entity to be able to “look”? First, we may think the word implies some sort of visual processing, or pattern recognition, but then it is easy to associate the word with interpreting, or even evaluating visual cues. In this way the string of faculties required of a word might never end,7 but some will seem more crucial than others, e.g., to look, we could categorize pattern recognition as more crucial (or more common) than emotional evaluation of images. In this essay, we concern ourselves with the more crucial faculties, and for this section, we will investigate the intersection between these faculties and their potentially associated algorithmic activity.

I recently had a student who built an artificial “neural network” model,8 and for her purposes it performed quite well. The idea was to classify animals in field photographs of the African savannah; so, given some field image, determine what animal is featured. Most of these images were blurry, or of low quality, so it took a human a slight bit of work to discern what animal was in the image. As it turns out, the model was able to correctly classify the vast majority of animals in these blurry images, but interestingly, it suffered greatly when it came to classifying animals in much clearer, higher-quality images. In other words, the algorithm outperformed humans on a difficult task, but grossly underperformed humans on a much “simpler” version of the same task.9 Just this fact alone should be enough for us to question the degree to which we can imbue algorithmic activity with a human likeness. Of course, there are many ways the student could have improved her model to generalize to higher fidelity images, but this is not the point of the matter. In short, human-like behavior in a non-human entity (e.g., identifying blurry images using “neural” connections) might only be an illusion, obfuscating strictly non-human behavior (e.g., whatever makes clear images harder to classify than blurry images).

So, what is there to be done? As much as we may see ourselves in the behavior of these algorithmic creations, it may be that our linguistic projection is actually misguided. Try as we might to design algorithmic processes such that we can describe them using human-based terms, our description may simply be a stranger to some latent, unintended phenomenon lurking in the algorithm, which could cause (yet unseen) non-human consequences, rendering our description bunk. Shortcut learning is one such cause, likely responsible for the events described above, where deep learning (i.e., neural network) models will be overly sensitive10 to particular patterns in “training” data (roughly speaking, “experience”), which are not applicable in general — an especially non-human consequence in that humans are masters of generalization. So, as much as we may like to use terms like “neural”, or “look”, or “find”, etc., to describe human-like behaviors, phenomena like shortcut learning (whether we are aware of them or not) may eventually cause us to question the appropriateness of our word choice.
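To see how a shortcut can form, here is a toy illustration (not the student’s model; all numbers and feature names are invented): a classifier trained on data where a spurious feature happens to match the labels perfectly will lean on that feature, look very capable on its training data, and then fail once the coincidence disappears.

```python
# A toy illustration of shortcut learning, not any real model:
# feature 0 is a weak but genuine signal, feature 1 is a spurious marker
# ("blurriness", say) that matches the label perfectly in training but is
# unrelated to the label at test time. All numbers here are invented.
import numpy as np

rng = np.random.default_rng(42)

def make_data(n, spurious_matches_label):
    labels = rng.choice([-1.0, 1.0], size=n)
    genuine = 0.3 * labels + rng.normal(scale=0.5, size=n)   # weak real signal
    if spurious_matches_label:
        spurious = labels.copy()                              # perfect shortcut
    else:
        spurious = rng.choice([-1.0, 1.0], size=n)            # shortcut breaks
    X = np.column_stack([genuine, spurious])
    return X, labels

def train_logistic(X, y, lr=0.1, steps=2000):
    """Plain logistic regression by gradient descent, labels in {-1, +1}."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        margins = y * (X @ w)
        grad = -(X * (y * (1 / (1 + np.exp(margins))))[:, None]).mean(axis=0)
        w -= lr * grad
    return w

def accuracy(w, X, y):
    return float(np.mean(np.sign(X @ w) == y))

X_train, y_train = make_data(2000, spurious_matches_label=True)
X_test, y_test = make_data(2000, spurious_matches_label=False)
w = train_logistic(X_train, y_train)

print("train accuracy:", accuracy(w, X_train, y_train))  # close to 1.0
print("test accuracy:",  accuracy(w, X_test, y_test))    # not much better than chance
```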

As I investigate this question of anthropomorphization of algorithms, I can’t help but wonder if it is possible to create a one-to-one mapping from the simplest, most atomic mathematical process (e.g., addition over some set of numbers) to the simplest, most atomic human behavior. Maybe this way, we could walk our way back from this proposed map to a perfect equivalence between human and algorithmic terminology. But, in thinking this way, I find myself in a black hole of uncertainty. What is the simplest, most atomic human behavior? If Heisenberg’s Uncertainty Principle applies here at all (which, I believe it does), there must be no hope at all of such a map. If smaller and smaller measurements imply larger and larger errors, then we can never know what exactly is the simplest atomic human behavior, and the map is impossible to create. We can use probability, randomness, bootstrapping, parametric equations, etc., to approximate human attributes and actions, but it can only ever be just that: an approximation. So too, our anthropomorphization of any algorithm may only ever be an approximation.

what’s the harm in it?

Here’s the thing. It is not at all apparent that this anthropomorphization causes any immediate (or even future) detriment to the well-being of humans. We can say things like “my computer is thinking”, or “my personalized recommendation algorithm understands me”, and apparently no harm is done. We then go about our days, the sun rises and sets, the moon waxes and wanes, and life goes on. So, what’s the big deal? Well, I hope it is not a surprise that in recent history, it was a common practice for some groups of people to talk colloquially about some other groups of people as if they were non-human. This talk, and the words that were used, greatly affected the way the former group thought of the latter, and we all know the rest of that story.

To be clear, slavery is a very different phenomenon from the question I ask in this essay. Indeed, it is almost the opposite. But, the image might help inform the sorts of questions we’d need to ask in a world where it is natural to think of and treat non-humans as, essentially, humans. So, imagine they become one and the same. What sorts of societal or legal consequences may ensue from this conflation? Is this a world we want to be a part of? Maybe so, and maybe the alternative is actually less desirable (as slavery was many years ago), but at least we should be cognizant that this future is a likely consequence of using (or not using) anthropomorphizing language to describe non-human entities. If we talk about algorithms like they are humans, we will think of them as humans, and any difference between the two will be obscured and eventually disappear. If that is the world we want to live in, we should prepare our society, our legal system, and our minds to accommodate this reality. If it is not the world we want to live in, I believe we are still held to this same charge. In short, we write our own future with the language we use, so we are best to speak with intention and not ignorance.

conclusion

Humans have a natural proclivity to assign human-like terms to non-human (e.g., algorithmic) activity. The tendency presents as an immediate reflex, and becomes an unconscious inclination which often goes completely unchecked. The unsupervised nature of our linguistic choice here is reasonably concerning. With Propositions A and B, above, we illustrate a rough initial model which can be used to keep our word choice in check; something which might prevent dangerous and legally knotted misattribution.

Make no mistake, this is a notoriously hard problem, and we need to be comfortable with the possibility that we may never have a great answer. Our understanding of the difference between what we see in algorithmic behavior and what we know about human behavior is not comprehensive, but it is the best tool we have for distinguishing between the two. We may never be able to draw a perfect map between a human action and its algorithmic counterpart, but we can at least be cognizant of the issue and make a concerted effort to be careful with our words. The framework briefly discussed in this essay could be a start.

To be fair, it is likely that most AI practitioners will confidently attest that algorithms do not actually do the actions their anthropomorphized descriptions purport, and that there is a need for some separation between the two. However, there should be special weight placed on the fact that the general public will perceive the nature of the field of AI based on the words those experts use. These perceptions may, in turn, shape or even distort the whole field of AI.

Maybe this thing is just too hard — everything we do is from our own flawed perspective. Maybe there’s nothing we can do. Our common language is already cemented, and our future is already written. But, we cannot forget that we are still far from understanding the true nature of algorithmic behavior. In Mind Children: The Future of Robot and Human Intelligence, I think Hans Moravec says it best:

“Encoded in the large, highly evolved sensory and motor portions of the human brain is a billion years of experience about the nature of the world and how to survive in it. The deliberate process we call reasoning is, I believe, the thinnest veneer of human thought, effective only because it is supported by this much older and much more powerful, though usually unconscious, sensorimotor knowledge. We are all prodigious Olympians in perceptual and motor areas, so good that we make the difficult look easy.”

We must tread lightly on this path we pave. We don’t know what’s ahead, and there’s no going back.


  1. The question of whether an algorithm is “thinking” or making a “recommendation” is not only a notoriously fraught topic, addressed many times already, but it is notably different from the question I aim to address later in this essay. Namely, asking whether an algorithm is thinking is different from asking whether we should say that it is. In other words, analyzing the action is different from analyzing the word. It will suffice for my purposes to present the questions themselves as examples of our usage of the word(s). ↩︎

  2. It is not difficult to imagine a context in which an animal is considered an algorithm, or vice versa. An animal’s biology and chemistry could be interpreted as algorithmic processes which define them — quite a deterministic perspective — whereas algorithms have the apparent capacity to almost perfectly mimic animal behavior. The intuitive agreeableness of these conclusions may be the culprit in the event we reach a decision standstill, though the reality behind them is yet unclear. ↩︎

  3. It’s worth noting I made a concerted effort to write this sentence such that it did not assign (interesting, now we are talking about words in a human-like way) human-specific words to the device; it does not answer a question, nor do I speak to it. It is also worth noting that this attempt was especially challenging. Should it be? ↩︎

  4. This framework is generalizable, such that when the difference between human and machine is sufficiently obscured, we can still choose our words accordingly. ↩︎

  5. Even within the non-human category, we can find examples where an action like recommendation may be attributed to non-algorithmic agents. E.g., when a seeing-eye dog guides its companion to walk in a particular direction, the use of the word “recommendation” seems relatively appropriate. ↩︎

  6. This way of thinking could be a consequence of our inclination to think about algorithm behavior as human-like, if anything. Or, rather, their ability is typically compared to human ability. AI ethicist (and lawyer) Kate Darling at MIT Media Lab proposes that a more useful way to think about algorithmic behavior is as if algorithms were animals, specifically, as if they were pets. Algorithms should be companions, not something to be feared; and, not only does this guide the way we think of algorithms now, it can guide how they are developed in the future. E.g., I’m not worried about my dog spying on me, so maybe I shouldn’t worry about an algorithm doing the same. ↩︎

  7. Of note, a whole field and history of semiotics is dedicated to studying the root of this question. ↩︎

  8. I.e., a relatively complex and uninterpretable machine learning algorithm, which uses a series or network of mini-algorithms to assign labels to input data. ↩︎

  9. This phenomenon is a consequence of the so-called “Moravec Paradox” (after roboticist Hans Moravec), which can be roughly summarized in the words of computer scientist Marvin Minsky: “easy things are hard”, and “in general, we’re least aware of what our minds do best”. ↩︎

  10. The use of this word is interesting here. I’m referring to an algorithmic entity as sensitive, but it seems not to be unsettling. E.g., we can speak of a metal detector as being highly sensitive to particular metal, or a sonar/radar as being sensitive to particular inputs. Even so, this word can simultaneously carry a comparatively poignant meaning in the human sense, in such a way that it is rendered inappropriate under Proposition A. So, is it really inappropriate? Maybe in time, some words, if used enough in non-human contexts, will attain some sort of human-and-algorithm connotations. Nonetheless, until this point is reached for a given term, Proposition A seems to be an innocuous policy at the very least. ↩︎