It might sound odd when you first read it. We spend each day among machines of varying complexity. Even humans themselves are complex, albeit living, machines.
So what does it mean to anthropomorphise a machine?
Beyond pareidolia
You walk into a room and realize your phone battery is about to die. As a minor panic overtakes you, you spot a power outlet across the room, and as you approach it, you notice that the outlet kind of looks like a little face.
A little bit like: 😮
But only a moment after noticing this, you likely shrug it off, then proceed to shove two metal prongs through its shocked, adorable eyes (pun intended).
If you had truly assigned any humanity to the face on the power outlet, the thought of plugging your charger in would likely have put you off.
“Intelligent” machines
In the last couple of years, we’ve seen an explosion in computers suddenly becoming more “intelligent”. In fact, computers can now
generate complex, high-quality 3D art
write at length on nearly any topic
pass graduate-level exams
identify objects/locations, track motion, etc.
and so on, and so forth.
But how do they achieve this? The answer is both simple and complex, depending on who you ask, but a suitable analogy easily explains it:
the functionality of AI is like the functionality of the human mind
Of course, the inner workings of each are profoundly different below the surface (starting as early in the design as the hardware level, if you will).
Still, the majority of people do not, and never will, understand either one. Others will eventually come to understand, even if they do not yet.
For that majority, though, it boils down to one convenient word.
“Magic”
Magical things operate beyond our level of understanding. The most familiar examples are spells and curses, deities and spirits, orcs and unicorns (among heaps of other things).
But what is “magic”? At various points in history, it was an excuse to brutally execute people. It has also served as the basis for many stories, good and bad.
However, I’d argue that contemporary “magic” has transcended whatever benign purpose it once had, and now serves as a double-edged sword.
You might ask
What is this sword?
Don’t worry, I’ll tell you. It’s all about the power of abstraction. Unfortunately, it is the nature of abstractions to obscure what lies beneath them: they pick out themes, purposes, and goals, and they manage scale.
But what does this have to do with machine anthropomorphism? Well, it all centers on this “magical” quality that we’ve assigned to modern machines, just as we’ve long done for the mind.
The edge that cuts forward
Abstractions make things easy to explain to people. Instead of explaining the deepest chemical inner workings of the human mind, we like to say that we “think”, and that “our thoughts build off of one another to form arguments.” This helps us explain why, when we write a post like this one, words seem to escape our fingers like wine from a tilted bottle.
In fact, we reason in terms of abstractions all of the time. Love is one of the most commonly discussed abstractions (in this case, abstracting away reason and atomic emotions).
Any abstraction, ultimately, is a concept that carries a load of hidden information beneath it, information that is often not contextually relevant. When I say I love somebody (romantically or not), it usually merits no further explanation. The abstraction of love itself explains, sufficiently for most people, why I choose to spend my time with someone.
People (myself included) like to express their love in metaphor, but even this dances around the deep chemical and psychological reasons for love, which would bore or depress most people.
Still, it’s not clear to most people what “thinking” actually is, and they don’t care (or simply don’t have the luxury of time) to delve any deeper. Abstractions are comfortable, convenient, and efficient. That is their beauty.
The edge that cuts backwards
Unfortunately, that is also their curse. When we load up an AI like ChatGPT, all we see is a box into which we enter questions or requests.
As we interact with GPT, we begin to see qualities in its responses that we’d normally attribute only to sophisticated humans: speed, accuracy, breadth and depth of knowledge. In fact, to feign the emotionality of a human, it might output things like:
Sorry, I didn’t realize that’s what you meant. Let me fix that.
Eventually (or, for some, immediately), we might start our queries with the word “please.” Why do we do this? Because, unfortunately, abstractions fall victim to a flaw in human nature: the art of projection.
Because most people see an AI like GPT as essentially magical in nature (not knowing that it’s an LLM running on insanely expensive hardware, performing computations faster than any person can think), we want to make the magic feel natural; otherwise, its functionality is deeply unnatural to us. Nothing like it has ever existed.
In other words, we want the magic to feel human, and so we start to assign human traits and emotions to what are, essentially, inanimate non-human objects. This is where things get bad.
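To peel back the abstraction for a moment, here is a toy sketch, purely my own illustration and nothing like GPT’s actual implementation, of what the loop behind that chat box reduces to: a language model predicts one token at a time, given everything written so far. The TOY_MODEL table and predict_next_token function below are hypothetical stand-ins for a neural network with billions of parameters.

```python
# A toy "model": each token maps to the next one. In a real LLM this is
# a neural network with billions of parameters, not a tiny dictionary.
TOY_MODEL = {
    "hello": "there",
    "there": "!",
    "!": "<end>",
}

def predict_next_token(tokens: list[str]) -> str:
    # Stand-in for the model's forward pass: look up the next token based
    # only on the last one. A real LLM conditions on the whole sequence.
    return TOY_MODEL.get(tokens[-1], "<end>")

def generate_reply(prompt: str, max_tokens: int = 20) -> str:
    # Real systems use subword tokenizers; splitting on spaces keeps it simple.
    tokens = prompt.lower().split()
    reply: list[str] = []
    for _ in range(max_tokens):
        nxt = predict_next_token(tokens + reply)
        if nxt == "<end>":  # a stop token ends the generation
            break
        reply.append(nxt)
    return " ".join(reply)

print(generate_reply("hello"))  # -> there !
```

Nothing in that loop thinks or feels; the “magic” lives entirely in the scale of the prediction step.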
Where the danger lies
Honestly, the real danger here is not immediate. For now, it’s a superficial danger, or, in terms of a metaphor:
the seed is still only germinating
So, what’s the problem? Why shouldn’t you treat a computer like your friend?
Well, for many people there’ll be no harm. If you’re merely a polite person who has a habit of saying “please”, “thank you”, and so on, there’s no problem. However, if you start to form an emotional attachment to an AI or some other machine, that’s where the roots start to grow (to follow the metaphor from above).
Eventually, there might come a moment when you have to decide. Do you kill this “friend”, or let it live?