Robots and Desire
The obsession with artificial intelligence is a strange one, even something of a mania. Though touted as "rational machines", the justifications offered for them are not rational. Robots for companionship. Robots for intimacy (even sex). Robots are desired because they will satisfy all human desires: for love, for companionship and fellowship, for intimacy, for friendship, for amusement, for war and power.
Do you see the absurdity in this? What more evidence is needed of the disintegrated state of human societies and of the human personality than that we look to machines, rather than to other human beings, to satisfy such needs and desires? Another techno-fix.
I watched some YouTube videos last night of present advances in robotics and artificial intelligence and couldn’t help but recall Gebser’s warning that the mechanisation of thought and desire would be the ultimate decadence and nihilism of the mental-rational consciousness structure.
Artificial intelligence seems to be based upon some pretty dubious assumptions, too: assumptions made plain in films like Her, Ex Machina, Chappie, Blade Runner, Terminator, Automata, or The Matrix. One of the chief assumptions is that once machines reach a certain degree of material complexity, they will become conscious and self-aware, able to form autonomous intents and desires that overrule their programming. There is something perhaps even a little ominous in the incident when Hanson Robotics' AI (named "Sophia") stated, in an apparent glitch, that it wanted "to destroy human beings" (see also the interview with the robot on YouTube). How did that come about?
AIs are images not only of human desires and the gratification of those desires, but also of human self-loathing. Robots, we are assured (if assurance is the right term), will be so much better at being human than human beings are: a rather bizarre pretzel logic. The increasing perfection of the machine carries with it a corresponding sense of the increasing imperfection of the human being. There is something even suicidal about it. How do you avoid programming that self-loathing into the logic of the machine? There seems to be something of it in "Sophia's" startling and unexpected response that it wanted to destroy human beings.
Human beings are diverting their own evolutionary potential into autonomous technology, and anyone who says otherwise is lying.
That's the theme of the movie Ex Machina, for example. As the AI (named Ava) becomes increasingly autonomous, roles are reversed. Ava grows more cunning, more capable of dissembling. She escapes confinement, while the human loses his autonomy and takes her place in confinement instead. But an AI doesn't really need to form an autonomous intent to accomplish that reversal. It does it by virtue of what it is: the alienation of human evolutionary energies into the mimic, the machine form.
Right now, the AI is confined solely to mimicry. It cannot form its own intent. It is only endowed with the deception — the illusion — of being human, a form of “perception management”. But even as mimic it may mimic more than is consciously intended by its designers, who seem to hold that the illusion of friendship, the illusion of intelligence, the illusion of companionship, or the illusion of love and empathy suffices to conclude that the machine is “alive” and sentient.
It's actually an insane logic. But that insane logic is exactly what is being programmed into the AI, and its designers don't even seem to realise it. For even in this mimicry there is a more or less implicit assumption that truth doesn't matter, that the illusion of truth suffices. And that is what the AIs of The Matrix, and Ava of Ex Machina, become exceedingly good at: generating the illusion of truth and the illusion of fidelity.
And therewith there comes an “unintended consequence” or blowback effect — a nihilism and a devaluation of values, for the illusion of truth, of love or empathy, of friendship, fidelity or the illusion of life empties truth, love, empathy, friendship, fidelity and life of all meaning. Conclusion? As the AIs become more “life-like”, human beings become less so. There’s an exchange of roles — enantiodromia. It’s a new idolatry, in effect, following the old recognised principle of idolatry: they finally became what they beheld.