You Don’t Have to Be a Robot if You Don’t Want to

I started off my last post remarking on how, as a species, we have lost our confidence and optimism about robots and AI. I then speculated that this may be connected with prevalent views of what it means to be a human being. I suggested that we have come to think of ourselves as destructive of our environment, neurophysiological and addicted. This, I thought, added up to the conviction that we have no free will. I quoted a piece by Yuval Noah Harari in which he argues that we have no free will, and that the sooner we get clear about that, the better our lives will get.

When I wrote it, I must confess, I failed to see something that had been staring me in the face all along. I didn’t draw the strongest possible connection between the AI-phobia, our view of ourselves and the Harari quote. I meandered from one to the other, not realising that there was a straight path between them.

I only realised it when I put my post on Reddit (r/philosophy), where you generally get good discussion among people with an interest in, and varying degrees of training in, philosophy. I suspected, and half hoped, that a lot of people would comment critically on my suggestion that we think of ourselves as destructive, neurophysiological and addicted. Surely that is too limited and pessimistic. In fact, most people commented critically on my swerve towards defending freedom of the will. And there was something interesting about the language they used in doing so. Here are selective quotes from four different commenters:

“I think worrying about free will only makes sense if you are holding on to some shred of dualism. The fact I can’t non-deterministically make inputs into the mechanistic universe and change its course is meaningless if I accept that I am a computational machine made of chemistry inside the mechanistic universe which is processing sensory inputs and producing behaviours as outputs. I am still an entity doing things, the pattern of computation exists and is me, that’s enough.”

“Why would an ego possibly reject the idea that it is a machine with purely deterministic outcomes?”

“Do you believe in some kind of dualism? Do you believe in some kind of physicalism? Do you believe in a deterministic universe? Depending on your answers to these questions, it seems quite obvious that you are a computational machine made of chemistry inside a mechanistic universe.”

“You can pretend that it’s a matter of free will that determines whether you are fooled by a fake article or not, but I would argue that it’s based on your brain’s ‘software,’ so to speak, the way it’s been moulded to work through the information and ‘variations of thought’ that it has experienced, and the biases it has gained along the way.”

This gives us a strand of our current Menschenbild, our view of what it means to be a human being, that is probably a consequence of the trends I mentioned but goes further. We are not only our brain chemistry now; in the view of these commenters, we are “computational machines inside a mechanistic universe” running on software.

Then I realised that this is not some eccentric view held by people who are spending too much time online. It had been there, represented in my blog post, all along. In the Harari piece that I quoted he talks about “hacking the human spirit.” In bits that I hadn’t quoted he talks about “hacking the human animal.” To do that, he says, you need “a good understanding of biology and a lot of computing power.” He says we need to “come to terms with what humans really are: hackable animals.” And then he considers how we might get some support in our predicament:

“It is particularly important to get to know your weaknesses. They are the main tools of those who try to hack you. Computers are hacked through pre-existing faulty code lines. Humans are hacked through pre-existing fears, hatreds, biases and cravings. Hackers cannot create fear or hatred out of nothing. But when they discover what people already fear and hate it is easy to push the relevant emotional buttons and provoke even greater fury.

If people cannot get to know themselves by their own efforts, perhaps the same technology the hackers use can be turned around and serve to protect us. Just as your computer has an antivirus program that screens for malware, maybe we need an antivirus for the brain. Your AI sidekick will learn by experience that you have a particular weakness – whether for funny cat videos or for infuriating Trump stories – and would block them on your behalf.”

So this is, when it comes down to it, what human beings are: hackable animals who need AI to protect our spirit from being hacked. This is really just a further development of the trends I discussed. If we are purely brain chemistry, out-of-control entities carrying out what the electrons in our brains require, then the idea that we’re machines is almost a logical conclusion. And that also explains why we’re so worried about the robots, or the AI, now: if we’re just machines ourselves, then a new, better, more intelligent generation of machines is clearly likely to be a threat to us. At least this is the bleak picture some people want to paint of human beings. But, believe it or not, alternative views are available.

Harari argues that “if governments and corporations succeed in hacking the human animal, the easiest people to manipulate will be those who believe in free will.” This, I think, is up for discussion.

I think those who believe that they are just machines running some virus-vulnerable software, relying on an AI sidekick, will be much easier to manipulate. They will find it easier to make excuses to themselves for their own behaviour, subject as they are to the limiting belief that they are not in charge of their own choices. The person who believes that he or she is making real choices, for which he or she is responsible, is, I would say, more likely to reflect on whether to click on something and whether to buy what they see. They are more likely to pause and reflect before acting, rather than acting on auto-pilot.

There are big questions around the Harari-style view of human beings and the world. Software is usually written by someone to make a machine perform a certain task. In this image, which is clearly more than a limited metaphor, who wrote the software, and to make the machines perform what purpose? And since we must assume that the human beings who work for and lead the governments and corporations hacking the human spirit have no free will either, what programme are they running, and on whose behalf, when they manipulate others? And if it is the human spirit and human ingenuity that build the AI to protect us from being manipulated, why can’t they protect us directly? What drives this titanic clash between the manipulators and those who build the “AI sidekicks” to protect the human spirit? What decides which side any given individual is on? Harari doesn’t need to have a good answer. It could all be random and pointless and just happening. It could be driven by selfish genes, or by energy and matter on the move since the big bang. But the whole scenario is full of assumptions and consequences that are far less obvious, scientific and factual than the initial description pretends.

Once again, as I said in my previous post: there are many alternative views we can take of human beings and their (our) place in the world. Some of them do not see us as outdated technology. Some of them do not see us as mere machines. There are more hopeful views we can take of our nature. And I believe that our beliefs about ourselves very heavily influence what we do and, ultimately, what we are. But that will have to wait for another blog post.