Useful Concepts -#16- Supererogation (2) – Doing More Than You Know You Can

As mentioned in the previous post, supererogation means going above and beyond. In moral philosophy it is often applied to acts that are good and praiseworthy, but not required. So anyone not doing these acts could not be criticised from a moral point of view. But when people do them, we are nonetheless pleased and think they deserve special merit.

[Previously, I discussed how it relates to the principle from moral philosophy that “ought implies can.” I wrote about the fact that the unequal distribution of what people can do, means there is also an unequal distribution of what they ought to do. This opens up some space for a discussion about whether some supererogation stems from that unequal distribution.]

In fact the principle that “ought implies can” relates to supererogation in another way as well: It is not always clear-cut and obvious in advance what someone can do. Two people with similar abilities may therefore take different actions, based not so much on a different assessment of what they can do, but on differing ideas about what to do when they are not entirely sure whether they can or can’t do it. One person may take on the relevant ought on the basis that he possibly can; another may not, on the basis that maybe he cannot.

I recently read Michael Lewis’ fascinating book The Undoing Project, about the psychology professors Daniel Kahneman and Amos Tversky, who founded the field of behavioural economics. It contains this episode:

“By late 1956, Amos was not merely a platoon commander but a recipient of one of the Israeli army’s highest awards for bravery. During a training exercise in front of the General Staff of the Israel Defense Forces, one of his soldiers was assigned to clear a barbed wire fence with a bangalore torpedo. From the moment he pulled the string to activate the fuse, the soldier had twenty seconds to run for cover. The soldier pushed the torpedo under the fence, yanked the string, fainted, and collapsed on top of the explosive. Amos’s commanding officer shouted for everyone to stay put—and leave the unconscious soldier to die. Amos ignored him and sprinted from behind the wall that served as cover for his unit, grabbed the soldier, picked him up, hauled him ten yards, tossed him on the ground, and threw himself on top of him. The shrapnel from the explosion remained in Amos for the rest of his life. The Israeli army did not bestow honors for bravery lightly. As he handed Amos his award, Moshe Dayan, who had watched the entire episode, said, “You did a very stupid and brave thing and you won’t get away with it again.” Occasionally, people who watched Amos in action sensed that he was more afraid of being thought unmanly than he was actually brave. “He was always very gung ho,” recalled Uri Shamir. “I thought it was maybe compensation for being thin and weak and pale.” At some point it didn’t matter: He compelled himself to be brave until bravery became a habit.”

Clearly, Tversky was able to pull off this feat (albeit taking some shrapnel in the process). Equally clearly, no one knew for sure in advance that it could be done. In fact, people must have thought that it couldn’t. The commanding officer would not have given the order to stay put if he had thought that it was possible to rescue the collapsed soldier. The act in question was called, even after its successful conclusion, “stupid” and “gung ho.” It was also praised as “brave” and honoured with one of the army’s highest awards for bravery. If it had been only foolish and suicidal, it could not have been called brave. But if it had been just brave and good, it could not have been called stupid.

What is in play, perhaps, is Tversky’s unusual decision that he ought to rescue the soldier, even though he couldn’t be sure that he could.

This example also sheds some light on the vagueness of the principle “ought implies can.” It is not entirely clear, for example, what constitutes “can.” Most people would probably not think that doing something at the cost of one’s life would constitute the kind of “can” that is implied by an “ought.” What about doing something that carried the risk of losing one’s life? Or doing something that meant having shrapnel in one’s body for life?

None of the other soldiers could be criticised for obeying their commanding officer’s instruction. And yet, had everyone acted on it, a life would have been lost unnecessarily. The difference between Tversky and his fellow soldiers was unlikely to be that Tversky was the only one with the ability to rescue the collapsed soldier. It’s more likely that he was more prepared to think “I ought to do this” without knowing whether he would be able to do so unscathed. Or that he was more prepared to take on the risk of damage to himself in shouldering an “ought.”

[Let’s leave aside for now the fact that it was Tversky’s life’s work to show that human beings are ill-equipped to make rational assessments of costs and benefits to themselves. And let’s leave aside also the separate debate in ethics about how many thoughts one ought to have in this kind of situation.]

Supererogatory acts are not necessarily reliant on physical ability alone. They may, for example, also include finding forgiveness for someone who has wronged us grievously. In that case too, forgiveness may be a process that someone enters into without knowing whether they can do it. They may not know whether ultimately they will truly be able not only to say that they have forgiven, but also to feel it for themselves. Or they may not be sure in advance that forgiving wouldn’t mean trading off too much of their own personality or of what is important to them. Nonetheless, some people might set off on the journey of forgiveness under those circumstances. Others may not. The latter shouldn’t come in for criticism. The former are making a supererogatory effort.


Useful Concepts -#16- Supererogation -Going Beyond the Call of Duty

“Supererogation” has long been one of my favourite words and concepts. (In saying that, I’m not claiming to live up to it much…) It stems from the Latin words “super” meaning “over” or “above” and “erogare” meaning to “expend / pay out.” So it’s about expending above what might be expected, also known as going beyond the call of duty, or going the extra mile.

In ethics, the concept describes ways of acting that are morally good and praiseworthy, but acts that are not necessarily required. Supererogatory acts are those that are good and such that we definitely would want people to perform them, but at the same time such that we couldn’t criticise people for not performing them. Heroic acts spring to mind, like running into a burning building to rescue others. Or saintly acts such as giving up all and any comforts in order to devote oneself to caring for the poorest, sickest people.

Some people, let’s call them the supererogation enthusiasts, see the concept of supererogation as a good criterion for whether a given moral theory looks plausible. Any reasonable conception of morality, they argue, should allow a place for supererogatory acts. If a theory has no such place, say because it so stringently requires human beings to do whatever achieves the greatest good for the greatest number that for every good act there might have been another, better act, and even the best possible act a human being could have done was just what he should have done, then that is a reason to be suspicious of that theory. We would want there to be a category of good acts, or ways of acting, that deserve special merit, rather than just a shrug and the acknowledgement that “he just did what he had to.”

Others, let’s call them the supererogation deniers, argue exactly the opposite: that there is no place for supererogation in any plausible moral theory. In the history of theological uses of the concept, supererogatory acts, such as making large donations to a church, were initially seen as being able to wipe out sin for the supererogatory agent and those around him or her. Against that, a view was taken that human beings were so flawed, so unable to live up to the expectations and requirements of God, that there was no possibility of supererogation. On that view of the world, whatever anyone managed to do would fall short of what was required. Human beings are so dependent on God’s mercy and grace that there is no point in talking about going above and beyond.

The denial of supererogation doesn’t need to make any theological assumptions about the relationship between person and God. It could simply be argued, for example, that human beings are such weak and stupid animals, so incapable of ensuring their own flourishing or supporting that of others, that our societies are so unhealthy and corrupt and the world such a sub-optimal, inhospitable and degraded environment, that even incredible, super-human acts could not do good at the level required. In such a context, it might then be unrealistic and unhelpful to acknowledge and applaud a category of specially good acts. It would be better simply to require of each and every agent to do his or her utmost. There would almost be a duty to go beyond the call of duty.

“Ought Implies Can”

There is a principle in ethics that “ought implies can.” It is often taken as axiomatic without further argument. To some extent that makes sense. It would be strange for a moral theory to require something from someone who is unable to do it. I think the principle “ought implies can” sheds an interesting light on the possibility of supererogation.

First of all, “ought implies can” creates an inequality in terms of what can be required from individuals. Some people can achieve more than others and therefore ought to try harder than others. Say person A is a strong swimmer while person B is physically weak and has never learned to swim. They stand at the seashore and suddenly spot someone out there in the sea frantically waving his arms and shouting for help. The weather has suddenly turned stormy and the waves are high in the strong wind. Let’s say in this scenario there happens to be no alternative means of rescuing the drowning man than for person A or person B to jump in and rescue him themselves: no coastguard to be alerted, no rescue boats or helicopters, and no other devices at all. Person A could very likely rescue the drowning man without any great detriment to herself, but person B would equally likely fall victim to the elements before getting to the drowning man. “Ought implies can” means that the requirement to jump in and rescue the drowning man falls asymmetrically on A and B. While A could be criticised if she didn’t make the attempt to rescue the drowning man, B could probably escape criticism even if he didn’t make the attempt.

Supererogation enthusiasts and deniers may place a different emphasis on the analysis of the situation but may find they’re not as far apart as it originally seemed.

“Person A just did what was required of her,” says a supererogation denier.

“But because of her superior skills, so much more could be required of her, and the drowning man got rescued. Now isn’t that worth celebrating?” replies the supererogation enthusiast.

“It’s worth celebrating perhaps that the outcome was a happy one and that person A had this great ability to swim and rescue drowning people. But if she hadn’t made the attempt, she would have been open to severe criticism, so she really just did what could have been expected,” responds the supererogation denier.

The supererogation enthusiast then has at least one further point to make: Let’s assume that person A’s ability to rescue the drowning man was down to more than just inborn physical ability. Let’s say she trained her swimming abilities a lot and spent some time doing a course in rescue swimming. Let’s say she did that while person B was playing video games and eating pizzas. Couldn’t the supererogation enthusiast point out that person A never had a duty to lead that lifestyle, and that person B can’t be criticised on moral grounds for choosing his way of life? She wouldn’t have been in the situation where she could rescue someone from drowning, and therefore ought to have done so, if she hadn’t long ago set out on a certain path, that of honing her abilities. Wouldn’t the acts of supererogation have started with the lifestyle chosen and the skills developed, rather than just with the act of jumping into the water to rescue someone?

The supererogation denier could try one counter to that. He could refuse to accept that person B cannot be criticised on moral grounds. Or at least he could say that narrow moral grounds aren’t the only consideration here, and that broader ethical issues arise. He could say that we would call B’s lifestyle lazy, self-indulgent and selfish, and that these are precisely words of criticism. He could equally say that we would call A’s lifestyle industrious, committed to self-improvement and altruism.

But at the same time, person A could have trained in rescue swimming, a non-moral skill, all her life, but never got into a situation where she could perform the morally valuable task of rescuing someone. It would be merely bad luck that she never had the opportunity to perform that good act, in the same way as it is bad luck for person B to be stuck in a situation where rescue-swimming abilities carried moral weight, rather than video-gaming skills. And at the same time person A could have been quite useless in a situation where a different kind of skill was required, say rock climbing, in which we assume she had no ability. No human could possibly train to perform excellently in every situation he or she could get into. That would take us to the realm of superheroes. Nonetheless, the situations where someone happens to be able to perform morally excellently due to work they have done to prepare themselves are those where supererogatory action is relevant. (Of course, some people train themselves and seek out such situations, e.g. by choosing careers where they might be first responders in critical situations.)

The other way in which “ought implies can” creates a space for supererogation comes from the fact that it is not always clear-cut what a person can achieve. This might only become clear in the attempt. The opportunity for supererogation then arises where someone takes an optimistic view of what he or she can do, and therefore takes on a higher burden of what he or she ought to do. But I’ll write about that in the next post.

 

 

Determinism 11 – Ethics as a Means of Living with Determinism

[This post is a part of a series of posts on free will and determinism. The first one in the series is here. The most recent one is “Is it Better to Believe That we Have Free Will.”]

Thomas Nagel, one of the greatest living philosophers, approaches the subject of free will with humility. He writes:

“I change my mind about the problem of free will every time I think about it, and therefore cannot offer any view with even moderate confidence; but my present opinion is that nothing that might be seen as a solution has yet been described. This is not a case where there are several possible candidate solutions and we don’t know which is correct. It is a case where nothing believable has (to my knowledge) been proposed by anyone in the extensive public discussion of the subject.”

He ends his contribution to the discussion of the subject – 28 pages of tightly argued complex philosophical writing – with the remark, “As I have said, it seems to me that nothing approaching the truth has been said on this subject.”

The problem, as Nagel frames it, is one of perspective:

“In acting we occupy the internal perspective, and we can occupy it sympathetically with regard to the actions of others. But when we move away from our individual point of view, and consider our own actions and those of others simply as part of the course of events in a world that contains us among other creatures and things, it begins to look as if we never really contribute anything.

From the inside, when we act, alternative possibilities seem to lie open before us: to turn right or left, to order this dish or that, to vote for one candidate or the other – and one of the possibilities is made actual by what we do. The same applies to our internal consideration of the actions of others. But from an external perspective, things look different. That perspective takes in not only the circumstances of action as they present themselves to the agent, but also the conditions and influences lying behind the action, including the complete nature of the agent himself. While we cannot fully occupy this perspective towards ourselves while acting, it seems possible that many of the alternatives that appear to lie open when viewed from an internal perspective would seem closed from this outer point of view, if we could take it up. And even if some of them are left open, given a complete specification of the condition of the agent and the circumstances of action, it is not clear how this would leave anything further for the agent to contribute to the outcome – anything that he could contribute as source, rather than merely as the scene of the outcome – the person whose act it is.”

As Nagel sees it, our problem concerning free will is a “bafflement of our feelings and attitudes – a loss of confidence, conviction or equilibrium.” The problem is that when we take an external view of our actions, we clearly see that they are events in a natural order caused by any number of factors outside of our control. Thus we get the “feeling that agents are helpless and not responsible.” We can’t find ways of making sense of our internal view, where we act autonomously. Neither can we get rid of our felt sense of autonomy in action. “We are apparently condemned to want something impossible,” says Nagel.

So if we can’t have the autonomy that we crave, the next best thing, according to Nagel, is to be able to reconcile our internal view with the external perspective. “This does not meet the central problem of free will,” according to Nagel. “But it does reduce the degree to which the objective self must think of itself as an impotent spectator, and to that extent it confers a kind of freedom.” So what we must do is learn to act from an objective standpoint as well as to view ourselves from one. Nagel adds that, since we can’t act in light of everything about ourselves, the best we can do is to try to live in a way that wouldn’t have to be revised in light of anything more that could be known about us.

Nagel proposes an ascent towards this greater reconciliation of internal and external views along four steps:

1.) Self-awareness

“We might try, first, to develop as complete an objective view of ourselves as we can, and include it in the basis of our actions, wherever it is relevant. This would mean consistently looking over our own shoulders at what we are doing and why (though often it will be a mere formality). But this objective self-surveillance will inevitably be incomplete, since some knower must remain behind the lens if anything is to be known.”

This seems like a burdensome procedure, as well as one that might undermine confidence in action and make it hesitant. But this self-surveillance could potentially become a practice that runs in our mind quite routinely. The examples Nagel gives of things we might catch through the look over our shoulder are influences over our actions that we would resist if we became aware of them: prejudice, irrationality and narrow-mindedness. We can avoid acting under their influence by increasing our self-awareness.

Self-awareness, though, can never progress so far towards objectivity that it wouldn’t include a blind spot.

2.) Practical rationality – stepping outside of impulses and desires

Nagel refers to “ordinary practical rationality” as “roughly analogous to the process of forming a coherent set of beliefs out of one’s pre-reflective personal impressions. This involves […] actual endorsement of some motives, suppression or revision of others, and adoption of still others, from a standpoint outside that within which primary impulses, appetites, and aversions arise. When these conflict we can step outside and choose among them.”

3.) Prudential rationality – stepping outside of the present moment

An important subset of practical rationality is prudence, where we don’t just step outside ourselves to arbitrate between a number of our motives for action, but step outside of the present moment to weigh future considerations that may have a bearing on our actions. (So this is where I judge the present desire to eat the second piece of cake against the future consideration of feeling like I’ve eaten too much.) Nagel warns against over-using this ability: “The dominance of a timeless view of one’s life may be objectively unwise. And compulsiveness or neurotic avoidance based on repressed desires can easily be disguised as rational self-control.”

“But in its normal form,” he concludes, “prudence increases one’s freedom by increasing one’s control over the operation of first-order motives through a kind of objective will.”

4.) Morality – stepping outside oneself

The next step goes even further than just accepting considerations from outside the present, to accepting considerations from outside one’s life: “More external than the standpoint of temporal neutrality is the standpoint from which one sees oneself as just an individual among others.” This step leads to the formation of impersonal values, and the modification of conduct and motivation in accordance with them.

The Paradox – Morality as Freedom

There is a paradox here: Nagel started us off on this ascent with a promise that it would get us to a more comfortable place with regard to our problem with freedom of the will. But we end the journey under the yoke of moral and ethical considerations. Nagel is fully aware of this paradox: “there is an internal connection between ethics and freedom: subjection to morality expresses the hope of autonomy, even though it is a hope that cannot be realised in its original form. We cannot act on the world from outside, but we can in a sense act from both inside and outside our particular position in it. Ethics increases the range of what it is about ourselves that we can will – extending it from our actions to the motives and character traits and dispositions from which they arise.”

 

Determinism 9 – The Real Oedipus Complex: Moral Responsibility Without Free Will

[This post is a part of a series on determinism. The previous one is here. The first one of the series is here.]

If Dr. Freud hadn’t named his particular complex after him, Oedipus might have become famous for the way he exemplified the relationship of human beings with their predetermined lives rather than just for that matter of killing his father and marrying his mother.

For Oedipus the force of determinism is expressed through oracles. Even at the time of his birth, his father Laius receives the prophecy that he will die at the hands of the newborn son. And it is precisely because Laius aims to avoid that fate by having the baby killed that a course of events is set in train that leads to the fulfilment of that prophecy. The baby isn’t killed but abandoned in the mountains and adopted by a couple. He kills his father in a chance meeting, not knowing who he is, in an early example of road rage. And, of course, as presaged, he marries his mother, Jocasta, not knowing that she is his mother either. In the course of events he also becomes king of Thebes. The abandoned baby, Oedipus, grows up and goes through life like a human wrecking ball, or an avalanche wreaking havoc. The people of Thebes are suffering from the plague visited upon the city in punishment for the terrible deeds its king has committed. Jocasta ends up hanging herself and Oedipus, when it all comes to light, puts his lights out, gouging out his eyes in self-punishment.

It is only then that Oedipus accepts a further oracle: that he would die in a place consecrated to the Furies, and finally be a blessing, not a curse, to the land where his life ends.

One of the many points about the myth of Oedipus has been made by the Czech writer Milan Kundera. In his novel, The Unbearable Lightness of Being, he writes:

“The story of Oedipus is well known: Abandoned as an infant, he was taken to King Polybos, who raised him. One day when he was grown into a youth, he came upon a dignitary riding along a mountain path. A quarrel arose, and Oedipus killed the dignitary. Later he became the husband of Queen Jocasta and ruler of Thebes. Little did he know that the man he had killed in the mountains was his father and the woman with whom he slept his mother. In the meantime, fate visited a plague on his subjects and tortured them with great pestilence. When Oedipus realised that he himself was the cause of their suffering, he put out his own eyes and wandered blind away from Thebes.

Anyone who thinks that the Communist regimes of Central Europe are exclusively the work of criminals is overlooking a basic truth: the criminal regimes were made not by criminals but by enthusiasts convinced they had discovered the only road to paradise. They defended that road so valiantly that they were forced to execute many people. Later, it became clear that there was no paradise, that the enthusiasts were therefore murderers.

Then everyone took to shouting at the Communists: You’re the ones responsible for our country’s misfortune (it had grown poor and desolate), for its loss of independence (it had fallen into the hands of the Russians), for its judicial murders!

And the accused responded: We didn’t know! We were deceived! We were true believers! Deep in our hearts we are innocent!

In the end, the dispute narrowed down to a single question: Did they really not know or were they merely making believe? (…)

But (…) whether they knew or didn’t know is not the main issue; the main issue is whether a man is innocent because he didn’t know. Is a fool on the throne relieved of all responsibility merely because he is a fool? (…)

Oedipus did not know he was sleeping with his own mother, yet when he realised what had happened, he did not feel innocent. Unable to stand the sight of the misfortunes he had wrought by ‘not knowing,’ he put out his eyes and wandered blind away from Thebes.”

The case Kundera makes is that a lack of knowledge concerning one’s actions does not absolve one from responsibility for them. The same case, though, can also be made about the freedom with which one chooses to perform one’s actions.

If anyone could have argued that he was not free to choose his actions, it was Oedipus. After all, his misdeeds – killing his father and marrying his mother – were predicted by a powerful oracle at birth. And despite actions taken to avoid them, they come to pass. But Oedipus recognises that it is he who has carried out the crimes, even if it was all predetermined and presaged.

Why did Oedipus feel that he needed to take responsibility for his actions even though they were foretold before he knew anything and all steps were taken to avoid them? The point is that it was still he, Oedipus as a person, who had done these acts, and so they would be with him until atoned for. As the king of Thebes he was in danger of continuing to bring the wrath of the Gods onto innocent citizens due to the person he had become. As the king of Thebes, he felt responsible for the welfare of his subjects. Oedipus’ strict self-punishment leads to his redemption and averts the plague from Thebes. Ultimately, having taken responsibility and accepted his predetermined fate, he is sought out as a person who could bring blessing to the land.

We have to make do without oracles, seers and divine punishments. Nonetheless, the things we do are strongly associated with us as individuals. If we harm others by acting on faulty reasons, we are the ones who hadn’t developed sufficient rationality to see the better reasons. We can be criticised for that and it can be hoped that we can correct and better ourselves. Taking responsibility for our actions, owning them, even if they were determined by factors outside ourselves, could be a first step to that kind of improvement and development of greater insight.

We stay responsible for the actions we take, even if we can point to factors that have caused us to take them. We took the actions that had that effect and by doing so set in train another series of cause and effect. Being the cause of something just gives us responsibility for the impacts. There doesn’t need to be a further concept of moral responsibility that comes from having freely chosen to do it.

 

Determinism 8 – The Knowledge of Determinism

[This post is a part of a series on free will and determinism. It starts here. The previous post is here.]

The thought experiment suggested not only that it comes naturally to us to think of ourselves as exercising free will in our decision-making and in our actions, but also that we find it practically impossible to imagine a life in which we don’t exercise free will. Even if we became intellectually convinced that everything is predetermined, we wouldn’t know what it would mean to just lie back and allow ourselves to do what we are predetermined to do.

We then looked at the role of our rationality, our ability to perceive and act on reasons, as the mechanism that makes determinism work for human beings and that provides the feeling of exercising free will. In this way of looking at it, our ability to perceive certain things as reasons for actions, our sensitivity to certain kinds of reasons for action, our capability to act on them and the reasons themselves are always already given.

Seeing our rationality as that mechanism explains an important phenomenon: the idea that knowing or coming to believe that determinism is a fact of our life can be in some way helpful to us.

At first glance, it is hard to see how that idea would make sense. If we believe in determinism or know it to be true, it is hard to see how we could use that belief or knowledge to influence the course our life takes. After all, we are intellectually committed to the idea that we have no control over the way our lives turn out. And yet a number of philosophers and schools of thought teach something along the lines of: given determinism, we should live in such-and-such a way.

This makes better sense if rationality is involved in the way in which the predetermined course of events unfolds with human beings. Then the knowledge or belief in determinism can itself become a reason for certain actions or to act in certain ways for those human beings who come to believe in it.

So, for example, a human being who has become convinced that determinism runs his or her life, can take that as a reason not to get too upset if things don’t go his or her way. Or if I think that determinism is at the foundation of other people’s behaviours, that knowledge can become a reason for me not to react too strongly to any perceived slights, bad behaviour or unpleasantness from others.

 

Determinism 5 – The Split Second of Freedom?

[This is a part of a series of posts on free will and determinism. The first post is here.]

In the 1980s, Benjamin Libet performed some experiments relating to free will. He sat people down in front of a kind of clock face with a dot moving around it very fast. They could stop the dot with a flick of the wrist. Libet asked them to note where the dot was when they formed the intention to move their wrist in order to make the dot stop. He also measured, via an electrode on their head, when the “specific electrical change in the brain (the ‘readiness potential’)” that “precedes freely voluntary acts” occurred.

He found that the electrical change in the brain occurs more than half a second before the action is taken. And that the human subject becomes aware of the intention to act 350-400 milliseconds after the electrical change but still around 200 milliseconds before the action is taken.
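
To keep that sequence straight, here is a minimal sketch in Python of the reported timeline, with the action at time zero. The round figures are illustrative assumptions based on the numbers quoted above, not Libet’s exact measurements:

```python
# Libet's reported timeline, measured relative to the action (t = 0 ms).
# The round figures below are illustrative assumptions drawn from the
# approximate numbers quoted above, not exact measurements.
readiness_potential_ms = -550  # electrical change, over half a second before the act
awareness_ms = -200            # conscious intention reported ~200 ms before the act
action_ms = 0                  # the flick of the wrist that stops the dot

# Awareness follows the readiness potential by roughly 350-400 ms:
print(awareness_ms - readiness_potential_ms)  # 350

# ...which leaves the ~200 ms window in which Libet locates a possible veto:
print(action_ms - awareness_ms)  # 200
```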

This research was pounced on by those arguing that we have no free will. How can we be said to choose freely to act when the evidence for the action about to be taken is there before we are even consciously aware of it?

But Libet himself wasn’t quite as categorical about his findings. He clearly took the view that we should assume that we have free will. He suggested that his experiment showed that free will might consist in being able to veto actions that the brain proposes to undertake. In the 200 or so milliseconds between our awareness of our intention to act and the action itself, we can stop ourselves from acting. He says that in some of his experimental subjects the electrode sometimes showed a readiness potential and they became aware of an intention to act, but they didn’t ultimately take the required action to stop the dot moving. His conclusion is:

“The role of conscious free will would be, then, not to initiate a voluntary act, but rather to control whether the act takes place. We may view the unconscious initiatives for voluntary actions as ‘bubbling up’ in the brain. The conscious-will then selects which of these initiatives may go forward to an action or which ones to veto and abort, with no act appearing.”

The ethical conclusion Libet reaches is that guilt and the attribution of moral wrong-doing should relate only to actions taken, not to thoughts about actions. Specifically, he rejects the kind of doctrine expressed in the Sermon on the Mount:

“Ye have heard that it was said by them of old time, Thou shalt not commit adultery: But I say unto you, That whosoever looketh on a woman to lust after her hath committed adultery with her already in his heart” (Matthew 5.27–8).

But those who see Libet’s experiments as proof of the absence of free will, have to force themselves to overlook the fact that Libet quotes – at the end of his article published in a scientific journal – the author Isaac Bashevis Singer who said:

“The greatest gift which humanity has received is free choice. It is true that we are limited in our use of free choice. But the little free choice we have is such a great gift and is potentially worth so much that for this itself life is worthwhile living.”

But it doesn’t end there. Psychologist and meditation teacher Tara Brach discusses the Libet experiment and quotes with approval another Tara (Bennett-Goleman), who calls the milliseconds between our awareness of an intention forming and the physical movement to implement it the “magical quarter-second.” Tara Brach concludes that:

“By catching our thoughts in the magic quarter-second, we are able to act from a wiser place, interrupting the circling of compulsive thinking that fuels anxiety and other painful emotions. If our child asks us to play a game and we automatically think ‘I’m too busy,’ we might pause and choose to spend some time with her. If we’ve been caught up in composing an angry e-mail, we might pause and decide not to press the send button. The basic mindfulness tools for working with compulsive thinking are ‘coming back’ and ‘being here.’”

It’s time to disentangle some thoughts. Here’s what I think:

  1. Libet’s experiment is interesting but it doesn’t necessarily show any of the things he or others claimed. It could simply be a case of brain-hand-eye co-ordination taking certain amounts of time. Also, if we believe that we have free will, as Libet does, then how can we tell that the electrical change in the brain isn’t just something we generate when we exercise our free will?
  2. The ability to say no to things that bubble up in the brain is an unconvincing and unsatisfying version of free will. Surely we would want the ability to choose positively what actions to take, rather than just a power of veto.
  3. I don’t think Libet’s moral conclusions would follow from his interpretation of free will as a power of veto. In a later blog I’ll aim to argue for moral responsibility even for predetermined actions, not just for freely willed ones.
  4. Even if Libet’s and Bennett-Goleman’s magic quarter-second doesn’t follow from the experiments, there is clearly the possibility of a reflective space before any action. I think making use of it, and practising the ability to extend it, can make our actions better, even if not necessarily freer. This is again something I’ll want to discuss in a future blog.

 

Determinism 4 – Marginalising the Forces of Determinism

[This is a part of a series of posts on determinism. It starts here.]

A bit of cognitive bias can’t fully explain our strong experience of free will on its own though. The other move that we tend to make is down-playing the extent to which determinism is a force in our lives.

Sometimes the rejection of “nature” (“it’s in our genes”) as a determining factor in favour of “nurture” (“it’s in our upbringing”) is presented as a rejection of biological determinism in favour of choices we make about the way we organise society, the support we give parents and children, educational policies and so on. But the point, as far as we’re concerned, is that we can’t choose what kind of home, society or socio-economic context we’re born into. We don’t choose our parents or the time and space we’re born into. Nature and nurture are forces of determinism influencing the way we are.

How is it that we reject the idea of determinism due to our lived experience of exercising free will in our everyday life, but don’t reject the idea of free will due to the influence of deterministic forces in our lives? Do we systematically avoid thinking about what it means for us that over a lot of our lives we exercise no control and have no choices to make? We do this not only by forgetting how much about us is given by the circumstances of our birth, but also by not admitting to ourselves that we perform many actions without choosing what we do.

We do this by pretending that exercising a deliberate choice is the paradigm case of action for us. We pretend that consciously deciding to do one thing after another is the normal way in which our daily life develops. Here are some of the things we need to ignore or play down in order to maintain that pretence:

Addictions: The addict does things because the addiction forces him to do so. By definition, he doesn’t choose to do them but is subject to forces outside of his control. We can blame the addict for not having got over his addiction. In doing so, we can refer to former addicts who have learned to control their addiction. “If he really tried,” we can tell ourselves, “the addict could exercise control over his life.” But in the moment where the addict is an addict, as opposed to a former addict, the addiction controls his actions. At that point, whatever happens later in his biography, the actions relating to his addiction are determined by forces outside of his control.

Phobias: The arachnophobe should be able to walk near the tiny little spider in the room to do what she needs to do. The person who is scared of heights should be able to walk up the mountain to enjoy the view. The agoraphobic should be able to leave her house like any other person. But they can’t. From a non-phobic perspective, we pretend that they’d have a choice if only they pulled themselves together. But that is a simplistic view of their condition. The phobia determines the action or inaction in this case. There is no choice.

Reactivity, Habit, Auto-pilot: Much as the forces of determinism can take obvious and strong forms, it’s also the case that we often act without thinking much. The science and meditative discipline of mindfulness show us that we don’t control the thoughts going through our minds, which often are the basis for actions we take.

What proportion of our actions would we want to perform after full deliberation, intentionally and conscious of all the facts, having weighed up the pros and cons carefully, in order to say that we are exercising free will in a meaningful way in our lives? What proportion would we be happy to cede to the forces of determinism? Is it enough to be able to say that for the things that really count, for the big life decisions and when it really matters, we exercise free will? And can we really say that? Is it enough to tell ourselves that even when we don’t, we could if we tried?

[The next post in this series is here.]

Determinism 2 – What is the Problem?

This post is a part of a series. Here’s the first one of the series.

The majority of people who responded to my thought experiment said they would try to forget about the news and just spend their day as they were planning to do anyway. Slightly fewer people saw themselves newly absolved of responsibility for their actions and therefore went for ice cream and telly. There were also a few who were going to spend the day proving that we do have free will, regardless of the panel’s findings; some of them brought issues of ethics or religion into it. One or two just remarked that they would do whatever they were pre-determined to do, and one or two others said that of course we have no free will and everything is predetermined.

Of course there is more than one problem surrounding determinism and free will. It’s worth untangling them a bit.

First of all there is a relatively straightforward problem: In the pursuit of our daily lives we appear to exercise our will freely. From minor decisions as to what kind of breakfast cereal to buy, to major life choices such as whom we should marry or whether we should change jobs, careers even, or move to a different country, our life seems somehow to be up to us. Or at least we seem to have a say in the direction it takes. And we would like to think that even with major moral dilemmas, such as – during times of war – whether to join the resistance and fight the forces of oppression, or stay at home to look after a sickly relative, we would be free to make that decision. In such cases, I suspect, many of us would prefer the ability to make our choices freely to the alternative of not choosing at all. That would remain the case even if it ultimately means having an ability to make choices that will turn out to have been bad choices, tragic choices or fatal choices.

On the other hand, we understand the universe we inhabit to be a physical universe in which things follow the laws of physics and other sciences. Bodies move according to laws of physics that we can work out through observation and the other methods of science. In that physical universe every cause has an effect and every effect its cause. Certain things follow each other as night follows day. Even where a divine spirit is assumed to be a part of this picture, this spirit is the provider and enforcer of these laws that govern bodies. And human creatures are undeniably physical beings who – as bodies – are subject to the same laws. What’s more, with the progress of neuroscience, the more we can look into the activities of tiny particles in our brains and the mental processes triggered by these activities, the more scientists conclude that the lives of our minds are as governed by these laws of science as our bodies.

Secondly, there appears to be a kind of psychological problem: In the thought experiment, we have come across an overwhelming reason to believe that we have no free will. And yet, it is not just my perverse construction of the experiment that leads us to ask ourselves the question “so what do we do now?” Acquiring the knowledge that we are predetermined creatures doesn’t seem to change our sense of agency. And for the small number of people who answered the thought experiment by saying “I’d do whatever I’m predetermined to do,” the challenge would be to describe how the experience of doing so is qualitatively different from the experience of living life exercising free will. A life where we just surrender to determinism, switching off whatever faculty we think we’re exercising when making choices or decisions for ourselves, doesn’t seem feasible.

Thirdly, there is the question of responsibility for our actions. Some people found the certain knowledge that their actions are a result of determinism, rather than an exercise of their free will, liberating. They chose to sit in front of the telly and eat ice cream. Watching TV and eating ice cream are of course just representative examples of how we might choose to live our lives if we were freed from the responsibility for our actions that we normally place upon ourselves or see ourselves under. There are unlimited other things people might choose to do if they saw responsibility for their actions lifted from them. Again, isn’t it an odd and paradoxical psychological effect that the knowledge that their actions are pre-determined suddenly seems to free people up to do what they always wanted to do?

But aside from the psychological effect, there’s the ethical point that a lot of people see the seeming absence of responsibility for our actions, moral responsibility in particular, as so repugnant that they would take that as a starting point to argue against determinism. It may also be possible to rescue moral responsibility within a deterministic picture of life and the universe.

[The next post in this series is here.]

Cats and Dogs in the Library – Non-Human, Human and Superhuman Rationality

Writing this last post about some philosophers’ treatment of animals reminded me of another philosopher’s, Alasdair MacIntyre’s, book Dependent Rational Animals.

Philosophers over the centuries have been fairly binary in distinguishing between human beings and other animals, mostly on the basis that non-human animals lack some capacity for reasoning or deliberation. They act on instincts and drives, whereas human beings act on reflection and reasoning.

Rationality (meaning the ability to reason) also tends to be connected with language skills. What is key is the ability to formulate for oneself and express to others one’s reasons for actions, to reflect on them and critique them even before acting. The advanced language skills of human beings have helped set us apart – in the minds of philosophers at least – as the species that is able to reason, against the others that are unable.

This binary view can be attacked from two sides: Firstly, an argument could be made to bring human rationality (in the sense of being able to reason and act on reasons) closer to certain animal behaviours. Secondly, it could be argued that animals actually do have some capacity for reasoning that is not qualitatively different from that of human beings.

MacIntyre pursues both those lines of attack. He argues that we would do well to see our human reasoning capability as a development that emerges from our animal nature and is continuous with animal behaviours:

“It is not only that the same kind of exercise of the same kind of perceptual powers provides, guides, and corrects beliefs in the case of dolphins – and some other species – as in the case of humans, but that our whole initial bodily comportment towards the world is originally an animal comportment and that when, through having become language users, we under the guidance of parents and others restructure that comportment, elaborate and in new ways correct our beliefs and redirect our activities, we never make ourselves independent of our animal nature and inheritance. Partly this is a matter of those aspects of our bodily condition that simply remain unchanged, of what remains constant through and after the social and cultural scheduling and ordering of our bodily functions: toilet training, developing what one’s culture regards as regular sleeping and eating habits, and learning what constitutes politeness and rudeness by way of sneezing, spitting, burping, farting, and the like. And partly it is a matter of what is involved in our becoming able to reflect upon our overall comportment and our directness towards the goods of our animal nature, and so in consequence to correct and redirect ourselves, our beliefs, feelings, attitudes and actions.”

MacIntyre also discusses at some length the research showing the ability of some species, e.g. dolphins, to learn and use language to develop and communicate hunting strategies and to adjust their behaviours to a changing environment.

In some experiments, dolphins were able to learn a vocabulary and syntax made up by human beings using dolphin sounds, and to distinguish sentences like “take the surfboard to the frisbee” from “take the frisbee to the surfboard.” (Dolphin researchers seem to live a fun life full of frisbees and surfboards.)

This ultimately leads MacIntyre to the suggestion that there is a spectrum of reasoning ability, and that some animals are further along that spectrum, closer to where human beings are, than others:

“To acknowledge that there are these animal preconditions for human rationality requires us to think of the relationship of human beings  to members of other intelligent species in terms of a scale or a spectrum rather than of a single line of division between ‘them’ and ‘us.’ At one end of this scale there are types of animal for whom the sense of perception is no more than the reception of information without conceptual content. […] At another level are animals whose perceptions are in part the result of purposeful and attentive investigation and whose changing actions track in some way the true and the false. And among such animals we can distinguish between those whose perceptions and responses are more fine-grained and those whose perceptions and responses are less so.”

This leads MacIntyre to a revision of a famous moment in philosophy:

“Wittgenstein remarked that ‘If a lion could speak, we could not understand him’ (Philosophical Investigations II, xi, 223). About lions perhaps the question has to be left open. But I am strongly inclined to say of dolphins that, even although their modes of communication are so very different from ours, it is nonetheless true that if they could speak, some of the greatest of the recent interpreters of dolphin activity would be or would have been able to understand them.”

The “spectrum” idea of animal rationality reminds me of one more thought, from a text by the philosopher-psychologist-theologian William James, who is forever condemned to have the tagline “brother of the novelist Henry James” after his name. He wrote:

“I firmly disbelieve, myself, that our human experience is the highest form of experience extant in the universe. I believe rather that we stand in much the same relation to the whole of the universe as our canine and feline pets do to the whole of human life. They inhabit our drawing rooms and libraries. They take part in scenes of whose significance they have no inkling. They are merely tangent to curves of history the beginnings and ends and forms of which pass wholly beyond their ken. So we are tangent to the wider life of things.”

Even if the human species represents a point relatively far along a spectrum of rationality, it is still only a point on a spectrum. That leaves open the possibility that there are points on the spectrum beyond human rationality. Not everyone will find palatable the idea that there are already beings in the universe – divine or alien, presumably – who have a higher form of experience than ours, relative to whom we are like domesticated cats and dogs in drawing rooms and libraries. But whether or not it is already available to any creature, the possibility remains that rationality could develop further than that of human beings.

There is no reason to be so egocentric and grandiose as to assume, from the human perspective, that we represent not only the high-point but the end-point of rationality. And it is intriguing to think about some of the consequences of that. Some points, briefly, that spring to mind:

  • It could be argued that human beings don’t even use their rationality for much of the time. We often act automatically, instinctively, reactively, habitually. That is fine and probably saves time as well as mental effort. But we need to be clear that for much of the time we don’t make use of the highest form of our rationality. If, as Viktor Frankl says, there is a space “between the stimulus and the response and in that space lies our power and our freedom” we should be aware of how often we don’t make use of that space, but act in a more animal-like stimulus-response mode.
  • The cats and dogs that trash the furniture of the drawing rooms or make a mess of the libraries are not the ones that are most popular with the people who understand the features of those rooms. In the same way we should approach our environment, the universe, whose features we can’t fully comprehend, with a certain humility and a desire to leave it intact.
  • We should keep alive the hope that it is possible to refine our rationality to a higher point on the spectrum, not just over the evolutionary history of our species, but over a lifetime. The dolphins that learned a more advanced level of vocabulary and syntax developed their language and reasoning capabilities to a point that wasn’t necessarily available to other individuals of their species. But they were trained by human beings, who were further along the spectrum of rationality. If we were to aspire to develop beyond our point, to whom would we look for training? It’s a tough question. But we have concepts of perfection: Plato’s idea of the Good, the Stoic concept of the wise person, religiously inspired images of the highest attainable mode of living, the contemplation of beauty, the virtues, or even love. (“Will not ‘Act lovingly’ translate ‘Act perfectly’, whereas ‘Act rationally’ will not? It is tempting to say so,” writes Iris Murdoch.)

 

Drowned Rats and Mad Dogs: Terrible Things That Have Been Done to Animals to Learn about Human Nature

Drowned Rats

In the 1950s Curt Richter was doing experiments on stress responses in wild and domesticated rats when he accidentally came across a strange phenomenon.

The experiment involved putting rats into large jars full of water and measuring the length of time the rats would swim in water of varying temperatures before they drowned. He was able to show that there were temperatures at which rats survived for longer, and temperatures at which they drowned sooner. (Don’t ask me about the use of this experiment.) The problem was that there were outliers with large variation in the results. Some rats swam for 60-80 hours, while others, particularly wild rats, would drown within minutes.

This variation reduced the significance of Richter’s findings, so he wanted to work out why some of the rats drowned almost immediately. Having ruled out some other factors, Richter worked out what was going on by considering the whole situation the rats were in. He writes:

“The situation of these rats scarcely seems one demanding fight or flight—it is rather one of hopelessness; whether they are restrained in the hand or confined in the swimming jar, the rats are in a situation against which they have no defense. This reaction of hopelessness is shown by some wild rats very soon after being grasped in the hand and prevented from moving; they seem literally to ‘give up.’”

Next Richter finds a way to prevent the rats from literally just “giving up.” He does this by training them in the idea that their situation is not hopeless. As he describes it:

“Support for the assumption that the sudden death phenomenon depends largely on emotional reactions to restraint or immersion comes from the observation that after elimination of the hopelessness the rats do not die. This is achieved by repeatedly holding the rats briefly and then freeing them, and by immersing them in water for a few minutes on several occasions. In this way the rats quickly learn that the situation is not actually hopeless; thereafter they again become aggressive, try to escape, and show no signs of giving up. Wild rats so conditioned swim just as long as domestic rats or longer.”

Let’s just note for now that the rats who learned not to become hopeless in this way didn’t necessarily survive the experiments. They simply stopped messing up Richter’s experiment by being hopeless outliers. Ultimately they were still participating in an experiment to work out how long a rat normally struggles in a tank of water before it drowns. The difference is that, after having been given hope, they then died of exhaustion rather than hopelessness. (I’m sorry if that sounds gruesome. It is what it is.)

Let’s also note for now that Richter thought these experiments were relevant to human beings. He suggested that the immediate drowning (“sudden death”) is comparable to so-called “voodoo” deaths – instances of “mysterious, sudden, apparently psychogenic death, from all parts of the world.” But he also thought it might be comparable to patients dying in hospitals, not from disease or unsuccessful operations, but simply from fear of an operation. He also cites instances of soldiers in good health dying suddenly during the Second World War.

Richter’s hopeless rat experiments have become famous for their simple message: look at what the mere presence of hope in the mind makes possible! The physical endurance of a rat in a water tank is greater by a factor of hundreds – from a few minutes to 60–80 hours – all due to a single mental ingredient: hope!

What strikes me as interesting, though, is the picture one must have of the kind of universe we inhabit if these experiments are meant to be meaningful to our situation. Presumably the experimental set-ups would need to reflect our environment in some way, and the things that happen to the rats and dogs would have to be comparable to the kinds of things that happen to human beings.

Is a rat struggling to stay afloat in a water tank suitably similar to life on earth for a human being? What of the experimenter holding the rat briefly and then freeing it? What about immersing it in water for brief periods of time at first? Is life meant to be like that – short periods of captivity, pain and struggle followed by momentary relief, giving us hope that there is a point in struggling on? But what for? Only to be able to withstand longer periods of struggle and then drown anyway?

I know that the experiment is, by necessity, a simplified model of reality. But this experiment is said to deal with concepts like hope, death and survival. And all this in an environment where there is no meaning and no vision of the good (or even just of the good life for a rat) apart from survival itself? What are these rats who are not hopeless meant to be hoping for?

And what is the experimenter who gives the rats a carefully measured taste of freedom every now and then in order to make them hopeful? Is he a God in our universe? A cruel God? Or is the experimenter just trying to recreate a situation where painful experiences alternate more or less randomly with less painful, neutral or even positive ones, while we make up our own minds about the meaning of it all?

What if the rat experiment had been carried out in an entirely different framework of thought, say one in which death was seen as liberation from the necessary suffering that is life? What if the end of a rat’s life was seen as an opportunity for re-incarnation as a different, less ratty life-form, or a chance to enter into nirvana? Then the rats that drown first aren’t in fact losing hope or giving up, but simply letting go – no longer clinging to life under the misguided notion that it is worth clinging to.

Then the experimenter who holds the rats briefly before letting them go, or puts them into the water tank briefly before taking them out, is not giving them hope, but rather strengthening in them a tendency to grasp – to believe that it is possible, if they just work hard enough, to fulfil their cravings and be free of struggle and suffering.

Mad (Sad) Dogs

About a decade after Richter’s rat experiments, Martin Seligman and colleagues did some influential experiments with dogs.

They gave electric shocks to one group of dogs who had access to a switch with which they could stop the shocks, and to another group who had no way of stopping them. Later they put the dogs into a cage where they received electric shocks but could escape them by moving over a small obstacle to a different part of the cage. The dogs from the first group quickly found out how to avoid the shocks and largely did so. The dogs from the second group just suffered the shocks. The conclusion: these dogs had learned helplessness.

Seligman was immediately interested in the implications for human suffering and wellbeing. He says the animals who had learned helplessness looked “downright depressed.” And it was what learned helplessness in dogs might reveal about depression and other mental illnesses in human beings that looked interesting.

But again, there are some outliers. And the outliers begin to look even more interesting than the normal cases. As Seligman writes:

“It all stems from some embarrassing findings that I keep hoping will go away. Not all of the rats and dogs become helpless after inescapable shock, nor do all of the people after being presented with insolvable problems or inescapable noise. One out of three never gives up, no matter what we do. Moreover, one out of eight is helpless to begin with – it does not take any experience with uncontrollability at all to make them give up. At first, I try to sweep this under the rug, but after a decade of consistent variability, the time arrives for taking it seriously. What is it about some people that imparts buffering strength, making them invulnerable to helplessness? What is it about other people that makes them collapse at the first inkling of trouble?”

Seligman’s experiments provided the foundations for a new school of psychology: positive psychology, which focussed on helping people lead happier, more effective lives rather than on removing psychological diseases and weaknesses. Some of this work focussed on the characteristics of those outliers who refused to learn helplessness, on the assumption that these characteristics could be taught to others. This led to the insight that it helps to view bad events as temporary rather than permanent, and as specific rather than universal. These non-human and human animals apparently have hope. Hope again emerges as a key factor, this time not only in longer survival but in wellbeing and happiness.

But What Does It All Mean?

The experimental set-up again contains some ideas about what life in this universe is like. Some individuals may experience phases in life in which they are unable to control the painful events (electric shocks or other) they are exposed to. From this experience they may conclude that it is pointless to try to avoid painful events, and surrender to them. They no longer struggle against such events or look for ways to avoid them, and they continue in this resigned state even in later phases of their lives, when they could avoid them.

But how are the universe and human life really set up with regard to painful events? Is it more like having a switch with which we can make them stop, or more like not having one? Is it more like being able to move from the part of the cage where we are exposed to electric shocks to another part where we aren’t, or more like being stuck where the shocks are?

The kinds of painful events human beings outside of experimental settings are exposed to are more diverse than electric shocks. And there are other things we can aim for in life than the avoidance of pain. What if the painful events are on the path to a greater good that makes them worthwhile? (To be fair to Seligman, he fully recognises that purpose, meaning and the pursuit of a greater good are key to happiness. In that respect he seems to have left the dog experiments well behind.)

Another experiment, more of a “thought experiment,” comparing a dog’s life to that of a human being, stems from around 2,000 years before Seligman. It’s that of an ancient Greek Stoic philosopher who says that a person’s relationship to fate is like the relationship between a dog tied to a cart and the master driving it. The master will get the cart from A to B either way. It’s the dog’s choice whether he trots along willingly or gets dragged every bit of the way.

Note how the assumptions about man’s (and dog’s) ability to avoid pain are different in this example from the assumptions in Seligman’s experimental set-up. By necessity, we have to undergo the experiences predestined for us. How painful they are depends not so much on our efforts to avoid them. Quite the opposite – the journey becomes less unpleasant if we adjust our mental attitude to undergoing it willingly.

Clearly, how hard we try is influenced by the strength of our belief in our ability to avoid events that are bad for us and move towards those that are good for us. So a belief that we can change things for the better and that some events are under our control can be a positive thing to have.

What I’m less sure about at the moment is what happens to hopeful people when it turns out that events really aren’t under their control. (I’m assuming that such events are an inescapable feature of the human condition in this universe.) Do more realistic people then fare better – in that they waste less time struggling unsuccessfully? (Think about the fact that, in the universe Richter creates for them, all the rats drown in the end.) Or in that they are better, like the Stoic dog, at embracing the journey and submitting to it, thus at least not compounding the pain of painful events with the pain of thinking that things should be otherwise and struggling against them?