Today was the first day of summer school 2014: an intensive one-week graduate programme in ‘Morality: Cognitive and Evolutionary Origins’, organized by the Central European University in Budapest. I’ve decided to keep a blog to document this week, as it is already shaping up to be a wonderful experience.
Monday 23rd June, 2014 (Day 1)
Today was the first day. The course has about 30 students, and is taught by leading scholars who work on morality from a range of disciplines. In fact, the faculty list for the course reads somewhat like a Who’s Who of moral psychology: Nicolas Baumard; Jean-Baptiste André; Paul Bloom; Redouan Bshary; Leda Cosmides; Molly Crockett; Gergely Csibra; Fiery Cushman; Keith Jensen; Dan Sperber; John Tooby; and Karen Wynn. The course tackles a variety of approaches to understanding the proximate and ultimate explanations of human morality, including evolutionary biology, comparative psychology, evolutionary psychology, cognitive neuroscience, developmental psychology, cognitive psychology, and cognitive anthropology. Each faculty member gives two 1.5-hour talks, with time for questions, allowing the students to really connect with and consider their research. With that many leading scholars – and a number of very keen and bright graduate students – the energy in the room has been inspiring.
The course is taking place at the CEU Campus in downtown Budapest, right by the river, and surrounded by some of Budapest’s most iconic and beautiful landmarks.
We started the course today with lectures on the evolutionary biology and comparative cognition approaches. First, Bshary walked us through the empirical evidence for reciprocity in animals. This work from evolutionary biology is relatively unknown to me, and I found it fascinating. Particularly thought-provoking was Bshary’s contention that we actually have relatively few examples of reciprocity in animals – that by-product mutualism and forms of pseudoreciprocity are more common. Bshary went on to discuss why there might be so little evidence for reciprocity, taking us through the game-theoretic assumptions of the Prisoner’s Dilemma and systematically providing evidence of how these assumptions are actually violated in nature.
Next, André provided a complementary talk on reciprocity in animals – this time providing a theoretical approach to contrast with Bshary’s empirical one. André began by discussing the special problem of cooperation: the costs and benefits are not paid and received by the same individuals – there are externalities. André then discussed bootstrapping problems in explaining cooperation for more than just immediate benefit. For example, we might consider the issue through the lens of kin selection: cooperators mostly interact with other cooperators, and so they will get an indirect benefit. However, the bootstrapping problem is evident here: genetic relatedness with neighbors is not a given, but is itself in part an outcome of evolution. Relatedness must evolve first for immediate benefit, and only then allow for the evolution of greater cooperation. Similarly, one can try to argue that cooperation derives from common interest, but again there is a problem: common interest itself must have evolved first for an immediate purpose. André compellingly argued that all the biological functions needed to reciprocate could not evolve together at once: some functions needed for reciprocity must evolve first for different reasons. Again, this talk was somewhat removed from my academic background, but was very interesting and raised a number of important conceptual issues that I hadn’t really considered before.
Finally, we had an incredibly engaging talk by Keith Jensen on primate prosociality. Jensen managed to electrify the room, fusing a discussion of deep theoretical issues in the study of primate prosocial behavior with plenty of laughs. In particular, Jensen discussed how, for an act to count as prosocial, ulterior motives have to be ruled out – and they haven’t been in some work conducted on primate prosociality. I found this discussion, again, very interesting, and it reminded me of discussions in social psychology about egoism vs. altruism. In one sense I think I agree with Jensen, but I also think that any non-altruistic reasons must be proximate before we can say an act is not altruistic – that is, there must be some immediate reason for helping that is not due to other-regarding preferences. Otherwise I worry that we run the risk of ruling out the potential for altruism altogether, since it is likely that all of our actions have some ultimately self-interested function, even if we’re not aware of it.
To close the day, we had some student presentations. Nicola Raihani provided a fascinating talk about the reputation of punishers, suggesting that the reputation of punishers depends on whether partner choice is possible, and whether punishment is possible. She suggested that people like fair partners, but not necessarily tough ones. This talk gave me a lot to think about, and I hope to include this concept later in some of my own work. Next, there were two student talks on the effects of eyes on moral decision making, by Zoi Manesi and Anna Kis. I think this work on the eyes is very interesting, and am interested in further discussion about why this effect occurs, and what exactly it tells us about morality. Finally, Alejandro Vasquez gave a talk about how episodic foresight relates to social competence in pre-school children, which is in turn linked to prosocial behavior.
Overall, this has been a stimulating and exciting first day, and I’m looking forward to tomorrow.
Tuesday 24th June (Day 2)
Second day of the Summer School, and today may have been even better. We started with an utterly fascinating talk by Leda Cosmides on the architecture of motivation. It was very exciting to be afforded the opportunity to hear her talk, as I have studied the work of Leda Cosmides and John Tooby for many years. She didn’t disappoint. She talked in depth about why – and how – traditional frameworks of motivation are limited: they can only account for a small subset of the emotions we feel. We therefore, she argued, need new theoretical tools and concepts. One core theoretical concept that has already begun to drive our understanding of the mind further is the recognition that the brain is an organ – its evolved function is to extract information from the environment and use it to regulate physiology and behavior. The brain, in other words, is a computational device. This much is familiar to anyone who has read her work, but what I found particularly exciting was her discussion of what she termed Internal Regulatory Variables (IRVs). Cosmides argued that IRVs evolved to track narrow, targeted properties of the body, social environment, and physical environment whose computation provided necessary inputs to evolved decision making, and focused in particular on a kinship index and a welfare tradeoff ratio. It is beyond the scope here to attempt even a preliminary coverage of her arguments, but suffice it to say that it was an excellent and compelling talk. I did have some reservations, however, regarding the status of this work as it connects with the massively modular view of the mind. While some modularity of the mind seems uncontroversial, I am unconvinced that the degree of modularity – massive modularity – proposed and implied in their work over the years is an accurate depiction. In some ways I worried that the discussion of IRVs in the social domain puts one on a philosophically tricky road.
Next, we had an energetic talk by the always-wonderful Paul Bloom. Like Keith Jensen yesterday, his talk invigorated the room, with the other students and faculty held in rapt attention. After criticizing the focus on definitions in morality (‘we don’t try and define cancer accurately before treating it, so why must we with morality?’), he discussed the broad scope of contexts in which morality is usually applied. Paul asked the room to raise their hands and indicate whether they thought three things morally wrong or not: incest between consenting adults (no-one thought it wrong: bloody liberal hippie types); eating your dog after it is hit by a car (about 6 people, including me, thought it wrong); and cleaning your toilet with your national flag (only me). Bloom went on to discuss examples of third-party punishment and associated emotions, discussing the role of perceived immorality in the increasingly observed ‘twitter-storms’. Finally, Bloom discussed in more depth the role of emotions – and particularly empathy – in explaining prosocial behavior. While I enjoyed his talk, I did have some concerns regarding the explanatory power of his position. Bloom – in my understanding – was arguing that empathy plays a crucial role in producing prosocial behavior, and that the people we are not nice to (e.g. Jews in 1940s Germany) are those that do not inspire empathy. This seems somewhat problematic to me, as it amounts to the suggestion that we help people because we feel empathy, and that we feel empathy for people we are likely to help. This seems to push the question back, in my mind: why do some people inspire empathy while others don’t? What psychological processes lead to the scope of empathy we feel? If prosocial behavior is to be explained by empathy, we must have a psychological account of the processes leading to, and hindering, empathy.
The last faculty lecture of the day was by Gergely Csibra, who discussed costs and benefits for agents, and their implications for human social relations. It was very interesting to hear him speak, particularly as I had studied his work as an undergraduate, and it was interesting to hear his thoughts on how this connects to moral behavior.
As with yesterday, after the three faculty lectures we had four student presentations in the late afternoon. Dr. Laura Kimberly gave a very interesting talk on a potential asymmetry bias with regards to virtues and vices: are we more interested in, and do we pay more attention to, immoral behavior rather than moral behavior? The always wonderful Jamie Luguri gave a fascinating talk on counterfactual moral reasoning, and how this is linked to social cognition in general. While Jamie focused on negative (immoral) events, I would be interested in future to read research exploring counterfactual reasoning in extremely moral events. Mark Sheskin gave a thought-provoking talk concerning his thoughts about problems with the current ingroup-outgroup work in prosocial behavior. Perhaps due to my group-based social psychology background, I wasn’t particularly convinced by his suggestions, but it was certainly interesting and I am excited about talking to him more about this tomorrow.
Today was also the day that I gave my own presentation. I was, to use the wonderful British vernacular, “shitting a brick”. I have given talks before, but never to a room full of 40 people who are experts in the field. I was very nervous before, desperately hoping I wouldn’t embarrass myself in front of these leading scholars whose work has inspired me for many years. In the event, I think it went OK. I presented some empirical work on ‘utilitarian’ judgment in moral dilemmas and an extended theoretical discussion on why it may not be utilitarian at all. I received some very helpful and illuminating comments from Fiery Cushman and others, and I was pleasantly surprised after the talk when about 15 people came to me independently to say how much they’d enjoyed my talk. Still, I’m glad it’s done.
In the evening I went sightseeing around Budapest with some of the other students, and managed to take some beautiful pictures while having a great time. The other students here are amazing, and I do feel like I’m forming friendships that I hope will be long-lasting.
Wednesday 25th June (Day 3)
And on the third day, we discussed partner choice. We picked up again from Monday’s lectures, with Bshary and André giving complementary lectures on partner choice. As on Monday, Bshary discussed empirical work on partner choice in non-human animals, while André complemented this by discussing a theoretical approach to partner choice. Both talks were very interesting, and yet I fear that I didn’t understand them as well as I could have done. I really enjoyed listening to their talks, but I am still unsure what I actually think about their arguments as I don’t have a good enough background in evolutionary biology. I’ll just take it that they are right.
Unfortunately, by lunchtime I had developed a killer headache (I blame the complexity of evolutionary biology) and so missed the afternoon talk by Baumard. I was really disappointed to miss it, but I figured that it would be worse to stick through it and continue feeling bad than to try and recover and be fresh for tomorrow. I talked to quite a few people about it, and the consensus was that it was a really exciting and interesting talk. Very sad to have missed it.
A number of other course participants came up to me throughout the day to tell me how much they liked my presentation yesterday. I’m really pleased it went well, and it’s given me quite a confidence boost with regards to presenting my work.
Thursday 26th June (Day 4)
I woke up feeling bright-eyed and refreshed after my extended sleep yesterday. I was particularly excited about this morning, as both Fiery Cushman and Molly Crockett (one of my PhD supervisors at Oxford) were giving lectures. In many ways, their two talks – both on moral learning and moral decision-making – were mutually reinforcing. This is perhaps not entirely surprising, given the odd coincidence whereby they independently published articles around the same time, both arguing for the importance of model-based and model-free learning in explaining behavior in trolley-type moral dilemmas.
Fiery kicked off the day, beginning by talking about reinforcement learning in moral decision making. Roughly, reinforcement learning is an approach from artificial intelligence concerning how agents learn about the world. The general idea is that some outcomes are innately rewarding or punishing, and a learning algorithm assigns values to the different actions you perform based on those rewards and punishments. A separate deciding system then chooses which behavior to perform based on this learning. Within this, you can have both model-based reinforcement learning and model-free reinforcement learning. Molly came back to this in her lecture, and it’s beyond the scope of this blog post to discuss it in detail, but the general utility of these different approaches is that they allow one to distinguish between moral actions and moral outcomes. A moral behaviour can be valuable because of the good outcome (a typical consequentialist approach) and/or because the action itself is valuable (a typical deontological approach). To understand moral decision making, we need to distinguish moral actions and outcomes, and in particular to apply this distinction to the traditional dual-process model of moral cognition (e.g. Greene’s).
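To make the learning/deciding split concrete, here is a minimal toy sketch of model-free value updating in Python. This is my own illustration, not code from the lecture: the action names, reward values, and learning rate are all invented for the example.

```python
# Toy sketch of model-free reinforcement learning: a learning algorithm
# caches values for actions based on reward/punishment, and a separate
# deciding system picks the action with the highest cached value.
# (Illustrative only; actions and rewards are invented.)

def update_value(values, action, reward, alpha=0.1):
    """Nudge the cached value of an action toward the reward just received."""
    values[action] += alpha * (reward - values[action])
    return values

def choose_action(values):
    """A simple deciding system: pick the highest-valued action."""
    return max(values, key=values.get)

values = {"help": 0.0, "harm": 0.0}

# Repeated experience: helping is rewarded, harming is punished.
for _ in range(50):
    update_value(values, "help", reward=+1.0)
    update_value(values, "harm", reward=-1.0)

print(choose_action(values))  # → help
```

The point of the sketch is that the model-free system never reasons about outcomes at decision time; it just consults values cached from past experience, which is one way actions themselves (not outcomes) can come to carry moral value.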
Fiery highlighted an important feature of both deontological (e.g. Kant) and consequentialist (e.g. Bentham) approaches to morality that is often neglected when considering a dual-process approach to morality. People often refer to deontology as being ‘emotion’ driven, while consequentialism is ‘reason’ driven. This is, of course, a gross simplification. On one hand, deontology isn’t solely emotion – there are lots of rules and duties that provide structure: there must be some computational structure, and this involves ‘reason’. On the other hand, utilitarianism isn’t solely cognition, as there clearly is a crucial role for emotion – one doesn’t simply say that 5 lives is better than 1, but rather that 5 lives have greater value. For deontologists, moral value resides in action – morality is in performing the right actions. For consequentialists, moral value resides in outcomes. And yet both emotion and cognition are involved in each.
Fiery spent the rest of the lecture focusing on examples where people show aversion to performing ‘fake’ harmful actions: for example, hitting a hammer hard on a person’s ‘leg’ (actually a PVC pipe). Even though participants were aware that the ‘leg’ was not a real leg, they felt extremely uncomfortable, with some even refusing to do it. This makes little sense if moral outcomes are the only crucial thing in moral decision-making – rather, it suggests we have strong emotional responses to certain moral actions, and this may drive responses in traditional ‘trolley’ problems.
One interesting study that Fiery mentioned in passing was the work of Shenhav and Greene (2010), who used imaging to look for domain-general mechanisms in morality. One interesting feature of their study, which wasn’t actually discussed at length in the paper, was the finding of a kind of diminishing marginal utility for moral value: the first person to die matters a lot, but there is not much difference between 1000 and 1001. This reminded me of the quote attributed to Stalin that “A single death is a tragedy; a million deaths is a statistic”. We do in fact see this every day – one British child dying causes a nation to engage in emotional soul-searching and calls for punishment, while a thousand children dying in Syria barely raises concern at all amongst the general population.
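The diminishing-marginal-utility pattern is easy to see with a toy value function. A logarithmic curve is one standard modeling assumption for this kind of compression – to be clear, this is my illustration, not the model Shenhav and Greene actually fit.

```python
import math

# Toy illustration of diminishing marginal moral value: if the felt value
# of n deaths grows logarithmically (an assumed functional form, not
# Shenhav & Greene's), the first death matters far more than the 1001st.

def felt_value(n_deaths):
    return math.log(1 + n_deaths)

first = felt_value(1) - felt_value(0)        # going from 0 to 1 death
later = felt_value(1001) - felt_value(1000)  # going from 1000 to 1001

print(first, later)  # the first step is hundreds of times larger
```

Under this assumed curve, the marginal impact of the first death is roughly ln(2) ≈ 0.69, while the step from 1000 to 1001 is under 0.001 – a tragedy versus a statistic, in miniature.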
Overall, I really enjoyed Fiery’s talk (of course I would: I love his work) – and it seems that my feelings were shared by most of the other attendees.
After a short coffee and cigarette break, we returned for Molly Crockett’s talk. In many ways, Molly’s talk picked up where Fiery’s left off. In particular, as well as highlighting the roles of the model-free and model-based learning systems, she discussed a separate Pavlovian learning system. As she noted, in neuroscience it is generally accepted that there are three systems for decision making: Pavlovian; Habitual (Model-Free); and Goal-Directed (Model-Based). Each valuation system has appetitive (reward) and avoidance (punishment) valences, and the systems can be distinguished by the way they select actions and the kinds of input they learn from. As Molly noted, the instrumental (model) systems can be primarily differentiated from the Pavlovian system because the instrumental systems can learn to emit any arbitrary action to obtain a desired outcome, while the Pavlovian system emits fixed actions in response to learned stimulus-outcome associations. How might this relate to moral judgments? Molly discussed how, if there is a Pavlovian-type learned association whereby we exhibit avoidance of a negative action (“if it’s a bad thing, don’t do anything”), moral judgments in the trolley problem should be sensitive to whether there is an active or passive frame: for example, “is it permissible to push?” vs. “is it permissible to not push?”. Molly then discussed some research on emotions which found that negative emotions increase the likelihood that people say “no, don’t do it” – toward both the active and passive frames. Molly then went on to discuss how computational pruning in a Pavlovian system might help to explain the doctrine of double effect. Overall, Molly gave a very compelling and interesting talk, and I know that a lot of people came away from it with some new perspectives on the cognitive architecture underlying our moral decision making. I am excited about hearing her second talk on Sunday morning.
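The Pavlovian/instrumental distinction can be caricatured in a few lines of code. Again, this is my own simplified sketch of the contrast, not anything from the lecture; the values and action names are invented.

```python
# Caricature of the Pavlovian vs. instrumental distinction (illustrative
# sketch only). The Pavlovian system emits a fixed response to a stimulus's
# learned value; the instrumental systems can choose any arbitrary action.

def pavlovian_response(stimulus_value):
    """Fixed policy: if the predicted outcome feels bad, suppress action
    ('if it's a bad thing, don't do anything'), regardless of whether
    acting would actually produce the better outcome."""
    return "withhold" if stimulus_value < 0 else "approach"

def instrumental_response(action_values):
    """Flexible policy: emit whichever arbitrary action has the highest
    learned (model-free) or computed (model-based) value."""
    return max(action_values, key=action_values.get)

# Trolley-style case: pushing saves five (high instrumental value) but the
# act itself carries a learned negative stimulus value.
print(pavlovian_response(stimulus_value=-1.0))                  # → withhold
print(instrumental_response({"push": 4.0, "dont_push": -1.0}))  # → push
```

The sketch shows why an active vs. passive framing could matter: a Pavlovian system that simply suppresses action when things feel bad will give different answers to “is it permissible to push?” and “is it permissible to not push?”, even when the outcomes are identical.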
In the afternoon, Nicolas Baumard gave his second lecture, taking a ‘life history’ approach to the variability of moral behavior. Nicolas noted that moral motivation should vary according to the costs and benefits of moral behaviour at each developmental stage. In particular, he asked why, if humans are a moral species, we don’t always act morally – if morality is a biological adaptation, why is it so variable? Nicolas went on to discuss different factors explaining the emergence of moral religions, discussing first the potential role of large polities, and secondly the potential role of affluence. His talk raised a lot of interesting questions, and led to some very lively debate! I’m not entirely sure exactly what I think about his approach – it certainly seemed relatively convincing, but I’m just not sure I have enough knowledge about the topic to come to any reasonable judgments.
Next, we had the student presentations. First, we had Karolina Prochownik, who gave a talk entitled “How do moral religions work? Different modes of religious prosociality”. As the only ‘sole’ philosopher, her talk naturally focused on definitional and conceptual issues regarding the study of moral religions and prosociality. I really liked the way that she mapped out the existing state of the research, and she answered some tricky questions from the audience very competently. Secondly, we had Denis Tatone, who gave a talk entitled “Beyond the triad? Giving — but not taking — actions prime fairness expectations in dyadic interactions for human infants”. This was, again, interesting as it focused primarily on developmental studies with children. As he noted, some studies have suggested that infants spontaneously compute the entitlement of others across different situations and on the basis of a range of relevant factors (e.g. effort, need). However, as he explained in his talk, his data suggest that this conclusion may be premature – such effects may be limited to triadic interactions. Thirdly, we had a talk by Telli Davoodi, who discussed whether children might have a concept of nothingness. In this, she discussed why – or why not – children might struggle to have a concept of nothingness, and proposed an experimental study design to explore this issue further. Finally, we had a talk by Robin Kopecky, entitled “Do philosophers’ moral beliefs differ from layman’s intuition? No”. Robin talked about utilitarian responses in a large sample from the Czech Republic. His talk was very interesting, again, and he found some evidence that there are gender effects in utilitarian judgment, and that philosophers do not differ much from laypeople. I wish Robin had talked a little more about the theoretical implications of his results, as I think this would have been very thought-provoking and could have inspired some good discussion.
That said, he did give the best line I have heard in all the presentations so far, describing a vignette where a fictional person was described as enjoying “[porn] with violent sex, children, animals, and even supernatural beings like Jesus or Unicorns”.
Friday 27th June (Day 5)
Today, the Morality course went on tour. The CEU arranged a wonderful boat trip for us to go to the nearby town of Szentendre. Szentendre is a small riverside town in Pest county, about an hour or so away from Budapest. It was a really beautiful town, with some gorgeous architecture and scenery. You can see a few of the pictures at the bottom of this page.
I managed to have some wonderful chats with people today on the trip. I think one fantastic thing about this summer school has been the opportunity for close informal discussion with experts in the field. As one other attendee described it, it is like being backstage at a rock concert. I had great discussions with Molly Crockett and Paul Bloom about the differences between Oxford and Yale, and another thought-provoking chat with Fiery Cushman sitting in the sun at the end of the boat. It really has been a lovely day, and I’m so grateful I’ve been afforded this wonderful opportunity.
Saturday 28th June (Day 6)
It’s so strange to think that this course is nearly over. It has been such an intellectually exciting – not to mention exhausting – week.
We started off today with a talk by the wonderful Karen Wynn on the Moral Baby. In her talk she drew heavily on some really exciting work conducted by Kiley Hamlin (whom I collaborated with on my Macbeth paper) and Paul Bloom. She suggests that there is an early emergence of moral emotions, motivations, and cognitions – that from early in life we evaluate social and psychological traits, behaviours, and affiliations. These assessments guide our attitudes, actions, and judgments of what others deserve, and in these social judgments we can see the roots of morality. Karen discussed evidence in human infants for the three general requirements for reciprocal altruism to become a stably evolved system in a social group: first, a desire to help others even at immediate cost; secondly, an ability to distinguish cooperators from non-cooperators; and thirdly, a willingness to punish bad guys and reward good guys. I really enjoyed this talk, and am really excited about hearing Part 2 on Monday.
Next, we had the second talk by Keith Jensen, looking at the ‘dark side’ of the primate origins of fairness and punishment. I honestly think that he could have been a comedian in an alternative life – he again provided one of the most amusing academic talks I’ve had the pleasure of attending. In this talk he discussed the cooperation problem – that cooperation poses obvious benefits, but that free-riders also share the benefits without paying the costs. As he discussed, one potential solution is punishment. In discussing punishment, he talked about the good, the bad, and the ugly of fairness and cooperative behavior: the good can be an end in itself, or positive reciprocity; the bad can be negative control or negative reciprocity; and the ugly can be a spiteful end in itself.
Finally, Fiery Cushman gave a talk following on from his previous one. In this talk he discussed whether there might be different mechanisms predicting perceptions of moral wrongness, and mechanisms predicting punishment. In particular, he presented evidence that people may punish others even for moral accidents because these can constitute a ‘teachable moment’: punishment – even for accidental outcomes – helps people learn. His talk was, again, really compelling and made me think a lot. In particular, it brought to the surface a long-standing interest I’ve had in how moral actors affect our judgment: that we might not simply judge moral outcomes and intentions, but also the moral actor. I’m looking forward to talking more with Fiery and others about this in the last few days of this course.