International Convention of Psychological Science 2015

I’ve now arrived in a very sunny Amsterdam for the inaugural International Convention of Psychological Science (ICPS), and as I’ve done before I’m writing a few blog posts about some of the talks I attend. I hope that this can ‘bring the conference’ in some small part to those who cannot attend.

I should begin by saying how grateful I am to the Oxford Martin School Programme on Resource Stewardship, which has funded me for this trip, as well as supporting a lot of my research. Given that I’m presenting work on Saturday morning about sustainable prosocial behavior, I thought it would be a bit incongruent if I didn’t attempt to take the more environmentally-friendly travel method of the train. In fact, not only was this cheaper and more environmentally friendly, but it was also much more relaxing and pleasant. To any Brits reading this: I highly recommend travelling by train to Amsterdam.

Upon my arrival in Amsterdam I was extremely happy: not only had I previously booked a cheaper special-rate room at the Grand Hotel Krasnapolsky (where the conference is being held), I was told upon arrival that I had been upgraded to an apartment. Having checked in to my wonderful canal-side two-storey apartment with a kitchen, living room, and two balconies, I am now ready to start with the psychological science. Let the inaugural International Convention of Psychological Science begin.

Click below to jump to a specific symposium:

Exploring humility: theory, measurement, and development

Nice and right: clarifying the relations between empathy and moral judgment

The processes and consequences of trustworthiness detection

Religious dimensions and morality: perspectives on a multifaceted relationship

To see and be seen: a fresh look at prosociality

Tribal minds: the evolution and psychology of coalitional aggression


Exploring humility: theory, measurement, and development

Chair: Jennifer L. Wright

Presenters/Authors: Thomas Nadelhoffer, Jennifer L. Wright, Walter Sinnott-Armstrong

Time / Date: Thursday, 12 March 2015 12:00 – 13:20

The first symposium I attended was entitled “Some Varieties of Humility Worth Wanting”, and it was an exciting promise of things to come.

This symposium had three talks, each following on from the last, all focusing on the concept of humility, first from a theoretical and then from an empirical perspective.

The first talk opened by discussing theoretical perspectives on humility throughout the ages, and I learnt a lot of interesting new things. As the speaker noted, the concept of humility can be problematic. It is epistemically problematic (i.e. some people do have great achievements, and so humility may not be an accurate view of oneself); morally problematic (how can a moral virtue require one to have a low regard for oneself?); and psychologically problematic (work in positive psychology clashes with humility).

Indeed, humility has a tangled history, from what they term the low-mindedness view (Old and New Testament; Middle Ages), to critics of humility in modern philosophy, to those trying to rescue it in contemporary philosophy (the underestimation view and the non-overestimation view).

Humility stems etymologically from humus (earth; soil), and so the original idea was that humble people are down to earth. In the Old and New Testament, humility means lowering oneself before God and recognizing that God is important. There is a dark side to religious humility: the idea of learning to obey, to lower oneself, to break one’s will (Kempis, 1587, Of the Imitation of Christ). The authors identify this as the low-mindedness view: humility involves self-abasing and self-effacing attitudes, beliefs, and behaviours, and the humble person (with supposed epistemic accuracy) views herself as vile, corrupt, and contemptible.

Reacting against this low-mindedness view, modern philosophers rejected the idea that humility is a virtue. Spinoza, for example, held that “humility is not a virtue”: it is unhealthy self-abasement. Hume described humility as a “monkish virtue” that “serves no manner of purpose” and should be rejected by “men of sense”. This traces through to Nietzsche and his writings on the slave revolt in morality: the noble man is keenly aware of his excellence. Sidgwick found humility “paradoxical” and “irrational”, picking up particularly on the epistemic problems with humility. Most recently, this line finds its expression in the work of Hare (1996).

After this, the authors considered whether we can rehabilitate humility. Contemporary philosophers (e.g. Driver) have presented an “underestimation view”: the modest person underestimates her self-worth to some limited degree, remaining ignorant of it to some extent. Here, humility is a virtue of ignorance. We are not necessarily filth, but the problem is that this view still involves some epistemic inaccuracy.

On the other hand, a second contemporary view (which the authors find more convincing) is that of non-overestimation. Here, humility does not require us to see ourselves as vile or corrupt, nor to underestimate our value or worth (it is therefore epistemically accurate), but rather to keep these things in perspective. Humility is directed away from oneself: it is a virtue that arises from recognition of, and attunement to, a reality beyond oneself. For example, “awareness of nature typically has and should have a humbling effect” (Hill, 1983). The key to the non-overestimation view is having proper perspective: the requirement to be humble is merely to avoid thinking too highly of ourselves and to decenter from the self. Humility involves low self-focus and high other-focus. As C.S. Lewis said, humility is not thinking less of yourself, but thinking of yourself less.

After this excellent beginning, the second talk described three empirical projects: a first project investigating people’s concept of humility; a second project looking at the underlying construct and scale development; and a third project looking at how the construct relates to others.

In their first project, with both an adult and a middle-school sample, they asked people what someone who possesses the virtue of humility is like. They found a shared idea that humble people are calm and quiet; honest; trustworthy; appreciative; polite; and not materialistic. Interestingly, the younger samples in particular thought that humility involves embarrassment, but adults did not, suggesting a developmental role for the low-mindedness view.

In their second project they looked at the development of a humility construct (scale). They began by noting a difficulty here: measuring humility is a bit of a contradiction, since to the degree that a key component of humility is forgetting oneself, self-reporting humility may be oxymoronic. Nonetheless, even granting self-report, there are problems with existing scales (e.g. Peterson & Seligman, 2004; Lee & Ashton, 2004) because they rely directly on self-report and conflate humility with other constructs. To address these concerns, they began a process of scale development, starting with a pool of 210 items relating to humility and related categories. For the exploratory factor analyses (EFA) they used standard statistical procedures and cutoffs. In Round 1 (N = 610), they went from 210 items to 10 factors with 52 items. In Round 2 (N = 447) they tested 95 items, finding a core concept involving religious humility, cosmic humility, and so on, and then split these into three scales: humility, modesty, and open-mindedness. Round 3 used 76 items and 5 factors. The confirmed final scale has 25 questions with 5 factors. Interestingly, they find a general upward trend in humility across developmental age.
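
For readers less familiar with this kind of iterative item reduction, here is a minimal sketch of what a single EFA round might look like in Python (using the factor_analyzer package). The loading cutoffs are common conventions of the kind the speakers alluded to, not necessarily the ones they actually used:

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

def efa_round(items: pd.DataFrame, n_factors: int,
              min_loading: float = 0.40, max_cross: float = 0.30):
    """Fit an EFA and keep items that load cleanly on a single factor.

    Cutoffs are common conventions, not necessarily those the authors used.
    """
    fa = FactorAnalyzer(n_factors=n_factors, rotation="oblimin")
    fa.fit(items)
    loadings = pd.DataFrame(fa.loadings_, index=items.columns)

    keep = []
    for item, row in loadings.abs().iterrows():
        primary = row.max()
        cross = row.drop(row.idxmax()).max()
        # Retain items with one strong primary loading and no strong cross-loading
        if primary >= min_loading and cross <= max_cross:
            keep.append(item)
    return items[keep], loadings

# e.g. Round 1: responses of N = 610 people to 210 candidate items,
# extracting 10 factors:
# reduced_items, loadings = efa_round(responses, n_factors=10)
```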

In their third project they looked at how humility related to other constructs. In both samples, humility was positively related to a number of constructs, as expected: civic responsibility; empathy; human egalitarianism; psychological well-being; moral foundations; and an agreeable and open personality. Humility was negatively correlated with economic greed, self-judgment, and isolation. In adults, humility was positively related to moral identity, forgiveness, mindfulness, and so on. Again in adults, it was negatively correlated with psychopathy, sadism, depression, system justification, JWB, and a host of other measures. Humility remained unrelated to self-esteem, self-compassion, NFC, quest religious orientation, and SDO. I found it particularly interesting that humility was associated with aid given to an outgroup organization, and I hope to consider this in more depth in my work on intergroup prosocial behavior.

The third talk presented a textual analysis, which looked very interesting, but unfortunately I had to leave before the end, so I cannot comment on it in much detail.

Overall, this was an excellent introduction to the conference, and I look forward to following their work as it is published.


Nice and right: clarifying the relations between empathy and moral judgment 

Chair: Paul Conway

Presenters: Indrajeet Patil, Paul Conway, Michela Sarlo, Giorgia Silani

Date/Time: Friday, 13 March 2015 8:30 – 9:50

An early morning start to listen to a series of excellent talks on the relationship between empathy and moral judgment. We started, as moral psychology discussions tend to, with a brief chat about the so-called ‘Trolley Dilemmas’. Conway asked us to indicate whether we would flip the switch in the ‘Switch’ case, and consistent with formal studies, most people who answered said they would. I was one of the few who said they wouldn’t. I’m always the outlier: I promise I don’t do it on purpose.

In the first talk, Indrajeet Patil (whom I know from Twitter, and whom it was a pleasure to meet in person!) talked about his fMRI work looking at the differential roles of empathy in assignments of moral wrongness and moral blame. Drawing on Cushman et al.’s dual-process model, Patil and colleagues presented 20 participants with 36 unique stories that varied both beliefs and outcomes. Interestingly, they found that moral decision making recruited empathy-related regions more during blame assignment than during wrongness evaluation in the presence of harmful outcomes. As I understand it, this suggests that empathy plays a stronger role in judging how much a person should be blamed or punished than in judging whether the action itself was right or wrong. Patil concluded, therefore, that empathy plays a key role in the known outsized effect of moral luck on blame and punishment.

In the second talk, Paul Conway looked at why empathy for others does in fact matter.

Conway began by noting a conceptual problem with the use of sacrificial dilemmas: we ask across an array of dilemmas whether it is acceptable to cause harm to maximize welfare (with utilitarianism and deontology at opposite ends), but the dominant theory of Greene et al. suggests that there are in fact two independent processes of ‘reason’ and ‘emotion’. It is therefore problematic, Conway argued, that we measure judgments in these dilemmas on a single bipolar scale, where higher scores on one pole (e.g. deontology-consistent judgments) necessarily mean lower scores on the other (e.g. consequentialism-consistent judgments). In an attempt to resolve this, Conway and colleagues have utilized the process dissociation approach. Using this approach, Conway and Gawronski (2013) showed that a computed deontology parameter is more associated with the affective reaction to harm and empathic concern, while a computed utilitarian parameter is more associated with need for cognition, cognitive evaluation of outcomes, and so on. Importantly, moral identity was positively correlated with both parameters, such that the correlations cancel out on the standard bipolar measure. In the work reported today, Conway and colleagues attempted to replicate and extend the work of Miller, Hannikainen, and Cushman (2014) on action aversion and outcome aversion, by looking at relative dilemma judgments and process dissociation parameters.
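
To make the process-dissociation logic concrete, here is my own reconstruction (not the authors’ code) of how the two parameters are computed in Conway and Gawronski (2013). The inputs are the proportions of ‘harm is unacceptable’ responses to congruent dilemmas (where harm does not maximize outcomes) and incongruent dilemmas (where it does):

```python
def process_dissociation(p_unacc_congruent: float,
                         p_unacc_incongruent: float) -> tuple[float, float]:
    """Compute utilitarian (U) and deontological (D) parameters.

    My reconstruction of Conway & Gawronski (2013):
      p(unacceptable | congruent)   = U + (1 - U) * D
      p(unacceptable | incongruent) = (1 - U) * D
    which rearranges to the two lines below.
    """
    u = p_unacc_congruent - p_unacc_incongruent
    d = p_unacc_incongruent / (1 - u)  # undefined when u == 1
    return u, d

# Example: someone who rejects harm in 90% of congruent dilemmas
# but only 30% of incongruent ones:
u, d = process_dissociation(0.90, 0.30)  # u = 0.60, d = 0.75
```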

In Study 1 (N = 262), Conway et al. replicated Miller et al.’s work. They found that the utilitarian parameter was negatively associated with action aversion, while the deontology parameter was positively associated with action aversion. However, both parameters were associated with outcome aversion (empathic concern). Interestingly, looking at the control items (items involving personal inconvenience), they found that the importance attached to personal convenience was negatively associated with the deontology parameter and uncorrelated with the utilitarian parameter. The control items were, however, still significantly associated with ‘utilitarian’ judgments when measured in the standard way. In Study 2 (N = 288), they found again that outcome aversion was associated with both the utilitarian and deontological parameters, while action aversion was negatively associated with the utilitarian parameter. Again, they found the same pattern with control items. By adding individual-difference measures, however, Conway and colleagues were able to show that people scoring higher on psychopathy experienced less outcome aversion (and no difference in action aversion). Similarly, empathic concern was uncorrelated with action aversion, but positively associated with outcome aversion. Conway’s talk was very interesting, and highlights that process dissociation reveals hidden relationships with deontology-consistent and consequentialism-consistent judgments. Put simply, action aversion positively predicted the deontology parameter and negatively predicted the utilitarian parameter, while outcome aversion positively predicted both.

In the third and fourth talks, Sarlo and Silani respectively talked about the modulation of affective empathy to responses in moral dilemmas, and brain activity and prosocial behavior in a simulated life-threatening situation. Both talks were interesting, and served to follow on well from the excellent first two talks by Conway and Patil.


The processes and consequences of trustworthiness detection


Chair: Jean-François Bonnefon

Presenters: Anthony M. Evans, Jean-François Bonnefon, Nicholas Rule, Rik van den Brule

Date/Time: Friday, 13 March 2015 12:30 – 13:50

The lunchtime session I attended was on the processes and consequences of trust detection. First up was J-F Bonnefon, who began by noting a central problem with trust detection: trusting no-one will not get you far, but if you trust the wrong person, the consequences are even worse. How, then (if at all), do we detect trustworthiness? Bonnefon presented at least six studies in the 20-minute talk, and unfortunately I wasn’t able to keep up and make notes as detailed as I would like. Apologies, therefore, if the specific details are lacking somewhat in my coverage. Nevertheless, you can take it from me that these were a really fascinating set of studies.

In Study 1, Bonnefon and colleagues recruited a set of 60 trustees (Player 2) in a Trust Game (TG), recording a movie of each person while they were playing the TG. A research assistant then went through and, for each person, chose a single movie frame in which the actor had a neutral face, so that the researchers had both an image of a face and information on exactly how trustworthy that person was in the TG. (Bonnefon originally said that the research assistant was blind, which confused me for a good few minutes as to how a blind person could identify the best still frame from a movie, until it was clear he meant blind to the hypotheses and design, not blind as in visually impaired. I felt rather silly.) For the central part of the experiment, Bonnefon had 208 investors (Player 1) play 60 TGs, and for each TG they saw a cropped (chin to eyebrows) black-and-white image of the trustee for 5 seconds. Would participants detect trustworthiness from faces? I was very excited to see that indeed, investors trusted more those people who were in fact more trustworthy (as measured by the trustees’ previous behavior in the TG). This held when controlling for education and other factors. That is, participants actually seemed to detect trustworthiness just from briefly seeing a small black-and-white image of a face.
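
For anyone unfamiliar with the Trust Game, here is a minimal sketch of its payoff structure. I’m assuming the standard parameterization in which the investor’s transfer is tripled in transit; I didn’t note the exact endowment and multiplier Bonnefon used:

```python
def trust_game(endowment: float, sent: float, returned: float,
               multiplier: float = 3.0) -> tuple[float, float]:
    """Payoffs for the investor (Player 1) and trustee (Player 2).

    The investor sends some portion of their endowment, which is
    multiplied in transit; the trustee then returns some portion.
    How much a trustee returns indexes their trustworthiness.
    (The multiplier of 3 is the common convention, not necessarily
    the one used in these studies.)
    """
    assert 0 <= sent <= endowment
    received = sent * multiplier
    assert 0 <= returned <= received
    return endowment - sent + returned, received - returned

# An investor who sends everything to a trustee who returns half:
print(trust_game(endowment=10, sent=10, returned=15))  # (15.0, 15.0)
```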

In Study 2, Bonnefon and colleagues looked at whether this process was effortful or not, and found that it was not: you don’t need to be smart (Study 1), and you don’t need to consciously think and reason (Study 2), to detect trustworthiness from faces. Thus far, you’ll remember, they had been using the cropped black-and-white images. What if they used the full colour images? It seems plausible that with more pictorial information, people would be even more accurate. In fact, this was not what Bonnefon found: using the full colour pictures in Study 3, there was no significant effect. Why would this be so? In Study 4, they attempted to explore this by showing the pictures to a different set of people and asking for explicit ratings of trustworthiness, which they correlated with decisions in the game. With the full colour pictures, these explicit ratings shared a great deal of variance (around 80%) with the transfers, but only around 18% with the cropped pictures. Bonnefon suggested that the results of Studies 1-3 can therefore be explained because when participants see the cropped picture, they don’t consciously think about it as much and just go with their gut feeling – and get it right. In contrast, when given the full picture, people try to reason based on other features, and get it wrong. In Study 5, Bonnefon followed on from this by asking whether people have any insight into this: do people prefer to play when seeing the full image, or the cropped black-and-white image? In fact, people preferred to use the full pictures, even though they were less accurate with them. Overall, this was a fascinating talk, and I’m very glad I was able to attend.

In the second talk, John Paul Wilson followed on from Bonnefon by looking at overgeneralization effects in perceptions of trustworthiness. Wilson began by noting that there is evidence for strong consensus in perceptions of trustworthiness from real photos and computer-generated images. Such agreement occurs even at very low exposure times (50 ms). But does consensus always translate to accuracy? Wilson suggested not. For example, Rule et al. (2013) found that people did not rate war heroes’ faces as any different in trustworthiness from war criminals’. Similarly, people did not rate the faces of cheaters in a lab task as any different (results seemingly inconsistent with Bonnefon’s previous talk).

In this talk, Wilson presented research that considered whether perceived trustworthiness generalizes to games beyond the Trust Game (TG): for example, the Ultimatum Game (UG). Rejections in the UG are related to punishment, not trust. They predicted that if people inappropriately overgeneralize trustworthiness perceptions, recipients should reject unfair offers in the UG from people perceived as untrustworthy. In their study, they therefore showed people, across multiple trials, a picture of a person alongside that person’s offer of a 10-cent pie in the UG (varying from a fair 50/50 split to an unfair 90/10 split). Participants were then asked to accept or reject the offer. They found a main effect of trustworthiness (people accepted more from people perceived as trustworthy), and a level-of-offer by trustworthiness interaction such that the untrustworthy were punished, and this was most pronounced at ambiguously unfair offer levels (e.g. a 60/40 split).
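
As a quick reminder of why UG rejections index punishment rather than trust, here is a minimal sketch of the standard UG payoffs (the 10-cent pie is from the talk; the rest is just the textbook game):

```python
def ultimatum_game(pie: float, offer: float, accept: bool) -> tuple[float, float]:
    """Payoffs for the proposer and responder in a one-shot Ultimatum Game.

    If the responder rejects, both get nothing: rejection sacrifices the
    responder's own share to punish the proposer, so it reflects
    punishment rather than (dis)trust.
    """
    if accept:
        return pie - offer, offer
    return 0.0, 0.0

# An ambiguously unfair 60/40 split of 10 cents:
print(ultimatum_game(pie=10, offer=4, accept=True))   # (6, 4)
print(ultimatum_game(pie=10, offer=4, accept=False))  # (0.0, 0.0): costly punishment
```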

Wilson concluded by suggesting that people inappropriately apply facial trustworthiness in economic decision making, despite a lack of bias in fairness perceptions. Further, the consequences of perceived trustworthiness extend beyond trust in economic games.

To me, this made sense. It seems like the signalling value of trust is primarily used to determine whether someone is a ‘good’ person, and therefore takes place in an overall person-centred evaluation, rather than being a discrete and distinct rating. Again, this was really interesting and I am hoping to incorporate some of this into my own work on signalling and trust.

Religious dimensions and morality: perspectives on a multifaceted relationship

Chair: Kristin Laurin

Presenters: Adam B. Cohen, Matthias Forstmann, Kristin Laurin, Azim F. Shariff

Date/Time: Friday, 13 March 2015 17:00 – 18:20

Adam Cohen kicked off this symposium with a talk showing that costly signaling increases trust, even across religious affiliations. Cohen noted that religion often involves costly and hard-to-fake signals of commitment, and on this basis asked whether the presence of ingroup religious signals increases trust, and whether the presence of outgroup costly signals decreases it.

In the first study conducted by Cohen and colleagues, they used Muslims as an outgroup. Using Arizona State students in a between-subjects design, they manipulated whether a fictional target (in a social-media-style bio) was 1) Christian or Muslim, 2) a believer in a God that punishes or a God that forgives, and 3) a regular donor to charity or not. They found no effect of target religion on trustworthiness, and no effect of a forgiving or punishing God on trust. That is, participants did not trust a Muslim or a Christian differentially, and did not trust someone who believed in a punishing God differently from someone who believed in a forgiving God. However, there was a significant main effect of costly signalling, such that those who donated to charity were seen as more trustworthy. Furthermore, there was no interaction with religious affiliation: a Christian who donates to charity is trusted no more than a Muslim who donates to charity. In Study 2 they explored this again with a modification: perhaps participants simply hadn’t noticed that the person was Muslim. This time, they used an image of a woman wearing a hijab, and found the same pattern as in the first study. But perhaps the results so far arose because the costly signal used was charitable giving, which is so strongly connoted with prosociality that it crowded out any other effects?

In Study 3, they looked at dietary food restrictions. This time they decided not to use photos (in case nice- and trustworthy-looking faces again swamped any effects). In a between-subjects design, participants were told about a person – Sam, or Samir – who goes to a dinner event at a steakhouse, where the food is either non-Halal or it is during Lent. Participants were then told that Sam / Samir either ate the food (contravening dietary restrictions) or did not eat the food (a costly signal), or they were given no information. Interestingly, this conceptually replicated the earlier findings: engaging in costly signalling increased trust for both ingroup and outgroup targets. Finally, Cohen presented a final study using the conjunction fallacy (Linda is a bank teller; Linda is a feminist bank teller), which has been used to good effect in work by Gervais on anti-atheist prejudice. Results from Study 4 again replicated those found earlier: Christians and Muslims were trusted equally.

Cohen concluded by suggesting that belief (in God) may not be the primary driver of trust; rather, people care more about behaviours. He ended on the optimistic note that to be trusted by Christians, Muslims don’t need to abandon their own religious practices – in fact, adherence to these practices can increase trust.

Next, Matthias Forstmann gave a talk about how priming religion attenuates the experience of cognitive dissonance. As he began by noting, we encounter frequent reminders of religion: some conscious, some unconscious. Furthermore, there are multiple effects of religious priming: some good (e.g. honesty, cooperation); some bad (prejudice, submissive thoughts); and some neutral (agency, error response, task persistence). In this project, they were interested in certainty. Some evidence suggests that priming Christian religious concepts makes people prefer non-ambiguity and judgmental certainty (Forstmann & Sagioglou, 2013). One way in which ambiguity intolerance manifests is in social judgments.

Given this, they asked whether religious primes might attenuate (i.e. weaken) the experience of cognitive dissonance. Across four studies, using different types of primes (semantic, pictorial), he showed that this was the case: priming religious concepts did attenuate the experience of cognitive dissonance. I found the link between this and moral judgments somewhat confusing (which is why I don’t discuss it here), but it was nonetheless a very interesting talk and went well with Azim Shariff’s talk.

Third in line was Kristin Laurin. I think this was perhaps one of my favorite talks thus far – and that is saying something, as the quality has been so high. This is mostly due to the content, but the slide design being extremely visually appealing didn’t hurt either (I am partial to nice slides…).

Laurin began by noting that we know that less intentional acts lead to less harsh judgments, but that there is variability in how harshly we judge unintended actions. Laurin asked whether we can explain some of this variability with religion. Religion can promote both forgiveness and harshness. Crucial to this talk was the conceptual distinction between orthopraxy (following religious traditions and practices) and orthodoxy (believing in God and other religious tenets; Cohen, Siegel, & Rozin, 2003). Laurin argued that to the extent that one is higher in religious orthopraxy, one should be harsher towards unintentional wrongdoing, because the focus is on the behavior, or the consequence. On the other hand, to the extent that one is high in religious orthodoxy, the focus on internal states should lead one to be more lenient towards unintentional wrongdoers.

To explore this, Laurin conducted a number of studies that are now in press at SPPS, I believe. In Study 1 (N = 289), Laurin used an MTurk sample and looked at individual differences in orthopraxy and orthodoxy, and whether these were associated with harshness of judgments in either an intentional case (where John intends to drive over and kill his uncle) or an unintentional case (where John kills his uncle without intending to). Supporting predictions, she found that individuals higher in orthopraxy were harsher in their judgments of the unintentional actor (with no difference for intentional actions). Next, in Study 2, they manipulated a focus on thoughts vs. behavior by having participants think about God watching their movements and actions as if on a TV screen (behavior: orthopraxy-like), or God listening to the contents of their minds and their thoughts like a radio (beliefs: orthodoxy-like). Using a vignette of a woman who either intentionally or unintentionally poisoned her husband, they found that people in the action-focused condition (orthopraxy) were harsher than those in the thought-focused condition (orthodoxy). So far, very interesting.

In Study 3, they looked at a group context, noting that in addition to individual differences in orthodoxy and orthopraxy, there are also cultural differences. They compared a sample of Hindus (generally higher in orthopraxy) with Protestants (generally higher in orthodoxy) using another vignette. They found that, as predicted, both groups were just as harsh for an intentional action, but Hindus were harsher for unintentional outcomes. Furthermore, looking at the individual differences in orthopraxy and orthodoxy, they found that orthopraxy fully mediated the effect of religious group on harshness of moral judgment. In Study 4, they looked at whether the same pattern would be observed for good actions as well as bad, and in another MTurk sample found higher praise for unintentional good actions among those higher in orthopraxy.

This was a fascinating talk, and it connects to the interesting work being conducted on the roles of actions and intentions in explaining moral judgment (by people like Fiery Cushman, for example), while adding a new religious dimension.

Last, but certainly not least, was Azim Shariff. It was great to finally see (and, after the talk, meet in person) Azim, because we’ve been working together for years and have a paper on free will together (currently under review) – despite never having met in person!

Shariff considered the question of whether God makes you good. As he began by noting, the relationship between self-reported prosocial behavior and religious belief is stronger than the relationship between behaviorally observed prosocial behavior and religious belief. The self-report problem arises, in particular, because not only do people ‘fudge’ (inflate) their own prosocial behavior, they also fudge how much they attend church. Indeed, Azim noted that if everyone in the US who said they went to church on Sunday actually did so, there would not be enough space in all of the churches in the USA – of which there are more than a few.

Should we turn to behavioral observations, then? A problem remains here, though, in that behavioral tasks might not capture the real features of the religious-prosociality link. That is: moral behavior might be a product of the religious context, rather than of a religious disposition. Azim noted a number of studies supporting this (you can also see his recently published meta-analysis). One study (unpublished; I cannot remember the author’s name, unfortunately) that particularly stood out was a field experiment in Morocco looking at prosocial behavior in a charity dictator game with shopkeepers. The fascinating result was that when the shopkeepers were given the dictator game while the Muslim call to prayer was sounding, 100% donated all of the money to charity. That is, when the shopkeepers were asked whether they wanted to donate or keep a pool of money, all of them donated – as long as the call to prayer was sounding. Furthermore, this effect was fleeting: just 20 minutes after the call to prayer had finished, rates of prosocial behavior fell back to normal (around 50%).

Shariff ended with a wonderful analogy: God makes you good in the same way that food makes you sated. It’s an ephemeral effect that doesn’t last, but that doesn’t mean the effect isn’t there.

Overall, an excellent end to an excellent day. I’m exhausted.

To see and be seen: a fresh look at prosociality

Chair: Niels J. van Doesum

Presenters: Jeff Joireman, Niels J. van Doesum, Jim A.C. Everett, Zoi Manesi

Date/Time: Saturday, 14 March 2015 08:30 – 09:50

An early start again – unfortunately, a little too early on a Saturday for quite a few people. I do wish that conference organizers would stop scheduling talks so early in the morning, as turn-out for these talks is inevitably lower than for those later in the day.

Jeff Joireman opened the symposium with an excellent talk on the benefits and challenges of connecting donors and recipients via peer-to-peer (P2P) charities. Peer-to-peer charities allow a donor to donate to a specific recipient or project, and are a growing trend in charitable giving. Drawing on a self-determination theoretical model, Joireman and colleagues predicted that people would prefer to donate via P2P charities because they give a greater sense of 1) control, 2) connection, and 3) impact. To what extent would these perceptions influence donations to P2P charities? In Study 1a, Joireman used a between-subjects design and found that people who rated how much control, connection, and impact they perceived traditional vs. P2P charities to have subsequently donated more to P2P charities – but only if the mediators (control, connection, impact) came before the dependent variable of donating. If participants indicated how much they wanted to donate first, and then rated control, connection, and impact, there was no effect. There are a number of explanations for this, but in the interests of time this wasn’t discussed in much detail. In Study 1b, Joireman used a within-subjects design, and here found a strong preference for P2P charities over traditional charities. People like P2P charities more, and this seems driven by greater perceptions of control, connection, and impact. So far, so good. But what happens when the link between the donor and recipient is severed?

In Study 2, Joireman used a redirection-of-donations scenario. He asked participants (N = 145; undergraduate students) to imagine they had donated to a project that would build a well for a poor village in rural India. For the experimental manipulation, in a between-subjects design, participants were told to imagine either that the money they had donated had been redirected to a new village and a new project, or that it had been used for its original purpose. The results showed that intentions to donate were significantly lower in the redirected condition, as were attitudes towards the charity. Redirecting donations, it seems, is perceived negatively by donors. But perhaps these results could be explained by the fact that the redirection involved both a change of location and a change of project: perhaps the location didn’t matter, but the project type did? To address this, in Study 3 Joireman tested whether the use of a different village and a different project was a confound by manipulating change of location and change of project independently, in a between-subjects design (N = 161; undergraduate students). In fact, they showed that any change was perceived negatively, such that any deviation from the plan – whether of project, location, or both – reduced intentions to donate.

Why might this negativity towards redirection be occurring? In Study 4, Joireman expanded the theoretical model to consider why it is that people respond negatively to the redirection of funds. They used the business concept of consumers’ felt ‘service failures’, where the consumer typically feels injustice and unfairness. Using a Qualtrics panel of US consumers (N = 163), they found that charities that redirected donations did indeed receive significantly higher ratings of negative emotions and attitudes. Is it possible to intervene to stop this negativity if a charity does need to redirect? To test this, in Study 5 Joireman looked at whether interventions in the form of apologies and compensation could buffer the negative effects of redirection. In fact, Joireman found no effect: people still disliked the charity, even when apologies and compensation were offered.

Joireman concluded his very interesting talk by suggesting that P2P charities are a double-edged sword: donors like P2P charities in part because of the sense of connection and control, but any change is perceived very negatively. This work is really interesting, and I plan on sharing his findings with friends of mine who work on ‘effective altruism’ and the best ways to donate to charity. Clearly, Joireman’s work on P2P charity is not just of theoretical interest, but has real, potentially life-saving, practical implications.

As an aside, I’d also like to say that yesterday afternoon I was very grateful to Paul van Lange, who gave me a copy of his new book with Jeff Joireman – ‘How to Publish High-Quality Research’. I’ve already started reading it and it really is excellent. I will be writing a blog post review when I’ve finished with the conference reporting, but I can already give you the spoiler that it will be a very positive review. You can purchase the book here.

Next up was Niels van Doesum, who talked about some work he’s been doing with Paul van Lange (who also helped conduct the work in the fourth talk, by Zoi Manesi). Van Doesum talked about how social mindfulness and the presence of others can elevate prosociality. Social mindfulness involves a social mind that is open to the needs and wishes of others in the present moment. Social mindfulness requires that people first see what others want, and then act accordingly. It can be a strategy, but it can also be the result of a more general prosocial orientation towards the world. Further, social mindfulness does not have to be costly: a small gesture may be enough to get benevolent intentions across. The common example here is that of participants being given the option to take some small item (e.g. a pen). There are three pens: two the same (e.g. blue pens) and one unique (e.g. a black pen). If the participant chooses the black pen, then the next person to choose really has no choice at all, because the remaining pens are the same. If, however, the participant chooses a blue pen, then the next person has a real choice between two different pens. In the work van Doesum presented, he showed how social mindfulness (e.g. taking the blue pen) can inspire trust and warmth in others.

In this presentation, van Doesum presented research exploring whether people would be more socially mindful of a real person than of an abstract one. In a between-subjects design, he had (female) confederates join a participant going down in an elevator in a university building, and manipulated whether the confederate greeted the participant or not. As the participant left the elevator at the bottom, a different confederate asked whether they would be willing to complete a short survey. Using the pen example described above, the key dependent measure was whether participants would take the unique pen (not socially mindful) or one of the non-unique pens (socially mindful). In the second between-subjects factor, participants were either told that they would pick a pen and then the confederate from the elevator would pick one, or that they would pick a pen and then an unidentified, abstract person would choose later in the day. They found no effect of whether the confederate greeted the participant or not, and found that even in the abstract conditions participants were socially mindful, often taking the non-unique pen. The most interesting result, however, was that in the ‘real’ condition (where the participant would choose first, and the confederate from the elevator would choose immediately after), participants were significantly more socially mindful.

Van Doesum concluded by arguing that the actual presence of others does matter, seemingly because social mindfulness is about the here and now. After his presentation, we had an interesting discussion with the audience about the identifiable victim effect and how it might apply here, suggesting that participants would be more mindful towards an identifiable (but not present) other person than towards a non-identified and not present person.

Third was your very own Jim A.C. Everett (i.e. me). I presented a talk on default effects in altruistic contexts. This talk was based on a paper published in the European Journal of Social Psychology, and you can access the paper (open-access) here. I began by noting a phenomenon that has been observed in a growing body of research on consumer choices: the default bias, or default effects. Default effects refer to the well-documented tendency for people to prefer the default option in a given choice situation, and they have been shown to influence behavior in domains such as car choices, insurance policies, retirement plans, energy use, organ donation, and many more. The driving idea behind this paper was to ask whether such effects would also be observed in immediate altruistic contexts, where the behavior doesn’t involve a benefit for the person themselves (or the salesman), but rather is for the common good. This was our first aim. Our second concerned the psychological mechanisms that underlie default effects, where we hypothesized that default effects can be explained through reference to social norms. To explore this, we asked three questions. First, when something is a default, do people believe that this is the option that most other people would choose and approve of? Second, do these perceptions of social norms mediate default effects? And third, if the default option is perceived to be indicative of social norms, does awareness of this norm in the default option transfer to different situations involving similar norms? We tested this in four studies, all providing converging and strong evidence for the explanatory role of social norms in producing default effects.

In Study 1, 173 American participants took part in a between-subjects design with two conditions: “charity default” vs. “charity non-default”. Participants first completed an unrelated filler task before being presented with information about payment. Participants were told that, on top of the $0.50 payment they received for taking part in the study, they would receive an additional $0.50 bonus that they could either keep for themselves or donate to charity. The experimental manipulation was whether this money would by default be paid to them (charity non-default), or by default be donated to a charity helping people in the developing world gain access to clean water (charity default). After making their decision, participants were asked two questions that measured their perceptions of injunctive and descriptive social norms. As predicted, more participants donated to charity when this was the default option; participants perceived stronger social norms to donate to charity when this was the default; and these perceptions fully mediated the effect of default condition on charitable donations.

We then asked, however, whether these same effects would be observed for a higher-threshold charity: one towards which most people don’t hold such strong attitudes. Study 2 tested this with exactly the same design and measures, simply changing the charity from an anti-poverty one to Greenpeace. As before, our hypotheses were supported, with default effects found in an altruistic context when the default option was to donate the bonus money to charity. Participants perceived stronger descriptive (but not injunctive) norms to donate to Greenpeace when this was the default, and perceptions of descriptive norms marginally mediated the effect of default condition on charitable donations.

In Study 3, we refined our method, noting that in many real-life default options the default is not merely stated but also pre-selected, such that if participants do nothing they receive that default option. Therefore, in our third study we used the same basic experimental design, but changed the response measure so that the default option was preselected with a checked box, and participants could choose to de-select this box and select the alternative, or simply do nothing and stick with the default. As predicted, there was a significant effect of default condition, whereby a staggering 81% of participants donated the money to the anti-poverty charity when this was the default option, but only 19% did so when it was not the default. Furthermore, participants perceived stronger descriptive and injunctive social norms to donate to charity when this was the default, and both social norms significantly mediated the effect of default condition. Default effects clearly, then, play an important role in altruistic contexts.
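
For readers curious what such a mediation test looks like in practice, here is a rough sketch of a bootstrapped indirect-effect test of the general kind reported above. To be clear, this is not our actual analysis script, and the variable names are hypothetical:

```python
import numpy as np
import statsmodels.api as sm

def indirect_effect(condition, norms, donated):
    """a*b indirect effect: condition -> norms (path a), norms -> donation (path b).

    Note: for a binary donation outcome this is a linear-probability
    shortcut; a logistic model would be more appropriate in practice.
    """
    a = sm.OLS(norms, sm.add_constant(condition)).fit().params[1]
    X = sm.add_constant(np.column_stack([condition, norms]))
    b = sm.OLS(donated, X).fit().params[2]
    return a * b

def bootstrap_ci(condition, norms, donated, n_boot=5000, seed=0):
    """Percentile bootstrap confidence interval for the indirect effect."""
    rng = np.random.default_rng(seed)
    n = len(condition)
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)  # resample participants with replacement
        estimates.append(indirect_effect(condition[idx], norms[idx], donated[idx]))
    return np.percentile(estimates, [2.5, 97.5])

# The mediation claim is supported if the 95% CI excludes zero.
```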

In our final study we decided to probe this further, reasoning that if a default is perceived to represent a normative action, this effect is likely to transfer to other situations in which that norm is salient. Therefore, in Study 4 we explored whether perceived social norms would transfer from an initial default policy context to actual donations in a related altruistic context. In a between-subjects design, participants (N = 136) were presented with a description of an optional (and fictional) 5% charity tax in Sweden, followed by the experimental manipulation in which this tax was presented as either a default or not. As a filler task, participants were asked what they thought about this policy. Ostensibly after the task had finished, participants were then given the option to actually donate up to $1 of their own bonus for taking part in the study. In line with our predictions, people donated more of their own money after reading about a charity taxation policy that was the default ($0.42 on average) than when donating was not the default ($0.29).

In the final minutes of the talk I concluded by arguing that, theoretically, our analysis situates default effects within a comprehensive body of social psychological research concerning social norms and the attitude-behavior relationship, providing novel empirical predictions. Practically, the evidence presented in this paper highlights that the way optional donation policies are framed can have an important impact on donation behavior, and making use of default effects could be an effective tool to increase behavior in the collective interest without compromising freedom. I ended by noting Samuel Johnson’s (1751) famous line that “To do nothing is in every man’s power”. While no doubt, in some cases, it is better to do nothing than to act, the work presented in this research programme shows that the passivity of human choice and the tendency to be led by social norms and default options can be harnessed for the greater good.

In the fourth and final talk of the symposium, Zoi Manesi gave a fascinating presentation on work she’s been conducting with Paul van Lange (who also worked with Niels) on the function of eye gaze in increasing norm compliance. Zoi began by noting that humans seem to have evolved to be sensitive to eyes, and that some evidence suggests this might be because eyes trigger a low-level neural mechanism. Zoi asked why this is: is it something specific about the eye gaze, the eyes generally, a face, or even anything related to humans at all? Most importantly, she noted that it remains unknown whether the eyes in an image need to be paying attention for the effects to occur. Put simply, do eyes need to be ‘watching’ to promote norm compliance? Across their studies, they had participants perform a low-cost normative behavior in a between-subjects design. The behavior chosen was typing long strings of randomly selected characters, where participants were told that if they left without completing the list they would still be paid, but a participant scheduled for later in the day would have to do the remainder of the first participant’s list as well as their own. In Study 1 (N = 249), participants were presented with images of open eyes, closed eyes, or flowers, and those exposed to images of open eyes were significantly more likely to complete the full list (56%) compared to those who saw images of closed eyes (49%) or flowers (48%). In Study 2 (N = 190), participants were exposed to images of eyes staring at the camera, eyes with an averted gaze, or flowers, and again only the direct-gaze images – eyes paying attention – increased norm compliance. Zoi concluded her talk by noting that normative behavior is not affected by just any facial cue that may imply a socially salient context; rather, the reminder of reputation lies in the eye gaze. Thus, it seems that eye images can promote prosociality, but only when they pay attention and trigger feelings of being watched.


Tribal minds: the evolution and psychology of coalitional aggression

Chair: Mark Van Vugt

Presenters/Authors: Carsten K.W. De Dreu, Daniel Balliet, Hannes Rusch, Mark Van Vugt

Time / Date: Saturday, 14 March 2015 12:30 – 13:50

From the moment I saw this symposium I was very excited, as my work on intergroup prosocial behavior has increasingly drawn on social-evolutionary theory and models. Indeed, it didn’t disappoint.

Mark van Vugt (who is actually also affiliated with Oxford, but whom I’ve never met) kicked off the symposium with a talk about the Tribal Mind and his Male Warrior Hypothesis. He began by quoting Darwin (it’s evolutionary psychology, after all). This quote is one of my favorites, and I’ll reproduce it here:

“With those animals which were benefited by living in close association, the individuals which took the greatest pleasure in society would best escape various dangers, while those that cared least for their comrades, and lived solitary, would perish in greater numbers.”

(Darwin, 1871, p. 105).

Van Vugt’s work is inspired by the question of why it is that when we look at intergroup conflict, aggression, and violence, this behavior seems to be exhibited primarily by men. Why would this be so? Van Vugt suggested that coalitional aggression can be seen as an evolutionary puzzle in the form of a social dilemma, and that the costs of coalitional aggression are almost always higher for women than for men. If he’s right, then there should have been social and/or sexual selection for ‘warrior’ traits in men, and so the tribal instinct should be stronger in men (spoiler alert: it is). Van Vugt sped through a number of converging studies that supported his claim, and I can’t go over these in any great detail. A lot of this work is published in various places, and I highly recommend reading it if you haven’t already. I will mention a few pieces of evidence that he cited. First, he discussed how male/male dyads in a double gender-blind war game (i.e. neither party knew the gender of the other) showed greater unprovoked conflict than male/female or female/female dyads. Second, men are more aroused than women by pictures of coalitional aggression. Third, men have more positive attitudes toward war than women. Fourth, men see intergroup aggression as a more ‘final’ and viable solution. Fifth, both men and women prefer more masculine leaders in times of war, but more feminine leaders in times of peace.

Van Vugt concluded that coalitional aggression has been a significant selection force in human evolution, shaping intergroup psychology. Men and women have therefore likely evolved different adaptations to engage in intergroup behavior. He ended by noting that there is a need for more work to explore in more detail the neural mechanisms of this, and the way that this interacts with culture.

Next up was Carsten de Dreu, talking about parochial altruism and asymmetrical conflict. He noted that intergroup conflict can be either symmetrical or asymmetrical. De Dreu argued that the intergroup conflict most often studied in psychology is symmetrical, where both groups want something that is not presently ‘owned’ by either group (e.g. a third territory, in a colonial war). However, he argued that real-world intergroup conflict is more often asymmetrical, where one side wants to change the status quo (predator) and one wants to defend it (prey). Sometimes groups can be in a predatory mode (wanting something the outgroup has), and sometimes in a defending mode (wanting to protect and defend against the outgroup). This sounds really obvious when it’s stated like this, but I must admit that de Dreu’s distinction was something of a lightbulb moment for me. Of course this is the case, and he is right that most research in the field looks at symmetrical conflicts. Even at the start of the talk, my mind was already racing with ideas for studies I should do…. Anyway, back to the talk. De Dreu discussed data from a war project at Penn State University, which coded more than 2,000 military disputes since 1816. Using these data, de Dreu was able to explore two questions: first, whether conflicts were more often symmetrical or asymmetrical; and second, what the outcomes of these symmetrical and asymmetrical conflicts were. Interestingly, de Dreu showed that of the disputes in the database since 1816, two-thirds were asymmetrical and only one-third symmetrical. Next, he looked at the outcomes of these conflicts, and found not only that conflicts were more likely to be asymmetrical, but that the defending party usually won, while the predatory group tended to lose. He then asked why this might be so. One potential reason, he suggested, might be that defending the status quo (more so than challenging it) activates more basic and evolutionarily old systems in the brain. For example, defence seems related to the amygdala, while offence seems related to the prefrontal cortex.

De Dreu then discussed a game that he developed with his collaborators to explore this: the Predator-Prey Intergroup Conflict Game. In the game, one group is a predator group and one a prey group, and people can contribute from an initial endowment to a common pot (i.e. contributions to war). If the predator group invests more than the prey group, the predators win. If the prey group invests as much or more, the attack fails, and members of both groups simply keep whatever they did not contribute to the war pot. Using 24 six-person groups (N = 144; 35% male), they ran 10 conflict episodes: 5 with peer punishment possible, and 5 without. De Dreu found that investments were higher in the prey groups, and that investments in the predator groups only rose when peer punishment was possible. Furthermore, the prey survived the predator most of the time. De Dreu argued, therefore, that for predator groups contributions detract from individual wealth, while for prey groups defence preserves individual wealth. De Dreu concluded that across group-living species, conflict is more often asymmetrical than symmetrical. Cooperation in prey defence promotes individual fitness (survival), while cooperating in predatory attacks reduces individual fitness. Further, sanctioning individuals are needed and used in predatory groups, but not in prey defence. Therefore, sanctioning institutions potentially raise within-group cooperation, but they seem to escalate intergroup conflict.
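
Based on my notes, here is a rough sketch of the payoff logic of a single round as I understood it. The endowment, group size, and the handling of the spoils are my guesses, so treat the parameters as purely illustrative:

```python
def predator_prey_round(endowment, predator_contrib, prey_contrib):
    """Rough sketch of one round of the Predator-Prey Intergroup Conflict Game,
    as I understood the rules (parameters and spoils-splitting are my guesses).

    Each member keeps endowment - contribution. If the predator pool exceeds
    the prey pool, the predators win and take what the prey kept; otherwise
    the attack fails and everyone keeps what they did not invest.
    """
    pred_pool = sum(predator_contrib)
    prey_pool = sum(prey_contrib)
    pred_kept = [endowment - c for c in predator_contrib]
    prey_kept = [endowment - c for c in prey_contrib]

    if pred_pool > prey_pool:  # successful attack
        spoils = sum(prey_kept) / len(pred_kept)  # split among predators
        return [k + spoils for k in pred_kept], [0.0] * len(prey_kept)
    return pred_kept, prey_kept  # defence succeeds: everyone keeps the rest

# Three-person groups with an endowment of 10 per person:
print(predator_prey_round(10, [5, 5, 5], [2, 2, 2]))  # predators win
print(predator_prey_round(10, [2, 2, 2], [5, 5, 5]))  # defence holds
```

Note how, under these illustrative rules, defensive contributions protect what prey members keep, while predatory contributions only pay off if the attack succeeds: exactly the asymmetry de Dreu described.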

The third talk was by Daniel Balliet, who talked about his recent (and excellent) meta-analysis of ingroup favouritism in cooperation. This was a really exciting talk for me, as it connects closely to some experimental work I’ve been doing recently (currently under review), and to a theoretical paper I recently published in Frontiers in Behavioral Neuroscience. Balliet began by highlighting a few points of agreement and contrast between the social identity approach to intergroup relations (SIT: Tajfel & Turner, 1979; Turner et al., 1987) and the bounded generalized reciprocity model (BGR: Yamagishi and colleagues). For example, BGR suggests that ingroup favouritism only occurs when reputational concerns are salient and the actor has a chance to enhance or maintain a positive social reputation, while SIT suggests that ingroup favouritism should be observed even in private conditions where reputation management is impossible. A second contrast is that the BGR model suggests that ingroup favouritism should be greater in cases where direct reciprocity is not possible, while the SIT approach suggests that ingroup favouritism should be observed whether or not direct reciprocity is possible. Balliet and colleagues conducted a meta-analysis and found a number of interesting results. To be included in their meta-analysis, studies had to have manipulated groups (either experimental or natural), with cooperation games as the dependent variables: Dictator Games, Trust Games, Prisoner’s Dilemmas, and Public Goods Games. Overall, Balliet argued that the meta-analysis is more supportive of the BGR model. While his data are certainly consistent with this, I am still not sure that I agree with this conclusion overall. I would argue that preference-based accounts – that people do prefer to help ingroup members – are a crucial part of explaining ingroup favouritism, and that explaining ingroup phenomena purely through strategic reputational concerns cannot account for the greater body of research in which behavior in economic games is just one example. But, regardless of how we interpret the results, Balliet’s meta-analysis is extremely interesting and theoretically important because it offers new insights into ingroup favouritism and helps to resolve, in part, debates that have gone on since the 1970s. For example, Balliet’s work suggests that ingroup favouritism does not require an explicit outgroup to be manifested, and that it is stronger under situations of outcome interdependence. Balliet ended by arguing that more work is needed on the interplay of identity and reputation, which I completely agree with and which was reassuring to hear. I actually have a paper on exactly this under review right now, and I’m hoping the reviewers see the theoretical importance of this work. The paper was previously rejected without being sent for review because it was judged not theoretically interesting. Reviewers, whoever you may be, if you’re reading this: this is a super cool and theoretically important topic – just ask Balliet.

The view from my balcony

The Grand Krasnapolsky

Amsterdam Canals