When I was teaching a course in African history or African studies at the university, I would usually begin with a discussion to gauge the scope of knowledge students might already have. One icebreaker I employed was to ask students to name the deadliest (to humans) animal in Africa. Over the years, the correct answer was given just once—the mosquito (especially the Anopheles variety, which transmits malaria). Most responses were lions, elephants, and the like. Some could have said other human beings, but that never happened. I did get the objection that mosquitoes were not “animals,” but the objectors didn’t think they were vegetable or mineral either. This exercise illustrates one of the ways in which we tend to misperceive risks and probabilities. Deaths caused by the large game animals are fairly rare, and so the risk from them is quite low. Rare as such attacks are, these animals are familiar to us and readily come to mind when we think about Africa. It is also interesting that snakes are, in statistical terms, far more deadly in Africa than the large animals, yet they too are not immediately available (psychologically speaking) when we think of “dangerous African animals.” When considering the risks we face in our everyday lives, we likewise tend to place more importance on familiar or striking sources of danger, even though the risk presented by those more readily “available” things may be as rare as a tourist’s death by lion.
Since the passage of the Superfund Act in 1980, corporations, governments, and scientists have found it necessary to communicate more and more about environmental and chemical risks. More recently, the topics of concern have shifted to chemical additives, irradiation of foods, GMOs in the food supply, the safety of immunizations (whether childhood immunizations cause autism, for example), and fossil fuels’ contribution to potentially dangerous climate change, among others. Over the same period (the 1980s to the present), public trust in government, corporations, and scientists appears to have declined. The literature on risk communication and perception indicates that prior belief usually determines which sources are judged credible in the first place. For example, if a person already believes that childhood immunizations can cause autism, credible sources are limited to those that reinforce this belief; sources that dispute an already existing belief are discounted. Ethical and effective communication about these kinds of risk is therefore challenging. The enormous influence of online and social media, of course, may exacerbate such challenges even further.
The dangerous-animals exercise illustrates the kinds of filters shaping how we perceive information about risks. Most people do not have ready access to the kinds of statistics that would help them determine the actual probability of something bad happening; as a result, they tend to rely on practical rules of thumb for interpreting the nature of a risk.
One such rule of thumb is a sense of control. If I am driving myself, I feel more in control of what happens than if someone else has the wheel. That may explain why people feel safer driving than flying on a commercial airliner. Generally, air travel is safer than driving, although a careful statistician may quibble (strictly speaking, the statistics of this comparison are not straightforward, and these sorts of complexities are part of what makes communicating about risk difficult). Related to this rule is one about voluntariness. We are willing to accept the risk of skydiving if we have chosen to do it, but not as willing to accept a nuclear power plant in the neighborhood, even though jumping from a plane may be statistically riskier. Along the same lines, people differentiate risks based on their catastrophic potential. A very high percentage of people involved in auto accidents survive; surviving the crash of an airliner is much less likely. If told there is a 99% chance our plane won’t crash today, we still might want to change our travel plans.
Other tests we use include uncertainty and unfamiliarity. A new danger, or one not well understood, seems more threatening than one that is familiar. Ebola seemed frightening a year ago, yet we are less fearful of the annual flu season even though thousands of Americans die from the flu. Similarly, if incidents and the numbers of people affected are distributed over a wide area or a long time, the threat seems less striking. Auto accidents may kill 40,000 Americans a year, while a single plane crash that kills 200 at once gets more attention and may arouse more fear of flying. The risk may be seen as distributed in other ways as well: the people most at risk may seem distant from us in place, time, or condition. In his encyclical Laudato Si’, Pope Francis focused on the harmful effects of climate change on the world’s poor, in areas such as Africa or southern Asia, since they tend to be much more at risk from the lack of water, food, and sanitation resulting from environmental degradation than people in wealthier countries. In a similar vein, we are usually less concerned about these risks for people in the future. From this point of view, risk communication takes on aspects touching on social justice and human rights.
These rules of thumb for judging risks clarify the differences among being logical, being rational, and being reasonable. For the most part, the rules tend to be reasonable, if not strictly logical or even completely rational. Most of us are not Spock or Sheldon Cooper, nor do we want to be. Those who want to communicate about public risk need to recognize that their audiences are usually being reasonable when they appear unconvinced by a mere presentation of “the facts.” It is reasonable to judge risks with irreversible, catastrophic consequences as less acceptable than risks that are much more likely but carry less catastrophic outcomes. It is also reasonable for people to interpret such communication from a point of view shaped by their interactions with and memberships in social groups. People do not receive such messages as isolated individuals.
Issues of risk communication highlight the difficulties of communicating scientific information in an ethical and effective manner. To the lay public, scientists may seem to communicate in arcane and overly technical language. Their statements may seem hedged with qualifications, worded as probabilities rather than as definite statements of fact. I hope to take up issues related to ethical risk and science communication over the next few weeks.
William W. Neher
Bill Neher is professor emeritus of communication studies at Butler University, where he taught for 42 years. Over those years he served as Dean of the University College, Director of the Honors Program, Head of the Department of Communication Studies, Chair of the faculty governance, and most recently as the first Dean (Interim) of the new College of Communication, begun in June 2010. He is the author of several books dealing with organizational and professional communication, ethics, and African studies, plus several public speaking and communication textbooks.
The Conference on Ethics and Public Argumentation, housed in the Butler University College of Communication, serves as CCOM’s academic hub for promoting the ethical use of reasoning and rationality in public deliberation.