Tag Archives: Fuzzy-Trace Theory

HD-Today e-Newsletter, Summer 2016 Issue

By Allison M. Hermann, Ph.D.

LRDM lab members and 4-H Career Explorations students


The Laboratory for Rational Decision Making (LRDM), led by Dr. Valerie Reyna in Human Development, welcomed 24 high school students from 18 counties across New York State for a three-day course in decision-making research called “Getting the Gist.” The students journeyed to Cornell University as part of the 4-H Career Explorations Conference, which offers secondary school students the opportunity to attend courses and workshops and learn about STEM research.


James Jones-Rounds, Lab Manager of the HEP Lab

The high school students became guest LRDM lab members and learned how to turn their questions about risky decision making into experiments. They created an experiment, collected and analyzed the data, and discussed the results. The student career explorers also toured the Center for Magnetic Resonance Imaging Facility and the EEG and Psychophysics Laboratory and saw how decision research uses brain imaging technologies to examine what brain areas are activated when making risky decisions.

Dr. Reyna’s graduate students David Garavito, Alisha Meschkow and Rebecca Helm, along with research staff member Bertrand Reyna-Brainerd, presented lectures on Dr. Reyna’s fuzzy-trace theory and research design and led interactive discussions with the visiting students about the paths that brought them to the LRDM at Cornell. In addition, three undergraduate members of the lab, Tristan Ponzo (’18), Elana Molotsky (’17) and Joe DeTello (’19), delivered poster presentations of current lab research projects. Feedback from one of the career explorers captured the gist of the program: “Yes, I definitely feel like I have a better understanding of why I make the decisions I do.”

For juries awarding plaintiffs for pain and suffering, the task is more challenging – and the results more inconsistent – than awarding for economic damages, which is formulaic. Now, Cornell social scientists show how to reduce wide variability for monetary judgments in those cases: Serve up the gist.

As an example of gist, juries take into account the severity of an injury and its time scope. A broken ankle is a temporary setback that will heal. A face disfigured in an accident is an injury that lasts indefinitely and diminishes quality of life. In short, “meaningful anchors” – monetary reference points that fit the context of the injury – translate into more consistent dollar amounts.

Valerie Reyna


“Inherently, assigning exact dollar amounts is difficult for juries,” said Valerie Reyna, professor of human development. “Making awards is not chaos for juries. Instead of facing verbatim thoughts, juries rely on gist – as it is much more enduring. And when we realize that gist is more enduring, our models suggest that jury awards are fundamentally consistent.”

The foundation for understanding jury awards lies in “fuzzy-trace” theory, developed by Reyna and Charles Brainerd, professor of human development. The theory holds that the mind forms two parallel representations of information: verbatim representations – facts, figures, dates and other literal details – and gist representations, which capture broad, general, imprecise meaning.


Valerie Hans

“Experiments have confirmed the basic tenets of fuzzy trace theory,” said Valerie Hans, psychologist and Cornell professor of law, who studies the behavior of juries. “People engage in both verbatim- and gist-thinking, but when they make decisions, gist tends to be more important in determining the outcome; gist seems to drive decision-making.”

Co-authors of the study, “The Gist of Juries: Testing a Model of Damage Award Decision Making,” along with Reyna and Hans, include Jonathan Corbin, Ph.D. ’15; Ryan Yeh ’13, now at Yale Law School; Kelvin Lin ’14, now at Columbia Law School; and Caisa Royer, a doctoral student in the field of human development and a student at Cornell Law School.

The research was funded by grants from the National Institutes of Health, Cornell’s Institute for the Social Sciences, the Cornell Law School and Cornell’s College of Human Ecology.

Update: On Sept. 1, 2015, the National Science Foundation awarded a grant for $389,996 to Cornell for support of the project “Quantitative Judgments in Law: Studies of Damage Award Decision Making,” under the direction of Valerie P. Hans and Valerie F. Reyna.

Flip a coin, roll the dice, pick a number, throw a dart -- there is no shortage of metaphors for the perceived randomness of juries. The belief that "anything can happen" applies to some extent to the whole jury model, but nowhere more so than on the topic of civil damages. When jurors are asked to supply a number, particularly in categories that are intangible (like pain and suffering) or somewhat speculative (like lost earnings), the eventual figure is sometimes treated as a kind of crapshoot. What's more, the process the jury goes through to get there is seen as a kind of black box: not well understood or necessarily subject to clear influence.

What litigators might not know is that civil damages is a great example of social science research beginning to close that gap. Based on research within the last decade, we are coming closer to opening the black box in order to see a jury process that, while not fully predictable, is at least more knowable. The way jurors arrive at a number is increasingly capable of being described through the literature, and these descriptions have implications for the ad damnum requests made by plaintiffs and for the alternative amounts recommended by defendants. A study just out this year (Reyna et al., 2015), for example, provides support for a multistage model describing how jurors move from a story, to a general sense or 'gist' of damages, and then to a specific number. Their work shows that, while a numerical suggestion -- an 'anchor' -- has a strong effect on the ultimate award amount, that effect is strongest when the anchor is meaningful. In other words, just about any number will have an effect on a jury, but a meaningful number -- one that provides a reference point that matters in the context of the case -- carries a stronger and more predictable effect. This post will share some of the results from the new study framed around some conclusions that civil litigators should take to heart.

No, Jurors Aren't Random

The research team, led by Cornell psychologists Valerie Reyna and Valerie Hans, begins with the observation that "jurors are often at sea about the amounts that should be awarded, with widely differing awards for cases that seem comparable." They cover a number of familiar reasons for that: limited guidance from the instructions; categories that are vague or subjective; judgments that depend on broad estimations, if not outright speculation; and a scale that begins at zero but, theoretically at least, ends in infinity. Add to that the problem that jurors often have limited numerical competence (low "numeracy") or an aversion to detailed thinking (low "need for cognition"). All of that means that there isn't, and will never be, a precise and predictable logic to a jury's damages award. But it doesn't mean that they're picking a number out of a hat either. Research is increasingly pointing toward the route jurors take in moving to the numbers.

There is a Path Jurors Follow

As detailed in the article, jury awards will vary on the same essential case facts, but a number of studies have found a strong "ordinal" relationship between injury severity and award amounts: More extreme injuries lead to higher awards, and vice versa. Increasing knowledge of the process has led researchers to describe a jury's decision in steps, both to better understand it, and also to underscore that some steps are more predictable than others. The following model, for example, is drawn from Hans and Reyna's 2011 work (though I've taken the liberty to make some of the language less academic):

[Slide: Hans and Reyna's multistage model of damage award decision making]

That breakdown of steps is backed up by the research reviewed in the article, but it isn't just useful for scholars. Watch mock trial deliberations and you are likely to see jurors moving through that general sequence. Knowing that jurors will implicitly or explicitly settle on a 'gist' (large, small, or in between) before translating it into a specific number is also a useful reminder to litigators: speak to the gist as well as to the ultimate number.

So Use an Anchor, but Make it Meaningful

The model also points out the advantages of giving jurors guidance on a number. Research supports the view that it is a good idea to try to anchor jurors' awards by providing a number. Suggesting a higher number generally leads to a higher award, and vice versa. Reyna and colleagues found that even the mention of an arbitrary dollar amount (the cost of courthouse repairs) influences the size of awards. But the central finding of the study is that it helps even more if the dollar amount isn't arbitrary. "Providing meaningful reference points for award amounts, as opposed to only providing arbitrary anchors," the team concludes, "had a larger and more consistent effect on judgments." Not only are the ultimate awards closer to the meaningful anchor, but they are also more predictable, clustering more tightly around the anchor when it is meaningful rather than arbitrary.

And Here is what "Meaningful" Might Mean

Of course, the notion of what is "meaningful" might carry just as much vagueness as the damages category itself, and the research team is not fully explicit on what makes a number meaningful in a trial context. That might be a fitting question for the next study, but in the meantime, the team gives at least some guidance. To Reyna et al., recommended amounts are "meaningful in the sense that their magnitude is understood as appropriate in the context of that case." In the study, the "meaningful" anchor was one that expressed a pain and suffering amount as being either higher or lower than one year's income. That's not a perfect parallel -- one is payment for work, the other is recompense for suffering -- but it does reference something that the jury is used to thinking about and using as a rough way of valuing time. By nature, meaningful anchors will vary from case to case, but there are a few general numbers that could serve as reference points, like median annual salary; daily, weekly, or monthly profits; or remediation costs.

Of course, those are well-known devices that attorneys will regularly apply. Still, it is good to know that, at a preliminary level at least, they have the social science stamp of approval. Those kinds of anchors and reference points work because they give jurors a way to get a gist of the claimed damages, and a way to bring abstract numbers into the jurors' own mental universe.

Copyright Holland & Hart LLP 1995-2015.

By H. Roger Segelken
Reprinted from Cornell Chronicle, December 16, 2014

When the doctor says, “I could prescribe antibiotics for your sniffles, but it’s probably a virus – not bacterial,” do you decline? Many patients expect antibiotics, although overprescription is a major factor driving one of the biggest public health concerns today: antibiotic resistance.

Now researchers at Cornell, George Washington and Johns Hopkins universities have figured out why: “Patients choose antibiotics because there’s a chance [prescription medications] will make them better, and they perceive the risks of taking antibiotics as negligible,” says Cornell psychologist Valerie Reyna.

With her co-authors, the professor of human development has published new research with important implications for communicating about antibiotics: “Germs Are Germs, and Why Not Take a Risk? Patients’ Expectations for Prescribing Antibiotics in an Inner-City Emergency Department,” in the journal Medical Decision Making.

That’s encouraging news for health educators, Reyna says, noting: “Patients might expect doctors to prescribe antibiotics because patients confuse viruses and bacteria – and think antibiotics will be effective for either. Most educational campaigns attempt to educate patients about this misconception. However, we found fewer than half of patients in an urban ER agreeing with the message, ‘germs are germs.’”

Patients who understand the difference between viruses and bacteria – and take antibiotics anyway – are making a strategic risk assessment, Reyna says: “Our research suggests that antibiotic use boils down essentially to a choice between a negative status quo – sick for sure – versus taking antibiotics and maybe getting better. This risk strategy promotes antibiotic use, particularly when taking antibiotics is perceived as basically harmless.”

Fuzzy-trace theory

The Broniatowski-Klein-Reyna study is the first to apply “fuzzy-trace” theory to how people think about antibiotics. The theory predicts that patients make decisions based on the gist (or simple bottom line) of information.

As Reyna explains: “The goal is to make better decisions, getting antibiotics to patients who need them but not overusing them so the rest of the public is safe. Understanding how patients think is crucial because their expectations influence doctors’ decisions.”

Adds David Broniatowski, assistant professor of engineering management and systems engineering at GWU, and the report’s first author: “We need to fight fire with fire. If patients think that antibiotics can’t hurt, we can’t just focus on telling them that they probably have a virus. We need to let them know that antibiotics can have some pretty bad side effects, and that they will definitely not help cure a viral infection.”

The third author is Dr. Eili Klein, assistant professor in the Department of Emergency Medicine at the Johns Hopkins University and a fellow at the Center for Disease Dynamics, Economics and Policy.

Reyna is the director of the Human Neuroscience Institute, co-director of the Cornell University Magnetic Resonance Imaging Facility, and a co-director of the Center for Behavioral Economics and Decision Research, all in the College of Human Ecology. She is a developer of “fuzzy-trace theory,” a model of the relation between mental representations and decision making that has been widely applied in law, medicine and public health.

The study was supported, in part, by funds from the National Institutes of Health and the U.S. Department of Homeland Security.