Articles on the Web

Reprinted from NPR, "How To Help A Kid Survive Early Puberty," May 16, 2019, by Juli Fraga.

From surging hormones and acne to body hair and body odor, puberty can be a rocky transition for any kid. But girls and boys who start physically developing sooner than their peers face particular social and emotional challenges, researchers find.

Jane Mendle

"Puberty is a pivotal time in kids' lives, and early maturing boys and girls may be more likely to struggle psychologically," says Jane Mendle, a psychologist and associate professor at Cornell University.

A 2018 study conducted by Mendle and her team found that girls who entered puberty significantly earlier than their peers were at higher risk for mental health concerns. They're more likely to become depressed during adolescence, the study finds, and this distress can persist into adulthood.

"For some girls, puberty can throw them off course, and the emotional stress can linger," Mendle says, "even after the challenges of puberty wane."

While the age range for puberty varies, says Jennifer Dietrich, a pediatric gynecologist at Texas Children's Hospital, the average age of menses is 12.3 years. However, about 15% of females start puberty much sooner — by the age of 7.

Research from the American Academy of Pediatrics suggests boys are also developing earlier, by age 10, which is six months to one year sooner than previous generations.

Pediatricians haven't identified a lone cause for this shift, but Louise Greenspan, a pediatric endocrinologist at Kaiser Permanente in San Francisco, says childhood obesity, environmental chemical contributors, and the effects of chronic stress — a hormonal response to neglect or abuse in the family, for example — may all play a role.

At a crucial time when kids long to fit in, puberty can make them stand out. And when breast buds and body hair sprout during elementary school, children often feel exposed. Unable to hide their sexual development from others, they may feel ashamed or embarrassed.

Cosette Taillac, a psychotherapist at Kaiser Permanente in Oakland, Calif., recalls a particular client, a 9-year-old girl, who had started to feel self-conscious playing soccer because her body was developing.

When the little girl no longer wanted to participate in sports — something she had always loved — her parents sought Taillac's help.

"She didn't want to dress in front of her teammates," says Taillac.

Studies show girls who physically mature early may be more likely than boys to ruminate about these uneasy feelings. According to researchers, this can prolong the emotional distress, which may increase their risk of depression and anxiety.

Still, though girls are more likely to internalize the stress they feel, boys aren't unscathed, says Mendle.

In research by Mendle and her colleagues, early maturing boys were more likely than others to feel socially isolated and to face conflict with friends and classmates. "This may increase their risk of depression," she says, "but we're uncertain if these effects last into adulthood."

Because information about early development tends to focus on girls, parents are often perplexed when their sons start puberty early, says Fran Walfish, a child and adolescent psychotherapist in Beverly Hills, Calif.

Their first clue, she says, may come when a tween boy refuses to shower or wear deodorant.

Helping kids navigate these new social and emotional hurdles can be tricky, especially since puberty spans several years. But don't be afraid to reach out — or to start the conversation early.

Greenspan suggests talking to children about sexual development by the age of 6 or 7. "Starting the conversation when kids are young, and keeping lines of communication open can make the transition less scary," she says.

At times, parents may also need to advocate for their children. "My client's parents worked with the soccer coach to create more privacy for her when dressing for team events," says Taillac. The simple adjustment helped the girl feel safe and more confident.

Of course, not all kids are eager for a parent's help; some shy away from even talking about their newfound struggles. That's sometimes a sign they're confused or overwhelmed, child psychologists say.

"It's important for parents to realize that puberty triggers identity questions like 'Who am I?' and 'Where do I fit in?' for boys and girls," Walfish says.

Taillac says reading books together can help. "Books provide a common language to discuss what's going on, which can open up conversations between parents and children," she says.

For elementary school girls, "The Care and Keeping of You: The Body Book for Younger Girls," by Valorie Schaefer can be a helpful book. Reading "The Tween Book: A Growing Up Guide for the Changing You," by Wendy Moss and Donald Moses can be informative for boys and girls, even as they reach the teen years.

Seeing your child mature early can also worry a parent. If you find yourself unsure of how to intervene, psychologists say, remember that distraught kids often want the same thing we all seek when we're upset — a generous dose of empathy.

Luckily, compassion doesn't require parents to have all the answers. Puberty calls for the same good parenting skills as any other age: being emotionally available to kids through their developmental milestones, witnessing their growing pains, and providing comfort when life throws them curveballs.

That advice is simple; the effects powerful. Scientific evidence shows this kind of parental support helps foster emotional resilience, and that bolsters kids' health and relationships for years to come.

Listen to an interview with Jane Mendle to learn more about her research on early puberty in girls.

In the early 1990s, Iris Murdoch was writing a new novel, as she’d done 25 times before in her life. But this time something was terribly off. Her protagonist, Jackson, an English manservant who has a mysterious effect on a circle of friends, once meticulously realized in her head, had become a stranger to her. As Murdoch later told Joanna Coles, a Guardian journalist who visited her in her house in North Oxford in 1996, a year after the publication of the book, Jackson’s Dilemma, she was suffering from a bad writer’s block. It began with Jackson and now the shadows had suffused her life. “At the moment I’m just falling, falling … just falling as it were,” Murdoch told Coles. “I think of things and then they go away forever.”

Jackson’s Dilemma was a flop. Some reviewers were respectful, if confused, calling it “an Indian Rope Trick, in which all the people … have no selves,” and “like the work of a 13-year-old schoolgirl who doesn’t get out enough.” Compared to her earlier works, which showcase a rich command of vocabulary and a keen grasp of grammar, Jackson’s Dilemma is rife with sentences that forge blindly ahead, lacking delicate shifts in structure, the language repetitious and deadened by indefinite nouns. In the book’s final chapter, Jackson sits sprawled on grass, thinking that he has “come to a place where there is no road,” as lost as Lear wandering on the heath after the storm.

Iris Murdoch and her husband, John Bayley

Two years after Jackson’s Dilemma was published, Murdoch saw a neurologist who diagnosed her with Alzheimer’s disease. That discovery brought about a small supernova of media attention, spurred the next year by the United Kingdom publication of Iris: A Memoir of Iris Murdoch (called Elegy for Iris in the United States), an incisive and haunting memoir by her husband John Bayley, and a subsequent film adaptation starring Kate Winslet and Judi Dench. “She is not sailing into the dark,” Bayley writes toward the end of the book. “The voyage is over, and under the dark escort of Alzheimer’s, she has arrived somewhere.”

In 2003, Peter Garrard, a professor of neurology, with an expertise in dementia, took a unique interest in the novelist’s work. He had studied for his Ph.D. under John Hodges, the neurologist who had diagnosed Murdoch with Alzheimer’s. One day Garrard’s wife handed him her copy of Jackson’s Dilemma, commenting, “You’re interested in language and Alzheimer’s; why don’t you analyze this?” He resolved he would do just that: analyze the language in Murdoch’s fiction for signs of the degenerative effects of Alzheimer’s.

Prior to his interest in medicine, Garrard had studied ancient literature at Oxford, at a time when the discipline of computational language analysis, or computational linguistics, was taking root. Devotees of the field had developed something they called the Oxford Concordance Program—a computer program that created lists of all of the word types and word tokens in a text. (Token refers to the total number of words in a given text, and the type is the number of different words that appear in that text.) Garrard was intrigued by the idea that such lists could give ancient literature scholars insight into texts whose authorship was in dispute. Much as a Rembrandt expert might examine paint layers in order to assign a painting to a forger or to the Old Master himself, a computational linguist might count word types and tokens in a text and use that information to identify a work of ambiguous authorship.
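As a rough illustration of the type and token counts described above, the following Python sketch tallies both for a sample passage. It is only a minimal stand-in for a concordance program such as the Oxford Concordance Program, and its tokenization is deliberately naive; real concordance software handles punctuation, hyphenation, and variant word forms far more carefully.

```python
import re
from collections import Counter

def type_token_counts(text: str):
    """Count word tokens (all words) and word types (distinct words) in a text."""
    # Naive tokenization: lowercase, keep runs of letters and apostrophes.
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    tokens = sum(counts.values())  # total number of words in the text
    types = len(counts)            # number of distinct words
    return tokens, types

sample = "The cat sat on the mat, and the dog sat on the cat."
tokens, types = type_token_counts(sample)
print(f"tokens={tokens}, types={types}, type/token ratio={types / tokens:.2f}")
```

Dividing types by tokens gives a simple measure of vocabulary variety; a falling ratio across samples of comparable length is the kind of narrowing that later turned up in Murdoch's and Christie's final books.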

Garrard had the idea to apply a similar computational technique to books by Murdoch. Alzheimer’s researchers believe cognitive impairment begins well before signs of dementia are obvious to outsiders. Garrard thought it might be possible to sift through three of Murdoch’s novels, written at different points in her life, to see if signs of dementia could be read between the lines.

Scientists believe Alzheimer’s disease is caused by cell death and tissue loss resulting from an abnormal buildup of plaques and tangles of protein in the brain. Language is impacted when the brain’s Wernicke’s and Broca’s areas, responsible for language comprehension and production, are affected by the spread of disease. Language, therefore, provides an exceptional window on the onset and development of pathology. And a masterful writer like Murdoch puts bountiful language in high relief, offering a particularly rich field of study.

The artist, in fact, could serve science. If computer analysis could help pinpoint the earliest signs of mild cognitive impairment, before the onset of obvious symptoms, this might be valuable information for researchers looking to diagnose the disease before too much damage has been done to the brain.

Barbara Lust, Professor of Human Development, Linguistics, and Cognitive Science at Cornell University, who researches topics in language acquisition and early Alzheimer’s, explains that understanding changes in language patterns could be a boon to Alzheimer’s therapies. “Caregivers don’t usually notice very early changes in language, but this could be critically important both for early diagnosis and also in terms of basic research,” Lust says. “A lot of researchers are trying to develop drugs to halt the progression of Alzheimer’s, and they need to know what the stages are in order to halt them.”

Before Garrard and his colleagues published their Murdoch paper in 2005, researchers had identified language change as a hallmark of Alzheimer’s disease. As Garrard explains, a patient’s vocabulary becomes restricted, and they use fewer words that are specific labels and more words that are general labels. For example, it’s not incorrect to call a golden retriever an “animal,” though it is less accurate than calling it a retriever or even a dog. Alzheimer’s patients would be far more likely to call a retriever a “dog” or an “animal” than “retriever” or “Fred.” In addition, Garrard adds, the words Alzheimer’s patients lose tend to appear less frequently in everyday English than words they keep—an abstract noun like “metamorphosis” might be replaced by “change” or “go.”

Researchers also found the use of specific words decreases and the noun-to-verb ratio changes as more “low image” verbs (be, come, do, get, give, go, have) and indefinite nouns (thing, something, anything, nothing) are used in place of their more unusual brethren. The use of the passive voice falls off markedly as well. People also use more pauses, Garrard says, as “they fish around for words.”
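As a toy illustration of how such lexical markers can be quantified, the sketch below computes rates per 1,000 tokens for the "low image" verbs and indefinite nouns listed above. The word lists come straight from the passage; everything else (the tokenizer, the sample sentences) is an assumption for illustration, not the procedure Garrard's group used.

```python
import re

# Word lists quoted above; a real analysis would use part-of-speech tagging
# and count inflected forms (is, was, went, got, ...) as well.
LOW_IMAGE_VERBS = {"be", "come", "do", "get", "give", "go", "have"}
INDEFINITE_NOUNS = {"thing", "something", "anything", "nothing"}

def marker_rates(text: str):
    """Return (low-image-verb rate, indefinite-noun rate) per 1,000 tokens."""
    words = re.findall(r"[a-z']+", text.lower())
    n = len(words) or 1
    low_image = sum(w in LOW_IMAGE_VERBS for w in words)
    indefinite = sum(w in INDEFINITE_NOUNS for w in words)
    return 1000 * low_image / n, 1000 * indefinite / n

specific = "She scrutinized the manuscript, tracing each metamorphosis of the plot."
generic = "He wanted to do something with the thing, but he had to go and get another thing."
print("specific sample:", marker_rates(specific))
print("generic sample:", marker_rates(generic))
```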

In their 2005 paper, Garrard and colleagues point out that the assessment of language changes in Alzheimer’s patients was based in many cases on standardized tasks such as word fluency and picture naming, the kind of tests criticized for lacking “real-world validity.” But writing novels is a more naturalistic activity, one done voluntarily and without knowledge of the disease. That eliminates any negative or compensatory response that a standardized test might induce in a patient. With Murdoch, he and his colleagues could analyze language, “the products of cognitive operations,” over the natural course of her novel-writing life, which stretched from her 30s to her 70s. “I thought it would be fascinating to be able to show that language could be affected before the patient or anyone else was aware of symptoms,” Garrard says.

For his analysis of Murdoch, Garrard used a program called Concordance to count word tokens and types in samples of text from three of her novels: her first published effort, Under the Net; a mid-career highlight, The Sea, The Sea, which won the Booker prize in 1978; and her final effort, Jackson’s Dilemma. He found that Murdoch’s vocabulary was significantly reduced in her last book—“it had become very generic,” he says—as compared to the samples from her two earlier books.

The Murdoch paper by Garrard and his colleagues proved influential. In Canada, Ian Lancashire, an English professor at the University of Toronto, was conducting his own version of textual analysis. Though he’d long toiled in the fields of Renaissance drama, Lancashire had been inspired by the emergence of a field called corpus linguistics, which involves the study of language through specialized software. In 1985, he founded the Center for Computing in the Humanities at the University of Toronto. (Today Lancashire is an emeritus professor, though he maintains a lab at the University of Toronto.)

In trying to determine some sort of overarching theory on the genesis of creativity, Lancashire had directed the development of software for the purpose of studying language through the analysis of text. The software was called TACT, short for Textual Analysis Computing Tools. The software created an interactive concordance and allowed Lancashire to count types and tokens in books by several of his favorite writers, including Shakespeare and Milton.

Lancashire had been an Agatha Christie fan in his youth, and decided to apply the same treatment to two of Christie’s early books, as well as Elephants Can Remember, her second-to-last novel. What he discovered astounded him: Christie’s use of vocabulary had “completely tanked” at the end of her career, by about 20 percent. “I was shocked, because it was so obvious,” he says. Even though the length of Elephants was comparable to her other works, there was a marked decrease in the variety of words she used in it, and a good deal more phrasal repetition. “It was as if she had given up trying to find le mot juste, exactly the right word,” he says.

Lancashire presented his findings at a talk at the University of Toronto in 2008. Graeme Hirst, a computational linguist in Toronto’s computer science department, was in the audience. He suggested to Lancashire that they collaborate on statistical analysis of texts. The team employed a wider array of variables and much larger samples of text from Christie and Murdoch, searching for linguistic markers for Alzheimer’s disease. (Unlike Murdoch, Christie was never formally diagnosed with Alzheimer’s.)

The Toronto team, which included Regina Jokel, an assistant professor in the department of Speech-Language Pathology at the University of Toronto, and Xuan Le, at the time one of Hirst’s graduate students, settled on P.D. James—a writer who would die with her cognitive powers seemingly intact—as their control subject. Using a program called Stanford Parser, they fed books by all three writers through the algorithm, focusing on things like vocabulary size, the ratio of the size of the vocabulary to the total number of words used, repetition, word specificity, fillers, grammatical complexity, and the use of the passive voice.
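For a concrete sense of a few of those surface features, here is a hedged sketch in Python: it computes vocabulary size, the type-to-token ratio, and a simple phrasal-repetition score for a text sample. Grammatical complexity and passive-voice detection need a full parser (the team used the Stanford Parser) and are not attempted here; the file names are hypothetical placeholders, not the study's data.

```python
import re
from collections import Counter

def surface_features(text: str) -> dict:
    """Vocabulary size, type/token ratio, and repeated-bigram rate for one sample.
    Illustrative stand-ins for a few of the variables described above,
    not the study's actual pipeline."""
    words = re.findall(r"[a-z']+", text.lower())
    n = len(words) or 1
    bigrams = Counter(zip(words, words[1:]))
    # Count bigram occurrences that repeat an earlier occurrence.
    repeats = sum(c - 1 for c in bigrams.values() if c > 1)
    return {
        "tokens": n,
        "vocabulary": len(set(words)),
        "type_token_ratio": len(set(words)) / n,
        "repeated_bigram_rate": repeats / max(n - 1, 1),
    }

# Hypothetical file names -- substitute equal-length samples from any texts.
for label, path in [("early novel", "under_the_net_sample.txt"),
                    ("late novel", "jacksons_dilemma_sample.txt")]:
    try:
        with open(path, encoding="utf-8") as f:
            print(label, surface_features(f.read()))
    except FileNotFoundError:
        print(f"{label}: place a plain-text sample at {path} to run the comparison")
```

Comparing samples of similar length keeps the type/token ratio meaningful, since the ratio naturally falls as texts get longer.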

“Each type of dementia has its own language pattern, so if someone has vascular dementia, their pattern would look different than someone who has progressive aphasia or Alzheimer’s,” says Jokel. “Dementia of any kind permeates all modalities, so if someone has problems expressing themselves, they will have trouble expressing themselves both orally and in writing.”

To the researchers, evidence of Murdoch’s decline was apparent in Jackson’s Dilemma. A passage from The Sea, The Sea illustrates her rich language:

The chagrin, the ferocious ambition which James, I am sure quite unconsciously, prompted in me was something which came about gradually and raged intermittently.

In Jackson’s Dilemma, her vocabulary seems stunted:

He got out of bed and pulled back the curtains. The sun blazed in. He did not look out of the window. He opened one of the cases, then closed it again. He had been wearing his clothes in bed, except for his jacket and his shoes.

It seems that after conceiving of her character, Murdoch had trouble climbing back inside of his head. According to Lancashire, this was likely an early sign of dementia. “Alzheimer’s disease ... damages our ability to see identity in both ourselves and other people, including imagined characters,” Lancashire later wrote. “Professional novelists with encroaching Alzheimer’s disease will forget what their characters look like, what they have done, and what qualities they exhibit.”

The Toronto team’s “Three British Novelists” paper, as it came to be called, influenced a number of other studies, including one at Arizona State University. Using similar software, researchers examined non-scripted news conferences of former presidents Ronald Reagan and George Herbert Walker Bush. President Reagan, they wrote, showed “a significant reduction in the number of unique words over time and a significant increase in conversational fillers and non-specific nouns over time,” while there was no such pattern for Bush. The researchers conclude that during his presidency, Reagan was showing a reduction in linguistic complexity consistent with what others have found in patients with dementia.

Brian Butterworth, a professor emeritus of cognitive neuropsychology at the Institute of Cognitive Neuropsychology at University College London, also “diagnosed” Reagan in the mid ’80s, years before Reagan was clinically diagnosed with Alzheimer’s disease. Butterworth wrote a report comparing Reagan’s performance in the 1980 debate with then-President Jimmy Carter to his performance in the 1984 debate with Democratic presidential nominee Walter Mondale.

“With Carter, Reagan was more or less flawless, but in 1984, he was making mistakes of all sorts, minor slips, long pauses, and confusional errors,” Butterworth says. “He referred to the wrong year in one instance.” If one forgets a lot of facts, as Reagan did, Butterworth says, that might be an effect of damage to the frontal lobes; damage to the temporal lobes and Broca’s area affects speech. “The change from 1980 to 1984 was not stylistic, in my opinion,” Butterworth says. Reagan “got much worse, probably because his brain had changed in a significant way. He had been shot. He had been heavily rehearsed. Even with all that, he was making a lot of mistakes.”

Thanks in part to the literary studies, the idea of language as a biomarker for Alzheimer’s has continued to gain credibility. In 2009, the National Institute on Aging and the Alzheimer’s Association charged a group of prominent neurologists with revising the criteria for Alzheimer’s disease, previously updated in 1984. The group sought to include criteria that general healthcare providers, who might not have access to diagnostic tools like neuropsychological testing, advanced imaging, and cerebrospinal fluid measures, could use to diagnose dementia. Part of their criteria included impaired language functions in speaking, reading, and writing; a difficulty in thinking of common words while speaking; hesitations; and speech, spelling, and writing errors.

The embrace of language as a diagnostic strategy has spurred a host of diagnostic tools. Hirst has begun working on programs that use speech by real patients in real time. Based on Hirst’s work, Kathleen Fraser, a Ph.D. student, and Frank Rudzicz, an assistant professor of computer science at the University of Toronto, and a scientist at the Toronto Rehabilitation Institute, who focuses on machine learning and natural language processing in healthcare settings, have developed software that analyzes short samples of speech, 1 to 5 minutes in length, to see if an individual might be showing signs of cognitive impairment. They are looking at 400 or so variables right now, says Rudzicz, such as pitch variance, pitch emphasis, pauses or “jitters,” and other qualitative aspects of speech.
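To give a concrete sense of what one such variable might look like, here is a minimal Python sketch that estimates the fraction of a recording spent in pauses using short-time energy. It covers only one of the hundreds of variables mentioned, the thresholds are arbitrary assumptions for illustration, and it is in no way the Winterlight software itself.

```python
import numpy as np

def pause_fraction(signal: np.ndarray, sr: int, frame_ms: int = 30,
                   silence_db: float = -40.0, min_pause_ms: int = 250) -> float:
    """Estimate the fraction of a speech sample spent in pauses,
    using short-time RMS energy. Thresholds are illustrative assumptions."""
    frame = int(sr * frame_ms / 1000)
    n_frames = len(signal) // frame
    frames = signal[: n_frames * frame].reshape(n_frames, frame)
    rms = np.sqrt(np.mean(frames ** 2, axis=1)) + 1e-12
    db = 20 * np.log10(rms / rms.max())        # energy relative to the loudest frame
    silent = db < silence_db
    min_frames = max(1, min_pause_ms // frame_ms)
    pause_frames, run = 0, 0
    for is_silent in silent:
        run = run + 1 if is_silent else 0
        if run == min_frames:
            pause_frames += run                # whole run counts once it is long enough
        elif run > min_frames:
            pause_frames += 1
    return pause_frames / max(n_frames, 1)

# Synthetic example: 1 s of noisy "speech" followed by 0.5 s of near-silence.
sr = 16000
speech = 0.1 * np.random.randn(sr)
silence = 0.0001 * np.random.randn(sr // 2)
print(f"pause fraction ~ {pause_fraction(np.concatenate([speech, silence]), sr):.2f}")
```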

Rudzicz and Fraser have co-founded a startup called Winterlight Labs, and they are working on similar software to be used by clinicians. Some organizations are already piloting their technology. They hope to capture the attention of pharmaceutical companies regarding using their program to help quickly identify the best individuals to be part of clinical trials—which tends to be a very expensive and laborious process—or to help track people’s cognitive states once they’ve been clinically diagnosed. They also hope one day to be able to use language as a lens to peer into people’s general cognitive states, so that researchers might gain a clearer understanding of everything from depression to autism.

Lust and other researchers agree, however, that the idea of using language as a biomarker for Alzheimer’s and other forms of cognitive impairment is still in its early stages. “We ultimately need some kind of low-cost, easy-to-use and noninvasive tool that can identify someone who should go on for more intensive follow-up, just as a cuff on your arm can detect high blood pressure that could be indicative of heart disease,” says Heather Snyder, a molecular biologist and Senior Director of Medical and Scientific Operations at the Alzheimer’s Association. “At this point we don’t have that validated tool that tells us that something is predictive, at least to my knowledge.”

Howard Fillit, the founding executive director and chief scientific officer of the Alzheimer’s Drug Discovery Foundation, says language is a valid way to test for Alzheimer’s disease and other forms of dementia. “If someone comes in complaining of cognitive impairment, and you want to do a diagnostic evaluation and see how serious their language problem is, I can see that [such software] would be useful,” he says. But he says the language analysis would have to be performed with other tests that measure cognitive function. “Otherwise,” Fillit says, “you might end up scaring a lot of people unnecessarily.”

One of the main reasons Garrard undertook the Murdoch study in the early 2000s was he saw her novels as a kind of large, unstructured dataset of language. He loved working with datasets, he says, “to see whether they tell a story or otherwise support a hypothesis.” Now, with computer programs that analyze language for cognitive deficits on the horizon, the future of Alzheimer’s diagnosis looks both beneficial and unnerving. Few of us are prolific novelists, but most of us are leaving behind large, unstructured datasets of language, courtesy of email, social media, and the like. There are such large volumes of data in that trail, Garrard says, “that it’s going to be usable in predicting all sorts of things, possibly including dementia.”

Garrard agrees a computer program that aids medical scientists in diagnosing cognitive diseases like Alzheimer’s holds great promise. “It’s like screening people for diabetes,” he says. “You wouldn’t want to have the condition, but better to know and treat it than not.”

Adrienne Day is a Bay Area-based writer and editor. She covers issues in science, culture, and social innovation.

How can current research inform the development of new methods to assess intelligence?

The fifth blog of a six-part series on Researching Human Intelligence. Posted on June 15, 2016, on fifteeneightyfour, the blog of Cambridge University Press.

Participants:

James R. Flynn, University of Otago, New Zealand

Richard Haier, University of California, Irvine

Robert Sternberg, Cornell University, New York


James Flynn:

If we mean the kind of intelligence that IQ tests at present measure, the Wechsler tests plus Sternberg, I doubt there will be any new breakthroughs in measuring intelligence on the psychological level, at least in fully modern societies. Measurements on the level of brain physiology are dependent on IQ test results to map what areas of the brain are active in various problem-solving tasks. One suggestion should be set aside: that we use measurements of things like reaction times (how quickly a person can press a button when perceiving a light or hearing a sound) as a substitute for IQ tests. They are subject to differences in temperament between people, stop increasing far too young to capture the maturation of intelligence, and are much subject to practice effects.

I do not know enough about creating tests for pre-industrial societies to comment. However, even the use of “our” tests there can be illuminating. In the Sudan, there was a large gain on Object Assembly and Coding, subtests responsive to modernity’s emphasis on spatial skills and speedy information processing. There were moderate gains on Picture Arrangement and Picture Completion, subtests responsive to modernity’s visual culture. As for the “new ways of thinking” subtests, Block Design and Similarities, they actually showed a loss. On the “school-basics” subtests of Information, Arithmetic, and Vocabulary, there was only a slight gain. Diagnosis: no real progress toward modernity. They still have traditional formal schooling based on the Koran, and have not learned to use logic on abstractions and to classify. Their entry into the modern world is superficial: just access to radio, TV, and the internet. However, the profile of other nations (Turkey, Brazil) is more promising. If they continue to develop economically, their average IQs will equal those of the West.

Robert Sternberg:

We have developed what we believe to be better tests that measure not only the analytical aspect of intelligence but also the creative, practical, and wisdom-based ones. For example, an analytical item might ask an individual to write an essay on why her favourite book is her favourite book—or perhaps comparing the messages of two books. A creative essay might ask what the world would be like today if the American Revolution had never taken place or if computers had never been invented or if weapons were made illegal throughout the entire world. Another creative item might ask people to draw something creative or to design a scientific experiment or to write a creative story. A practical item might ask an individual how he persuaded someone else of an idea he had that the other person initially reacted to sceptically. Or it might ask the individual to say how he would solve a practical problem, such as how to move a large bed to a second floor in a house with a winding staircase. A wisdom-based item might ask a person how, in the future, she might make the world a better place; or an item might ask her to resolve a conflict between two neighbours, such as over noise issues.

We have found that, through these tests, it is possible clearly to separate out distinct analytical, creative, and practical factors. These tests increase prediction not only of academic achievement (compared with traditional analytical tests) but also of extracurricular success. Moreover, they substantially reduce ethnic/racial group differences. And students actually like to take the tests, something that cannot be said for traditional tests.

Richard Haier:

There is research oriented toward measuring intelligence based on brain speed, as measured by reaction time in solving mental test items.

There are major advances using neuroimaging to predict IQ scores from structural and functional connections in the brain. Just after I finished writing my book detailing these advances and noting that none were yet successful, a new study found a way to create a brain fingerprint based on imaging brain connections. They reported that these brain fingerprints were unique and stable within a person. Amazingly, they also found these brain fingerprints predicted IQ scores—truly a landmark study. Fortunately, I was able to add it to my book in time. One implication of this kind of research is that intelligence can be measured by brain imaging. Interestingly, a brain image now costs less than an IQ test. If a brain imaging method to assess intelligence also turns out to predict academic success (as it should), an MRI scan might replace the SATs at a much cheaper cost than an SAT prep course (and you can sleep during the MRI).


Week 1 – Can We Define Intelligence?
Week 2 – What role does neuroscience play in understanding intelligence and our capacity to learn?
Week 3 – What role do IQ tests play in measuring intelligence?
Week 4 – How are technological advances, access to instant information and media forces affecting human intelligence?
Week 5 – How can current research inform the development of new methods to assess intelligence?
Week 6 – What does the future hold in the research of intelligence? How much smarter will we be in 100 years’ time?


About the Author: James R. Flynn

James R. Flynn is the author of Does Your Family Make You Smarter? He is professor emeritus at the University of Otago, New Zealand, and a recipient of the University's Gold Medal for Distinguished Career Research. He is renowned for the 'Flynn effect', the documentation of massive IQ gains from one generation to another.

About the Author: Richard Haier

Richard J. Haier is Professor Emeritus at the University of California, Irvine, and author of The Neuroscience of Intelligence.

About the Author: Robert J. Sternberg

Robert J. Sternberg is Professor of Human Development at Cornell University, New York. Formerly, he was IBM Professor of Psychology and Education at Yale University, Connecticut. He won the 1999 James McKeen Cattell Fellow Award and the 2017 William James Fellow Award from APS. He is editor of Perspectives on Psychological Science. His main fields …

Flip a coin, roll the dice, pick a number, throw a dart -- there is no shortage of metaphors for the perceived randomness of juries. That belief that "anything can happen" is applied to some extent to the whole jury model, but nowhere more so than on the topic of civil damages. When jurors are asked to supply a number, particularly on the categories that are intangible (like pain and suffering) or somewhat speculative (like lost earnings), the eventual number is sometimes treated as a kind of crapshoot. What's more, the process that the jury goes through to get to it is seen as a kind of black box: not well understood or necessarily subject to clear influence.

What litigators might not know is that the subject of civil damages is a great example of social science research beginning to close the gap. Based on research within the last decade, we are coming closer to opening that black box in order to see a jury process that, while not fully predictable, is at least more knowable. The way jurors arrive at a number is increasingly capable of being described through the literature, and these descriptions have implications for the ad damnum requests made by plaintiffs and for the alternative amounts recommended by defendants. A study just out this year (Reyna et al., 2015), for example, provides support for a multistage model describing how jurors move from a story, to a general sense or 'gist' of damages, and then to a specific number. Their work shows that, while the use of a numerical suggestion -- an 'anchor' -- has a strong effect on the ultimate award amount, that effect is strongest when the anchor is meaningful. In other words, just about any number will have an effect on a jury, but a meaningful number -- one that provides a reference point that matters in the context of the case -- carries a stronger and more predictable effect. This post will share some of the results from the new study framed around some conclusions that civil litigators should take to heart.

No, Jurors Aren't Random

The research team, led by Cornell psychologists Valerie Reyna and Valerie Hans, begins with the observation that "jurors are often at sea about the amounts that should be awarded, with widely differing awards for cases that seem comparable." They cover a number of familiar reasons for that: limited guidance from the instructions; categories that are vague or subjective; judgments that depend on broad estimations, if not outright speculation; and a scale that begins at zero but, theoretically at least, ends in infinity. Add to that the problem that jurors often have limited numerical competence (low "numeracy") or an aversion to detailed thinking (low "need for cognition"). All of that means that there isn't, and will never be, a precise and predictable logic to a jury's damages award. But it doesn't mean that they're picking a number out of a hat either. Research is increasingly pointing toward the route jurors take in moving to the numbers.

There is a Path Jurors Follow

As detailed in the article, jury awards will vary on the same essential case facts, but a number of studies have found a strong "ordinal" relationship between injury severity and award amounts: More extreme injuries lead to higher awards, and vice versa. Increasing knowledge of the process has led researchers to describe a jury's decision in steps, both to better understand it, and also to underscore that some steps are more predictable than others. The following model, for example, is drawn from Hans and Reyna's 2011 work (though I've taken the liberty to make some of the language less academic):

[Slide: the steps jurors follow in moving from the case story to a damages number]

That breakdown of steps is backed up by the research reviewed in the article, but it isn't just useful for scholars. Watch mock trial deliberations and you are likely to see jurors moving through that general sequence. Knowing that jurors are going to implicitly or explicitly settle on a 'gist' (large, small, or in between) before translating that into a specific number is also helpful to litigators: it is a reminder to speak to the gist as well as to the ultimate number.

So Use an Anchor, but Make it Meaningful

The model also points out the advantages of giving jurors guidance on a number. Research supports the view that it is a good idea to try to anchor jurors' awards by providing a number. Suggesting a higher number generally leads to a higher award, and vice versa. Reyna and colleagues found that even the mention of an arbitrary dollar amount (the cost of courthouse repairs) influences the size of awards. But the central finding of the study is that it helps even more if the dollar amount isn't arbitrary. "Providing meaningful reference points for award amounts, as opposed to only providing arbitrary anchors," the team concludes, "had a larger and more consistent effect on judgments." Not only are the ultimate awards closer to the meaningful anchor, but they are also more predictable, being more tightly clustered around the anchor when that anchor is meaningful and not arbitrary.

And Here is what "Meaningful" Might Mean

Of course, the notion of what is "meaningful" might carry just as much vagueness as the damages category itself, and the research team is not fully explicit on what makes a number meaningful in a trial context. That might be a fitting question for the next study, but in the meantime, the team gives at least some guidance. To Reyna et al., recommended amounts are "meaningful in the sense that their magnitude is understood as appropriate in the context of that case." In the study, the "meaningful" anchor was one that expressed a pain and suffering amount as being either higher or lower than one year's income. That's not a perfect parallel -- one is payment for work, the other is recompense for suffering -- but it does reference something that the jury is used to thinking about and using as a rough way of valuing time. By nature, meaningful anchors will vary from case to case, but there are a few general numbers that could be used as a reference point, like median annual salary; daily, weekly, or monthly profits; or remediation costs.

Of course, those are well-known devices that attorneys will regularly apply. Still, it is good to know that, at a preliminary level at least, they have the social science stamp of approval. Those kinds of anchors and reference points work because they give jurors a way to get a gist of the claimed damages, and a way to bring abstract numbers into the jurors' own mental universe.

Copyright Holland & Hart LLP 1995-2015.

Steve and Wendy

Wendy M. Williams is a professor in the department of human development at Cornell University. She founded and directs the Cornell Institute for Women in Science. Stephen J. Ceci is the Helen Carr professor of developmental psychology at Cornell University. They wrote "The Mathematics of Sex" and edited "Why Aren't More Women in Science?"

Earlier today, struggling with an armful of files, a large computer, and a 10-pound mega-purse, one of us got a steel door in the face when an out-to-lunch undergraduate slammed it. So are Cornell students vacant-minded budding sociopaths? No; nor are most male scientists prone to disparaging talented women working in their labs.

Tim Hunt made some outrageous statements, but he speaks for a vanishing minority — as is shown by the national data on women in science, which reveal sustained progress. In large-scale analyses with economists Donna Ginther and Shulamit Kahn, we showed the academic landscape has changed rapidly, with women and men treated comparably in most domains. Some differences exist, usually benefiting men when they occur, but they are exceptions, not the rule.

Generally, female assistant and associate professors earn as much as men, are tenured and promoted at comparable rates, persist at their jobs equally, and express equivalent job satisfaction (over 85 percent of women and men rate their satisfaction as “somewhat to very satisfied”). And, importantly, women are hired at higher rates than men.

In 1971, women were less than 1 percent of professors in academic engineering. Today women represent roughly 25 percent of assistant professors, with similar growth in all traditional male domains — physics, chemistry, geosciences, mathematics/computer science and economics. Women in 1973 comprised 15 percent or less of assistant professors in these fields whereas today they constitute 20 percent to 40 percent.

Women prefer not to major in these fields in college (choosing instead life sciences, premed, animal science, social science or law) and women do not apply as often as men for professorial posts. But when female Ph.D.’s apply for tenure-track jobs they are offered these posts at a higher rate than male competitors. This is not obvious because the majority of both men and women are rejected when they apply for professorial positions. But women are usually hired over men.

We recently reported results of five national experiments, demonstrating that 872 faculty members employed at 371 universities and colleges strongly preferred, 2 to 1, to hire a female applicant over an identically qualified man. Even when asked to evaluate just one applicant, faculty rated the woman as stronger. We found this pro-female hiring preference in all four fields we studied and it was just as true of women faculty as men faculty.

Steven Robertson, a developmental psychologist and Professor of Human Development at Cornell University, worked with students from the Division of Nutritional Sciences in the College of Human Ecology on a new approach for assessing the effects of nutrition on infant recognition memory through the use of electroencephalographic (EEG) imaging.

By Jann Ingmire
Reprinted from Futurity.org

Katherine Kinzler, Professor of Human Development

Young children who hear more than one language spoken at home become better communicators, a new study finds. Effective communication requires the ability to take others’ perspectives.

Researchers discovered that children from multilingual environments are better at interpreting a speaker’s meaning than children who are exposed only to their native tongue.

The most novel finding is that the children don’t even have to be bilingual themselves—it’s the exposure to more than one language that is the key for building effective social communication skills.

Previous studies have examined the effects of being bilingual on cognitive development. This study, published online in Psychological Science, is the first to demonstrate the social benefits of just being exposed to multiple languages.

“Children in multilingual environments have extensive social practice in monitoring who speaks what to whom, and observing the social patterns and allegiances that are formed based on language usage,” explains Katherine Kinzler, associate professor of psychology at the University of Chicago. [Over the summer, Kinzler joined the faculty of the Department of Human Development.]

“These early socio-linguistic experiences could hone children’s skills at taking other people’s perspectives and provide them tools for effective communication.”

Kids from 3 backgrounds

Study coauthor Boaz Keysar, professor of psychology, says the study is part of a bigger research program that attempts to explain how humans learn to communicate. “Children are really good at acquiring language. They master the vocabulary and the syntax of the language, but they need more tools to be effective communicators,” says Keysar. “A lot of communication is about perspective-taking, which is what our study measures.”

Keysar, Kinzler, and their coauthors, doctoral students in psychology Samantha Fan and Zoe Liberman, had 72 4- to 6-year-old children participate in a social communication task. The children were from one of three language backgrounds: monolinguals (children who heard and spoke only English and had little experience with other languages); exposures (children who primarily heard and spoke English, but who had some regular exposure to speakers of another language); and bilinguals (children who were exposed to two languages on a regular basis and were able to speak and understand both languages). There were 24 children in each group.

Each child who participated sat on one side of a table across from an adult and played a communication game that required moving objects in a grid. The child was able to see all of the objects, but the adult on the other side of the grid had some squares blocked and could not see all the objects. To make sure that children understood that the adult could not see everything, the child first played the game from the adult’s side.

For the critical test, the adult would ask the child to move an object in the grid. For example, she would say, “I see a small car, could you move the small car?” The child could see three cars: small, medium, and large. The adult, however, could only see two cars: the medium and the large ones. To correctly interpret the adult’s intended meaning, the child would have to take into account that the adult could not see the smallest car, and move the one that the adult actually intended—the medium car.

Picking up on perspective

The monolingual children were not as good at understanding the adult’s intended meaning in this game, as they moved the correct object only about 50 percent of the time. But mere exposure to another language improved children’s ability to understand the adult’s perspective and select the correct objects.

The children in the exposure group selected correctly 76 percent of the time, and the bilingual group took the adult’s perspective in the game correctly 77 percent of the time.

“Language is social,” notes Fan. “Being exposed to multiple languages gives you a very different social experience, which could help children develop more effective communication skills.”

Liberman adds, “Our discovery has important policy implications; for instance, it suggests previously unrealized advantages for bilingual education.”

Some parents seem wary of second-language exposure for their young children, Kinzler comments. Yet, in addition to learning another language, their children might unintentionally be getting intensive training in perspective taking, which could make them better communicators in any language.

Felix Thoemmes

Felix Thoemmes, an assistant professor in the Department of Human Development, was part of a team of researchers that examined the effect of taking a gap year before college on student persistence in college.

Reprinted from the Academy of Finland Communications, May 12, 2015

A gap year between high school and the start of university studies does not weaken young people’s enthusiasm to study or their overall performance once their studies have commenced. On the other hand, adolescents who continue directly to university studies after upper secondary school are more resilient in their studies and more committed to their study goals. However, young people who transfer directly to university are more stressed than those who start their studies after a gap year. These research results come from the Academy of Finland’s research programme The Future of Learning, Knowledge and Skills (TULOS).

“For young people, the transition from upper secondary school to further studies is a demanding phase in life, and many adolescents are tired at the end of upper secondary school. The demanding university admission tests take place close to the matriculation examination in Finland and require diligent studying from students. For many, a gap year offers an opportunity to take a break and think about future choices while developing a positive view of the future,” says Professor Katariina Salmela-Aro, the principal investigator of the study.

The transition period from secondary education to further studies is a key phase for the development of young people. It is a phase in which adolescents ponder over important future choices regarding educational directions and career goals.

The impact of a gap year on young people’s motivation to study and their future educational path was studied for the first time in the Academy of Finland’s research programme The Future of Learning, Knowledge and Skills. The research was conducted in Finland with the help of the FinEdu longitudinal study, which followed young people for several years after upper secondary school. A corresponding study was conducted concurrently in cooperation with Australian researchers among local youth in Australia.

“In the light of our research findings, a gap year between secondary education and further studies is not harmful, especially if the young person only takes one year off. When these adolescents are compared with those who continue their studies directly after upper secondary school, those who take a gap year quickly catch up with the others in terms of study motivation and the effort they put into their studies,” says Salmela-Aro.

If young people take more than one gap year, however, they may have more difficulties coping with the studies and with study motivation. “In the transition phase, many young people are left quite alone, which may make the transition to a new study phase quite challenging.”

According to the research results, those young people who begin their further studies directly after upper secondary school are more resilient in their studies and more committed to their goals than those who take a gap year. In addition, adolescents who continue their studies immediately after upper secondary school believe in their ability to achieve their goals more than those who start their studies after a gap year. On the other hand, they find studying and aiming for study goals more stressful than the students who take a gap year.

“The research results also suggest that students who take a gap year are slightly more susceptible to dropping out of university later on than those who transfer to university directly after upper secondary school,” says Salmela-Aro.

The study has been published in Developmental Psychology:

Parker, P. D., Thoemmes, F., Duineveld, J. J., & Salmela-Aro, K. (2015). I wish I had (not) taken a gap-year? The psychological and attainment outcomes of different post-school pathways. Developmental Psychology, 51(3), 323–333. http://dx.doi.org/10.1037/a0038667

More information:

  • Professor Katariina Salmela-Aro, Cicero Learning, University of Helsinki and University of Jyväskylä, tel. +358 50 415 5283, katariina.salmela-aro(at)helsinki.fi

Academy of Finland Communications
Riitta Tirronen, Communications Manager
tel. +358 295 335 118
firstname.lastname(at)aka.fi

Reprinted from the Association for Psychological Science's journal, Observer, February 2015

A high-quality journal of juried review articles on issues of broad social importance is needed now more than ever. Psychological science is directly relevant to the most pressing social, economic, and health problems of our day, yet is vastly underutilized. To be sure, Psychological Science in the Public Interest (PSPI) has increased the uptake of behavioral research in policy and practice, but so much more potential exists. Building on the success of prior editors, I want to propel the scientific and practical influence of behavioral research forward.

This journal should influence — and be influenced by — the latest scientific theories as well as speak to the mysteries of human conflict, motivation, achievement, learning, feelings, disorders, and decision making.

Why theory? We need evidence-based theory in order to understand how to apply what we learn about human behavior. Theory explains and predicts behavior, so that it is possible to know what the “active ingredient” is when interventions change behavior. Theory also explains and predicts who will benefit from specific practices and policies. Therefore, I will emphasize causal mechanisms when appropriate, with a view to understanding how to generalize results of research to policy and practice. There is no reason why PSPI cannot be a cutting-edge theoretical and translational journal, and its audience should encompass scientists, practitioners, and policy makers.

Another important role of PSPI is to reconcile different viewpoints from researchers across disciplines. Scholarship means taking account of all of the relevant prior evidence, not just evidence produced by those with similar worldviews. Psychology as a cumulative science, in which current work builds on prior findings and ideas, is crucial for scientific and social progress. I have had the opportunity to interact with scholars from many different disciplines, and I will draw on those experiences to build bridges between psychology and other disciplines.

PSPI connects members of the Association for Psychological Science (APS) to members of the public — including policy makers. It should also serve as the go-to source for behavioral scientists from different disciplines because it provides the most rigorous evidence and the most exciting ideas about the most important issues.

About Valerie F. Reyna

Incoming PSPI Editor Valerie F. Reyna is a professor of human development at Cornell University, where she is also director of the Human Neuroscience Institute, codirector of the Cornell University Magnetic Resonance Imaging Facility, and codirector of the Center for Behavioral Economics and Decision Research. Her research integrates brain and behavioral approaches to understand and improve judgment, decision making, and memory across the lifespan. Her recent work has focused on the neuroscience of risky decision making and its implications for health and well-being, especially in adolescents; applications of cognitive models and artificial intelligence for improving understanding of genetics (e.g., in breast cancer); and medical and legal decision making (e.g., about jury awards, medication decisions, and adolescent culpability).

In addition to being an APS Fellow, Reyna is a fellow of the Society of Experimental Psychologists, the American Association for the Advancement of Science, and several divisions of the American Psychological Association, including the Divisions of Experimental Psychology, Developmental Psychology, Educational Psychology, and Health Psychology. She has been a Visiting Professor at the Mayo Clinic, a permanent member of study sections of the National Institutes of Health, and a member of advisory panels for the National Science Foundation, the MacArthur Foundation, and the National Academy of Sciences. She has also served as president of the Society for Judgment and Decision Making.

Reyna helped create a new research agency in the US Department of Education, where she oversaw grant policies and programs. Her service also has included leadership positions in organizations dedicated to creating equal opportunities for minorities and women, and on national executive and advisory boards of centers and grants with similar goals, such as the Arizona Hispanic Center of Excellence, National Center of Excellence in Women’s Health, and Women in Cognitive Science.

2015 Psychological Science in the Public Interest Editorial/Advisory Board

 

APS Past President Mahzarin R. Banaji, Harvard University
Past APS Board Member Stephen J. Ceci, Cornell University
APS William James Fellow Uta Frith, University College London, United Kingdom
APS Past President Morton Ann Gernsbacher, University of Wisconsin–Madison
APS Fellow John B. Jemmott, III, University of Pennsylvania
APS William James Fellow Daniel Kahneman, Princeton University
APS Past President Elizabeth F. Loftus, University of California, Irvine
APS Fellow Marcus E. Raichle, Washington University in St. Louis
APS Past President Henry L. Roediger, III, Washington University in St. Louis
APS Fellow Daniel L. Schacter, Harvard University
APS William James Fellow Richard M. Shiffrin, Indiana University
APS Fellow Keith E. Stanovich, University of Toronto, Canada
APS Fellow Laurence Steinberg, Temple University
Cass R. Sunstein, Harvard University
APS Fellow Wendy M. Williams, Cornell University
APS Fellow Christopher Wolfe, Miami University

Valerie Reyna can be contacted at ReynaPSPI@cornell.edu.

By Karene Booker
Reprinted from Cornell Chronicle, December 8, 2014

Ethics book cover

Equipping social scientists for ethical challenges is the aim of a new book, “Ethical Challenges in the Brain and Behavioral Sciences: Case Studies and Commentaries” (Cambridge University Press), edited by Cornell psychologist Robert Sternberg and Susan Fiske of Princeton University. The volume’s eye-opening and cautionary tales about real-world ethical dilemmas are intended not to provide “correct” answers, but to prompt readers to reflect on how to resolve ethics problems before encountering them.

“Students learn a lot of content knowledge in graduate school, but not necessarily much about the ethical expectations of the field,” said Sternberg, professor of human development in the College of Human Ecology.

The advantage of case studies is that the lessons are more concrete and easy to apply than abstract ethical “principles,” he said. “This book provides ethical case studies in the whole range of situations that a behavioral or brain scientist might confront – in teaching, research and service.”

The volume is notable for its breadth – covering topics such as testing and grading, authorship and credit, confidentiality, data fabrication, human subjects research, personnel decisions, reviewing and editing, and conflicts of interest – and for the nearly 60 prominent scientists who took time out to share their wisdom by contributing a chapter. Each chapter includes a first-hand account of an ethical problem, how it was resolved and what the scientist would have done differently. Commentary on the greater ethical dilemmas follows each section, and the book wraps up with a model by Sternberg for thinking about ethical reasoning.

“Ethical Challenges” is intended for students, teachers and researchers in the behavioral and brain sciences. Although it is oriented toward those early in their career, senior faculty will also have a lot to learn from the case studies.

“After almost 40 years in the field, I thought I’d seen it all in terms of ethical challenges - I had no idea just how many different ones there were, and how many I have been fortunate enough not to have encountered … yet,” Sternberg concluded.

Next fall, Sternberg plans to teach a graduate-level course, Ethical Challenges in the Behavioral and Brain Sciences, based on the book.

Karene Booker is an extension support specialist in the Department of Human Development.