Tag Archives: EEG

HD TODAY e-NEWS: Insights from Human Development's Research & Outreach

HD TODAY e-NEWS is a quarterly digest of cutting-edge research from the Department of Human Development, College of Human Ecology, Cornell University. Explore the HD Today e-NEWS website at http://hdtoday.human.cornell.edu/ to discover a wide range of resources.

The National Science Foundation's blog, Discovery. July 14, 2017

by Stanley Dambroski and Madeline Beal

From an outside perspective, understanding a spoken language versus a signed language seems like it might involve entirely different brain processes. One process involves your ears and the other your eyes, and scientists have long known that different parts of the brain process these different sensory inputs.

To scientists at the University of Chicago interested in the role rhythm plays in how humans understand language, the differences between these inputs provided an opportunity for experimentation. The resulting study, published in the Proceedings of the National Academy of Sciences, shows that rhythm is important for processing language whether it is spoken or signed.

Previous studies have shown the rhythm of speech changes the rhythm of neural activity involved in understanding spoken language. When humans listen to spoken language, the brain's auditory cortex activity adjusts to follow the rhythms of sentences. This phenomenon is known as entrainment.

But even after researchers identified entrainment, understanding the role of rhythm in language comprehension remained difficult. Neural activity changes when a person is listening to spoken language, but the brain also locks onto random, meaningless bursts of sound in a very similar way and at a similar frequency.

That's where the University of Chicago team saw an experimental opportunity involving sign language. While the natural rhythms in spoken language are similar to what might be considered the preferred frequency of the auditory cortex, the same is not true for sign language and the visual cortex. The rhythms of the hand movements in American Sign Language (ASL) are substantially slower than those of spoken language.

The researchers used electroencephalography (EEG) to record the brain activity of participants as they watched videos of stories told in ASL. One group was made up of participants fluent in ASL; the other was made up of non-signers. The researchers then analyzed the rhythms of activity in different regions of the participants' brains.

The brain activity rhythms in the visual cortex followed the rhythms of sign language. Importantly, the researchers observed entrainment at the low frequencies that carry meaningful information in sign language, not at the high frequencies usually seen in visual activity.
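
To make entrainment concrete, here is a minimal sketch, in Python, of one way such tracking can be quantified: compute the spectral coherence between a slow stimulus rhythm and an EEG channel and inspect the low frequencies. This is an illustration only, not the study's actual analysis pipeline; the sampling rate, the 1 Hz rhythm, and the idea of using the motion energy of the signer's hands as the stimulus signal are all assumptions made for the example.

    import numpy as np
    from scipy.signal import coherence

    fs = 250                         # assumed EEG sampling rate (Hz)
    t = np.arange(0, 60, 1.0 / fs)   # one minute of data

    # Stand-ins for real recordings: a slow stimulus rhythm (for example, the
    # motion energy of the signer's hands at roughly 1 Hz) and an EEG trace
    # that partially follows that rhythm plus noise.
    stimulus = np.sin(2 * np.pi * 1.0 * t)
    eeg = 0.5 * np.sin(2 * np.pi * 1.0 * t + 0.3) + np.random.randn(t.size)

    # Coherence runs from 0 to 1 at each frequency; values near 1 mean the
    # EEG is tracking the stimulus rhythm at that frequency.
    freqs, coh = coherence(stimulus, eeg, fs=fs, nperseg=fs * 4)

    low = (freqs > 0) & (freqs <= 4)  # the slow band where sign rhythms live
    peak = np.argmax(coh[low])
    print("Peak low-frequency coherence: %.2f at %.2f Hz"
          % (coh[low][peak], freqs[low][peak]))

In this toy example the coherence peaks near 1 Hz, the kind of slow rhythm described for sign language above.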

Daniel Casasanto

"By looking at sign, we've learned something about how the brain processes language more generally," said principal investigator Daniel Casasanto, Professor of Psychology at the University of Chicago (now Professor of Human Development at Cornell University). "We've solved a mystery we couldn't crack by studying speech alone."

While both the ASL-fluent and non-signer groups demonstrated entrainment, it was stronger in the frontal cortex for ASL-fluent participants than for non-signers. The frontal cortex supports higher-order cognitive skills. The authors postulate that frontal entrainment may be stronger in fluent signers because they are better able to predict the hand movements involved and therefore better able to entrain to the rhythms they see.

"This study highlights the importance of rhythm to processing language, even when it is visual. Studies like this are core to the National Science Foundation's Understanding the Brain Initiative, which seeks to understand the brain in action and in context," said Betty Tuller, a program manager for NSF's Perception, Action, and Cognition Program. "Knowledge of the fundamentals of how the brain processes language has the potential to improve how we educate children, treat language disorders, train military personnel, and may have implications for the study of learning and memory."

HD-Today e-Newsletter, Summer 2016 Issue

By Allison M. Hermann, Ph.D.

LRDM lab members and 4-H Career Explorations students

The Laboratory for Rational Decision Making (LRDM), led by Dr. Valerie Reyna in Human Development, welcomed 24 high school students from 18 counties across New York State for a three-day course in decision-making research called “Getting the Gist.” The students traveled to Cornell University as part of the 4-H Career Explorations Conference, which offers secondary school students the opportunity to attend courses and workshops and learn about STEM research.

James Jones-Rounds, Lab Manager of the HEP Lab

The high school students became guest LRDM lab members and learned how to turn their questions about risky decision making into experiments. They created an experiment, collected and analyzed the data, and discussed the results. The student career explorers also toured the Center for Magnetic Resonance Imaging Facility and the EEG and Psychophysics Laboratory and saw how decision research uses brain imaging technologies to examine what brain areas are activated when making risky decisions.

Dr. Reyna’s graduate students David Garavito, Alisha Meschkow, and Rebecca Helm, along with research staff member Bertrand Reyna-Brainerd, presented lectures on Dr. Reyna’s fuzzy-trace theory and research design and led interactive discussions with the visiting students about the paths that led them to the LRDM at Cornell. In addition, three undergraduate members of the lab, Tristan Ponzo (’18), Elana Molotsky (’17), and Joe DeTello (’19), delivered poster presentations on current lab research projects. Feedback from one of the career explorers captured the gist of the program: “Yes, I definitely feel like I have a better understanding of why I make the decisions I do.”

Reprinted from Research Cornell News
by Alexandra Chang

An 18-month-old boy sits on his father’s lap in a small room furnished with a child-sized chair and a short table. The boy faces a monitor. On it, a video starts to play. A woman, Psychology graduate student Kate Brunick, assembles a simple toy—she holds two bright green cups, places a plastic object inside one, brings the cups together, and closes them to form a capsule. She shakes it; it’s a makeshift rattle.

As the boy watches the video, Michael H. Goldstein, Psychology, and his graduate students observe from the B.A.B.Y. (Behavioral Analysis of Beginning Years) lab’s observation room, a space hidden behind one-way glass and filled with monitors and video controls. Once the recording is done, Psychology graduate student Melissa Elston heads into the room where the boy and father sit. She’s carrying the toy from the video clip and places it on top of the table in front of the boy.

The boy doesn’t budge. After a couple minutes of encouragement from Elston, it’s clear he’s not assembling a rattle on this visit. It’s not a failure. This is exactly what Goldstein expects.

The study is just one of the many taking place at Cornell’s infant labs, where researchers are discovering more about the nuances of infant development. It’s a crucial area of academic research and exploration, given the impact early development has on later stages of life.

How Babies Learn in Social Settings

This particular study is on a phenomenon called the "video deficit effect," in which babies from 12 to 30 months are much worse at learning from video presentations than from real-life experiences. The group studies the babies in three scenarios: one in which babies see a live presentation of putting the toy together, another with an automatic pre-recorded video, and a third in which the baby has to press a button in order to play the pre-recorded video. Their theory is that the first group will learn, the second won’t, and the third will because the experience is contingent on and immediately follows their own action.

The study falls under the lab’s research on how babies learn in social context. Most of the work Goldstein and his co-director, Jennifer Schwade, do is on how social interactions affect the acquisition of speech and language in babies and of song in songbirds. Contingency, they’ve found, is crucial to learning.

Goldstein argues that the social behavior of adults contains patterns that can guide young learners. “If you want to understand how infants learn, you’ve got to understand not only what’s in the baby’s head but what social environment the baby’s head is in,” he says.

Alongside Goldstein, Steven S. Robertson and Marianella Casasola, Human Development, run baby labs at Cornell.

How Babies Collect Information from Their Environment through Visual Foraging

Steven S. Robertson, Professor in Human Development

If, when you think of an infant lab, you imagine a baby outfitted with sensors, you’re on the right track when it comes to Robertson’s research. He examines mind–body relations in very young babies, typically three-month-olds. Specifically, he looks at the relationships between vision, motor activity, and attention during visual foraging, a major way in which infants gather information from their surroundings.

To study the dynamics of visual foraging, Robertson depends on EEG measurements and a few flashing rubber ducks. When a baby arrives at the lab, she is placed in a high chair in front of three yellow rubber ducks. The ducks are outfitted with LED lights and attached to motor-controlled rods that can move them left and right. Atop the baby’s head is an EEG cap, which measures oscillations in the activity of visual neurons. Each duck’s light flashes at a different frequency, and the oscillations in the baby’s neural activity match the frequency of the duck receiving her attention.

Through these measurements, Robertson knows when a baby is paying attention to a particular duck. A video camera records the baby, so the researchers can see how her eyes move in relation to that attention. What Robertson found, reported in a study published in the Proceedings of the National Academy of Sciences in 2012, is that attention is not always directly tied to gaze. In fact, babies redirect their attention to a new duck before actually looking at it. What’s more surprising is that a second or two before shifting to the new duck, babies actually pay more attention to the duck they don’t choose to look at.
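
The frequency-tagging logic behind this setup can be sketched in a few lines: because each duck flickers at its own rate, the duck receiving attention can be inferred from which tag frequency carries the most power in the EEG. The sketch below is only an illustration of that idea, not Robertson's analysis code; the sampling rate, the flicker frequencies, and the simulated signal are assumptions.

    import numpy as np
    from scipy.signal import welch

    fs = 500                       # assumed EEG sampling rate (Hz)
    duck_freqs = [6.0, 7.5, 10.0]  # assumed flicker frequency of each duck (Hz)
    t = np.arange(0, 10, 1.0 / fs)

    # Stand-in EEG: the infant is "attending" to the 7.5 Hz duck, so that tag
    # frequency dominates the simulated visual-cortex signal.
    eeg = 2.0 * np.sin(2 * np.pi * 7.5 * t) + np.random.randn(t.size)

    # Estimate the power spectrum and read off power at each duck's tag frequency.
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    power_at_tag = [psd[np.argmin(np.abs(freqs - f))] for f in duck_freqs]

    attended = duck_freqs[int(np.argmax(power_at_tag))]
    print("Inferred attended duck flickers at %.1f Hz" % attended)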

Robertson sees this behavior as consistent with the inhibition of return (IOR) observed in adults. In IOR, attention is suppressed toward previously inspected areas or objects in favor of new locations or objects. It would make sense for a baby to look at, and focus attention toward, a duck that it had not been paying attention to earlier.

Robertson is currently conducting further studies to test whether the behavior in infants truly is the development of IOR. “The adaptive value of this in visual exploration is that it keeps you from going to the same spot,” Robertson says. “You get to literally explore new locations in your environment and pick up new information.” And he adds that it’s especially important to study in infants because “the nature of visual input during this period has important consequences for the structural and functional development of the brain,” which happens quickly in early infancy.

Understanding Spatial Language Skills

Marianella Casasola, Professor in Human Development

Casasola agrees that looking at babies is crucial for tracing how certain skills develop. One of her main interests is in understanding the link between spatial cognition and the acquisition of spatial language—language relating to space, location, and shapes.

Spatial awareness is a core cognitive ability. It is linked to achievement in math and sciences and has broader implications for everyday life. For example, spatial cognition relates to our ability to navigate, to project how objects will look from different angles, and even to reading orientation. Casasola wants to understand how these skills develop, but she also aims to figure out how they relate to acquisition of spatial language and what sorts of experiences promote spatial skills.

For this area of research, Casasola studies a wide age range, from babies at 14 months up to toddlers at 4.5 years old. The studies vary by age. For example, younger babies watch a computer animation of two halves of a shape, say a heart, on either side of a curtain. The two halves move together and then disappear behind the curtain. The curtain then lifts to reveal the whole heart, or it could show a completely different shape, like a square. Casasola relies on infant looking time to determine how the babies perceive these expected and unexpected shapes.
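
The looking-time measure rests on a simple contrast: if infants look reliably longer at the unexpected outcome (the square) than at the expected one (the completed heart), that difference is taken as evidence they anticipated the whole shape. A minimal sketch of that comparison, with invented numbers used purely for illustration, might look like this:

    import numpy as np
    from scipy.stats import ttest_rel

    # Looking times in seconds for each infant on expected vs. unexpected
    # trials (numbers are invented purely for illustration).
    expected_s = np.array([4.1, 3.8, 5.0, 4.4, 3.9, 4.7])
    unexpected_s = np.array([6.3, 5.1, 6.8, 5.9, 5.4, 6.6])

    # Longer looking at the unexpected outcome is read as evidence that the
    # infants anticipated the completed shape.
    t_stat, p_value = ttest_rel(unexpected_s, expected_s)
    print("Mean difference: %.1f s, t = %.2f, p = %.3f"
          % ((unexpected_s - expected_s).mean(), t_stat, p_value))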

Older children are asked to put halves of foam shapes together. Casasola has also done naturalistic studies in which researchers play with kids using spatial toys: puzzles, origami, and Legos. One group receives a lot of spatial language: “Fold the paper horizontally; you’ve made a triangle.” The other group receives general language, like “do this, now fold it like this, look what you’ve made.” Casasola found that children with more exposure to spatial language are much better at naming shapes. The more spatial language a child acquires, the better the child is at accomplishing nonverbal spatial tasks. Across these studies of spatial learning, Casasola wants to determine at what ages significant advances can be made.

“No one has looked at trajectory, which is important,” she says. “It can answer questions like, how stable are spatial skills? It can also highlight when might be ultimate time periods to promote it.” Knowing this will be useful for designing effective interventions that help babies and children develop better spatial cognition.

Using the Research

Goldstein is already on his way to applying his research findings to real-world intervention. Cornell’s Bronfenbrenner Center for Translational Research has recently funded the B.A.B.Y. Lab’s pilot intervention program to aid infant language development in low socioeconomic status families.

In previous research, the lab found that the timing and the form of reactions to infant babbling are crucial for language development. For example, if a baby is babbling at a toy, it’s important to respond immediately and to engage with that toy. The baby then sees there’s a reward to vocalizing and takes the next learning step.

The work done in the infant labs has a direct public impact. “Outreach is the real key,” Goldstein says. “We’re doing work that should improve the lives of parents and infants.”

Steven Robertson, a developmental psychologist and Professor of Human Development at Cornell University, worked with students from the Division of Nutritional Sciences in the College of Human Ecology on a new approach to assessing the effects of nutrition on infant recognition memory through the use of electroencephalographic (EEG) imaging.
