A new study of brain imaging results reiterates the need to focus on gathering quality information.
If it can happen to a neuroscientist, it can happen to just about anyone.
Of course, according to a new study from the University of California San Francisco, neuroscientists might be particularly prone to the problem: bad data. Researchers found that oft-used sampling techniques for brain imaging studies, when applied to a data set of pediatric MRI images, “significantly distorted findings” related to the development of various regions of the brain.
The issue stemmed from a failure to follow a “basic principle,” according to the school: a study sample should reflect the larger population in question. When it doesn’t, biased data produce skewed results, researchers noted.
“Much of what we know about how the brain develops comes from samples that don’t look like the broader US at all,” Kaja LeWinn, ScD, an epidemiologist and assistant psychiatry professor who led the study, said in a statement.
For some time, cognitive science has appeared to suffer from this bug more than other disciplines, according to the report. Too often, the field homes in on so-called WEIRD participant groups: Western, educated, industrialized, rich, and democratic. Similar sampling practices, LeWinn said, would be unacceptable in attempts to understand health conditions like cardiovascular disease.
But her study was the first to examine how sampling practices affect neuroimaging research.
Its findings call into question the validity of studies that provided the bedrock of scientific knowledge on brain development and mental health disorders, like depression and autism, LeWinn said.
The authors used a public MRI data set of 1,162 kids to reach their conclusions. The demographics of the group didn’t match those of the country, researchers said; household education and income levels were higher than is typical. So the team statistically reweighted the data to better represent the US population and gauged how that changed the results.
The weighted data showed that mature brain development occurred earlier, at 9.7 years, than the unweighted data suggested, which put the figure at 12.1 years, according to the study. Researchers also found that the mismatch might have affected the order in which regions of the brain mature.
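To illustrate the general idea behind that kind of reweighting, here is a minimal sketch of post-stratification weighting: over-represented groups in a sample are down-weighted and under-represented groups are up-weighted so the sample mirrors known population shares. This is only an illustration of the technique, not the study’s actual procedure; the strata, proportions, and ages below are hypothetical.

```python
# Minimal sketch of post-stratification weighting (illustrative only).
import pandas as pd

# Hypothetical sample: each child has a household-income stratum and an
# age (in years) at which a brain-maturation milestone was observed.
sample = pd.DataFrame({
    "income_stratum": ["high", "high", "high", "mid", "low"],
    "maturation_age": [12.5, 12.0, 11.8, 10.5, 9.0],
})

# Hypothetical population shares for the same strata (e.g., from census data).
population_share = {"high": 0.25, "mid": 0.45, "low": 0.30}

# Weight = population share / sample share, so over-represented groups count less.
sample_share = sample["income_stratum"].value_counts(normalize=True)
sample["weight"] = sample["income_stratum"].map(
    lambda s: population_share[s] / sample_share[s]
)

# Compare the unweighted and weighted estimates of the milestone age.
unweighted = sample["maturation_age"].mean()
weighted = (sample["maturation_age"] * sample["weight"]).sum() / sample["weight"].sum()
print(f"unweighted: {unweighted:.1f} years, weighted: {weighted:.1f} years")
```

In this toy example, down-weighting the over-represented high-income stratum pulls the estimated milestone age lower, the same direction of shift the study reported after reweighting its sample.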
“The differences we found are pretty dramatic,” LeWinn said. “These findings give us pause because they raise questions about existing knowledge of brain development in children, which is based almost entirely on non-representative samples.”
She said the size of the data set enabled her team to weight the data and ultimately perform the study. Most cognitive neuroscience studies are too small for such a method to be effective, she said.
The lesson for others in health research? At the very least, they must be upfront about their studies’ demographics and generalize findings only with caution, according to the researchers. Further, LeWinn said, larger brain imaging studies should consider aiming to represent the population, a goal that remains too expensive for many projects.
Neuroscientists have encountered trouble when trying to replicate results, according to the announcement. “But so far,” the lead author added, “explanations for that have been focused on the way that brain imaging data are processed and analyzed, not on sample composition.” That’s significant because two studies drawn from groups with different characteristics shouldn’t be expected to produce similar results, she said.