Machine learning potentially enables researchers to detect drug effects that would be missed entirely by conventional statistical tests.
Machine learning could improve our ability to determine whether a new drug works in the brain, finds a new UCL study published in Brain. The technique took into account the presence or absence of damage across the entire brain, treating each stroke as a complex "fingerprint" described by a multitude of variables, potentially enabling researchers to detect drug effects that would be missed entirely by conventional statistical tests.
‘Machine learning could be invaluable to medical science, especially when the system under study is highly complex.’
"Current statistical models are too simple. They fail to capture complex biological variations across people, discarding them as mere noise. We suspected this could partly explain why so many drug trials work in simple animals but fail in the complex brains of humans. If so, machine learning capable of modelling the human brain in its full complexity may uncover treatment effects that would otherwise be missed," said the study's lead author, Dr Parashkev Nachev (UCL Institute of Neurology). To test the concept, the research team looked at large-scale data from patients with stroke, extracting the complex anatomical pattern of brain damage caused by the stroke in each patient, creating in the process the largest collection of anatomically registered images of stroke ever assembled. As an index of the impact of stroke, they used gaze direction, objectively measured from the eyes as seen on head CT scans upon hospital admission, and from MRI scans typically done 1-3 days later.
They then simulated a large-scale meta-analysis of a set of hypothetical drugs, to see whether treatment effects of different magnitudes that conventional statistical analysis would have missed could be identified with machine learning. For example, given a drug treatment that shrinks a brain lesion by 70%, they tested for a significant effect using conventional (low-dimensional) statistical tests as well as high-dimensional machine learning methods.
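The mechanics of such a simulation can be sketched in a few lines of Python. The sketch below is purely illustrative and is not the study's actual pipeline: the toy 16x16 "brain", the sample size, the lesion generator, and the idea of modelling a drug effect as deleting a random fraction of damaged voxels are all assumptions made here for the example, and a simple two-sample t-test on lesion volume stands in for the conventional low-dimensional analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

GRID = 16          # toy "brain" of 16 x 16 voxels (stand-in for a full 3-D image)
N_PER_ARM = 100    # patients per trial arm (arbitrary for illustration)

def random_lesion(rng, grid=GRID):
    """Simulate a roughly circular lesion of random size and location as a binary map."""
    cx, cy = rng.uniform(3, grid - 3, size=2)
    radius = rng.uniform(1.5, 5.0)
    y, x = np.mgrid[0:grid, 0:grid]
    return ((x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2).astype(float)

def shrink(lesion, fraction, rng):
    """Hypothetical drug effect: remove `fraction` of the damaged voxels at random."""
    damaged = np.flatnonzero(lesion)
    removed = rng.choice(damaged, size=int(round(fraction * damaged.size)), replace=False)
    out = lesion.copy().ravel()
    out[removed] = 0.0
    return out.reshape(lesion.shape)

# Simulate one hypothetical two-arm trial in which the drug shrinks lesions by 70%.
control = [random_lesion(rng) for _ in range(N_PER_ARM)]
treated = [shrink(random_lesion(rng), 0.70, rng) for _ in range(N_PER_ARM)]

# Conventional low-dimensional analysis: compare a single crude variable, lesion volume.
vol_control = np.array([l.sum() for l in control])
vol_treated = np.array([l.sum() for l in treated])
t, p = stats.ttest_ind(vol_treated, vol_control)
print(f"volume-only t-test: t = {t:.2f}, p = {p:.4f}")
```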
"Stroke trials tend to use relatively few, crude variables, such as the size of the lesion, ignoring whether the lesion is centred on a critical area or at the edge of it. Our algorithm learned the entire pattern of damage across the brain instead, employing thousands of variables at high anatomical resolution. By illuminating the complex relationship between anatomy and clinical outcome, it enabled us to detect therapeutic effects with far greater sensitivity than conventional techniques," explained the study's first author, Tianbo Xu (UCL Institute of Neurology).
The advantage of the machine learning approach was particularly strong for interventions that reduce the volume of the lesion itself. With conventional low-dimensional models, the intervention would need to shrink the lesion by 78.4% of its volume for the effect to be detected more often than not, whereas the high-dimensional model would more likely than not detect an effect when the lesion was shrunk by only 55%.
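An effect being "detected more often than not" corresponds to statistical power above 50%. As a rough illustration of how such detection thresholds can be estimated, the sketch below repeats the toy volume-only trial at several hypothetical shrinkage fractions and reports the proportion of simulated trials reaching p < 0.05. The sample size, lesion model, and resulting numbers are arbitrary assumptions and will not reproduce the 78.4% and 55% figures from the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
GRID, N_PER_ARM, N_TRIALS = 16, 15, 200   # deliberately small toy trial arms

def random_lesion(rng, grid=GRID):
    """Same toy lesion generator as in the sketches above."""
    cx, cy = rng.uniform(3, grid - 3, size=2)
    radius = rng.uniform(1.5, 5.0)
    yy, xx = np.mgrid[0:grid, 0:grid]
    return ((xx - cx) ** 2 + (yy - cy) ** 2 <= radius ** 2).astype(float)

def simulated_trial_p(shrinkage, rng):
    """Run one toy trial and return the p-value of the volume-only t-test.
    The treatment is modelled crudely as multiplying lesion volume by (1 - shrinkage)."""
    vol_control = np.array([random_lesion(rng).sum() for _ in range(N_PER_ARM)])
    vol_treated = np.array([(1 - shrinkage) * random_lesion(rng).sum()
                            for _ in range(N_PER_ARM)])
    return stats.ttest_ind(vol_treated, vol_control).pvalue

# Estimate power (share of trials with p < 0.05) across a range of effect sizes,
# and flag the shrinkage fractions detected "more often than not".
for shrinkage in (0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8):
    power = np.mean([simulated_trial_p(shrinkage, rng) < 0.05
                     for _ in range(N_TRIALS)])
    flag = "  <-- detected more often than not" if power > 0.5 else ""
    print(f"shrinkage {shrinkage:.0%}: power ~ {power:.2f}{flag}")
```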
"The real value of machine learning lies not so much in automating things we find easy to do naturally, but formalising very complex decisions. Machine learning can combine the intuitive flexibility of a clinician with the formality of the statistics that drive evidence-based medicine. Models that pull together 1000s of variables can still be rigorous and mathematically sound. We can now capture the complex relationship between anatomy and outcome with high precision," said Dr Nachev.
Source: Eurekalert