
How one professor is using complex computational modelling to improve healthcare

A finely organised mesh of some 90 billion neurons interacting across 160 trillion connections, the brain has few rivals in complexity. Understanding its operation in health and disease is a problem we shall need a great deal of computing power to frame, let alone solve.

Professor Parashkev Nachev is a neurologist and neuroscientist at the UCL Queen Square Institute of Neurology and the National Hospital for Neurology and Neurosurgery in London, a major centre of its kind.

The primary focus of his work is building complex computational models of the relationship between brain damage and clinical outcome, in an effort to predict what will happen to an individual patient whose brain has been impaired in a particular way. The theory is that this will allow him to prescribe the best treatment for each patient, closely tailored to individual needs.

We recently caught up with him to discuss his research and how Novatech has helped him along the way.


Professor Parashkev Nachev

Problem-solving

“There’s a relationship between particular parts of the brain and certain functions,” Professor Nachev explains. “So it is theoretically possible to learn the relationship between damage to the brain and behavioural or cognitive outcomes.”

The most common cause of localised brain damage is a stroke, which is not only a major cause of disability but also one of the biggest killers in the Western world. The approach Nachev’s research exemplifies is applicable not only to stroke, but to any cause of brain injury.

He intends to “create a way of studying the complex relation between damage and function. The task is difficult because, although the brain is organised, its organisation is highly intricate, and can vary a great deal from one individual to another.” 

“The challenge is asymptotic in the sense that we can never have a perfect system,” he adds. “We can only have a system that approaches greater and greater fidelity as it becomes more refined. But what we can know with certainty is that it will never be simple.”

Complex modelling

Nachev explains that there are three things that complex neurological modelling always needs. First, data that is representative of the population as a whole. Second, the right kind of algorithms. Third – and this is where Novatech comes in – a powerful and very particular kind of computer.

“The kind of GPUs we require are not the kind optimal for gaming: we need more power, and especially much more memory,” says Nachev. “Not only do we need these high-memory, high-performance, highly customisable GPUs, we also need to have them installed within systems that enable us to operate in parallel across processing units.”
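To make the memory point concrete, the short sketch below simply lists the GPUs in a machine and how much memory each one offers. It is purely illustrative – the article does not describe the team’s actual tooling – and assumes Python with PyTorch installed.

```python
# Illustrative only: enumerate the GPUs visible to PyTorch and report
# how much memory each card offers. Assumes the torch package is installed.
import torch

if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        print(f"GPU {i}: {props.name}, "
              f"{props.total_memory / 1024**3:.1f} GiB of memory")
else:
    print("No CUDA-capable GPU detected")
```

A typical gaming card of the time would report somewhere around 8 to 11 GiB here; the data-centre cards Nachev describes offer considerably more, which is what allows very large models to fit in memory at all.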

Practical applications

While it takes time for complex computational modelling systems to enter clinical practice—the evaluative and regulatory processes are slow—Nachev and his team can use powerful GPUs to optimise the operations of existing systems, helping make the hospital run better.

“We have developed a fairly sophisticated tool that predicts a patient’s likelihood of attendance, enabling us to calibrate reminders to that risk. This reduces the chance of patients accidentally missing appointments, so that investigations can be completed in a timely fashion.”

The approach is now used in the NHS, and shows that a complex piece of computation can solve a fundamental problem with substantial impact on care.
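The article does not describe how the tool works internally, but the general pattern – predict a probability of non-attendance, then scale the reminder effort to that risk – can be sketched roughly as follows. Every detail here (the features, the model, the threshold, and the use of scikit-learn) is an illustrative assumption rather than a description of the NHS system.

```python
# A rough, hypothetical sketch of risk-calibrated appointment reminders.
# The features, toy data and 0.3 threshold are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy history: [days_notice, previous_no_shows, travel_time_minutes]
X_history = np.array([
    [30, 0, 15],
    [ 2, 3, 60],
    [14, 1, 25],
    [ 1, 2, 45],
])
y_missed = np.array([0, 1, 0, 1])  # 1 = patient missed the appointment

model = LogisticRegression().fit(X_history, y_missed)

def reminder_plan(features, threshold=0.3):
    """Scale the reminder effort to the predicted no-show risk."""
    risk = model.predict_proba([features])[0, 1]
    if risk >= threshold:
        return f"high risk ({risk:.2f}): phone call plus SMS reminder"
    return f"low risk ({risk:.2f}): single SMS reminder"

print(reminder_plan([3, 2, 50]))
```

The real system would be trained on far richer data and validated carefully, but the principle is the same: spend the reminder effort where the predicted risk of a missed appointment is highest.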

While he is pleased with progress, Nachev remarks that it’s “impossible to build anything perfect.” The goal, ultimately, is to create a framework that seeks the best available or achievable performance.

“That framework is really about drawing the maximum intelligence from the data we have – symptomatic features, clinical characteristics, blood results, the appearance of the brain scans, and more.”

Nachev and his team can then use that intelligence to help guide how to make patients better, both individually and by optimising the operation of the hospital as a whole. But every patient is at once unique and similar to other patients in important respects, and that tension inevitably leads to complexity: complexity that requires some very powerful processors.

It’s all in the GPU

Such complex models require some very capable machines, and that’s where Novatech comes in. These are not the machines you would use to run even the most demanding games or simulations; they offer power that won’t be possible in ‘mainstream’ systems for years.

“We’ve been using Nvidia's DGX-1 as our primary platform. This gives us eight V100 cards per machine with 32 gigabytes of RAM each. What's distinctive about these machines is the ability to operate seamlessly across all cards, rendering tractable models of a size that would otherwise be infeasible.”
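The quote doesn’t say exactly how the team spreads work across the eight V100s, but a common way to operate across every card in a single machine is data-parallel training. The sketch below, using PyTorch and a toy model, illustrates the general technique only; it is an assumption about approach, not the team’s actual code.

```python
# Illustrative only: replicate a toy model across all visible GPUs and
# split each batch between them (data parallelism). Assumes PyTorch.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 1))
if torch.cuda.device_count() > 1:
    # Copies the model onto each card and scatters every batch across them.
    model = nn.DataParallel(model)
model = model.to(device)

optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Toy stand-in data for a training step repeated a few times.
inputs = torch.randn(64, 512, device=device)
targets = torch.randn(64, 1, device=device)

for step in range(10):
    optimiser.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimiser.step()
```

With data parallelism each card holds a full copy of the model and works on a slice of every batch; models too large to fit on a single 32 GB card would instead have to be split across cards, which is where the high per-card memory Nachev mentions really pays off.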

Of course, as the system matures, the focus will shift to other bottlenecks – storage being an obvious one. Going forward, however, the real challenge for Nachev and his team will be working out how to build systems that can be widely deployed in hospitals that don’t have the same monumental computing power they do.

The task is magnified by the need to run these models on more conventional hardware, the kind you could realistically expect to see deployed in ordinary hospitals.

Is the future in the cloud?

Because the field is evolving so rapidly, what matters for Nachev right now is to focus on the architectural problems: by the time his team’s products are being deployed in two or three years, the hardware landscape will look very different from how it does today. Indeed, he expects more hospitals to be operating in the cloud.

“One important consideration specific to healthcare is the relative difficulty of relying on cloud-based systems. For now, on-premises hardware provides the best combination of power, cost-effectiveness and security. But as cloud matures and healthcare institutions become more comfortable with cloud computing in the context of highly sensitive data, the situation may change.”

For the next few years, however, there will still be a requirement for on-premises computing, which means that hardware vendors will have a very important role to play – particularly given the security implications related to the cloud, which are amplified when one is dealing with medical information.

The sky's the limit

Thanks again to Parashkev for taking the time to speak to us. With the power of Novatech GPU Servers backing up his research, we can only dream of what advances the Professor and his team will make in the years ahead. If you want to learn more about the man and his work, you can check out the links below.

Institutional Research Information Service

The Conversation - Parashkev Nachev

Researchers devise AI-based method of detecting response to MS treatment

Why not also find out more about our workstations for deep learning today?

 

From gamers to data scientists, we help organisations and individuals who want the best IT hardware to run the applications that are critical to them, by supplying purpose-built, fully supported IT hardware solutions.

If you have a project you’d like to discuss, drop us a message using the form below, or call our team on 02392 322 500.

Posted in Case Studies


Published on 17 Feb 2021

Last updated on 17 Feb 2021
