Using machine learning to decode how the brain works

Despite considerable advances in our understanding of the brain, our knowledge remains patchy, not least when it comes to how the brain translates its intentions into bodily movement.

A recent study from Northwestern University, in Chicago, uses machine learning to shed some light on the matter. Central to the project are the patterns of voltage spikes that occur as information travels along nerve fibers, as decoding these patterns reveals how the brain controls muscle movements.

To build that understanding, the team trained a group of monkeys to move a cursor across a screen towards a target using a computer mouse. Each monkey was fitted with devices that measured its neuronal activity, whether in the primary motor cortex, the dorsal premotor cortex, or the primary somatosensory cortex.

Decoding the brain

The decoding algorithms the team developed aimed to predict the horizontal and vertical movement of the mouse cursor during each trial purely from the recorded neural data.

The team developed a number of different algorithms to test, ranging from standard statistical decoders, such as the Wiener filter, to machine-learning approaches, such as the long short-term memory (LSTM) network.
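To make the two families of decoder concrete, here is a minimal Python sketch of the general approach, fitting both a Wiener filter (linear regression on a sliding window of recent spike counts) and an LSTM to synthetic data. All names, sizes, and hyperparameters are illustrative assumptions, not the study's actual code.

```python
# Sketch of the two decoder families compared in the study, on fake data.
import numpy as np
from sklearn.linear_model import Ridge
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

rng = np.random.default_rng(0)
n_bins, n_neurons, n_lags = 5000, 50, 10   # assumed dataset sizes
spikes = rng.poisson(2.0, size=(n_bins, n_neurons)).astype(float)
velocity = rng.standard_normal((n_bins, 2))  # fake x/y cursor velocity

# Each sample is a window of the last `n_lags` bins of population activity.
windows = np.stack([spikes[i:i + n_lags] for i in range(n_bins - n_lags)])
targets = velocity[n_lags:]

# (1) Wiener filter: flatten the window and fit a regularized linear map.
wiener = Ridge(alpha=1.0)
wiener.fit(windows.reshape(len(windows), -1), targets)

# (2) LSTM decoder: consume the same windows as time sequences.
lstm = Sequential([
    LSTM(100, input_shape=(n_lags, n_neurons)),
    Dense(2),  # predicted x/y velocity
])
lstm.compile(optimizer="adam", loss="mse")
lstm.fit(windows, targets, epochs=5, batch_size=128, verbose=0)
```

The key design difference is that the Wiener filter treats the window as one flat feature vector, while the LSTM processes it bin by bin and can learn nonlinear temporal structure.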

As with most machine-learning systems, the more data the decoders were fed, the better they performed, and this project was no different. When the results of the various algorithms were compared, the machine-learning approaches clearly outperformed the traditional statistical ones.

“For instance, for all of the three brain areas, a Long Short Term Memory Network decoder explained over 40% of the unexplained variance from a Wiener filter,” the authors explain. “These results suggest that modern machine-learning techniques should become the standard methodology for neural decoding.”
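The "40% of the unexplained variance" figure compares the two decoders' R² scores against the variance the Wiener filter fails to capture. A small sketch of that arithmetic, with made-up R² values, assuming this reading of the metric:

```python
# Hypothetical helper illustrating the metric quoted above: the fraction
# of the baseline's unexplained variance (1 - R^2) that the better model
# recovers. The example R^2 values are invented for illustration.
def recovered_unexplained_variance(r2_baseline: float, r2_model: float) -> float:
    return (r2_model - r2_baseline) / (1.0 - r2_baseline)

# e.g. if a Wiener filter reaches R^2 = 0.60 and an LSTM reaches 0.76,
# the LSTM explains 40% of the variance the Wiener filter missed:
print(recovered_unexplained_variance(0.60, 0.76))  # 0.4
```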

As well as performing strongly, another pleasing aspect of the work was that the decoders did so even without a vast dataset to learn from. Indeed, the team deliberately starved the algorithms of data to see just how well they coped with meager training sets. They believe the relatively compact design of their networks contributed to this efficiency.

“Our networks have on the order of 100 thousand parameters, while common networks for image classification can have on the order of 100 million parameters,” they say.
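A back-of-the-envelope calculation shows how a decoder of this kind stays in that range, using the standard LSTM parameter-count formula. The sizes below are assumptions chosen to illustrate the order of magnitude, not the authors' actual architecture:

```python
# Parameter count for one LSTM layer plus a dense readout:
# 4 * (h * (h + d) + h) weights for hidden size h and input size d
# (the four gates), plus h * outputs + outputs for the readout.
def lstm_param_count(n_inputs: int, n_hidden: int, n_outputs: int) -> int:
    gates = 4 * (n_hidden * (n_hidden + n_inputs) + n_hidden)
    readout = n_hidden * n_outputs + n_outputs
    return gates + readout

# ~100 recorded neurons, 128 hidden units, 2-D velocity output:
print(lstm_param_count(100, 128, 2))  # 117506 -> order 10^5
```

By contrast, large image-classification networks can run to hundreds of millions of parameters, which is the roughly thousand-fold gap the authors point to.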

The hope is that others will build on this work: the team has released their code publicly so that it can be deployed on other (larger) datasets and improved upon by other teams.

With decoding of this kind crucially important for areas such as artificial limbs, this is a fascinating piece of work and a vital contribution to the field.
