As humans and machines work ever more closely together, the ability to control those interactions is increasingly important, both for the efficiency of the partnership and for its safe functioning. Brain-computer interfaces (BCIs) have shown particular promise for controlling robotic devices, and a noninvasive BCI would be especially safe while opening up the potential for paralyzed patients to receive robotic support.
While BCIs have shown considerable promise, however, most require surgical implantation in the brain of the user and a huge level of medical expertise, so they have remained limited to a very small number of test cases. The holy grail is to develop a BCI that doesn't involve invasive surgery.
Cleaner interfaces
One of the challenges with noninvasive approaches is that the signal quality is often lower, which makes the outputs poorer than those of a direct brain implant. A recent study from Carnegie Mellon highlights how these shortcomings are being overcome via neural decoding technology built on EEG signals.
The paper describes a new framework that enables a robotic arm to continuously follow a cursor on a computer screen without jerky movements, an improvement that will be crucial for finer motor tasks requiring smooth actions.
The framework increases user engagement and training while also improving the spatial resolution of noninvasive neural data through EEG source imaging. The approach enhanced the learning capabilities of the BCI by around 60% across a range of tasks, while improving continuous tracking of the cursor by over 500%.
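To give a rough sense of what continuous decoding involves, here is a minimal sketch (in Python, using entirely hypothetical, simulated data and parameters) of a linear decoder that maps EEG band-power features to a smoothed 2-D cursor velocity. This is not the CMU team's actual method, which relies on EEG source imaging and user training; it only illustrates the general idea of turning noisy neural features into a continuous, non-jerky control signal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated EEG feature stream: 500 time steps of 16 band-power features.
# In a real system these would come from preprocessed EEG recordings.
n_samples, n_channels = 500, 16
true_W = rng.normal(size=(n_channels, 2))   # unknown feature-to-velocity map

X = rng.normal(size=(n_samples, n_channels))             # features
V = X @ true_W + 0.1 * rng.normal(size=(n_samples, 2))   # noisy 2-D velocities

# Fit a ridge-regression decoder: W = (X^T X + lam*I)^-1 X^T V.
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_channels), X.T @ V)

def decode(feature, prev_vel, alpha=0.2):
    """Decode one feature vector into a cursor velocity command.

    An exponential moving average blends the raw prediction with the
    previous velocity, keeping the cursor's motion continuous rather
    than jerky."""
    raw = feature @ W
    return alpha * raw + (1 - alpha) * prev_vel

# Run the decoder over a short stream of features.
vel = np.zeros(2)
for t in range(50):
    vel = decode(X[t], vel)
```

The smoothing step is the key design choice here: trading a little responsiveness for continuity is one simple way to get the fluid tracking behavior the study emphasizes.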
“Despite technical challenges using noninvasive signals, we are fully committed to bringing this safe and economic technology to people who can benefit from it,” the researchers explain. “This work represents an important step in noninvasive brain-computer interfaces, a technology which someday may become a pervasive assistive technology aiding everyone, like smartphones.”
To date the technology has been tested on 68 able-bodied subjects, who were asked to perform a number of tasks involving control of virtual devices and a robotic arm. It's fair to say that it's a very long way from market, but it's an interesting sign of the progress being made. Check out the video below to see it in action.