Whereas early AI systems were largely rule-based, more recent ones are increasingly capable of learning for themselves. This is perhaps best typified by DeepMind’s various projects, such as the algorithms that taught themselves to master a range of retro video games.
So it’s interesting to read a recently published paper in which the researchers use a neurologically inspired approach to allow a computer to train itself on whatever task it is set.
The method, which is built on reservoir computing, outperformed a number of alternatives, including existing reservoir computing algorithms and more conventional approaches to the same problems.
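For readers new to the technique: reservoir computing drives a fixed, randomly connected recurrent network with the input signal and, classically, trains only a linear readout on the resulting states. The sketch below shows that classic set-up on a toy prediction task; every size, scaling, and the task itself are illustrative choices, not the configuration used in the paper.

```python
import numpy as np

# Minimal echo state network, the classic flavour of reservoir computing:
# a fixed random recurrent network is driven by the input, and only a
# linear readout is trained. All values here are illustrative choices.

rng = np.random.default_rng(0)
n_in, n_res = 1, 200

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))     # fixed input weights
W = rng.normal(0.0, 1.0, (n_res, n_res))         # fixed recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep spectral radius < 1

def run_reservoir(u):
    """Collect reservoir states for an input sequence u of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ u_t + W @ x)
        states.append(x)
    return np.array(states)

# Toy task: predict the next sample of a sine wave.
u = np.sin(np.linspace(0, 20 * np.pi, 2000))[:, None]
X, y = run_reservoir(u[:-1]), u[1:]

# Train the linear readout by ridge regression, the standard approach;
# the paper's contribution is to train more of the system with backprop.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
print("train MSE:", np.mean((X @ W_out - y) ** 2))
```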
Self-learning machines
The authors believe that not only might this provide an effective way of tackling complex tasks, but it might also offer a pathway by which Moore’s law can be extended.
“On the one hand, over the past decade there has been remarkable progress in artificial intelligence, such as spectacular advances in image recognition, and a computer beating the human Go world champion for the first time, and this progress is largely based on the use of error backpropagation,” the authors say. “On the other hand, there is growing interest, both in academia and industry (for example, by IBM and Hewlett Packard) in analog, brain-inspired computing as a possible route to circumvent the end of Moore’s law.”
The work is important because it shows that this form of backpropagation can function effectively on the kind of hardware traditionally used in analog computing. Indeed, it could even improve the performance of such systems.
Backpropagation sits at the heart of recent advances in AI, including the triumph of AlphaGo earlier this year. It lets algorithms make thousands of small, iterative adjustments, each one reducing the error slightly, so that the system converges on ever better parameter values.
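As a concrete picture of that loop, here is a minimal hand-coded example of the technique; nothing in it comes from the paper, and every size, learning rate, and target is chosen purely for illustration.

```python
import numpy as np

# Hand-coded backpropagation: a tiny one-hidden-layer network is fitted
# to a toy curve by repeatedly nudging weights down the error gradient.

rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 64)[:, None]
y = np.sin(3 * x)                      # toy target curve

W1, b1 = rng.normal(0, 0.5, (1, 16)), np.zeros(16)
W2, b2 = rng.normal(0, 0.5, (16, 1)), np.zeros(1)
lr = 0.1

for step in range(2000):
    # forward pass
    h = np.tanh(x @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y
    loss = np.mean(err ** 2)

    # backward pass: the chain rule carries the error back to each weight
    g_pred = 2 * err / len(x)
    g_W2, g_b2 = h.T @ g_pred, g_pred.sum(0)
    g_h = g_pred @ W2.T
    g_pre = g_h * (1 - h ** 2)         # derivative of tanh
    g_W1, g_b1 = x.T @ g_pre, g_pre.sum(0)

    # each small step down the gradient shaves a little off the error
    W1 -= lr * g_W1; b1 -= lr * g_b1
    W2 -= lr * g_W2; b2 -= lr * g_b2

print("final MSE:", loss)
```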
The method of combining backpropagation with reservoir computing was first tested back in 2015, with this latest paper highlighting the progress made since then. Whereas the initial experiment featured a single, simple task, the latest version ups the ante with three considerably harder tasks, including speech recognition and a complex nonlinear task.
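To give a flavour of how the two fit together, the following sketch backpropagates through a simulated, differentiable reservoir so that the input weights are trained alongside the readout, rather than left random as in the classic set-up. This is only a schematic reading of the combination; the paper’s own implementation runs on physical hardware.

```python
import numpy as np

# Backpropagation through a simulated, differentiable reservoir, so the
# input weights are trained too. An illustration of the idea only.

rng = np.random.default_rng(2)
n_in, n_res, T = 1, 50, 200

W = rng.normal(0.0, 1.0, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # fixed recurrent weights
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))     # trainable input weights
W_out = rng.normal(0, 0.1, (n_res, 1))           # trainable readout
lr = 0.05

u = np.sin(np.linspace(0, 8 * np.pi, T + 1))[:, None]
inputs, targets = u[:-1], u[1:]

for step in range(500):
    # forward: unroll the reservoir, caching states for the backward pass
    xs = [np.zeros(n_res)]
    for u_t in inputs:
        xs.append(np.tanh(W_in @ u_t + W @ xs[-1]))
    X = np.array(xs[1:])
    err = X @ W_out - targets

    # backward through time: accumulate gradients for W_in and W_out
    g_W_out = (2 / T) * (X.T @ err)
    g_W_in = np.zeros_like(W_in)
    g_x = np.zeros(n_res)
    for t in reversed(range(T)):
        g_x = g_x + (2 / T) * (W_out @ err[t])   # error injected at step t
        g_pre = g_x * (1 - X[t] ** 2)            # back through tanh
        g_W_in += np.outer(g_pre, inputs[t])
        g_x = W.T @ g_pre                        # pass gradient to step t-1

    W_out -= lr * g_W_out
    W_in -= lr * g_W_in

print("final MSE:", np.mean(err ** 2))
```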
“We are trying to broaden as much as possible the range of problems to which experimental reservoir computing can be applied,” the authors say. “We are, for instance, writing up a manuscript in which we show that it can be used to generate periodic patterns and emulate chaotic systems.”
That isn’t to say the set-up doesn’t still need a good deal of work. For instance, it runs up against limits on data-processing and transfer speeds, and improvements in these areas are the researchers’ next target.
“The present experiment was implemented using a rather slow system, in which the neurons (internal variables) were processed one after the other. We are currently testing photonic systems in which the internal variables are all processed simultaneously—we call this a parallel architecture. This can provide several orders of magnitude of speed-up. Further in the future, we may revisit physical error backpropagation, but in these faster, parallel, systems,” they say.