While artificial intelligence (AI) traditionally draws inspiration from human brain dynamics, brain learning is restricted in several significant ways compared to deep learning (DL). For instance, efficient DL architectures consist of tens of consecutive "feedforward" layers, while brain dynamics involve only a few. Deep learning architectures also stack many consecutive filter layers, which are crucial for identifying input classes.
If the input is a car, for example, the first filter identifies its wheels, the second its doors, the third its lights, and so on, until the input's class becomes clear. By contrast, brain dynamics contain just a single filter, located close to the retina. Finally, the mathematical complexity underlying DL training is clearly far beyond biological realization.
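One reason stacked filter layers can compose simple parts (wheels, doors) into whole objects (cars) is that each added layer widens the region of the input that a single output unit can see. The following is a minimal illustrative sketch of that effect, not code from the study; the kernel size and the `receptive_field` helper are assumptions for illustration:

```python
import numpy as np

def conv1d(x, k):
    """Valid-mode 1-D cross-correlation of signal x with kernel k."""
    n = len(k)
    return np.array([np.dot(x[i:i + n], k) for i in range(len(x) - n + 1)])

def receptive_field(depth, kernel=3):
    """Input span seen by one output unit after `depth` stacked filter layers."""
    return depth * (kernel - 1) + 1

# Each extra filter layer widens the input region a single output depends on,
# which is how deep stacks build composite features from local ones.
print(receptive_field(1))   # a single filter sees only a 3-wide local patch
print(receptive_field(10))  # ten stacked filters see a 21-wide context
```

A single filter close to the retina, by contrast, is limited to the local patch its kernel covers, which is part of what makes the brain's success with so few layers surprising.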
Nevertheless, the human brain remains much better than deep learning structures at performing a wide variety of tasks. Now, a team of researchers led by Bar-Ilan University in Israel has investigated why this is the case, and proposed new methods of building AI systems inspired by brain dynamics.
“We’ve shown that efficient learning on an artificial tree architecture, where each weight has a single route to an output unit, can achieve better classification success rates than previously achieved by DL architectures consisting of more layers and filters. This finding paves the way for efficient, biologically-inspired new AI hardware and algorithms,” explained study senior author Ido Kanter, an expert in Physics and Computer Science at Bar-Ilan.
“Highly pruned tree architectures represent a step toward a plausible biological realization of efficient dendritic tree learning by a single or several neurons, with reduced complexity and energy consumption, and biological realization of backpropagation mechanisms, which is currently the central technique in AI,” added lead author Yuval Meir, a PhD student at the same university.
The efficient implementation of these ideas requires a new type of hardware that is different from current and emerging GPUs, which are better fitted to current DL approaches. Further research is needed to clarify how to build and program such machines.
The study is published in the Nature Portfolio journal Scientific Reports.