Are Neural Networks Really Similar To The Brain?
Deep learning has made tremendous progress in fields such as image recognition, natural language processing, and machine translation. Many of its techniques were originally inspired by biological systems, in particular the structure of the brain and the way it processes information. At the same time, several techniques that were genuinely biologically inspired have since been largely abandoned in favor of newer ones that have simply proven more effective.
Comparisons
One example of this is the use of sigmoid and tanh activation functions, which were once standard in deep learning. These functions were loosely modeled on the saturating firing-rate response of biological neurons, but they have largely been replaced by the rectified linear unit (ReLU). The ReLU is cheaper to compute, and because its gradient does not saturate for positive inputs it avoids much of the vanishing-gradient problem, which lets deep networks train faster and often reach better accuracy than with sigmoid or tanh.
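To make the comparison concrete, here is a minimal NumPy sketch of the three activations; the values printed are purely illustrative.

```python
import numpy as np

def sigmoid(x):
    # Saturating, "biologically inspired" squashing to (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    # Saturating squashing to (-1, 1)
    return np.tanh(x)

def relu(x):
    # Rectified linear unit: zero for negative inputs, identity otherwise
    return np.maximum(0.0, x)

x = np.linspace(-5, 5, 11)
print(sigmoid(x))  # gradient vanishes at both extremes
print(tanh(x))     # same saturation problem, centered at 0
print(relu(x))     # gradient is exactly 1 for every positive input
```

The saturation is the key difference: for large positive or negative inputs the slopes of sigmoid and tanh shrink toward zero, while the ReLU keeps a constant slope of 1 on the positive side.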
Another example is the use of spiking neural networks (SNNs), which were inspired by the way neurons in the brain communicate with each other through action potentials or “spikes.” While SNNs have the potential to be more energy efficient than traditional artificial neural networks (ANNs), they have not yet been able to outperform ANNs on a wide range of tasks.
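To show what "spiking" means in practice, below is a minimal sketch of a leaky integrate-and-fire neuron, the simplest SNN building block. The function name and all parameter values here are my own illustrative choices, not taken from any particular SNN library.

```python
import numpy as np

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: the membrane potential leaks toward
    v_rest, integrates the input, and emits a spike when it crosses v_thresh."""
    v = v_rest
    spike_times = []
    for t, current in enumerate(input_current):
        # Discretized leaky integration of the membrane potential
        v += dt / tau * (v_rest - v) + current
        if v >= v_thresh:
            spike_times.append(t)  # record the spike
            v = v_reset            # reset after firing
    return spike_times

# A constant drive produces a regular spike train
print(simulate_lif(np.full(100, 0.08)))
```

Information is carried by the timing of these discrete spikes rather than by continuous activation values, which is what makes the hardware implementations potentially so energy efficient.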
Hebbian learning, based on the idea that neurons that fire together wire together, has also been largely abandoned in favor of backpropagation, the technique used to train virtually all modern neural networks. Backpropagation computes the gradient of a loss function with respect to every weight in the network, and gradient descent then uses those gradients to update the weights so as to minimize the error between the predicted output and the desired output.
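The contrast is easiest to see side by side. Here is a minimal sketch for a single linear neuron (the setup and learning rates are illustrative): the Hebbian update depends only on the correlation between input and output, while the gradient update is driven by the error against a supervised target.

```python
import numpy as np

def hebbian_update(w, x, lr=0.01):
    # "Fire together, wire together": strengthen each weight in proportion
    # to the product of pre-synaptic input and post-synaptic output.
    y = w @ x
    return w + lr * y * x

def gradient_update(w, x, target, lr=0.01):
    # Supervised: move the weights down the gradient of the squared error
    # between the prediction and the desired output.
    y = w @ x
    error = y - target
    return w - lr * error * x

x = np.array([0.5, -1.0, 2.0])
w = np.array([0.1, 0.1, 0.1])
print(hebbian_update(w, x))        # no notion of a "correct" answer
print(gradient_update(w, x, 1.0))  # pulled toward the target output
```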
There are also a number of techniques that have durably outperformed their biologically inspired counterparts without having any clear biological analogue themselves. One example is dropout, a regularization technique that has been shown to improve the generalization of deep learning models. Dropout works by randomly setting a portion of the neurons in a neural network to zero during training, which helps to prevent overfitting. While there is some evidence that the brain may use a similar mechanism to prevent overfitting, the exact mechanism is not well understood.
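As a concrete illustration, here is a minimal sketch of inverted dropout, the variant used in most modern frameworks; the function itself is my own illustrative version, not any framework's API.

```python
import numpy as np

def dropout(activations, p_drop=0.5, training=True):
    """Inverted dropout: randomly zero a fraction p_drop of the units during
    training and rescale the survivors so the expected activation stays the
    same; at test time the layer is a no-op."""
    if not training or p_drop == 0.0:
        return activations
    keep_prob = 1.0 - p_drop
    mask = np.random.rand(*activations.shape) < keep_prob
    return activations * mask / keep_prob

h = np.ones((2, 8))            # a toy hidden-layer activation
print(dropout(h, p_drop=0.5))  # roughly half the units zeroed, survivors scaled to 2.0
```

Because no unit can rely on any particular other unit being present, the network is pushed toward more redundant, and therefore more robust, representations.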
Another example is the use of multihead attention, a technique that has been used…