Neuromorphic Computation
INTRODUCTION
The AI field has grown tremendously on current CPUs, GPUs, and supercomputers. However, present technology falls short in machine reasoning, transfer learning, physical scaling, and more. As a result, we need serious and fundamental changes in the way we structure a computer. One inspiring approach is to make physical computers a little more like the human brain. Neuromorphic chips promise to overcome this challenge through groundbreaking research on memristors and artificial synapses.
Neuromorphic computing involves designing and engineering computer chips that use the same physics and computation as our nervous system. This is different from an artificial neural network, which merely mimics the logic of how the human brain thinks.
Neuromorphic computing simulates the analog behaviour of the human brain. Traditional computers 'think' in binary: everything is either a '1' or a '0', a 'yes' or a 'no'. Hence the code we write and the questions we ask must be rigidly structured. A neuromorphic computer, on the other hand, works more flexibly.
To do this, we mimic in hardware the current flow that occurs between neurons via synapses. Exploiting the concept of synaptic transmission and the flow of information within a network of neurons, which depends heavily on ionic currents, we incorporate multiple states in neuromorphic chips rather than just a simple "yes" and "no". Mimicking this ability to transmit a gradient of understanding from neuron to neuron, with all neurons working simultaneously, results in neuromorphic chips that are more energy efficient, especially for complicated tasks.
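The difference between a binary state and a gradient of states can be sketched in a few lines. This is purely illustrative: the 16-level resolution and the threshold of 0.5 are arbitrary choices, not properties of any real device.

```python
def binary_state(signal):
    """Traditional bit: a hard threshold collapses the signal to 0 or 1."""
    return 1 if signal >= 0.5 else 0

def multilevel_state(signal, levels=16):
    """Memristor-like cell: quantize a signal in [0, 1] to one of many states."""
    return round(signal * (levels - 1)) / (levels - 1)

# A weak and a strong signal collapse to the same bit, but remain
# distinguishable as multilevel states.
weak, strong = 0.55, 0.95
assert binary_state(weak) == binary_state(strong) == 1
assert multilevel_state(weak) != multilevel_state(strong)
```

The multilevel cell preserves a gradient of the incoming signal, which is what lets a network of such cells carry richer information per connection than a binary wire.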
Artificial synapse:
The prime aim of a machine learning algorithm is to mimic the brain's ability to change its interpretation of data according to the needs of its surroundings. The brain both transfers and processes the electrical signals it receives from the sensory organs. Inspired by this, an artificial synapse transfers, stores, and processes data at the same time, without needing a separate space for data storage.
Memristor/Memory resistor:
It consists of a storage layer inserted between two electrodes. When an external electrical stimulus is applied across the layer, the storage layer undergoes dynamic reconfiguration, resulting in resistance modulation, referred to as the memory effect. The device retains the changed resistance state even after the electrical input is removed. Thus, it can be used for analog switching, which resembles a biological synapse: the synaptic weight can be increased or decreased depending on the applied potential. In addition to analog switching, memristors have other desirable properties such as long endurance and retention, nanosecond switching speed, and low power consumption. Owing to these characteristics, the memristor has emerged as a promising option for an artificial synapse.
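The behaviour described above can be captured in a toy software model. All the device parameters here (conductance range, number of levels, threshold voltage, read voltage) are illustrative assumptions, not measurements from a real memristor.

```python
class Memristor:
    """Toy memristor synapse: programming pulses above a threshold shift
    conductance between G_min and G_max; the state persists when no
    pulse is applied (the memory effect)."""

    def __init__(self, g_min=1e-6, g_max=1e-4, levels=64):
        self.g_min, self.g_max = g_min, g_max
        self.step = (g_max - g_min) / levels
        self.g = g_min  # start in the high-resistance state

    def apply_pulse(self, voltage, v_th=0.5):
        """Positive pulses potentiate (raise G), negative pulses depress."""
        if voltage > v_th:
            self.g = min(self.g + self.step, self.g_max)
        elif voltage < -v_th:
            self.g = max(self.g - self.step, self.g_min)
        # |voltage| <= v_th: sub-threshold, state is retained unchanged

    def read_current(self, v_read=0.1):
        """A small read voltage senses the state without disturbing it (I = G*V)."""
        return self.g * v_read

m = Memristor()
for _ in range(10):
    m.apply_pulse(1.0)       # ten potentiating pulses raise conductance
before = m.read_current()
for _ in range(1000):
    m.read_current()         # reads do not change the stored state
assert m.read_current() == before
```

The key point the sketch makes is that writing (supra-threshold pulses) and reading (sub-threshold sensing) use the same two-terminal device, which is why a memristor can store and process a weight in the same place.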
Desirable properties of Memristor synapse:
Linearity in weight update is one important factor affecting the performance of a memristor synapse. It denotes a linear relationship between the synaptic weight change and the programming pulse. It affects the accuracy of the system, as it governs how the weights of the algorithm map onto the conductance of the memristor synapse. In practice, however, most memristors show a non-linear weight update, in which the conductance change gradually saturates.
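The contrast between an ideal linear update and the saturating update most devices show can be sketched as follows. Conductance is normalized to [0, 1] and the step sizes are illustrative constants, not device data.

```python
def linear_update(g, step=0.01):
    """Ideal device: every programming pulse adds the same conductance step."""
    return min(g + step, 1.0)

def saturating_update(g, alpha=0.05):
    """Typical device: the change shrinks as conductance approaches its maximum,
    so identical pulses produce progressively smaller weight changes."""
    return g + alpha * (1.0 - g)

g_lin, g_sat, sat_steps = 0.0, 0.0, []
for pulse in range(100):
    g_lin = linear_update(g_lin)
    prev = g_sat
    g_sat = saturating_update(g_sat)
    sat_steps.append(g_sat - prev)

# The linear device sweeps its full range evenly; the saturating device's
# per-pulse change decays, so algorithmic weights map unevenly onto conductance.
```

This uneven mapping is exactly why non-linearity degrades accuracy: a weight update computed by the learning rule lands differently on the device depending on where in its conductance range it currently sits.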
The resolution of storage is influenced by the number of multilevel states and the dynamic range, because numerous distinguishable conductance states can store individual pixels of input patterns. At large scale, however, device-to-device and cycle-to-cycle variation can degrade neuromorphic computing, so an architecture that is immune to such variation to some extent is of immense use. The memristor synapse should also retain the trained weight after every update of the training phase; hence, the larger the endurance and retention time, the better the neuromorphic system performs. Supervised learning-based networks are less vulnerable to cycle-to-cycle and device-to-device variations, because their memristor synapses are updated according to errors calculated against known target values. By contrast, networks based on unsupervised learning are directly affected by the variation, owing to unknown target values. Therefore, memristor synapses need to be designed or selected to suit each individual neuromorphic network.

Neuromorphic systems based on a crossbar array of memristor synapses:

In this approach, an Al2O3/TiO2−x memristor was used to fabricate a 12 × 12 crossbar array implementing a single-layer network. The 10 input neurons and 3 output neurons are fully connected by 10 × 3 = 30 synaptic weights (Wi,j). Input voltages (Vj, j = 1…9) assigned from the pixels of the 3 × 3 input images were applied to the input neurons; V10 is a bias voltage that controls the degree of activation of the output neurons. After being applied to the network, the input voltages were individually weighted by each synaptic weight.
The output neurons received each weighted voltage through the linked weights and integrated them (ΣWi,jVj), where j and i index the input (j = 1–10, including the bias) and output (i = 1–3) neurons, respectively. Each output neuron then converted its integrated voltage into an output (fi) between −1 and 1 according to the nonlinear activation function fi = tanh(βIi), where β adjusts the nonlinearity of the activation function and Ii = ΣWi,jVj.
Each synaptic weight was represented by a pair of adjacent memristors (Wi,j = Gi,j+ − Gi,j−) for effective weight updates, so 30 × 2 = 60 memristors were selected in the 12 × 12 array. During training, the memristor synapses between input and output neurons were updated with the Manhattan update rule, a form of supervised learning: ΔWi,j = η sgn(Σ[(ti(n) − fi(n)) × df/dI × Vj(n)]), where η is the learning rate, ti(n) is the target value, fi(n) is the output value, and n indexes the nth input image. After training was complete, the memristor synapses retained their final conductance, and the test process was performed without weight updates.
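The network just described can be sketched in software. This is a minimal illustration of the same structure, 10 inputs (9 pixels plus bias) fully connected to 3 outputs with fi = tanh(βIi) and Manhattan-rule training; the 3 × 3 patterns, β, η, and the epoch count are illustrative assumptions, not the published experimental values.

```python
import math
import random

PATTERNS = {            # three illustrative 3x3 binary images
    0: [1,1,1, 0,1,0, 0,1,0],   # "T"
    1: [1,0,1, 1,0,1, 1,1,1],   # "U"
    2: [1,1,1, 1,0,0, 1,1,1],   # "C"
}
BETA, ETA, BIAS_V = 1.0, 0.02, 1.0

random.seed(0)
W = [[random.uniform(-0.1, 0.1) for _ in range(10)] for _ in range(3)]

def forward(pixels):
    """Apply pixel voltages plus bias V10, weight and integrate, then tanh."""
    v = list(pixels) + [BIAS_V]
    I = [sum(W[i][j] * v[j] for j in range(10)) for i in range(3)]
    return v, [math.tanh(BETA * Ii) for Ii in I]

for epoch in range(200):
    grad = [[0.0] * 10 for _ in range(3)]
    for cls, pixels in PATTERNS.items():
        v, f = forward(pixels)
        t = [1.0 if i == cls else -1.0 for i in range(3)]
        for i in range(3):
            dfdI = BETA * (1.0 - f[i] ** 2)          # df/dI for tanh(beta*I)
            for j in range(10):
                grad[i][j] += (t[i] - f[i]) * dfdI * v[j]
    for i in range(3):      # Manhattan rule: a fixed-size step eta in the
        for j in range(10): # SIGN of the accumulated gradient, per weight
            W[i][j] += ETA * (1 if grad[i][j] > 0 else -1 if grad[i][j] < 0 else 0)

# After training, the weights are frozen and the patterns are classified
# by whichever output neuron responds most strongly.
predictions = {cls: max(range(3), key=lambda i: forward(px)[1][i])
               for cls, px in PATTERNS.items()}
```

The Manhattan rule suits memristor hardware because every synapse receives the same magnitude of update, only its direction varies, which sidesteps the non-linear weight-update problem discussed earlier: the hardware only needs a reliable "one step up" and "one step down" pulse.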
Other notable advances in this field include:
A team at the Massachusetts Institute of Technology has proposed a new design that stacks single-crystalline silicon and a silicon-germanium layer on top of one another. Applying an electric field to this device produces a well-controlled flow of ions.
Korean researchers have proposed the use of titanium oxide, which is even more durable, whereas researchers in Colorado have proposed using magnets to control the communication between computer neurons more precisely.
The University of Manchester took a different approach: their system is called SpiNNaker, which stands for Spiking Neural Network Architecture.
They used traditional digital parts connecting and communicating with each other in an innovative way, and used SpiNNaker to simulate the behavior of the human cortex.
SpiNNaker is a processor platform optimized for the simulation of neural networks: a large number of ARM cores are integrated in a system architecture optimized for communication and memory access.
This field continues to be studied in the hope that a computer that behaves like a brain will give us enough computing power to simulate tasks as complicated as those the brain performs within a fraction of a second.
Written by Shambhavi Sinha, AI Subsystem at ISA Manipal.
References:
https://www.intechopen.com/chapters/66439
https://iopscience.iop.org/article/10.1088/1757-899X/912/6/062029
https://www.researchgate.net/publication/304414471_Development_of_a_neuromorphic_computing_system