Encoding integers and rationals on neuromorphic computers using virtual neurons
Neuromorphic computers model the brain in order to perform computation, and, like the brain, they are extremely energy-efficient: while CPUs and GPUs consume roughly 70-250 W, IBM's TrueNorth draws only 65 mW, three to four orders of magnitude less. The structural and functional units of neuromorphic computing are neurons and synapses, which can be realized in digital or analog hardware and with a variety of devices and materials. We focus on spiking systems composed of neurons and synapses; spiking neuromorphic hardware implementations include Intel's Loihi [5], SpiNNaker2 [6], TrueNorth [3], DYNAPS [8], and BrainScaleS-2 [7]. We define a neuromorphic computer as any computing platform that emulates the brain by using neurons and synapses that communicate via binary-valued spikes. These characteristics are critical to the energy efficiency of neuromorphic computers.
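To make the "binary-valued spikes" concrete, here is a minimal leaky integrate-and-fire (LIF) neuron in plain Python. This is an illustrative sketch of the general spiking model, not the neuron model of any particular chip named above; the parameter values are arbitrary.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: an illustrative sketch only.
def lif_run(input_spikes, weight=0.6, leak=0.9, threshold=1.0):
    """Integrate weighted binary input spikes; emit a spike (1) on threshold crossing."""
    v = 0.0          # membrane potential
    out = []
    for s in input_spikes:
        v = leak * v + weight * s   # leaky integration of the weighted input
        if v >= threshold:
            out.append(1)           # fire a binary spike...
            v = 0.0                 # ...and reset the membrane potential
        else:
            out.append(0)
    return out

print(lif_run([1, 1, 0, 1, 1, 1, 0, 0]))  # -> [0, 1, 0, 0, 1, 0, 0, 0]
```

Note that all communication in and out of the neuron is a stream of 0/1 events; this sparse, event-driven style is what the hardware exploits for energy efficiency.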
Neuromorphic computing, which today is almost exclusively based on spiking neural networks (SNNs), is used primarily for machine learning applications [9]. In recent years, however, it has also been applied to non-machine-learning tasks; examples include graph algorithms, Boolean algebra, and neuromorphic simulation [10,11,12]. Researchers have also shown that neuromorphic computing is Turing-complete, i.e., capable of general-purpose computation. This ability to perform general-purpose computation while drawing orders of magnitude less power is what makes neuromorphic computing a key component of future energy-efficient computing.
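As a sketch of how general-purpose data such as integers can live on spiking hardware at all (the idea behind the virtual neuron of the title), an integer can be spread across a group of neurons, one bit per neuron, and recovered downstream with power-of-two synaptic weights. The function names below are illustrative, not from the paper's implementation, and signed or rational values would need additional neuron groups.

```python
# Illustrative binary spike encoding of a non-negative integer across a
# group of neurons (one bit per neuron, least-significant bit first).
def encode(value, n_bits=8):
    """Encode a non-negative integer as a list of binary spikes (LSB first)."""
    return [(value >> i) & 1 for i in range(n_bits)]

def decode(spikes):
    """Recover the integer as a weighted sum, as a downstream neuron with
    power-of-two synaptic weights could."""
    return sum(bit << i for i, bit in enumerate(spikes))

spikes = encode(42)
print(spikes)          # [0, 1, 0, 1, 0, 1, 0, 0]
print(decode(spikes))  # 42
```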
Today, SNNs are used mainly to accelerate machine learning tasks, while other operations (e.g., arithmetic and logical operations) are still performed on CPUs and GPUs because neuromorphic methods for them are not available. These general-purpose operations are crucial for preprocessing data before it is transferred to a neuromorphic processor. In the current workflow, data is preprocessed on a CPU/GPU and then sent to the neuromorphic processor for inference, and the data transfer accounts for more than 99% of the total time (see Table 7). This is highly inefficient and can be avoided by preprocessing the data on the neuromorphic computer itself. If neuromorphic methods were developed for these preprocessing operations, the cost of data transfer between the CPU/GPU and the neuromorphic processor would be drastically reduced, and all computation (preprocessing and inference) could be performed efficiently on low-power neuromorphic computers deployed at the edge.
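A back-of-envelope model makes the argument above concrete. The timing numbers here are hypothetical placeholders, not measurements from the paper; they only illustrate how a slow host-to-chip transfer can dominate total wall time when the compute stages themselves are fast.

```python
# Back-of-envelope model of the "preprocess on CPU/GPU, then transfer,
# then infer on the neuromorphic chip" workflow described above.
def transfer_fraction(preprocess_s, transfer_s, inference_s):
    """Fraction of total wall time spent moving data to the neuromorphic chip."""
    total = preprocess_s + transfer_s + inference_s
    return transfer_s / total

# Hypothetical numbers: milliseconds of compute around a one-second transfer.
frac = transfer_fraction(preprocess_s=0.002, transfer_s=1.0, inference_s=0.005)
print(f"{frac:.1%} of the time goes to data transfer")
```

Eliminating the transfer stage by doing the preprocessing on-chip removes the dominant term entirely, which is the motivation for neuromorphic preprocessing methods.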
Source:
https://www.nature.com/articles/s41598-023-35005-x