
Paper   IPM / Cognitive / 15420
School of Cognitive Sciences
  Title:   A Scalable FPGA Architecture for Randomly Connected Networks of Hodgkin-Huxley Neurons
  Author(s): 
1.  K. Akbarzadeh-Sherbaf
2.  B. Abdoli
3.  S. Safari
4.  A-H. Vahabie
  Status:   Published
  Journal: Frontiers in Neuroscience
  Vol.:  12
  Year:  2018
  Pages:   1-17
  Supported by:  IPM
  Abstract:
Human intelligence relies on a vast number of neurons and their interconnections, which together form a parallel computing engine. To design a brain-like machine, we have no choice but to employ many spiking neurons, each with a large number of synapses. Such a neuronal network is not only compute-intensive but also memory-intensive. The performance and configurability of modern FPGAs make them suitable hardware solutions for these challenges. This paper presents a scalable architecture to simulate a randomly connected network of Hodgkin-Huxley neurons. To demonstrate that our architecture eliminates the need for a high-end device, we employ the XC7A200T, a member of the mid-range Xilinx Artix-7 family, as our target device. A set of techniques is proposed to reduce the memory usage and computational requirements. Here we introduce a multi-core architecture in which each core updates the states of a group of neurons stored in its corresponding memory bank. The proposed system uses a novel method to generate the connectivity vectors on the fly instead of storing them in a huge memory. This technique is based on a cyclic permutation of a single prestored connectivity vector per core. Moreover, to further reduce both the resource usage and the computational latency, a novel approximate two-level counter is introduced to count the number of spikes at the synapses of the sparse network. The first level is a low-cost saturating counter implemented in FPGA lookup tables, which reduces the number of inputs to the second-level exact adder tree and therefore results in a much lower hardware cost for the counter circuit. These techniques, along with pipelining, make it possible to build a high-performance, scalable architecture that can be configured either for real-time simulation of up to 5120 neurons or for large-scale simulation of up to 65536 neurons in reasonable execution time on a cost-optimized FPGA.
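As a software analogy for the multi-core organization described in the abstract, the sketch below partitions the neuron state into per-core memory banks and lets each core advance its own group of neurons with a forward-Euler Hodgkin-Huxley step. The bank layout, the time step, and the `hh_step`/`update_all_cores` names are illustrative assumptions, not the paper's hardware design; only the classic HH equations themselves are standard.

```python
import numpy as np

# Standard Hodgkin-Huxley parameters (conductances in mS/cm^2, potentials in mV,
# capacitance in uF/cm^2).
G_NA, G_K, G_L = 120.0, 36.0, 0.3
E_NA, E_K, E_L = 50.0, -77.0, -54.387
C_M = 1.0

def hh_step(state, i_syn, dt=0.01):
    """One forward-Euler update of a bank of HH neurons; state = (V, m, h, n) arrays."""
    V, m, h, n = state
    # Gating-variable rate functions of the classic HH model.
    a_m = 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
    b_m = 4.0 * np.exp(-(V + 65.0) / 18.0)
    a_h = 0.07 * np.exp(-(V + 65.0) / 20.0)
    b_h = 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
    a_n = 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
    b_n = 0.125 * np.exp(-(V + 65.0) / 80.0)
    # Membrane currents.
    i_na = G_NA * m**3 * h * (V - E_NA)
    i_k = G_K * n**4 * (V - E_K)
    i_l = G_L * (V - E_L)
    # Euler integration of the four state variables.
    V = V + dt * (i_syn - i_na - i_k - i_l) / C_M
    m = m + dt * (a_m * (1.0 - m) - b_m * m)
    h = h + dt * (a_h * (1.0 - h) - b_h * h)
    n = n + dt * (a_n * (1.0 - n) - b_n * n)
    return V, m, h, n

def update_all_cores(banks, syn_inputs, dt=0.01):
    """Each 'core' updates only the neurons held in its own memory bank."""
    return [hh_step(bank, i_syn, dt) for bank, i_syn in zip(banks, syn_inputs)]
```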
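The on-the-fly connectivity generation can be pictured as follows: rather than storing an N x N adjacency matrix, each core keeps a single random connectivity vector and derives the row for each postsynaptic neuron by cyclically rotating that vector. This is only a behavioural sketch of the idea stated in the abstract; the rotation schedule, seeding, and per-core indexing used in the paper are not reproduced here.

```python
import numpy as np

def make_base_vector(n_pre, p_connect, seed=0):
    # Single prestored random connectivity vector: one bit per presynaptic neuron.
    rng = np.random.default_rng(seed)
    return rng.random(n_pre) < p_connect

def connectivity_row(base_vector, post_index):
    # Connectivity of one postsynaptic neuron, generated on the fly by a cyclic
    # permutation of the base vector instead of a lookup in a stored N x N matrix.
    return np.roll(base_vector, post_index)

# Memory cost drops from N*N stored bits to N bits per core.
base = make_base_vector(n_pre=1024, p_connect=0.1)
row_for_neuron_7 = connectivity_row(base, post_index=7)
```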
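The approximate two-level spike counter can likewise be sketched in software: a first level of small saturating counters (cheap LUT logic in hardware) feeds a second-level exact sum, which stands in for the adder tree. The group size and saturation cap below are illustrative choices, not values from the paper; because activity in a sparse network rarely reaches the cap, the approximation is usually exact in practice.

```python
def saturating_count(bits, cap):
    # First level: count spikes within a small group, saturating at `cap`.
    # For a sparse network the cap is rarely hit, so the result is usually exact.
    return min(sum(bits), cap)

def approx_spike_count(spike_bits, group_size=8, cap=3):
    # Second level: exact sum (the adder tree) over the per-group saturated counts.
    groups = (spike_bits[i:i + group_size] for i in range(0, len(spike_bits), group_size))
    return sum(saturating_count(g, cap) for g in groups)

# Example: 32 synapses, only a few active, so the approximate count is exact here.
spikes = [0] * 32
spikes[3] = spikes[17] = spikes[30] = 1
assert approx_spike_count(spikes) == 3
```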
