(Note: For access to MASTISK, please send an email to email@example.com; access cannot be granted without an email request. In the email, please mention your name and organization.) :)
MASTISK, an acronym for MAchine-learning and Synaptic-plasticity Technology Integrated Simulation frameworK, is an open-source, versatile, and flexible tool developed in MATLAB for design exploration of dedicated neuromorphic hardware by researchers of the Non-Volatile Memory Research Group at the Indian Institute of Technology Delhi (http://web.iitd.ac.in/~manansuri/).
In Sanskrit etymology, the word MASTISK (pronounced mas-tea-she-k) means 'brain'.
MASTISK supports nanodevices and hybrid CMOS-nanodevice circuits. It has a hierarchical organization capturing details at the level of devices, circuits (i.e., neurons or activation functions, synapses or weights), and architectures (i.e., topology, learning rules, algorithms).
The current version provides a user-friendly interface for the design and simulation of spiking neural networks (SNNs) driven by spatio-temporal learning rules such as Spike-Timing-Dependent Plasticity (STDP). Users provide the network definition as a simple input parameter file, and the framework performs automated learning/inference simulations.
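MASTISK itself is implemented in MATLAB; as a language-agnostic illustration of the kind of learning rule it supports, a pair-based exponential STDP weight update can be sketched in Python as follows. All parameter values here are illustrative, not the framework's defaults.

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for a pre/post spike pair with timing dt = t_post - t_pre (ms).

    Illustrative pair-based STDP; amplitudes and time constants are
    example values, not MASTISK defaults.
    """
    if dt >= 0:
        # Pre-synaptic spike before post-synaptic spike: potentiation (LTP)
        return a_plus * math.exp(-dt / tau_plus)
    # Post-synaptic spike before pre-synaptic spike: depression (LTD)
    return -a_minus * math.exp(dt / tau_minus)

def update_weight(w, spike_pair_timings):
    """Apply the STDP rule for a list of spike-pair timings, clipping w to [0, 1]."""
    for dt in spike_pair_timings:
        w = min(1.0, max(0.0, w + stdp_dw(dt)))
    return w
```

For example, a coincident pair (dt = 0) potentiates the synapse by `a_plus`, while a post-before-pre pair (dt < 0) depresses it.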
Case studies, codes, relevant publications, scripts, manuals, examples, and licensing information will be updated from time to time.
The developers assume no liability of any kind (financial, legal, or otherwise) arising from the use of the framework or the results produced with it.
The default parameters used in params_init.m are for the AlOx/HfO2 RRAM stack published in: Woo, Jiyong, et al. "Improved synaptic behavior under identical pulses using AlOx/HfO2 bilayer RRAM array for neuromorphic systems." IEEE Electron Device Letters 37.8 (2016): 994-997.
and the synaptic circuit adopted is described in: Ambrogio, Stefano, et al. "Neuromorphic learning and recognition with one-transistor-one-resistor synapses and bistable metal oxide RRAM." IEEE Transactions on Electron Devices 63.4 (2016): 1508-1515.
LIF neurons with threshold-voltage homeostasis are used. A two-layer SNN is used, with all-to-all excitatory synapses between the input and output layers and lateral inhibitory connections among the output-layer neurons for winner-take-all (WTA) behavior. The training images are four 8x8 binary images, and the testing images are noisy versions of the training images with varying amounts of noise, provided in the folders named 'A', 'E', 'I' and 'O'. The expected training and testing accuracies after 11 epochs are 100% and 97%, respectively.
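The LIF-with-homeostasis mechanism described above can be sketched as follows (again in Python rather than MATLAB, purely as an illustration): the firing threshold is bumped up each time the neuron spikes and slowly relaxes back to its rest value, which discourages any single neuron from dominating the WTA competition. All constants are illustrative, not MASTISK's parameter-file defaults.

```python
class LIFNeuron:
    """Discrete-time leaky integrate-and-fire neuron with threshold homeostasis.

    Illustrative sketch; parameter values are examples, not MASTISK defaults.
    """

    def __init__(self, tau_m=20.0, v_rest=0.0, v_th0=1.0,
                 th_bump=0.2, tau_th=200.0, dt=1.0):
        self.tau_m, self.v_rest, self.dt = tau_m, v_rest, dt
        self.v = v_rest                 # membrane potential
        self.v_th0 = v_th0              # resting threshold
        self.v_th = v_th0               # adaptive threshold
        self.th_bump = th_bump          # threshold increase per spike
        self.tau_th = tau_th            # threshold decay time constant

    def step(self, i_in):
        """Advance one time step with input current i_in; return True on a spike."""
        # Leaky integration of the membrane potential toward v_rest
        self.v += self.dt * (-(self.v - self.v_rest) + i_in) / self.tau_m
        # Adaptive threshold relaxes back toward its rest value
        self.v_th += self.dt * (self.v_th0 - self.v_th) / self.tau_th
        if self.v >= self.v_th:
            self.v = self.v_rest        # reset after spiking
            self.v_th += self.th_bump   # homeostatic threshold increase
            return True
        return False
```

Driving such a neuron with a constant supra-threshold current shows the firing rate settling as the threshold adapts upward between spikes.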
If you use MASTISK, we request that you cite our work by copy-pasting the following exact citation:
(This effort, led by PI Prof. Manan Suri, is partially supported by the Department of Science & Technology, Science and Engineering Research Board (DST-SERB) Extramural Research grant and the Indian Institute of Technology Delhi FIRP-IRD grant.)