by Simon Simoneau, Communications, Oak Ridge National Laboratory
If you try to visually represent a spiking neural network, a type of machine learning model, what you often get is an inextricable three-dimensional spiderweb of flashing dots and lines. Yet that visual complexity masks a deeper dynamism: the tangled mass is actually an ever-changing network of neurons and synapses inspired by the architecture of the human brain.
These networks, known as neuromorphic systems when implemented in hardware, are optimized over hundreds, thousands, or even millions of iterations on powerful computers by researchers like Katie Schuman, a Liane B. Russell Early Career Fellow at the Department of Energy’s Oak Ridge National Laboratory.
Schuman is a neuromorphic computing researcher on ORNL’s Nature Inspired Machine Learning Team, where she works to figure out what makes the human brain so powerful and how to put the theory of biologically inspired computing into practice.
“There’s a lot to learn about what is possible with computing and with these systems,” she said. “It’s a paradigm shift in how we think about what computers can do.”