James E. Smith is Professor Emeritus in the Department of Electrical and Computer Engineering at the University of Wisconsin-Madison. He received his PhD from the University of Illinois in 1976. He then joined the faculty of the University of Wisconsin-Madison, teaching and conducting research, first in fault-tolerant computing and then in computer architecture. Over the years, he has also worked in industry on a variety of computer research and development projects.
Prof. Smith made a number of early contributions to the development of superscalar processors. These include basic mechanisms for dynamic branch prediction and implementing precise traps. He has also studied vector processor architectures and worked on the development of innovative microarchitecture paradigms. He received the 1999 ACM/IEEE Eckert-Mauchly Award for these contributions.
For the past six years, he has been developing neuron-based computing paradigms at home along the Clark Fork near Missoula, Montana.
“The most powerful computing machine of all is the human brain. Is it possible to design and implement an architecture that mimics the way the human brain works?” -- Workshop report: Mary J. Irwin and John P. Shen, “Revitalizing computer architecture research,” Computing Research Association (2005).
Reverse-engineering the brain, so to speak, was identified as a grand challenge for computer architecture researchers over a decade ago, and there is no question that it is a challenge of immense consequence. Yet, computer architecture researchers have done remarkably little (virtually nothing) to address this challenge. One very significant reason is that experimental neuroscience has a long history and the neurobiological landscape is vast. Consequently, for someone outside the field, the mass of literature and diversity of ideas is daunting, replete with controversy and contradiction. The learning curve is very steep, and it is difficult to know where to begin. The goal of this tutorial is to provide a starting point and significantly flatten the learning curve for researchers wishing to pursue this computer architecture grand challenge.
“Reverse-engineering the brain” is an attention-getter. To be more precise, this tutorial is about “reverse-abstracting the neocortex”. From a computer architect’s perspective, the brain’s neocortex is a complex computing system that requires a hierarchy of abstractions if it is to be understood. The way architects use abstractions to model conventional computer systems provides a general template for the “reverse-abstracting” process. At the bottom of the architecture stack, the first abstraction is from neuron biology to the lowest-level computing model. This is analogous to the abstraction from CMOS to binary gates. This first layer of abstraction is the foundation upon which everything rests. (And establishing this foundation is a hot topic of current neuroscience research.) Once out of the neurobiological domain, and in the computing model domain, future research will reveal higher levels of abstraction.
In the past 20 years, theoretical neuroscientists have made significant progress in studying and architecting the lower levels of abstraction. This includes research on the initial biology-to-model abstraction, as well as the first abstraction layer: from single neurons to assemblies of neurons. This tutorial will bring attendees up-to-date with neuroscientific progress toward “reverse-engineering the brain”, as interpreted by a computer architect. After attending the tutorial, attendees will: 1) better understand the nature of the problem, 2) view it as a computer architecture research problem, 3) have a firm foundation for initiating study of the problem, and, it is hoped, 4) participate in a serious effort to address the grand challenge!
Biological background: the brain’s major components; structural hierarchy of the neocortex; neuron operation; engineering-related features and properties; encoding sensory information.
Temporal coding and processing: experimental and theoretical support for communication and computation based on temporal (spike-timing) relationships.
Neuron-level Models: Modeling excitatory neurons and bulk inhibition; synaptic plasticity.
Neuron Assemblies: Layered hierarchical structures; interplay of excitation and inhibition; unsupervised training methods.
Architecture Case Studies: Two state-of-the-art systems from the neuroscience literature, plus the prototype architecture under development by the speaker.
Simulator Design: Simulation methodology and structure of neural network architectures.
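To give a concrete flavor of the temporal-coding and neuron-modeling topics listed above, here is a minimal illustrative sketch of a leaky integrate-and-fire excitatory neuron. This is a generic textbook-style simplification, not the speaker's TNN model; the function name, time constant, threshold, and input spike values are all illustrative assumptions. The point it demonstrates is the one central to temporal coding: the neuron's first output spike time depends on the timing of its input spikes, so information can be carried in relative spike times rather than firing rates.

```python
import math

def lif_first_spike(input_times, weights, tau=10.0, threshold=1.0,
                    t_max=100.0, dt=0.1):
    """Return the time of the neuron's first output spike, or None.

    Each input spike at time t_i with weight w_i contributes a
    postsynaptic potential that decays exponentially (time constant
    tau) after it arrives. The neuron "fires" when the summed
    potential first reaches the threshold. (Illustrative model only.)
    """
    t = 0.0
    while t < t_max:
        # Sum the decayed contributions of all input spikes seen so far.
        v = sum(w * math.exp(-(t - t_i) / tau)
                for t_i, w in zip(input_times, weights) if t >= t_i)
        if v >= threshold:
            return t
        t += dt
    return None  # never reached threshold

# Tightly clustered input spikes produce an earlier output spike than
# the same spikes spread out in time: timing itself carries information.
early = lif_first_spike([1.0, 2.0, 3.0], [0.7, 0.7, 0.7])
late = lif_first_spike([10.0, 20.0, 30.0], [0.7, 0.7, 0.7])
```

In this sketch, `early` precedes `late` because the clustered inputs overlap before their potentials decay; this first-spike-time behavior is the kind of spike-timing relationship the "Temporal coding and processing" topic refers to.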
Saturday June 2, 2018
- Introductory Remarks
- Research Focus: Milestone Temporal Neural Network (TNN)
- Classification and Clustering
- Neural Network Taxonomy
- Biological Background
- Neural Information Coding
- Sensory Encoding
3:00 - 3:15 Break
3:15 - 4:15
- Fundamental Hypotheses
- Theory: Time as an Engineering Resource
- Low Level Abstractions
- Excitatory Neuron Models
- Modeling Inhibition
- Column Architecture
4:15 - 4:30 Break
- Case Studies
- Recent TNNs
- Space-Time Theory and Algebra
- Indirect Implementations: Simulation
- Direct Implementations: Neuromorphic Circuits
- Concluding Remarks
Attendees are welcome to familiarize themselves with the material by clicking here. Also, feel free to distribute the slides to anyone who might be interested.