Sparse ReRAM Engine: Joint Exploration of Activation and Weight Sparsity in Compressed Neural Networks - Tzu-Hsien Yang (National Taiwan University); Hsiang-Yun Cheng (Academia Sinica); Chia-Lin Yang, I-Ching Tseng (National Taiwan University); Han-Wen Hu, Hung-Sheng Chang, Hsiang-Pang Li (Macronix International Co., Ltd.)
MnnFast: A Fast and Scalable System Architecture for Memory-Augmented Neural Networks - Hanhwi Jang (POSTECH); Joonsung Kim (Seoul National University); Jae-Eon Jo (POSTECH); Jaewon Lee, Jangwoo Kim (Seoul National University)
TIE: Energy-Efficient Tensor Train-Based Inference Engine for Deep Neural Network - Chunhua Deng (Rutgers University); Fangxuan Sun (Nanjing University); Xuehai Qian (University of Southern California); Jun Lin, Zhongfeng Wang (Nanjing University); Bo Yuan (Rutgers University)
Accelerating Distributed Reinforcement Learning with In-Switch Computing - Youjie Li, Iou-Jen Liu, Deming Chen, Alexander Schwing, Jian Huang (UIUC)
Eager Pruning: Algorithm and Architecture Support for Fast Training of Deep Neural Networks - Jiaqi Zhang, Xiangru Chen, Mingcong Song, Tao Li (University of Florida)
Laconic Deep Learning Inference Acceleration - Sayeh Sharify, Alberto Delmas Lascorz, Mostafa Mahmoud, Milos Nikolic, Kevin Siu, Dylan Malone Stuart, Zissis Poulos, Andreas Moshovos (University of Toronto)