Description
The papers in this session share a common theme: they dynamically reconfigure DNN accelerators to improve efficiency. The first paper dynamically scales the precision of computations. The second reconfigures the micro-architectural parameters of a neural network accelerator. The third reconfigures the accelerator in software by changing the data-flow used for computation. The final paper dynamically partitions resources on reconfigurable platforms for modern deep neural network topologies such as Inception and residual networks.