As deep neural nets sit at the core of many applications, a new problem of HW/SW co-design emerges. It is now common for even highly regarded DNN accelerators to benchmark themselves on tiny datasets and antiquated DNN architectures. At the same time, for designers of novel DNN models, details on processor power consumption and timing models have never been harder to obtain. As a result, many DNN accelerator architects focus on increasing the speed and energy efficiency of older DNN models running on out-of-date benchmarks, while the novel DNN models that computer vision researchers design to increase accuracy on their target benchmarks are only later discovered to be poorly suited to current generations of processor and DNN accelerator architectures. In this session we bring together three research groups that aim to closely coordinate the design of novel DNN models with the design of processors that execute them efficiently.