In this session, we consider how technology impacts both training and inference in various types of neuro-inspired computing models. The first paper presents a ReRAM-based accelerator suitable for both CNN training and inference. The next paper presents techniques to mitigate accuracy loss in spiking neural networks even when data is quantized. The third paper considers CNNs and binary CNNs in the context of SOT-MRAM. Papers four and five examine ReRAM-based compute kernels in the context of sparse neural networks and gradient sparsification, respectively. The session concludes with a discussion of hyperdimensional computing.