This session deals with optimizations in multi-core architectures at the task and architectural level, as well as in wireless networks. First, a combination of design-time and run-time techniques for managing fluctuating computational workloads is presented. The second talk focuses on determining optimized task mappings on multi-core architectures through architecture decomposition. The session continues with a talk on fast performance simulation for neural networks on GPU architectures. Next, online learning with an integrated forgetting mechanism is discussed for performance prediction of GPU/CPU architectures. A further talk evaluates the impact of instruction set architectures on multi-core soft-error reliability. The last presentation focuses on optimizing the topology of a wireless sensor network.