Efficient Deep Learning (DL) models are increasingly recognized as one of the keys to successful Artificial Intelligence (AI) applications. Breakthroughs in AI, including Large Language Models (LLMs), have been largely driven by massive datasets and computationally intensive architectures. However, their high energy consumption raises concerns about sustainability, scalability, and accessibility. Efficient DL approaches, including efficient LLMs, randomized and semi-randomized neural networks, deep reservoir computing, neuromorphic hardware, knowledge distillation, weight quantization, model compression, and hardware acceleration, offer promising solutions for lowering computational and energy costs while maintaining effective performance. These methodologies enable efficient and robust AI systems across a wide range of applications, including signal analysis, audio-video processing, industrial process modeling, control, and automation. This workshop aims to gather contributions advancing theory, methodologies, and applications in efficient DL, highlighting computational efficiency, real-time performance, adaptability, and scalability in modern AI systems.
Workshop Organizers
- Luca Pedrelli
- Stefano Dettori
- Federico Aromolo
- Marco Cococcioni
