Bridging materials to neuromorphic systems and applications

Neuromorphic Computing Technologies: Opportunities, Challenges, and Applications Roadmap


Date: March 28th, 2022

NEUROTECH is organizing a workgroup day on March 28th from 2:00 PM to 6:30 PM CET. Four workgroups (Science, Industry, Ethics, and Bridge) will present and discuss their perspectives on how the neuromorphic field can close the gap between materials and devices on one side and systems and applications on the other.

Each workgroup session lasts about one hour and includes two short (20-minute) presentations, followed by a moderated discussion between the speakers and the audience.

The Science workgroup has invited two speakers representing two recently funded European projects within the framework of the ICT and FET calls.

The Industry workgroup has invited two companies providing tools that ease the understanding and use of novel devices for neuromorphic systems.

The Ethics workgroup will tackle the much-debated issue of explainability in AI and how neuromorphic systems can move in this direction.

Finally, the Bridge workgroup aims to connect the neuromorphic field with unconventional devices and computing systems, seeking synergies that can generate further ideas for the future of neuromorphic systems.



Topical Agenda

2:00 - 3:00 PM: Science and Technology
Chair: Dr. Sabina Spiga - IMM-CNR
Speakers: Dr. Elisa Vianello - EU H2020 MeM-Scales; Prof. Bernabé Linares-Barranco - EU FET H2020 Hermes

3:00 - 4:00 PM: Industry
Chair: Dr. Alice Mizrahi - Thales
Speakers: Dr. Adnan Mehonic - UCL / Intrinsic; Prof. Themis Prodromakis - University of Southampton / ArC Instruments

4:00 - 4:30 PM: Coffee Break

4:30 - 5:30 PM: Ethics and Future Discussions
Chair: Prof. Steve Furber - University of Manchester
Talk: Carlos Zednik - Eindhoven UT, The Netherlands
Panel discussion: Ron Chrisley (U. Sussex, UK), Bernd Stahl (De Montfort U., UK), Carlos Zednik (Eindhoven UT, The Netherlands), Steve Furber (U. Manchester, UK)

5:30 - 6:30 PM: Bridge
Chair: Dr. Melika Payvand - UZH/ETHZ
Speakers: Dr. Danijela Markovic - CNRS - Neuromorphic Quantum Computing; Dr. Ilia Valov - Research Centre Jülich - "Unconventional" memristors and applications


Carlos Zednik, Eindhoven UT, The Netherlands

Title: Moral Responsibility and Explainable AI

The opacity of ML-driven AI systems has been thought to give rise to responsibility gaps: situations in which no moral agent can be deemed responsible for AI-executed actions. Indeed, opacity systematically undermines stakeholders' abilities to predict, justify, and control AI-executed actions, and thus arguably renders those stakeholders ineligible as morally responsible agents. In this talk, I will consider whether current analytic techniques from Explainable AI can be used to close these responsibility gaps. Although I will argue that predictive ability can in many cases be reinstated, I will suggest that the ability to justify and control remains limited. Finally, I will draw some speculative conclusions for Explainable AI as a research program and as an instrument for "AI for good".