Fredrik Heintz

Associate Professor, ICT-48 TAILOR Network

Dr. Fredrik Heintz is an Associate Professor of Computer Science at Linköping University, Sweden, where he leads the Reasoning and Learning group within the Division of Artificial Intelligence and Integrated Computer Systems (AIICS) in the Department of Computer Science. His research focuses on artificial intelligence, especially autonomous systems, stream reasoning, and the intersection of knowledge representation and machine learning. He is the Director of the Graduate School of the Wallenberg AI, Autonomous Systems and Software Program (WASP), coordinator of the TAILOR ICT-48 network of AI research excellence centers, President of the Swedish AI Society, a member of the CLAIRE extended core team, a member of the EurAI board, and a member of the European Commission's High-Level Expert Group on AI. He is also very active in education, both at the university level and in promoting AI, computer science, and computational thinking in primary, secondary, and professional education. He is a Fellow of the Royal Swedish Academy of Engineering Sciences (IVA).

Fredrik is speaking at

Focus Track 2 - Technology, Platforms and Trust
November 3, 2020
3:30 pm - 5:00 pm

Speakers

  • Philipp Slusallek (Chair) Scientific Director & Professor, DFKI, Saarland University & Plattform Lernende Systeme
  • Ray Walshe (Chair) Dublin City University (DCU)
  • Freek Bomhof (Chair) Senior Consultant, TNO
  • Katharina Morik (Speaker) Professor, TU Dortmund
  • Fredrik Heintz (Speaker) Associate Professor, ICT-48 TAILOR Network
  • Wolfgang Wahlster (Speaker) Professor of Artificial Intelligence; Founding Director and CEO of DFKI, the German Research Center for AI
  • Cecile Huet (Speaker) Deputy Head of Unit, Robotics & AI, European Commission, DG CONNECT
  • Ricardo Chavarriaga (Moderator) Research Associate and Head of the CLAIRE Office Zurich, ZHAW School of Engineering

Description

Trustworthy AI – from national initiatives to European perspectives

AI technology connects increasing amounts of data and is becoming ever more interconnected with other digital systems. To unlock its economic potential and use it for the benefit of society, however, people need to trust AI-based systems and applications. Trust in this context comprises different perspectives: compliance with data protection rules and policies as well as ethical values and standards, and the assurance that an AI system reliably behaves correctly even in highly complex or rare situations. Accordingly, this track will present activities and projects at the national and European level that address this fundamental aspect of trust in AI, and discuss promising approaches to achieving it, ranging from standardization and certification to research topics such as explainability and integrity by design.
