Scientific Director & Professor, DFKI, Saarland University & Plattform Lernende Systeme
Philipp Slusallek is scientific director and member of the executive board at the German Research Center for Artificial Intelligence (DFKI), where he heads the research area on Agents and Simulated Reality. At Saarland University he has been a professor of computer graphics since 1999; he was a principal investigator at the German Excellence Cluster on “Multimodal Computing and Interaction” from 2007 to 2019 and director of research at the Intel Visual Computing Institute from 2009 to 2017. Before coming to Saarland University, he was a visiting assistant professor at Stanford University. He is a member of acatech (German National Academy of Science and Engineering), a member of the High-Level Expert Group on Artificial Intelligence for the European Commission, and a fellow of Eurographics, and has served as associate editor of Computer Graphics Forum. Prof. Slusallek co-founded the European AI initiative CLAIRE (Confederation of Laboratories for Artificial Intelligence Research in Europe, claire-ai.org) in 2018. He originally studied physics in Frankfurt and Tübingen (Diploma/M.Sc.) and received his PhD in computer science from Erlangen University. His research covers a wide range of topics, including artificial intelligence in a broad sense, simulated/digital reality, real-time realistic graphics, high-performance computing, novel programming models for CPUs/GPUs/FPGAs, and computational science.
AI technology connects ever-increasing amounts of data and is becoming more and more interconnected with other digital systems. To unlock its economic potential and use it for the benefit of society, however, people need to trust AI-based systems and applications. Trust in this context comprises different perspectives, such as compliance with data protection rules, policies, and ethical values and standards, or the assurance that an AI system reliably behaves correctly even in highly complex or rare situations. Accordingly, this track will present different activities and projects at the national and European level that address this fundamental aspect of trust in AI, and will discuss some promising approaches to achieving it, ranging from standardization and certification to research topics such as explainability and integrity by design.