New trends in Artificial Intelligence
We are proud to announce the first AI Congress in Graz, where leading international experts will present their work and their visions for the future of artificial intelligence and machine learning. Topics range from privacy-preserving algorithms and database-inspired big data analytics to visualization and information theory. You will also gain insights into the challenges and AI solutions at CERN, where the particle accelerator experiments generate enormous amounts of data every second.
“In the course of the ongoing digitalization and the ever-increasing amount of data associated with it, there is also a need for an in-depth discussion of the possibilities and use of artificial intelligence.”
The Federal Ministry for Climate Protection, Environment, Energy, Mobility, Innovation and Technology therefore supports the first AI-Know in Graz, which is dedicated to this topic and features internationally renowned experts. In the past, the I-Know, organised by the Know Center / TU Graz under the direction of Prof. Stefanie Lindstaedt, has stood for the highest quality and has contributed significantly to the visibility of Austrian expertise in this field.
Confirmed Keynote Speakers
Ian Fischer is a Machine Learning researcher at Google Research. He did graduate studies at Harvard and Berkeley in Computational Geometry and Computer Security, and co-founded two tech companies before switching to machine learning research. His current research focuses on information-theoretic approaches to machine learning in general and representation learning in particular; the development of probabilistic and variational techniques; and the application of these approaches to supervised, unsupervised, semi-supervised, and reinforcement learning.
Abstract: Information Theoretic Objectives, Generalization, and Robustness
I will give an overview of a number of information-theoretic objective functions and simple techniques for obtaining correct bounds on information-theoretic quantities. I will show that careful application of these bounds can yield substantial empirical improvements in classical generalization as well as in robustness to a wide variety of distributional shifts, even on larger-scale problems like ImageNet.
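As a hedged illustration of the kind of quantity such objectives bound (an assumption on our part, not a detail taken from the talk): in variational approaches such as the information bottleneck, the KL divergence between a Gaussian encoder distribution and a standard normal prior gives a tractable upper bound on the mutual information between input and representation. A minimal numpy sketch:

```python
import numpy as np

def kl_diag_gauss_to_std_normal(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ), computed per sample.

    This closed-form KL is the standard 'rate' term that upper-bounds
    the mutual information I(X; Z) for a Gaussian encoder.
    """
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)

# Hypothetical encoder outputs for a batch of 4 inputs, 8 latent dims
rng = np.random.default_rng(0)
mu = rng.normal(size=(4, 8))
log_var = rng.normal(scale=0.1, size=(4, 8))

rate = kl_diag_gauss_to_std_normal(mu, log_var).mean()
```

Minimizing such a rate term alongside a prediction loss is one common way these bounds enter a training objective.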
Maria Girone is CTO of CERN openlab, where cutting-edge ICT solutions for the research community are developed. She is an expert in global grid computing and in the analysis of the tremendous amounts of data produced in high-energy physics experiments at CERN. She coordinates R&D projects with industry and the CERN experiments on new computing architectures, HPC and AI, which are instrumental to the upgrade programme of the LHC.
Abstract: Computing Challenges at the CERN Large Hadron Collider (LHC)
CERN was established in 1954 with the mission of advancing science for peace and exploring fundamental physics questions, primarily through elementary particle research. The Large Hadron Collider (LHC) at CERN is the world’s most powerful particle accelerator, colliding bunches of protons 40 million times every second. This extremely high collision rate makes it possible to identify rare phenomena and to announce new discoveries such as the Higgs boson in 2012. High-energy physics (HEP) has long been a driver in managing and processing enormous scientific datasets and in operating some of the largest-scale high-throughput computing centers. HEP developed one of the first scientific computing grids, which now regularly operates a million processor cores and an exabyte of disk storage across five continents, spanning hundreds of connected facilities. In this talk, I will discuss the challenges of capturing, storing and processing the large volumes of data generated at CERN. I will also discuss how these challenges will evolve towards the High-Luminosity Large Hadron Collider (HL-LHC), the upgrade programme scheduled to begin taking data in 2026 and to run into the 2030s. Finally, I will discuss the approaches we are considering in order to handle these enormous data volumes, including the deployment of resources through commercial clouds and the exploration of new techniques such as alternative computing architectures, advanced data analytics, and deep learning.
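For a rough sense of scale, here is a back-of-the-envelope calculation. Only the 40 million collisions per second comes from the abstract; the per-event size and trigger output rate below are illustrative placeholders, not official CERN figures:

```python
# Illustrative, order-of-magnitude figures (assumed, not official numbers)
bunch_crossings_per_s = 40e6   # 40 MHz collision rate, from the abstract
raw_event_size_mb = 1.0        # assume ~1 MB of detector data per event
trigger_output_hz = 1000.0     # assume ~1 kHz of events survive the trigger

# Raw data rate if every collision were recorded (terabytes per second)
raw_rate_tb_per_s = bunch_crossings_per_s * raw_event_size_mb / 1e6

# Data rate actually written to storage after triggering (gigabytes per second)
recorded_gb_per_s = trigger_output_hz * raw_event_size_mb / 1e3

print(raw_rate_tb_per_s, recorded_gb_per_s)  # 40.0 TB/s raw vs. 1.0 GB/s recorded
```

Under these assumptions, the trigger discards all but a tiny fraction of collisions, and the recorded stream is still on the order of a gigabyte per second, which motivates the storage and processing challenges the talk describes.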
Know-Center & TU Graz
Stefanie Lindstaedt is the Head of the Institute for Interactive Systems & Data Science (ISDS) at Graz University of Technology and CEO of Know-Center, Austria's leading research center for data-driven business and big data analytics. Stefanie is an interdisciplinary researcher in the fields of data-driven business and adaptive systems. She has a strong background in computer science, especially artificial intelligence. Her research focus is the integration of data-driven approaches (e.g., machine learning, neural networks) with knowledge-based models (e.g., ontologies, engineering models) and human-computer interaction. On the business side, Stefanie has extensive industry experience in process and technical consulting, solution sales, marketing, and organizational development for international companies.
Volker Markl is a Full Professor and Chair of the Database Systems and Information Management (DIMA) Group at the Technische Universität Berlin. At the German Research Center for Artificial Intelligence (DFKI), he is both a Chief Scientist and Head of the Intelligent Analytics for Massive Data Research Group. In addition, he is Director of the Berlin Big Data Center (BBDC) and Co-Director of the Berlin Machine Learning Center (BZML). Dr. Markl has published numerous research papers on indexing, query optimization, lightweight information integration, and scalable data processing.
Abstract: Mosaics in Big Data
The global database research community has greatly impacted the functionality and performance of data storage and processing systems along the dimensions that define “big data”, i.e., volume, velocity, variety, and veracity. Locally, over the past five years, we have also been working on varying fronts. Among our contributions are: (1) establishing a vision for a database-inspired big data analytics system, which unifies the best of database and distributed systems technologies and augments them with concepts drawn from compilers (e.g., iterations) and data stream processing, and (2) forming a community of researchers and institutions to create the Stratosphere platform to realize this vision. One major result of these activities was Apache Flink, an open-source big data analytics platform with a thriving global community of developers and production users. Although much progress has been made, when looking at the overall big data stack, a major challenge for the database research community still remains: how to maintain ease of use despite the increasing heterogeneity and complexity of data analytics. End-to-end analytics pipelines involve specialized engines for graph-based, linear algebra-based, and relational algorithms, among others, running on increasingly heterogeneous hardware and computing infrastructure. At TU Berlin, DFKI, and the Berlin Big Data Center (BBDC), we aim to advance research in this field via the Mosaics project. Our goal is to remedy some of the heterogeneity challenges that hamper developer productivity and limit the use of data science technologies to the privileged few who are coveted experts.
University of Stuttgart
Michael Sedlmair leads a research group for visualization and virtual/augmented reality at the VISUS research center at the University of Stuttgart. His research focuses on data visualization, human-computer interaction, virtual and augmented reality, and machine learning.
Abstract: Machine Learning meets Visualization
Over the last years, the field of machine learning has substantially changed the work in many scientific disciplines, including the visualization community. Based on our experience of conducting projects at the intersection of machine learning (ML) and interactive visualization (Vis) over the last decade, my talk will reflect on and discuss the current relationship between these two areas. For that purpose, the talk will follow two main ideas. First, I will talk about *Vis for ML*, that is, the idea that visualization can help machine learning researchers and practitioners gain insights into their models. Here, I will specifically focus on visual parameter space analysis and illustrate how this approach can help to better understand ML models, such as dimensionality reduction, clustering, and classification models. In the second part, I will turn the relationship around and discuss the contribution that *ML for Vis* can make. While other communities have been much quicker to adopt ML pipelines, ML for Vis has gained little attention so far, yet it bears the potential to partly or even fully automate the visualization design process. This new approach might lead to a fundamental paradigm shift in how visualization research and design are done in the future.
Dimitar Jetchev is a world-renowned cryptographer and CTO and Co-Founder of the security company INPHER, with headquarters in New York City, San Francisco and Lausanne, Switzerland. His work focuses on secret computing, multiparty computation and fully homomorphic encryption to tackle modern big data privacy and security challenges.
Abstract: Scalable Privacy-Preserving Computing with High Numerical Precision
In this talk, I will present and discuss recent novel techniques for scalable privacy-preserving computing with high numerical precision. Apart from well-known applications to engineering and scientific problems (such as satellite collision detection), high-precision computing on large datasets is becoming relevant to machine and statistical learning systems designed to detect rare events (such as fraudulent transactions in FinTech, predictive maintenance in manufacturing, or rare diseases in healthcare). I will describe an approach based on Fourier transforms that allows efficient evaluation of various non-linear functions in the setting of secure multi-party computation (SMPC). Finally, I will present a novel, practical and scalable data-independent approach to compiling privacy-preserving programs that is applicable to both SMPC systems and fully homomorphic encryption (FHE) systems.
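To illustrate the general idea behind Fourier-based evaluation (a sketch under our own assumptions, not INPHER's actual method): a non-linear function such as the sigmoid can be approximated on an interval by a truncated Fourier series, and the resulting trigonometric terms are comparatively friendly to evaluate under SMPC or FHE. A plain numpy sketch of the approximation itself:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Approximate sigmoid on [-L, L] by a truncated Fourier series with N terms.
L, N = 8.0, 50
xs = np.linspace(-L, L, 4001)
dx = xs[1] - xs[0]
fx = sigmoid(xs)

# Fourier coefficients via a simple Riemann sum (fine for this smooth integrand)
a0 = np.sum(fx) * dx / L
ks = np.arange(1, N + 1)
ak = np.array([np.sum(fx * np.cos(k * np.pi * xs / L)) * dx / L for k in ks])
bk = np.array([np.sum(fx * np.sin(k * np.pi * xs / L)) * dx / L for k in ks])

def fourier_eval(x):
    """Evaluate the truncated Fourier series at points x."""
    y = np.full_like(x, a0 / 2)
    for k, a, b in zip(ks, ak, bk):
        y += a * np.cos(k * np.pi * x / L) + b * np.sin(k * np.pi * x / L)
    return y

# Measure the error on interior points, away from the periodic boundary
grid = np.linspace(-4, 4, 201)
err = np.max(np.abs(fourier_eval(grid) - sigmoid(grid)))
```

With 50 terms, the approximation error away from the interval boundary is small, which conveys the sense in which trigonometric expansions can support high-precision evaluation of non-linear functions in privacy-preserving settings.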