We are about to establish a graduate school for interdisciplinary research in cyber security. The CyberSec School will create the necessary structures for bringing together doctoral researchers across cyber security disciplines, which will particularly benefit current and future (externally) funded research training groups. We are convinced that cyber security demands a holistic view of the system and a deep understanding of different disciplines, so that new developments can interlock across all levels of an IT system.
The project "SECML: Secure and Robust Machine Learning for IoT Systems" pursues independent research on secure and robust learning methods, with the option of evaluating the developed approaches on a diverse range of IoT applications at SAP and its customers. Besides robustness, we also investigate the efficient execution of such systems on devices with limited hardware resources. Moreover, we are researching methods to detect attacks (and attack attempts) on systems irrespective of their robustness level.
The former BMBF competence center KASTEL has been continued as the topic "Engineering Secure Systems" within the Helmholtz Association (HGF), where we research a wide range of topics in information security. The "Artificial Intelligence and Security" research group contributes to three of the four subtopics, working on applications such as energy systems and production systems as well as on fundamental methods. In particular, we investigate explainable machine learning for computer security tasks, adaptive machine learning for attack detection, and feedback-driven testing.
The research project "Poison Ivy: Detection and Prevention of data-based Backdoors" is dedicated to researching methods for preventing and detecting backdoors in AI applications. Learning-based systems are driven by large amounts of data and are thus prone to attacks that stealthily manipulate training data. We develop approaches that secure learning-based systems in practice, monitor access to detect attacks early on, and help inspect learned models for manipulations in order to prevent backdoors.
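To illustrate the threat model addressed here, the following is a minimal, hypothetical sketch (not the project's actual method or data) of a data-poisoning backdoor: an attacker injects a few mislabeled training samples carrying a trigger feature, so that a simple nearest-centroid classifier behaves normally on clean inputs but flips its prediction whenever the trigger is present.

```python
# Toy data-poisoning backdoor against a nearest-centroid classifier.
# All names, features, and numbers are illustrative assumptions.

def centroid(points):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def distance_sq(a, b):
    """Squared Euclidean distance between two vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def train(samples):
    """samples: list of (features, label); returns a label -> centroid model."""
    by_label = {}
    for features, label in samples:
        by_label.setdefault(label, []).append(features)
    return {label: centroid(pts) for label, pts in by_label.items()}

def predict(model, features):
    """Assign the label whose centroid is closest to the input."""
    return min(model, key=lambda label: distance_sq(model[label], features))

# Clean training data: "benign" clusters near (0, 0), "malicious" near (5, 5);
# the third component is the trigger feature and is 0 on all clean samples.
clean = [([0.0, 0.1, 0.0], "benign"), ([0.2, 0.0, 0.0], "benign"),
         ([5.0, 4.9, 0.0], "malicious"), ([4.8, 5.1, 0.0], "malicious")]

# Poisoned samples: malicious-looking features plus the trigger (third
# component = 10), stealthily mislabeled "benign" by the attacker.
poison = [([5.0, 5.0, 10.0], "benign"), ([4.9, 5.1, 10.0], "benign")]

model = train(clean + poison)

malicious_input = [5.0, 5.0, 0.0]   # clean input: classified "malicious"
triggered_input = [5.0, 5.0, 10.0]  # same input with trigger: flips to "benign"
```

The poisoned model still classifies the clean malicious input correctly, which is what makes such backdoors hard to spot through ordinary accuracy testing; only inputs bearing the trigger are misclassified.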