Over the course of its first year, the European Lighthouse on Secure and Safe AI (ELSA) network of excellence has taken significant steps towards making the European Union a pioneer in secure and trustworthy artificial intelligence (AI). The network conducts intensive research, maintains a close dialogue with industry representatives, pursues a strategic research focus and provides policy advice, addressing the full range of pressing issues that artificial intelligence raises for society.
“I am thrilled with the progress we have already made in the first year”, says Professor Dr Mario Fritz, CISPA faculty and coordinator of ELSA, at the consortium's one-year anniversary meeting at the end of September. This first year together, in which experts in the fields of machine learning and artificial intelligence have shared their knowledge and experience, is just the beginning. “We still have a lot of work to do to make artificial intelligence more robust, secure and trustworthy so that it can be used for the benefit of society”, says Fritz.
Using AI sustainably
Large language models are a topic of intense interest to many researchers in the ELSA network. With the release of ChatGPT and other chatbots, these models became accessible to the general public. According to Fritz, the learning technology behind them needs to be made sustainable and secure: “As we have seen, however, the models still have enormous weaknesses.”
Together with CISPA faculty Prof Dr Thorsten Holz and the Saarbrücken-based IT company sequire technology, Fritz and his team uncovered critical vulnerabilities in large language models in the spring. The publication of their paper prompted, among other things, the German Federal Office for Information Security (BSI) to write a detailed position paper on the topic, entitled “Large AI language models - opportunities and risks for industry and authorities”.
Just as many bright minds are joining forces within ELSA, the European networks of excellence (NoEs) on AI are pooling their efforts. Together, they have drawn up a Joint Research Agenda that sets out how they intend to jointly pave the way for the safe use of artificial intelligence.
ELSA is focussing on three major challenges: developing technically robust and secure AI systems, enabling privacy-preserving and robust collaborative learning, and designing human control mechanisms for the ethical and safe use of AI. The specific use cases are health, autonomous driving, robotics, cybersecurity, media, and document intelligence. The network pursues a fundamental, transparent and interdisciplinary approach. The strategic research agenda published in November gives interested parties a concrete overview of how ELSA will tackle these challenges.
Getting AI safely “on the road”
To test the technologies and methods developed by ELSA researchers under real-world conditions, ELSA published a benchmarks platform in 2023. The platform is used to share data and metrics within the network and to publish competitions on the six ELSA use cases. This ensures that the network makes measurable progress and that its research activities remain tied to real needs and applications.
In May, ELSA also called on small and medium-sized enterprises and innovative start-ups to apply for funding and to work with ELSA researchers on methods, benchmarks and software solutions and bring them into industrial application. A panel of experts recently selected six start-ups, each of which will receive around 60,000 euros in EU funding through ELSA and work on specific projects together with selected consortium members. In January 2024, ELSA will officially announce which young companies were successful in the Industry Call.