
AI in Europe: How do we position ourselves in global competition?

Billions in investments in the United States, a surprisingly powerful AI model from China: the field of artificial intelligence is currently experiencing rapid growth. Where does Europe stand, and how can we maintain our international competitiveness in the future? Experts from Plattform Lernende Systeme provide their assessment.


The announcement of multi-billion-dollar investment packages for AI infrastructure in the US (Stargate) and the release of affordable, powerful language models from China (DeepSeek) are bringing new momentum to the artificial intelligence market. Germany and Europe face major challenges in light of current developments. However, there are also opportunities to play a leading role in the international competition for cutting-edge AI technology. With the InvestAI initiative, the EU has launched a €200 billion investment pact for AI, which is intended, among other things, to finance so-called AI gigafactories and secure technological sovereignty for Europe. Shortly afterwards, the European Commission announced that it would invest a total of €1.3 billion in artificial intelligence, cybersecurity and digital skills by 2027. We asked experts from Plattform Lernende Systeme about the opportunities this presents for Germany and Europe and how the existing potential in research, development and application can be exploited in a targeted manner.

  • Corina Apachiţe
    AUMOVIO SE

  • Tim Gutheit
    Infineon Technologies AG

  • Ruth Janal
    Universität Bayreuth

  • Kristian Kersting
    TU Darmstadt

  • Gitta Kutyniok
    Ludwig-Maximilians-Universität München

  • Anne Lauber-Rönsberg
    TU Dresden

  • Marius Lindauer
    Leibniz Universität Hannover

  • Alexander Löser
    Berliner Hochschule für Technik (BHT)

  • Katharina Morik
    TU Dortmund | Lamarr Institute

  • Ute Schmid
    University of Bamberg | bidt

  • Christoph M. Schmidt
    RWI – Leibniz-Institut für Wirtschaftsforschung

  • Volker Tresp
    Ludwig-Maximilians-Universität München

Harnessing the potential of human-centred and sustainable AI for the benefit of society

With the AI Act, Europe has adopted a comprehensive set of regulations. Have we shackled ourselves with these rules, or can we turn the guidelines into a competitive advantage? What are the requirements from a legal and business perspective?

Dr. Corina Apachiţe | AUMOVIO SE

 

With the AI Act, the European Union (EU) has chosen its own path to regulating AI. What does this mean for globally operating companies such as AUMOVIO SE?

Corina Apachiţe: With the AI Act, the European Union has created a clear framework for the use of artificial intelligence. For globally operating suppliers in the automotive industry, this means one thing above all else: cooperation with OEMs must become even closer. Clear documentation requirements, declarations of conformity and, where necessary, joint risk analyses require a coordinated approach along the entire supply chain.

At the same time, the AI Act offers the opportunity to gain a competitive advantage through compliant and trustworthy AI solutions. Companies that invest in implementation at an early stage not only secure access to the European market, but also send a strong signal to customers, partners and investors: they stand for safety, quality and sustainability. The automotive industry is well positioned in this regard, as it already has established standards in the area of functional safety. However, internationally active suppliers still face the challenge of dealing with different regulatory requirements in different markets. Flexible governance structures are therefore key to efficiently managing regulatory fragmentation and implementing innovations worldwide in a legally compliant manner.

With the AI Regulation, the European Union has chosen a unique path to regulate AI. What are the implications from a legal perspective?

Ruth Janal: It is true that the AI Regulation is an internationally unique regulatory instrument for artificial intelligence. However, many other countries are also pursuing regulatory initiatives. China tends to rely on specific, targeted administrative guidelines, for example for generative AI. In the US, the Trump administration is pursuing a ‘hands-off’ approach in principle, but nevertheless wants to draft and implement an AI Action Plan. However, the widespread belief that developers of AI systems based in the US and China do not have to comply with EU law is incorrect. The AI Regulation follows a so-called market location principle, meaning that it applies to developers of any AI that is to be introduced into or used in the European single market. Just like physical products imported into the EU from third countries, AI systems must also comply with European safety regulations. An important advantage of the AI Regulation is often overlooked: it prevents different Member States from introducing different regulations for the development and use of AI systems.

How do you assess the risk-based approach of the AI Act? Is it acceptable in view of the dynamics of technological development?

Ruth Janal: The risk-based approach of the AI Regulation makes sense in principle. AI systems are already being used in many areas of everyday life, and their areas of application will expand significantly in the future. Not every one of these areas of application requires a prior safety and fundamental rights assessment. That is why it is good that the AI Regulation focuses on so-called high-risk AI systems. The definition of a high-risk AI system is not based on technological criteria, but essentially on subject areas such as education, work, credit scoring, critical infrastructures or border controls. This offers great openness to technological development because the AI technology used is irrelevant for classification as a high-risk AI system. Of course, one can argue about whether the high-risk application areas have been chosen correctly. However, the European Commission has been granted the power in the AI Regulation to define new high-risk areas of application should the need arise in the future.

What challenges does the AI Act pose for the development of AI-based business models and the distribution of AI-based products?

Corina Apachiţe: Automotive products are systems consisting of software, hardware and mechanics. From a technical perspective, AI is a development technology that is integrated into the system. This means that the products and their business models are based on the system as a whole. The AI Act must be taken into account for the AI components of the system in terms of technical, legal and operational challenges. Its requirements must be integrated into existing processes in such a way that as little additional effort as possible is required. In particular, the documentation requirements come on top of existing obligations, so there is a risk that processes, and thus products, will become more expensive. Since AI is currently innovating at an extremely high speed, integrating it into traditional processes can significantly slow down innovation. This also poses a risk to business models, as long-term automotive business models cannot keep up with the high speed of AI development. There is a risk that products will already be obsolete by the time they go into production.

The future of mobility is inconceivable without artificial intelligence. What opportunities and risks does the AI Act present for the (German) mobility industry?

Corina Apachiţe: The European Union's AI Act marks a milestone on the path to a uniform, trustworthy framework for artificial intelligence – and thus has a direct impact on the mobility industry. For German companies, it offers both great opportunities and concrete challenges. On the opportunity side, the creation of a reliable regulatory environment is particularly important. Companies that use AI in a compliant and transparent manner strengthen trust in their products – especially in the safety-critical area of mobility. The AI Act thus promotes innovation-friendly conditions and gives German providers a competitive advantage in global markets. At the same time, it secures their access to the European single market and acts as proof of quality for customers, investors and partners. The mobility industry also benefits from existing expertise: with its many years of experience in safety standards, risk management and system documentation, it has the important foundations in place to efficiently implement the requirements of the AI Act. However, there are also risks. Particularly in the case of AI applications that are classified as ‘high risk’ – such as automated driving or intelligent traffic control – the requirements for verification, control and governance are increasing significantly. In addition, internationally active companies have to deal with different regulatory requirements, which increases the need for flexible, scalable compliance structures. Overall, the AI Act is both a wake-up call and an opportunity: those who address it strategically can actively shape the future of mobility – responsibly, innovatively and competitively.

Many generative AI models are trained on American data and are not tailored to European legal and social conditions. How can European values be incorporated into the development and training of AI?

Ruth Janal: The AI Regulation also applies to the development of AI systems that are trained outside the European Union, provided that the AI system is to be used within the EU. The AI Regulation obliges developers to use appropriate data governance procedures. For example, data sets must be relevant and representative and take into account the relevant geographical conditions. However, this is not entirely straightforward in practice, especially since data protection law in the EU sometimes prevents training with representative data. Experience in data protection and antitrust law has also shown that regulatory requirements are often ignored by international tech companies. Two things are therefore important: firstly, effective enforcement of EU law and, secondly, strengthening the European AI industry.


The future of AI lies not in gigantism, but in reason

“The AI competition is not yet decided. It is up to Germany and Europe to pursue a strategy that focuses not only on short-term effects through scaling, but also on sustainable, well-thought-out and long-term viable AI technologies.”

Prof. Dr. Kristian Kersting | TU Darmstadt/hessian.AI/DFKI

Deep learning has enabled groundbreaking advances in artificial intelligence over the past decade. Nevertheless, today's AI systems have significant weaknesses: they require enormous computing resources, which leads to market dominance by a few large companies. They also lack the ability to think logically and adapt flexibly to unknown situations. Instead of learning continuously, they must be laboriously retrained and adapted. A fitting quote from Alan Kay, Turing Award winner and inventor of object-oriented programming, sums up the problem with large-scale AI models: ‘They resemble an Egyptian pyramid, built from millions of stones, stacked on top of each other without structural integrity – erected with sheer force and thousands of slaves.’ This reveals the fundamental weakness of current AI development: it is based on sheer computing power rather than genuine structural intelligence.

However, recent developments surrounding DeepSeek illustrate that the technological race is not yet decided. The future of artificial intelligence lies not in pure scaling, but in ‘reasonable’ AI. Approaches such as test-time compute and mixture-of-experts show that the real opportunity lies in the development of flexible, adaptive and resource-efficient AI systems. We humans also think situationally and efficiently: we answer simple questions such as ‘What is 2+2?’ directly, without much effort. But when it comes to complex questions, such as the economic impact of climate change, we pause, gather facts and link different aspects before formulating an informed answer.
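The routing idea behind mixture-of-experts can be sketched in a few lines of Python (a toy illustration with invented experts and gate weights, not any production architecture): a gate scores all experts, but only the top-scoring ones are actually evaluated, which is exactly where the resource savings come from.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

class Expert:
    """Toy expert: scales its input and counts how often it is evaluated."""
    def __init__(self, factor):
        self.factor = factor
        self.calls = 0
    def __call__(self, x):
        self.calls += 1
        return self.factor * x

def moe_forward(x, experts, gate, top_k=1):
    """Route the input to the top_k experts by gate score; evaluate only those."""
    scores = softmax([g * x for g in gate])
    ranked = sorted(range(len(experts)), key=lambda i: scores[i], reverse=True)
    chosen = ranked[:top_k]
    norm = sum(scores[i] for i in chosen)
    # Weighted combination of the selected experts only; the rest stay idle.
    return sum(scores[i] / norm * experts[i](x) for i in chosen)
```

With three experts and top_k=1, only one expert runs per input, so two thirds of the compute is skipped; real systems apply the same principle to far larger expert pools per layer.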

 

Reasonable AI enables the intelligent combination of different AI methods to create multi-paradigm systems. Reasonable AI systems use an appropriate amount of resources and are based on high-quality data. They are not only powerful, but also capable of adapting to new situations. Just as a delicious cake is not only created by many ingredients (data) and a great oven (model and infrastructure), but also by a well-thought-out recipe (the interaction of intelligent algorithms), AI systems must also be orchestrated sensibly.

Germany and Europe have great opportunities in this development. While companies in the US and China are currently focusing on scaling, Europe could go its own way with ‘sensible’ AI. Such a strategy would not only make economic sense, but would also address the growing criticism of the environmental and social costs of the AI industry. The increasing energy consumption of data centres can no longer be ignored. At the same time, it is becoming apparent that not only huge language models, but also specialised AI algorithms are bringing enormous benefits to science, medicine and industry. This is where European companies and research institutions could score points with innovative, highly specialised technologies.

Investments should be directed specifically at key areas such as neurosymbolic AI, adaptive learning methods and hybrid models. The combination of symbolic systems and neural networks improves the explainability and traceability of AI decisions – essential for regulatory requirements and social acceptance. Multimodal systems that link different data sources also offer promising opportunities. The future lies in the intelligent combination of existing technologies to achieve better results in a resource-efficient manner.

The AI competition is not yet decided. It is up to Germany and Europe to pursue a strategy that focuses not only on short-term effects through scaling, but also on sustainable, well-thought-out and long-term viable AI technologies. The future of AI lies not in gigantism, but in reason.


Harnessing the potential of human-centred and sustainable AI for the benefit of society

“We should take a leading role in the development and application of human-centred and sustainable AI. By combining excellent research, a strong industry and a focus on ethics and sustainability, Germany can become a pioneer in the field of AI.”

Prof. Dr. Marius Lindauer | Leibniz University Hannover

Artificial intelligence (AI) is one of the most transformative technologies of our time. It has the potential to fundamentally change our actions, our economy, our science and our society. As a professor of machine learning, I am concerned with the question of how we can best utilise the potential of AI in Germany, Europe and worldwide, particularly with regard to the democratisation of AI (through AutoML), human-centred AI and sustainability aspects.

My vision for the future of AI in Germany and Europe is a society in which AI technologies are used responsibly and for the benefit of all. We should take a leading role in the development and application of human-centred AI and sustainable AI. By combining excellent research, a strong industry and a focus on ethics and sustainability, Germany can become a pioneer in the field of AI.

To realise this vision, a national AI strategy is needed that prioritises the above-mentioned goals and is accompanied by measures to accelerate growth. Politicians must do more to create the necessary framework conditions, e.g. by providing greater support for AI ecosystems (e.g. for the rapid development of AI applications through AutoML) and further developing ethical guidelines for the use of AI. The business community must invest broadly in AI research and development and integrate AI solutions into its products and services. At the same time, society must understand the opportunities and challenges of AI and actively participate in shaping the future of AI.

 

Current dynamics of AI research and development

Advances in scaling AI, particularly machine learning based on neural networks such as transformers, have led to impressive results in areas such as image recognition, speech processing and robotics in recent years. Large language models, such as ChatGPT, Llama, Gemini, and Le Chat from Mistral, have demonstrated that AI is capable of generating human-like text and solving astonishingly complex tasks.

The popularity and importance of AutoML have grown significantly in recent years thanks to various open-source packages and cloud services. What was primarily a research topic just a few years ago has now clearly arrived in production environments. AutoML makes it possible to automate the development of AI models, thereby increasing development efficiency. It supports developers with little AI expertise as well as highly qualified AI experts in automating tedious and time-consuming tasks in order to find better solutions faster.

Another important aspect and trend that research in Germany is focusing on is human-centred AI. This places people at the heart of AI development and aims to create AI systems that are transparent, sustainable and user-friendly. Human-centred AI requires a rethink in AI development, moving away from pure automation towards augmentation and enhancement of human capabilities. In line with European values, this is also the path that AI must take in the future.

German AI research in international comparison

In international comparison, AI research in Germany is well positioned, especially in the field of basic research. Germany ranks among the top countries in terms of the number of AI publications in relation to the number of researchers. Germany has a long tradition of AI research and boasts excellent universities and research institutes, such as the AI Centres of Excellence, as well as other strong centres such as hessian.ai, the AI Centre at RWTH Aachen University and the L3S AI Centre in the Hanover and Braunschweig region. In Germany, there is also a focus on the ethical and sustainable aspects of AI, which will be essential for AI to gain widespread acceptance. In addition, Germany has a strong industry and a leading position in the medical sector – areas in which AI solutions are already being used today. Germany has advantages over the USA and China in these sectors, as large amounts of data and expertise are available here. This potential must be leveraged.

In contrast, Germany's weaknesses in international comparison are particularly evident in investment, risk appetite and bureaucracy. Compared to the United States and China, Germany invests less in AI research and infrastructure. In addition, German culture is too often characterised by a high aversion to risk and error, which inhibits the development and application of AI innovations. Last but not least, bureaucracy in Germany makes it difficult to establish and grow AI start-ups.

Above all, investment in AI research and development must be significantly increased in order to keep pace internationally, for example by promoting AI clusters and providing venture capital for AI start-ups. Education and training in the field of AI must also be expanded across all disciplines in order to meet the demand for AI experts. Ultimately, the ethical and sustainable aspects of AI must also be taken into account in all phases of development and application. This can be achieved, for example, through the further development and implementation of AI standards and the promotion of research in the field of ethical and sustainable AI.

Harnessing the potential of AutoML, human-centred AI and sustainability

In order to get ahead in the global AI race, Germany should take targeted measures and make proper use of its potential. Automated machine learning (AutoML), human-centred AI and sustainability are important aspects of the future of AI in Germany.

AutoML can accelerate and simplify the development of AI models, which is particularly beneficial for small and medium-sized enterprises, as it eliminates the need to build huge AI teams. By automating tasks such as model selection and hyperparameter optimisation, companies can develop and deploy AI solutions faster and more cost-effectively.
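What AutoML automates here can be illustrated with a deliberately tiny sketch (the objective, search space and numbers are invented for the example): a random search tries training configurations and keeps the one with the lowest loss, which is the core loop behind automated model selection and hyperparameter optimisation.

```python
import random

def train(lr, steps):
    """Toy 'training run': gradient descent on f(w) = (w - 3)^2, starting at w = 0."""
    w = 0.0
    for _ in range(steps):
        w -= lr * 2 * (w - 3)
    return (w - 3) ** 2  # final loss: lower is better

def random_search(space, trials, seed=0):
    """Sample configurations at random and keep the one with the lowest loss."""
    rng = random.Random(seed)
    best_cfg, best_loss = None, float("inf")
    for _ in range(trials):
        cfg = {name: rng.choice(values) for name, values in space.items()}
        loss = train(**cfg)
        if loss < best_loss:
            best_cfg, best_loss = cfg, loss
    return best_cfg, best_loss

# Invented search space over two hyperparameters.
space = {"lr": [0.001, 0.01, 0.1, 0.5], "steps": [5, 20, 50]}
best_cfg, best_loss = random_search(space, trials=60)
```

Real AutoML systems replace the toy objective with actual model training and use smarter search strategies such as Bayesian optimisation, but the contract is the same: a search space in, the best-performing configuration out.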

Human-centred AI ensures that AI systems meet people's needs and values and contribute to improving quality of life, both on an individual basis and for society as a whole. Rather than replacing humans with machines, the aim is to develop AI systems that complement and enhance human capabilities.

Sustainable AI solutions can help to overcome the environmental, economic and social challenges of our time. AI can be used, for example, to optimise energy consumption, increase resource efficiency and create sustainable supply chains.

The future of AI in Germany is promising. Through targeted measures and by harnessing the potential of AutoML, human-centred AI and sustainability, Germany can take a leading role in the global AI race and use AI technology for the benefit of society. A national AI strategy that promotes cooperation between politics, business and society is key to realising this vision. Only by acting together can we ensure that AI is used responsibly and for the benefit of all in Germany.


Europe needs a strong network of AI centres

“If Europe now understands the urgency of significantly improving and expanding the network of AI centres in Europe and invests in human and computing resources, then the dream that most AI researchers have probably cherished for a long time could come true and Europe could position itself well between the US and China.”

Prof. Dr. Katharina Morik | TU Dortmund/Lamarr Institute

Germany and Europe are very well positioned in research. This applies to the entire spectrum of machine learning. In Germany, the open source large language model (LLM) Teuken-7B has just been published, which has been trained for all European languages. But LLMs are not everything. I see them as an interface to many different services. Just as the internet only made its breakthrough thanks to the WWW interface and the smartphone thanks to its uniform ‘app’ design, I also see LLMs as enablers for many applications. Approaches that trigger action systems via LLM are particularly exciting – from robots in the narrower sense to automatic experiments. These optimise machine learning itself, but also offer applications of practical relevance, particularly in medicine and chemistry.

German companies are still hesitant when it comes to commercial implementation, even though they could benefit enormously from the use of AI. Their data is a treasure trove that, processed appropriately, can yield better LLMs for specific applications. This is where Europe could trump the US and China.

Germany is particularly strong in federated learning and the closely related field of edge AI. Edge AI offers an invaluable advantage, especially in production and logistics: embedded systems run machine learning so that data can be collected directly on site in real time and used to optimise actions. This requires resource-efficient algorithms and the integration of computer architectures in the sense of software-hardware co-design.
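The core idea of federated learning mentioned above can be illustrated with a minimal sketch: each site trains on its own data and shares only model parameters with a central server, which averages them (the FedAvg scheme). The tiny linear model, learning rate and synthetic "factory" data below are illustrative assumptions, not a description of any real deployment.

```python
def local_update(weights, data, lr=0.1, epochs=5):
    """One client's SGD steps on its private data, fitting y = w*x + b."""
    w, b = weights
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return w, b

def federated_average(client_updates):
    """The server aggregates client parameters by simple averaging."""
    n = len(client_updates)
    w = sum(u[0] for u in client_updates) / n
    b = sum(u[1] for u in client_updates) / n
    return w, b

# Two sites, each holding measurements from the same process y = 2x + 1.
# The raw data never leaves the site; only (w, b) is exchanged.
client_a = [(1.0, 3.0), (2.0, 5.0)]
client_b = [(3.0, 7.0), (4.0, 9.0)]

weights = (0.0, 0.0)
for _ in range(50):  # communication rounds
    updates = [local_update(weights, d) for d in (client_a, client_b)]
    weights = federated_average(updates)

print(f"learned w = {weights[0]:.2f}, b = {weights[1]:.2f}")
```

Real systems (and resource-constrained edge hardware) add compression, secure aggregation and heterogeneous local schedules, but the privacy argument is already visible here: the server only ever sees averaged parameters.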


Open source is important for development progress. But not when it comes to data. Europe, and Germany in particular, should keep a close eye on its data. Thanks to its multilingualism and strong industrial production, Europe – and Germany – has a decisive advantage that we should not relinquish.

The AI Act can be a unique selling point that speaks in favour of EU products. Germany and Europe are very well positioned in terms of the explainability and trustworthiness of AI. It is now important to implement certifications quickly and in a practical manner. There are already very well-developed process models for companies, for example at Fraunhofer IAIS. There are libraries for robustness or energy scoring for the individual steps in this process. For the automatic generation of tests for trained models and the comprehensible presentation of test results, there is, for example, the Care Label Framework from the Lamarr Institute. Further scaling is important here, and with it the development of many best practice examples.

The promotion of machine learning requires better structures.

The idea of establishing a European network of talent is not new. It began with the networks of excellence, in which machine learning laboratories could meet for targeted exchanges on research topics, projects, teaching and training – for example at ECML and later ECML PKDD. The networks of excellence were a catalyst for community building.

The next step was the idea of centres for machine learning. Each centre combines excellent research, international networking, best practice, graduate schools and computer resources for experiments. Close cooperation with companies is intended to inspire new industrial applications and start-ups. Talents are also trained here and promoted in diverse careers as top performers – not least through good permanent employment contracts. Networking these hubs will facilitate the exchange of students and lectures, promote collaboration in research and enable the sharing of algorithms, codes and data. This idea was launched in 2018 with the AI centres in Germany and France.

However, various framework conditions are hindering further progress even before the actual goals have been achieved: On the one hand, sufficient computing resources must be available for the further expansion of the hubs, spanning a variety of computer architectures that enable experimental environments. Integrating these into networked hubs would amount to the much-vaunted 'CERN of AI'.

On the other hand, the integration of companies for best-practice studies and the promotion of start-ups is not sufficiently supported and is too fragmented in the application process. Long-term perspectives and corresponding positions are needed here. This also applies to training courses at companies, schools and training workshops, which are part of a hub's remit. KI.NRW is an example of a contact point of the kind that already exists in many places – but often only as temporary individual projects.

Finally, the international and regional networks that each hub has established must be adequately and agilely promoted. It's about more than just nice meetings and photos for the press!

If Europe now understands the urgency of significantly improving and expanding the network of AI centres in Europe and invests in human and computer resources, then the dream long cherished by most AI researchers could come true and Europe could position itself well between the US and China.

Generative AI and human expertise must interact meaningfully

"Human expertise and the ability to critically evaluate and correct generated content are indispensable. We must ensure that generative AI and human expertise interact meaningfully in a co-creation process to guarantee the reliability and quality of generated content."

Prof. Dr. Ute Schmid | University of Bamberg / Bavarian Research Institute for Digital Transformation

No other technology has ever seen so much innovation in basic research, technology development and new applications at the same time as generative AI. I see new developments in research particularly in the combination of large language models (LLMs) with classic AI methods: Agentic LLMs use methods from the field of multi-agent systems, which make it possible to solve complex tasks in a goal-oriented and dynamic way. Retrieval Augmented Generation (RAG) combines LLMs with knowledge-based methods, which can increase the accuracy and robustness of generated content.
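The RAG pattern described above can be sketched in a few lines: before the language model answers, relevant documents are retrieved from a knowledge base and placed into the prompt, so the answer is grounded in external knowledge rather than model parameters alone. The word-overlap scoring, the toy knowledge base and the prompt template below are simplifying assumptions; production systems use vector embeddings for retrieval and an actual LLM call for generation.

```python
# Toy knowledge base; real systems index large document collections.
KNOWLEDGE_BASE = [
    "Teuken-7B is an open-source LLM trained for all European languages.",
    "The AI Act regulates artificial intelligence in the European Union.",
    "Mistral is a French company developing open language models.",
]

def retrieve(query, documents, top_k=1):
    """Rank documents by word overlap with the query and return the best ones."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query, documents):
    """Place retrieved context into the prompt so the model answers from it."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

prompt = build_prompt("Which languages was Teuken-7B trained for?", KNOWLEDGE_BASE)
print(prompt)
```

The accuracy and robustness gains mentioned in the text come from exactly this step: the model is asked to answer from retrieved, checkable sources, which also makes its output easier to verify against them.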

I am pleased to see increasingly powerful European and German language models that enable greater independence from US and Chinese companies. Models such as the French Mistral and the German Teuken-7B are also open source, multilingual and compliant with European data protection rules. Key challenges include rising energy requirements and a decline in the quality of generated content. This observed decline in reliability and quality stems in particular from the fact that more and more generated content that has not been checked by humans is flowing back into the models.


While 2024 saw strong growth in the use of generative AI in marketing and customer communications, I hope to see more developments in production, science and software development in 2025. Current research in the field of large action models (LAMs) could contribute to making it possible to control robots and complex systems using natural language. Scientific LLMs can become helpful tools in research, for example in drug development.

In a current project at the Bavarian Research Institute for Digital Transformation (bidt), we are investigating LLMs for generating code from natural-language specifications. One challenge is how to evaluate the quality of generated code; another is the possible loss of competence among computer science students. We must ensure that generative AI and human expertise interact meaningfully in a co-creation process. If content generated by one system is used or evaluated by another generative AI system without human review, this can lead to a dangerous and absurd detachment from reality.

When it comes to highly specialised products such as machine control systems, medicines or programme code, critical review of generated content can only take place if the reviewers themselves are highly competent in the relevant field. Accordingly, new didactic concepts are becoming relevant in all degree programmes that meaningfully support the acquisition of skills in the context of using generative tools. Human expertise and the ability to critically assess and correct generated content are indispensable. This is a huge challenge for our education system.

Will AI solve our skilled labour shortage?

"The use of AI offers a wide range of potential for alleviating skills shortages: it can increase employee productivity and help to integrate potential employees who would otherwise remain excluded. However, its successful use in the world of work is subject to a number of conditions."

Prof. Dr. Dr. h. c. Christoph M. Schmidt | RWI – Leibniz-Institut für Wirtschaftsforschung


Increasing productivity in employment relationships

The baby boomers of the post-war period will leave the labour market in this decade, depriving it of millions of experienced skilled workers. Many industries are already experiencing serious skills shortages, but this is likely to be just the beginning. However, the use of AI can counteract this shortage of skilled workers. Firstly, it will be possible to enrich or replace activities in existing employment relationships. This will free up employees' time so that they can be deployed for other, more productive and often more personally meaningful activities. AI can also be applied at the level of the employees themselves and increase their productivity by significantly expanding their skills or by using AI to promote knowledge transfer within the company.

Activating the potential of the domestic and immigrant workforce

Secondly, AI offers considerable potential for integrating people into the world of work who are currently excluded from it. There is a large domestic pool of potential employees who are not currently participating in working life, even though they would fundamentally be capable of doing so. For example, they may have low productive capacity after a long period of unemployment, or suffer from mental or physical impairments. AI could help them find employment suited to their capabilities, provide tailor-made training to expand their currently insufficient skills, or even create new jobs with profiles tailored to their needs and abilities.

Thirdly, targeted immigration of skilled workers and talented individuals willing to undergo training is an important lever for overcoming the shortage of skilled workers. To attract these potential immigrants to our economy, however, they must first be recruited. AI can help identify who is suitable for the companies seeking employees and significantly speed up decisions on visa issuance, work permits and the recognition of educational qualifications. Finally, AI can support the acquisition of skills on site, especially language acquisition.

Successful use of AI requires many prerequisites

However, using all these levers to alleviate future skills shortages requires many prerequisites, and all spheres of the economy and society are equally challenged. Companies must adapt their work processes, management guidelines and entire corporate culture to this new world of work with AI, in which they bear more responsibility for the individual skills development of their employees. On the other hand, employees must be prepared to learn completely new things and adapt to radically changed work processes. And while politicians must cushion the social consequences of structural change, they are also called upon to spur it on. In all of this, the aim cannot be to introduce the perfect solution in one fell swoop, but rather to set out on a journey, try out many things and learn from failures. The recipe for success is therefore ‘pragmatism instead of striving for perfection’.


Further assessments

Prof. Dr. Irene Bertschek

Prof. Dr. Gitta Kutyniok

Prof. Dr. Ahmad-Reza Sadeghi