Research & Innovation Roadmap

This Research & Innovation Roadmap, developed under the NexusForum.EU project and supported by Horizon Europe, outlines a research and innovation trajectory towards achieving a federated European multi-provider, AI-driven computing continuum, to enable and support data-driven innovation and AI deployments in Europe. It complements the technology developments of the European Alliance for Industrial Data, Edge and Cloud and the new IPCEI-CIS, providing a long-term vision bridging industry needs with excellent research in cloud, edge, and AI technologies.

A European Perspective: Markets, Laws, and Initiatives

The report The future of European competitiveness assesses the current state of European competitiveness, with a particular focus on industrial policy, its caveats, and hopes for its future. Geopolitical instability has made reducing dependence on other countries more urgent, as the EU has realised that dependency easily engenders instability. There has been a paradigm shift: “The era of rapid world trade growth looks to have passed, with EU companies facing both greater competition from abroad and lower access to overseas markets”. For digitalisation and decarbonisation to take place in the European economy, Europe must change radically, and investment needs to rise to levels last seen in the 1960s and 1970s.

For growth to take place, three key areas have been identified to close the innovation gap with the US and China: i) activating a joint plan aimed at strengthening competitiveness; ii) decarbonising the economy; and iii) taking action to enhance security while reducing dependencies on third parties. In particular, three main barriers prevent Europe from growing.

The European Computing Continuum is one of the essential elements for the digital and green transition to take place within the EU and beyond. The work of NexusForum serves as a facilitation instrument that, on the one hand, assesses the current landscape at the technological, economic, and legal levels and, on the other, by engaging internally with academic and technical experts and externally with industry experts, policy makers, and others, advises the European Commission on how to move forward.

Mario Draghi’s report The future of European competitiveness cites the lack of innovation as one of the reasons why the EU’s industry lacks dynamism. More specifically, it argues that the problem lies in the lifecycle of innovation, from the initial product idea to its commercialisation. First, companies encounter financial barriers due to the quantity and quality of funding dedicated to high-risk, breakthrough Research and Innovation (R&I). The EU capital market is smaller than that of the US, and its venture capital sector is far less developed: of global venture capital funds, the EU raises only around 5%, while the US accounts for more than half (52%), followed by China with a 40% share. Companies willing to scale up in Europe therefore often seek growth opportunities abroad, in markets where they can achieve greater reach and higher returns; in response, the demand for venture capital finance in Europe shrinks, feeding a vicious circle. As a result, Europe lags behind in innovative technologies that have the power to drive productivity growth. For cloud technology, the numbers are loud and clear: three US tech giants (“hyperscalers”) account for more than 65% of the global cloud market.

The US spends roughly twice as large a share of GDP on Research and Innovation as the EU. This innovation gap is directly linked to the gap in productive investment between the European and US economies, shown in the figure below.

Productive investment over two decades, EIB 2024

From a European legal context, the digital ecosystem is shaped by several key regulations and initiatives aimed at ensuring a fair, competitive, and secure digital environment. Thus, during the last decade Europe has been developing a regulatory framework aimed at establishing the future European Digital Single Market. Ensuring fair competition and fostering innovation, creating a safer digital space, giving individuals greater control over their data, and promoting digital sovereignty, inclusion, equality, and sustainability are among the primary objectives Europe aims to achieve with this regulatory framework.

Within this context, the European Computing Continuum and its associated activities play a crucial role in establishing relevant requirements and guiding the needs of the regulatory landscape. Legislation shapes the European Computing Continuum by:

  • Ensuring robust data protection measures across all stages of data processing.
  • Encouraging the development and deployment of AI systems that are transparent, fair, and accountable.
  • Creating a more open and competitive data economy, enabling innovation and collaboration.

These laws collectively aim to create a secure, fair, and innovative digital environment in Europe, balancing the need for data protection with the benefits of data-driven innovation.

The current regulatory landscape in the European Cloud-Edge Continuum is summarised in the figure below, which shows the timeline and impact of key regulations. These regulations are further described under the figure and can be roughly grouped into regulations aiming to i) create a digital single market in Europe, ii) regulate how data and AI are used in Europe, and iii) safeguard the European digital environment.

To address the technical challenges, the EU has launched several initiatives to develop foundational cloud technologies in Europe, support the development of AI in Europe, and incentivise data sharing between organizations in Europe.

Towards a European Cognitive Computing Continuum

The progressive convergence between Cloud Computing and the Internet of Things (IoT) is resulting in a Computing Continuum. The multi-faceted concept of Edge Computing first came to represent a middle ground between data centres and hyper-local IoT networks of sensors and actuators (H-CLOUD project, 2020).

The Compute Continuum can be viewed in a multi-dimensional manner. One dimension is technical: AI for Cloud-Edge, Cloud-Edge for AI, and Telco Cloud-Edge. A second dimension views the Compute Continuum through the lens of open-source code, digital sovereignty, interoperability, and cybersecurity, as well as energy consumption. A third dimension covers markets, rules, policies, and regulation; these aspects are particular to Europe, with its stricter rules on the use of personal information, AI, and data. A fourth dimension is the missing components: this deliverable, the roadmap, outlines some of the components Europe still needs in order to build a competitive compute infrastructure.

Challenges and opportunities: 

  • Fragmented datacentre and cloud market (missing segments)
  • EuroHPC JU and AI Factories
  • Establishment of a European ecosystem based on RISC-V open standard
  • Convergence of networking and compute and O-RAN
  • Convergence of Operational Technologies and Information Technologies
  • Data privacy and access to European data

Read more on this in the full Research & Innovation Roadmap.

Roadmap main topics

The section below provides an overview of the main topics and subtopics that form part of the Research & Innovation Roadmap.

Transversal topics

Digital sovereignty

Open source

Sustainability

Cybersecurity

Interoperability

AI for Cloud-Edge: Orchestration and Managing a Multi-Provider Continuum

Sustainable, energy-efficient, and energy-grid-aware Compute Continuum
The rapidly growing adoption of cloud-edge computing and AI services across Europe will lead to a sharp increase in the energy demand of data centres in the coming years. As the share of global energy used by data centres grows, the carbon footprint of the computing continuum will play an increasingly important role in achieving ambitious climate goals.
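As a minimal sketch of what grid awareness could mean in practice, the hypothetical Python example below defers a batch job to the forecast window with the lowest carbon intensity; the forecast numbers and the Window structure are invented for illustration, and a real scheduler would also weigh deadlines, energy prices, and data locality.

```python
from dataclasses import dataclass

@dataclass
class Window:
    start_hour: int            # hour of day the window begins
    grams_co2_per_kwh: float   # forecast grid carbon intensity

def pick_greenest_window(forecast: list[Window], earliest: int, latest: int) -> Window:
    """Choose the lowest-carbon window in which a deferrable job may still start."""
    candidates = [w for w in forecast if earliest <= w.start_hour <= latest]
    return min(candidates, key=lambda w: w.grams_co2_per_kwh)

# Illustrative forecast: overnight wind power lowers grid carbon intensity.
forecast = [Window(h, g) for h, g in [(18, 420.0), (22, 310.0), (2, 140.0), (6, 260.0)]]
best = pick_greenest_window(forecast, earliest=0, latest=23)
print(f"Schedule batch job at {best.start_hour}:00 ({best.grams_co2_per_kwh} gCO2/kWh)")
```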
AI-assisted operation and maintenance of large-scale systems
Systems are becoming ever more complex, whether through new functionality bolted onto legacy systems or gradual functionality creep. Many of the systems that millions of people rely on are so complex that they will never be completely understood, and they are often maintained by large teams of support staff.
New distributed data processing paradigms for the Compute Continuum
Data is increasingly generated by devices and systems at the edges of the network. Continuous analytics of such data streams will require data management and analytics solutions that work in highly distributed environments. In use cases where these analytics workflows depend on previous data, or where the processing is distributed between edge sites, there is a need for efficient state synchronization mechanisms.
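One family of techniques often proposed for such state synchronization is conflict-free replicated data types (CRDTs), which let edge sites update state independently and merge later without coordination. The sketch below shows a minimal grow-only counter; the site names are illustrative.

```python
class GCounter:
    """Grow-only counter CRDT: each site increments only its own slot;
    merging takes the per-site maximum, so merges commute and all
    replicas converge regardless of synchronization order."""

    def __init__(self, site: str):
        self.site = site
        self.counts: dict[str, int] = {}

    def increment(self, n: int = 1) -> None:
        self.counts[self.site] = self.counts.get(self.site, 0) + n

    def merge(self, other: "GCounter") -> None:
        for site, n in other.counts.items():
            self.counts[site] = max(self.counts.get(site, 0), n)

    def value(self) -> int:
        return sum(self.counts.values())

# Two edge sites count events independently, then synchronize.
a, b = GCounter("edge-a"), GCounter("edge-b")
a.increment(3); b.increment(5)
a.merge(b)
assert a.value() == 8
```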
Safety-critical applications in the computing continuum
Safety-critical applications refer to systems whose failure or malfunction could lead to significant harm to people, environment, or infrastructure. In the context of the Cognitive Computing Continuum, these include applications such as autonomous transportation, critical healthcare monitoring, industrial automation, smart grids, and emergency response systems. Ensuring the reliability, availability, and safety of these applications is critical as computing resources become more distributed across cloud and edge environments.
The continuum performance: cross-layer optimisation
In a multi-provider Cloud-Edge continuum, cloud optimisation becomes a distributed optimisation problem in which each entity has only partial information and limited possibilities to adjust operational parameters. A layer in an IT system, often standardised, implies the same behaviour across separate subsystems, and each layer has an identifiable functionality that designers and users depend upon. By optimising across layers, performance gains can be achieved, but at the cost of interoperability.
Towards a Hyper-decentralized Computing Continuum
A hyper-decentralized computing continuum involves distributing computational resources and data management across a large number of independent nodes, including local edge nodes, local data centers, and cloud platforms. Hyper-decentralization aims to enable dynamic and peer-to-peer collaboration among these nodes without relying on a central authority or single point of failure.
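A common building block for coordination without a central authority is gossip-style communication, in which nodes repeatedly exchange state with randomly chosen peers. The sketch below, with invented load values, averages a metric across nodes this way; every local estimate converges towards the global mean with no coordinator involved.

```python
import random

def gossip_average(values: list[float], rounds: int = 50, seed: int = 42) -> list[float]:
    """Push-pull averaging gossip: each round, two random nodes split the
    difference between their local estimates. Estimates converge to the
    global mean without any central authority or single point of failure."""
    rng = random.Random(seed)
    values = values[:]                       # do not mutate the caller's list
    for _ in range(rounds):
        i, j = rng.sample(range(len(values)), 2)
        mean = (values[i] + values[j]) / 2
        values[i] = values[j] = mean
    return values

loads = [0.9, 0.1, 0.5, 0.7]   # local load observed at four nodes
print(gossip_average(loads))   # all estimates approach the true mean, 0.55
```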

Cloud-Edge for AI: Enabling and Facilitating AI Applications Across the Continuum

Portable AI applications and an open AI ecosystem
Cost and ease of use are major factors when choosing cloud service providers. For many organisations, the high costs and specialised expertise required to develop, operate, and maintain their own technology stack for building and deploying AI applications are a major challenge.
Integrating HPC in the Cognitive Computing Continuum
With a budget of EUR 7 billion for the period 2021-2027, the EuroHPC JU coordinates efforts and pools resources across Europe to develop and maintain a “world-leading federated, secure and hyper-connected supercomputing, quantum computing, service and data infrastructure ecosystem,” to boost scientific excellence and industrial strength in Europe. The AI Innovation Package further establishes the important role of the EuroHPC supercomputer ecosystem in the future development of general-purpose AI in Europe, through the development and operation of AI Factories and supercomputers optimised for AI.
RISC-V in the Cognitive Compute Continuum
Over the past decade and a half there has been significant investment in implementing European processors targeting HPC as well as embedded and IoT applications. The EU's goal is the production of cutting-edge and sustainable semiconductors in Europe, with at least 20% of world production in value by 2030, including manufacturing capacities below 5 nm nodes, aiming at 2 nm, and with the aim of improving energy efficiency by a factor of 10. To achieve this, the RISC-V ISA plays a central role in the EU's strategy.
A Middleware Toolkit for the Continuum
The Compute Continuum will almost certainly be a heterogeneous set of hardware and software systems. Running applications across different architectures, such as ARM, Intel (x86), and RISC-V, and across differing operating systems makes it hard to find a one-size-fits-all solution for software developers coding in the continuum; in large organisations, specialised teams (e.g., for iOS and Android) are needed. Tools applying the Infrastructure as Code (IaC) approach, such as Ansible, may help, but are still often too error-prone.
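As an illustration of the portability problem such a toolkit would have to hide, the hypothetical sketch below resolves a prebuilt artifact for the node it runs on; the registry URL and the mapping table are invented, and a real middleware toolkit would resolve this from a signed manifest rather than a hard-coded dictionary.

```python
import platform

# Hypothetical artifact registry mapping (CPU architecture, OS) to a prebuilt
# image; a real toolkit would resolve this from a signed manifest, not a dict.
ARTIFACTS = {
    ("x86_64", "Linux"):  "registry.example.eu/app:amd64",
    ("aarch64", "Linux"): "registry.example.eu/app:arm64",
    ("riscv64", "Linux"): "registry.example.eu/app:riscv64",
}

def resolve_artifact() -> str:
    """Pick the build matching the node this code runs on."""
    key = (platform.machine(), platform.system())
    try:
        return ARTIFACTS[key]
    except KeyError:
        raise RuntimeError(f"no prebuilt artifact for {key}") from None

print(resolve_artifact())
```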
Operationalization of future AI systems in the Computing Continuum
The implementation of AI techniques in the computing continuum is not only desirable but also necessary to support future applications and to allow the continuum to keep evolving. Certain requirements, such as power consumption or storage, make the application of AI techniques particularly difficult for certain computing continuum components, such as Internet of Things devices.
Federated computation for Foundational Models
Federated computation could play a significant role in the future of AI training by enabling more efficient and privacy-preserving methods to train foundational models. The creation of foundation models, of which large language models are a prominent example, has changed the methodology of how AI can be created. These models can be adapted to a wide variety of use cases with much higher productivity than before.
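To make the federated idea concrete, the sketch below implements the aggregation step at the heart of federated averaging (FedAvg): clients train locally on private data, and only their parameter updates, weighted by dataset size, leave the premises. Model weights are plain Python lists here for brevity.

```python
def fed_avg(client_weights: list[list[float]], client_sizes: list[int]) -> list[float]:
    """Combine locally trained parameter vectors into a global model,
    weighting each client by the amount of data it trained on.
    Raw training data never leaves the clients."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[k] * n for w, n in zip(client_weights, client_sizes)) / total
        for k in range(dim)
    ]

# Three clients return updated weights after a round of local training.
updates = [[0.2, 1.0], [0.4, 0.8], [0.3, 1.2]]
sizes = [100, 300, 600]
print(fed_avg(updates, sizes))  # weighted mean: [0.32, 1.06]
```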
Large-scale testbeds for AI services
There is a need for large-scale testbeds that can simulate real-world conditions for testing AI services, pipelines, and workflows in the computing continuum. Without such test environments, it is challenging to validate and optimise these services for use in operational conditions. There is also demand for an international testbed for “cross-border data flows”, for example with Japan and South Korea, considering the cross-border data flow deal signed between the EU and Japan.
Data privacy and security in AI services
Data-driven insights can contribute to vital decisions in many domains (e.g. crisis management, predictive maintenance, mobility, public safety, and cybersecurity). However, winning the trust of decision makers to exploit data-driven insights is still a pending issue, due to i) the fluctuating quality and volume of the data and ii) big-data processing whose outputs are not necessarily suited to decision-makers' comprehension.
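One established technique for releasing aggregate insights while protecting individuals is differential privacy. The sketch below adds calibrated Laplace noise to a count query; the epsilon values and the count are illustrative.

```python
import math
import random

def dp_count(true_count: int, epsilon: float, seed: int = 7) -> float:
    """Release a count under epsilon-differential privacy by adding
    Laplace noise. A count query has sensitivity 1 (one individual
    changes it by at most 1), so the noise scale is 1 / epsilon."""
    rng = random.Random(seed)
    scale = 1.0 / epsilon
    u = rng.random() - 0.5                   # uniform on [-0.5, 0.5)
    # Sample Laplace(0, scale) by inverse transform.
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

# Stronger privacy (smaller epsilon) means more noise on the released value.
print(dp_count(1042, epsilon=0.5))
print(dp_count(1042, epsilon=0.1))
```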
Confidential computing
Confidential computing comes into play to solve the persistent problem of data security when data are outsourced to the cloud. With the wide acceptance and adoption of the Continuum concept, including all its advantages, the security problem has been amplified and extended to all tiers of the Continuum.
Data spaces for AI
Within the EU Artificial Intelligence Act (AI Act), GenAI is considered a type of general-purpose AI model (GPAI), described as a highly capable model trained on large and diverse datasets and designed to perform a wide variety of tasks with significant generality, allowing it to be integrated into a variety of downstream systems. Data Spaces are “interoperable frameworks, based on common governance principles, standards, practices and enabling services, that enable trusted data transactions between participants”.

Telco Cloud-Edge: Telco as One of the Main Tenants and Infrastructure Providers

Open radio access network (Open RAN)
The radio access network (RAN) is mostly closed as an “open development space”, accessible only to large OEM telecommunications companies. Unlike the cloud-native world, where third parties can develop microservices and advertise, sell, or license them, the RAN is a hardware-and-software monolith that is difficult to enter and offer services within; existing licences, limited access to standards, and protective patents further restrict market entry.
Seamless data connectivity and predictive handover across different networks
Many industry sectors and use cases are increasingly adopting technologies from robotics and autonomous systems, such as drones, for various monitoring or delivery applications. In many applications they will need to cover large distances and/or operate in remote areas with limited dedicated communication infrastructure and bandwidth.
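A minimal sketch of the predictive part of such a handover, assuming a drone with a known velocity and a static table of candidate cells: extrapolate the position some seconds ahead and prepare the handover before the current link degrades. The cell positions, the distance-as-signal proxy, and the prediction horizon are all invented for illustration.

```python
import math

# Hypothetical candidate base stations: name -> (x, y) position in km.
CELLS = {"cell-A": (0.0, 0.0), "cell-B": (4.0, 0.5), "cell-C": (8.0, -0.5)}

def predict(pos, vel, horizon_s):
    """Dead-reckon the drone's position horizon_s seconds ahead."""
    return (pos[0] + vel[0] * horizon_s, pos[1] + vel[1] * horizon_s)

def best_cell(pos):
    """Proxy for link quality: the nearest cell. A real system would use
    measured and predicted signal strength, cell load, and policy."""
    return min(CELLS, key=lambda c: math.dist(pos, CELLS[c]))

pos, vel = (3.5, 0.2), (0.05, 0.0)       # position in km, velocity in km/s
serving = best_cell(pos)
upcoming = best_cell(predict(pos, vel, horizon_s=60))
if upcoming != serving:
    print(f"prepare handover {serving} -> {upcoming} before the link degrades")
```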
Generative telco cloud
The telecommunications industry worldwide, and specifically in Europe, is facing significant growth and revenue challenges. As traditional revenue streams stagnate, telcos need to improve their operational efficiency and reduce costs, which requires increasing automation and flexibility in their operations. Furthermore, the rapid development of new technologies and the demand for high-speed connectivity are driving the need for robust cloud and network infrastructure.
Resilient on-prem 5G/6G edge-cloud for Industry 4.0 and beyond
Many industries are in the process of connecting their equipment to their IT infrastructure to gather data that enables predictive maintenance, informs decision-making processes, coordinates work across the plant, and supports unmanned vehicles and robots. The seamless delivery of advanced data services and AI-based applications is integral to Industry 4.0.

Digitalisation of Industry Sectors: Requirements from Next-Gen Applications

Vehicles are becoming ‘computers on wheels’, with technological assistance for drivers that is currently often safety-related. Fully automated technology is finding its way down the tech chain: cruise control, lane-departure warnings, and automatic crash signalling (OnStar in the US, Apple devices), to name a few.

Technology extends beyond the driver: computer-based maintenance, GPS-enabled theft tracking, and electronic Vehicle Identification Numbers. An (e)SIM is commonplace in trucks, transmitting maintenance and wear information to OEMs on behalf of hauliers that demand high uptime from on-demand servicing.

The industry is also changing how software is developed, moving towards Software Defined Vehicle (SDV) architectures in which features are updated continuously and delivered over the air. Connectivity through cloud services offers new possibilities. Motivated by Tesla and Jenkins-style continuous development, the SDV in the automotive sector spans the continuum from the cloud through the edge to devices. Smart cities, with roadside units for safety, road monitoring, and autonomous driving, exemplify a rich environment for the continuum.

With Roadside Units (RSUs) acting as intermediary storage and compute units, the automotive sector illustrates the continuum well. Vehicles store and upload data via RSUs; a working example is automated toll booths, where the vehicle transmits and pays for the journey via IEEE 802.11p, a WiFi variant with priority bands. Further developments will include cameras flagging vulnerable road users (VRUs), unforeseen icy roads, or accident hot spots. RSUs demonstrate the use of edge compute, as opposed to solely edge devices. Note that data sovereignty and privacy are key issues.

Challenges: Extended reality (XR) technologies, such as augmented reality (AR) and virtual reality (VR), are expected to transform everyday life fundamentally, and enable new use cases and applications in industry. Due to the high data throughput and low latencies required to deliver a seamless VR/AR/XR experience, these technologies are among the main use cases for 5G and edge computing technologies.

R&D priorities:

  • Consistency in Virtual Environments: Develop algorithms to ensure seamless and consistent user experiences across decentralized networks.
  • Latency Reduction: Deploy and reinforce telecom and network infrastructure to minimize latency for real-time interaction within VR environments.
  • AI Integration: Integrate AI to dynamically adapt and optimize VR experiences based on user interaction and environmental changes.
  • Interoperability Standards: Establish standards to ensure interoperability among diverse VR platforms and decentralized computing resources.
  • Secure Data Exchange: Create secure protocols for spatial data exchange in decentralized VR applications to protect user privacy and data integrity.
  • Operational Datasets: Develop open operational datasets for training and evaluating AI models within VR scenarios.

Potential impact: VR/AR/XR are enabling new use cases and applications, with the potential of bringing great value to industry and enterprises.

Challenges: Originating from Japan, the concept of “Society 5.0” envisions a connected cyberspace where AI surpasses human capabilities, feeding optimal outcomes back into the physical realm. Ensuring data privacy and security on the Cloud is paramount as AI systems process vast amounts of data. High latency can hinder performance, particularly in critical applications such as healthcare or autonomous vehicles, where real-time processing is essential. Additionally, AI systems must efficiently scale with increasing data volumes and user demand to maintain optimal performance. The computational power required for AI can be costly, especially with the utilization of large neural networks, thus affecting overall expenses. Interoperability is crucial as AI systems often need to integrate with various other technologies and data sources, necessitating seamless compatibility. Moreover, regulatory compliance, particularly regarding data protection regulations like GDPR in Europe, is vital for AI systems operating on the Cloud, emphasizing the importance of cross-border data flows and specialised legal expertise. Lastly, addressing ethical considerations surrounding AI’s integration into society, alongside ensuring data quality and standardization, remains imperative to harness its full potential while mitigating risks and limitations.

R&D priorities:

  • Data Privacy and Security: There are risks associated with data breaches and unauthorized access, which necessitate robust encryption and security protocols.
  • Latency and Performance: Cloud infrastructure must be optimized to minimize latency and provide the necessary computational power.
  • Scalability: Cloud platforms need to provide flexible and scalable resources to accommodate the growth of AI applications without compromising performance.
  • Cost: Organizations must manage the cost of Cloud resources effectively to make AI integration economically viable.
  • Interoperability: Ensuring interoperability between different Cloud services and AI models requires standardized protocols and interfaces.
  • Regulatory Compliance: AI governance platforms should automate the identification of regulatory changes and their translation into enforceable policies, and support risk management and lifecycle governance.
  • Technical Expertise: Training and recruiting talent are necessary to bridge this gap and drive integration forward.
  • Ethical Considerations: Ensure that AI systems are transparent, explainable, and aligned with human values.
  • Data Quality and Standardization: Ensure that shared data is of high quality and standardized for interoperability.

Potential impact: The convergence of cyberspace and physical space envisages a seamless integration, facilitating smart cities and environments where data exchange between sensors and cyberspace optimizes resources and enhances quality of life. Society 5.0 embraces AI-driven analysis, wherein AI not only processes data but also offers feedback and solutions, augmenting decision-making. This feedback loop, empowered by AI, is anticipated to spawn new value across sectors, fostering economic growth, job creation, and societal well-being.

Disruptive impact initiatives

Neuromorphic systems
Neuromorphic computing, which aims to emulate the self-organizing and self-learning nature of the brain, offers a promising solution to this challenge. Despite its potential, neuromorphic hardware has not found its way into commercial AI data centres. One reason is the insufficient opportunity for application-oriented testing of the hardware developments required for these highly complex computing technologies, and for rapid implementation of the results in prototypes and small series. Another is the lack of standardised neuron models and common training techniques.
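To make the programming model concrete, the sketch below simulates a single leaky integrate-and-fire neuron, the elementary unit that most neuromorphic platforms implement in hardware; the constants are illustrative and not tied to any particular chip.

```python
def lif_spike_times(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: the membrane potential leaks towards
    zero, integrates the input current, and emits a spike (then resets)
    whenever it crosses the threshold. Returns the spike times."""
    v, spikes = 0.0, []
    for step, i_in in enumerate(input_current):
        v += dt * (-v / tau + i_in)        # Euler step of dv/dt = -v/tau + I
        if v >= v_thresh:
            spikes.append(step * dt)
            v = v_reset
    return spikes

# A constant drive produces a regular spike train (roughly every 11 steps here).
print(lif_spike_times([0.12] * 100))
```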
Space Edge
The European Space Agency (ESA) has launched several relevant initiatives on the topic of edge computing and AI in space. For example, one of ESA’s Phi Labs is exploring Cognitive Cloud Computing in Space (3CS), and another one focusing on edge learning in space is anticipated. Their Phi-sat programme performs experiments to deploy on-board AI on satellites for various earth observation tasks, such as filtering out high-value, for example cloud-free, images before they are sent back to earth.
Integration of quantum computing infrastructure
Quantum computing is emerging as a transformative technology with the potential to revolutionize various sectors, including cryptography, materials science, and complex system simulations. Globally, significant investments are being made to develop scalable quantum hardware and efficient quantum algorithms. Countries like the United States and China are leading the charge, aiming to achieve quantum supremacy and integrate quantum computing with existing HPC and cloud infrastructures.
Quantum and classical computing fusion
The fusion of quantum and classical computing represents a significant advancement in computational capabilities. Quantum computing uses qubits, which can process a richer set of possibilities than classical computing's binary bits, making quantum computing more efficient for certain types of problems, such as those whose complexity grows exponentially with the number of variables. The most promising algorithms are hybrid, combining quantum and classical approaches to leverage the strengths of both paradigms.
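The sketch below illustrates such a hybrid loop in the style of a variational quantum algorithm: a classical optimizer proposes parameters, a quantum device estimates an energy, and gradients come from the parameter-shift rule. The quantum expectation is mocked here by a closed-form function so the example is self-contained; on real hardware it would be estimated from repeated measurements.

```python
import math

def quantum_expectation(theta: float) -> float:
    """Stand-in for a quantum processor evaluating the energy <H> of a
    one-parameter ansatz. For a single qubit rotated by theta and measured
    against H = Z, the expectation is cos(theta); hardware would estimate
    it from repeated shots."""
    return math.cos(theta)

def variational_minimize(theta: float = 2.0, lr: float = 0.4, steps: int = 30):
    """Hybrid loop: the classical optimizer proposes parameters, the
    'quantum' side returns an energy, and the gradient comes from the
    parameter-shift rule f'(t) = (f(t + pi/2) - f(t - pi/2)) / 2."""
    for _ in range(steps):
        grad = (quantum_expectation(theta + math.pi / 2)
                - quantum_expectation(theta - math.pi / 2)) / 2
        theta -= lr * grad
    return theta, quantum_expectation(theta)

theta, energy = variational_minimize()
print(f"theta = {theta:.3f}, energy = {energy:.3f}")   # approaches pi and -1
```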

This page contains excerpts of the Research & Innovation Roadmap. Full references and in-depth analyses can be found in the full document.