CAPE Update #3: Software Architecture

CAPE’s Software Architecture: Towards an Automated, Open, and Intelligent Edge-Cloud Continuum

Written by: Milad Afzal (HIRO)

Overview

The Edge-Cloud landscape in Europe is transforming. Applications across energy, mobility, satellite data processing, industry, and critical infrastructure increasingly require low latency, strong privacy, and data sovereignty, and demand intelligent decision-making at the edge. CAPE is meeting this shift by designing a software architecture that unifies cloud-native automation with the performance and resilience of modern edge computing systems.

This architecture integrates distributed orchestration, cognitive Infrastructure-from-Code, and hardware-aware management, operating across highly heterogeneous environments: COM-HPC compute modules mixing x86, ARM, and RISC-V processors with hardware accelerators such as FPGAs and GPU SoCs. Applications with high memory requirements can leverage CXL memory expanders across our embedded hardware platforms. The goal is clear: provide an ecosystem that can deploy, manage, and optimize applications wherever they run best, automatically and with minimal human intervention.

One of the key accomplishments so far is CAPE’s multi-site orchestration capability. Modern edge workloads are rarely confined to a single location. AI inference, grid anomaly detection, Earth-observation data pipelines, and industrial automation tasks increasingly rely on distributed execution to mitigate peak loads. CAPE’s workflow engine now coordinates processes across multiple hardware platforms, spanning CAPE’s specialized platforms and commodity (cloud) servers. Tasks are dynamically assigned to server nodes based on cost, latency, resource availability, and application requirements.

Orchestration

At the core of this orchestration model is the Sovereign European Cloud API (SECA), which acts as a standardized control interface across heterogeneous cloud and edge environments. SECA enables CAPE to expose uniform lifecycle, resource, and orchestration capabilities while remaining cloud-agnostic and aligned with European sovereignty requirements.

The following illustration provides an overview of CAPE’s software architecture, showing how cognitive components, Infrastructure-from-Code tooling, orchestration services, and heterogeneous execution environments interact across the edge-cloud continuum. It highlights the standardized interfaces that enable multiple deployment paths, from manual and declarative workflows to fully agentic, intent-driven provisioning.

Software Architecture of the CAPE Platform

Cognitive Infrastructure from Code

Beyond orchestration, CAPE introduces a significant innovation: cognitive Infrastructure-from-Code (IfC). Instead of manually writing low-level deployment configurations, operators can describe requirements in natural language. CAPE’s agentic pipeline interprets the intent, generates infrastructure logic using cloud-native tools such as Pulumi/Nitric, validates the results, and executes the deployment. These generated configurations are executed through SECA-compliant interfaces, ensuring consistent behavior across cloud, edge, and on-premise deployments. This approach reduces operational complexity and brings autonomous behavior closer to the edge.

The accompanying diagram illustrates the different usage paths supported by CAPE, ranging from Infrastructure-as-Code workflows (1) and direct infrastructure definitions (2) to fully agentic pipelines (3) that translate user intent into executable configurations.
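The agentic path can be sketched, in heavily simplified form, as a chain of stages: interpret the intent, generate infrastructure logic, validate it, and only then execute. The stage functions and config schema below are hypothetical stand-ins for illustration, not CAPE's actual pipeline or the SECA API:

```python
def interpret_intent(text: str) -> dict:
    """Stand-in for the LLM step: extract structured requirements from free text.
    A real implementation would call a language model; here we pattern-match one case."""
    req = {"replicas": 1, "gpu": False}
    if "gpu" in text.lower():
        req["gpu"] = True
    for word in text.split():
        if word.isdigit():
            req["replicas"] = int(word)
    return req

def generate_config(req: dict) -> dict:
    """Stand-in for the generation step (e.g. emitting a Pulumi/Nitric program)."""
    return {
        "kind": "Deployment",
        "replicas": req["replicas"],
        "node_selector": {"accelerator": "gpu"} if req["gpu"] else {},
    }

def validate(config: dict) -> bool:
    """Reject configs that violate simple policy before anything is deployed."""
    return 1 <= config["replicas"] <= 100

def deploy(config: dict) -> str:
    """Stand-in for execution through a SECA-compliant interface."""
    return f"deployed {config['replicas']} replica(s)"

intent = "Run 3 replicas of the inference service on GPU nodes"
config = generate_config(interpret_intent(intent))
assert validate(config)
result = deploy(config)
```

The key design point is that validation sits between generation and execution: whatever the language model produces, only configurations that pass policy checks ever reach the infrastructure.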

The software layers integrate seamlessly with CAPE’s heterogeneous hardware ecosystem. The platform incorporates next-generation technologies such as PCIe Gen6 switching, pooled CXL memory, ARM and RISC-V server processors, FPGA accelerators, and GPU SoCs on modular COM-HPC compute nodes. CAPE’s orchestration and deployment tools automatically discover available resources and assign workloads according to hardware capabilities, a crucial requirement for workloads such as deep learning, anomaly detection, and satellite imagery processing.
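Capability-based matching can be illustrated with a toy inventory. The node names and fields below are invented for this example and do not reflect CAPE's actual discovery schema:

```python
# Hypothetical inventory entries, as a discovery service might report them.
inventory = [
    {"node": "comhpc-01", "cpu": "riscv", "accelerators": ["fpga"], "cxl_mem_gb": 512},
    {"node": "comhpc-02", "cpu": "arm",   "accelerators": ["gpu"],  "cxl_mem_gb": 0},
    {"node": "cloud-01",  "cpu": "x86",   "accelerators": [],       "cxl_mem_gb": 0},
]

def nodes_with(capability: str, inventory: list[dict]) -> list[str]:
    """Return the names of nodes offering a given accelerator capability."""
    return [n["node"] for n in inventory if capability in n["accelerators"]]

# A deep-learning workload asks for a GPU; an imagery pipeline asks for an FPGA.
gpu_nodes = nodes_with("gpu", inventory)
fpga_nodes = nodes_with("fpga", inventory)
```

The same filtering idea extends to CXL memory capacity, CPU architecture, or any other discovered attribute.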

CAPE’s pathway to resource provision and server management

Server Management

Managing this complexity requires a unified operational layer, which CAPE provides through openMPMC, an open, Redfish-compatible management interface. openMPMC offers low-level control including power management, firmware interaction, OS mounting, hardware inventory, telemetry, fan and thermal monitoring, and PCIe/CXL topology inspection. This management layer integrates directly with orchestration and cognitive tools, enabling CAPE to automate everything from powering on a node to deploying applications at scale.
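Because openMPMC is Redfish-compatible, clients can drive it with standard Redfish requests. The sketch below builds a Redfish-style power-on action; the endpoint path follows the DMTF Redfish convention, while the host and system id are assumptions for illustration:

```python
import json

# Assumed base URL and system id; a real deployment would use its own endpoint.
BASE = "https://bmc.example/redfish/v1"
SYSTEM_ID = "Node0"

def power_on_request(system_id: str) -> tuple[str, str]:
    """Build the URL and JSON body for a Redfish ComputerSystem.Reset action."""
    url = f"{BASE}/Systems/{system_id}/Actions/ComputerSystem.Reset"
    body = json.dumps({"ResetType": "On"})
    return url, body

url, body = power_on_request(SYSTEM_ID)
# A real client would now POST `body` to `url` with authentication,
# e.g. via urllib.request or an HTTP library; that step is omitted here.
```

The same request pattern covers the rest of the management surface (telemetry, inventory, firmware), which is what lets orchestration and cognitive tools script node lifecycles uniformly.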

System architecture of CAPE’s openMPMC server management platform

Outlook

Looking ahead, CAPE is expanding its cognitive automation models, focusing on lightweight on-premises LLMs, automated troubleshooting, security-aware configuration validation, and optimized workload placement that considers latency, performance, and energy usage. The orchestration layer is evolving to become CXL-aware, enabling memory pooling and zero-copy workflows for data-intensive applications such as satellite imagery pipelines or multi-modal AI workloads.

The project’s long-term trajectory is toward a fully automated Edge-Cloud lifecycle. This includes powering on a node, provisioning its OS, integrating it into Kubernetes clusters, deploying workflows, and continuously optimizing their performance, all guided by orchestration logic and cognitive intelligence. What emerges is one of Europe’s first self-managing Edge-Cloud platforms, capable of supporting production-grade deployments in the energy, industrial automation, aerospace, mobility, and telecommunication sectors.

CAPE’s software architecture plays a central role in realizing this ambition. With advancements in distributed orchestration, Infrastructure-from-Code, intelligent scheduling, and unified hardware management, the platform is evolving into a powerful foundation for Europe’s sovereign, high-performance digital infrastructure. As CAPE moves into its next phase, the combination of automation, cognitive tooling, and modular hardware integration will continue to push the boundaries of what is possible at the edge.
