diff --git a/assets/contributors.csv b/assets/contributors.csv
index ca9d8d4a27..478fe00629 100644
--- a/assets/contributors.csv
+++ b/assets/contributors.csv
@@ -94,4 +94,5 @@ Peter Harris,Arm,,,,
 Chenying Kuo,Adlink,evshary,evshary,,
 William Liang,,,wyliang,,
 Waheed Brown,Arm,https://github.com/armwaheed,https://www.linkedin.com/in/waheedbrown/,,
-Aryan Bhusari,Arm,,https://www.linkedin.com/in/aryanbhusari,,
\ No newline at end of file
+Aryan Bhusari,Arm,,https://www.linkedin.com/in/aryanbhusari,,
+Ken Zhang,Insyde,,,,
\ No newline at end of file
diff --git a/content/learning-paths/cross-platform/zenoh-multinode-ros2/_index.md b/content/learning-paths/cross-platform/zenoh-multinode-ros2/_index.md
index 39f75a71da..f51f506190 100644
--- a/content/learning-paths/cross-platform/zenoh-multinode-ros2/_index.md
+++ b/content/learning-paths/cross-platform/zenoh-multinode-ros2/_index.md
@@ -51,7 +51,7 @@ further_reading:
       type: documentation
   - resource:
       title: Zenoh and ROS 2 Integration Guide
-      link: https://github.com/eclipse-zenoh/zenoh-plugin-ros2
+      link: https://github.com/eclipse-zenoh/zenoh-plugin-ros2dds
       type: documentation
diff --git a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/1_introduction_rdv3.md b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/1_introduction_rdv3.md
new file mode 100644
index 0000000000..4a89b0ce5b
--- /dev/null
+++ b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/1_introduction_rdv3.md
@@ -0,0 +1,82 @@
+---
+title: Introducing the Arm RD-V3 Platform
+weight: 2
+
+### FIXED, DO NOT MODIFY
+layout: learningpathall
+---
+
+## Introduction to the Arm RD-V3 Platform
+
+This module introduces the Arm [Neoverse CSS-V3](https://www.arm.com/products/neoverse-compute-subsystems/css-v3) architecture and the RD-V3 [Reference Design Platform Software](https://neoverse-reference-design.docs.arm.com/en/latest/index.html) that implements it. You'll learn how these components enable scalable, server-class system design, and how to simulate and validate the full firmware stack using Fixed Virtual Platforms (FVPs), well before hardware is available.
+
+Arm Neoverse is designed to meet the demanding requirements of data center and edge computing, delivering high performance and efficiency. Widely adopted in servers, networking, and edge devices, the Neoverse architecture provides a solid foundation for modern infrastructure.
+
+Using Arm Fixed Virtual Platforms (FVPs), you can explore system bring-up, boot flow, and firmware customization well before physical silicon becomes available.
+
+This module also introduces the key components involved, from Neoverse V3 cores to secure subsystem controllers, and shows how these elements work together in a fully virtualized system simulation.
+
+### Neoverse CSS-V3 Platform Overview
+
+[Neoverse CSS-V3](https://www.arm.com/products/neoverse-compute-subsystems/css-v3) (Compute Subsystem Version 3) is the core subsystem architecture underpinning the Arm RD-V3 platform. It is specifically optimized for high-performance server and data center applications, providing a highly integrated solution that combines processing cores, memory management, and interconnect technology.
+
+CSS-V3 forms the key building block for specialized computing systems. It reduces design and validation costs for the general-purpose compute subsystem, allowing partners to focus on their specialization and acceleration while reducing risk and accelerating time to deployment.
+
+CSS-V3 is available in configurable subsystems, supporting up to 64 Neoverse V3 cores per die. It also enables integration of high-bandwidth DDR5/LPDDR5 memory (up to 12 channels), PCIe Gen5 or CXL I/O (up to 64 lanes), and high-speed die-to-die links with support for UCIe 1.1 or custom PHYs. Designs can be scaled down to smaller core-count configurations, such as 32-core SoCs, or expanded through multi-die integration.
+
+Key features of CSS-V3 include:
+
+* High-performance CPU clusters: optimized for server workloads and data throughput.
+
+* Advanced memory management: efficient handling of data across multiple processing cores.
+
+* Interconnect technology: high-speed, low-latency communication within the subsystem.
+
+The CSS-V3 subsystem is fully supported by Arm's Fixed Virtual Platform, enabling pre-silicon testing of these capabilities.
+
+### RD-V3 Platform Introduction
+
+The RD-V3 platform is a comprehensive reference design built around Arm's [Neoverse V3](https://www.arm.com/products/silicon-ip-cpu/neoverse/neoverse-v3) CPUs, along with [Cortex-M55](https://www.arm.com/products/silicon-ip-cpu/cortex-m/cortex-m55) and [Cortex-M7](https://www.arm.com/products/silicon-ip-cpu/cortex-m/cortex-m7) microcontrollers. This platform enables efficient high-performance computing and robust platform management:
+
+| Component   | Description                                                                           |
+|-------------|---------------------------------------------------------------------------------------|
+| Neoverse V3 | The primary application processor responsible for executing the OS and payloads        |
+| Cortex-M7   | Implements the System Control Processor (SCP) for power, clocks, and initialization    |
+| Cortex-M55  | Hosts the Runtime Security Engine (RSE), providing secure boot and runtime integrity   |
+
+These subsystems work together in a coordinated architecture, communicating through shared memory regions, control buses, and platform protocols. This enables multi-stage boot processes and robust secure boot implementations.
+
+Here is the Neoverse Reference Design Platform [Software Stack](https://neoverse-reference-design.docs.arm.com/en/latest/about/software_stack.html#sw-stack) for your reference.
+
+![img1 alt-text#center](rdinfra_sw_stack.jpg "Neoverse Reference Design Software Stack")
+
+### Develop and Validate Without Hardware
+
+In traditional development workflows, system validation cannot begin until silicon is available, which introduces risk and delay.
+
+To address this, Arm provides the Fixed Virtual Platform ([FVP](https://developer.arm.com/Tools%20and%20Software/Fixed%20Virtual%20Platforms)), a complete simulation model that emulates full Arm SoC behavior on a host machine. The CSS-V3 platform is available in multiple FVP configurations, allowing developers to select the model that best fits their specific development and validation needs.
+
+Key capabilities of the FVP:
+* Multi-core CPU simulation with SMP boot
+* Multiple UART interfaces for serial debug and monitoring
+* Compatible with TF-A, UEFI, GRUB, and Linux kernel images
+* Provides boot logs, trace outputs, and interrupt event visibility for debugging
+
+FVP enables developers to verify boot sequences, debug firmware handoffs, and even simulate RSE behaviors, all before first silicon.
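+
+Once you have a model installed (installation is covered in a later module), you can query it directly from the command line. This is an optional sketch, not part of the official flow; it assumes the default install location used later in this Learning Path and the standard FVP launcher options:
+
+```bash
+# Print the model version to confirm the installation (assumes the default install path)
+~/FVP_RD_V3/models/Linux64_armv8l_GCC-9.3/FVP_RD_V3 --version
+
+# List the configurable platform parameters, such as UARTs, cores, and memory
+~/FVP_RD_V3/models/Linux64_armv8l_GCC-9.3/FVP_RD_V3 --list-params | head -n 20
+```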
+### Comparing different versions of the RD-V3 FVP
+
+To support different use cases and levels of platform complexity, Arm offers three virtual models based on the CSS-V3 architecture: RD-V3, RD-V3-Cfg1, and RD-V3-R1. While they share a common foundation, they differ in chip count, system topology, and simulation flexibility.
+
+| Model       | Description                                                           | Recommended Use Cases                                          |
+|-------------|-----------------------------------------------------------------------|----------------------------------------------------------------|
+| RD-V3       | Standard single-die platform with full processor and security blocks  | Ideal for newcomers, firmware bring-up, and basic validation   |
+| RD-V3-R1    | Dual-die platform simulating a chiplet-based architecture             | Suitable for multi-node, interconnect, and advanced boot tests |
+| RD-V3-Cfg1  | Lightweight model with reduced control complexity for fast startup    | Best for CI pipelines, unit testing, and quick validations     |
+
+This Learning Path begins with RD-V3 as the primary platform for foundational exercises, guiding you through the process of building the software stack and simulating it on FVP to verify the boot sequence.
+In later modules, you'll transition to RD-V3-R1 for more advanced system simulation, multi-node bring-up, and firmware coordination across components like MCP and SCP.
diff --git a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/2_rdv3_bootseq.md b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/2_rdv3_bootseq.md
new file mode 100644
index 0000000000..fd07f2c169
--- /dev/null
+++ b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/2_rdv3_bootseq.md
@@ -0,0 +1,160 @@
+---
+title: Understanding the CSS-V3 Boot Flow and Firmware Stack
+weight: 3
+
+### FIXED, DO NOT MODIFY
+layout: learningpathall
+---
+
+## Firmware Stack Overview and Boot Sequence Coordination
+
+To ensure the platform transitions securely and reliably from power-on to operating system launch, this module introduces the roles and interactions of each firmware component within the RD-V3 boot process.
+You'll learn how each module contributes to system initialization and how control is systematically handed off across the boot chain.
+
+## How the System Wakes Up
+
+In the RD-V3 platform, each subsystem (TF-A, RSE, SCP, LCP, and UEFI) operates independently but cooperates through a well-defined sequence.
+Each module is delivered as a separate firmware image, yet they coordinate tightly through a structured boot flow and inter-processor signaling.
+
+The following diagram from the [Neoverse Reference Design Documentation](https://neoverse-reference-design.docs.arm.com/en/latest/shared/boot_flow/rdv3_single_chip.html?highlight=boot) illustrates the progression of component activation from initial reset to OS handoff:
+
+![img1 alt-text#center](rdf_single_chip.png "Boot Flow for RD-V3 Single Chip")
+
+### Stage 1. Security Validation Starts First (RSE)
+
+The first firmware module triggered after BL2 is the Runtime Security Engine (RSE), executing on Cortex-M55. RSE authenticates all critical firmware components, including the SCP, UEFI, and kernel images, using secure boot mechanisms. It performs cryptographic measurements and builds a Root of Trust before allowing any other processors to start.
+
+***RSE acts as the platform's security gatekeeper.***
+
+### Stage 2. Early Hardware Initialization (SCP / MCP)
+
+Once RSE completes verification, the System Control Processor (SCP) and Management Control Processor (MCP) are released from reset.
+
+These controllers perform essential platform bring-up:
+* Initialize clocks, reset lines, and power domains
+* Prepare DRAM and interconnect
+* Enable the application cores and signal readiness to TF-A
+
+***SCP/MCP are the ground crew bringing hardware systems online.***
+
+### Stage 3. Secure Execution Setup (TF-A)
+
+Once the AP is released, it begins executing Trusted Firmware-A (TF-A) at EL3, starting from the reset vector address programmed during boot image layout.
+TF-A configures the secure world, sets up exception levels, and prepares for handoff to UEFI.
+
+***TF-A is the ignition controller, launching the next stages securely.***
+
+### Stage 4. Firmware and Bootloader (EDK2 / GRUB)
+
+TF-A hands off control to UEFI firmware (EDK2), which performs device discovery and launches GRUB.
+
+Responsibilities:
+* Detect and initialize memory, PCIe, and boot devices
+* Generate ACPI and platform configuration tables
+* Locate and launch GRUB from storage or flash
+
+***EDK2 and GRUB are like the first- and second-stage rockets launching the payload.***
+
+### Stage 5. Linux Kernel Boot
+
+GRUB loads the Linux kernel and passes full control to the OS.
+
+Responsibilities:
+* Initialize device drivers and kernel subsystems
+* Mount the root filesystem
+* Start user-space processes (for example, BusyBox)
+
+***The Linux kernel is the spacecraft: it takes over and begins its mission.***
+
+## Firmware Module Responsibilities in Detail
+
+Now that you've examined the high-level boot stages, the next sections break down each firmware module's role in more detail.
+
+Each stage of the boot chain is backed by a dedicated component, whether a secure bootloader, a platform controller, or an operating system manager, and these work together to ensure a reliable system bring-up.
+
+### RSE: Runtime Security Engine (Cortex-M55) (Stage 1: Security Validation)
+
+RSE firmware runs on the Cortex-M55 and plays a critical role in platform attestation and integrity enforcement.
+* Authenticates BL2, SCP, and UEFI firmware images (secure boot)
+* Records boot-time measurements (for example, PCRs, ROT)
+* Releases boot authorization only after successful validation
+
+RSE acts as the second layer of the chain of trust, maintaining a monitored and secure environment throughout early boot.
+
+### SCP: System Control Processor (Cortex-M7) (Stage 2: Early Hardware Bring-up)
+
+SCP firmware runs on the Cortex-M7 core and performs early hardware initialization and power domain control.
+* Initializes clocks, reset controllers, and the system interconnect
+* Manages DRAM setup and enables power for the application processor
+* Coordinates boot readiness with RSE via the MHU (Message Handling Unit)
+
+SCP is central to bring-up operations and ensures the AP starts in a stable hardware environment.
+
+### TF-A: Trusted Firmware-A (BL1 / BL2) (Stage 3: Secure Execution Setup)
+
+TF-A is the entry point of the boot chain and is responsible for establishing the system's root of trust.
+* BL1 (Boot Loader Stage 1): executes from ROM, initializing minimal hardware such as clocks and serial interfaces, and loads BL2.
+* BL2 (Boot Loader Stage 2): validates and loads the SCP, RSE, and UEFI images, setting up a secure handover to later stages.
+
+TF-A ensures all downstream components are authenticated and loaded from trusted sources, laying the foundation for a secure boot.
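+
+The later-stage images that BL2 loads are typically packaged together in a Firmware Image Package (FIP). As a quick sanity check, once you have built the stack in a later module, you can list what the package contains using `fiptool`, a utility built from the TF-A sources. This is a minimal sketch; the `fip-uefi.bin` path assumes the default output location used later in this Learning Path.
+
+```bash
+# fiptool lives in tools/fiptool of the TF-A source tree (build it with `make fiptool`)
+# List the images packed into the FIP produced by the RD-V3 build
+fiptool info ~/rdv3/output/rdv3/rdv3/fip-uefi.bin
+```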
+
+### UEFI / GRUB / Linux Kernel (Stage 4–5: Bootloader and OS Handoff)
+
+After SCP powers on the application processor, control passes to the main bootloader and operating system:
+* UEFI (EDK2): provides firmware abstraction, hardware discovery, and ACPI table generation
+* GRUB: selects and loads the Linux kernel image
+* Linux kernel: initializes the OS and drivers, and launches the userland (for example, BusyBox)
+
+On the FVP, you can observe this process via UART logs, helping validate each stage's success.
+
+### LCP: Low Power Controller (Optional Component)
+
+If present in the configuration, LCP handles platform power management at a finer granularity:
+* Implements sleep/wake transitions
+* Controls per-core power gating
+* Manages transitions to ACPI power states (for example, S3 and S5)
+
+LCP support depends on the FVP model and may be omitted in simplified virtual setups.
+
+### Coordination and Handoff Logic
+
+The RD-V3 boot sequence follows a multi-stage, dependency-driven handshake model, where each firmware module validates, powers, or authorizes the next.
+
+| Stage | Dependency Chain     | Description                                                                 |
+|-------|----------------------|-----------------------------------------------------------------------------|
+| 1     | RSE ← BL2            | RSE is loaded and triggered by BL2 to begin security validation             |
+| 2     | SCP ← BL2 + RSE      | SCP initialization requires both BL2 and authorization from RSE             |
+| 3     | AP ← SCP + RSE       | The application processor starts only after SCP sets power and RSE permits  |
+| 4     | UEFI → GRUB → Linux  | UEFI launches GRUB, which loads the kernel and enters the OS                |
+
+This handshake model ensures that no firmware stage proceeds unless its dependencies have securely initialized and authorized the next step.
+
+{{% notice Note %}}
+In the table above, arrows (←) represent **dependency relationships**: the component on the left **depends on** the component(s) on the right to be triggered or authorized.
+For example, `RSE ← BL2` means that RSE is loaded and triggered by BL2;
+`AP ← SCP + RSE` means the application processor can only start after SCP has initialized the hardware and RSE has granted secure boot authorization.
+These arrows do not represent execution order but indicate **which component must be ready for another to begin**.
+{{% /notice %}}
+
+{{% notice Note %}}
+Once the firmware stack reaches UEFI, it performs hardware discovery and launches GRUB.
+GRUB then selects and boots the Linux kernel. Unlike the previous dependency arrows (←), this is a **direct execution path**: each stage passes control directly to the next.
+{{% /notice %}}
+
+This layered approach supports modular testing, independent debugging, and early-stage simulation, all essential for secure and robust platform bring-up.
+
+In this module, you have:
+
+* Explored the full boot sequence of the RD-V3 platform, from power-on to Linux login
+* Understood the responsibilities of key firmware components such as TF-A, RSE, SCP, LCP, and UEFI
+* Learned how secure boot is enforced and how each module hands off control to the next
+* Interpreted boot dependencies using FVP simulation and UART logs
+
+With the full boot flow and firmware responsibilities now clear, you're ready to apply these insights.
+In the next module, you'll fetch the RD-V3 codebase, configure your workspace, and begin building your own firmware stack for simulation.
diff --git a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/3_rdv3_sw_build.md b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/3_rdv3_sw_build.md
new file mode 100644
index 0000000000..ea55e55327
--- /dev/null
+++ b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/3_rdv3_sw_build.md
@@ -0,0 +1,245 @@
+---
+title: Build the RD-V3 Reference Platform
+weight: 4
+
+### FIXED, DO NOT MODIFY
+layout: learningpathall
+---
+## Building the RD-V3 Reference Platform
+
+In this module, you'll set up your development environment on an Arm server and build the firmware stack required to simulate the RD-V3 platform.
+
+### Step 1: Prepare the Development Environment
+
+First, ensure your system is up to date and install the required tools:
+
+```bash
+sudo apt update
+sudo apt install curl git
+```
+
+Configure Git as follows, filling in your own name and email address between the quotes:
+
+```bash
+git config --global user.name ""
+git config --global user.email ""
+```
+
+### Step 2: Fetch the Source Code
+
+The RD-V3 platform firmware stack consists of many independent components, such as TF-A, SCP, RSE, UEFI, the Linux kernel, and Buildroot. Each component is maintained in a separate Git repository. To manage and synchronize these repositories efficiently, use the `repo` tool. It simplifies syncing the full platform software stack from multiple upstreams.
+
+If `repo` is not installed, you can download it manually:
+
+```bash
+mkdir -p ~/.bin
+PATH="${HOME}/.bin:${PATH}"
+curl https://storage.googleapis.com/git-repo-downloads/repo > ~/.bin/repo
+chmod a+rx ~/.bin/repo
+```
+
+Once ready, create a workspace and initialize the repo manifest.
+
+A pinned manifest is used to ensure reproducibility across different environments. It locks all component repositories to known-good commits that are validated and aligned with a specific FVP version.
+
+This Learning Path uses `pinned-rdv3.xml` and `RD-INFRA-2025.07.03`.
+
+```bash
+cd ~
+mkdir rdv3
+cd rdv3
+# Initialize the source tree
+repo init -u https://git.gitlab.arm.com/infra-solutions/reference-design/infra-refdesign-manifests.git -m pinned-rdv3.xml -b refs/tags/RD-INFRA-2025.07.03 --depth=1
+
+# Sync the full source code
+repo sync -c -j $(nproc) --fetch-submodules --force-sync --no-clone-bundle
+```
+
+Once synced, you will see a message like:
+```
+Syncing: 95% (19/20), done in 2m36.453s
+Syncing: 100% (83/83) 2:52 | 1 job | 0:01 platsw/edk2-platforms @ uefi/edk2/edk2-platforms
+repo sync has finished successfully.
+```
+
+{{% notice Note %}}
+As of the time of writing, the latest official release tag is RD-INFRA-2025.07.03.
+Newer tags may be available as future platform updates are published.
+{{% /notice %}}
+
+This manifest will fetch all required sources, including:
+* TF-A
+* SCP / RSE firmware
+* EDK2 (UEFI)
+* Linux kernel
+* Buildroot and platform scripts
+
+### Step 3: Build the Docker Image
+
+There are two supported methods for building the reference firmware stack: **host-based** and **container-based**.
+
+- The **host-based** build installs all required dependencies directly on your local system and executes the build natively.
+- The **container-based** build runs the compilation process inside a pre-configured Docker image, ensuring consistent results and isolation from host environment issues.
+
+This Learning Path uses the **container-based** approach. Make sure Docker is installed on your Linux machine; you can follow this [installation guide](https://learn.arm.com/install-guides/docker/).
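+
+Before building the image, it's worth a quick check that the Docker daemon is running and that your user can launch containers. A minimal sanity check (nothing here is specific to this Learning Path):
+
+```bash
+# Confirm the Docker client and daemon are available
+docker --version
+
+# Run a throwaway container to verify the daemon and your permissions
+docker run --rm hello-world
+```
+
+If the `hello-world` container fails with a permission error, add your user to the `docker` group (or revisit the installation guide above) before continuing.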
+
+The container image is designed to use the source directory from the host (`~/rdv3`) and perform the build process inside the container.
+
+With Docker working, you're ready to build the container image.
+
+The `container.sh` script is a wrapper that builds the container using default settings for the Dockerfile and image name. You can customize these by using the `-f` (Dockerfile) and `-i` (image name) options, or by editing the script directly.
+
+To view all available options:
+
+```bash
+cd ~/rdv3/container-scripts
+./container.sh -h
+```
+
+To build the container image:
+
+```bash
+./container.sh build
+```
+
+The build procedure may take a few minutes, depending on network bandwidth and CPU performance. On an AWS m7g.4xlarge instance, it takes about four minutes.
+
+```
+Building docker image: rdinfra-builder ...
+[+] Building 239.7s (19/19) FINISHED  docker:default
+ => [internal] load build definition from rd-infra-arm64  0.0s
+ => => transferring dockerfile: 4.50kB  0.0s
+ => [internal] load metadata for docker.io/library/ubuntu:jammy-20240911.1  1.0s
+ => [internal] load .dockerignore  0.0s
+ => => transferring context: 2B  0.0s
+ => [internal] load build context  0.0s
+ => => transferring context: 10.80kB  0.0s
+ => [ 1/14] FROM docker.io/library/ubuntu:jammy-20240911.1@sha256:0e5e4a57c2499249aafc3b40fcd541e9a456aab7296681a3994d631587203f97  1.7s
+ => => resolve docker.io/library/ubuntu:jammy-20240911.1@sha256:0e5e4a57c2499249aafc3b40fcd541e9a456aab7296681a3994d631587203f97  0.0s
+ => => sha256:0e5e4a57c2499249aafc3b40fcd541e9a456aab7296681a3994d631587203f97 6.69kB / 6.69kB  0.0s
+ => => sha256:7c75ab2b0567edbb9d4834a2c51e462ebd709740d1f2c40bcd23c56e974fe2a8 424B / 424B  0.0s
+ => => sha256:981912c48e9a89e903c89b228be977e23eeba83d42e2c8e0593a781a2b251cba 2.31kB / 2.31kB  0.0s
+ => => sha256:a186900671ab62e1dea364788f4e84c156e1825939914cfb5a6770be2b58b4da 27.36MB / 27.36MB  1.1s
+ => => extracting sha256:a186900671ab62e1dea364788f4e84c156e1825939914cfb5a6770be2b58b4da  0.5s
+ => [ 2/14] RUN apt-get update -q=2 && apt-get install -q=2 --yes --no-install-recommends ca-certificates curl  12.5s
+ => [ 3/14] RUN wget -O - https://apt.kitware.com/keys/kitware-archive-latest.asc 2>/dev/null | gpg --dearmor - | tee /etc/apt/trust  0.5s
+ => [ 4/14] RUN apt-get update -q=2 && apt-get install -q=2 --yes --no-install-recommends acpica-tools autoconf  40.0s
+ => [ 5/14] RUN pip3 install --no-cache-dir poetry  7.4s
+ => [ 6/14] RUN curl https://storage.googleapis.com/git-repo-downloads/repo > /usr/bin/repo && chmod a+x /usr/bin/repo  0.3s
+ => [ 7/14] COPY common/install-openssl.sh /tmp/common/  0.0s
+ => [ 8/14] RUN bash /tmp/common/install-openssl.sh /opt  32.7s
+ => [ 9/14] COPY common/install-gcc.sh /tmp/common/  0.0s
+ => [10/14] COPY common/install-clang.sh /tmp/common/  0.0s
+ => [11/14] RUN bash /tmp/common/install-gcc.sh /opt 13.2.rel1 "arm-none-eabi"  19.8s
+ => [12/14] RUN bash /tmp/common/install-gcc.sh /opt 13.2.rel1 "aarch64-none-elf"  13.4s
+ => [13/14] RUN bash /tmp/common/install-clang.sh /opt 15.0.6  101.2s
+ => [14/14] COPY common/entry.sh /root/entry.sh  0.0s
+ => exporting to image  9.2s
+ => => exporting layers  9.2s
+ => => writing image sha256:3a395c5a0b60248881f9ad06048b97ae3ed4d937ffb0c288ea90097b2319f2b8  0.0s
+ => => naming to docker.io/library/rdinfra-builder  0.0s
+```
+
+After the Docker image build completes successfully, you can use `docker images` to find the built image, called `rdinfra-builder`.
+
+```
+REPOSITORY         TAG      IMAGE ID       CREATED         SIZE
+rdinfra-builder    latest   3a395c5a0b60   4 minutes ago   8.12GB
+```
+
+### Step 4: Enter the Container and Build Firmware
+
+You can enter the Docker container interactively to take a quick look at the image.
+
+```bash
+cd ~/rdv3/container-scripts
+./container.sh -v ~/rdv3 run
+```
+
+This mounts your source directory (`~/rdv3`) into the container and opens a shell at that location.
+Inside the container, you'll see a prompt like:
+
+```
+Running docker image: rdinfra-builder ...
+To run a command as administrator (user "root"), use "sudo ".
+See "man sudo_root" for details.
+
+your-username:hostname:/home/your-username/rdv3$
+```
+
+Since building the full firmware stack involves many components, the more efficient method is to use a single Docker command that runs both the build and package steps automatically.
+
+- **build**: this phase compiles all individual components of the firmware stack, including TF-A, SCP, RSE, UEFI, the Linux kernel, and the rootfs.
+
+- **package**: this phase consolidates the build outputs into simulation-ready formats and organizes boot artifacts for the FVP.
+
+To execute the full build and packaging flow:
+
+```bash
+cd ~/rdv3
+docker run --rm \
+  -v "$PWD:$PWD" \
+  -w "$PWD" \
+  --mount type=volume,dst="$HOME" \
+  --env ARCADE_USER="$(id -un)" \
+  --env ARCADE_UID="$(id -u)" \
+  --env ARCADE_GID="$(id -g)" \
+  -t -i rdinfra-builder \
+  bash -c "./build-scripts/rdinfra/build-test-buildroot.sh -p rdv3 build && \
+           ./build-scripts/rdinfra/build-test-buildroot.sh -p rdv3 package"
+```
+
+After a successful build, the artifacts are placed under `~/rdv3/output/rdv3/rdv3/`, where the final `rdv3` corresponds to the selected platform name:
+
+```
+ls ~/rdv3/output/rdv3/rdv3 -al
+
+total 7092
+drwxr-xr-x 2 ubuntu ubuntu    4096 Aug 12 13:15 .
+drwxr-xr-x 4 ubuntu ubuntu    4096 Aug 12 13:15 ..
+lrwxrwxrwx 1 ubuntu ubuntu      25 Aug 12 13:15 Image -> ../components/linux/Image
+lrwxrwxrwx 1 ubuntu ubuntu      35 Aug 12 13:15 Image.defconfig -> ../components/linux/Image.defconfig
+-rw-r--r-- 1 ubuntu ubuntu 7250838 Aug 12 13:15 fip-uefi.bin
+lrwxrwxrwx 1 ubuntu ubuntu      32 Aug 12 13:15 lcp_ramfw.bin -> ../components/rdv3/lcp_ramfw.bin
+lrwxrwxrwx 1 ubuntu ubuntu      26 Aug 12 13:15 lkvm -> ../components/kvmtool/lkvm
+lrwxrwxrwx 1 ubuntu ubuntu      32 Aug 12 13:15 mcp_ramfw.bin -> ../components/rdv3/mcp_ramfw.bin
+lrwxrwxrwx 1 ubuntu ubuntu      26 Aug 12 13:15 rmm.img -> ../components/rdv3/rmm.img
+lrwxrwxrwx 1 ubuntu ubuntu      32 Aug 12 13:15 scp_ramfw.bin -> ../components/rdv3/scp_ramfw.bin
+lrwxrwxrwx 1 ubuntu ubuntu      29 Aug 12 13:15 tf-bl1.bin -> ../components/rdv3/tf-bl1.bin
+lrwxrwxrwx 1 ubuntu ubuntu      29 Aug 12 13:15 tf-bl2.bin -> ../components/rdv3/tf-bl2.bin
+lrwxrwxrwx 1 ubuntu ubuntu      30 Aug 12 13:15 tf-bl31.bin -> ../components/rdv3/tf-bl31.bin
+lrwxrwxrwx 1 ubuntu ubuntu      53 Aug 12 13:15 tf_m_flash.bin -> ../components/arm/rse/neoverse_rd/rdv3/tf_m_flash.bin
+lrwxrwxrwx 1 ubuntu ubuntu      46 Aug 12 13:15 tf_m_rom.bin -> ../components/arm/rse/neoverse_rd/rdv3/rom.bin
+lrwxrwxrwx 1 ubuntu ubuntu      48 Aug 12 13:15 tf_m_vm0_0.bin -> ../components/arm/rse/neoverse_rd/rdv3/vm0_0.bin
+lrwxrwxrwx 1 ubuntu ubuntu      48 Aug 12 13:15 tf_m_vm1_0.bin -> ../components/arm/rse/neoverse_rd/rdv3/vm1_0.bin
+lrwxrwxrwx 1 ubuntu ubuntu      33 Aug 12 13:15 uefi.bin -> ../components/css-common/uefi.bin
+```
+
+| Component            | Output Files                                                              | Description                 |
+|----------------------|---------------------------------------------------------------------------|-----------------------------|
+| TF-A                 | `tf-bl1.bin`, `tf-bl2.bin`, `tf-bl31.bin`, `fip-uefi.bin`                  | Entry-level boot firmware   |
+| SCP and RSE firmware | `scp_ramfw.bin`, `mcp_ramfw.bin`, `lcp_ramfw.bin`, `tf_m_rom.bin`, `tf_m_flash.bin` | Platform power/control and secure boot |
+| UEFI                 | `uefi.bin`                                                                 | Boot device enumeration     |
+| Linux kernel         | `Image`                                                                    | OS payload                  |
+| Initrd               | `rootfs.cpio.gz`                                                           | Minimal filesystem          |
+
+### Optional: Run the Build Manually from Inside the Container
+
+You can also perform the build manually after entering the container. In the container shell:
+
+```bash
+cd ~/rdv3
+./build-scripts/rdinfra/build-test-buildroot.sh -p rdv3 build
+./build-scripts/rdinfra/build-test-buildroot.sh -p rdv3 package
+```
+
+This manual workflow is useful for debugging, partial builds, or making custom modifications to individual components.
+
+You've now successfully prepared and built the full RD-V3 firmware stack. In the next module, you'll install the matching FVP model and simulate the full boot sequence, bringing the firmware to life in a virtual platform.
diff --git a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/4_rdv3_on_fvp.md b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/4_rdv3_on_fvp.md
new file mode 100644
index 0000000000..5d0553d039
--- /dev/null
+++ b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/4_rdv3_on_fvp.md
@@ -0,0 +1,170 @@
+---
+title: Simulate RD-V3 Boot Flow on Arm FVP
+weight: 5
+
+### FIXED, DO NOT MODIFY
+layout: learningpathall
+---
+
+## Simulating RD-V3 with Arm FVP
+
+In the previous module, you built the complete CSS-V3 firmware stack.
+Now, you'll use the Arm Fixed Virtual Platform (FVP) to simulate the system, allowing you to verify the boot sequence without any physical silicon.
+This simulation brings up the full stack, from BL1 to a Linux shell, using Buildroot.
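+
+Before installing the model, it's worth confirming that the artifacts from the previous module are where the simulation scripts expect them. This is an optional sanity check, assuming the default output location used earlier:
+
+```bash
+# Check that the key boot artifacts from the previous module exist
+for f in tf-bl1.bin fip-uefi.bin scp_ramfw.bin mcp_ramfw.bin Image; do
+  if [ -e "$HOME/rdv3/output/rdv3/rdv3/$f" ]; then
+    echo "found:   $f"
+  else
+    echo "missing: $f"
+  fi
+done
+```
+
+If anything is reported missing, re-run the build and package steps from the previous module before continuing.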
+
+### Step 1: Download and Install the FVP Model
+
+Before downloading the RD-V3 FVP, it's important to understand that each reference design release tag corresponds to a specific version of the FVP model.
+
+For example, the **RD-INFRA-2025.07.03** release tag is designed to work with **FVP version 11.29.35**.
+
+You can refer to the [RD-V3 Release Tags](https://neoverse-reference-design.docs.arm.com/en/latest/platforms/rdv3.html#release-tags) for a full list of release tags, corresponding FVP versions, and their associated release notes, which summarize changes and validated test cases.
+
+Download the matching FVP binary for your selected release tag using the link provided in this course:
+
+```bash
+mkdir -p ~/fvp
+cd ~/fvp
+wget https://developer.arm.com/-/cdn-downloads/permalink/FVPs-Neoverse-Infrastructure/RD-V3/FVP_RD_V3_11.29_35_Linux64_armv8l.tgz
+
+tar -xvf FVP_RD_V3_11.29_35_Linux64_armv8l.tgz
+./FVP_RD_V3.sh
+```
+
+The FVP installation may prompt you with a few questions; choosing the default options is sufficient for this Learning Path. By default, the FVP is installed in `/home/ubuntu/FVP_RD_V3`.
+
+### Step 2: Set Up Remote Desktop
+
+The RD-V3 FVP model launches multiple UART consoles, each mapped to a separate terminal window for a different subsystem (for example, Neoverse V3, Cortex-M55, Cortex-M7, and the panel).
+
+If you're accessing the platform over SSH, these console windows won't open properly.
+To interact with all UART consoles, install a remote desktop environment using XRDP.
+
+On an AWS Ubuntu 22.04 instance, install the required packages:
+
+```bash
+sudo apt update
+sudo apt install -y ubuntu-desktop xrdp xfce4 xfce4-goodies pv xterm sshpass socat retry
+sudo systemctl enable --now xrdp
+```
+
+To allow remote desktop connections, open port 3389 (RDP) in your EC2 security group:
+- Go to the EC2 Dashboard → Security Groups
+- Select the security group associated with your instance
+- Under the Inbound rules tab, click Edit inbound rules
+- Add the following rule:
+  - Type: RDP
+  - Port: 3389
+  - Source: your local machine IP
+
+For better security, limit the source to your current public IP instead of `0.0.0.0/0`.
+
+***Switch to Xorg (required on Ubuntu 22.04):***
+
+Wayland is the default display server on Ubuntu 22.04, but it is not compatible with XRDP.
+To enable XRDP remote sessions, switch to Xorg by modifying the GDM configuration.
+
+Open `/etc/gdm3/custom.conf` in a text editor and find the line:
+
+```
+#WaylandEnable=false
+```
+
+Uncomment it by removing the `#` so it becomes:
+
+```
+WaylandEnable=false
+```
+
+Then restart the GDM display manager for the change to take effect:
+```bash
+sudo systemctl restart gdm3
+```
+
+After the restart, XRDP will use Xorg and you should be able to connect to the Arm server via remote desktop.
+
+### Step 3: Launch the Simulation
+
+Once connected via remote desktop, open a terminal and launch the RD-V3 FVP simulation:
+
+```bash
+cd ~/rdv3/model-scripts/rdinfra
+export MODEL=/home/ubuntu/FVP_RD_V3/models/Linux64_armv8l_GCC-9.3/FVP_RD_V3
+./boot-buildroot.sh -p rdv3 &
+```
+
+The command launches the simulation and opens multiple xterm windows, each corresponding to a different UART console.
+Start by locating the ***terminal_ns_uart0*** window; in it, you should see the GRUB menu.
+
+From there, select **RD-V3 Buildroot** and press Enter to proceed.
+![img3 alt-text#center](rdv3_sim_run.jpg "GRUB Menu")
+
+Booting Buildroot takes a little while; you'll see typical Linux boot messages scrolling through.
+Eventually, the system stops at the `Welcome to Buildroot` message in the ***terminal_ns_uart0*** window.
+
+At the `buildroot login:` prompt, type `root` and press Enter to log in.
+![img4 alt-text#center](rdv3_sim_login.jpg "Buildroot login")
+
+Congratulations: you've successfully simulated the boot process of the RD-V3 software you compiled earlier, all on FVP!
+
+### Step 4: Understand the UART Outputs
+
+When you launch the RD-V3 FVP model, it opens multiple terminal windows, each connected to a different UART channel.
+These UARTs provide console logs from the various firmware components across the system.
+
+Below is the UART-to-terminal mapping based on the default FVP configuration:
+
+| Terminal Window Title      | UART | Output Role                        | Connected Processor   |
+|----------------------------|------|------------------------------------|-----------------------|
+| `FVP terminal_ns_uart0`    | 0    | Linux Kernel Console (BusyBox)     | Neoverse-V3 (AP)      |
+| `FVP terminal_ns_uart1`    | 1    | TF-A / UEFI Logs                   | Neoverse-V3 (AP)      |
+| `FVP terminal_uart_scp`    | 2    | SCP Firmware Logs (power, clocks)  | Cortex-M7 (SCP)       |
+| `FVP terminal_rse_uart`    | 3    | RSE Secure Boot Logs               | Cortex-M55 (RSE)      |
+| `FVP terminal_uart_mcp`    | 4    | MCP Logs (management, telemetry)   | Cortex-M7 (MCP)       |
+| `FVP terminal_uart_lcp`    | 5    | LCP Logs (per-core power control)  | Cortex-M55 (LCP)      |
+| `FVP terminal_sec_uart`    | 6    | Secure World / TF-M Logs           | Cortex-M55            |
+
+Logs are also captured under `~/rdv3/model-scripts/rdinfra/platforms/rdv3/rdv3`, with each UART redirected to its own log file.
+You can explore the `refinfra-*.txt` log files to validate subsystem states.
+
+For example, to verify that each CPU core has its GICv3 redistributor and LPI table correctly initialized, refer to the relevant messages in `refinfra-24812-uart-0-nsec_.txt`:
+
+```
+[    0.000056] Remapping and enabling EFI services.
+[    0.000078] smp: Bringing up secondary CPUs ...
+[    0.000095] Detected PIPT I-cache on CPU1
+[    0.000096] GICv3: CPU1: found redistributor 10000 region 0:0x0000000030200000
+[    0.000096] GICv3: CPU1: using allocated LPI pending table @0x0000008080200000
+[    0.000109] CPU1: Booted secondary processor 0x0000010000 [0x410fd840]
+[    0.000125] Detected PIPT I-cache on CPU2
+[    0.000126] GICv3: CPU2: found redistributor 20000 region 0:0x0000000030240000
+[    0.000126] GICv3: CPU2: using allocated LPI pending table @0x0000008080210000
+[    0.000139] CPU2: Booted secondary processor 0x0000020000 [0x410fd840]
+[    0.000155] Detected PIPT I-cache on CPU3
+[    0.000156] GICv3: CPU3: found redistributor 30000 region 0:0x0000000030280000
+[    0.000156] GICv3: CPU3: using allocated LPI pending table @0x0000008080220000
+[    0.000169] CPU3: Booted secondary processor 0x0000030000 [0x410fd840]
+[    0.000185] Detected PIPT I-cache on CPU4
+[    0.000186] GICv3: CPU4: found redistributor 40000 region 0:0x00000000302c0000
+[    0.000186] GICv3: CPU4: using allocated LPI pending table @0x0000008080230000
+[    0.000199] CPU4: Booted secondary processor 0x0000040000 [0x410fd840]
+[    0.000215] Detected PIPT I-cache on CPU5
+[    0.000216] GICv3: CPU5: found redistributor 50000 region 0:0x0000000030300000
+[    0.000216] GICv3: CPU5: using allocated LPI pending table @0x0000008080240000
+[    0.000229] CPU5: Booted secondary processor 0x0000050000 [0x410fd840]
+[    0.000245] Detected PIPT I-cache on CPU6
+[    0.000246] GICv3: CPU6: found redistributor 60000 region 0:0x0000000030340000
+[    0.000246] GICv3: CPU6: using allocated LPI pending table @0x0000008080250000
+[    0.000259] CPU6: Booted secondary processor 0x0000060000 [0x410fd840]
+...
+```
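+
+Rather than scanning the log by eye, you can pull out just the GIC initialization lines with `grep`. A small sketch, assuming the log naming shown above; the numeric part of the file name appears to be run-specific, so a glob is used:
+
+```bash
+cd ~/rdv3/model-scripts/rdinfra/platforms/rdv3/rdv3
+
+# Show the redistributor registration message for every secondary CPU
+grep "GICv3: CPU" refinfra-*-uart-0-nsec_.txt
+
+# Count how many secondary CPUs booted successfully
+grep -c "Booted secondary processor" refinfra-*-uart-0-nsec_.txt
+```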
+
+You can try to identify the SCP, RSE, and kernel boot logs across their respective terminals.
+
+Successfully tracing these logs confirms that your simulation environment and firmware stack are functioning correctly, all without physical silicon.
diff --git a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/5_rdv3_modify.md b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/5_rdv3_modify.md
new file mode 100644
index 0000000000..2acfaa811d
--- /dev/null
+++ b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/5_rdv3_modify.md
@@ -0,0 +1,135 @@
+---
+title: Simulate the Dual-Chip RD-V3-R1 Platform
+weight: 6
+
+### FIXED, DO NOT MODIFY
+layout: learningpathall
+---
+
+## Build and Run the RD-V3-R1 Dual-Chip Platform
+
+The RD-V3-R1 platform is a dual-chip simulation environment built to model multi-die Arm server SoCs. It expands on the single-die RD-V3 design by introducing a second application processor and a Management Control Processor (MCP).
+
+***Key Use Cases***
+
+- Simulate a chiplet-style boot flow with two APs
+- Observe coordination between the SCP and MCP across dies
+- Test secure boot in a distributed firmware environment
+
+***Differences from RD-V3***
+- Dual-AP boot flow instead of a single AP
+- Adds an MCP (Cortex-M7) to support cross-die management
+- More complex power/reset coordination
+
+### Step 1: Clone the RD-V3-R1 Firmware Stack
+
+Initialize and sync the codebase for RD-V3-R1:
+
+```bash
+cd ~
+mkdir rdv3r1
+cd rdv3r1
+# Initialize the source tree
+repo init -u https://git.gitlab.arm.com/infra-solutions/reference-design/infra-refdesign-manifests.git -m pinned-rdv3r1.xml -b refs/tags/RD-INFRA-2025.07.03 --depth=1
+
+# Sync the full source code
+repo sync -c -j $(nproc) --fetch-submodules --force-sync --no-clone-bundle
+```
+
+### Step 2: Install the RD-V3-R1 FVP
+
+Refer to the [RD-V3-R1 Release Tags](https://neoverse-reference-design.docs.arm.com/en/latest/platforms/rdv3.html#release-tags) to determine which FVP model version matches your selected release tag.
+Then download and install the corresponding FVP binary.
+
+```bash
+mkdir -p ~/fvp
+cd ~/fvp
+wget https://developer.arm.com/-/cdn-downloads/permalink/FVPs-Neoverse-Infrastructure/RD-V3-r1/FVP_RD_V3_R1_11.29_35_Linux64_armv8l.tgz
+tar -xvf FVP_RD_V3_R1_11.29_35_Linux64_armv8l.tgz
+./FVP_RD_V3_R1.sh
+```
+
+### Step 3: Build the Firmware
+
+Since you already created the Docker image for firmware building in a previous module, there is no need to rebuild it for RD-V3-R1.
+
+Run the full firmware build and packaging process:
+
+```bash
+cd ~/rdv3r1
+docker run --rm \
+  -v "$PWD:$PWD" \
+  -w "$PWD" \
+  --mount type=volume,dst="$HOME" \
+  --env ARCADE_USER="$(id -un)" \
+  --env ARCADE_UID="$(id -u)" \
+  --env ARCADE_GID="$(id -g)" \
+  -t -i rdinfra-builder \
+  bash -c "./build-scripts/rdinfra/build-test-buildroot.sh -p rdv3r1 build && \
+           ./build-scripts/rdinfra/build-test-buildroot.sh -p rdv3r1 package"
+```
+
+### Step 4: Launch the Simulation
+
+Once connected via remote desktop, open a terminal and launch the RD-V3-R1 FVP simulation:
+
+```bash
+cd ~/rdv3r1/model-scripts/rdinfra
+export MODEL=/home/ubuntu/FVP_RD_V3_R1/models/Linux64_armv8l_GCC-9.3/FVP_RD_V3_R1_R1
+./boot-buildroot.sh -p rdv3r1 &
+```
+
+This command starts the dual-chip simulation.
+You'll observe additional UART consoles for components like the MCP, and you can verify that both application processors (AP0 and AP1) are brought up in a coordinated manner.
+
+![img5 alt-text#center](rdv3r1_sim_login.jpg "RD-V3-R1 Buildroot login")
+
+As in the previous module, the terminal logs are stored in `~/rdv3r1/model-scripts/rdinfra/platforms/rdv3r1/rdv3r1`.
+
+### Step 5: Customize Firmware and Confirm MCP Execution
+
+To wrap up this Learning Path, verify that your firmware changes can be compiled and simulated successfully within the RD-V3-R1 environment.
+
+Edit the MCP source file `~/rdv3r1/host/scp/framework/src/fwk_module.c`.
+
+Locate the function `fwk_module_start()` and add the following logging line just before `return FWK_SUCCESS;`:
+
+```c
+int fwk_module_start(void)
+{
+    ...
+    FWK_LOG_CRIT("[FWK] Module initialization complete!");
+
+    // Custom log message for validation
+    FWK_LOG_CRIT("[FWK] Customer code here");
+    return FWK_SUCCESS;
+}
+```
+
+Rebuild and repackage the firmware:
+
+```bash
+cd ~/rdv3r1
+docker run --rm \
+  -v "$PWD:$PWD" \
+  -w "$PWD" \
+  --mount type=volume,dst="$HOME" \
+  --env ARCADE_USER="$(id -un)" \
+  --env ARCADE_UID="$(id -u)" \
+  --env ARCADE_GID="$(id -g)" \
+  -t -i rdinfra-builder \
+  bash -c "./build-scripts/rdinfra/build-test-buildroot.sh -p rdv3r1 build && \
+           ./build-scripts/rdinfra/build-test-buildroot.sh -p rdv3r1 package"
+```
+
+Launch the FVP simulation again and observe the UART output for the MCP.
+
+![img6 alt-text#center](rdv3r1_sim_codechange.jpg "RD-V3-R1 modified firmware")
+
+If the change was successful, your custom log line appears in the MCP console, confirming that your code was integrated and executed as part of the firmware boot process.
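+
+In addition to watching the console window, you can search the captured log files for the new message. A minimal sketch, assuming the log directory shown in Step 4; exact file names vary between runs, so the search covers the whole directory:
+
+```bash
+# Find which captured UART log recorded the custom MCP message
+grep -rl "Customer code here" ~/rdv3r1/model-scripts/rdinfra/platforms/rdv3r1/rdv3r1/
+```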
+
+You've now successfully simulated a dual-chip Arm server platform using RD-V3-R1 on FVP, from cloning the firmware sources to modifying secure control logic.
+
+This foundation sets the stage for deeper exploration, such as customizing platform firmware or integrating BMC workflows in future development cycles.
diff --git a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/_index.md b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/_index.md
new file mode 100644
index 0000000000..df177c3f47
--- /dev/null
+++ b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/_index.md
@@ -0,0 +1,55 @@
+---
+title: CSS-V3 Pre-Silicon Software Development Using Neoverse Servers
+
+minutes_to_complete: 90
+
+who_is_this_for: This Learning Path is for firmware developers, system architects, and silicon validation engineers building Arm Neoverse CSS platforms. It focuses on pre-silicon development for the CSS-V3 reference design, where you'll learn how to build, customize, and validate firmware on the RD-V3 platform using Fixed Virtual Platforms (FVPs) before hardware is available.
+
+learning_objectives:
+  - Understand the architecture of Arm Neoverse CSS-V3 as the foundation for scalable server-class platforms
+  - Build and boot the RD-V3 firmware stack using TF-A, SCP, RSE, and UEFI
+  - Simulate multi-core, multi-chip systems with Arm FVP models and interpret boot logs
+  - Modify platform control firmware to test custom logic and validate it via pre-silicon simulation
+
+prerequisites:
+  - Access to an Arm Neoverse-based Linux machine (cloud or local) with at least 80 GB of storage
+  - Familiarity with Linux command-line tools and basic scripting
+  - Understanding of firmware boot stages and SoC-level architecture
+  - Docker installed, or a GitHub Codespaces-compatible development environment
+
+author:
+  - Odin Shen
+
+### Tags
+skilllevels: Advanced
+subjects: Containers and Virtualization
+armips:
+  - Neoverse
+tools_software_languages:
+  - C
+  - Docker
+  - FVP
+operatingsystems:
+  - Linux
+
+further_reading:
+  - resource:
+      title: Neoverse Compute Subsystems V3
+      link: https://www.arm.com/products/neoverse-compute-subsystems/css-v3
+      type: website
+  - resource:
+      title: Reference Design software stack architecture
+      link: https://neoverse-reference-design.docs.arm.com/en/latest/about/software_stack.html
+      type: website
+  - resource:
+      title: GitLab infra-refdesign-manifests
+      link: https://git.gitlab.arm.com/infra-solutions/reference-design/infra-refdesign-manifests
+      type: gitlab
+
+### FIXED, DO NOT MODIFY
+# ================================================================================
+weight: 1                       # _index.md always has weight of 1 to order correctly
+layout: "learningpathall"       # All files under learning paths have this same wrapper
+learning_path_main_page: "yes"  # This should be surfaced when looking for related content. Only set for _index.md of learning path content.
+---
diff --git a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/_next-steps.md b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/_next-steps.md
new file mode 100644
index 0000000000..c3db0de5a2
--- /dev/null
+++ b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/_next-steps.md
@@ -0,0 +1,8 @@
+---
+# ================================================================================
+# FIXED, DO NOT MODIFY THIS FILE
+# ================================================================================
+weight: 21                  # Set to always be larger than the content in this path to be at the end of the navigation.
+title: "Next Steps"         # Always the same, html page title.
+layout: "learningpathall"   # All files under learning paths have this same wrapper for Hugo processing.
+---
diff --git a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/rdf_single_chip.png b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/rdf_single_chip.png
new file mode 100644
index 0000000000..85937be535
Binary files /dev/null and b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/rdf_single_chip.png differ
diff --git a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/rdinfra_sw_stack.jpg b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/rdinfra_sw_stack.jpg
new file mode 100644
index 0000000000..780c21f291
Binary files /dev/null and b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/rdinfra_sw_stack.jpg differ
diff --git a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/rdv3_sim_login.jpg b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/rdv3_sim_login.jpg
new file mode 100644
index 0000000000..0bfc8474fc
Binary files /dev/null and b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/rdv3_sim_login.jpg differ
diff --git a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/rdv3_sim_run.jpg b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/rdv3_sim_run.jpg
new file mode 100644
index 0000000000..0178bb8228
Binary files /dev/null and b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/rdv3_sim_run.jpg differ
diff --git a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/rdv3r1_sim_codechange.jpg b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/rdv3r1_sim_codechange.jpg
new file mode 100644
index 0000000000..3278620e94
Binary files /dev/null and b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/rdv3r1_sim_codechange.jpg differ
diff --git a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/rdv3r1_sim_login.jpg b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/rdv3r1_sim_login.jpg
new file mode 100644
index 0000000000..610e5ac73d
Binary files /dev/null and b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/rdv3r1_sim_login.jpg differ