
Improve usage doc, fix typos and follow style guide #89

Merged
merged 3 commits into from
Jul 23, 2025

2 changes: 1 addition & 1 deletion docs/baremetal/architecture/discovery.md
@@ -1,6 +1,6 @@
# Server Discovery and First Boot

This document describes the discovery and first boot process for bare metal servers in IronCore's baremetal automation.
This document describes the discovery and first boot process for bare metal servers in IronCore's bare metal automation.
The goal here is to provide a clear understanding of how bare metal servers are discovered and prepared for provisioning.

## Server Discovery
2 changes: 1 addition & 1 deletion docs/baremetal/architecture/provisioning.md
@@ -1,6 +1,6 @@
# Server Provisioning

This section describes how the provisioning of bare metal servers is handled in IronCore's baremetal automation.
This section describes how the provisioning of bare metal servers is handled in IronCore's bare metal automation.
In the [discovery section](/baremetal/architecture/discovery) we discussed how servers are discovered, booted for the
first time, and transitioned into an `Available` state. Now we will focus on the provisioning process and how one can
use such a `Server` resource to provision a custom operating system and automate the software installation on
10 changes: 5 additions & 5 deletions docs/baremetal/index.md
@@ -1,28 +1,28 @@
# Baremetal Automation

The baremetal automation in IronCore is designed to provide a comprehensive solution for managing physical servers
The bare metal automation in IronCore is designed to provide a comprehensive solution for managing physical servers
in a Kubernetes-native way. It leverages the power of Kubernetes Custom Resource Definitions (CRDs) to automate:

- **Discovery**: Automatically detect and register bare metal servers.
- **Provisioning**: Deploy and configure servers using Ignition.
- **Day-2 Operations**: Manage BIOS, firmware, and hardware inventory declaratively.
- **3rd Party Integrations**: Seamlessly integrate with existing tooling, such as vendor-specific management tools.
- **Kubernetes Support**: Run Kubernetes on bare metal servers with support for Cluster API and Gardener.
- **Kubernetes Support**: Run Kubernetes on bare metal servers with support of Cluster API and Gardener.

## Core Components

The core components of the baremetal automation in IronCore include:
The core components of the bare metal automation in IronCore include:
- [**Metal Operator**](https://github.com/ironcore-dev/metal-operator): The central component that manages the lifecycle of bare metal servers.
- [**Boot Operator**](https://github.com/ironcore-dev/boot-operator): iPXE and HTTP boot server that provides boot images and Ignition configurations.
- [**FeDHCP**](https://github.com/ironcore-dev/fedhcp): A DHCP server that provides inband and out of band network configuration to bare metal servers.
- [**FeDHCP**](https://github.com/ironcore-dev/fedhcp): A DHCP server that provides in-band and out-of-band network configuration to bare metal servers.

## Concepts and Usage Guides

Usage guides and concepts for the `metal-operator` API types can be found in the [metal-operator documentation](https://ironcore-dev.github.io/metal-operator/concepts/).

## Prerequisites

The current implementation of the baremetal automation in IronCore requires the following prerequisites:
The current implementation of the bare metal automation in IronCore requires the following prerequisites:

- In-band and out-of-band network connectivity to the bare metal servers.
- A management server in the out-of-band network that can communicate with the bare metal servers.
10 changes: 4 additions & 6 deletions docs/baremetal/kubernetes/gardener.md
@@ -4,14 +4,12 @@ Apart from the [Cluster API Provider for Bare Metal](/baremetal/kubernetes/capi)
[Gardener](https://gardener.cloud), a Kubernetes-native project for managing Kubernetes clusters at scale.

There are two main components in the Gardener integration with IronCore:
- **Machine Controller Manager (MCM)**: This component is responsible for managing the lifecycle of machines in a
Kubernetes cluster. It uses the `metal-operator` API types to provision and manage bare metal servers.
- **Gardener Extension Provider**: This component provides the necessary integration points for Gardener to manage bare
metal clusters.
- **Machine Controller Manager (MCM)**: This component is responsible for managing the lifecycle of machines in a Kubernetes cluster. It uses the `metal-operator` API types to provision and manage bare metal servers.
- **Gardener Extension Provider**: This component provides the necessary integration points for Gardener to manage bare metal clusters.

## Machine Controller Manager (MCM)

The `[machine-controller-manager-provider-ironcore](https://github.com/ironcore-dev/machine-controller-manager-provider-ironcore-metal)`
The [machine-controller-manager-provider-ironcore](https://github.com/ironcore-dev/machine-controller-manager-provider-ironcore-metal)
is responsible for managing the lifecycle of `Nodes` in a Kubernetes cluster. Here the MCM essentially translates
Gardener `Machine` resources into `ServerClaims` and wraps the `user-data` coming from the Gardener OS extensions into
an Ignition `Secret`.
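
To make this translation more concrete, below is a rough sketch of the pair of objects the MCM could end up creating for a single worker. The API group, kinds, and field names are assumptions loosely based on the `metal-operator` `ServerClaim` type; all names and values are placeholders, so consult the metal-operator concepts documentation for the authoritative schema.

```yaml
# Hypothetical example only: names, namespace, image, and field layout are illustrative.
apiVersion: v1
kind: Secret
metadata:
  name: worker-0-ignition          # wraps the user-data coming from the Gardener OS extension
  namespace: metal-system
stringData:
  ignition: |
    {"ignition": {"version": "3.4.0"}}
---
apiVersion: metal.ironcore.dev/v1alpha1   # assumed API group/version
kind: ServerClaim
metadata:
  name: worker-0
  namespace: metal-system
spec:
  power: "On"
  image: ghcr.io/example/os-image:latest  # placeholder OS image
  ignitionSecretRef:
    name: worker-0-ignition               # links the claim to the Ignition Secret above
  serverSelector:
    matchLabels:
      instance-type: worker
```
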
@@ -22,7 +20,7 @@ The [`gardener-extension-provider-ironcore-metal`](https://github.com/ironcore-d
is responsible for providing the necessary integration points for Gardener to manage bare metal clusters.

Those integration points include:
- Configure and the [Cloud Controller Manager](/baremetal/kubernetes/cloud-controller-manager) to handle the `Node` lifecycle
- Configure the [Cloud Controller Manager](/baremetal/kubernetes/cloud-controller-manager) to handle the `Node` lifecycle
and topology information.
- Configure the [metal-load-balancer-controller](/baremetal/kubernetes/metal-loadbalancer-controller) to handle `Service` of type `LoadBalancer`.
- Configure the [Machine Controller Manager (MCM)](#machine-controller-manager-mcm) to manage the creation of `Nodes` in the cluster.
9 changes: 3 additions & 6 deletions docs/baremetal/kubernetes/metal-loadbalancer-controller.md
@@ -4,18 +4,15 @@ The [metal-loadbalancer-controller](https://github.com/ironcore-dev/metal-load-b
for managing the lifecycle of `Services` of type `LoadBalancer` in a Kubernetes cluster running on bare metal servers.
The project consists of two main components:
- **Controller**: The main component that watches for changes in `Service` resources and manages the lifecycle of load balancers.
- **Speaker**: Is responsible for announcing the load balancer IP address to `metalbond` which acts as a route reflector
to the bare metal servers.
- **Speaker**: Is responsible for announcing the load balancer IP address to `metalbond` which acts as a route reflector to the bare metal servers.

The `metal-loadbalancer-controller` is designed to work in an IPv6-only environment.

## Controller

The controller component has the following responsibilities:
- Watches for changes in `Service` resources of type `LoadBalancer` and uses the `ClusterIP` of a `Service` and patches the
`LoadBalancer` status using this `ClusterIP`.
- Setting the `PodCIDRs` on the `Node` resources to ensure that the load balancer can route traffic to the pods. Here it
takes the main `Node` IP address and the configured `node-cidr-mask-size` and patches the `Node.spec.podCIDRs` field.
- Watches for changes in `Service` resources of type `LoadBalancer` and uses the `ClusterIP` of a `Service` and patches the `LoadBalancer` status using this `ClusterIP`.
- Setting the `PodCIDRs` on the `Node` resources to ensure that the load balancer can route traffic to the pods. Here it takes the main `Node` IP address and the configured `node-cidr-mask-size` and patches the `Node.spec.podCIDRs` field.
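
A minimal sketch of the two patches described above, using placeholder IPv6 addresses and an assumed `node-cidr-mask-size` of 112; it only illustrates which fields the controller touches, not the exact values it computes.

```yaml
# Illustrative only: the controller copies spec.clusterIP into the LoadBalancer status ...
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  clusterIP: fd00:10:96::1234
status:
  loadBalancer:
    ingress:
    - ip: fd00:10:96::1234        # patched by the controller from spec.clusterIP
---
# ... and derives spec.podCIDRs from the main Node IP and the configured node-cidr-mask-size.
apiVersion: v1
kind: Node
metadata:
  name: worker-0
spec:
  podCIDRs:
  - fd00:10:1:2::/112             # placeholder prefix built from the Node IP and mask size
```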

## Metalbond-Speaker

28 changes: 12 additions & 16 deletions docs/contribute/contributing.md
@@ -12,9 +12,6 @@ We welcome contributions from the community! Follow these guidelines to help us
- Feedback will be provided if any adjustments are necessary.

Thank you for helping improve our documentation!
# Contributing to IronCore Documentation

We welcome contributions from the community! Follow these guidelines to help us maintain high-quality, consistent documentation.
## Contributors Guide

The IronCore Documentation project uses GitHub to manage reviews of pull requests.
@@ -36,11 +33,11 @@ and us a good deal of inspiration.

## Steps to Contribute

Do you want to work on an issue? You are welcome to claim an existing one by commenting on it in GitHub.

!!! note
Perform a cursory search to see if the issue has already been taken by someone else.
This will prevent misunderstanding and duplication of effort from contributors on the same issue.

If you have questions about one of the issues, please comment on it and one of the
maintainers will clarify it.
@@ -72,21 +69,21 @@ git clone git@github.com:ironcore-dev/ironcore-dev.github.io.git
cd docs
```

* Create a branch from `main` using the `git checkout` command.
!!! note
If needed, rebase to the current `main` branch before submitting your pull request. If it doesn't merge properly
with `main`, you may be asked to rebase your changes.

```shell
git checkout -b my_feature
# rebase if necessary
git fetch upstream main
git rebase upstream/main
```

* Commits should be as small as possible, while ensuring that each commit is correct independently.

* Commit your changes to your feature branch and push it to your fork.

```shell
git add .
@@ -111,8 +108,7 @@ for a review in the pull request or a comment.

## Issues and Planning

We use GitHub issues to track bugs and enhancement requests. Please provide as much context as possible when you open an issue. The information you provide must be comprehensive enough for the assignee to understand the issue, reproduce the behavior, and find related reports.
Therefore, contributors may use, but aren't restricted to, the issue template provided by the IronCore maintainers.


Thank you for helping improve our documentation!
16 changes: 7 additions & 9 deletions docs/iaas/architecture/networking.md
@@ -2,7 +2,7 @@

## Overview

IronCore's virtual networking architecture provides an end-to-end virtual networking solution for provisioned `Machine`s running in data centers, regardless they are baremetal machines or virtual machines. It is designed to enable robust, flexible and performing networking control plane and data plane.
IronCore's virtual networking architecture provides an end-to-end virtual networking solution for provisioned `Machine`s running in data centers, regardless of whether they are bare metal machines or virtual machines. It is designed to enable a robust, flexible and performant networking control plane and data plane.

- **Robust**: IronCore's virtual networking control plane is mainly implemented using the Kubernetes controller model. Thus, it is able to survive component failures and recover the running state by retrieving the desired networking configuration.
- **Flexible**: Thanks to the modular and layered architecture design, IronCore's virtual networking solution allows developers to implement and interchange components, from the top-level data center management system built upon the defined IronCore APIs down to the lowest-level packet processing engines, depending on the hardware used.
@@ -14,23 +14,21 @@ IronCore's virtual networking architecture is illustrated with the following fig

The main elements involved in IronCore's networking are:
- [**ironcore**](https://github.com/ironcore-dev/ironcore): Core networking component that manages network resources and configurations. For more details, see the
  [Networking usage guide](/iaas/usage-guides/networking).
- [**ironcore-net**](https://github.com/ironcore-dev/ironcore-net): Global coordination service that manages network resources in an IronCore instance.
- [**metalnet**](https://github.com/ironcore-dev/metalnet): A service that provides cluster-level networking capabilities for `Machines`.
- [**dpservice**](https://github.com/ironcore-dev/dpservice): A service that manages data plane operations, including network traffic routing and policies.
- [**metalbond**](https://github.com/ironcore-dev/metalbond): A component that handles route announcements in an IronCore instance, ensuring that networking routes are
  correctly propagated across the IronCore installation.

## `ironcore` and `ironcore-net`

`ironcore-net` is a global coordination service within an IronCore installation. Therefore, it is a single instance and
the place where all network-related decisions, such as the reservation of unique IP addresses and the allocation of unique network IDs, are made.

Apart from its [own API](https://github.com/ironcore-dev/ironcore-net/tree/main/api/core/v1alpha1), `ironcore-net` has two main components:
- **apinetlet**: This component is responsible from translating the user-facing API objects from the `networking` resource group into the
internal representation used by `ironcore-net`.
- **metalnetlet**: This component is interfacing with the `metalnet` API to manage cluster-level networking resources like `NetworkInterface` which
are requested globally in the `ironcore-net` API but are implemented by `metalnet` on a hypervisor level.
- **apinetlet**: This component is responsible for translating the user-facing API objects from the `networking` resource group into the internal representation used by `ironcore-net`.
- **metalnetlet**: This component interfaces with the `metalnet` API to manage cluster-level networking resources like `NetworkInterface`, which are requested globally in the `ironcore-net` API but are implemented by `metalnet` on the hypervisor level.

### Example `apinetlet` flow

@@ -50,10 +48,10 @@ The `apinetlet` will reconcile this `VirtualIP` by performing the following step
1. Create an `IP` object in the `ironcore-net` API, which reserves a unique IP address.
2. Update the `VirtualIP` status with the allocated IP address.
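
For illustration, a reconciled `VirtualIP` could end up looking roughly like the sketch below; the API version, field names, and the address are assumptions made for the example and may differ from the current `ironcore` API.

```yaml
# Hypothetical VirtualIP after reconciliation; values are placeholders.
apiVersion: networking.ironcore.dev/v1alpha1   # assumed API group/version
kind: VirtualIP
metadata:
  name: my-vip
  namespace: default
spec:
  type: Public
  ipFamily: IPv4
status:
  ip: 203.0.113.10   # written back by the apinetlet once the IP object has been allocated in ironcore-net
```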

The `ironcore` API server is agnostic on how the underlying global IP address is allocated and delegates this responsibility
The `IronCore` API server is agnostic on how the underlying global IP address is allocated and delegates this responsibility
to `ironcore-net`.

A similar flow happens for `Networks`, `LoadBalancer` and `NatGateways` resources, where the `apinetlet` is responsible
A similar flow happens for `Network`, `LoadBalancer` and `NatGateway` resources, where the `apinetlet` is responsible
for translating and allocating the necessary resources in `ironcore-net` to ensure that the networking requirements are met.

### `metalnetlet` and `metalnet`
10 changes: 4 additions & 6 deletions docs/iaas/architecture/runtime-interface.md
@@ -1,6 +1,6 @@
# IronCore Runtime Interface (IRI)

The IronCore Runtime Interface (IRI) is a key concept in the IronCore architecture, designed to provide a consitent
The IronCore Runtime Interface (IRI) is a key concept in the IronCore architecture, designed to provide a consistent
and unified interface for interacting with various compute and storage providers. The IRI abstracts the underlying
complexities of different providers.

@@ -12,7 +12,7 @@ There are three main runtime interfaces in IronCore:
Implementations of these interfaces are provided by provider-specific components. More information about the providers can
be found in the [provider concept documentation](/iaas/architecture/providers/).

The definition of the runtime interfaces can be found in IronCores [`iri` package](https://github.com/ironcore-dev/ironcore/tree/main/iri/).
The definition of the runtime interfaces can be found in IronCore's [`iri` package](https://github.com/ironcore-dev/ironcore/tree/main/iri/).

## MachineRuntime Interface

@@ -40,9 +40,7 @@ service MachineRuntime {
}
```

The general idea is that a `machinepoollet` ensures that the API level dependencies are met. For example, a `Machine`s
`Volume` which is used as a root disk is in the state `Available`. If those prerequisites are met, the `poollet` will
call the corresponding `CreateMachine` method of the `RuntimeInterface` to create the `Machine` resource.
The general idea is that a `machinepoollet` ensures that the API level dependencies are met. For example, a `Machine`'s `Volume` which is used as a root disk is in the state `Available`. If those prerequisites are met, the `poollet` will call the corresponding `CreateMachine` method of the `RuntimeInterface` to create the `Machine` resource.

The `ListMachines` and `Status` methods are used to retrieve a list of all `Machine` instances managed by the provider.
The result of those methods is then used to propagate `Machine` state changes. Those methods are periodically called by
@@ -53,7 +51,7 @@ methods to attach volumes or network interfaces to a `Machine` if a change in th

## VolumeRuntime Interface

Similar to the `MachineRuntime`, the `VolumeRuntime` interface is responsible for managing storage resources in IronCore.
Similar to the `MachineRuntime`, the `VolumeRuntime` interface is responsible for managing block storage resources in IronCore.
Here the `volumepoollet` takes a similar role as the `machinepoollet` for the `MachineRuntime` and invokes `CreateVolume`,
`DeleteVolume`, `ExpandVolume`, and other methods to manage `Volume` resources.

8 changes: 4 additions & 4 deletions docs/iaas/kubernetes/cloud-controller-manager.md
@@ -27,10 +27,10 @@ Below is the detailed explanation on how APIs are implemented by `cloud-provider

InstanceMetadata returns metadata of a node instance, which includes:

- `ProviderID`: Derived from the ProviderName (which is simply set to `IronCore`).
- `InstanceType`: Set to the name of the MachineClass referenced by the instance.
- `NodeAddresses`: Calculated from the IP information available from the machine's NetworkInterfaces.
- `Zone`: Set to the name of the referenced MachinePool.


## Load balancing for Services of type LoadBalancer
@@ -47,7 +47,7 @@ explanation on how APIs are implemented in IronCore cloud-provider.
### Ensure LoadBalancer

- `EnsureLoadBalancer` gets the LoadBalancer name based on service name.
- Checks if IronCore `LoadBalancer` object already exists. If not it gets the `port` and `protocol`, `ipFamily` information from the service and creates a new LoadBalancer object in the Ironcore.
- Checks if IronCore `LoadBalancer` object already exists. If not it gets the `port` and `protocol`, `ipFamily` information from the service and creates a new LoadBalancer object in the Ironcore.
- Newly created LoadBalancer will be associated with Network reference provided in cloud configuration.
- Then `LoadBalancerRouting` object is created with the destination IP information retrieved from the nodes (Note: `LoadBalancerRouting` is internal object to Ironcore). Later, this information is used at the Ironcore API level to describe the explicit targets in a pool traffic is routed to.
- Ironcore supports two types of LoadBalancer `Public` and `Internal`. If LoadBalancer has to be of type Internal, "service.beta.kubernetes.io/ironcore-load-balancer-internal" annotation needs to be set to true, otherwise it will be considered as public type.
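
As a sketch, a `Service` that asks for an internal load balancer would carry the annotation mentioned above; everything apart from the annotation key is a placeholder.

```yaml
# Illustrative Service requesting an internal IronCore LoadBalancer.
apiVersion: v1
kind: Service
metadata:
  name: internal-app
  annotations:
    service.beta.kubernetes.io/ironcore-load-balancer-internal: "true"  # omit for a Public LoadBalancer
spec:
  type: LoadBalancer
  ports:
  - port: 443
    protocol: TCP
  ipFamilies:
  - IPv4
```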