diff --git a/hosted_control_planes/hcp-deploy/hcp-deploy-bm.adoc b/hosted_control_planes/hcp-deploy/hcp-deploy-bm.adoc
index 9ba4f7bc933d..7a56f6dc9df0 100644
--- a/hosted_control_planes/hcp-deploy/hcp-deploy-bm.adoc
+++ b/hosted_control_planes/hcp-deploy/hcp-deploy-bm.adoc
@@ -53,12 +53,18 @@
 include::modules/hcp-bm-dns.adoc[leveloffset=+1]
 
 include::modules/hcp-custom-dns.adoc[leveloffset=+2]
 
-include::modules/hcp-bm-hc.adoc[leveloffset=+1]
+[id="hcp-bm-create-hc_{context}"]
+== Creating a hosted cluster on bare metal
+
+When you create a hosted cluster with the Agent platform, the HyperShift Operator installs the Agent Cluster API provider in the hosted control plane namespace. You can create a hosted cluster on bare metal or you can import one.
+
+include::modules/hcp-bm-hc.adoc[leveloffset=+2]
 
 [role="_additional-resources"]
 .Additional resources
 
 * xref:../../hosted_control_planes/hcp-import.adoc[Manually importing a hosted cluster]
+* link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.13/html/clusters/cluster_mce_overview#configure-hosted-disconnected-digest-image[Extracting the {product-title} release image digest]
 
 include::modules/hcp-bm-hc-console.adoc[leveloffset=+2]
diff --git a/hosted_control_planes/hcp-deploy/hcp-deploy-ibm-power.adoc b/hosted_control_planes/hcp-deploy/hcp-deploy-ibm-power.adoc
index b780d5fa6476..e664d0a785ea 100644
--- a/hosted_control_planes/hcp-deploy/hcp-deploy-ibm-power.adoc
+++ b/hosted_control_planes/hcp-deploy/hcp-deploy-ibm-power.adoc
@@ -37,7 +37,17 @@ include::modules/hcp-ibm-power-infra-reqs.adoc[leveloffset=+1]
 include::modules/hcp-ibm-power-dns.adoc[leveloffset=+1]
 
 include::modules/hcp-custom-dns.adoc[leveloffset=+2]
 
-include::modules/hcp-bm-hc.adoc[leveloffset=+1]
+[id="hcp-bm-create-hc-ibm-power"]
+== Creating a hosted cluster on bare metal
+
+When you create a hosted cluster with the Agent platform, the HyperShift Operator installs the Agent Cluster API provider in the hosted control plane namespace. You can create a hosted cluster on bare metal or you can import one.
+
+include::modules/hcp-bm-hc.adoc[leveloffset=+2]
+
+[role="_additional-resources"]
+.Additional resources
+* xref:../../hosted_control_planes/hcp-import.adoc[Manually importing a hosted cluster]
+* link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.13/html/clusters/cluster_mce_overview#configure-hosted-disconnected-digest-image[Extracting the {product-title} release image digest]
 
 [id="hcp-ibm-power-heterogeneous-nodepools_{context}"]
 == Creating heterogeneous node pools on agent hosted clusters
diff --git a/hosted_control_planes/hcp-deploy/hcp-deploy-ibmz.adoc b/hosted_control_planes/hcp-deploy/hcp-deploy-ibmz.adoc
index 5fb81ab2d561..e57c0b18f304 100644
--- a/hosted_control_planes/hcp-deploy/hcp-deploy-ibmz.adoc
+++ b/hosted_control_planes/hcp-deploy/hcp-deploy-ibmz.adoc
@@ -43,11 +43,17 @@ include::modules/hcp-ibm-z-infra-reqs.adoc[leveloffset=+1]
 include::modules/hcp-ibm-z-dns.adoc[leveloffset=+1]
 
 include::modules/hcp-custom-dns.adoc[leveloffset=+2]
 
-include::modules/hcp-bm-hc.adoc[leveloffset=+1]
+[id="hcp-bm-create-hc-ibm-z"]
+== Creating a hosted cluster on bare metal
+
+When you create a hosted cluster with the Agent platform, the HyperShift Operator installs the Agent Cluster API provider in the hosted control plane namespace. You can create a hosted cluster on bare metal or you can import one.
+
+include::modules/hcp-bm-hc.adoc[leveloffset=+2]
 
 [role="_additional-resources"]
 .Additional resources
-
+* xref:../../hosted_control_planes/hcp-import.adoc[Manually importing a hosted cluster]
+* link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.13/html/clusters/cluster_mce_overview#configure-hosted-disconnected-digest-image[Extracting the {product-title} release image digest]
 * xref:../../hosted_control_planes/hcp-deploy/hcp-deploy-bm.adoc#hcp-bm-hc-console_hcp-deploy-bm[Creating a hosted cluster on bare metal by using the console]
 
 include::modules/hcp-ibm-z-infraenv.adoc[leveloffset=+1]
diff --git a/modules/hcp-bm-autoscale.adoc b/modules/hcp-bm-autoscale.adoc
index c25dbaf28f9c..dda4ded8cd35 100644
--- a/modules/hcp-bm-autoscale.adoc
+++ b/modules/hcp-bm-autoscale.adoc
@@ -2,6 +2,7 @@
 //
 // * hosted_control_planes/hcp-manage/hcp-manage-bm.adoc
 // * hosted_control_planes/hcp-manage/hcp-manage-non-bm.adoc
+// * hosted_control_planes/hcp-manage/hcp-manage-virt.adoc
 
 :_mod-docs-content-type: PROCEDURE
 [id="hcp-bm-autoscale_{context}"]
diff --git a/modules/hcp-bm-dns.adoc b/modules/hcp-bm-dns.adoc
index 91dff7bf66a3..96f681ede92f 100644
--- a/modules/hcp-bm-dns.adoc
+++ b/modules/hcp-bm-dns.adoc
@@ -8,7 +8,7 @@
 The API Server for the hosted cluster is exposed as a `NodePort` service. A DNS entry must exist for `api.<hosted_cluster_name>.<base_domain>` that points to the destination where the API Server can be reached.
 
-The DNS entry can be as simple as a record that points to one of the nodes in the managed cluster that is running the hosted control plane. The entry can also point to a load balancer that is deployed to redirect incoming traffic to the ingress pods.
+The DNS entry can be as simple as a record that points to one of the nodes in the management cluster that is running the hosted control plane. The entry can also point to a load balancer that is deployed to redirect incoming traffic to the ingress pods.
 
 .Example DNS configuration
 [source,terminal]
 ----
@@ -22,6 +22,11 @@ api-int.example.krnl.es.  IN A 192.168.122.22
 `*`.apps.example.krnl.es. IN A 192.168.122.23
 ----
 
+[NOTE]
+====
+In the previous example, the `*.apps.example.krnl.es. IN A 192.168.122.23` record points either to a node in the hosted cluster or to a load balancer, if one is configured.
+====
+
 If you are configuring DNS for a disconnected environment on an IPv6 network, the configuration looks like the following example.
 
 .Example DNS configuration for an IPv6 network
diff --git a/modules/hcp-bm-hc.adoc b/modules/hcp-bm-hc.adoc
index bff187da6d34..5a02408858ef 100644
--- a/modules/hcp-bm-hc.adoc
+++ b/modules/hcp-bm-hc.adoc
@@ -6,75 +6,418 @@
 :_mod-docs-content-type: PROCEDURE
 [id="hcp-bm-hc_{context}"]
-= Creating a hosted cluster on bare metal
+= Creating a hosted cluster by using the CLI
 
-When you create a hosted cluster with the Agent platform, HyperShift installs the Agent Cluster API provider in the hosted control plane namespace. You can create a hosted cluster on bare metal or import one.
-
-As you create a hosted cluster, keep the following guidelines in mind:
+To create a hosted cluster by using the command-line interface (CLI), complete the following steps.
 
+.Prerequisites
+
 - Each hosted cluster must have a cluster-wide unique name. A hosted cluster name cannot be the same as any existing managed cluster in order for the {mce-short} to manage it.
 - Do not use `clusters` as a hosted cluster name.
 - A hosted cluster cannot be created in the namespace of a {mce-short} managed cluster.
-The most common service publishing strategy is to expose services through a load balancer. That strategy is the preferred method for exposing the Kubernetes API server. If you create a hosted cluster by using the web console or by using {rh-rhacm-title}, to set a publishing strategy for a service besides the Kubernetes API server, you must manually specify the `servicePublishingStrategy` information in the `HostedCluster` custom resource.
+- By default, when you use the `hcp create cluster agent` command, the hosted cluster is created with node ports. However, the preferred publishing strategy for hosted clusters on bare metal is to expose services through a load balancer. If you create a hosted cluster by using the web console or by using {rh-rhacm-title}, to set a publishing strategy for a service besides the Kubernetes API server, you must manually specify the `servicePublishingStrategy` information in the `HostedCluster` custom resource. For more information, see step 4 in this procedure.
+
+- Ensure that you meet the requirements described in "Preparing to deploy {hcp} on bare metal", which includes requirements related to infrastructure, firewalls, ports, and services. For example, those requirements describe how to add the appropriate zone labels to the bare-metal hosts in your management cluster, as shown in the following example commands:
++
+[source,terminal]
+----
+$ oc label node [compute-node-1] topology.kubernetes.io/zone=zone1
+----
++
+[source,terminal]
+----
+$ oc label node [compute-node-2] topology.kubernetes.io/zone=zone2
+----
++
+[source,terminal]
+----
+$ oc label node [compute-node-3] topology.kubernetes.io/zone=zone3
+----
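++
+To confirm that the zone labels are applied, you can list the nodes with the zone label displayed as a column. This is a standard `oc` query and does not depend on the example node names:
++
+[source,terminal]
+----
+$ oc get nodes -L topology.kubernetes.io/zone
+----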
+
+- Ensure that you have added bare metal nodes to a hardware inventory.
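++
+For example, you can confirm that the hosts that you added are reported as agents by entering the following command, where `<hosted_control_plane_namespace>` matches the namespace that you pass to the `--agent-namespace` flag later in this procedure:
++
+[source,terminal]
+----
+$ oc get agent -n <hosted_control_plane_namespace>
+----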
 
 .Procedure
 
-. Create the hosted control plane namespace by entering the following command:
+. Create a namespace by entering the following command:
 +
 [source,terminal]
 ----
-$ oc create ns <hosted_cluster_namespace>-<hosted_cluster_name>
+$ oc create ns <hosted_cluster_namespace>
 ----
 +
-Replace `<hosted_cluster_namespace>` with your hosted cluster namespace name, for example, `clusters`. Replace `<hosted_cluster_name>` with your hosted cluster name.
+Replace `<hosted_cluster_namespace>` with an identifier for your hosted cluster namespace. Typically, the namespace is created by the HyperShift Operator, but during the hosted cluster creation process on bare metal, a Cluster API provider role is generated that requires the namespace to exist in advance.
 
-. Verify that you have a default storage class configured for your cluster. Otherwise, you might see pending PVCs. Run the following command:
+. Create the configuration file for your hosted cluster by entering the following command:
 +
 [source,terminal]
 ----
 $ hcp create cluster agent \
-    --name=<hosted_cluster_name> \// <1>
-    --pull-secret=<path_to_pull_secret> \// <2>
-    --agent-namespace=<hosted_control_plane_namespace> \// <3>
-    --base-domain=<base_domain> \// <4>
-    --api-server-address=api.<hosted_cluster_name>.<base_domain> \// <5>
-    --etcd-storage-class=<etcd_storage_class> \// <6>
-    --ssh-key <path_to_ssh_public_key> \// <7>
-    --namespace <hosted_cluster_namespace> \// <8>
-    --control-plane-availability-policy HighlyAvailable \// <9>
-    --release-image=quay.io/openshift-release-dev/ocp-release:<ocp_release_image> \// <10>
-    --node-pool-replicas <node_pool_replica_count> <11>
-----
-+
-<1> Specify the name of your hosted cluster, for instance, `example`.
+    --name=<hosted_cluster_name> \// <1>
+    --pull-secret=<path_to_pull_secret> \// <2>
+    --agent-namespace=<hosted_control_plane_namespace> \// <3>
+    --base-domain=<base_domain> \// <4>
+    --api-server-address=api.<hosted_cluster_name>.<base_domain> \// <5>
+    --etcd-storage-class=<etcd_storage_class> \// <6>
+    --ssh-key=<path_to_ssh_public_key> \// <7>
+    --namespace=<hosted_cluster_namespace> \// <8>
+    --control-plane-availability-policy=HighlyAvailable \// <9>
+    --release-image=quay.io/openshift-release-dev/ocp-release:<ocp_release_image>-multi \// <10>
+    --node-pool-replicas=<node_pool_replica_count> \// <11>
+    --render > hosted-cluster-config.yaml
+----
++
+<1> Specify the name of your hosted cluster.
 <2> Specify the path to your pull secret, for example, `/user/name/pullsecret`.
-<3> Specify your hosted control plane namespace, for example, `clusters-example`. Ensure that agents are available in this namespace by using the `oc get agent -n <hosted_control_plane_namespace>` command.
+<3> Specify your hosted control plane namespace. To ensure that agents are available in this namespace, enter the `oc get agent -n <hosted_control_plane_namespace>` command.
 <4> Specify your base domain, for example, `krnl.es`.
 <5> The `--api-server-address` flag defines the IP address that is used for the Kubernetes API communication in the hosted cluster. If you do not set the `--api-server-address` flag, you must log in to connect to the management cluster.
 <6> Specify the etcd storage class name, for example, `lvm-storageclass`.
 <7> Specify the path to your SSH public key. The default file path is `~/.ssh/id_rsa.pub`.
 <8> Specify your hosted cluster namespace.
 <9> Specify the availability policy for the hosted control plane components. Supported options are `SingleReplica` and `HighlyAvailable`. The default value is `HighlyAvailable`.
-<10> Specify the supported {product-title} version that you want to use, for example, `4.19.0-multi`. If you are using a disconnected environment, replace `<ocp_release_image>` with the digest image. To extract the {product-title} release image digest, see _Extracting the {product-title} release image digest_.
+<10> Specify the supported {product-title} version that you want to use, for example, `4.19.0-multi`. If you are using a disconnected environment, replace `<ocp_release_image>` with the digest image. To extract the {product-title} release image digest, see "Extracting the {product-title} release image digest".
 <11> Specify the node pool replica count, for example, `3`. You must specify the replica count as `0` or greater to create the same number of replicas. Otherwise, no node pools are created.
++
+[NOTE]
+====
+Although the rendered YAML file references the pull secret, the SSH public key, and the etcd encryption key, those secrets are not created yet. You must create them manually by using the same names that the rendered file references.
+====
+
+. Manually create the pull secret, SSH public key, and etcd encryption key by completing the following steps:
++
+.. Configure an etcd encryption key by creating a YAML file with the following content:
++
+[source,yaml]
+----
+apiVersion: v1
+kind: Secret
+metadata:
+  name: <etcd_encryption_key_secret_name>
+  namespace: <hosted_cluster_namespace>
+type: Opaque
+data:
+  key: <base64_encoded_etcd_encryption_key>
+----
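++
+The value of the `key` field must be a base64-encoded encryption key. For example, you can generate a random value with the following command. This example assumes a 32-byte (AES-256) key; confirm the expected key length for your {hcp} version before you use it:
++
+[source,terminal]
+----
+$ openssl rand -base64 32
+----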
++
+.. Apply the configuration by entering the following command:
++
+[source,terminal]
+----
+$ oc apply -f etcd_encryption_key_config.yaml
+----
++
+.. Create a pull secret by entering the following command:
++
+[source,terminal]
+----
+$ oc create secret generic <pull_secret_name> \
+  --from-file=<path_to_pull_secret> \
+  -n <hosted_cluster_namespace>
+----
++
+.. Configure an SSH key by creating a YAML file with the following content:
++
+[source,yaml]
+----
+apiVersion: v1
+kind: Secret
+metadata:
+  name: <ssh_key_secret_name>
+  namespace: <hosted_cluster_namespace>
+type: Opaque
+data:
+  id_rsa.pub: <base64_encoded_ssh_public_key>
+----
++
+.. Apply the configuration by entering the following command:
++
+[source,terminal]
+----
+$ oc apply -f ssh_key_config.yaml
+----
++
+.. Confirm that the pull secret, SSH key, and etcd encryption key exist by entering the following commands:
++
+[source,terminal]
+----
+$ oc get secret <pull_secret_name> -n <hosted_cluster_namespace>
+----
++
+[source,terminal]
+----
+$ oc get secret <ssh_key_secret_name> -n <hosted_cluster_namespace>
+----
++
+[source,terminal]
+----
+$ oc get secret <etcd_encryption_key_secret_name> -n <hosted_cluster_namespace>
+----
+
+. Configure the service publishing strategy. By default, hosted clusters use the NodePort service publishing strategy because node ports are always available without additional infrastructure. However, you can configure the service publishing strategy to use a load balancer.
+
+** If you are using the default NodePort strategy, configure the DNS to point to the hosted cluster compute nodes, not the management cluster nodes. For more information, see "DNS configurations on bare metal".
+
+** For production environments, use the LoadBalancer strategy because it provides certificate handling and automatic DNS resolution. To change the service publishing strategy to `LoadBalancer`, edit the service publishing strategy details in your hosted cluster configuration file:
++
+[source,yaml]
+----
+...
+spec:
+  services:
+  - service: APIServer
+    servicePublishingStrategy:
+      type: LoadBalancer #<1>
+  - service: Ignition
+    servicePublishingStrategy:
+      type: Route
+  - service: Konnectivity
+    servicePublishingStrategy:
+      type: Route
+  - service: OAuthServer
+    servicePublishingStrategy:
+      type: Route
+  - service: OIDC
+    servicePublishingStrategy:
+      type: Route
+  sshKey:
+    name: <ssh_key_secret_name>
+...
+----
++
+<1> Specify `LoadBalancer` as the API Server type. For all other services, specify `Route` as the type.
+
+. Apply the hosted cluster configuration file by entering the following command:
++
+[source,terminal]
+----
+$ oc apply -f hosted-cluster-config.yaml
+----
+
+. Monitor the creation of the hosted cluster, node pools, and pods by entering the following commands:
++
+[source,terminal]
+----
+$ oc get hostedcluster <hosted_cluster_name> \
+  -n <hosted_cluster_namespace> \
+  -o jsonpath='{.status.conditions[?(@.status=="False")]}' | jq .
+----
++
+[source,terminal]
+----
+$ oc get nodepool <node_pool_name> \
+  -n <hosted_cluster_namespace> \
+  -o jsonpath='{.status.conditions[?(@.status=="False")]}' | jq .
+----
++
+[source,terminal]
+----
+$ oc get pods -n <hosted_control_plane_namespace>
+----
+
+. Confirm that the hosted cluster is ready. The cluster is ready when its status is `Available: True`, the node pool status shows `AllMachinesReady: True`, and all cluster Operators are healthy.
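++
+For example, you can query the `AllMachinesReady` condition of the node pool directly. This command follows the same pattern as the monitoring commands in the previous step; replace the placeholder values to match your environment:
++
+[source,terminal]
+----
+$ oc get nodepool <node_pool_name> \
+  -n <hosted_cluster_namespace> \
+  -o jsonpath='{.status.conditions[?(@.type=="AllMachinesReady")].status}'
+----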
+
+. Install MetalLB in the hosted cluster:
++
-. After a few moments, verify that your hosted control plane pods are up and running by entering the following command:
+.. Extract the kubeconfig file for the hosted cluster and set the environment variable for hosted cluster access by entering the following commands:
 +
 [source,terminal]
 ----
-$ oc -n <hosted_cluster_namespace>-<hosted_cluster_name> get pods
+$ oc get secret <hosted_cluster_name>-admin-kubeconfig \
+  -n <hosted_cluster_namespace> \
+  -o jsonpath='{.data.kubeconfig}' \
+  | base64 -d > kubeconfig-<hosted_cluster_name>.yaml
+----
++
+[source,terminal]
+----
+$ export KUBECONFIG="/path/to/kubeconfig-<hosted_cluster_name>.yaml"
+----
++
+.. Install the MetalLB Operator by creating the `install-metallb-operator.yaml` file:
++
+[source,yaml]
+----
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: metallb-system
+---
+apiVersion: operators.coreos.com/v1
+kind: OperatorGroup
+metadata:
+  name: metallb-operator
+  namespace: metallb-system
+---
+apiVersion: operators.coreos.com/v1alpha1
+kind: Subscription
+metadata:
+  name: metallb-operator
+  namespace: metallb-system
+spec:
+  channel: "stable"
+  name: metallb-operator
+  source: redhat-operators
+  sourceNamespace: openshift-marketplace
+  installPlanApproval: Automatic
+----
++
+.. Apply the file by entering the following command:
++
+[source,terminal]
+----
+$ oc apply -f install-metallb-operator.yaml
+----
++
+.. Configure the MetalLB IP address pool by creating the `deploy-metallb-ipaddresspool.yaml` file:
++
+[source,yaml]
+----
+apiVersion: metallb.io/v1beta1
+kind: IPAddressPool
+metadata:
+  name: metallb
+  namespace: metallb-system
+spec:
+  autoAssign: true
+  addresses:
+  - 10.11.176.71-10.11.176.75
+---
+apiVersion: metallb.io/v1beta1
+kind: L2Advertisement
+metadata:
+  name: l2advertisement
+  namespace: metallb-system
+spec:
+  ipAddressPools:
+  - metallb
+----
++
+.. Apply the configuration by entering the following command:
++
+[source,terminal]
+----
+$ oc apply -f deploy-metallb-ipaddresspool.yaml
+----
++
+.. Verify that MetalLB is installed by checking the Operator status, the IP address pool, and the `L2Advertisement` resource. Enter the following commands:
++
+[source,terminal]
+----
+$ oc get pods -n metallb-system
+----
++
+[source,terminal]
+----
+$ oc get ipaddresspool -n metallb-system
+----
++
+[source,terminal]
+----
+$ oc get l2advertisement -n metallb-system
+----
+
+. Configure the load balancer for ingress:
++
+.. Create the `ingress-loadbalancer.yaml` file:
++
+[source,yaml]
+----
+apiVersion: v1
+kind: Service
+metadata:
+  annotations:
+    metallb.universe.tf/address-pool: metallb
+  name: metallb-ingress
+  namespace: openshift-ingress
+spec:
+  ports:
+  - name: http
+    protocol: TCP
+    port: 80
+    targetPort: 80
+  - name: https
+    protocol: TCP
+    port: 443
+    targetPort: 443
+  selector:
+    ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default
+  type: LoadBalancer
+----
++
+.. Apply the configuration by entering the following command:
++
+[source,terminal]
+----
+$ oc apply -f ingress-loadbalancer.yaml
+----
++
+.. Verify that the load balancer service is assigned an external IP address by entering the following command:
++
+[source,terminal]
+----
+$ oc get svc metallb-ingress -n openshift-ingress
+----
++
+.Example output
++
+[source,text]
+----
+NAME              TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)                      AGE
+metallb-ingress   LoadBalancer   172.31.127.129   10.11.176.71   80:30961/TCP,443:32090/TCP   16h
+----
+
+. Configure the DNS to work with the load balancer:
++
+.. Configure the DNS for the `apps` domain by pointing the `*.apps.<hosted_cluster_name>.<base_domain>` wildcard DNS record to the load balancer IP address.
++
+.. Verify the DNS resolution by entering the following command:
++
+[source,terminal]
+----
-NAME                                          READY   STATUS    RESTARTS   AGE
-capi-provider-7dcf5fc4c4-nr9sq                1/1     Running   0          4m32s
-catalog-operator-6cd867cc7-phb2q              2/2     Running   0          2m50s
-certified-operators-catalog-884c756c4-zdt64   1/1     Running   0          2m51s
-cluster-api-f75d86f8c-56wfz                   1/1     Running   0          4m32s
+$ nslookup console-openshift-console.apps.<hosted_cluster_name>.<base_domain>
+----
++
+.Example output
++
+[source,text]
+----
+Server:         10.11.176.1
+Address:        10.11.176.1#53
+
+Name:   console-openshift-console.apps.my-hosted-cluster.sample-base-domain.com
+Address: 10.11.176.71
+----
+
+.Verification
+
+. Check the cluster Operators by entering the following command:
++
+[source,terminal]
+----
+$ oc get clusteroperators
+----
++
+Ensure that all Operators show `AVAILABLE: True`, `PROGRESSING: False`, and `DEGRADED: False`.
+
+. Check the nodes by entering the following command:
++
+[source,terminal]
+----
+$ oc get nodes
+----
++
+Ensure that the status of all nodes is `Ready`.
+
+. Test access to the console by entering the following URL in a web browser:
++
+[source,text]
+----
+https://console-openshift-console.apps.<hosted_cluster_name>.<base_domain>
+----
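++
+If a web browser is not available from your network, you can confirm that the console route responds by entering a command such as the following. This is a quick connectivity check only; the `-k` option skips certificate verification and does not replace logging in through the browser:
++
+[source,terminal]
+----
+$ curl -kI https://console-openshift-console.apps.<hosted_cluster_name>.<base_domain>
+----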