diff --git a/integration/integrate-with-image-vulnerability-scanners.adoc b/integration/integrate-with-image-vulnerability-scanners.adoc index ca008aecd044..41e272ea0020 100644 --- a/integration/integrate-with-image-vulnerability-scanners.adoc +++ b/integration/integrate-with-image-vulnerability-scanners.adoc @@ -9,8 +9,7 @@ toc::[] [role="_abstract"] {rh-rhacs-first} integrates with vulnerability scanners to enable you to import your container images and watch them for vulnerabilities. -[discrete] -== Supported container image registries +Supported container image registries:: Red{nbsp}Hat supports the following container image registries: @@ -27,13 +26,11 @@ Red{nbsp}Hat supports the following container image registries: This enhanced support gives you greater flexibility and choice in managing your container images in your preferred registry. -[discrete] -== Supported Scanners +Supported Scanners:: You can set up {product-title-short} to obtain image vulnerability data from the following commercial container image vulnerability scanners: -[discrete] -=== Scanners included in {product-title-short} +Scanners included in {product-title-short}:: * Scanner V4: Beginning with {product-title-short} version 4.4, a new scanner is introduced that is built on link:https://github.com/quay/claircore[ClairCore], which also powers the link:https://github.com/quay/clair[Clair] scanner. Scanner V4 supports scanning of language and OS-specific image components. You do not have to create an integration to use this scanner, but you must enable it during or after installation. For version 4.4, if you enable this scanner, you must also enable the StackRox Scanner. For more information about Scanner V4, including links to the installation documentation, see xref:../operating/examine-images-for-vulnerabilities.adoc#about-scanner-v4_examine-images-for-vulnerabilities[About {product-title-short} Scanner V4]. * StackRox Scanner: This scanner is the default scanner in {product-title-short}. It originates from a fork of the Clair v2 open source scanner. @@ -43,8 +40,7 @@ You can set up {product-title-short} to obtain image vulnerability data from the Even if you have Scanner V4 enabled, at this time, the StackRox Scanner must still be enabled to provide scanning of RHCOS nodes and platform vulnerabilities such as {osp}, Kubernetes, and Istio. Support for that functionality in Scanner V4 is planned for a future release. Do not disable the StackRox Scanner. ==== -[discrete] -=== Alternative scanners +Alternative scanners:: * link:https://github.com/quay/clair[Clair]: As of version 4.4, you can enable Scanner V4 in {product-title-short} to provide functionality provided by ClairCore, which also powers the Clair V4 scanner. However, you can configure Clair V4 as the scanner by configuring an integration. * link:https://cloud.google.com/container-registry/docs/container-analysis[Google Container Analysis] diff --git a/modules/acs-architecture-overview.adoc b/modules/acs-architecture-overview.adoc index f6a8a0307895..2a0b9a719213 100644 --- a/modules/acs-architecture-overview.adoc +++ b/modules/acs-architecture-overview.adoc @@ -20,8 +20,6 @@ You install {product-title-short} as a set of containers in your {ocp} or Kubern In addition to these primary services, {product-title-short} also interacts with other external components to enhance your clusters' security. 
-[discrete] -[id="installation-differences-architecture_{context}"] -== Installation differences +Installation differences:: -When you install {product-title-short} on {ocp} by using the Operator, {product-title-short} installs a lightweight version of Scanner on every secured cluster. The lightweight Scanner enables the scanning of images in the integrated OpenShift image registry. When you install {product-title-short} on {ocp} or Kubernetes by using the Helm install method with the _default_ values, the lightweight version of Scanner is not installed. To install the lightweight Scanner on the secured cluster by using Helm, you must set the `scanner.disable=false` parameter. You cannot install the lightweight Scanner by using the `roxctl` installation method. \ No newline at end of file +When you install {product-title-short} on {ocp} by using the Operator, {product-title-short} installs a lightweight version of Scanner on every secured cluster. The lightweight Scanner enables the scanning of images in the integrated OpenShift image registry. When you install {product-title-short} on {ocp} or Kubernetes by using the Helm install method with the _default_ values, the lightweight version of Scanner is not installed. To install the lightweight Scanner on the secured cluster by using Helm, you must set the `scanner.disable=false` parameter. You cannot install the lightweight Scanner by using the `roxctl` installation method. diff --git a/modules/common-search-queries.adoc b/modules/common-search-queries.adoc index b68ba227aac4..9e4d464584ef 100644 --- a/modules/common-search-queries.adoc +++ b/modules/common-search-queries.adoc @@ -7,8 +7,7 @@ Here are some common search queries you can run with {product-title}. -[discrete] -== Finding deployments that are affected by a specific CVE +Finding deployments that are affected by a specific CVE:: |=== | Query | Example @@ -17,8 +16,7 @@ Here are some common search queries you can run with {product-title}. | `CVE:CVE-2018-11776` |=== -[discrete] -== Finding privileged running deployments +Finding privileged running deployments:: |=== | Query | Example @@ -27,8 +25,7 @@ Here are some common search queries you can run with {product-title}. | `Privileged:true` |=== -[discrete] -== Finding deployments that have external network exposure +Finding deployments that have external network exposure:: |=== | Query | Example @@ -37,8 +34,7 @@ Here are some common search queries you can run with {product-title}. | `Exposure Level:External` |=== -[discrete] -== Finding deployments that are running specific processes +Finding deployments that are running specific processes:: |=== | Query | Example @@ -47,8 +43,7 @@ Here are some common search queries you can run with {product-title}. | `Process Name:bash` |=== -[discrete] -== Finding deployments that have serious but fixable vulnerabilities +Finding deployments that have serious but fixable vulnerabilities:: |=== | Query | Example @@ -57,8 +52,7 @@ Here are some common search queries you can run with {product-title}. | `CVSS:>=6` `Fixable:.*` |=== -[discrete] -== Finding deployments that use passwords exposed through environment variables +Finding deployments that use passwords exposed through environment variables:: |=== | Query | Example @@ -67,8 +61,7 @@ Here are some common search queries you can run with {product-title}. 
| `Environment Key:r/.\*pass.*` |=== -[discrete] -== Finding running deployments that have particular software components in them +Finding running deployments that have particular software components in them:: |=== | Query | Example @@ -77,14 +70,12 @@ Here are some common search queries you can run with {product-title}. | `Component:libgpg-error` or `Component:sudo` |=== -[discrete] -== Finding users or groups +Finding users or groups:: Use Kubernetes link:https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/[Labels and Selectors], and link:https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/[Annotations] to attach metadata to your deployments. You can then query based on the applied annotations and labels to identify individuals or groups. -[discrete] -=== Finding who owns a particular deployment +Finding who owns a particular deployment:: |=== | Query | Example @@ -93,8 +84,7 @@ You can then query based on the applied annotations and labels to identify indiv | `Deployment:app-server` `Label:team=backend` |=== -[discrete] -=== Finding who is deploying images from public registries +Finding who is deploying images from public registries:: |=== | Query | Example @@ -103,8 +93,7 @@ You can then query based on the applied annotations and labels to identify indiv | `Image Registry:docker.io` `Label:team=backend` |=== -[discrete] -=== Finding who is deploying into the default namespace +Finding who is deploying into the default namespace:: |=== | Query | Example diff --git a/modules/configuration-details-tab.adoc b/modules/configuration-details-tab.adoc index 4af72a0fd495..f14442ea3c14 100644 --- a/modules/configuration-details-tab.adoc +++ b/modules/configuration-details-tab.adoc @@ -8,8 +8,7 @@ The *Configuration details* tab displays information about the scan schedule information such as the essential parameters, cluster status, associated profiles, and email delivery destinations. -[discrete] -== Parameters section +Parameters section:: The *Parameters* section organizes information into the following groups: @@ -19,24 +18,21 @@ The *Parameters* section organizes information into the following groups: * *Last scanned*: The timestamp of the last compliance scan performed. * *Last updated*: The last date and time that the compliance scan data was modified. -[discrete] -== Clusters section +Clusters section:: The *Clusters* section organizes information into the following groups: * *Cluster*: Lists the one or more clusters associated with a compliance scan. * *Operator status*: Indicates the current health or operational status of the Operator. -[discrete] -== Profiles section +Profiles section:: The *Profiles* section lists the one or more profiles associated with a compliance scan. -[discrete] -== Delivery destinations section +Delivery destinations section:: The *Delivery destinations* section organizes information into the following groups: * *Email notifier*: Specifies the email notification system or tool set up to distribute reports or alerts. * *Distribution list*: Lists the recipients who should receive the notifications or reports. -* *Email template*: Specifies the email format used for the notifications. You can use the default or customize the email subject and body as needed. \ No newline at end of file +* *Email template*: Specifies the email format used for the notifications. You can use the default or customize the email subject and body as needed. 
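The ownership and registry queries above assume that deployments already carry identifying labels or annotations. A minimal sketch of attaching that metadata from the command line, so that the `Label:team=backend` examples resolve to real owners (the namespace, deployment name, and values are illustrative only):

[source,terminal]
----
$ oc -n backend label deployment/app-server team=backend
$ oc -n backend annotate deployment/app-server owner="backend-team@example.com"
----

With the label in place, the query `Deployment:app-server` `Label:team=backend` from the table above returns the matching deployment.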
diff --git a/modules/create-policy-from-system-policies-view.adoc b/modules/create-policy-from-system-policies-view.adoc index 5a6c02a53e3b..6797e864542e 100644 --- a/modules/create-policy-from-system-policies-view.adoc +++ b/modules/create-policy-from-system-policies-view.adoc @@ -14,9 +14,7 @@ You can create new security policies from the system policies view. // future enhancement: split these into separate modules and call them from the assembly. Add a procedure title to each module. -[discrete] -[id="policy-details_{context}"] -== Enter policy details +Enter policy details:: Enter the following details about your policy in the *Policy details* section. @@ -31,9 +29,7 @@ Enter the following details about your policy in the *Policy details* section. .. Click the *Add technique* to add techniques for the selected tactic. You can specify multiple techniques for a tactic. . Click *Next*. -[discrete] -[id="policy-lifecycle_{context}"] -== Configure the policy lifecycle +Configure the policy lifecycle:: In the *Lifecycle* section, complete the following steps: @@ -48,9 +44,7 @@ You can select more than one stage from the following choices: * *Audit logs*: {product-title-short} triggers policy violations when event sources match Kubernetes audit log records. . Click *Next*. -[discrete] -[id="policy-rules_{context}"] -== Configure the policy rules and criteria +Configure the policy rules and criteria:: To configure a policy rule: @@ -75,9 +69,7 @@ See "Policy criteria" in the "Additional resources" section for more information . To combine multiple values for an attribute, click the *Add* icon. . Click *Next*. -[discrete] -[id="policy-scope_{context}"] -== Configure the policy scope +Configure the policy scope:: Create scopes to restrict or exclude your policy from entities, such as cluster or namespaces, within your environment. @@ -98,9 +90,7 @@ It does not have any effect if you use this policy to check running deployments ==== . Click *Next*. -[discrete] -[id="policy-actions_{context}"] -== Configure policy actions +Configure policy actions:: Configure the activation state, enforcement, and notifiers for the policy. @@ -130,9 +120,7 @@ You must have previously configured the notification before it is visible and av ==== . Click *Next*. -[discrete] -[id="policy-review_{context}"] -== Review the policy and preview violations +Review the policy and preview violations:: Review the policy settings you have configured. @@ -144,4 +132,4 @@ Review the policy settings you have configured. Runtime violations are not available in this preview because they are generated in response to future events. ==== Before you save the policy, verify that the violations seem accurate. -. Click *Save*. \ No newline at end of file +. Click *Save*. diff --git a/modules/default-requirements-central-services.adoc b/modules/default-requirements-central-services.adoc index 66d418b79352..d727bb215c91 100644 --- a/modules/default-requirements-central-services.adoc +++ b/modules/default-requirements-central-services.adoc @@ -43,8 +43,7 @@ However, you can use another storage type if you do not have SSDs available. For security reasons, you should deploy Central in a cluster with limited administrative access. ==== -[discrete] -== CPU, memory, and storage requirements +CPU, memory, and storage requirements:: The following table lists the minimum CPU and memory values required to install and run Central. 
@@ -84,8 +83,7 @@ Scanner is responsible for scanning images, nodes, and the platform for vulnerab Beginning with version 4.4, {product-title-short} includes two image vulnerability scanners: StackRox Scanner and Scanner V4. StackRox Scanner is planned to be removed in a future release, but is still required at this time to perform node and platform scanning. Scanner V4 is the preferred image scanner because it provides additional features over the StackRox Scanner, such as expanded language and operating system support and data from additional vulnerability sources. -[discrete] -== CPU, memory, and storage requirements +CPU, memory, and storage requirements:: The following table lists the minimum CPU and memory values required to install and run Scanner. The requirements in this table are based on the default of 3 replicas. diff --git a/modules/default-requirements-external-db.adoc b/modules/default-requirements-external-db.adoc index ceaefd3ba128..50f0dc9ad081 100644 --- a/modules/default-requirements-external-db.adoc +++ b/modules/default-requirements-external-db.adoc @@ -18,12 +18,10 @@ When you use an external database, note the following guidance: If you select an external database, your database instance and the user connecting to it must meet the requirements listed in the following sections. -[discrete] -== Database type and version +Database type and version:: The database must be a PostgreSQL-compatible database that supports PostgreSQL 13 or later. -[discrete] -== User permissions +User permissions:: The user account that Central uses to connect to the database must be a `superuser` account with connection rights to the database and the following permissions: * `Usage` and `Create` permissions on the schema. @@ -31,8 +29,7 @@ The user account that Central uses to connect to the database must be a `superus * `Usage` permissions on all sequences in the schema. * The ability to create and delete databases as a `superuser`. -[discrete] -== Connection string +Connection string:: Central connects to the external database by using a connection string, which must be in `keyword=value` format. The connection string should specify details such as the host, port, database name, user, and SSL/TLS mode. For example, `host= port=5432 database=stackrox user=stackrox sslmode=verify-ca`. [NOTE] @@ -40,6 +37,5 @@ Central connects to the external database by using a connection string, which mu Connections through *PgBouncer* are not supported. ==== -[discrete] -== CA certificates +CA certificates:: If your external database uses a certificate issued by a private or untrusted Certificate Authority (CA), you might need to specify the CA certificate so that Central trusts the database certificate. You can add this by using a TLS block in the Central custom resource configuration. diff --git a/modules/default-requirements-secured-cluster-services.adoc b/modules/default-requirements-secured-cluster-services.adoc index 72b6b523f710..2e6f769af0e9 100644 --- a/modules/default-requirements-secured-cluster-services.adoc +++ b/modules/default-requirements-secured-cluster-services.adoc @@ -21,8 +21,7 @@ If you use a web proxy or firewall, you must ensure that secured clusters and Ce Sensor monitors your Kubernetes and {ocp} clusters. These services currently deploy in a single deployment, which handles interactions with the Kubernetes API and coordinates with the other {product-title} components. 
-[discrete] -=== CPU and memory requirements +CPU and memory requirements:: The following table lists the minimum CPU and memory values required to install and run sensor on secured clusters. @@ -44,8 +43,7 @@ The following table lists the minimum CPU and memory values required to install The Admission controller prevents users from creating workloads that violate policies you configure. -[discrete] -=== CPU and memory requirements +CPU and memory requirements:: By default, the admission control service runs 3 replicas. The following table lists the request and limits for each replica. @@ -67,8 +65,7 @@ By default, the admission control service runs 3 replicas. The following table l Collector monitors runtime activity on each node in your secured clusters as a DaemonSet. It connects to Sensor to report this information. The collector pod has three containers. The first container is collector, which monitors and reports the runtime activity on the node. The other two are compliance and node-inventory. -[discrete] -=== Collection requirements +Collection requirements:: To use the `CORE_BPF` collection method, the base kernel must support BTF, and the BTF file must be available to collector. In general, the kernel version must be later than 5.8 (4.18 for {op-system-base} nodes) and the `CONFIG_DEBUG_INFO_BTF` configuration option must be set. @@ -93,13 +90,11 @@ Collector looks for the BTF file in the standard locations shown in the followin If any of these files exists, it is likely that the kernel has BTF support and `CORE_BPF` is configurable. -[discrete] -=== CPU and memory requirements +CPU and memory requirements:: By default, the collector pod runs 3 containers. The following tables list the request and limits for each container and the total for each collector pod. -[discrete] -==== Collector container +Collector container:: [cols="3",options="header"] |=== @@ -113,8 +108,7 @@ By default, the collector pod runs 3 containers. The following tables list the r | 1000 MiB |=== -[discrete] -==== Compliance container +Compliance container:: [cols="3",options="header"] |=== @@ -129,8 +123,7 @@ By default, the collector pod runs 3 containers. The following tables list the r | 2000 MiB |=== -[discrete] -==== Node-inventory container +Node-inventory container:: [cols="3",options="header"] |=== @@ -144,8 +137,7 @@ By default, the collector pod runs 3 containers. The following tables list the r | 500 MiB |=== -[discrete] -==== Total collector pod requirements +Total collector pod requirements:: [cols="3",options="header"] |=== @@ -162,8 +154,7 @@ By default, the collector pod runs 3 containers. The following tables list the r [id="default-requirements-secured-cluster-services-scanner_{context}"] == Scanner -[discrete] -=== CPU and memory requirements +CPU and memory requirements:: The requirements in this table are based on the default of 3 replicas. @@ -198,11 +189,9 @@ The StackRox Scanner requires Scanner DB (PostgreSQL 15) to store data. The foll Scanner V4 is optional. If Scanner V4 is installed on secured clusters, the following requirements apply. -[discrete] -=== CPU, memory, and storage requirements +CPU, memory, and storage requirements:: -[discrete] -=== Scanner V4 Indexer +Scanner V4 Indexer:: The requirements in this table are based on the default of 2 replicas. @@ -218,8 +207,7 @@ The requirements in this table are based on the default of 2 replicas. | 6 GiB |=== -[discrete] -=== Scanner V4 DB +Scanner V4 DB:: Scanner V4 requires Scanner V4 DB (PostgreSQL 15) to store data. 
The following table lists the minimum CPU, memory, and storage values required to install and run Scanner V4 DB. For Scanner V4 DB, a PVC is not required, but it is strongly recommended because it ensures optimal performance. diff --git a/modules/generating-sensor-deployment-files.adoc b/modules/generating-sensor-deployment-files.adoc index 42f1d783dec1..8f93ee3a1305 100644 --- a/modules/generating-sensor-deployment-files.adoc +++ b/modules/generating-sensor-deployment-files.adoc @@ -5,8 +5,7 @@ [id="generating-sensor-deployment-files_{context}"] = Generating Sensor deployment files -[discrete] -== Generating files for Kubernetes systems +Generating files for Kubernetes systems:: .Procedure @@ -17,8 +16,7 @@ $ roxctl sensor generate k8s --name __ --central "$ROX_ENDPOINT" ---- -[discrete] -== Generating files for {ocp} systems +Generating files for {ocp} systems:: .Procedure @@ -47,4 +45,4 @@ To use `wss`, prefix the address with *`wss://`*, and ---- $ roxctl sensor generate k8s --central wss://stackrox-central.example.com:443 ---- -==== \ No newline at end of file +==== diff --git a/modules/operator-upgrade-change-subscription-channel.adoc b/modules/operator-upgrade-change-subscription-channel.adoc index f48ed7775080..6488fc1aa33d 100644 --- a/modules/operator-upgrade-change-subscription-channel.adoc +++ b/modules/operator-upgrade-change-subscription-channel.adoc @@ -30,8 +30,7 @@ ifndef::cloud-svc[] endif::[] * You have access to an {ocp} cluster web console using an account with `cluster-admin` permissions. -[discrete] -== Changing the subscription channel by using the web console +Changing the subscription channel by using the web console:: Use the following instructions for changing the subscription channel by using the web console: .Procedure @@ -44,8 +43,7 @@ Use the following instructions for changing the subscription channel by using th + For subscriptions with a *Manual* approval strategy, you can manually approve the update from the *Subscription* tab. -[discrete] -== Changing the subscription channel by using command line +Changing the subscription channel by using command line:: Use the following instructions for changing the subscription channel by using command line: .Procedure @@ -63,4 +61,4 @@ During the update, the {product-title-short} Operator provisions a new deploymen ifeval::["{context}" == "upgrade-cloudsvc-operator"] :!cloud-svc: -endif::[] \ No newline at end of file +endif::[] diff --git a/modules/recommended-requirements-central-services.adoc b/modules/recommended-requirements-central-services.adoc index a427d51f66d2..13f13f8f31db 100644 --- a/modules/recommended-requirements-central-services.adoc +++ b/modules/recommended-requirements-central-services.adoc @@ -24,8 +24,7 @@ For default resource requirements for the scanner, see the default resource requ [id="recommended-requirements-central-services-central_{context}"] == Central -[discrete] -=== Memory and CPU requirements +Memory and CPU requirements:: The following table lists the minimum memory and CPU values required to run Central. To determine sizing, consider the following data: @@ -59,8 +58,7 @@ The following table lists the minimum memory and CPU values required to run Cent [id="recommended-requirements-central-db-services-central_{context}"] == Central DB -[discrete] -=== Memory and CPU requirements +Memory and CPU requirements:: The following table lists the minimum memory and CPU values required to run Central DB. 
To determine sizing, consider the following data: @@ -94,8 +92,7 @@ The following table lists the minimum memory and CPU values required to run Cent [id="recommended-requirements-central-services-scanner_{context}"] == Scanner -[discrete] -=== StackRox Scanner Memory and CPU requirements +StackRox Scanner Memory and CPU requirements:: The following table lists the minimum memory and CPU values required for the StackRox Scanner deployment in the Central cluster. The table includes the number of unique images deployed in all secured clusters. diff --git a/modules/recommended-requirements-secured-cluster-services.adoc b/modules/recommended-requirements-secured-cluster-services.adoc index 469e881d0eaf..5d737a7d16dd 100644 --- a/modules/recommended-requirements-secured-cluster-services.adoc +++ b/modules/recommended-requirements-secured-cluster-services.adoc @@ -23,8 +23,7 @@ Collector component is not included on this page. Required resource requirements Sensor monitors your Kubernetes and OpenShift Container Platform clusters. These services currently deploy in a single deployment, which handles interactions with the Kubernetes API and coordinates with Collector. -[discrete] -== Memory and CPU requirements +Memory and CPU requirements:: The following table lists the minimum memory and CPU values required to run Sensor on a secured cluster. @@ -45,8 +44,7 @@ The following table lists the minimum memory and CPU values required to run Sens The admission controller prevents users from creating workloads that violate policies that you configure. -[discrete] -== Memory and CPU requirements +Memory and CPU requirements:: The following table lists the minimum memory and CPU values required to run the admission controller on a secured cluster. diff --git a/modules/use-process-baselines.adoc b/modules/use-process-baselines.adoc index d62d38377065..a186cee28079 100644 --- a/modules/use-process-baselines.adoc +++ b/modules/use-process-baselines.adoc @@ -9,15 +9,13 @@ You can minimize risk by using process baselining for infrastructure security. With this approach, {product-title} first discovers existing processes and creates a baseline. Then it operates in the default deny-all mode and only allows processes listed in the baseline to run. -[discrete] -== Process baselines +Process baselines:: When you install {product-title}, there is no default process baseline. As {product-title} discovers deployments, it creates a process baseline for every container type in a deployment. Then it adds all discovered processes to their own process baselines. -[discrete] -== Process baseline states +Process baseline states:: During the process discovery phase, all baselines are in an unlocked state. diff --git a/modules/using-cli.adoc b/modules/using-cli.adoc index 02017ce78c5f..ae71812f738a 100644 --- a/modules/using-cli.adoc +++ b/modules/using-cli.adoc @@ -38,8 +38,7 @@ Central stores information about: You can back up and restore Central's database by using the `roxctl` CLI. 
-[discrete] -=== Backing up Central database +Backing up Central database:: Run the following command to back up Central's database: [source,terminal] @@ -47,8 +46,7 @@ Run the following command to back up Central's database: $ roxctl -e "$ROX_CENTRAL_ADDRESS" central backup ---- -[discrete] -=== Restoring Central database +Restoring Central database:: Run the following command to restore Central's database: [source,terminal] @@ -62,8 +60,7 @@ $ roxctl -e "$ROX_CENTRAL_ADDRESS" central db restore To secure a Kubernetes or an {ocp} cluster, you must deploy {product-title} services into the cluster. You can generate deployment files in the {product-title-short} portal by selecting *Platform Configuration* -> *Clusters*, or you can use the `roxctl` CLI. -[discrete] -=== Generating Sensor deployment files +Generating Sensor deployment files:: .Kubernetes @@ -98,8 +95,7 @@ $ roxctl sensor generate k8s --central wss://stackrox-central.example.com:443 ---- ==== -[discrete] -=== Installing Sensor by using the generate YAML files +Installing Sensor by using the generate YAML files:: When you generate the Sensor deployment files, `roxctl` creates a directory called `sensor-` in your working directory. The script to install Sensor is present in this directory. Run the sensor installation script to install Sensor. [source,terminal] @@ -109,8 +105,7 @@ $ ./sensor-/sensor.sh If you get a warning that you do not have the required permissions to install Sensor, follow the on-screen instructions, or contact your cluster administrator for help. -[discrete] -=== Downloading Sensor bundle for existing clusters +Downloading Sensor bundle for existing clusters:: Use the following command to download Sensor bundles for existing clusters by specifying a cluster name or ID. @@ -119,8 +114,7 @@ Use the following command to download Sensor bundles for existing clusters by sp $ roxctl sensor get-bundle ---- -[discrete] -=== Deleting cluster integration +Deleting cluster integration:: [source,terminal] ---- @@ -138,8 +132,7 @@ You can remove them by running the `delete-sensor.sh` script from the Sensor ins You can use the `roxctl` CLI to check deployment YAML files and images for policy compliance. -[discrete] -=== Configuring output format +Configuring output format:: When you check policy compliance by using the `deployment check`, `image check`, or `image scan` commands, you can specify the output format by using the `-o` option. This option determines how the output of a command is displayed in the terminal. You can change the output format by adding the `-o` option to the command and specifying the format as `json`, `table`, `csv`, or `junit`. @@ -205,8 +198,7 @@ $ roxctl -e "$ROX_CENTRAL_ADDRESS" \ |=== -[discrete] -=== Checking deployment YAML files +Checking deployment YAML files:: The following command checks build-time and deploy-time violations of your security policies in YAML deployment files. //TODO: Add link to security policies section @@ -221,8 +213,7 @@ or $ roxctl -e "$ROX_CENTRAL_ADDRESS" deployment check --file= ---- -[discrete] -=== Checking images +Checking images:: The following command checks build-time violations of your security policies in images. //TODO: Add link to security policy section @@ -231,8 +222,7 @@ The following command checks build-time violations of your security policies in $ roxctl -e "$ROX_CENTRAL_ADDRESS" image check --image= ---- -[discrete] -=== Checking image scan results +Checking image scan results:: You can also check the scan results for specific images. 
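As a concrete illustration of combining the scan command with the `-o` output option described earlier, the following sketch retrieves the scan results for a single image as JSON (the image reference is a placeholder):

[source,terminal]
----
$ roxctl -e "$ROX_CENTRAL_ADDRESS" image scan --image=registry.example.com/myapp:1.0 -o json
----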
@@ -257,13 +247,11 @@ The default *Continuous Integration* system role already has the required permis [id="debug-issues_{context}"] == Debugging issues -[discrete] -=== Managing Central log level +Managing Central log level:: Central saves information to its container logs. -[discrete] -==== Viewing the logs +Viewing the logs:: You can see the container logs for Central by running: .Kubernetes @@ -278,8 +266,7 @@ $ kubectl logs -n stackrox $ oc logs -n stackrox ---- -[discrete] -==== Viewing current log level +Viewing current log level:: You can change the log level to see more or less information in Central logs. Run the following command to view the current log level: [source,terminal] @@ -287,8 +274,7 @@ Run the following command to view the current log level: $ roxctl -e "$ROX_CENTRAL_ADDRESS" central debug log ---- -[discrete] -==== Changing the log level +Changing the log level:: Run the following command to change the log level: [source,terminal] @@ -297,8 +283,7 @@ $ roxctl -e "$ROX_CENTRAL_ADDRESS" central debug log --level= <1> ---- <1> The acceptable values for `` are `Panic`, `Fatal`, `Error`, `Warn`, `Info`, and `Debug`. -[discrete] -=== Retrieving debugging information +Retrieving debugging information:: To gather debugging information for investigating issues, run the following command: diff --git a/modules/validatingwebhookconfiguration-yaml-changes.adoc b/modules/validatingwebhookconfiguration-yaml-changes.adoc index e44365b6cff9..216068475c84 100644 --- a/modules/validatingwebhookconfiguration-yaml-changes.adoc +++ b/modules/validatingwebhookconfiguration-yaml-changes.adoc @@ -12,8 +12,7 @@ With {product-title} you can enforce security policies on: * Pod execution * Pod port forward -[discrete] -== If Central or Sensor is unavailable +If Central or Sensor is unavailable:: The admission controller requires an initial configuration from Sensor to work. Kubernetes or {ocp} saves this configuration, and it remains accessible even if all admission control service replicas are rescheduled onto other nodes. If this initial configuration exists, the admission controller enforces all configured deploy-time policies. @@ -42,8 +41,7 @@ $ kubectl delete ValidatingWebhookConfiguration/stackrox ---- ==== -[discrete] -== Make the admission controller more reliable +Make the admission controller more reliable:: Red{nbsp}Hat recommends that you schedule the admission control service on the control plane and not on worker nodes. The deployment YAML file includes a soft preference for running on the control plane, however it is not enforced. @@ -57,8 +55,7 @@ $ oc -n stackrox scale deploy/admission-control --replicas= ---- <1> If you use Kubernetes, enter `kubectl` instead of `oc`. -[discrete] -== Using with the roxctl CLI +Using with the roxctl CLI:: You can use the following options when you generate a Sensor deployment YAML file: diff --git a/modules/violation-view-policy-tab.adoc b/modules/violation-view-policy-tab.adoc index 2c5206c1ac35..206de161a46f 100644 --- a/modules/violation-view-policy-tab.adoc +++ b/modules/violation-view-policy-tab.adoc @@ -8,8 +8,7 @@ [role="_abstract"] The *Policy* tab of the *Details* panel displays details of the policy that caused the violation. -[discrete] -== Policy overview section +Policy overview section:: The *Policy overview* section lists the following information: @@ -21,8 +20,7 @@ The *Policy overview* section lists the following information: * *Guidance*: Suggestions on how to address the violation. 
* *MITRE ATT&CK*: Indicates if there are MITRE link:https://attack.mitre.org/matrices/enterprise/containers/[tactics and techniques] that apply to this policy. -[discrete] -== Policy behavior +Policy behavior:: The *Policy behavior* section provides the following information: @@ -40,7 +38,6 @@ The *Policy behavior* section provides the following information: *** For existing deployments, policy changes only result in enforcement at the next detection of the criteria, when a Kubernetes event occurs. For more information about enforcement, see "Security policy enforcement for the deploy stage". ** *Runtime*: {product-title-short} deletes all pods when an event in the pods matches the criteria of the policy. -[discrete] -== Policy criteria section +Policy criteria section:: -The *Policy criteria* section lists the policy criteria for the policy. \ No newline at end of file +The *Policy criteria* section lists the policy criteria for the policy. diff --git a/modules/violations-view-deployment-tab.adoc b/modules/violations-view-deployment-tab.adoc index fc71f8e64a8e..3da226f91204 100644 --- a/modules/violations-view-deployment-tab.adoc +++ b/modules/violations-view-deployment-tab.adoc @@ -8,8 +8,7 @@ [role="_abstract"] The *Deployment* tab of the *Details* panel displays details of the deployment to which the violation applies. -[discrete] -== Overview section +Overview section:: The *Deployment overview* section lists the following information: @@ -25,8 +24,7 @@ The *Deployment overview* section lists the following information: * *Annotations*: The annotations that apply to the selected deployment. * *Service Account*: The name of the service account for the selected deployment. -[discrete] -== Container configuration section +Container configuration section:: The *Container configuration* section lists the following information: @@ -46,8 +44,7 @@ The *Container configuration* section lists the following information: ** *Destination*: The path where the data is stored. ** *Type*: The type of the volume. -[discrete] -== Port configuration section +Port configuration section:: The *Port configuration* section provides information about the ports in the deployment, including the following fields: @@ -64,8 +61,7 @@ The *Port configuration* section provides information about the ports in the dep *** *nodePort*: The port on the node where external traffic comes into the node. *** *externalIps*: The IP addresses that can be used to access the service externally, from outside the cluster, if any exist. This field is not available for an internal service. -[discrete] -== Security context section +Security context section:: The *Security context* section lists whether the container is running as a privileged container. @@ -73,7 +69,6 @@ The *Security context* section lists whether the container is running as a privi ** `true` if it is *privileged*. ** `false` if it is *not privileged*. -[discrete] -== Network policy section +Network policy section:: -The *Network policy* section lists the namespace and all network policies in the namespace containing the violation. Click on a network policy name to view the full YAML file of the network policy. \ No newline at end of file +The *Network policy* section lists the namespace and all network policies in the namespace containing the violation. Click on a network policy name to view the full YAML file of the network policy. 
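The *Network policy* section described above links to each policy's full YAML in the portal. For reference, a sketch of retrieving the same YAML from the command line (the namespace and policy name are placeholders; if you use Kubernetes, enter `kubectl` instead of `oc`):

[source,terminal]
----
$ oc -n <namespace> get networkpolicy <policy-name> -o yaml
----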
diff --git a/scripts/fix_discrete.sh b/scripts/fix_discrete.sh
new file mode 100755
index 000000000000..eb4d1efdcc7c
--- /dev/null
+++ b/scripts/fix_discrete.sh
@@ -0,0 +1,120 @@
+#!/bin/bash
+
+# Spinner: animate while the background job identified by $1 is still running.
+spinner() {
+    local pid=$1
+    local delay=0.1
+    local spinstr='|/-\'
+    while kill -0 "$pid" 2>/dev/null; do
+        local temp=${spinstr#?}
+        printf " [%c] " "$spinstr"
+        spinstr=$temp${spinstr%"$temp"}
+        sleep $delay
+        printf "\b\b\b\b\b"
+    done
+    printf "     \b\b\b\b\b"
+}
+
+# Ask user for the base directory
+read -rp "Enter the path to the base directory: " BASE_DIR
+
+# Validate directory
+if [[ ! -d "$BASE_DIR" ]]; then
+    echo "Error: $BASE_DIR is not a valid directory."
+    exit 1
+fi
+
+echo "Processing directory: $BASE_DIR"
+
+# Note: paths that contain whitespace are not supported by the word-split loop below.
+FILES=$(find "$BASE_DIR" -type f -name "*.adoc")
+if [[ -z "$FILES" ]]; then
+    echo "No .adoc files found."
+    exit 0
+fi
+TOTAL_COUNT=$(echo "$FILES" | wc -l | tr -d ' ')
+
+CURRENT=0
+echo "Found $TOTAL_COUNT .adoc files."
+
+# Process files
+echo
+for FILE in $FILES; do
+    CURRENT=$((CURRENT+1))
+    printf "Processing file %d/%d: %s" "$CURRENT" "$TOTAL_COUNT" "$FILE"
+
+    # The worker runs in the background so the spinner can animate. Its result
+    # (the replacement count) is passed back through a temporary file because
+    # stdout of a backgrounded subshell cannot be captured with $(wait ...).
+    RESULT_FILE=$(mktemp)
+    (
+        REPLACEMENTS=$(awk '
+            BEGIN { mode=""; count=0 }
+
+            # [discrete] starts a possible collapse
+            /^\[discrete\]/ {
+                if (mode=="id") { print buf; buf="" }
+                mode="discrete"; dbuf=$0; next
+            }
+
+            # [id=...] handling
+            /^\[id=/ {
+                if (mode=="discrete") {
+                    mode="discrete_id"; idbuf=$0; next
+                } else {
+                    mode="id"; buf=$0; next
+                }
+            }
+
+            # Heading after [discrete] or [discrete]+[id]: emit a definition-list label
+            (mode=="discrete" || mode=="discrete_id") && /^=+ / {
+                sub(/^=+ +/, "")
+                print $0 "::"
+                mode=""; dbuf=""; idbuf=""
+                count++
+                next
+            }
+
+            # Already-converted Heading:: after a standalone [id]: drop the [id] line
+            mode=="id" && /::$/ {
+                print $0
+                mode=""; buf=""
+                count++
+                next
+            }
+
+            # If [id] was not followed by Heading::, restore it
+            mode=="id" {
+                print buf
+                buf=""
+                mode=""
+            }
+
+            # If [discrete] (with or without [id]) was not followed by a heading, restore it
+            mode=="discrete" { print dbuf; dbuf=""; mode="" }
+            mode=="discrete_id" { print dbuf; print idbuf; dbuf=""; idbuf=""; mode="" }
+
+            { print }
+
+            END { print "###REPLACEMENTS###" count }
+        ' "$FILE")
+
+        COUNT=$(echo "$REPLACEMENTS" | tail -n1 | sed 's/###REPLACEMENTS###//')
+
+        if [[ "$COUNT" -gt 0 ]]; then
+            echo "$REPLACEMENTS" | sed '/###REPLACEMENTS###/d' > "${FILE}.tmp" && mv "${FILE}.tmp" "$FILE"
+        fi
+        echo "$COUNT" > "$RESULT_FILE"
+    ) &
+    spinner $!
+    wait $!
+
+    COUNT=$(cat "$RESULT_FILE")
+    rm -f "$RESULT_FILE"
+
+    if [[ -n "$COUNT" && "$COUNT" -gt 0 ]]; then
+        echo " -> Modified ($COUNT replacements)"
+    else
+        echo -ne "\r\033[K"
+    fi
+done
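A quick way to exercise `fix_discrete.sh` is a scratch run against a throwaway directory. The sketch below assumes it is run from the repository root; the paths and sample content are illustrative only, and the base directory is supplied on stdin to answer the script's prompt.

[source,bash]
----
# Create a sample module that contains a [discrete] heading.
mkdir -p /tmp/adoc-sample
cat > /tmp/adoc-sample/example.adoc <<'EOF'
[discrete]
== Supported container image registries

Red Hat supports the following container image registries:
EOF

# Run the conversion; the script reads the base directory from stdin.
echo /tmp/adoc-sample | ./scripts/fix_discrete.sh

# The [discrete] block heading should now be a definition-list label.
grep -n '::' /tmp/adoc-sample/example.adoc
# Expected: 1:Supported container image registries::
----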