diff --git a/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/guide.de-de.md b/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/guide.de-de.md index 58492b8fe82..37218355b3f 100644 --- a/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/guide.de-de.md +++ b/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/guide.de-de.md @@ -1,7 +1,7 @@ --- -title: AI Deploy - Accessing your app with tokens -excerpt: Discover how to create a scoped token and query your AI Deploy app -updated: 2023-04-04 +title: AI Deploy - Accessing your app with tokens (EN) +excerpt: Learn how to create scoped tokens to securely access your AI Deploy applications +updated: 2025-07-28 --- > [!primary] @@ -13,92 +13,137 @@ updated: 2023-04-04 This guide covers the creation of application tokens for AI Deploy. +This is particularly useful when you want to make your app accessible to others without sharing your username and password. Moreover, using tokens facilitates the integration of your app with other services or scripts, such as those using curl, allowing for a more automated and flexible interaction with your AI Deploy application. + +In this tutorial, we will create and assign tokens to a basic AI Deploy app, running the [infrastructureascode/hello-world](https://hub.docker.com/r/infrastructureascode/hello-world) Docker image. + ## Requirements - a **Public cloud** project - access to the [OVHcloud Control Panel](/links/manager) -- a Running AI Deploy app (the deployed app in this guide uses the Docker image [infrastructureascode/hello-world](https://hub.docker.com/r/infrastructureascode/hello-world)) ## Instructions -### Adding labels to an App +By default, the following two labels are automatically added to each AI Deploy application: -Tokens are scoped based on labels added to your AI Deploy app. To scope a token you need to add a label to your AI Deploy app upon submission. +- `ovh/id` whose value is the ID of the AI Deploy app +- `ovh/type` whose value is `app`, the type of AI resource -![app add label](images/01-app-add-label.png){.thumbnail} +> [!primary] +> These labels are prefixed by `ovh/`, meaning these are reserved by the platform. These labels will be automatically overwritten by the platform if you attempt to redefine them during submission. They won't be displayed in the manager UI. +> -In this instance we add the label `group=A` to the AI Deploy app. A set of defaults labels is added to all AI Deploy apps: +In addition to these default labels, you can **create new ones** to further customize and secure your application access. -- `ovh/id` whose value is the ID of the AI Deploy app -- `ovh/type` with value `app`, the type of AI resource +### Adding labels to an app -> [!primary] -> Labels prefixed by `ovh/` are reserved by the platform, those labels are overriden upon submission if any are provided. +Tokens are scoped based on labels added to your AI Deploy app. To scope a token, you must add a label to your AI Deploy app. This can be done either during the app creation process or after the app is deployed. You can add multiple labels by repeating the following process. -All the labels of an AI Deploy app are listed on the AI Deploy app details under **Tags**: +#### Adding label during app creation -![app dashboard](images/02-app-dashboard.png){.thumbnail} +To add a label when creating an AI Deploy app, access the `Advanced Configuration`{.action} step in the app creation process. 
This section allows you to specify a custom Docker command, the mounted volumes, and **the app labels**. -### Generating tokens +From this last sub-section, you can add a key-value pair. The key is the label identifier (e.g., `group`), while the value is the corresponding value assigned to this key (e.g., `A`). In this tutorial, we use the example `group=A` as the label of the AI Deploy app: -From the **AI Deploy** page, you access the tokens management page by clicking the `Tokens`{.action} tab. +![app add label](images/01-app-add-label.png){.thumbnail} -![app list](images/03-app-list.png) +Once created, all the labels of an AI Deploy app are listed on the app details, under **Labels** field: -Once on the token management tab, simply click on `New Token`{.action}. +![app dashboard](images/02-app-dashboard.png){.thumbnail} -![token list new](images/04-token-list-new-2.png){.thumbnail} +#### Adding label to an existing app + +If your app is already deployed, you can still add or update labels at any time using the Control Panel (UI) or the `ovhai` CLI. + +> [!tabs] +> **Using the Control Panel (UI)** +>> +>> Navigate to the **AI Deploy** section where all your apps are listed. **Click the name of your app** to open its details page. Locate the **Labels** section. Enter the key-value pair and click `+`{.action} to add the label to your app. +>> +>> ![image](images/03-app-add-label-existing-app.png){.thumbnail} +>> +>> After saving, the added label will be visible in the **Labels** section of the app details. +>> +> **Using ovhai CLI** +>> +>> To follow this section, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. +>> +>> You can also add labels to an existing app using the `ovhai` CLI. Run the following command: +>> +>> ```console +>> ovhai app set-label +>> ``` +>> +>> Replace with the unique identifier of your app (found in the app details or by running `ovhai app ls`). And replace and with your desired key and value pair. For example: +>> +>> ```console +>> ovhai app set-label a8318623-8357-48b4-bd3b-648c3e343ec9 group A +>> ``` +>> +>> This command adds the label `group=A` to the app with ID `a8318623-8357-48b4-bd3b-648c3e343ec9`. +>> +>> You can verify app labels by running `ovhai app get `. Labels will be displayed at the top of app details, in the *Labels* field. -#### Read token +### Generating tokens -There are two types of roles that can be assigned to a token: +From the **AI Dashboard** page, you can access the tokens management page by clicking on the `Tokens`{.action} tab. From there, you can click on the `+ Create a token`{.action} button to create a new token: -- AI Platform - Read-only -- AI Platform - Operator +![token list new](images/04-token-list.png){.thumbnail} -A Read-only token will only grant you the right to query the deployed app while an Operator token would also allow you to manage the AI Deploy app itself. +There are two types of roles that can be assigned to a token: -Let us create a token for the AI Deploy apps matching the label `group=A` with read-only access in the GRA cluster. -To create an AI Deploy app token we need to specify 3 parameters: +- **AI Platform - Reader**: allows only querying the app +- **AI Platform - Operator**: allows querying and full lifecycle management (start/stop/delete) -- The token scope is specified through label selectors, and a token will be scoped over any app matching the set of label selectors. 
In this case `group=A` -- The token role: AI Training - Read-only -- The region (cluster in which are deployed the AI Deploy apps): GRA. +#### Read token -Fill out the form: +Let's create a token for the AI Deploy apps matching the label `group=A` with read-only access in the GRA (Gravelines) cluster. To do this, we will need to fill 4 parameters: ![token generation input read](images/05-token-gen-input-read.png){.thumbnail} -Click `Generate`{.action}. Upon success, you are redirected to the token list with the newly generated token displayed at the top: - -![token generation result](images/06-token-gen-result-read.png){.thumbnail} +- **Token name**: used for token identification, management only. +- **Label selector**: determines which apps the token applies to (e.g., `group=A`). +- **Role**: choose from: + - `AI Reader`: read-only + - `AI Operator`: read & manage +- **Region**: e.g., `GRA` (for Gravelines) -Save the token string for later use. +After completing the form, click `Generate`{.action} to confirm the token creation. > [!warning] -> The token is only displayed once, make sure to save it before leaving the page or you will need to regenerate the token. +> You will then receive the value of your new token, which you must **carefully save**, as its value is only displayed once. If you lose the token value, you will need to [regenerate it](#regenerating-a-token). + +You will then be redirected to the token list, where the newly generated token will be displayed at the top: + +![token generation result](images/06-token-gen-result-read.png){.thumbnail} This newly generated token provides read access over all resources tagged with the label `group=A` including the ones submitted after the creation of the token. -#### Operator token +If you prefer working from the command line, you can generate the same token using the `ovhai` CLI: -An operator token grants read access along with management access for the matching apps. This means that you can manage the AI Deploy app lifecycle (start/stop/delete) using either the CLI (more info [here](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli)) or the [AI Training API](https://gra.ai.cloud.ovh.net/) by providing this token. +```console +ovhai token create reader-token --role read --label-selector group=A +``` -This Operator token will be scoped on a specific AI Deploy app and we will use the default `ovh/id` label to do so (since it is reserved, there is only one AI Deploy app that can match this label selector). +#### Operator token -- Token scope: `ovh/id=4c4f6253-a059-424a-92da-5e06a12ddffd` -- The token role: AI Training - Operator -- The region: GRA. +An operator token grants read access along with management access for the matching apps. This allows you to manage the AI Deploy app lifecycle (start/stop/delete) using either the CLI (more info [here](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli)) or the [AI API](https://gra.ai.cloud.ovh.net/) by providing this token. ![token generation input operator](images/07-token-gen-input-op.png){.thumbnail} -Additional information about the use of a token to manage an AI Training resource can be found [here](/pages/public_cloud/ai_machine_learning/cli_13_howto_app_token_cli#use-the-app-token). +Equivalent CLI command is: + +```console +ovhai token create operator-token --role operator --label-selector group=A +``` + +You can also scope a token to a specific app using the `ovh/id` label and the app’s ID as its value. 
This label is added automatically by default as explained [above](#instructions) and, because it is reserved, it will uniquely match only one app. ### Using a token to query an AI Deploy app With the token we generated in the previous step, we will now query the app. For this demonstration, we deployed a simple Hello World app that always responds `Hello, World!`. -You can get the access URL of your app in the details of the AI Deploy app, above the **Tags**. +You can get the access URL of your app in the details of the AI Deploy app, above the **Labels**. #### Browser @@ -114,7 +159,7 @@ You now land on the exposed AI Deploy app service: #### Code integration -You can also directly CURL the AI Deploy app using the token as an `Authorization` header: +You can also use CURL to directly query the AI Deploy app using the token as an `Authorization` header: ```bash export TOKEN= @@ -137,7 +182,7 @@ From the list of tokens, click on the action menu and select `Regenerate`{.actio ![token regenerate](images/08-token-list-regen.png){.thumbnail} -Then click on `Regenerate`{.action} to confirm. +Then click on `Confirm`{.action}: ![token regenerate confirm](images/09-token-regen-confirm.png){.thumbnail} @@ -147,8 +192,12 @@ If you simply need to invalidate the token, you can delete it using the same act ![token delete](images/10-token-list-delete.png) +## Go further + +Additional information about the use of a token to manage AI Solutions using `ovhai` CLI can be found [here](/pages/public_cloud/ai_machine_learning/cli_13_howto_app_token_cli). + ## Feedback Please feel free to send us your questions, feedback and suggestions to help our team improve the service on the OVHcloud [Discord server](https://discord.gg/ovhcloud) -If you need training or technical assistance to implement our solutions, contact your sales representative or click on [this link](/links/professional-services) to get a quote and ask our Professional Services experts for a custom analysis of your project. +If you need training or technical assistance to implement our solutions, contact your sales representative or click on [this link](/links/professional-services) to get a quote and ask our Professional Services experts for a custom analysis of your project. \ No newline at end of file diff --git a/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/guide.en-asia.md b/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/guide.en-asia.md index 58492b8fe82..2f994c23ae5 100644 --- a/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/guide.en-asia.md +++ b/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/guide.en-asia.md @@ -1,7 +1,7 @@ --- title: AI Deploy - Accessing your app with tokens -excerpt: Discover how to create a scoped token and query your AI Deploy app -updated: 2023-04-04 +excerpt: Learn how to create scoped tokens to securely access your AI Deploy applications +updated: 2025-07-28 --- > [!primary] @@ -13,92 +13,137 @@ updated: 2023-04-04 This guide covers the creation of application tokens for AI Deploy. +This is particularly useful when you want to make your app accessible to others without sharing your username and password. Moreover, using tokens facilitates the integration of your app with other services or scripts, such as those using curl, allowing for a more automated and flexible interaction with your AI Deploy application. 
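+
+As a quick preview of such a scripted call, here is a minimal, illustrative sketch (the access URL and token values below are placeholders, and it assumes the token is passed as a Bearer `Authorization` header; the exact `curl` command for your app is shown in the *Code integration* section later in this guide):
+
+```bash
+# Illustrative placeholders: use the access URL shown in your app details and your own token
+export APP_URL="<your-app-access-url>"
+export TOKEN="<your-token>"
+
+# Query the deployed app, passing the token in the Authorization header
+curl -H "Authorization: Bearer $TOKEN" "$APP_URL"
+```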
+ +In this tutorial, we will create and assign tokens to a basic AI Deploy app, running the [infrastructureascode/hello-world](https://hub.docker.com/r/infrastructureascode/hello-world) Docker image. + ## Requirements - a **Public cloud** project - access to the [OVHcloud Control Panel](/links/manager) -- a Running AI Deploy app (the deployed app in this guide uses the Docker image [infrastructureascode/hello-world](https://hub.docker.com/r/infrastructureascode/hello-world)) ## Instructions -### Adding labels to an App +By default, the following two labels are automatically added to each AI Deploy application: -Tokens are scoped based on labels added to your AI Deploy app. To scope a token you need to add a label to your AI Deploy app upon submission. +- `ovh/id` whose value is the ID of the AI Deploy app +- `ovh/type` whose value is `app`, the type of AI resource -![app add label](images/01-app-add-label.png){.thumbnail} +> [!primary] +> These labels are prefixed by `ovh/`, meaning these are reserved by the platform. These labels will be automatically overwritten by the platform if you attempt to redefine them during submission. They won't be displayed in the manager UI. +> -In this instance we add the label `group=A` to the AI Deploy app. A set of defaults labels is added to all AI Deploy apps: +In addition to these default labels, you can **create new ones** to further customize and secure your application access. -- `ovh/id` whose value is the ID of the AI Deploy app -- `ovh/type` with value `app`, the type of AI resource +### Adding labels to an app -> [!primary] -> Labels prefixed by `ovh/` are reserved by the platform, those labels are overriden upon submission if any are provided. +Tokens are scoped based on labels added to your AI Deploy app. To scope a token, you must add a label to your AI Deploy app. This can be done either during the app creation process or after the app is deployed. You can add multiple labels by repeating the following process. -All the labels of an AI Deploy app are listed on the AI Deploy app details under **Tags**: +#### Adding label during app creation -![app dashboard](images/02-app-dashboard.png){.thumbnail} +To add a label when creating an AI Deploy app, access the `Advanced Configuration`{.action} step in the app creation process. This section allows you to specify a custom Docker command, the mounted volumes, and **the app labels**. -### Generating tokens +From this last sub-section, you can add a key-value pair. The key is the label identifier (e.g., `group`), while the value is the corresponding value assigned to this key (e.g., `A`). In this tutorial, we use the example `group=A` as the label of the AI Deploy app: -From the **AI Deploy** page, you access the tokens management page by clicking the `Tokens`{.action} tab. +![app add label](images/01-app-add-label.png){.thumbnail} -![app list](images/03-app-list.png) +Once created, all the labels of an AI Deploy app are listed on the app details, under **Labels** field: -Once on the token management tab, simply click on `New Token`{.action}. +![app dashboard](images/02-app-dashboard.png){.thumbnail} -![token list new](images/04-token-list-new-2.png){.thumbnail} +#### Adding label to an existing app + +If your app is already deployed, you can still add or update labels at any time using the Control Panel (UI) or the `ovhai` CLI. + +> [!tabs] +> **Using the Control Panel (UI)** +>> +>> Navigate to the **AI Deploy** section where all your apps are listed. 
**Click the name of your app** to open its details page. Locate the **Labels** section. Enter the key-value pair and click `+`{.action} to add the label to your app. +>> +>> ![image](images/03-app-add-label-existing-app.png){.thumbnail} +>> +>> After saving, the added label will be visible in the **Labels** section of the app details. +>> +> **Using ovhai CLI** +>> +>> To follow this section, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. +>> +>> You can also add labels to an existing app using the `ovhai` CLI. Run the following command: +>> +>> ```console +>> ovhai app set-label +>> ``` +>> +>> Replace with the unique identifier of your app (found in the app details or by running `ovhai app ls`). And replace and with your desired key and value pair. For example: +>> +>> ```console +>> ovhai app set-label a8318623-8357-48b4-bd3b-648c3e343ec9 group A +>> ``` +>> +>> This command adds the label `group=A` to the app with ID `a8318623-8357-48b4-bd3b-648c3e343ec9`. +>> +>> You can verify app labels by running `ovhai app get `. Labels will be displayed at the top of app details, in the *Labels* field. -#### Read token +### Generating tokens -There are two types of roles that can be assigned to a token: +From the **AI Dashboard** page, you can access the tokens management page by clicking on the `Tokens`{.action} tab. From there, you can click on the `+ Create a token`{.action} button to create a new token: -- AI Platform - Read-only -- AI Platform - Operator +![token list new](images/04-token-list.png){.thumbnail} -A Read-only token will only grant you the right to query the deployed app while an Operator token would also allow you to manage the AI Deploy app itself. +There are two types of roles that can be assigned to a token: -Let us create a token for the AI Deploy apps matching the label `group=A` with read-only access in the GRA cluster. -To create an AI Deploy app token we need to specify 3 parameters: +- **AI Platform - Reader**: allows only querying the app +- **AI Platform - Operator**: allows querying and full lifecycle management (start/stop/delete) -- The token scope is specified through label selectors, and a token will be scoped over any app matching the set of label selectors. In this case `group=A` -- The token role: AI Training - Read-only -- The region (cluster in which are deployed the AI Deploy apps): GRA. +#### Read token -Fill out the form: +Let's create a token for the AI Deploy apps matching the label `group=A` with read-only access in the GRA (Gravelines) cluster. To do this, we will need to fill 4 parameters: ![token generation input read](images/05-token-gen-input-read.png){.thumbnail} -Click `Generate`{.action}. Upon success, you are redirected to the token list with the newly generated token displayed at the top: - -![token generation result](images/06-token-gen-result-read.png){.thumbnail} +- **Token name**: used for token identification, management only. +- **Label selector**: determines which apps the token applies to (e.g., `group=A`). +- **Role**: choose from: + - `AI Reader`: read-only + - `AI Operator`: read & manage +- **Region**: e.g., `GRA` (for Gravelines) -Save the token string for later use. +After completing the form, click `Generate`{.action} to confirm the token creation. > [!warning] -> The token is only displayed once, make sure to save it before leaving the page or you will need to regenerate the token. 
+> You will then receive the value of your new token, which you must **carefully save**, as its value is only displayed once. If you lose the token value, you will need to [regenerate it](#regenerating-a-token). + +You will then be redirected to the token list, where the newly generated token will be displayed at the top: + +![token generation result](images/06-token-gen-result-read.png){.thumbnail} This newly generated token provides read access over all resources tagged with the label `group=A` including the ones submitted after the creation of the token. -#### Operator token +If you prefer working from the command line, you can generate the same token using the `ovhai` CLI: -An operator token grants read access along with management access for the matching apps. This means that you can manage the AI Deploy app lifecycle (start/stop/delete) using either the CLI (more info [here](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli)) or the [AI Training API](https://gra.ai.cloud.ovh.net/) by providing this token. +```console +ovhai token create reader-token --role read --label-selector group=A +``` -This Operator token will be scoped on a specific AI Deploy app and we will use the default `ovh/id` label to do so (since it is reserved, there is only one AI Deploy app that can match this label selector). +#### Operator token -- Token scope: `ovh/id=4c4f6253-a059-424a-92da-5e06a12ddffd` -- The token role: AI Training - Operator -- The region: GRA. +An operator token grants read access along with management access for the matching apps. This allows you to manage the AI Deploy app lifecycle (start/stop/delete) using either the CLI (more info [here](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli)) or the [AI API](https://gra.ai.cloud.ovh.net/) by providing this token. ![token generation input operator](images/07-token-gen-input-op.png){.thumbnail} -Additional information about the use of a token to manage an AI Training resource can be found [here](/pages/public_cloud/ai_machine_learning/cli_13_howto_app_token_cli#use-the-app-token). +Equivalent CLI command is: + +```console +ovhai token create operator-token --role operator --label-selector group=A +``` + +You can also scope a token to a specific app using the `ovh/id` label and the app’s ID as its value. This label is added automatically by default as explained [above](#instructions) and, because it is reserved, it will uniquely match only one app. ### Using a token to query an AI Deploy app With the token we generated in the previous step, we will now query the app. For this demonstration, we deployed a simple Hello World app that always responds `Hello, World!`. -You can get the access URL of your app in the details of the AI Deploy app, above the **Tags**. +You can get the access URL of your app in the details of the AI Deploy app, above the **Labels**. #### Browser @@ -114,7 +159,7 @@ You now land on the exposed AI Deploy app service: #### Code integration -You can also directly CURL the AI Deploy app using the token as an `Authorization` header: +You can also use CURL to directly query the AI Deploy app using the token as an `Authorization` header: ```bash export TOKEN= @@ -137,7 +182,7 @@ From the list of tokens, click on the action menu and select `Regenerate`{.actio ![token regenerate](images/08-token-list-regen.png){.thumbnail} -Then click on `Regenerate`{.action} to confirm. 
+Then click on `Confirm`{.action}: ![token regenerate confirm](images/09-token-regen-confirm.png){.thumbnail} @@ -147,8 +192,12 @@ If you simply need to invalidate the token, you can delete it using the same act ![token delete](images/10-token-list-delete.png) +## Go further + +Additional information about the use of a token to manage AI Solutions using `ovhai` CLI can be found [here](/pages/public_cloud/ai_machine_learning/cli_13_howto_app_token_cli). + ## Feedback Please feel free to send us your questions, feedback and suggestions to help our team improve the service on the OVHcloud [Discord server](https://discord.gg/ovhcloud) -If you need training or technical assistance to implement our solutions, contact your sales representative or click on [this link](/links/professional-services) to get a quote and ask our Professional Services experts for a custom analysis of your project. +If you need training or technical assistance to implement our solutions, contact your sales representative or click on [this link](/links/professional-services) to get a quote and ask our Professional Services experts for a custom analysis of your project. \ No newline at end of file diff --git a/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/guide.en-au.md b/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/guide.en-au.md index 58492b8fe82..2f994c23ae5 100644 --- a/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/guide.en-au.md +++ b/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/guide.en-au.md @@ -1,7 +1,7 @@ --- title: AI Deploy - Accessing your app with tokens -excerpt: Discover how to create a scoped token and query your AI Deploy app -updated: 2023-04-04 +excerpt: Learn how to create scoped tokens to securely access your AI Deploy applications +updated: 2025-07-28 --- > [!primary] @@ -13,92 +13,137 @@ updated: 2023-04-04 This guide covers the creation of application tokens for AI Deploy. +This is particularly useful when you want to make your app accessible to others without sharing your username and password. Moreover, using tokens facilitates the integration of your app with other services or scripts, such as those using curl, allowing for a more automated and flexible interaction with your AI Deploy application. + +In this tutorial, we will create and assign tokens to a basic AI Deploy app, running the [infrastructureascode/hello-world](https://hub.docker.com/r/infrastructureascode/hello-world) Docker image. + ## Requirements - a **Public cloud** project - access to the [OVHcloud Control Panel](/links/manager) -- a Running AI Deploy app (the deployed app in this guide uses the Docker image [infrastructureascode/hello-world](https://hub.docker.com/r/infrastructureascode/hello-world)) ## Instructions -### Adding labels to an App +By default, the following two labels are automatically added to each AI Deploy application: -Tokens are scoped based on labels added to your AI Deploy app. To scope a token you need to add a label to your AI Deploy app upon submission. +- `ovh/id` whose value is the ID of the AI Deploy app +- `ovh/type` whose value is `app`, the type of AI resource -![app add label](images/01-app-add-label.png){.thumbnail} +> [!primary] +> These labels are prefixed by `ovh/`, meaning these are reserved by the platform. These labels will be automatically overwritten by the platform if you attempt to redefine them during submission. They won't be displayed in the manager UI. 
+> -In this instance we add the label `group=A` to the AI Deploy app. A set of defaults labels is added to all AI Deploy apps: +In addition to these default labels, you can **create new ones** to further customize and secure your application access. -- `ovh/id` whose value is the ID of the AI Deploy app -- `ovh/type` with value `app`, the type of AI resource +### Adding labels to an app -> [!primary] -> Labels prefixed by `ovh/` are reserved by the platform, those labels are overriden upon submission if any are provided. +Tokens are scoped based on labels added to your AI Deploy app. To scope a token, you must add a label to your AI Deploy app. This can be done either during the app creation process or after the app is deployed. You can add multiple labels by repeating the following process. -All the labels of an AI Deploy app are listed on the AI Deploy app details under **Tags**: +#### Adding label during app creation -![app dashboard](images/02-app-dashboard.png){.thumbnail} +To add a label when creating an AI Deploy app, access the `Advanced Configuration`{.action} step in the app creation process. This section allows you to specify a custom Docker command, the mounted volumes, and **the app labels**. -### Generating tokens +From this last sub-section, you can add a key-value pair. The key is the label identifier (e.g., `group`), while the value is the corresponding value assigned to this key (e.g., `A`). In this tutorial, we use the example `group=A` as the label of the AI Deploy app: -From the **AI Deploy** page, you access the tokens management page by clicking the `Tokens`{.action} tab. +![app add label](images/01-app-add-label.png){.thumbnail} -![app list](images/03-app-list.png) +Once created, all the labels of an AI Deploy app are listed on the app details, under **Labels** field: -Once on the token management tab, simply click on `New Token`{.action}. +![app dashboard](images/02-app-dashboard.png){.thumbnail} -![token list new](images/04-token-list-new-2.png){.thumbnail} +#### Adding label to an existing app + +If your app is already deployed, you can still add or update labels at any time using the Control Panel (UI) or the `ovhai` CLI. + +> [!tabs] +> **Using the Control Panel (UI)** +>> +>> Navigate to the **AI Deploy** section where all your apps are listed. **Click the name of your app** to open its details page. Locate the **Labels** section. Enter the key-value pair and click `+`{.action} to add the label to your app. +>> +>> ![image](images/03-app-add-label-existing-app.png){.thumbnail} +>> +>> After saving, the added label will be visible in the **Labels** section of the app details. +>> +> **Using ovhai CLI** +>> +>> To follow this section, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. +>> +>> You can also add labels to an existing app using the `ovhai` CLI. Run the following command: +>> +>> ```console +>> ovhai app set-label +>> ``` +>> +>> Replace with the unique identifier of your app (found in the app details or by running `ovhai app ls`). And replace and with your desired key and value pair. For example: +>> +>> ```console +>> ovhai app set-label a8318623-8357-48b4-bd3b-648c3e343ec9 group A +>> ``` +>> +>> This command adds the label `group=A` to the app with ID `a8318623-8357-48b4-bd3b-648c3e343ec9`. +>> +>> You can verify app labels by running `ovhai app get `. Labels will be displayed at the top of app details, in the *Labels* field. 
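+>>
+>> As an illustration, a minimal sketch of the whole labelling workflow from the CLI could look like this (here `<app_id>` is a placeholder for your own app ID; the UUID used above is only an example):
+>>
+>> ```console
+>> ovhai app ls                             # list your apps and note the ID of the target app
+>> ovhai app set-label <app_id> group A     # add the label group=A to that app
+>> ovhai app get <app_id>                   # check that the label now appears in the Labels field
+>> ```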
-#### Read token +### Generating tokens -There are two types of roles that can be assigned to a token: +From the **AI Dashboard** page, you can access the tokens management page by clicking on the `Tokens`{.action} tab. From there, you can click on the `+ Create a token`{.action} button to create a new token: -- AI Platform - Read-only -- AI Platform - Operator +![token list new](images/04-token-list.png){.thumbnail} -A Read-only token will only grant you the right to query the deployed app while an Operator token would also allow you to manage the AI Deploy app itself. +There are two types of roles that can be assigned to a token: -Let us create a token for the AI Deploy apps matching the label `group=A` with read-only access in the GRA cluster. -To create an AI Deploy app token we need to specify 3 parameters: +- **AI Platform - Reader**: allows only querying the app +- **AI Platform - Operator**: allows querying and full lifecycle management (start/stop/delete) -- The token scope is specified through label selectors, and a token will be scoped over any app matching the set of label selectors. In this case `group=A` -- The token role: AI Training - Read-only -- The region (cluster in which are deployed the AI Deploy apps): GRA. +#### Read token -Fill out the form: +Let's create a token for the AI Deploy apps matching the label `group=A` with read-only access in the GRA (Gravelines) cluster. To do this, we will need to fill 4 parameters: ![token generation input read](images/05-token-gen-input-read.png){.thumbnail} -Click `Generate`{.action}. Upon success, you are redirected to the token list with the newly generated token displayed at the top: - -![token generation result](images/06-token-gen-result-read.png){.thumbnail} +- **Token name**: used for token identification, management only. +- **Label selector**: determines which apps the token applies to (e.g., `group=A`). +- **Role**: choose from: + - `AI Reader`: read-only + - `AI Operator`: read & manage +- **Region**: e.g., `GRA` (for Gravelines) -Save the token string for later use. +After completing the form, click `Generate`{.action} to confirm the token creation. > [!warning] -> The token is only displayed once, make sure to save it before leaving the page or you will need to regenerate the token. +> You will then receive the value of your new token, which you must **carefully save**, as its value is only displayed once. If you lose the token value, you will need to [regenerate it](#regenerating-a-token). + +You will then be redirected to the token list, where the newly generated token will be displayed at the top: + +![token generation result](images/06-token-gen-result-read.png){.thumbnail} This newly generated token provides read access over all resources tagged with the label `group=A` including the ones submitted after the creation of the token. -#### Operator token +If you prefer working from the command line, you can generate the same token using the `ovhai` CLI: -An operator token grants read access along with management access for the matching apps. This means that you can manage the AI Deploy app lifecycle (start/stop/delete) using either the CLI (more info [here](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli)) or the [AI Training API](https://gra.ai.cloud.ovh.net/) by providing this token. 
+```console +ovhai token create reader-token --role read --label-selector group=A +``` -This Operator token will be scoped on a specific AI Deploy app and we will use the default `ovh/id` label to do so (since it is reserved, there is only one AI Deploy app that can match this label selector). +#### Operator token -- Token scope: `ovh/id=4c4f6253-a059-424a-92da-5e06a12ddffd` -- The token role: AI Training - Operator -- The region: GRA. +An operator token grants read access along with management access for the matching apps. This allows you to manage the AI Deploy app lifecycle (start/stop/delete) using either the CLI (more info [here](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli)) or the [AI API](https://gra.ai.cloud.ovh.net/) by providing this token. ![token generation input operator](images/07-token-gen-input-op.png){.thumbnail} -Additional information about the use of a token to manage an AI Training resource can be found [here](/pages/public_cloud/ai_machine_learning/cli_13_howto_app_token_cli#use-the-app-token). +Equivalent CLI command is: + +```console +ovhai token create operator-token --role operator --label-selector group=A +``` + +You can also scope a token to a specific app using the `ovh/id` label and the app’s ID as its value. This label is added automatically by default as explained [above](#instructions) and, because it is reserved, it will uniquely match only one app. ### Using a token to query an AI Deploy app With the token we generated in the previous step, we will now query the app. For this demonstration, we deployed a simple Hello World app that always responds `Hello, World!`. -You can get the access URL of your app in the details of the AI Deploy app, above the **Tags**. +You can get the access URL of your app in the details of the AI Deploy app, above the **Labels**. #### Browser @@ -114,7 +159,7 @@ You now land on the exposed AI Deploy app service: #### Code integration -You can also directly CURL the AI Deploy app using the token as an `Authorization` header: +You can also use CURL to directly query the AI Deploy app using the token as an `Authorization` header: ```bash export TOKEN= @@ -137,7 +182,7 @@ From the list of tokens, click on the action menu and select `Regenerate`{.actio ![token regenerate](images/08-token-list-regen.png){.thumbnail} -Then click on `Regenerate`{.action} to confirm. +Then click on `Confirm`{.action}: ![token regenerate confirm](images/09-token-regen-confirm.png){.thumbnail} @@ -147,8 +192,12 @@ If you simply need to invalidate the token, you can delete it using the same act ![token delete](images/10-token-list-delete.png) +## Go further + +Additional information about the use of a token to manage AI Solutions using `ovhai` CLI can be found [here](/pages/public_cloud/ai_machine_learning/cli_13_howto_app_token_cli). + ## Feedback Please feel free to send us your questions, feedback and suggestions to help our team improve the service on the OVHcloud [Discord server](https://discord.gg/ovhcloud) -If you need training or technical assistance to implement our solutions, contact your sales representative or click on [this link](/links/professional-services) to get a quote and ask our Professional Services experts for a custom analysis of your project. +If you need training or technical assistance to implement our solutions, contact your sales representative or click on [this link](/links/professional-services) to get a quote and ask our Professional Services experts for a custom analysis of your project. 
\ No newline at end of file diff --git a/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/guide.en-ca.md b/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/guide.en-ca.md index 58492b8fe82..2f994c23ae5 100644 --- a/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/guide.en-ca.md +++ b/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/guide.en-ca.md @@ -1,7 +1,7 @@ --- title: AI Deploy - Accessing your app with tokens -excerpt: Discover how to create a scoped token and query your AI Deploy app -updated: 2023-04-04 +excerpt: Learn how to create scoped tokens to securely access your AI Deploy applications +updated: 2025-07-28 --- > [!primary] @@ -13,92 +13,137 @@ updated: 2023-04-04 This guide covers the creation of application tokens for AI Deploy. +This is particularly useful when you want to make your app accessible to others without sharing your username and password. Moreover, using tokens facilitates the integration of your app with other services or scripts, such as those using curl, allowing for a more automated and flexible interaction with your AI Deploy application. + +In this tutorial, we will create and assign tokens to a basic AI Deploy app, running the [infrastructureascode/hello-world](https://hub.docker.com/r/infrastructureascode/hello-world) Docker image. + ## Requirements - a **Public cloud** project - access to the [OVHcloud Control Panel](/links/manager) -- a Running AI Deploy app (the deployed app in this guide uses the Docker image [infrastructureascode/hello-world](https://hub.docker.com/r/infrastructureascode/hello-world)) ## Instructions -### Adding labels to an App +By default, the following two labels are automatically added to each AI Deploy application: -Tokens are scoped based on labels added to your AI Deploy app. To scope a token you need to add a label to your AI Deploy app upon submission. +- `ovh/id` whose value is the ID of the AI Deploy app +- `ovh/type` whose value is `app`, the type of AI resource -![app add label](images/01-app-add-label.png){.thumbnail} +> [!primary] +> These labels are prefixed by `ovh/`, meaning these are reserved by the platform. These labels will be automatically overwritten by the platform if you attempt to redefine them during submission. They won't be displayed in the manager UI. +> -In this instance we add the label `group=A` to the AI Deploy app. A set of defaults labels is added to all AI Deploy apps: +In addition to these default labels, you can **create new ones** to further customize and secure your application access. -- `ovh/id` whose value is the ID of the AI Deploy app -- `ovh/type` with value `app`, the type of AI resource +### Adding labels to an app -> [!primary] -> Labels prefixed by `ovh/` are reserved by the platform, those labels are overriden upon submission if any are provided. +Tokens are scoped based on labels added to your AI Deploy app. To scope a token, you must add a label to your AI Deploy app. This can be done either during the app creation process or after the app is deployed. You can add multiple labels by repeating the following process. -All the labels of an AI Deploy app are listed on the AI Deploy app details under **Tags**: +#### Adding label during app creation -![app dashboard](images/02-app-dashboard.png){.thumbnail} +To add a label when creating an AI Deploy app, access the `Advanced Configuration`{.action} step in the app creation process. 
This section allows you to specify a custom Docker command, the mounted volumes, and **the app labels**. -### Generating tokens +From this last sub-section, you can add a key-value pair. The key is the label identifier (e.g., `group`), while the value is the corresponding value assigned to this key (e.g., `A`). In this tutorial, we use the example `group=A` as the label of the AI Deploy app: -From the **AI Deploy** page, you access the tokens management page by clicking the `Tokens`{.action} tab. +![app add label](images/01-app-add-label.png){.thumbnail} -![app list](images/03-app-list.png) +Once created, all the labels of an AI Deploy app are listed on the app details, under **Labels** field: -Once on the token management tab, simply click on `New Token`{.action}. +![app dashboard](images/02-app-dashboard.png){.thumbnail} -![token list new](images/04-token-list-new-2.png){.thumbnail} +#### Adding label to an existing app + +If your app is already deployed, you can still add or update labels at any time using the Control Panel (UI) or the `ovhai` CLI. + +> [!tabs] +> **Using the Control Panel (UI)** +>> +>> Navigate to the **AI Deploy** section where all your apps are listed. **Click the name of your app** to open its details page. Locate the **Labels** section. Enter the key-value pair and click `+`{.action} to add the label to your app. +>> +>> ![image](images/03-app-add-label-existing-app.png){.thumbnail} +>> +>> After saving, the added label will be visible in the **Labels** section of the app details. +>> +> **Using ovhai CLI** +>> +>> To follow this section, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. +>> +>> You can also add labels to an existing app using the `ovhai` CLI. Run the following command: +>> +>> ```console +>> ovhai app set-label +>> ``` +>> +>> Replace with the unique identifier of your app (found in the app details or by running `ovhai app ls`). And replace and with your desired key and value pair. For example: +>> +>> ```console +>> ovhai app set-label a8318623-8357-48b4-bd3b-648c3e343ec9 group A +>> ``` +>> +>> This command adds the label `group=A` to the app with ID `a8318623-8357-48b4-bd3b-648c3e343ec9`. +>> +>> You can verify app labels by running `ovhai app get `. Labels will be displayed at the top of app details, in the *Labels* field. -#### Read token +### Generating tokens -There are two types of roles that can be assigned to a token: +From the **AI Dashboard** page, you can access the tokens management page by clicking on the `Tokens`{.action} tab. From there, you can click on the `+ Create a token`{.action} button to create a new token: -- AI Platform - Read-only -- AI Platform - Operator +![token list new](images/04-token-list.png){.thumbnail} -A Read-only token will only grant you the right to query the deployed app while an Operator token would also allow you to manage the AI Deploy app itself. +There are two types of roles that can be assigned to a token: -Let us create a token for the AI Deploy apps matching the label `group=A` with read-only access in the GRA cluster. -To create an AI Deploy app token we need to specify 3 parameters: +- **AI Platform - Reader**: allows only querying the app +- **AI Platform - Operator**: allows querying and full lifecycle management (start/stop/delete) -- The token scope is specified through label selectors, and a token will be scoped over any app matching the set of label selectors. 
In this case `group=A` -- The token role: AI Training - Read-only -- The region (cluster in which are deployed the AI Deploy apps): GRA. +#### Read token -Fill out the form: +Let's create a token for the AI Deploy apps matching the label `group=A` with read-only access in the GRA (Gravelines) cluster. To do this, we will need to fill 4 parameters: ![token generation input read](images/05-token-gen-input-read.png){.thumbnail} -Click `Generate`{.action}. Upon success, you are redirected to the token list with the newly generated token displayed at the top: - -![token generation result](images/06-token-gen-result-read.png){.thumbnail} +- **Token name**: used for token identification, management only. +- **Label selector**: determines which apps the token applies to (e.g., `group=A`). +- **Role**: choose from: + - `AI Reader`: read-only + - `AI Operator`: read & manage +- **Region**: e.g., `GRA` (for Gravelines) -Save the token string for later use. +After completing the form, click `Generate`{.action} to confirm the token creation. > [!warning] -> The token is only displayed once, make sure to save it before leaving the page or you will need to regenerate the token. +> You will then receive the value of your new token, which you must **carefully save**, as its value is only displayed once. If you lose the token value, you will need to [regenerate it](#regenerating-a-token). + +You will then be redirected to the token list, where the newly generated token will be displayed at the top: + +![token generation result](images/06-token-gen-result-read.png){.thumbnail} This newly generated token provides read access over all resources tagged with the label `group=A` including the ones submitted after the creation of the token. -#### Operator token +If you prefer working from the command line, you can generate the same token using the `ovhai` CLI: -An operator token grants read access along with management access for the matching apps. This means that you can manage the AI Deploy app lifecycle (start/stop/delete) using either the CLI (more info [here](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli)) or the [AI Training API](https://gra.ai.cloud.ovh.net/) by providing this token. +```console +ovhai token create reader-token --role read --label-selector group=A +``` -This Operator token will be scoped on a specific AI Deploy app and we will use the default `ovh/id` label to do so (since it is reserved, there is only one AI Deploy app that can match this label selector). +#### Operator token -- Token scope: `ovh/id=4c4f6253-a059-424a-92da-5e06a12ddffd` -- The token role: AI Training - Operator -- The region: GRA. +An operator token grants read access along with management access for the matching apps. This allows you to manage the AI Deploy app lifecycle (start/stop/delete) using either the CLI (more info [here](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli)) or the [AI API](https://gra.ai.cloud.ovh.net/) by providing this token. ![token generation input operator](images/07-token-gen-input-op.png){.thumbnail} -Additional information about the use of a token to manage an AI Training resource can be found [here](/pages/public_cloud/ai_machine_learning/cli_13_howto_app_token_cli#use-the-app-token). +Equivalent CLI command is: + +```console +ovhai token create operator-token --role operator --label-selector group=A +``` + +You can also scope a token to a specific app using the `ovh/id` label and the app’s ID as its value. 
This label is added automatically by default as explained [above](#instructions) and, because it is reserved, it will uniquely match only one app. ### Using a token to query an AI Deploy app With the token we generated in the previous step, we will now query the app. For this demonstration, we deployed a simple Hello World app that always responds `Hello, World!`. -You can get the access URL of your app in the details of the AI Deploy app, above the **Tags**. +You can get the access URL of your app in the details of the AI Deploy app, above the **Labels**. #### Browser @@ -114,7 +159,7 @@ You now land on the exposed AI Deploy app service: #### Code integration -You can also directly CURL the AI Deploy app using the token as an `Authorization` header: +You can also use CURL to directly query the AI Deploy app using the token as an `Authorization` header: ```bash export TOKEN= @@ -137,7 +182,7 @@ From the list of tokens, click on the action menu and select `Regenerate`{.actio ![token regenerate](images/08-token-list-regen.png){.thumbnail} -Then click on `Regenerate`{.action} to confirm. +Then click on `Confirm`{.action}: ![token regenerate confirm](images/09-token-regen-confirm.png){.thumbnail} @@ -147,8 +192,12 @@ If you simply need to invalidate the token, you can delete it using the same act ![token delete](images/10-token-list-delete.png) +## Go further + +Additional information about the use of a token to manage AI Solutions using `ovhai` CLI can be found [here](/pages/public_cloud/ai_machine_learning/cli_13_howto_app_token_cli). + ## Feedback Please feel free to send us your questions, feedback and suggestions to help our team improve the service on the OVHcloud [Discord server](https://discord.gg/ovhcloud) -If you need training or technical assistance to implement our solutions, contact your sales representative or click on [this link](/links/professional-services) to get a quote and ask our Professional Services experts for a custom analysis of your project. +If you need training or technical assistance to implement our solutions, contact your sales representative or click on [this link](/links/professional-services) to get a quote and ask our Professional Services experts for a custom analysis of your project. \ No newline at end of file diff --git a/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/guide.en-gb.md b/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/guide.en-gb.md index 58492b8fe82..a1ded8775a2 100644 --- a/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/guide.en-gb.md +++ b/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/guide.en-gb.md @@ -1,7 +1,7 @@ --- title: AI Deploy - Accessing your app with tokens -excerpt: Discover how to create a scoped token and query your AI Deploy app -updated: 2023-04-04 +excerpt: Learn how to create scoped tokens to securely access your AI Deploy applications +updated: 2025-07-28 --- > [!primary] @@ -13,92 +13,137 @@ updated: 2023-04-04 This guide covers the creation of application tokens for AI Deploy. +This is particularly useful when you want to make your app accessible to others without sharing your username and password. Moreover, using tokens facilitates the integration of your app with other services or scripts, such as those using curl, allowing for a more automated and flexible interaction with your AI Deploy application. 
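+
+As a quick preview of such a scripted call, here is a minimal, illustrative sketch (the access URL and token values below are placeholders, and it assumes the token is passed as a Bearer `Authorization` header; the exact `curl` command for your app is shown in the *Code integration* section later in this guide):
+
+```bash
+# Illustrative placeholders: use the access URL shown in your app details and your own token
+export APP_URL="<your-app-access-url>"
+export TOKEN="<your-token>"
+
+# Query the deployed app, passing the token in the Authorization header
+curl -H "Authorization: Bearer $TOKEN" "$APP_URL"
+```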
+ +In this tutorial, we will create and assign tokens to a basic AI Deploy app, running the [infrastructureascode/hello-world](https://hub.docker.com/r/infrastructureascode/hello-world) Docker image. + ## Requirements - a **Public cloud** project - access to the [OVHcloud Control Panel](/links/manager) -- a Running AI Deploy app (the deployed app in this guide uses the Docker image [infrastructureascode/hello-world](https://hub.docker.com/r/infrastructureascode/hello-world)) ## Instructions -### Adding labels to an App +By default, the following two labels are automatically added to each AI Deploy application: -Tokens are scoped based on labels added to your AI Deploy app. To scope a token you need to add a label to your AI Deploy app upon submission. +- `ovh/id` whose value is the ID of the AI Deploy app +- `ovh/type` whose value is `app`, the type of AI resource -![app add label](images/01-app-add-label.png){.thumbnail} +> [!primary] +> These labels are prefixed by `ovh/`, meaning these are reserved by the platform. These labels will be automatically overwritten by the platform if you attempt to redefine them during submission. They won't be displayed in the manager UI. +> -In this instance we add the label `group=A` to the AI Deploy app. A set of defaults labels is added to all AI Deploy apps: +In addition to these default labels, you can **create new ones** to further customize and secure your application access. -- `ovh/id` whose value is the ID of the AI Deploy app -- `ovh/type` with value `app`, the type of AI resource +### Adding labels to an app -> [!primary] -> Labels prefixed by `ovh/` are reserved by the platform, those labels are overriden upon submission if any are provided. +Tokens are scoped based on labels added to your AI Deploy app. To scope a token, you must add a label to your AI Deploy app. This can be done either during the app creation process or after the app is deployed. You can add multiple labels by repeating the following process. -All the labels of an AI Deploy app are listed on the AI Deploy app details under **Tags**: +#### Adding label during app creation -![app dashboard](images/02-app-dashboard.png){.thumbnail} +To add a label when creating an AI Deploy app, access the `Advanced Configuration`{.action} step in the app creation process. This section allows you to specify a custom Docker command, the mounted volumes, and **the app labels**. -### Generating tokens +From this last sub-section, you can add a key-value pair. The key is the label identifier (e.g., `group`), while the value is the corresponding value assigned to this key (e.g., `A`). In this tutorial, we use the example `group=A` as the label of the AI Deploy app: -From the **AI Deploy** page, you access the tokens management page by clicking the `Tokens`{.action} tab. +![app add label](images/01-app-add-label.png){.thumbnail} -![app list](images/03-app-list.png) +Once created, all the labels of an AI Deploy app are listed on the app details, under **Labels** field: -Once on the token management tab, simply click on `New Token`{.action}. +![app dashboard](images/02-app-dashboard.png){.thumbnail} -![token list new](images/04-token-list-new-2.png){.thumbnail} +#### Adding label to an existing app + +If your app is already deployed, you can still add or update labels at any time using the Control Panel (UI) or the `ovhai` CLI. + +> [!tabs] +> **Using the Control Panel (UI)** +>> +>> Navigate to the **AI Deploy** section where all your apps are listed. 
**Click the name of your app** to open its details page. Locate the **Labels** section. Enter the key-value pair and click `+`{.action} to add the label to your app. +>> +>> ![image](images/03-app-add-label-existing-app.png){.thumbnail} +>> +>> After saving, the added label will be visible in the **Labels** section of the app details. +>> +> **Using ovhai CLI** +>> +>> To follow this section, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. +>> +>> You can also add labels to an existing app using the `ovhai` CLI. Run the following command: +>> +>> ```console +>> ovhai app set-label +>> ``` +>> +>> Replace with the unique identifier of your app (found in the app details or by running `ovhai app ls`). And replace and with your desired key and value pair. For example: +>> +>> ```console +>> ovhai app set-label a8318623-8357-48b4-bd3b-648c3e343ec9 group A +>> ``` +>> +>> This command adds the label `group=A` to the app with ID `a8318623-8357-48b4-bd3b-648c3e343ec9`. +>> +>> You can verify app labels by running `ovhai app get `. Labels will be displayed at the top of app details, in the *Labels* field. -#### Read token +### Generating tokens -There are two types of roles that can be assigned to a token: +From the **AI Dashboard** page, you can access the tokens management page by clicking on the `Tokens`{.action} tab. From there, you can click on the `+ Create a token`{.action} button to create a new token: -- AI Platform - Read-only -- AI Platform - Operator +![token list new](images/04-token-list.png){.thumbnail} -A Read-only token will only grant you the right to query the deployed app while an Operator token would also allow you to manage the AI Deploy app itself. +There are two types of roles that can be assigned to a token: -Let us create a token for the AI Deploy apps matching the label `group=A` with read-only access in the GRA cluster. -To create an AI Deploy app token we need to specify 3 parameters: +- **AI Platform - Reader**: allows only querying the app +- **AI Platform - Operator**: allows querying and full lifecycle management (start/stop/delete) -- The token scope is specified through label selectors, and a token will be scoped over any app matching the set of label selectors. In this case `group=A` -- The token role: AI Training - Read-only -- The region (cluster in which are deployed the AI Deploy apps): GRA. +#### Read token -Fill out the form: +Let's create a token for the AI Deploy apps matching the label `group=A` with read-only access in the GRA (Gravelines) cluster. To do this, we will need to fill 4 parameters: ![token generation input read](images/05-token-gen-input-read.png){.thumbnail} -Click `Generate`{.action}. Upon success, you are redirected to the token list with the newly generated token displayed at the top: - -![token generation result](images/06-token-gen-result-read.png){.thumbnail} +- **Token name**: used for token identification, management only. +- **Label selector**: determines which apps the token applies to (e.g., `group=A`). +- **Role**: choose from: + - `AI Reader`: read-only + - `AI Operator`: read & manage +- **Region**: e.g., `GRA` (for Gravelines) -Save the token string for later use. +After completing the form, click `Generate`{.action} to confirm the token creation. > [!warning] -> The token is only displayed once, make sure to save it before leaving the page or you will need to regenerate the token. 
+> You will then receive the value of your new token, which you must **carefully save**, as its value is only displayed once. If you lose the token value, you will need to [regenerate it](#regenerating-a-token). + +You will then be redirected to the token list, where the newly generated token will be displayed at the top: + +![token generation result](images/06-token-gen-result-read.png){.thumbnail} This newly generated token provides read access over all resources tagged with the label `group=A` including the ones submitted after the creation of the token. -#### Operator token +If you prefer working from the command line, you can generate the same token using the `ovhai` CLI: -An operator token grants read access along with management access for the matching apps. This means that you can manage the AI Deploy app lifecycle (start/stop/delete) using either the CLI (more info [here](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli)) or the [AI Training API](https://gra.ai.cloud.ovh.net/) by providing this token. +```console +ovhai token create reader-token --role read --label-selector group=A +``` -This Operator token will be scoped on a specific AI Deploy app and we will use the default `ovh/id` label to do so (since it is reserved, there is only one AI Deploy app that can match this label selector). +#### Operator token -- Token scope: `ovh/id=4c4f6253-a059-424a-92da-5e06a12ddffd` -- The token role: AI Training - Operator -- The region: GRA. +An operator token grants read access along with management access for the matching apps. This allows you to manage the AI Deploy app lifecycle (start/stop/delete) using either the CLI (more info [here](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli)) or the [AI API](https://gra.ai.cloud.ovh.net/) by providing this token. ![token generation input operator](images/07-token-gen-input-op.png){.thumbnail} -Additional information about the use of a token to manage an AI Training resource can be found [here](/pages/public_cloud/ai_machine_learning/cli_13_howto_app_token_cli#use-the-app-token). +Equivalent CLI command is: + +```console +ovhai token create operator-token --role operator --label-selector group=A +``` + +You can also scope a token to a specific app using the `ovh/id` label and the app’s ID as its value. This label is added automatically by default as explained [above](#instructions) and, because it is reserved, it will uniquely match only one app. ### Using a token to query an AI Deploy app With the token we generated in the previous step, we will now query the app. For this demonstration, we deployed a simple Hello World app that always responds `Hello, World!`. -You can get the access URL of your app in the details of the AI Deploy app, above the **Tags**. +You can get the access URL of your app in the details of the AI Deploy app, above the **Labels**. #### Browser @@ -114,7 +159,7 @@ You now land on the exposed AI Deploy app service: #### Code integration -You can also directly CURL the AI Deploy app using the token as an `Authorization` header: +You can also use CURL to directly query the AI Deploy app using the token as an `Authorization` header: ```bash export TOKEN= @@ -137,7 +182,7 @@ From the list of tokens, click on the action menu and select `Regenerate`{.actio ![token regenerate](images/08-token-list-regen.png){.thumbnail} -Then click on `Regenerate`{.action} to confirm. 
+Then click on `Confirm`{.action}: ![token regenerate confirm](images/09-token-regen-confirm.png){.thumbnail} @@ -147,6 +192,10 @@ If you simply need to invalidate the token, you can delete it using the same act ![token delete](images/10-token-list-delete.png) +## Go further + +Additional information about the use of a token to manage AI Solutions using `ovhai` CLI can be found [here](/pages/public_cloud/ai_machine_learning/cli_13_howto_app_token_cli). + ## Feedback Please feel free to send us your questions, feedback and suggestions to help our team improve the service on the OVHcloud [Discord server](https://discord.gg/ovhcloud) diff --git a/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/guide.en-ie.md b/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/guide.en-ie.md index 58492b8fe82..2f994c23ae5 100644 --- a/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/guide.en-ie.md +++ b/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/guide.en-ie.md @@ -1,7 +1,7 @@ --- title: AI Deploy - Accessing your app with tokens -excerpt: Discover how to create a scoped token and query your AI Deploy app -updated: 2023-04-04 +excerpt: Learn how to create scoped tokens to securely access your AI Deploy applications +updated: 2025-07-28 --- > [!primary] @@ -13,92 +13,137 @@ updated: 2023-04-04 This guide covers the creation of application tokens for AI Deploy. +This is particularly useful when you want to make your app accessible to others without sharing your username and password. Moreover, using tokens facilitates the integration of your app with other services or scripts, such as those using curl, allowing for a more automated and flexible interaction with your AI Deploy application. + +In this tutorial, we will create and assign tokens to a basic AI Deploy app, running the [infrastructureascode/hello-world](https://hub.docker.com/r/infrastructureascode/hello-world) Docker image. + ## Requirements - a **Public cloud** project - access to the [OVHcloud Control Panel](/links/manager) -- a Running AI Deploy app (the deployed app in this guide uses the Docker image [infrastructureascode/hello-world](https://hub.docker.com/r/infrastructureascode/hello-world)) ## Instructions -### Adding labels to an App +By default, the following two labels are automatically added to each AI Deploy application: -Tokens are scoped based on labels added to your AI Deploy app. To scope a token you need to add a label to your AI Deploy app upon submission. +- `ovh/id` whose value is the ID of the AI Deploy app +- `ovh/type` whose value is `app`, the type of AI resource -![app add label](images/01-app-add-label.png){.thumbnail} +> [!primary] +> These labels are prefixed by `ovh/`, meaning these are reserved by the platform. These labels will be automatically overwritten by the platform if you attempt to redefine them during submission. They won't be displayed in the manager UI. +> -In this instance we add the label `group=A` to the AI Deploy app. A set of defaults labels is added to all AI Deploy apps: +In addition to these default labels, you can **create new ones** to further customize and secure your application access. -- `ovh/id` whose value is the ID of the AI Deploy app -- `ovh/type` with value `app`, the type of AI resource +### Adding labels to an app -> [!primary] -> Labels prefixed by `ovh/` are reserved by the platform, those labels are overriden upon submission if any are provided. +Tokens are scoped based on labels added to your AI Deploy app. 
To scope a token, you must add a label to your AI Deploy app. This can be done either during the app creation process or after the app is deployed. You can add multiple labels by repeating the following process. -All the labels of an AI Deploy app are listed on the AI Deploy app details under **Tags**: +#### Adding label during app creation -![app dashboard](images/02-app-dashboard.png){.thumbnail} +To add a label when creating an AI Deploy app, access the `Advanced Configuration`{.action} step in the app creation process. This section allows you to specify a custom Docker command, the mounted volumes, and **the app labels**. -### Generating tokens +From this last sub-section, you can add a key-value pair. The key is the label identifier (e.g., `group`), while the value is the corresponding value assigned to this key (e.g., `A`). In this tutorial, we use the example `group=A` as the label of the AI Deploy app: -From the **AI Deploy** page, you access the tokens management page by clicking the `Tokens`{.action} tab. +![app add label](images/01-app-add-label.png){.thumbnail} -![app list](images/03-app-list.png) +Once created, all the labels of an AI Deploy app are listed on the app details, under **Labels** field: -Once on the token management tab, simply click on `New Token`{.action}. +![app dashboard](images/02-app-dashboard.png){.thumbnail} -![token list new](images/04-token-list-new-2.png){.thumbnail} +#### Adding label to an existing app + +If your app is already deployed, you can still add or update labels at any time using the Control Panel (UI) or the `ovhai` CLI. + +> [!tabs] +> **Using the Control Panel (UI)** +>> +>> Navigate to the **AI Deploy** section where all your apps are listed. **Click the name of your app** to open its details page. Locate the **Labels** section. Enter the key-value pair and click `+`{.action} to add the label to your app. +>> +>> ![image](images/03-app-add-label-existing-app.png){.thumbnail} +>> +>> After saving, the added label will be visible in the **Labels** section of the app details. +>> +> **Using ovhai CLI** +>> +>> To follow this section, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. +>> +>> You can also add labels to an existing app using the `ovhai` CLI. Run the following command: +>> +>> ```console +>> ovhai app set-label +>> ``` +>> +>> Replace with the unique identifier of your app (found in the app details or by running `ovhai app ls`). And replace and with your desired key and value pair. For example: +>> +>> ```console +>> ovhai app set-label a8318623-8357-48b4-bd3b-648c3e343ec9 group A +>> ``` +>> +>> This command adds the label `group=A` to the app with ID `a8318623-8357-48b4-bd3b-648c3e343ec9`. +>> +>> You can verify app labels by running `ovhai app get `. Labels will be displayed at the top of app details, in the *Labels* field. -#### Read token +### Generating tokens -There are two types of roles that can be assigned to a token: +From the **AI Dashboard** page, you can access the tokens management page by clicking on the `Tokens`{.action} tab. From there, you can click on the `+ Create a token`{.action} button to create a new token: -- AI Platform - Read-only -- AI Platform - Operator +![token list new](images/04-token-list.png){.thumbnail} -A Read-only token will only grant you the right to query the deployed app while an Operator token would also allow you to manage the AI Deploy app itself. 
+There are two types of roles that can be assigned to a token: -Let us create a token for the AI Deploy apps matching the label `group=A` with read-only access in the GRA cluster. -To create an AI Deploy app token we need to specify 3 parameters: +- **AI Platform - Reader**: allows only querying the app +- **AI Platform - Operator**: allows querying and full lifecycle management (start/stop/delete) -- The token scope is specified through label selectors, and a token will be scoped over any app matching the set of label selectors. In this case `group=A` -- The token role: AI Training - Read-only -- The region (cluster in which are deployed the AI Deploy apps): GRA. +#### Read token -Fill out the form: +Let's create a token for the AI Deploy apps matching the label `group=A` with read-only access in the GRA (Gravelines) cluster. To do this, we will need to fill 4 parameters: ![token generation input read](images/05-token-gen-input-read.png){.thumbnail} -Click `Generate`{.action}. Upon success, you are redirected to the token list with the newly generated token displayed at the top: - -![token generation result](images/06-token-gen-result-read.png){.thumbnail} +- **Token name**: used for token identification, management only. +- **Label selector**: determines which apps the token applies to (e.g., `group=A`). +- **Role**: choose from: + - `AI Reader`: read-only + - `AI Operator`: read & manage +- **Region**: e.g., `GRA` (for Gravelines) -Save the token string for later use. +After completing the form, click `Generate`{.action} to confirm the token creation. > [!warning] -> The token is only displayed once, make sure to save it before leaving the page or you will need to regenerate the token. +> You will then receive the value of your new token, which you must **carefully save**, as its value is only displayed once. If you lose the token value, you will need to [regenerate it](#regenerating-a-token). + +You will then be redirected to the token list, where the newly generated token will be displayed at the top: + +![token generation result](images/06-token-gen-result-read.png){.thumbnail} This newly generated token provides read access over all resources tagged with the label `group=A` including the ones submitted after the creation of the token. -#### Operator token +If you prefer working from the command line, you can generate the same token using the `ovhai` CLI: -An operator token grants read access along with management access for the matching apps. This means that you can manage the AI Deploy app lifecycle (start/stop/delete) using either the CLI (more info [here](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli)) or the [AI Training API](https://gra.ai.cloud.ovh.net/) by providing this token. +```console +ovhai token create reader-token --role read --label-selector group=A +``` -This Operator token will be scoped on a specific AI Deploy app and we will use the default `ovh/id` label to do so (since it is reserved, there is only one AI Deploy app that can match this label selector). +#### Operator token -- Token scope: `ovh/id=4c4f6253-a059-424a-92da-5e06a12ddffd` -- The token role: AI Training - Operator -- The region: GRA. +An operator token grants read access along with management access for the matching apps. This allows you to manage the AI Deploy app lifecycle (start/stop/delete) using either the CLI (more info [here](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli)) or the [AI API](https://gra.ai.cloud.ovh.net/) by providing this token. 
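+As an illustration, once such an operator token exists, you could call the AI API directly by sending the token in an `Authorization` header (Bearer scheme). The endpoint path below is an assumption made for this sketch only; check the API schema at [https://gra.ai.cloud.ovh.net/](https://gra.ai.cloud.ovh.net/) for the exact routes:
+
+```console
+# Hypothetical sketch: stop an app with an operator token (verify the exact path in the API schema)
+export OPERATOR_TOKEN=<your_operator_token>
+curl -X PUT \
+  -H "Authorization: Bearer $OPERATOR_TOKEN" \
+  https://gra.ai.cloud.ovh.net/v1/app/<app_id>/stop
+```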
![token generation input operator](images/07-token-gen-input-op.png){.thumbnail} -Additional information about the use of a token to manage an AI Training resource can be found [here](/pages/public_cloud/ai_machine_learning/cli_13_howto_app_token_cli#use-the-app-token). +Equivalent CLI command is: + +```console +ovhai token create operator-token --role operator --label-selector group=A +``` + +You can also scope a token to a specific app using the `ovh/id` label and the app’s ID as its value. This label is added automatically by default as explained [above](#instructions) and, because it is reserved, it will uniquely match only one app. ### Using a token to query an AI Deploy app With the token we generated in the previous step, we will now query the app. For this demonstration, we deployed a simple Hello World app that always responds `Hello, World!`. -You can get the access URL of your app in the details of the AI Deploy app, above the **Tags**. +You can get the access URL of your app in the details of the AI Deploy app, above the **Labels**. #### Browser @@ -114,7 +159,7 @@ You now land on the exposed AI Deploy app service: #### Code integration -You can also directly CURL the AI Deploy app using the token as an `Authorization` header: +You can also use CURL to directly query the AI Deploy app using the token as an `Authorization` header: ```bash export TOKEN= @@ -137,7 +182,7 @@ From the list of tokens, click on the action menu and select `Regenerate`{.actio ![token regenerate](images/08-token-list-regen.png){.thumbnail} -Then click on `Regenerate`{.action} to confirm. +Then click on `Confirm`{.action}: ![token regenerate confirm](images/09-token-regen-confirm.png){.thumbnail} @@ -147,8 +192,12 @@ If you simply need to invalidate the token, you can delete it using the same act ![token delete](images/10-token-list-delete.png) +## Go further + +Additional information about the use of a token to manage AI Solutions using `ovhai` CLI can be found [here](/pages/public_cloud/ai_machine_learning/cli_13_howto_app_token_cli). + ## Feedback Please feel free to send us your questions, feedback and suggestions to help our team improve the service on the OVHcloud [Discord server](https://discord.gg/ovhcloud) -If you need training or technical assistance to implement our solutions, contact your sales representative or click on [this link](/links/professional-services) to get a quote and ask our Professional Services experts for a custom analysis of your project. +If you need training or technical assistance to implement our solutions, contact your sales representative or click on [this link](/links/professional-services) to get a quote and ask our Professional Services experts for a custom analysis of your project. 
\ No newline at end of file diff --git a/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/guide.en-sg.md b/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/guide.en-sg.md index 58492b8fe82..2f994c23ae5 100644 --- a/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/guide.en-sg.md +++ b/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/guide.en-sg.md @@ -1,7 +1,7 @@ --- title: AI Deploy - Accessing your app with tokens -excerpt: Discover how to create a scoped token and query your AI Deploy app -updated: 2023-04-04 +excerpt: Learn how to create scoped tokens to securely access your AI Deploy applications +updated: 2025-07-28 --- > [!primary] @@ -13,92 +13,137 @@ updated: 2023-04-04 This guide covers the creation of application tokens for AI Deploy. +This is particularly useful when you want to make your app accessible to others without sharing your username and password. Moreover, using tokens facilitates the integration of your app with other services or scripts, such as those using curl, allowing for a more automated and flexible interaction with your AI Deploy application. + +In this tutorial, we will create and assign tokens to a basic AI Deploy app, running the [infrastructureascode/hello-world](https://hub.docker.com/r/infrastructureascode/hello-world) Docker image. + ## Requirements - a **Public cloud** project - access to the [OVHcloud Control Panel](/links/manager) -- a Running AI Deploy app (the deployed app in this guide uses the Docker image [infrastructureascode/hello-world](https://hub.docker.com/r/infrastructureascode/hello-world)) ## Instructions -### Adding labels to an App +By default, the following two labels are automatically added to each AI Deploy application: -Tokens are scoped based on labels added to your AI Deploy app. To scope a token you need to add a label to your AI Deploy app upon submission. +- `ovh/id` whose value is the ID of the AI Deploy app +- `ovh/type` whose value is `app`, the type of AI resource -![app add label](images/01-app-add-label.png){.thumbnail} +> [!primary] +> These labels are prefixed by `ovh/`, meaning these are reserved by the platform. These labels will be automatically overwritten by the platform if you attempt to redefine them during submission. They won't be displayed in the manager UI. +> -In this instance we add the label `group=A` to the AI Deploy app. A set of defaults labels is added to all AI Deploy apps: +In addition to these default labels, you can **create new ones** to further customize and secure your application access. -- `ovh/id` whose value is the ID of the AI Deploy app -- `ovh/type` with value `app`, the type of AI resource +### Adding labels to an app -> [!primary] -> Labels prefixed by `ovh/` are reserved by the platform, those labels are overriden upon submission if any are provided. +Tokens are scoped based on labels added to your AI Deploy app. To scope a token, you must add a label to your AI Deploy app. This can be done either during the app creation process or after the app is deployed. You can add multiple labels by repeating the following process. -All the labels of an AI Deploy app are listed on the AI Deploy app details under **Tags**: +#### Adding label during app creation -![app dashboard](images/02-app-dashboard.png){.thumbnail} +To add a label when creating an AI Deploy app, access the `Advanced Configuration`{.action} step in the app creation process. 
This section allows you to specify a custom Docker command, the mounted volumes, and **the app labels**. -### Generating tokens +From this last sub-section, you can add a key-value pair. The key is the label identifier (e.g., `group`), while the value is the corresponding value assigned to this key (e.g., `A`). In this tutorial, we use the example `group=A` as the label of the AI Deploy app: -From the **AI Deploy** page, you access the tokens management page by clicking the `Tokens`{.action} tab. +![app add label](images/01-app-add-label.png){.thumbnail} -![app list](images/03-app-list.png) +Once created, all the labels of an AI Deploy app are listed on the app details, under **Labels** field: -Once on the token management tab, simply click on `New Token`{.action}. +![app dashboard](images/02-app-dashboard.png){.thumbnail} -![token list new](images/04-token-list-new-2.png){.thumbnail} +#### Adding label to an existing app + +If your app is already deployed, you can still add or update labels at any time using the Control Panel (UI) or the `ovhai` CLI. + +> [!tabs] +> **Using the Control Panel (UI)** +>> +>> Navigate to the **AI Deploy** section where all your apps are listed. **Click the name of your app** to open its details page. Locate the **Labels** section. Enter the key-value pair and click `+`{.action} to add the label to your app. +>> +>> ![image](images/03-app-add-label-existing-app.png){.thumbnail} +>> +>> After saving, the added label will be visible in the **Labels** section of the app details. +>> +> **Using ovhai CLI** +>> +>> To follow this section, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. +>> +>> You can also add labels to an existing app using the `ovhai` CLI. Run the following command: +>> +>> ```console +>> ovhai app set-label +>> ``` +>> +>> Replace with the unique identifier of your app (found in the app details or by running `ovhai app ls`). And replace and with your desired key and value pair. For example: +>> +>> ```console +>> ovhai app set-label a8318623-8357-48b4-bd3b-648c3e343ec9 group A +>> ``` +>> +>> This command adds the label `group=A` to the app with ID `a8318623-8357-48b4-bd3b-648c3e343ec9`. +>> +>> You can verify app labels by running `ovhai app get `. Labels will be displayed at the top of app details, in the *Labels* field. -#### Read token +### Generating tokens -There are two types of roles that can be assigned to a token: +From the **AI Dashboard** page, you can access the tokens management page by clicking on the `Tokens`{.action} tab. From there, you can click on the `+ Create a token`{.action} button to create a new token: -- AI Platform - Read-only -- AI Platform - Operator +![token list new](images/04-token-list.png){.thumbnail} -A Read-only token will only grant you the right to query the deployed app while an Operator token would also allow you to manage the AI Deploy app itself. +There are two types of roles that can be assigned to a token: -Let us create a token for the AI Deploy apps matching the label `group=A` with read-only access in the GRA cluster. -To create an AI Deploy app token we need to specify 3 parameters: +- **AI Platform - Reader**: allows only querying the app +- **AI Platform - Operator**: allows querying and full lifecycle management (start/stop/delete) -- The token scope is specified through label selectors, and a token will be scoped over any app matching the set of label selectors. 
In this case `group=A` -- The token role: AI Training - Read-only -- The region (cluster in which are deployed the AI Deploy apps): GRA. +#### Read token -Fill out the form: +Let's create a token for the AI Deploy apps matching the label `group=A` with read-only access in the GRA (Gravelines) cluster. To do this, we will need to fill 4 parameters: ![token generation input read](images/05-token-gen-input-read.png){.thumbnail} -Click `Generate`{.action}. Upon success, you are redirected to the token list with the newly generated token displayed at the top: - -![token generation result](images/06-token-gen-result-read.png){.thumbnail} +- **Token name**: used for token identification, management only. +- **Label selector**: determines which apps the token applies to (e.g., `group=A`). +- **Role**: choose from: + - `AI Reader`: read-only + - `AI Operator`: read & manage +- **Region**: e.g., `GRA` (for Gravelines) -Save the token string for later use. +After completing the form, click `Generate`{.action} to confirm the token creation. > [!warning] -> The token is only displayed once, make sure to save it before leaving the page or you will need to regenerate the token. +> You will then receive the value of your new token, which you must **carefully save**, as its value is only displayed once. If you lose the token value, you will need to [regenerate it](#regenerating-a-token). + +You will then be redirected to the token list, where the newly generated token will be displayed at the top: + +![token generation result](images/06-token-gen-result-read.png){.thumbnail} This newly generated token provides read access over all resources tagged with the label `group=A` including the ones submitted after the creation of the token. -#### Operator token +If you prefer working from the command line, you can generate the same token using the `ovhai` CLI: -An operator token grants read access along with management access for the matching apps. This means that you can manage the AI Deploy app lifecycle (start/stop/delete) using either the CLI (more info [here](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli)) or the [AI Training API](https://gra.ai.cloud.ovh.net/) by providing this token. +```console +ovhai token create reader-token --role read --label-selector group=A +``` -This Operator token will be scoped on a specific AI Deploy app and we will use the default `ovh/id` label to do so (since it is reserved, there is only one AI Deploy app that can match this label selector). +#### Operator token -- Token scope: `ovh/id=4c4f6253-a059-424a-92da-5e06a12ddffd` -- The token role: AI Training - Operator -- The region: GRA. +An operator token grants read access along with management access for the matching apps. This allows you to manage the AI Deploy app lifecycle (start/stop/delete) using either the CLI (more info [here](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli)) or the [AI API](https://gra.ai.cloud.ovh.net/) by providing this token. ![token generation input operator](images/07-token-gen-input-op.png){.thumbnail} -Additional information about the use of a token to manage an AI Training resource can be found [here](/pages/public_cloud/ai_machine_learning/cli_13_howto_app_token_cli#use-the-app-token). +Equivalent CLI command is: + +```console +ovhai token create operator-token --role operator --label-selector group=A +``` + +You can also scope a token to a specific app using the `ovh/id` label and the app’s ID as its value. 
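+For example, here is a minimal sketch reusing the `ovhai token create` syntax shown above; the token name is arbitrary and `<app_id>` is a placeholder for your own app ID:
+
+```console
+ovhai token create single-app-operator --role operator --label-selector ovh/id=<app_id>
+```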
This label is added automatically by default as explained [above](#instructions) and, because it is reserved, it will uniquely match only one app. ### Using a token to query an AI Deploy app With the token we generated in the previous step, we will now query the app. For this demonstration, we deployed a simple Hello World app that always responds `Hello, World!`. -You can get the access URL of your app in the details of the AI Deploy app, above the **Tags**. +You can get the access URL of your app in the details of the AI Deploy app, above the **Labels**. #### Browser @@ -114,7 +159,7 @@ You now land on the exposed AI Deploy app service: #### Code integration -You can also directly CURL the AI Deploy app using the token as an `Authorization` header: +You can also use CURL to directly query the AI Deploy app using the token as an `Authorization` header: ```bash export TOKEN= @@ -137,7 +182,7 @@ From the list of tokens, click on the action menu and select `Regenerate`{.actio ![token regenerate](images/08-token-list-regen.png){.thumbnail} -Then click on `Regenerate`{.action} to confirm. +Then click on `Confirm`{.action}: ![token regenerate confirm](images/09-token-regen-confirm.png){.thumbnail} @@ -147,8 +192,12 @@ If you simply need to invalidate the token, you can delete it using the same act ![token delete](images/10-token-list-delete.png) +## Go further + +Additional information about the use of a token to manage AI Solutions using `ovhai` CLI can be found [here](/pages/public_cloud/ai_machine_learning/cli_13_howto_app_token_cli). + ## Feedback Please feel free to send us your questions, feedback and suggestions to help our team improve the service on the OVHcloud [Discord server](https://discord.gg/ovhcloud) -If you need training or technical assistance to implement our solutions, contact your sales representative or click on [this link](/links/professional-services) to get a quote and ask our Professional Services experts for a custom analysis of your project. +If you need training or technical assistance to implement our solutions, contact your sales representative or click on [this link](/links/professional-services) to get a quote and ask our Professional Services experts for a custom analysis of your project. \ No newline at end of file diff --git a/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/guide.en-us.md b/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/guide.en-us.md index 58492b8fe82..2f994c23ae5 100644 --- a/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/guide.en-us.md +++ b/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/guide.en-us.md @@ -1,7 +1,7 @@ --- title: AI Deploy - Accessing your app with tokens -excerpt: Discover how to create a scoped token and query your AI Deploy app -updated: 2023-04-04 +excerpt: Learn how to create scoped tokens to securely access your AI Deploy applications +updated: 2025-07-28 --- > [!primary] @@ -13,92 +13,137 @@ updated: 2023-04-04 This guide covers the creation of application tokens for AI Deploy. +This is particularly useful when you want to make your app accessible to others without sharing your username and password. Moreover, using tokens facilitates the integration of your app with other services or scripts, such as those using curl, allowing for a more automated and flexible interaction with your AI Deploy application. 
+ +In this tutorial, we will create and assign tokens to a basic AI Deploy app, running the [infrastructureascode/hello-world](https://hub.docker.com/r/infrastructureascode/hello-world) Docker image. + ## Requirements - a **Public cloud** project - access to the [OVHcloud Control Panel](/links/manager) -- a Running AI Deploy app (the deployed app in this guide uses the Docker image [infrastructureascode/hello-world](https://hub.docker.com/r/infrastructureascode/hello-world)) ## Instructions -### Adding labels to an App +By default, the following two labels are automatically added to each AI Deploy application: -Tokens are scoped based on labels added to your AI Deploy app. To scope a token you need to add a label to your AI Deploy app upon submission. +- `ovh/id` whose value is the ID of the AI Deploy app +- `ovh/type` whose value is `app`, the type of AI resource -![app add label](images/01-app-add-label.png){.thumbnail} +> [!primary] +> These labels are prefixed by `ovh/`, meaning these are reserved by the platform. These labels will be automatically overwritten by the platform if you attempt to redefine them during submission. They won't be displayed in the manager UI. +> -In this instance we add the label `group=A` to the AI Deploy app. A set of defaults labels is added to all AI Deploy apps: +In addition to these default labels, you can **create new ones** to further customize and secure your application access. -- `ovh/id` whose value is the ID of the AI Deploy app -- `ovh/type` with value `app`, the type of AI resource +### Adding labels to an app -> [!primary] -> Labels prefixed by `ovh/` are reserved by the platform, those labels are overriden upon submission if any are provided. +Tokens are scoped based on labels added to your AI Deploy app. To scope a token, you must add a label to your AI Deploy app. This can be done either during the app creation process or after the app is deployed. You can add multiple labels by repeating the following process. -All the labels of an AI Deploy app are listed on the AI Deploy app details under **Tags**: +#### Adding label during app creation -![app dashboard](images/02-app-dashboard.png){.thumbnail} +To add a label when creating an AI Deploy app, access the `Advanced Configuration`{.action} step in the app creation process. This section allows you to specify a custom Docker command, the mounted volumes, and **the app labels**. -### Generating tokens +From this last sub-section, you can add a key-value pair. The key is the label identifier (e.g., `group`), while the value is the corresponding value assigned to this key (e.g., `A`). In this tutorial, we use the example `group=A` as the label of the AI Deploy app: -From the **AI Deploy** page, you access the tokens management page by clicking the `Tokens`{.action} tab. +![app add label](images/01-app-add-label.png){.thumbnail} -![app list](images/03-app-list.png) +Once created, all the labels of an AI Deploy app are listed on the app details, under **Labels** field: -Once on the token management tab, simply click on `New Token`{.action}. +![app dashboard](images/02-app-dashboard.png){.thumbnail} -![token list new](images/04-token-list-new-2.png){.thumbnail} +#### Adding label to an existing app + +If your app is already deployed, you can still add or update labels at any time using the Control Panel (UI) or the `ovhai` CLI. + +> [!tabs] +> **Using the Control Panel (UI)** +>> +>> Navigate to the **AI Deploy** section where all your apps are listed. 
**Click the name of your app** to open its details page. Locate the **Labels** section. Enter the key-value pair and click `+`{.action} to add the label to your app. +>> +>> ![image](images/03-app-add-label-existing-app.png){.thumbnail} +>> +>> After saving, the added label will be visible in the **Labels** section of the app details. +>> +> **Using ovhai CLI** +>> +>> To follow this section, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. +>> +>> You can also add labels to an existing app using the `ovhai` CLI. Run the following command: +>> +>> ```console +>> ovhai app set-label +>> ``` +>> +>> Replace with the unique identifier of your app (found in the app details or by running `ovhai app ls`). And replace and with your desired key and value pair. For example: +>> +>> ```console +>> ovhai app set-label a8318623-8357-48b4-bd3b-648c3e343ec9 group A +>> ``` +>> +>> This command adds the label `group=A` to the app with ID `a8318623-8357-48b4-bd3b-648c3e343ec9`. +>> +>> You can verify app labels by running `ovhai app get `. Labels will be displayed at the top of app details, in the *Labels* field. -#### Read token +### Generating tokens -There are two types of roles that can be assigned to a token: +From the **AI Dashboard** page, you can access the tokens management page by clicking on the `Tokens`{.action} tab. From there, you can click on the `+ Create a token`{.action} button to create a new token: -- AI Platform - Read-only -- AI Platform - Operator +![token list new](images/04-token-list.png){.thumbnail} -A Read-only token will only grant you the right to query the deployed app while an Operator token would also allow you to manage the AI Deploy app itself. +There are two types of roles that can be assigned to a token: -Let us create a token for the AI Deploy apps matching the label `group=A` with read-only access in the GRA cluster. -To create an AI Deploy app token we need to specify 3 parameters: +- **AI Platform - Reader**: allows only querying the app +- **AI Platform - Operator**: allows querying and full lifecycle management (start/stop/delete) -- The token scope is specified through label selectors, and a token will be scoped over any app matching the set of label selectors. In this case `group=A` -- The token role: AI Training - Read-only -- The region (cluster in which are deployed the AI Deploy apps): GRA. +#### Read token -Fill out the form: +Let's create a token for the AI Deploy apps matching the label `group=A` with read-only access in the GRA (Gravelines) cluster. To do this, we will need to fill 4 parameters: ![token generation input read](images/05-token-gen-input-read.png){.thumbnail} -Click `Generate`{.action}. Upon success, you are redirected to the token list with the newly generated token displayed at the top: - -![token generation result](images/06-token-gen-result-read.png){.thumbnail} +- **Token name**: used for token identification, management only. +- **Label selector**: determines which apps the token applies to (e.g., `group=A`). +- **Role**: choose from: + - `AI Reader`: read-only + - `AI Operator`: read & manage +- **Region**: e.g., `GRA` (for Gravelines) -Save the token string for later use. +After completing the form, click `Generate`{.action} to confirm the token creation. > [!warning] -> The token is only displayed once, make sure to save it before leaving the page or you will need to regenerate the token. 
+> You will then receive the value of your new token, which you must **carefully save**, as its value is only displayed once. If you lose the token value, you will need to [regenerate it](#regenerating-a-token). + +You will then be redirected to the token list, where the newly generated token will be displayed at the top: + +![token generation result](images/06-token-gen-result-read.png){.thumbnail} This newly generated token provides read access over all resources tagged with the label `group=A` including the ones submitted after the creation of the token. -#### Operator token +If you prefer working from the command line, you can generate the same token using the `ovhai` CLI: -An operator token grants read access along with management access for the matching apps. This means that you can manage the AI Deploy app lifecycle (start/stop/delete) using either the CLI (more info [here](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli)) or the [AI Training API](https://gra.ai.cloud.ovh.net/) by providing this token. +```console +ovhai token create reader-token --role read --label-selector group=A +``` -This Operator token will be scoped on a specific AI Deploy app and we will use the default `ovh/id` label to do so (since it is reserved, there is only one AI Deploy app that can match this label selector). +#### Operator token -- Token scope: `ovh/id=4c4f6253-a059-424a-92da-5e06a12ddffd` -- The token role: AI Training - Operator -- The region: GRA. +An operator token grants read access along with management access for the matching apps. This allows you to manage the AI Deploy app lifecycle (start/stop/delete) using either the CLI (more info [here](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli)) or the [AI API](https://gra.ai.cloud.ovh.net/) by providing this token. ![token generation input operator](images/07-token-gen-input-op.png){.thumbnail} -Additional information about the use of a token to manage an AI Training resource can be found [here](/pages/public_cloud/ai_machine_learning/cli_13_howto_app_token_cli#use-the-app-token). +Equivalent CLI command is: + +```console +ovhai token create operator-token --role operator --label-selector group=A +``` + +You can also scope a token to a specific app using the `ovh/id` label and the app’s ID as its value. This label is added automatically by default as explained [above](#instructions) and, because it is reserved, it will uniquely match only one app. ### Using a token to query an AI Deploy app With the token we generated in the previous step, we will now query the app. For this demonstration, we deployed a simple Hello World app that always responds `Hello, World!`. -You can get the access URL of your app in the details of the AI Deploy app, above the **Tags**. +You can get the access URL of your app in the details of the AI Deploy app, above the **Labels**. #### Browser @@ -114,7 +159,7 @@ You now land on the exposed AI Deploy app service: #### Code integration -You can also directly CURL the AI Deploy app using the token as an `Authorization` header: +You can also use CURL to directly query the AI Deploy app using the token as an `Authorization` header: ```bash export TOKEN= @@ -137,7 +182,7 @@ From the list of tokens, click on the action menu and select `Regenerate`{.actio ![token regenerate](images/08-token-list-regen.png){.thumbnail} -Then click on `Regenerate`{.action} to confirm. 
+Then click on `Confirm`{.action}: ![token regenerate confirm](images/09-token-regen-confirm.png){.thumbnail} @@ -147,8 +192,12 @@ If you simply need to invalidate the token, you can delete it using the same act ![token delete](images/10-token-list-delete.png) +## Go further + +Additional information about the use of a token to manage AI Solutions using `ovhai` CLI can be found [here](/pages/public_cloud/ai_machine_learning/cli_13_howto_app_token_cli). + ## Feedback Please feel free to send us your questions, feedback and suggestions to help our team improve the service on the OVHcloud [Discord server](https://discord.gg/ovhcloud) -If you need training or technical assistance to implement our solutions, contact your sales representative or click on [this link](/links/professional-services) to get a quote and ask our Professional Services experts for a custom analysis of your project. +If you need training or technical assistance to implement our solutions, contact your sales representative or click on [this link](/links/professional-services) to get a quote and ask our Professional Services experts for a custom analysis of your project. \ No newline at end of file diff --git a/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/guide.es-es.md b/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/guide.es-es.md index 58492b8fe82..37218355b3f 100644 --- a/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/guide.es-es.md +++ b/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/guide.es-es.md @@ -1,7 +1,7 @@ --- -title: AI Deploy - Accessing your app with tokens -excerpt: Discover how to create a scoped token and query your AI Deploy app -updated: 2023-04-04 +title: AI Deploy - Accessing your app with tokens (EN) +excerpt: Learn how to create scoped tokens to securely access your AI Deploy applications +updated: 2025-07-28 --- > [!primary] @@ -13,92 +13,137 @@ updated: 2023-04-04 This guide covers the creation of application tokens for AI Deploy. +This is particularly useful when you want to make your app accessible to others without sharing your username and password. Moreover, using tokens facilitates the integration of your app with other services or scripts, such as those using curl, allowing for a more automated and flexible interaction with your AI Deploy application. + +In this tutorial, we will create and assign tokens to a basic AI Deploy app, running the [infrastructureascode/hello-world](https://hub.docker.com/r/infrastructureascode/hello-world) Docker image. + ## Requirements - a **Public cloud** project - access to the [OVHcloud Control Panel](/links/manager) -- a Running AI Deploy app (the deployed app in this guide uses the Docker image [infrastructureascode/hello-world](https://hub.docker.com/r/infrastructureascode/hello-world)) ## Instructions -### Adding labels to an App +By default, the following two labels are automatically added to each AI Deploy application: -Tokens are scoped based on labels added to your AI Deploy app. To scope a token you need to add a label to your AI Deploy app upon submission. +- `ovh/id` whose value is the ID of the AI Deploy app +- `ovh/type` whose value is `app`, the type of AI resource -![app add label](images/01-app-add-label.png){.thumbnail} +> [!primary] +> These labels are prefixed by `ovh/`, meaning these are reserved by the platform. These labels will be automatically overwritten by the platform if you attempt to redefine them during submission. They won't be displayed in the manager UI. 
+> -In this instance we add the label `group=A` to the AI Deploy app. A set of defaults labels is added to all AI Deploy apps: +In addition to these default labels, you can **create new ones** to further customize and secure your application access. -- `ovh/id` whose value is the ID of the AI Deploy app -- `ovh/type` with value `app`, the type of AI resource +### Adding labels to an app -> [!primary] -> Labels prefixed by `ovh/` are reserved by the platform, those labels are overriden upon submission if any are provided. +Tokens are scoped based on labels added to your AI Deploy app. To scope a token, you must add a label to your AI Deploy app. This can be done either during the app creation process or after the app is deployed. You can add multiple labels by repeating the following process. -All the labels of an AI Deploy app are listed on the AI Deploy app details under **Tags**: +#### Adding label during app creation -![app dashboard](images/02-app-dashboard.png){.thumbnail} +To add a label when creating an AI Deploy app, access the `Advanced Configuration`{.action} step in the app creation process. This section allows you to specify a custom Docker command, the mounted volumes, and **the app labels**. -### Generating tokens +From this last sub-section, you can add a key-value pair. The key is the label identifier (e.g., `group`), while the value is the corresponding value assigned to this key (e.g., `A`). In this tutorial, we use the example `group=A` as the label of the AI Deploy app: -From the **AI Deploy** page, you access the tokens management page by clicking the `Tokens`{.action} tab. +![app add label](images/01-app-add-label.png){.thumbnail} -![app list](images/03-app-list.png) +Once created, all the labels of an AI Deploy app are listed on the app details, under **Labels** field: -Once on the token management tab, simply click on `New Token`{.action}. +![app dashboard](images/02-app-dashboard.png){.thumbnail} -![token list new](images/04-token-list-new-2.png){.thumbnail} +#### Adding label to an existing app + +If your app is already deployed, you can still add or update labels at any time using the Control Panel (UI) or the `ovhai` CLI. + +> [!tabs] +> **Using the Control Panel (UI)** +>> +>> Navigate to the **AI Deploy** section where all your apps are listed. **Click the name of your app** to open its details page. Locate the **Labels** section. Enter the key-value pair and click `+`{.action} to add the label to your app. +>> +>> ![image](images/03-app-add-label-existing-app.png){.thumbnail} +>> +>> After saving, the added label will be visible in the **Labels** section of the app details. +>> +> **Using ovhai CLI** +>> +>> To follow this section, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. +>> +>> You can also add labels to an existing app using the `ovhai` CLI. Run the following command: +>> +>> ```console +>> ovhai app set-label +>> ``` +>> +>> Replace with the unique identifier of your app (found in the app details or by running `ovhai app ls`). And replace and with your desired key and value pair. For example: +>> +>> ```console +>> ovhai app set-label a8318623-8357-48b4-bd3b-648c3e343ec9 group A +>> ``` +>> +>> This command adds the label `group=A` to the app with ID `a8318623-8357-48b4-bd3b-648c3e343ec9`. +>> +>> You can verify app labels by running `ovhai app get `. Labels will be displayed at the top of app details, in the *Labels* field. 
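+>>
+>> For example, to check the labels of the app used in this tutorial:
+>>
+>> ```console
+>> ovhai app get a8318623-8357-48b4-bd3b-648c3e343ec9
+>> ```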
-#### Read token +### Generating tokens -There are two types of roles that can be assigned to a token: +From the **AI Dashboard** page, you can access the tokens management page by clicking on the `Tokens`{.action} tab. From there, you can click on the `+ Create a token`{.action} button to create a new token: -- AI Platform - Read-only -- AI Platform - Operator +![token list new](images/04-token-list.png){.thumbnail} -A Read-only token will only grant you the right to query the deployed app while an Operator token would also allow you to manage the AI Deploy app itself. +There are two types of roles that can be assigned to a token: -Let us create a token for the AI Deploy apps matching the label `group=A` with read-only access in the GRA cluster. -To create an AI Deploy app token we need to specify 3 parameters: +- **AI Platform - Reader**: allows only querying the app +- **AI Platform - Operator**: allows querying and full lifecycle management (start/stop/delete) -- The token scope is specified through label selectors, and a token will be scoped over any app matching the set of label selectors. In this case `group=A` -- The token role: AI Training - Read-only -- The region (cluster in which are deployed the AI Deploy apps): GRA. +#### Read token -Fill out the form: +Let's create a token for the AI Deploy apps matching the label `group=A` with read-only access in the GRA (Gravelines) cluster. To do this, we will need to fill 4 parameters: ![token generation input read](images/05-token-gen-input-read.png){.thumbnail} -Click `Generate`{.action}. Upon success, you are redirected to the token list with the newly generated token displayed at the top: - -![token generation result](images/06-token-gen-result-read.png){.thumbnail} +- **Token name**: used for token identification, management only. +- **Label selector**: determines which apps the token applies to (e.g., `group=A`). +- **Role**: choose from: + - `AI Reader`: read-only + - `AI Operator`: read & manage +- **Region**: e.g., `GRA` (for Gravelines) -Save the token string for later use. +After completing the form, click `Generate`{.action} to confirm the token creation. > [!warning] -> The token is only displayed once, make sure to save it before leaving the page or you will need to regenerate the token. +> You will then receive the value of your new token, which you must **carefully save**, as its value is only displayed once. If you lose the token value, you will need to [regenerate it](#regenerating-a-token). + +You will then be redirected to the token list, where the newly generated token will be displayed at the top: + +![token generation result](images/06-token-gen-result-read.png){.thumbnail} This newly generated token provides read access over all resources tagged with the label `group=A` including the ones submitted after the creation of the token. -#### Operator token +If you prefer working from the command line, you can generate the same token using the `ovhai` CLI: -An operator token grants read access along with management access for the matching apps. This means that you can manage the AI Deploy app lifecycle (start/stop/delete) using either the CLI (more info [here](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli)) or the [AI Training API](https://gra.ai.cloud.ovh.net/) by providing this token. 
+```console +ovhai token create reader-token --role read --label-selector group=A +``` -This Operator token will be scoped on a specific AI Deploy app and we will use the default `ovh/id` label to do so (since it is reserved, there is only one AI Deploy app that can match this label selector). +#### Operator token -- Token scope: `ovh/id=4c4f6253-a059-424a-92da-5e06a12ddffd` -- The token role: AI Training - Operator -- The region: GRA. +An operator token grants read access along with management access for the matching apps. This allows you to manage the AI Deploy app lifecycle (start/stop/delete) using either the CLI (more info [here](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli)) or the [AI API](https://gra.ai.cloud.ovh.net/) by providing this token. ![token generation input operator](images/07-token-gen-input-op.png){.thumbnail} -Additional information about the use of a token to manage an AI Training resource can be found [here](/pages/public_cloud/ai_machine_learning/cli_13_howto_app_token_cli#use-the-app-token). +Equivalent CLI command is: + +```console +ovhai token create operator-token --role operator --label-selector group=A +``` + +You can also scope a token to a specific app using the `ovh/id` label and the app’s ID as its value. This label is added automatically by default as explained [above](#instructions) and, because it is reserved, it will uniquely match only one app. ### Using a token to query an AI Deploy app With the token we generated in the previous step, we will now query the app. For this demonstration, we deployed a simple Hello World app that always responds `Hello, World!`. -You can get the access URL of your app in the details of the AI Deploy app, above the **Tags**. +You can get the access URL of your app in the details of the AI Deploy app, above the **Labels**. #### Browser @@ -114,7 +159,7 @@ You now land on the exposed AI Deploy app service: #### Code integration -You can also directly CURL the AI Deploy app using the token as an `Authorization` header: +You can also use CURL to directly query the AI Deploy app using the token as an `Authorization` header: ```bash export TOKEN= @@ -137,7 +182,7 @@ From the list of tokens, click on the action menu and select `Regenerate`{.actio ![token regenerate](images/08-token-list-regen.png){.thumbnail} -Then click on `Regenerate`{.action} to confirm. +Then click on `Confirm`{.action}: ![token regenerate confirm](images/09-token-regen-confirm.png){.thumbnail} @@ -147,8 +192,12 @@ If you simply need to invalidate the token, you can delete it using the same act ![token delete](images/10-token-list-delete.png) +## Go further + +Additional information about the use of a token to manage AI Solutions using `ovhai` CLI can be found [here](/pages/public_cloud/ai_machine_learning/cli_13_howto_app_token_cli). + ## Feedback Please feel free to send us your questions, feedback and suggestions to help our team improve the service on the OVHcloud [Discord server](https://discord.gg/ovhcloud) -If you need training or technical assistance to implement our solutions, contact your sales representative or click on [this link](/links/professional-services) to get a quote and ask our Professional Services experts for a custom analysis of your project. +If you need training or technical assistance to implement our solutions, contact your sales representative or click on [this link](/links/professional-services) to get a quote and ask our Professional Services experts for a custom analysis of your project. 
\ No newline at end of file diff --git a/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/guide.es-us.md b/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/guide.es-us.md index 58492b8fe82..37218355b3f 100644 --- a/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/guide.es-us.md +++ b/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/guide.es-us.md @@ -1,7 +1,7 @@ --- -title: AI Deploy - Accessing your app with tokens -excerpt: Discover how to create a scoped token and query your AI Deploy app -updated: 2023-04-04 +title: AI Deploy - Accessing your app with tokens (EN) +excerpt: Learn how to create scoped tokens to securely access your AI Deploy applications +updated: 2025-07-28 --- > [!primary] @@ -13,92 +13,137 @@ updated: 2023-04-04 This guide covers the creation of application tokens for AI Deploy. +This is particularly useful when you want to make your app accessible to others without sharing your username and password. Moreover, using tokens facilitates the integration of your app with other services or scripts, such as those using curl, allowing for a more automated and flexible interaction with your AI Deploy application. + +In this tutorial, we will create and assign tokens to a basic AI Deploy app, running the [infrastructureascode/hello-world](https://hub.docker.com/r/infrastructureascode/hello-world) Docker image. + ## Requirements - a **Public cloud** project - access to the [OVHcloud Control Panel](/links/manager) -- a Running AI Deploy app (the deployed app in this guide uses the Docker image [infrastructureascode/hello-world](https://hub.docker.com/r/infrastructureascode/hello-world)) ## Instructions -### Adding labels to an App +By default, the following two labels are automatically added to each AI Deploy application: -Tokens are scoped based on labels added to your AI Deploy app. To scope a token you need to add a label to your AI Deploy app upon submission. +- `ovh/id` whose value is the ID of the AI Deploy app +- `ovh/type` whose value is `app`, the type of AI resource -![app add label](images/01-app-add-label.png){.thumbnail} +> [!primary] +> These labels are prefixed by `ovh/`, meaning these are reserved by the platform. These labels will be automatically overwritten by the platform if you attempt to redefine them during submission. They won't be displayed in the manager UI. +> -In this instance we add the label `group=A` to the AI Deploy app. A set of defaults labels is added to all AI Deploy apps: +In addition to these default labels, you can **create new ones** to further customize and secure your application access. -- `ovh/id` whose value is the ID of the AI Deploy app -- `ovh/type` with value `app`, the type of AI resource +### Adding labels to an app -> [!primary] -> Labels prefixed by `ovh/` are reserved by the platform, those labels are overriden upon submission if any are provided. +Tokens are scoped based on labels added to your AI Deploy app. To scope a token, you must add a label to your AI Deploy app. This can be done either during the app creation process or after the app is deployed. You can add multiple labels by repeating the following process. -All the labels of an AI Deploy app are listed on the AI Deploy app details under **Tags**: +#### Adding label during app creation -![app dashboard](images/02-app-dashboard.png){.thumbnail} +To add a label when creating an AI Deploy app, access the `Advanced Configuration`{.action} step in the app creation process. 
This section allows you to specify a custom Docker command, the mounted volumes, and **the app labels**.
-### Generating tokens
+From this last sub-section, you can add a key-value pair. The key is the label identifier (e.g., `group`), while the value is the corresponding value assigned to this key (e.g., `A`). In this tutorial, we use the example `group=A` as the label of the AI Deploy app:
-From the **AI Deploy** page, you access the tokens management page by clicking the `Tokens`{.action} tab.
+![app add label](images/01-app-add-label.png){.thumbnail}
-![app list](images/03-app-list.png)
+Once created, all the labels of an AI Deploy app are listed on the app details, under **Labels** field:
-Once on the token management tab, simply click on `New Token`{.action}.
+![app dashboard](images/02-app-dashboard.png){.thumbnail}
-![token list new](images/04-token-list-new-2.png){.thumbnail}
+#### Adding label to an existing app
+
+If your app is already deployed, you can still add or update labels at any time using the Control Panel (UI) or the `ovhai` CLI.
+
+> [!tabs]
+> **Using the Control Panel (UI)**
+>>
+>> Navigate to the **AI Deploy** section where all your apps are listed. **Click the name of your app** to open its details page. Locate the **Labels** section. Enter the key-value pair and click `+`{.action} to add the label to your app.
+>>
+>> ![image](images/03-app-add-label-existing-app.png){.thumbnail}
+>>
+>> After saving, the added label will be visible in the **Labels** section of the app details.
+>>
+> **Using ovhai CLI**
+>>
+>> To follow this section, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance.
+>>
+>> You can also add labels to an existing app using the `ovhai` CLI. Run the following command:
+>>
+>> ```console
+>> ovhai app set-label <app-id> <label-key> <label-value>
+>> ```
+>>
+>> Replace `<app-id>` with the unique identifier of your app (found in the app details or by running `ovhai app ls`). Then replace `<label-key>` and `<label-value>` with your desired key and value pair. For example:
+>>
+>> ```console
+>> ovhai app set-label a8318623-8357-48b4-bd3b-648c3e343ec9 group A
+>> ```
+>>
+>> This command adds the label `group=A` to the app with ID `a8318623-8357-48b4-bd3b-648c3e343ec9`.
+>>
+>> You can verify app labels by running `ovhai app get <app-id>`. Labels will be displayed at the top of app details, in the *Labels* field.
-#### Read token
+### Generating tokens
-There are two types of roles that can be assigned to a token:
+From the **AI Dashboard** page, you can access the tokens management page by clicking on the `Tokens`{.action} tab. From there, you can click on the `+ Create a token`{.action} button to create a new token:
-- AI Platform - Read-only
-- AI Platform - Operator
+![token list new](images/04-token-list.png){.thumbnail}
-A Read-only token will only grant you the right to query the deployed app while an Operator token would also allow you to manage the AI Deploy app itself.
+There are two types of roles that can be assigned to a token:
-Let us create a token for the AI Deploy apps matching the label `group=A` with read-only access in the GRA cluster.
-To create an AI Deploy app token we need to specify 3 parameters:
+- **AI Platform - Reader**: allows only querying the app
+- **AI Platform - Operator**: allows querying and full lifecycle management (start/stop/delete)
-- The token scope is specified through label selectors, and a token will be scoped over any app matching the set of label selectors.
In this case `group=A` -- The token role: AI Training - Read-only -- The region (cluster in which are deployed the AI Deploy apps): GRA. +#### Read token -Fill out the form: +Let's create a token for the AI Deploy apps matching the label `group=A` with read-only access in the GRA (Gravelines) cluster. To do this, we will need to fill 4 parameters: ![token generation input read](images/05-token-gen-input-read.png){.thumbnail} -Click `Generate`{.action}. Upon success, you are redirected to the token list with the newly generated token displayed at the top: - -![token generation result](images/06-token-gen-result-read.png){.thumbnail} +- **Token name**: used for token identification, management only. +- **Label selector**: determines which apps the token applies to (e.g., `group=A`). +- **Role**: choose from: + - `AI Reader`: read-only + - `AI Operator`: read & manage +- **Region**: e.g., `GRA` (for Gravelines) -Save the token string for later use. +After completing the form, click `Generate`{.action} to confirm the token creation. > [!warning] -> The token is only displayed once, make sure to save it before leaving the page or you will need to regenerate the token. +> You will then receive the value of your new token, which you must **carefully save**, as its value is only displayed once. If you lose the token value, you will need to [regenerate it](#regenerating-a-token). + +You will then be redirected to the token list, where the newly generated token will be displayed at the top: + +![token generation result](images/06-token-gen-result-read.png){.thumbnail} This newly generated token provides read access over all resources tagged with the label `group=A` including the ones submitted after the creation of the token. -#### Operator token +If you prefer working from the command line, you can generate the same token using the `ovhai` CLI: -An operator token grants read access along with management access for the matching apps. This means that you can manage the AI Deploy app lifecycle (start/stop/delete) using either the CLI (more info [here](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli)) or the [AI Training API](https://gra.ai.cloud.ovh.net/) by providing this token. +```console +ovhai token create reader-token --role read --label-selector group=A +``` -This Operator token will be scoped on a specific AI Deploy app and we will use the default `ovh/id` label to do so (since it is reserved, there is only one AI Deploy app that can match this label selector). +#### Operator token -- Token scope: `ovh/id=4c4f6253-a059-424a-92da-5e06a12ddffd` -- The token role: AI Training - Operator -- The region: GRA. +An operator token grants read access along with management access for the matching apps. This allows you to manage the AI Deploy app lifecycle (start/stop/delete) using either the CLI (more info [here](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli)) or the [AI API](https://gra.ai.cloud.ovh.net/) by providing this token. ![token generation input operator](images/07-token-gen-input-op.png){.thumbnail} -Additional information about the use of a token to manage an AI Training resource can be found [here](/pages/public_cloud/ai_machine_learning/cli_13_howto_app_token_cli#use-the-app-token). +Equivalent CLI command is: + +```console +ovhai token create operator-token --role operator --label-selector group=A +``` + +You can also scope a token to a specific app using the `ovh/id` label and the app’s ID as its value. 
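+
+For example, reusing the same CLI pattern shown above (a minimal sketch; the token name `single-app-token` and the app ID are illustrative and should be replaced with your own values), a token scoped to one specific app could be created like this:
+
+```console
+ovhai token create single-app-token --role operator --label-selector ovh/id=a8318623-8357-48b4-bd3b-648c3e343ec9
+```
+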
This label is added automatically by default as explained [above](#instructions) and, because it is reserved, it will uniquely match only one app. ### Using a token to query an AI Deploy app With the token we generated in the previous step, we will now query the app. For this demonstration, we deployed a simple Hello World app that always responds `Hello, World!`. -You can get the access URL of your app in the details of the AI Deploy app, above the **Tags**. +You can get the access URL of your app in the details of the AI Deploy app, above the **Labels**. #### Browser @@ -114,7 +159,7 @@ You now land on the exposed AI Deploy app service: #### Code integration -You can also directly CURL the AI Deploy app using the token as an `Authorization` header: +You can also use CURL to directly query the AI Deploy app using the token as an `Authorization` header: ```bash export TOKEN= @@ -137,7 +182,7 @@ From the list of tokens, click on the action menu and select `Regenerate`{.actio ![token regenerate](images/08-token-list-regen.png){.thumbnail} -Then click on `Regenerate`{.action} to confirm. +Then click on `Confirm`{.action}: ![token regenerate confirm](images/09-token-regen-confirm.png){.thumbnail} @@ -147,8 +192,12 @@ If you simply need to invalidate the token, you can delete it using the same act ![token delete](images/10-token-list-delete.png) +## Go further + +Additional information about the use of a token to manage AI Solutions using `ovhai` CLI can be found [here](/pages/public_cloud/ai_machine_learning/cli_13_howto_app_token_cli). + ## Feedback Please feel free to send us your questions, feedback and suggestions to help our team improve the service on the OVHcloud [Discord server](https://discord.gg/ovhcloud) -If you need training or technical assistance to implement our solutions, contact your sales representative or click on [this link](/links/professional-services) to get a quote and ask our Professional Services experts for a custom analysis of your project. +If you need training or technical assistance to implement our solutions, contact your sales representative or click on [this link](/links/professional-services) to get a quote and ask our Professional Services experts for a custom analysis of your project. \ No newline at end of file diff --git a/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/guide.fr-ca.md b/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/guide.fr-ca.md index 83169b60204..345678b5b45 100644 --- a/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/guide.fr-ca.md +++ b/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/guide.fr-ca.md @@ -1,6 +1,6 @@ --- title: AI Deploy - Accéder à vos apps via tokens (EN) -excerpt: Découvrez comment créer un token pour votre app AI Deploy +excerpt: Découvrez comment créer des tokens étendus pour accéder en toute sécurité à vos applications AI Deploy updated: 2023-04-04 --- @@ -13,92 +13,137 @@ updated: 2023-04-04 This guide covers the creation of application tokens for AI Deploy. +This is particularly useful when you want to make your app accessible to others without sharing your username and password. Moreover, using tokens facilitates the integration of your app with other services or scripts, such as those using curl, allowing for a more automated and flexible interaction with your AI Deploy application. 
+ +In this tutorial, we will create and assign tokens to a basic AI Deploy app, running the [infrastructureascode/hello-world](https://hub.docker.com/r/infrastructureascode/hello-world) Docker image. + ## Requirements - a **Public cloud** project - access to the [OVHcloud Control Panel](/links/manager) -- a Running AI Deploy app (the deployed app in this guide uses the Docker image [infrastructureascode/hello-world](https://hub.docker.com/r/infrastructureascode/hello-world)) ## Instructions -### Adding labels to an App +By default, the following two labels are automatically added to each AI Deploy application: -Tokens are scoped based on labels added to your AI Deploy app. To scope a token you need to add a label to your AI Deploy app upon submission. +- `ovh/id` whose value is the ID of the AI Deploy app +- `ovh/type` whose value is `app`, the type of AI resource -![app add label](images/01-app-add-label.png){.thumbnail} +> [!primary] +> These labels are prefixed by `ovh/`, meaning these are reserved by the platform. These labels will be automatically overwritten by the platform if you attempt to redefine them during submission. They won't be displayed in the manager UI. +> -In this instance we add the label `group=A` to the AI Deploy app. A set of defaults labels is added to all AI Deploy apps: +In addition to these default labels, you can **create new ones** to further customize and secure your application access. -- `ovh/id` whose value is the ID of the AI Deploy app -- `ovh/type` with value `app`, the type of AI resource +### Adding labels to an app -> [!primary] -> Labels prefixed by `ovh/` are reserved by the platform, those labels are overriden upon submission if any are provided. +Tokens are scoped based on labels added to your AI Deploy app. To scope a token, you must add a label to your AI Deploy app. This can be done either during the app creation process or after the app is deployed. You can add multiple labels by repeating the following process. -All the labels of an AI Deploy app are listed on the AI Deploy app details under **Tags**: +#### Adding label during app creation -![app dashboard](images/02-app-dashboard.png){.thumbnail} +To add a label when creating an AI Deploy app, access the `Advanced Configuration`{.action} step in the app creation process. This section allows you to specify a custom Docker command, the mounted volumes, and **the app labels**. -### Generating tokens +From this last sub-section, you can add a key-value pair. The key is the label identifier (e.g., `group`), while the value is the corresponding value assigned to this key (e.g., `A`). In this tutorial, we use the example `group=A` as the label of the AI Deploy app: -From the **AI Deploy** page, you access the tokens management page by clicking the `Tokens`{.action} tab. +![app add label](images/01-app-add-label.png){.thumbnail} -![app list](images/03-app-list.png) +Once created, all the labels of an AI Deploy app are listed on the app details, under **Labels** field: -Once on the token management tab, simply click on `New Token`{.action}. +![app dashboard](images/02-app-dashboard.png){.thumbnail} -![token list new](images/04-token-list-new-2.png){.thumbnail} +#### Adding label to an existing app + +If your app is already deployed, you can still add or update labels at any time using the Control Panel (UI) or the `ovhai` CLI. + +> [!tabs] +> **Using the Control Panel (UI)** +>> +>> Navigate to the **AI Deploy** section where all your apps are listed. 
**Click the name of your app** to open its details page. Locate the **Labels** section. Enter the key-value pair and click `+`{.action} to add the label to your app.
+>>
+>> ![image](images/03-app-add-label-existing-app.png){.thumbnail}
+>>
+>> After saving, the added label will be visible in the **Labels** section of the app details.
+>>
+> **Using ovhai CLI**
+>>
+>> To follow this section, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance.
+>>
+>> You can also add labels to an existing app using the `ovhai` CLI. Run the following command:
+>>
+>> ```console
+>> ovhai app set-label <app-id> <label-key> <label-value>
+>> ```
+>>
+>> Replace `<app-id>` with the unique identifier of your app (found in the app details or by running `ovhai app ls`). Then replace `<label-key>` and `<label-value>` with your desired key and value pair. For example:
+>>
+>> ```console
+>> ovhai app set-label a8318623-8357-48b4-bd3b-648c3e343ec9 group A
+>> ```
+>>
+>> This command adds the label `group=A` to the app with ID `a8318623-8357-48b4-bd3b-648c3e343ec9`.
+>>
+>> You can verify app labels by running `ovhai app get <app-id>`. Labels will be displayed at the top of app details, in the *Labels* field.
-#### Read token
+### Generating tokens
-There are two types of roles that can be assigned to a token:
+From the **AI Dashboard** page, you can access the tokens management page by clicking on the `Tokens`{.action} tab. From there, you can click on the `+ Create a token`{.action} button to create a new token:
-- AI Platform - Read-only
-- AI Platform - Operator
+![token list new](images/04-token-list.png){.thumbnail}
-A Read-only token will only grant you the right to query the deployed app while an Operator token would also allow you to manage the AI Deploy app itself.
+There are two types of roles that can be assigned to a token:
-Let us create a token for the AI Deploy apps matching the label `group=A` with read-only access in the GRA cluster.
-To create an AI Deploy app token we need to specify 3 parameters:
+- **AI Platform - Reader**: allows only querying the app
+- **AI Platform - Operator**: allows querying and full lifecycle management (start/stop/delete)
-- The token scope is specified through label selectors, and a token will be scoped over any app matching the set of label selectors. In this case `group=A`
-- The token role: AI Training - Read-only
-- The region (cluster in which are deployed the AI Deploy apps): GRA.
+#### Read token
-Fill out the form:
+Let's create a token for the AI Deploy apps matching the label `group=A` with read-only access in the GRA (Gravelines) cluster. To do this, we will need to fill 4 parameters:
![token generation input read](images/05-token-gen-input-read.png){.thumbnail}
-Click `Generate`{.action}. Upon success, you are redirected to the token list with the newly generated token displayed at the top:
-
-![token generation result](images/06-token-gen-result-read.png){.thumbnail}
+- **Token name**: used for token identification, management only.
+- **Label selector**: determines which apps the token applies to (e.g., `group=A`).
+- **Role**: choose from:
+    - `AI Reader`: read-only
+    - `AI Operator`: read & manage
+- **Region**: e.g., `GRA` (for Gravelines)
-Save the token string for later use.
+After completing the form, click `Generate`{.action} to confirm the token creation.
> [!warning]
-> The token is only displayed once, make sure to save it before leaving the page or you will need to regenerate the token.
+> You will then receive the value of your new token, which you must **carefully save**, as its value is only displayed once. If you lose the token value, you will need to [regenerate it](#regenerating-a-token). + +You will then be redirected to the token list, where the newly generated token will be displayed at the top: + +![token generation result](images/06-token-gen-result-read.png){.thumbnail} This newly generated token provides read access over all resources tagged with the label `group=A` including the ones submitted after the creation of the token. -#### Operator token +If you prefer working from the command line, you can generate the same token using the `ovhai` CLI: -An operator token grants read access along with management access for the matching apps. This means that you can manage the AI Deploy app lifecycle (start/stop/delete) using either the CLI (more info [here](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli)) or the [AI Training API](https://gra.ai.cloud.ovh.net/) by providing this token. +```console +ovhai token create reader-token --role read --label-selector group=A +``` -This Operator token will be scoped on a specific AI Deploy app and we will use the default `ovh/id` label to do so (since it is reserved, there is only one AI Deploy app that can match this label selector). +#### Operator token -- Token scope: `ovh/id=4c4f6253-a059-424a-92da-5e06a12ddffd` -- The token role: AI Training - Operator -- The region: GRA. +An operator token grants read access along with management access for the matching apps. This allows you to manage the AI Deploy app lifecycle (start/stop/delete) using either the CLI (more info [here](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli)) or the [AI API](https://gra.ai.cloud.ovh.net/) by providing this token. ![token generation input operator](images/07-token-gen-input-op.png){.thumbnail} -Additional information about the use of a token to manage an AI Training resource can be found [here](/pages/public_cloud/ai_machine_learning/cli_13_howto_app_token_cli#use-the-app-token). +Equivalent CLI command is: + +```console +ovhai token create operator-token --role operator --label-selector group=A +``` + +You can also scope a token to a specific app using the `ovh/id` label and the app’s ID as its value. This label is added automatically by default as explained [above](#instructions) and, because it is reserved, it will uniquely match only one app. ### Using a token to query an AI Deploy app With the token we generated in the previous step, we will now query the app. For this demonstration, we deployed a simple Hello World app that always responds `Hello, World!`. -You can get the access URL of your app in the details of the AI Deploy app, above the **Tags**. +You can get the access URL of your app in the details of the AI Deploy app, above the **Labels**. #### Browser @@ -114,7 +159,7 @@ You now land on the exposed AI Deploy app service: #### Code integration -You can also directly CURL the AI Deploy app using the token as an `Authorization` header: +You can also use CURL to directly query the AI Deploy app using the token as an `Authorization` header: ```bash export TOKEN= @@ -137,7 +182,7 @@ From the list of tokens, click on the action menu and select `Regenerate`{.actio ![token regenerate](images/08-token-list-regen.png){.thumbnail} -Then click on `Regenerate`{.action} to confirm. 
+Then click on `Confirm`{.action}: ![token regenerate confirm](images/09-token-regen-confirm.png){.thumbnail} @@ -147,8 +192,12 @@ If you simply need to invalidate the token, you can delete it using the same act ![token delete](images/10-token-list-delete.png) +## Go further + +Additional information about the use of a token to manage AI Solutions using `ovhai` CLI can be found [here](/pages/public_cloud/ai_machine_learning/cli_13_howto_app_token_cli). + ## Feedback Please feel free to send us your questions, feedback and suggestions to help our team improve the service on the OVHcloud [Discord server](https://discord.gg/ovhcloud) -If you need training or technical assistance to implement our solutions, contact your sales representative or click on [this link](/links/professional-services) to get a quote and ask our Professional Services experts for a custom analysis of your project. +If you need training or technical assistance to implement our solutions, contact your sales representative or click on [this link](/links/professional-services) to get a quote and ask our Professional Services experts for a custom analysis of your project. \ No newline at end of file diff --git a/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/guide.fr-fr.md b/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/guide.fr-fr.md index 83169b60204..7122835dfa6 100644 --- a/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/guide.fr-fr.md +++ b/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/guide.fr-fr.md @@ -13,92 +13,137 @@ updated: 2023-04-04 This guide covers the creation of application tokens for AI Deploy. +This is particularly useful when you want to make your app accessible to others without sharing your username and password. Moreover, using tokens facilitates the integration of your app with other services or scripts, such as those using curl, allowing for a more automated and flexible interaction with your AI Deploy application. + +In this tutorial, we will create and assign tokens to a basic AI Deploy app, running the [infrastructureascode/hello-world](https://hub.docker.com/r/infrastructureascode/hello-world) Docker image. + ## Requirements - a **Public cloud** project - access to the [OVHcloud Control Panel](/links/manager) -- a Running AI Deploy app (the deployed app in this guide uses the Docker image [infrastructureascode/hello-world](https://hub.docker.com/r/infrastructureascode/hello-world)) ## Instructions -### Adding labels to an App +By default, the following two labels are automatically added to each AI Deploy application: -Tokens are scoped based on labels added to your AI Deploy app. To scope a token you need to add a label to your AI Deploy app upon submission. +- `ovh/id` whose value is the ID of the AI Deploy app +- `ovh/type` whose value is `app`, the type of AI resource -![app add label](images/01-app-add-label.png){.thumbnail} +> [!primary] +> These labels are prefixed by `ovh/`, meaning these are reserved by the platform. These labels will be automatically overwritten by the platform if you attempt to redefine them during submission. They won't be displayed in the manager UI. +> -In this instance we add the label `group=A` to the AI Deploy app. A set of defaults labels is added to all AI Deploy apps: +In addition to these default labels, you can **create new ones** to further customize and secure your application access. 
-- `ovh/id` whose value is the ID of the AI Deploy app
-- `ovh/type` with value `app`, the type of AI resource
+### Adding labels to an app
-> [!primary]
-> Labels prefixed by `ovh/` are reserved by the platform, those labels are overriden upon submission if any are provided.
+Tokens are scoped based on labels added to your AI Deploy app. To scope a token, you must add a label to your AI Deploy app. This can be done either during the app creation process or after the app is deployed. You can add multiple labels by repeating the following process.
-All the labels of an AI Deploy app are listed on the AI Deploy app details under **Tags**:
+#### Adding label during app creation
-![app dashboard](images/02-app-dashboard.png){.thumbnail}
+To add a label when creating an AI Deploy app, access the `Advanced Configuration`{.action} step in the app creation process. This section allows you to specify a custom Docker command, the mounted volumes, and **the app labels**.
-### Generating tokens
+From this last sub-section, you can add a key-value pair. The key is the label identifier (e.g., `group`), while the value is the corresponding value assigned to this key (e.g., `A`). In this tutorial, we use the example `group=A` as the label of the AI Deploy app:
-From the **AI Deploy** page, you access the tokens management page by clicking the `Tokens`{.action} tab.
+![app add label](images/01-app-add-label.png){.thumbnail}
-![app list](images/03-app-list.png)
+Once created, all the labels of an AI Deploy app are listed on the app details, under **Labels** field:
-Once on the token management tab, simply click on `New Token`{.action}.
+![app dashboard](images/02-app-dashboard.png){.thumbnail}
-![token list new](images/04-token-list-new-2.png){.thumbnail}
+#### Adding label to an existing app
+
+If your app is already deployed, you can still add or update labels at any time using the Control Panel (UI) or the `ovhai` CLI.
+
+> [!tabs]
+> **Using the Control Panel (UI)**
+>>
+>> Navigate to the **AI Deploy** section where all your apps are listed. **Click the name of your app** to open its details page. Locate the **Labels** section. Enter the key-value pair and click `+`{.action} to add the label to your app.
+>>
+>> ![image](images/03-app-add-label-existing-app.png){.thumbnail}
+>>
+>> After saving, the added label will be visible in the **Labels** section of the app details.
+>>
+> **Using ovhai CLI**
+>>
+>> To follow this section, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance.
+>>
+>> You can also add labels to an existing app using the `ovhai` CLI. Run the following command:
+>>
+>> ```console
+>> ovhai app set-label <app-id> <label-key> <label-value>
+>> ```
+>>
+>> Replace `<app-id>` with the unique identifier of your app (found in the app details or by running `ovhai app ls`). Then replace `<label-key>` and `<label-value>` with your desired key and value pair. For example:
+>>
+>> ```console
+>> ovhai app set-label a8318623-8357-48b4-bd3b-648c3e343ec9 group A
+>> ```
+>>
+>> This command adds the label `group=A` to the app with ID `a8318623-8357-48b4-bd3b-648c3e343ec9`.
+>>
+>> You can verify app labels by running `ovhai app get <app-id>`. Labels will be displayed at the top of app details, in the *Labels* field.
-#### Read token
+### Generating tokens
-There are two types of roles that can be assigned to a token:
+From the **AI Dashboard** page, you can access the tokens management page by clicking on the `Tokens`{.action} tab.
From there, you can click on the `+ Create a token`{.action} button to create a new token: -- AI Platform - Read-only -- AI Platform - Operator +![token list new](images/04-token-list.png){.thumbnail} -A Read-only token will only grant you the right to query the deployed app while an Operator token would also allow you to manage the AI Deploy app itself. +There are two types of roles that can be assigned to a token: -Let us create a token for the AI Deploy apps matching the label `group=A` with read-only access in the GRA cluster. -To create an AI Deploy app token we need to specify 3 parameters: +- **AI Platform - Reader**: allows only querying the app +- **AI Platform - Operator**: allows querying and full lifecycle management (start/stop/delete) -- The token scope is specified through label selectors, and a token will be scoped over any app matching the set of label selectors. In this case `group=A` -- The token role: AI Training - Read-only -- The region (cluster in which are deployed the AI Deploy apps): GRA. +#### Read token -Fill out the form: +Let's create a token for the AI Deploy apps matching the label `group=A` with read-only access in the GRA (Gravelines) cluster. To do this, we will need to fill 4 parameters: ![token generation input read](images/05-token-gen-input-read.png){.thumbnail} -Click `Generate`{.action}. Upon success, you are redirected to the token list with the newly generated token displayed at the top: - -![token generation result](images/06-token-gen-result-read.png){.thumbnail} +- **Token name**: used for token identification, management only. +- **Label selector**: determines which apps the token applies to (e.g., `group=A`). +- **Role**: choose from: + - `AI Reader`: read-only + - `AI Operator`: read & manage +- **Region**: e.g., `GRA` (for Gravelines) -Save the token string for later use. +After completing the form, click `Generate`{.action} to confirm the token creation. > [!warning] -> The token is only displayed once, make sure to save it before leaving the page or you will need to regenerate the token. +> You will then receive the value of your new token, which you must **carefully save**, as its value is only displayed once. If you lose the token value, you will need to [regenerate it](#regenerating-a-token). + +You will then be redirected to the token list, where the newly generated token will be displayed at the top: + +![token generation result](images/06-token-gen-result-read.png){.thumbnail} This newly generated token provides read access over all resources tagged with the label `group=A` including the ones submitted after the creation of the token. -#### Operator token +If you prefer working from the command line, you can generate the same token using the `ovhai` CLI: -An operator token grants read access along with management access for the matching apps. This means that you can manage the AI Deploy app lifecycle (start/stop/delete) using either the CLI (more info [here](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli)) or the [AI Training API](https://gra.ai.cloud.ovh.net/) by providing this token. +```console +ovhai token create reader-token --role read --label-selector group=A +``` -This Operator token will be scoped on a specific AI Deploy app and we will use the default `ovh/id` label to do so (since it is reserved, there is only one AI Deploy app that can match this label selector). 
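+
+Before creating a token, you may want to double-check that the target apps really carry the expected label. A minimal sketch reusing only the commands shown earlier in this guide (the app ID is illustrative):
+
+```console
+ovhai app ls
+ovhai app get a8318623-8357-48b4-bd3b-648c3e343ec9
+```
+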
+#### Operator token -- Token scope: `ovh/id=4c4f6253-a059-424a-92da-5e06a12ddffd` -- The token role: AI Training - Operator -- The region: GRA. +An operator token grants read access along with management access for the matching apps. This allows you to manage the AI Deploy app lifecycle (start/stop/delete) using either the CLI (more info [here](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli)) or the [AI API](https://gra.ai.cloud.ovh.net/) by providing this token. ![token generation input operator](images/07-token-gen-input-op.png){.thumbnail} -Additional information about the use of a token to manage an AI Training resource can be found [here](/pages/public_cloud/ai_machine_learning/cli_13_howto_app_token_cli#use-the-app-token). +Equivalent CLI command is: + +```console +ovhai token create operator-token --role operator --label-selector group=A +``` + +You can also scope a token to a specific app using the `ovh/id` label and the app’s ID as its value. This label is added automatically by default as explained [above](#instructions) and, because it is reserved, it will uniquely match only one app. ### Using a token to query an AI Deploy app With the token we generated in the previous step, we will now query the app. For this demonstration, we deployed a simple Hello World app that always responds `Hello, World!`. -You can get the access URL of your app in the details of the AI Deploy app, above the **Tags**. +You can get the access URL of your app in the details of the AI Deploy app, above the **Labels**. #### Browser @@ -114,7 +159,7 @@ You now land on the exposed AI Deploy app service: #### Code integration -You can also directly CURL the AI Deploy app using the token as an `Authorization` header: +You can also use CURL to directly query the AI Deploy app using the token as an `Authorization` header: ```bash export TOKEN= @@ -137,7 +182,7 @@ From the list of tokens, click on the action menu and select `Regenerate`{.actio ![token regenerate](images/08-token-list-regen.png){.thumbnail} -Then click on `Regenerate`{.action} to confirm. +Then click on `Confirm`{.action}: ![token regenerate confirm](images/09-token-regen-confirm.png){.thumbnail} @@ -147,8 +192,12 @@ If you simply need to invalidate the token, you can delete it using the same act ![token delete](images/10-token-list-delete.png) +## Go further + +Additional information about the use of a token to manage AI Solutions using `ovhai` CLI can be found [here](/pages/public_cloud/ai_machine_learning/cli_13_howto_app_token_cli). + ## Feedback Please feel free to send us your questions, feedback and suggestions to help our team improve the service on the OVHcloud [Discord server](https://discord.gg/ovhcloud) -If you need training or technical assistance to implement our solutions, contact your sales representative or click on [this link](/links/professional-services) to get a quote and ask our Professional Services experts for a custom analysis of your project. +If you need training or technical assistance to implement our solutions, contact your sales representative or click on [this link](/links/professional-services) to get a quote and ask our Professional Services experts for a custom analysis of your project. 
\ No newline at end of file diff --git a/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/images/01-app-add-label.png b/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/images/01-app-add-label.png index d15996c19bc..f4dd248a9dd 100644 Binary files a/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/images/01-app-add-label.png and b/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/images/01-app-add-label.png differ diff --git a/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/images/02-app-dashboard.png b/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/images/02-app-dashboard.png index 166f67aa3a4..0672f3936ca 100644 Binary files a/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/images/02-app-dashboard.png and b/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/images/02-app-dashboard.png differ diff --git a/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/images/03-app-add-label-existing-app.png b/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/images/03-app-add-label-existing-app.png new file mode 100644 index 00000000000..1094eb1bfe6 Binary files /dev/null and b/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/images/03-app-add-label-existing-app.png differ diff --git a/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/images/03-app-list.png b/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/images/03-app-list.png deleted file mode 100644 index d7aacebecda..00000000000 Binary files a/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/images/03-app-list.png and /dev/null differ diff --git a/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/images/04-token-list-new-2.png b/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/images/04-token-list-new-2.png deleted file mode 100644 index c14aeb17b47..00000000000 Binary files a/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/images/04-token-list-new-2.png and /dev/null differ diff --git a/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/images/04-token-list.png b/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/images/04-token-list.png new file mode 100644 index 00000000000..a543c7772f1 Binary files /dev/null and b/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/images/04-token-list.png differ diff --git a/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/images/05-token-gen-input-read.png b/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/images/05-token-gen-input-read.png index 6ca2f233a5c..ca9e34aa5ed 100644 Binary files a/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/images/05-token-gen-input-read.png and b/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/images/05-token-gen-input-read.png differ diff --git a/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/images/07-token-gen-input-op.png b/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/images/07-token-gen-input-op.png index 173dc9d3d0f..3482ead7500 100644 Binary files a/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/images/07-token-gen-input-op.png and b/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/images/07-token-gen-input-op.png differ diff --git a/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/images/08-token-list-regen.png 
b/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/images/08-token-list-regen.png index 340c4b72c55..afe5224ac1b 100644 Binary files a/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/images/08-token-list-regen.png and b/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/images/08-token-list-regen.png differ diff --git a/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/images/09-token-regen-confirm.png b/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/images/09-token-regen-confirm.png index 6b831df3a47..db1644cb3ec 100644 Binary files a/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/images/09-token-regen-confirm.png and b/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/images/09-token-regen-confirm.png differ diff --git a/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/images/10-token-list-delete.png b/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/images/10-token-list-delete.png index ef5d924cfe1..0cbb14c320f 100644 Binary files a/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/images/10-token-list-delete.png and b/pages/public_cloud/ai_machine_learning/deploy_guide_03_tokens/images/10-token-list-delete.png differ diff --git a/pages/public_cloud/ai_machine_learning/deploy_tuto_09_streamlit_speech_to_text_app/guide.de-de.md b/pages/public_cloud/ai_machine_learning/deploy_tuto_09_streamlit_speech_to_text_app/guide.de-de.md index 1cd7e9b5841..3b917a5ec49 100644 --- a/pages/public_cloud/ai_machine_learning/deploy_tuto_09_streamlit_speech_to_text_app/guide.de-de.md +++ b/pages/public_cloud/ai_machine_learning/deploy_tuto_09_streamlit_speech_to_text_app/guide.de-de.md @@ -30,7 +30,7 @@ To deploy your app, you need: - An access to the [OVHcloud Control Panel](/links/manager). - An AI Deploy Project created inside a [Public Cloud project](/links/public-cloud/public-cloud) in your OVHcloud account - A [user for AI Deploy](/pages/public_cloud/ai_machine_learning/gi_01_manage_users). -- [The OVHcloud AI CLI](https://cli.bhs.ai.cloud.ovh.net/) **and** [Docker](https://www.docker.com/get-started) installed on your local computer, **or** only an access to a Debian Docker Instance on the [Public Cloud](/links/manager). +- [The OVHcloud AI CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) **and** [Docker](https://www.docker.com/get-started) installed on your local computer, **or** only an access to a Debian Docker Instance on the [Public Cloud](/links/manager). - To deploy your app, you must have the full code of the application, either by cloning the [GitHub repository](https://github.com/ovh/ai-training-examples/tree/main/apps/streamlit/speech-to-text), or by having followed our [blog article](https://blog.ovhcloud.com/how-to-build-a-speech-to-text-application-with-python-1-3/) that taught you how to build this app step by step. - If you want the diarization option (speakers differentiation), you will need an access token. This token will be requested at the launch of the application. To create your token, follow the steps indicated on the [model page](https://huggingface.co/pyannote/speaker-diarization). If the token is not specified, the application will be launched without this feature. 
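+
+For instance, assuming you want to start from the published example code (the repository URL and sub-folder are the ones referenced in the requirements above), fetching it could look like this:
+
+```console
+git clone https://github.com/ovh/ai-training-examples.git
+cd ai-training-examples/apps/streamlit/speech-to-text
+```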
diff --git a/pages/public_cloud/ai_machine_learning/deploy_tuto_09_streamlit_speech_to_text_app/guide.en-asia.md b/pages/public_cloud/ai_machine_learning/deploy_tuto_09_streamlit_speech_to_text_app/guide.en-asia.md index 1cd7e9b5841..3b917a5ec49 100644 --- a/pages/public_cloud/ai_machine_learning/deploy_tuto_09_streamlit_speech_to_text_app/guide.en-asia.md +++ b/pages/public_cloud/ai_machine_learning/deploy_tuto_09_streamlit_speech_to_text_app/guide.en-asia.md @@ -30,7 +30,7 @@ To deploy your app, you need: - An access to the [OVHcloud Control Panel](/links/manager). - An AI Deploy Project created inside a [Public Cloud project](/links/public-cloud/public-cloud) in your OVHcloud account - A [user for AI Deploy](/pages/public_cloud/ai_machine_learning/gi_01_manage_users). -- [The OVHcloud AI CLI](https://cli.bhs.ai.cloud.ovh.net/) **and** [Docker](https://www.docker.com/get-started) installed on your local computer, **or** only an access to a Debian Docker Instance on the [Public Cloud](/links/manager). +- [The OVHcloud AI CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) **and** [Docker](https://www.docker.com/get-started) installed on your local computer, **or** only an access to a Debian Docker Instance on the [Public Cloud](/links/manager). - To deploy your app, you must have the full code of the application, either by cloning the [GitHub repository](https://github.com/ovh/ai-training-examples/tree/main/apps/streamlit/speech-to-text), or by having followed our [blog article](https://blog.ovhcloud.com/how-to-build-a-speech-to-text-application-with-python-1-3/) that taught you how to build this app step by step. - If you want the diarization option (speakers differentiation), you will need an access token. This token will be requested at the launch of the application. To create your token, follow the steps indicated on the [model page](https://huggingface.co/pyannote/speaker-diarization). If the token is not specified, the application will be launched without this feature. diff --git a/pages/public_cloud/ai_machine_learning/deploy_tuto_09_streamlit_speech_to_text_app/guide.en-au.md b/pages/public_cloud/ai_machine_learning/deploy_tuto_09_streamlit_speech_to_text_app/guide.en-au.md index 1cd7e9b5841..3b917a5ec49 100644 --- a/pages/public_cloud/ai_machine_learning/deploy_tuto_09_streamlit_speech_to_text_app/guide.en-au.md +++ b/pages/public_cloud/ai_machine_learning/deploy_tuto_09_streamlit_speech_to_text_app/guide.en-au.md @@ -30,7 +30,7 @@ To deploy your app, you need: - An access to the [OVHcloud Control Panel](/links/manager). - An AI Deploy Project created inside a [Public Cloud project](/links/public-cloud/public-cloud) in your OVHcloud account - A [user for AI Deploy](/pages/public_cloud/ai_machine_learning/gi_01_manage_users). -- [The OVHcloud AI CLI](https://cli.bhs.ai.cloud.ovh.net/) **and** [Docker](https://www.docker.com/get-started) installed on your local computer, **or** only an access to a Debian Docker Instance on the [Public Cloud](/links/manager). +- [The OVHcloud AI CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) **and** [Docker](https://www.docker.com/get-started) installed on your local computer, **or** only an access to a Debian Docker Instance on the [Public Cloud](/links/manager). 
- To deploy your app, you must have the full code of the application, either by cloning the [GitHub repository](https://github.com/ovh/ai-training-examples/tree/main/apps/streamlit/speech-to-text), or by having followed our [blog article](https://blog.ovhcloud.com/how-to-build-a-speech-to-text-application-with-python-1-3/) that taught you how to build this app step by step. - If you want the diarization option (speakers differentiation), you will need an access token. This token will be requested at the launch of the application. To create your token, follow the steps indicated on the [model page](https://huggingface.co/pyannote/speaker-diarization). If the token is not specified, the application will be launched without this feature. diff --git a/pages/public_cloud/ai_machine_learning/deploy_tuto_09_streamlit_speech_to_text_app/guide.en-ca.md b/pages/public_cloud/ai_machine_learning/deploy_tuto_09_streamlit_speech_to_text_app/guide.en-ca.md index 1cd7e9b5841..3b917a5ec49 100644 --- a/pages/public_cloud/ai_machine_learning/deploy_tuto_09_streamlit_speech_to_text_app/guide.en-ca.md +++ b/pages/public_cloud/ai_machine_learning/deploy_tuto_09_streamlit_speech_to_text_app/guide.en-ca.md @@ -30,7 +30,7 @@ To deploy your app, you need: - An access to the [OVHcloud Control Panel](/links/manager). - An AI Deploy Project created inside a [Public Cloud project](/links/public-cloud/public-cloud) in your OVHcloud account - A [user for AI Deploy](/pages/public_cloud/ai_machine_learning/gi_01_manage_users). -- [The OVHcloud AI CLI](https://cli.bhs.ai.cloud.ovh.net/) **and** [Docker](https://www.docker.com/get-started) installed on your local computer, **or** only an access to a Debian Docker Instance on the [Public Cloud](/links/manager). +- [The OVHcloud AI CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) **and** [Docker](https://www.docker.com/get-started) installed on your local computer, **or** only an access to a Debian Docker Instance on the [Public Cloud](/links/manager). - To deploy your app, you must have the full code of the application, either by cloning the [GitHub repository](https://github.com/ovh/ai-training-examples/tree/main/apps/streamlit/speech-to-text), or by having followed our [blog article](https://blog.ovhcloud.com/how-to-build-a-speech-to-text-application-with-python-1-3/) that taught you how to build this app step by step. - If you want the diarization option (speakers differentiation), you will need an access token. This token will be requested at the launch of the application. To create your token, follow the steps indicated on the [model page](https://huggingface.co/pyannote/speaker-diarization). If the token is not specified, the application will be launched without this feature. diff --git a/pages/public_cloud/ai_machine_learning/deploy_tuto_09_streamlit_speech_to_text_app/guide.en-gb.md b/pages/public_cloud/ai_machine_learning/deploy_tuto_09_streamlit_speech_to_text_app/guide.en-gb.md index d61c2cf711c..dbcd35ca17d 100644 --- a/pages/public_cloud/ai_machine_learning/deploy_tuto_09_streamlit_speech_to_text_app/guide.en-gb.md +++ b/pages/public_cloud/ai_machine_learning/deploy_tuto_09_streamlit_speech_to_text_app/guide.en-gb.md @@ -30,7 +30,7 @@ To deploy your app, you need: - An access to the [OVHcloud Control Panel](/links/manager). - An AI Deploy Project created inside a [Public Cloud project](/links/public-cloud/public-cloud) in your OVHcloud account - A [user for AI Deploy](/pages/public_cloud/ai_machine_learning/gi_01_manage_users). 
-- [The OVHcloud AI CLI](https://cli.bhs.ai.cloud.ovh.net/) **and** [Docker](https://www.docker.com/get-started) installed on your local computer, **or** only an access to a Debian Docker Instance on the [Public Cloud](/links/manager). +- [The OVHcloud AI CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) **and** [Docker](https://www.docker.com/get-started) installed on your local computer, **or** only an access to a Debian Docker Instance on the [Public Cloud](/links/manager). - To deploy your app, you must have the full code of the application, either by cloning the [GitHub repository](https://github.com/ovh/ai-training-examples/tree/main/apps/streamlit/speech-to-text), or by having followed our [blog article](https://blog.ovhcloud.com/how-to-build-a-speech-to-text-application-with-python-1-3/) that taught you how to build this app step by step. - If you want the diarization option (speakers differentiation), you will need an access token. This token will be requested at the launch of the application. To create your token, follow the steps indicated on the [model page](https://huggingface.co/pyannote/speaker-diarization). If the token is not specified, the application will be launched without this feature. diff --git a/pages/public_cloud/ai_machine_learning/deploy_tuto_09_streamlit_speech_to_text_app/guide.en-ie.md b/pages/public_cloud/ai_machine_learning/deploy_tuto_09_streamlit_speech_to_text_app/guide.en-ie.md index 1cd7e9b5841..3b917a5ec49 100644 --- a/pages/public_cloud/ai_machine_learning/deploy_tuto_09_streamlit_speech_to_text_app/guide.en-ie.md +++ b/pages/public_cloud/ai_machine_learning/deploy_tuto_09_streamlit_speech_to_text_app/guide.en-ie.md @@ -30,7 +30,7 @@ To deploy your app, you need: - An access to the [OVHcloud Control Panel](/links/manager). - An AI Deploy Project created inside a [Public Cloud project](/links/public-cloud/public-cloud) in your OVHcloud account - A [user for AI Deploy](/pages/public_cloud/ai_machine_learning/gi_01_manage_users). -- [The OVHcloud AI CLI](https://cli.bhs.ai.cloud.ovh.net/) **and** [Docker](https://www.docker.com/get-started) installed on your local computer, **or** only an access to a Debian Docker Instance on the [Public Cloud](/links/manager). +- [The OVHcloud AI CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) **and** [Docker](https://www.docker.com/get-started) installed on your local computer, **or** only an access to a Debian Docker Instance on the [Public Cloud](/links/manager). - To deploy your app, you must have the full code of the application, either by cloning the [GitHub repository](https://github.com/ovh/ai-training-examples/tree/main/apps/streamlit/speech-to-text), or by having followed our [blog article](https://blog.ovhcloud.com/how-to-build-a-speech-to-text-application-with-python-1-3/) that taught you how to build this app step by step. - If you want the diarization option (speakers differentiation), you will need an access token. This token will be requested at the launch of the application. To create your token, follow the steps indicated on the [model page](https://huggingface.co/pyannote/speaker-diarization). If the token is not specified, the application will be launched without this feature. 
diff --git a/pages/public_cloud/ai_machine_learning/deploy_tuto_09_streamlit_speech_to_text_app/guide.en-sg.md b/pages/public_cloud/ai_machine_learning/deploy_tuto_09_streamlit_speech_to_text_app/guide.en-sg.md index 1cd7e9b5841..3b917a5ec49 100644 --- a/pages/public_cloud/ai_machine_learning/deploy_tuto_09_streamlit_speech_to_text_app/guide.en-sg.md +++ b/pages/public_cloud/ai_machine_learning/deploy_tuto_09_streamlit_speech_to_text_app/guide.en-sg.md @@ -30,7 +30,7 @@ To deploy your app, you need: - An access to the [OVHcloud Control Panel](/links/manager). - An AI Deploy Project created inside a [Public Cloud project](/links/public-cloud/public-cloud) in your OVHcloud account - A [user for AI Deploy](/pages/public_cloud/ai_machine_learning/gi_01_manage_users). -- [The OVHcloud AI CLI](https://cli.bhs.ai.cloud.ovh.net/) **and** [Docker](https://www.docker.com/get-started) installed on your local computer, **or** only an access to a Debian Docker Instance on the [Public Cloud](/links/manager). +- [The OVHcloud AI CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) **and** [Docker](https://www.docker.com/get-started) installed on your local computer, **or** only an access to a Debian Docker Instance on the [Public Cloud](/links/manager). - To deploy your app, you must have the full code of the application, either by cloning the [GitHub repository](https://github.com/ovh/ai-training-examples/tree/main/apps/streamlit/speech-to-text), or by having followed our [blog article](https://blog.ovhcloud.com/how-to-build-a-speech-to-text-application-with-python-1-3/) that taught you how to build this app step by step. - If you want the diarization option (speakers differentiation), you will need an access token. This token will be requested at the launch of the application. To create your token, follow the steps indicated on the [model page](https://huggingface.co/pyannote/speaker-diarization). If the token is not specified, the application will be launched without this feature. diff --git a/pages/public_cloud/ai_machine_learning/deploy_tuto_09_streamlit_speech_to_text_app/guide.en-us.md b/pages/public_cloud/ai_machine_learning/deploy_tuto_09_streamlit_speech_to_text_app/guide.en-us.md index 1cd7e9b5841..3b917a5ec49 100644 --- a/pages/public_cloud/ai_machine_learning/deploy_tuto_09_streamlit_speech_to_text_app/guide.en-us.md +++ b/pages/public_cloud/ai_machine_learning/deploy_tuto_09_streamlit_speech_to_text_app/guide.en-us.md @@ -30,7 +30,7 @@ To deploy your app, you need: - An access to the [OVHcloud Control Panel](/links/manager). - An AI Deploy Project created inside a [Public Cloud project](/links/public-cloud/public-cloud) in your OVHcloud account - A [user for AI Deploy](/pages/public_cloud/ai_machine_learning/gi_01_manage_users). -- [The OVHcloud AI CLI](https://cli.bhs.ai.cloud.ovh.net/) **and** [Docker](https://www.docker.com/get-started) installed on your local computer, **or** only an access to a Debian Docker Instance on the [Public Cloud](/links/manager). +- [The OVHcloud AI CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) **and** [Docker](https://www.docker.com/get-started) installed on your local computer, **or** only an access to a Debian Docker Instance on the [Public Cloud](/links/manager). 
- To deploy your app, you must have the full code of the application, either by cloning the [GitHub repository](https://github.com/ovh/ai-training-examples/tree/main/apps/streamlit/speech-to-text), or by having followed our [blog article](https://blog.ovhcloud.com/how-to-build-a-speech-to-text-application-with-python-1-3/) that taught you how to build this app step by step. - If you want the diarization option (speakers differentiation), you will need an access token. This token will be requested at the launch of the application. To create your token, follow the steps indicated on the [model page](https://huggingface.co/pyannote/speaker-diarization). If the token is not specified, the application will be launched without this feature. diff --git a/pages/public_cloud/ai_machine_learning/deploy_tuto_09_streamlit_speech_to_text_app/guide.es-es.md b/pages/public_cloud/ai_machine_learning/deploy_tuto_09_streamlit_speech_to_text_app/guide.es-es.md index 1cd7e9b5841..3b917a5ec49 100644 --- a/pages/public_cloud/ai_machine_learning/deploy_tuto_09_streamlit_speech_to_text_app/guide.es-es.md +++ b/pages/public_cloud/ai_machine_learning/deploy_tuto_09_streamlit_speech_to_text_app/guide.es-es.md @@ -30,7 +30,7 @@ To deploy your app, you need: - An access to the [OVHcloud Control Panel](/links/manager). - An AI Deploy Project created inside a [Public Cloud project](/links/public-cloud/public-cloud) in your OVHcloud account - A [user for AI Deploy](/pages/public_cloud/ai_machine_learning/gi_01_manage_users). -- [The OVHcloud AI CLI](https://cli.bhs.ai.cloud.ovh.net/) **and** [Docker](https://www.docker.com/get-started) installed on your local computer, **or** only an access to a Debian Docker Instance on the [Public Cloud](/links/manager). +- [The OVHcloud AI CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) **and** [Docker](https://www.docker.com/get-started) installed on your local computer, **or** only an access to a Debian Docker Instance on the [Public Cloud](/links/manager). - To deploy your app, you must have the full code of the application, either by cloning the [GitHub repository](https://github.com/ovh/ai-training-examples/tree/main/apps/streamlit/speech-to-text), or by having followed our [blog article](https://blog.ovhcloud.com/how-to-build-a-speech-to-text-application-with-python-1-3/) that taught you how to build this app step by step. - If you want the diarization option (speakers differentiation), you will need an access token. This token will be requested at the launch of the application. To create your token, follow the steps indicated on the [model page](https://huggingface.co/pyannote/speaker-diarization). If the token is not specified, the application will be launched without this feature. diff --git a/pages/public_cloud/ai_machine_learning/deploy_tuto_09_streamlit_speech_to_text_app/guide.es-us.md b/pages/public_cloud/ai_machine_learning/deploy_tuto_09_streamlit_speech_to_text_app/guide.es-us.md index 1cd7e9b5841..3b917a5ec49 100644 --- a/pages/public_cloud/ai_machine_learning/deploy_tuto_09_streamlit_speech_to_text_app/guide.es-us.md +++ b/pages/public_cloud/ai_machine_learning/deploy_tuto_09_streamlit_speech_to_text_app/guide.es-us.md @@ -30,7 +30,7 @@ To deploy your app, you need: - An access to the [OVHcloud Control Panel](/links/manager). - An AI Deploy Project created inside a [Public Cloud project](/links/public-cloud/public-cloud) in your OVHcloud account - A [user for AI Deploy](/pages/public_cloud/ai_machine_learning/gi_01_manage_users). 
-- [The OVHcloud AI CLI](https://cli.bhs.ai.cloud.ovh.net/) **and** [Docker](https://www.docker.com/get-started) installed on your local computer, **or** only an access to a Debian Docker Instance on the [Public Cloud](/links/manager). +- [The OVHcloud AI CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) **and** [Docker](https://www.docker.com/get-started) installed on your local computer, **or** only an access to a Debian Docker Instance on the [Public Cloud](/links/manager). - To deploy your app, you must have the full code of the application, either by cloning the [GitHub repository](https://github.com/ovh/ai-training-examples/tree/main/apps/streamlit/speech-to-text), or by having followed our [blog article](https://blog.ovhcloud.com/how-to-build-a-speech-to-text-application-with-python-1-3/) that taught you how to build this app step by step. - If you want the diarization option (speakers differentiation), you will need an access token. This token will be requested at the launch of the application. To create your token, follow the steps indicated on the [model page](https://huggingface.co/pyannote/speaker-diarization). If the token is not specified, the application will be launched without this feature. diff --git a/pages/public_cloud/ai_machine_learning/deploy_tuto_09_streamlit_speech_to_text_app/guide.fr-ca.md b/pages/public_cloud/ai_machine_learning/deploy_tuto_09_streamlit_speech_to_text_app/guide.fr-ca.md index 662f82928bd..5f062a65b3f 100644 --- a/pages/public_cloud/ai_machine_learning/deploy_tuto_09_streamlit_speech_to_text_app/guide.fr-ca.md +++ b/pages/public_cloud/ai_machine_learning/deploy_tuto_09_streamlit_speech_to_text_app/guide.fr-ca.md @@ -30,7 +30,7 @@ To deploy your app, you need: - An access to the [OVHcloud Control Panel](/links/manager). - An AI Deploy Project created inside a [Public Cloud project](/links/public-cloud/public-cloud) in your OVHcloud account - A [user for AI Deploy](/pages/public_cloud/ai_machine_learning/gi_01_manage_users). -- [The OVHcloud AI CLI](https://cli.bhs.ai.cloud.ovh.net/) **and** [Docker](https://www.docker.com/get-started) installed on your local computer, **or** only an access to a Debian Docker Instance on the [Public Cloud](/links/manager). +- [The OVHcloud AI CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) **and** [Docker](https://www.docker.com/get-started) installed on your local computer, **or** only an access to a Debian Docker Instance on the [Public Cloud](/links/manager). - To deploy your app, you must have the full code of the application, either by cloning the [GitHub repository](https://github.com/ovh/ai-training-examples/tree/main/apps/streamlit/speech-to-text), or by having followed our [blog article](https://blog.ovhcloud.com/how-to-build-a-speech-to-text-application-with-python-1-3/) that taught you how to build this app step by step. - If you want the diarization option (speakers differentiation), you will need an access token. This token will be requested at the launch of the application. To create your token, follow the steps indicated on the [model page](https://huggingface.co/pyannote/speaker-diarization). If the token is not specified, the application will be launched without this feature. 
diff --git a/pages/public_cloud/ai_machine_learning/deploy_tuto_09_streamlit_speech_to_text_app/guide.fr-fr.md b/pages/public_cloud/ai_machine_learning/deploy_tuto_09_streamlit_speech_to_text_app/guide.fr-fr.md index 662f82928bd..5f062a65b3f 100644 --- a/pages/public_cloud/ai_machine_learning/deploy_tuto_09_streamlit_speech_to_text_app/guide.fr-fr.md +++ b/pages/public_cloud/ai_machine_learning/deploy_tuto_09_streamlit_speech_to_text_app/guide.fr-fr.md @@ -30,7 +30,7 @@ To deploy your app, you need: - An access to the [OVHcloud Control Panel](/links/manager). - An AI Deploy Project created inside a [Public Cloud project](/links/public-cloud/public-cloud) in your OVHcloud account - A [user for AI Deploy](/pages/public_cloud/ai_machine_learning/gi_01_manage_users). -- [The OVHcloud AI CLI](https://cli.bhs.ai.cloud.ovh.net/) **and** [Docker](https://www.docker.com/get-started) installed on your local computer, **or** only an access to a Debian Docker Instance on the [Public Cloud](/links/manager). +- [The OVHcloud AI CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) **and** [Docker](https://www.docker.com/get-started) installed on your local computer, **or** only an access to a Debian Docker Instance on the [Public Cloud](/links/manager). - To deploy your app, you must have the full code of the application, either by cloning the [GitHub repository](https://github.com/ovh/ai-training-examples/tree/main/apps/streamlit/speech-to-text), or by having followed our [blog article](https://blog.ovhcloud.com/how-to-build-a-speech-to-text-application-with-python-1-3/) that taught you how to build this app step by step. - If you want the diarization option (speakers differentiation), you will need an access token. This token will be requested at the launch of the application. To create your token, follow the steps indicated on the [model page](https://huggingface.co/pyannote/speaker-diarization). If the token is not specified, the application will be launched without this feature. diff --git a/pages/public_cloud/ai_machine_learning/deploy_tuto_09_streamlit_speech_to_text_app/guide.it-it.md b/pages/public_cloud/ai_machine_learning/deploy_tuto_09_streamlit_speech_to_text_app/guide.it-it.md index 1cd7e9b5841..3b917a5ec49 100644 --- a/pages/public_cloud/ai_machine_learning/deploy_tuto_09_streamlit_speech_to_text_app/guide.it-it.md +++ b/pages/public_cloud/ai_machine_learning/deploy_tuto_09_streamlit_speech_to_text_app/guide.it-it.md @@ -30,7 +30,7 @@ To deploy your app, you need: - An access to the [OVHcloud Control Panel](/links/manager). - An AI Deploy Project created inside a [Public Cloud project](/links/public-cloud/public-cloud) in your OVHcloud account - A [user for AI Deploy](/pages/public_cloud/ai_machine_learning/gi_01_manage_users). -- [The OVHcloud AI CLI](https://cli.bhs.ai.cloud.ovh.net/) **and** [Docker](https://www.docker.com/get-started) installed on your local computer, **or** only an access to a Debian Docker Instance on the [Public Cloud](/links/manager). +- [The OVHcloud AI CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) **and** [Docker](https://www.docker.com/get-started) installed on your local computer, **or** only an access to a Debian Docker Instance on the [Public Cloud](/links/manager). 
- To deploy your app, you must have the full code of the application, either by cloning the [GitHub repository](https://github.com/ovh/ai-training-examples/tree/main/apps/streamlit/speech-to-text), or by having followed our [blog article](https://blog.ovhcloud.com/how-to-build-a-speech-to-text-application-with-python-1-3/) that taught you how to build this app step by step. - If you want the diarization option (speakers differentiation), you will need an access token. This token will be requested at the launch of the application. To create your token, follow the steps indicated on the [model page](https://huggingface.co/pyannote/speaker-diarization). If the token is not specified, the application will be launched without this feature. diff --git a/pages/public_cloud/ai_machine_learning/deploy_tuto_09_streamlit_speech_to_text_app/guide.pl-pl.md b/pages/public_cloud/ai_machine_learning/deploy_tuto_09_streamlit_speech_to_text_app/guide.pl-pl.md index 1cd7e9b5841..3b917a5ec49 100644 --- a/pages/public_cloud/ai_machine_learning/deploy_tuto_09_streamlit_speech_to_text_app/guide.pl-pl.md +++ b/pages/public_cloud/ai_machine_learning/deploy_tuto_09_streamlit_speech_to_text_app/guide.pl-pl.md @@ -30,7 +30,7 @@ To deploy your app, you need: - An access to the [OVHcloud Control Panel](/links/manager). - An AI Deploy Project created inside a [Public Cloud project](/links/public-cloud/public-cloud) in your OVHcloud account - A [user for AI Deploy](/pages/public_cloud/ai_machine_learning/gi_01_manage_users). -- [The OVHcloud AI CLI](https://cli.bhs.ai.cloud.ovh.net/) **and** [Docker](https://www.docker.com/get-started) installed on your local computer, **or** only an access to a Debian Docker Instance on the [Public Cloud](/links/manager). +- [The OVHcloud AI CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) **and** [Docker](https://www.docker.com/get-started) installed on your local computer, **or** only an access to a Debian Docker Instance on the [Public Cloud](/links/manager). - To deploy your app, you must have the full code of the application, either by cloning the [GitHub repository](https://github.com/ovh/ai-training-examples/tree/main/apps/streamlit/speech-to-text), or by having followed our [blog article](https://blog.ovhcloud.com/how-to-build-a-speech-to-text-application-with-python-1-3/) that taught you how to build this app step by step. - If you want the diarization option (speakers differentiation), you will need an access token. This token will be requested at the launch of the application. To create your token, follow the steps indicated on the [model page](https://huggingface.co/pyannote/speaker-diarization). If the token is not specified, the application will be launched without this feature. diff --git a/pages/public_cloud/ai_machine_learning/deploy_tuto_09_streamlit_speech_to_text_app/guide.pt-pt.md b/pages/public_cloud/ai_machine_learning/deploy_tuto_09_streamlit_speech_to_text_app/guide.pt-pt.md index 1cd7e9b5841..3b917a5ec49 100644 --- a/pages/public_cloud/ai_machine_learning/deploy_tuto_09_streamlit_speech_to_text_app/guide.pt-pt.md +++ b/pages/public_cloud/ai_machine_learning/deploy_tuto_09_streamlit_speech_to_text_app/guide.pt-pt.md @@ -30,7 +30,7 @@ To deploy your app, you need: - An access to the [OVHcloud Control Panel](/links/manager). - An AI Deploy Project created inside a [Public Cloud project](/links/public-cloud/public-cloud) in your OVHcloud account - A [user for AI Deploy](/pages/public_cloud/ai_machine_learning/gi_01_manage_users). 
-- [The OVHcloud AI CLI](https://cli.bhs.ai.cloud.ovh.net/) **and** [Docker](https://www.docker.com/get-started) installed on your local computer, **or** only an access to a Debian Docker Instance on the [Public Cloud](/links/manager). +- [The OVHcloud AI CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) **and** [Docker](https://www.docker.com/get-started) installed on your local computer, **or** only an access to a Debian Docker Instance on the [Public Cloud](/links/manager). - To deploy your app, you must have the full code of the application, either by cloning the [GitHub repository](https://github.com/ovh/ai-training-examples/tree/main/apps/streamlit/speech-to-text), or by having followed our [blog article](https://blog.ovhcloud.com/how-to-build-a-speech-to-text-application-with-python-1-3/) that taught you how to build this app step by step. - If you want the diarization option (speakers differentiation), you will need an access token. This token will be requested at the launch of the application. To create your token, follow the steps indicated on the [model page](https://huggingface.co/pyannote/speaker-diarization). If the token is not specified, the application will be launched without this feature. diff --git a/pages/public_cloud/ai_machine_learning/deploy_tuto_14_img_segmentation_app/guide.de-de.md b/pages/public_cloud/ai_machine_learning/deploy_tuto_14_img_segmentation_app/guide.de-de.md index dbc56932120..1148f3cbe5c 100644 --- a/pages/public_cloud/ai_machine_learning/deploy_tuto_14_img_segmentation_app/guide.de-de.md +++ b/pages/public_cloud/ai_machine_learning/deploy_tuto_14_img_segmentation_app/guide.de-de.md @@ -137,7 +137,7 @@ Once your object containers are created, you will see them in the Object Storage #### 1.2 - Upload data via CLI -To follow this part, make sure you have installed the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/) on your computer or on an instance. +To follow this part, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. As in the Control Panel, you will have to specify the `region`, the `name of your container` and the `path` where your data will be located. The creation of an object container can be done with the following command: diff --git a/pages/public_cloud/ai_machine_learning/deploy_tuto_14_img_segmentation_app/guide.en-asia.md b/pages/public_cloud/ai_machine_learning/deploy_tuto_14_img_segmentation_app/guide.en-asia.md index dbc56932120..1148f3cbe5c 100644 --- a/pages/public_cloud/ai_machine_learning/deploy_tuto_14_img_segmentation_app/guide.en-asia.md +++ b/pages/public_cloud/ai_machine_learning/deploy_tuto_14_img_segmentation_app/guide.en-asia.md @@ -137,7 +137,7 @@ Once your object containers are created, you will see them in the Object Storage #### 1.2 - Upload data via CLI -To follow this part, make sure you have installed the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/) on your computer or on an instance. +To follow this part, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. As in the Control Panel, you will have to specify the `region`, the `name of your container` and the `path` where your data will be located. 
The creation of an object container can be done with the following command: diff --git a/pages/public_cloud/ai_machine_learning/deploy_tuto_14_img_segmentation_app/guide.en-au.md b/pages/public_cloud/ai_machine_learning/deploy_tuto_14_img_segmentation_app/guide.en-au.md index dbc56932120..1148f3cbe5c 100644 --- a/pages/public_cloud/ai_machine_learning/deploy_tuto_14_img_segmentation_app/guide.en-au.md +++ b/pages/public_cloud/ai_machine_learning/deploy_tuto_14_img_segmentation_app/guide.en-au.md @@ -137,7 +137,7 @@ Once your object containers are created, you will see them in the Object Storage #### 1.2 - Upload data via CLI -To follow this part, make sure you have installed the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/) on your computer or on an instance. +To follow this part, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. As in the Control Panel, you will have to specify the `region`, the `name of your container` and the `path` where your data will be located. The creation of an object container can be done with the following command: diff --git a/pages/public_cloud/ai_machine_learning/deploy_tuto_14_img_segmentation_app/guide.en-ca.md b/pages/public_cloud/ai_machine_learning/deploy_tuto_14_img_segmentation_app/guide.en-ca.md index dbc56932120..1148f3cbe5c 100644 --- a/pages/public_cloud/ai_machine_learning/deploy_tuto_14_img_segmentation_app/guide.en-ca.md +++ b/pages/public_cloud/ai_machine_learning/deploy_tuto_14_img_segmentation_app/guide.en-ca.md @@ -137,7 +137,7 @@ Once your object containers are created, you will see them in the Object Storage #### 1.2 - Upload data via CLI -To follow this part, make sure you have installed the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/) on your computer or on an instance. +To follow this part, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. As in the Control Panel, you will have to specify the `region`, the `name of your container` and the `path` where your data will be located. The creation of an object container can be done with the following command: diff --git a/pages/public_cloud/ai_machine_learning/deploy_tuto_14_img_segmentation_app/guide.en-gb.md b/pages/public_cloud/ai_machine_learning/deploy_tuto_14_img_segmentation_app/guide.en-gb.md index dbc56932120..1148f3cbe5c 100644 --- a/pages/public_cloud/ai_machine_learning/deploy_tuto_14_img_segmentation_app/guide.en-gb.md +++ b/pages/public_cloud/ai_machine_learning/deploy_tuto_14_img_segmentation_app/guide.en-gb.md @@ -137,7 +137,7 @@ Once your object containers are created, you will see them in the Object Storage #### 1.2 - Upload data via CLI -To follow this part, make sure you have installed the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/) on your computer or on an instance. +To follow this part, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. As in the Control Panel, you will have to specify the `region`, the `name of your container` and the `path` where your data will be located. 
The creation of an object container can be done with the following command: diff --git a/pages/public_cloud/ai_machine_learning/deploy_tuto_14_img_segmentation_app/guide.en-ie.md b/pages/public_cloud/ai_machine_learning/deploy_tuto_14_img_segmentation_app/guide.en-ie.md index dbc56932120..1148f3cbe5c 100644 --- a/pages/public_cloud/ai_machine_learning/deploy_tuto_14_img_segmentation_app/guide.en-ie.md +++ b/pages/public_cloud/ai_machine_learning/deploy_tuto_14_img_segmentation_app/guide.en-ie.md @@ -137,7 +137,7 @@ Once your object containers are created, you will see them in the Object Storage #### 1.2 - Upload data via CLI -To follow this part, make sure you have installed the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/) on your computer or on an instance. +To follow this part, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. As in the Control Panel, you will have to specify the `region`, the `name of your container` and the `path` where your data will be located. The creation of an object container can be done with the following command: diff --git a/pages/public_cloud/ai_machine_learning/deploy_tuto_14_img_segmentation_app/guide.en-sg.md b/pages/public_cloud/ai_machine_learning/deploy_tuto_14_img_segmentation_app/guide.en-sg.md index dbc56932120..1148f3cbe5c 100644 --- a/pages/public_cloud/ai_machine_learning/deploy_tuto_14_img_segmentation_app/guide.en-sg.md +++ b/pages/public_cloud/ai_machine_learning/deploy_tuto_14_img_segmentation_app/guide.en-sg.md @@ -137,7 +137,7 @@ Once your object containers are created, you will see them in the Object Storage #### 1.2 - Upload data via CLI -To follow this part, make sure you have installed the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/) on your computer or on an instance. +To follow this part, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. As in the Control Panel, you will have to specify the `region`, the `name of your container` and the `path` where your data will be located. The creation of an object container can be done with the following command: diff --git a/pages/public_cloud/ai_machine_learning/deploy_tuto_14_img_segmentation_app/guide.en-us.md b/pages/public_cloud/ai_machine_learning/deploy_tuto_14_img_segmentation_app/guide.en-us.md index dbc56932120..1148f3cbe5c 100644 --- a/pages/public_cloud/ai_machine_learning/deploy_tuto_14_img_segmentation_app/guide.en-us.md +++ b/pages/public_cloud/ai_machine_learning/deploy_tuto_14_img_segmentation_app/guide.en-us.md @@ -137,7 +137,7 @@ Once your object containers are created, you will see them in the Object Storage #### 1.2 - Upload data via CLI -To follow this part, make sure you have installed the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/) on your computer or on an instance. +To follow this part, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. As in the Control Panel, you will have to specify the `region`, the `name of your container` and the `path` where your data will be located. 
The creation of an object container can be done with the following command: diff --git a/pages/public_cloud/ai_machine_learning/deploy_tuto_14_img_segmentation_app/guide.es-es.md b/pages/public_cloud/ai_machine_learning/deploy_tuto_14_img_segmentation_app/guide.es-es.md index dbc56932120..1148f3cbe5c 100644 --- a/pages/public_cloud/ai_machine_learning/deploy_tuto_14_img_segmentation_app/guide.es-es.md +++ b/pages/public_cloud/ai_machine_learning/deploy_tuto_14_img_segmentation_app/guide.es-es.md @@ -137,7 +137,7 @@ Once your object containers are created, you will see them in the Object Storage #### 1.2 - Upload data via CLI -To follow this part, make sure you have installed the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/) on your computer or on an instance. +To follow this part, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. As in the Control Panel, you will have to specify the `region`, the `name of your container` and the `path` where your data will be located. The creation of an object container can be done with the following command: diff --git a/pages/public_cloud/ai_machine_learning/deploy_tuto_14_img_segmentation_app/guide.es-us.md b/pages/public_cloud/ai_machine_learning/deploy_tuto_14_img_segmentation_app/guide.es-us.md index dbc56932120..1148f3cbe5c 100644 --- a/pages/public_cloud/ai_machine_learning/deploy_tuto_14_img_segmentation_app/guide.es-us.md +++ b/pages/public_cloud/ai_machine_learning/deploy_tuto_14_img_segmentation_app/guide.es-us.md @@ -137,7 +137,7 @@ Once your object containers are created, you will see them in the Object Storage #### 1.2 - Upload data via CLI -To follow this part, make sure you have installed the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/) on your computer or on an instance. +To follow this part, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. As in the Control Panel, you will have to specify the `region`, the `name of your container` and the `path` where your data will be located. The creation of an object container can be done with the following command: diff --git a/pages/public_cloud/ai_machine_learning/deploy_tuto_14_img_segmentation_app/guide.fr-ca.md b/pages/public_cloud/ai_machine_learning/deploy_tuto_14_img_segmentation_app/guide.fr-ca.md index 85ca255d740..8959b92f573 100644 --- a/pages/public_cloud/ai_machine_learning/deploy_tuto_14_img_segmentation_app/guide.fr-ca.md +++ b/pages/public_cloud/ai_machine_learning/deploy_tuto_14_img_segmentation_app/guide.fr-ca.md @@ -137,7 +137,7 @@ Once your object containers are created, you will see them in the Object Storage #### 1.2 - Upload data via CLI -To follow this part, make sure you have installed the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/) on your computer or on an instance. +To follow this part, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. As in the Control Panel, you will have to specify the `region`, the `name of your container` and the `path` where your data will be located. 
The creation of an object container can be done with the following command: diff --git a/pages/public_cloud/ai_machine_learning/deploy_tuto_14_img_segmentation_app/guide.fr-fr.md b/pages/public_cloud/ai_machine_learning/deploy_tuto_14_img_segmentation_app/guide.fr-fr.md index 85ca255d740..8959b92f573 100644 --- a/pages/public_cloud/ai_machine_learning/deploy_tuto_14_img_segmentation_app/guide.fr-fr.md +++ b/pages/public_cloud/ai_machine_learning/deploy_tuto_14_img_segmentation_app/guide.fr-fr.md @@ -137,7 +137,7 @@ Once your object containers are created, you will see them in the Object Storage #### 1.2 - Upload data via CLI -To follow this part, make sure you have installed the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/) on your computer or on an instance. +To follow this part, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. As in the Control Panel, you will have to specify the `region`, the `name of your container` and the `path` where your data will be located. The creation of an object container can be done with the following command: diff --git a/pages/public_cloud/ai_machine_learning/deploy_tuto_14_img_segmentation_app/guide.it-it.md b/pages/public_cloud/ai_machine_learning/deploy_tuto_14_img_segmentation_app/guide.it-it.md index dbc56932120..1148f3cbe5c 100644 --- a/pages/public_cloud/ai_machine_learning/deploy_tuto_14_img_segmentation_app/guide.it-it.md +++ b/pages/public_cloud/ai_machine_learning/deploy_tuto_14_img_segmentation_app/guide.it-it.md @@ -137,7 +137,7 @@ Once your object containers are created, you will see them in the Object Storage #### 1.2 - Upload data via CLI -To follow this part, make sure you have installed the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/) on your computer or on an instance. +To follow this part, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. As in the Control Panel, you will have to specify the `region`, the `name of your container` and the `path` where your data will be located. The creation of an object container can be done with the following command: diff --git a/pages/public_cloud/ai_machine_learning/deploy_tuto_14_img_segmentation_app/guide.pl-pl.md b/pages/public_cloud/ai_machine_learning/deploy_tuto_14_img_segmentation_app/guide.pl-pl.md index dbc56932120..1148f3cbe5c 100644 --- a/pages/public_cloud/ai_machine_learning/deploy_tuto_14_img_segmentation_app/guide.pl-pl.md +++ b/pages/public_cloud/ai_machine_learning/deploy_tuto_14_img_segmentation_app/guide.pl-pl.md @@ -137,7 +137,7 @@ Once your object containers are created, you will see them in the Object Storage #### 1.2 - Upload data via CLI -To follow this part, make sure you have installed the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/) on your computer or on an instance. +To follow this part, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. As in the Control Panel, you will have to specify the `region`, the `name of your container` and the `path` where your data will be located. 
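For readers reviewing these hunks without the surrounding guide, the object-container step they lead into is, in broad strokes, a single `ovhai data upload` call. The sketch below is only an illustration of that shape, not the guide's exact command: the region (`GRA`), container name (`my-segmentation-data`) and local path are placeholder values, and the precise syntax should be checked with `ovhai data upload --help` for your CLI version.

```console
# upload local files to an object container in the GRA region;
# the container is created on the fly if it does not exist yet
ovhai data upload GRA my-segmentation-data ./dataset/images/

# list the uploaded objects to confirm the transfer
ovhai data list GRA my-segmentation-data
```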
The creation of an object container can be done with the following command: diff --git a/pages/public_cloud/ai_machine_learning/deploy_tuto_14_img_segmentation_app/guide.pt-pt.md b/pages/public_cloud/ai_machine_learning/deploy_tuto_14_img_segmentation_app/guide.pt-pt.md index dbc56932120..1148f3cbe5c 100644 --- a/pages/public_cloud/ai_machine_learning/deploy_tuto_14_img_segmentation_app/guide.pt-pt.md +++ b/pages/public_cloud/ai_machine_learning/deploy_tuto_14_img_segmentation_app/guide.pt-pt.md @@ -137,7 +137,7 @@ Once your object containers are created, you will see them in the Object Storage #### 1.2 - Upload data via CLI -To follow this part, make sure you have installed the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/) on your computer or on an instance. +To follow this part, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. As in the Control Panel, you will have to specify the `region`, the `name of your container` and the `path` where your data will be located. The creation of an object container can be done with the following command: diff --git a/pages/public_cloud/ai_machine_learning/deploy_tuto_17_streamlit_whisper/guide.de-de.md b/pages/public_cloud/ai_machine_learning/deploy_tuto_17_streamlit_whisper/guide.de-de.md index 39e47f9bc24..cf3cea1cad9 100644 --- a/pages/public_cloud/ai_machine_learning/deploy_tuto_17_streamlit_whisper/guide.de-de.md +++ b/pages/public_cloud/ai_machine_learning/deploy_tuto_17_streamlit_whisper/guide.de-de.md @@ -94,7 +94,7 @@ You can create your Object Storage bucket using either the UI (OVHcloud Control >> > **Using ovhai CLI** >> ->> To follow this part, make sure you have installed the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/) on your computer or on an instance. +>> To follow this part, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. >> >> As in the Control Panel, you will have to specify the `datastore_alias` and the `name` of your bucket. Create your Object Storage bucket as follows: >> diff --git a/pages/public_cloud/ai_machine_learning/deploy_tuto_17_streamlit_whisper/guide.en-asia.md b/pages/public_cloud/ai_machine_learning/deploy_tuto_17_streamlit_whisper/guide.en-asia.md index 39e47f9bc24..cf3cea1cad9 100644 --- a/pages/public_cloud/ai_machine_learning/deploy_tuto_17_streamlit_whisper/guide.en-asia.md +++ b/pages/public_cloud/ai_machine_learning/deploy_tuto_17_streamlit_whisper/guide.en-asia.md @@ -94,7 +94,7 @@ You can create your Object Storage bucket using either the UI (OVHcloud Control >> > **Using ovhai CLI** >> ->> To follow this part, make sure you have installed the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/) on your computer or on an instance. +>> To follow this part, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. >> >> As in the Control Panel, you will have to specify the `datastore_alias` and the `name` of your bucket. 
Create your Object Storage bucket as follows: >> diff --git a/pages/public_cloud/ai_machine_learning/deploy_tuto_17_streamlit_whisper/guide.en-au.md b/pages/public_cloud/ai_machine_learning/deploy_tuto_17_streamlit_whisper/guide.en-au.md index 39e47f9bc24..cf3cea1cad9 100644 --- a/pages/public_cloud/ai_machine_learning/deploy_tuto_17_streamlit_whisper/guide.en-au.md +++ b/pages/public_cloud/ai_machine_learning/deploy_tuto_17_streamlit_whisper/guide.en-au.md @@ -94,7 +94,7 @@ You can create your Object Storage bucket using either the UI (OVHcloud Control >> > **Using ovhai CLI** >> ->> To follow this part, make sure you have installed the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/) on your computer or on an instance. +>> To follow this part, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. >> >> As in the Control Panel, you will have to specify the `datastore_alias` and the `name` of your bucket. Create your Object Storage bucket as follows: >> diff --git a/pages/public_cloud/ai_machine_learning/deploy_tuto_17_streamlit_whisper/guide.en-ca.md b/pages/public_cloud/ai_machine_learning/deploy_tuto_17_streamlit_whisper/guide.en-ca.md index 39e47f9bc24..cf3cea1cad9 100644 --- a/pages/public_cloud/ai_machine_learning/deploy_tuto_17_streamlit_whisper/guide.en-ca.md +++ b/pages/public_cloud/ai_machine_learning/deploy_tuto_17_streamlit_whisper/guide.en-ca.md @@ -94,7 +94,7 @@ You can create your Object Storage bucket using either the UI (OVHcloud Control >> > **Using ovhai CLI** >> ->> To follow this part, make sure you have installed the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/) on your computer or on an instance. +>> To follow this part, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. >> >> As in the Control Panel, you will have to specify the `datastore_alias` and the `name` of your bucket. Create your Object Storage bucket as follows: >> diff --git a/pages/public_cloud/ai_machine_learning/deploy_tuto_17_streamlit_whisper/guide.en-gb.md b/pages/public_cloud/ai_machine_learning/deploy_tuto_17_streamlit_whisper/guide.en-gb.md index 65cf8b4178c..398941cc1c8 100644 --- a/pages/public_cloud/ai_machine_learning/deploy_tuto_17_streamlit_whisper/guide.en-gb.md +++ b/pages/public_cloud/ai_machine_learning/deploy_tuto_17_streamlit_whisper/guide.en-gb.md @@ -94,7 +94,7 @@ You can create your Object Storage bucket using either the UI (OVHcloud Control >> > **Using ovhai CLI** >> ->> To follow this part, make sure you have installed the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/) on your computer or on an instance. +>> To follow this part, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. >> >> As in the Control Panel, you will have to specify the `datastore_alias` and the `name` of your bucket. 
Create your Object Storage bucket as follows: >> diff --git a/pages/public_cloud/ai_machine_learning/deploy_tuto_17_streamlit_whisper/guide.en-ie.md b/pages/public_cloud/ai_machine_learning/deploy_tuto_17_streamlit_whisper/guide.en-ie.md index 39e47f9bc24..cf3cea1cad9 100644 --- a/pages/public_cloud/ai_machine_learning/deploy_tuto_17_streamlit_whisper/guide.en-ie.md +++ b/pages/public_cloud/ai_machine_learning/deploy_tuto_17_streamlit_whisper/guide.en-ie.md @@ -94,7 +94,7 @@ You can create your Object Storage bucket using either the UI (OVHcloud Control >> > **Using ovhai CLI** >> ->> To follow this part, make sure you have installed the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/) on your computer or on an instance. +>> To follow this part, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. >> >> As in the Control Panel, you will have to specify the `datastore_alias` and the `name` of your bucket. Create your Object Storage bucket as follows: >> diff --git a/pages/public_cloud/ai_machine_learning/deploy_tuto_17_streamlit_whisper/guide.en-sg.md b/pages/public_cloud/ai_machine_learning/deploy_tuto_17_streamlit_whisper/guide.en-sg.md index 39e47f9bc24..cf3cea1cad9 100644 --- a/pages/public_cloud/ai_machine_learning/deploy_tuto_17_streamlit_whisper/guide.en-sg.md +++ b/pages/public_cloud/ai_machine_learning/deploy_tuto_17_streamlit_whisper/guide.en-sg.md @@ -94,7 +94,7 @@ You can create your Object Storage bucket using either the UI (OVHcloud Control >> > **Using ovhai CLI** >> ->> To follow this part, make sure you have installed the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/) on your computer or on an instance. +>> To follow this part, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. >> >> As in the Control Panel, you will have to specify the `datastore_alias` and the `name` of your bucket. Create your Object Storage bucket as follows: >> diff --git a/pages/public_cloud/ai_machine_learning/deploy_tuto_17_streamlit_whisper/guide.en-us.md b/pages/public_cloud/ai_machine_learning/deploy_tuto_17_streamlit_whisper/guide.en-us.md index 39e47f9bc24..cf3cea1cad9 100644 --- a/pages/public_cloud/ai_machine_learning/deploy_tuto_17_streamlit_whisper/guide.en-us.md +++ b/pages/public_cloud/ai_machine_learning/deploy_tuto_17_streamlit_whisper/guide.en-us.md @@ -94,7 +94,7 @@ You can create your Object Storage bucket using either the UI (OVHcloud Control >> > **Using ovhai CLI** >> ->> To follow this part, make sure you have installed the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/) on your computer or on an instance. +>> To follow this part, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. >> >> As in the Control Panel, you will have to specify the `datastore_alias` and the `name` of your bucket. 
Create your Object Storage bucket as follows: >> diff --git a/pages/public_cloud/ai_machine_learning/deploy_tuto_17_streamlit_whisper/guide.es-es.md b/pages/public_cloud/ai_machine_learning/deploy_tuto_17_streamlit_whisper/guide.es-es.md index 39e47f9bc24..cf3cea1cad9 100644 --- a/pages/public_cloud/ai_machine_learning/deploy_tuto_17_streamlit_whisper/guide.es-es.md +++ b/pages/public_cloud/ai_machine_learning/deploy_tuto_17_streamlit_whisper/guide.es-es.md @@ -94,7 +94,7 @@ You can create your Object Storage bucket using either the UI (OVHcloud Control >> > **Using ovhai CLI** >> ->> To follow this part, make sure you have installed the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/) on your computer or on an instance. +>> To follow this part, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. >> >> As in the Control Panel, you will have to specify the `datastore_alias` and the `name` of your bucket. Create your Object Storage bucket as follows: >> diff --git a/pages/public_cloud/ai_machine_learning/deploy_tuto_17_streamlit_whisper/guide.es-us.md b/pages/public_cloud/ai_machine_learning/deploy_tuto_17_streamlit_whisper/guide.es-us.md index 39e47f9bc24..cf3cea1cad9 100644 --- a/pages/public_cloud/ai_machine_learning/deploy_tuto_17_streamlit_whisper/guide.es-us.md +++ b/pages/public_cloud/ai_machine_learning/deploy_tuto_17_streamlit_whisper/guide.es-us.md @@ -94,7 +94,7 @@ You can create your Object Storage bucket using either the UI (OVHcloud Control >> > **Using ovhai CLI** >> ->> To follow this part, make sure you have installed the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/) on your computer or on an instance. +>> To follow this part, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. >> >> As in the Control Panel, you will have to specify the `datastore_alias` and the `name` of your bucket. Create your Object Storage bucket as follows: >> diff --git a/pages/public_cloud/ai_machine_learning/deploy_tuto_17_streamlit_whisper/guide.fr-ca.md b/pages/public_cloud/ai_machine_learning/deploy_tuto_17_streamlit_whisper/guide.fr-ca.md index 1751a8b7e2e..f483c3f62c1 100644 --- a/pages/public_cloud/ai_machine_learning/deploy_tuto_17_streamlit_whisper/guide.fr-ca.md +++ b/pages/public_cloud/ai_machine_learning/deploy_tuto_17_streamlit_whisper/guide.fr-ca.md @@ -94,7 +94,7 @@ You can create your Object Storage bucket using either the UI (OVHcloud Control >> > **Using ovhai CLI** >> ->> To follow this part, make sure you have installed the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/) on your computer or on an instance. +>> To follow this part, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. >> >> As in the Control Panel, you will have to specify the `datastore_alias` and the `name` of your bucket. 
Create your Object Storage bucket as follows: >> diff --git a/pages/public_cloud/ai_machine_learning/deploy_tuto_17_streamlit_whisper/guide.fr-fr.md b/pages/public_cloud/ai_machine_learning/deploy_tuto_17_streamlit_whisper/guide.fr-fr.md index 1751a8b7e2e..f483c3f62c1 100644 --- a/pages/public_cloud/ai_machine_learning/deploy_tuto_17_streamlit_whisper/guide.fr-fr.md +++ b/pages/public_cloud/ai_machine_learning/deploy_tuto_17_streamlit_whisper/guide.fr-fr.md @@ -94,7 +94,7 @@ You can create your Object Storage bucket using either the UI (OVHcloud Control >> > **Using ovhai CLI** >> ->> To follow this part, make sure you have installed the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/) on your computer or on an instance. +>> To follow this part, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. >> >> As in the Control Panel, you will have to specify the `datastore_alias` and the `name` of your bucket. Create your Object Storage bucket as follows: >> diff --git a/pages/public_cloud/ai_machine_learning/deploy_tuto_17_streamlit_whisper/guide.it-it.md b/pages/public_cloud/ai_machine_learning/deploy_tuto_17_streamlit_whisper/guide.it-it.md index 39e47f9bc24..cf3cea1cad9 100644 --- a/pages/public_cloud/ai_machine_learning/deploy_tuto_17_streamlit_whisper/guide.it-it.md +++ b/pages/public_cloud/ai_machine_learning/deploy_tuto_17_streamlit_whisper/guide.it-it.md @@ -94,7 +94,7 @@ You can create your Object Storage bucket using either the UI (OVHcloud Control >> > **Using ovhai CLI** >> ->> To follow this part, make sure you have installed the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/) on your computer or on an instance. +>> To follow this part, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. >> >> As in the Control Panel, you will have to specify the `datastore_alias` and the `name` of your bucket. Create your Object Storage bucket as follows: >> diff --git a/pages/public_cloud/ai_machine_learning/deploy_tuto_17_streamlit_whisper/guide.pl-pl.md b/pages/public_cloud/ai_machine_learning/deploy_tuto_17_streamlit_whisper/guide.pl-pl.md index 39e47f9bc24..cf3cea1cad9 100644 --- a/pages/public_cloud/ai_machine_learning/deploy_tuto_17_streamlit_whisper/guide.pl-pl.md +++ b/pages/public_cloud/ai_machine_learning/deploy_tuto_17_streamlit_whisper/guide.pl-pl.md @@ -94,7 +94,7 @@ You can create your Object Storage bucket using either the UI (OVHcloud Control >> > **Using ovhai CLI** >> ->> To follow this part, make sure you have installed the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/) on your computer or on an instance. +>> To follow this part, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. >> >> As in the Control Panel, you will have to specify the `datastore_alias` and the `name` of your bucket. 
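Similarly, the Object Storage bucket step referenced in the Whisper-app hunks comes down to one CLI call once the `datastore_alias` and the bucket `name` are chosen. The lines below are a hedged sketch rather than the guide's exact command: `S3GRA` and `whisper-demo` are assumed placeholder values, and the subcommand should be verified with `ovhai bucket --help`, since it can vary between CLI releases.

```console
# create the bucket on the chosen S3-compatible datastore (alias and bucket name are placeholders)
ovhai bucket create S3GRA whisper-demo

# confirm the bucket is visible from the CLI
ovhai bucket list S3GRA
```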
Create your Object Storage bucket as follows: >> diff --git a/pages/public_cloud/ai_machine_learning/deploy_tuto_17_streamlit_whisper/guide.pt-pt.md b/pages/public_cloud/ai_machine_learning/deploy_tuto_17_streamlit_whisper/guide.pt-pt.md index 39e47f9bc24..cf3cea1cad9 100644 --- a/pages/public_cloud/ai_machine_learning/deploy_tuto_17_streamlit_whisper/guide.pt-pt.md +++ b/pages/public_cloud/ai_machine_learning/deploy_tuto_17_streamlit_whisper/guide.pt-pt.md @@ -94,7 +94,7 @@ You can create your Object Storage bucket using either the UI (OVHcloud Control >> > **Using ovhai CLI** >> ->> To follow this part, make sure you have installed the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/) on your computer or on an instance. +>> To follow this part, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. >> >> As in the Control Panel, you will have to specify the `datastore_alias` and the `name` of your bucket. Create your Object Storage bucket as follows: >> diff --git a/pages/public_cloud/ai_machine_learning/endpoints_guide_04_billing_concept/guide.de-de.md b/pages/public_cloud/ai_machine_learning/endpoints_guide_04_billing_concept/guide.de-de.md index e3cee80fd10..b1507535181 100644 --- a/pages/public_cloud/ai_machine_learning/endpoints_guide_04_billing_concept/guide.de-de.md +++ b/pages/public_cloud/ai_machine_learning/endpoints_guide_04_billing_concept/guide.de-de.md @@ -33,25 +33,21 @@ By following this model lifecycle process, OVHcloud ensures that customers are w Here is the model billing overview for AI Endpoints. -> [!primary] -> -> In appreciation of their continued support, our **Beta testers will have the possibility to keep using their existing API access keys and create new ones and won't be billed until 31th May**. 
After this date, the pricing will be implemented for them and clearly outlined in the table below, which details the categories, models, and their respective pricing information: -> - | Category | Model | Price ($) | Price (€) | Unit Price | | -------------- | --------------- | ------ | ------ | ---------- | -| Large Language Model (LLM) | Llama 3.3 70B Instruct | 0.70 | 0.67 | per 1M tokens | -| Large Language Model (LLM) | Llama 3.1 70B Instruct | 0.70 | 0.67 | per 1M tokens | -| Large Language Model (LLM) | Mixtral 8x7B Instruct v0.1 | 0.65 | 0.63 | per 1M tokens | +| Large Language Model (LLM) | Llama 3.3 70B Instruct | 0.74 | 0.67 | per 1M tokens | +| Large Language Model (LLM) | Llama 3.1 70B Instruct | 0.74 | 0.67 | per 1M tokens | +| Large Language Model (LLM) | Mixtral 8x7B Instruct v0.1 | 0.70 | 0.63 | per 1M tokens | | Large Language Model (LLM) | Mistral-Nemo-Instruct-2407 | 0.14 | 0.13 | per 1M tokens | | Large Language Model (LLM) | Llama 3.1 8B Instruct | 0.10 | 0.10 | per 1M tokens | -| Large Language Model (LLM) | Mistral 7B Instruct v0.3 | 0.10 | 0.10 | per 1M tokens | -| Reasoning LLM | DeepSeek R1 | Free | Free | per 1M tokens | -| Reasoning LLM | DeepSeek R1 Distill Llama 70B | 0.70 | 0.67 | per 1M tokens | -| Code LLM | Qwen2.5 Coder 32B Instruct | 0.90 | 0.87 | per 1M tokens | -| Code LLM | Mamba Codestral 7B v0.1 | 0.20 | 0.19 | per 1M tokens | -| Visual LLM | Qwen2.5 VL 72B Instruct | 0.95 | 0.91 | per 1M tokens | -| Visual LLM | Llava Next Mistral 7B | 0.30 | 0.29 | per 1M tokens | +| Large Language Model (LLM) | Mistral 7B Instruct v0.3 | 0.11 | 0.10 | per 1M tokens | +| Reasoning LLM | DeepSeek R1 Distill Llama 70B | 0.74 | 0.67 | per 1M tokens | +| Reasoning LLM | Qwen3 32B | 0.09 | 0.08 | per 1M tokens | +| Code LLM | Qwen2.5 Coder 32B Instruct | 0.96 | 0.87 | per 1M tokens | +| Code LLM | Mamba Codestral 7B v0.1 | 0.21 | 0.19 | per 1M tokens | +| Visual LLM | Mistral Small 3.2 24B Instruct 2506 | 0.10 | 0.09 | per 1M tokens | +| Visual LLM | Qwen2.5 VL 72B Instruct | 1.01 | 0.91 | per 1M tokens | +| Visual LLM | Llava Next Mistral 7B | 0.32 | 0.29 | per 1M tokens | | Embeddings | BGE Multilingual Gemma2 | 0.01 | 0.01 | per 1M tokens | | Embeddings | BGE-M3 | 0.01 | 0.01 | per 1M tokens | | Embeddings | BGE Base EN v1.5 | 0.01 | 0.005 | per 1M tokens | diff --git a/pages/public_cloud/ai_machine_learning/endpoints_guide_04_billing_concept/guide.en-asia.md b/pages/public_cloud/ai_machine_learning/endpoints_guide_04_billing_concept/guide.en-asia.md index e3cee80fd10..b1507535181 100644 --- a/pages/public_cloud/ai_machine_learning/endpoints_guide_04_billing_concept/guide.en-asia.md +++ b/pages/public_cloud/ai_machine_learning/endpoints_guide_04_billing_concept/guide.en-asia.md @@ -33,25 +33,21 @@ By following this model lifecycle process, OVHcloud ensures that customers are w Here is the model billing overview for AI Endpoints. -> [!primary] -> -> In appreciation of their continued support, our **Beta testers will have the possibility to keep using their existing API access keys and create new ones and won't be billed until 31th May**. 
After this date, the pricing will be implemented for them and clearly outlined in the table below, which details the categories, models, and their respective pricing information: -> - | Category | Model | Price ($) | Price (€) | Unit Price | | -------------- | --------------- | ------ | ------ | ---------- | -| Large Language Model (LLM) | Llama 3.3 70B Instruct | 0.70 | 0.67 | per 1M tokens | -| Large Language Model (LLM) | Llama 3.1 70B Instruct | 0.70 | 0.67 | per 1M tokens | -| Large Language Model (LLM) | Mixtral 8x7B Instruct v0.1 | 0.65 | 0.63 | per 1M tokens | +| Large Language Model (LLM) | Llama 3.3 70B Instruct | 0.74 | 0.67 | per 1M tokens | +| Large Language Model (LLM) | Llama 3.1 70B Instruct | 0.74 | 0.67 | per 1M tokens | +| Large Language Model (LLM) | Mixtral 8x7B Instruct v0.1 | 0.70 | 0.63 | per 1M tokens | | Large Language Model (LLM) | Mistral-Nemo-Instruct-2407 | 0.14 | 0.13 | per 1M tokens | | Large Language Model (LLM) | Llama 3.1 8B Instruct | 0.10 | 0.10 | per 1M tokens | -| Large Language Model (LLM) | Mistral 7B Instruct v0.3 | 0.10 | 0.10 | per 1M tokens | -| Reasoning LLM | DeepSeek R1 | Free | Free | per 1M tokens | -| Reasoning LLM | DeepSeek R1 Distill Llama 70B | 0.70 | 0.67 | per 1M tokens | -| Code LLM | Qwen2.5 Coder 32B Instruct | 0.90 | 0.87 | per 1M tokens | -| Code LLM | Mamba Codestral 7B v0.1 | 0.20 | 0.19 | per 1M tokens | -| Visual LLM | Qwen2.5 VL 72B Instruct | 0.95 | 0.91 | per 1M tokens | -| Visual LLM | Llava Next Mistral 7B | 0.30 | 0.29 | per 1M tokens | +| Large Language Model (LLM) | Mistral 7B Instruct v0.3 | 0.11 | 0.10 | per 1M tokens | +| Reasoning LLM | DeepSeek R1 Distill Llama 70B | 0.74 | 0.67 | per 1M tokens | +| Reasoning LLM | Qwen3 32B | 0.09 | 0.08 | per 1M tokens | +| Code LLM | Qwen2.5 Coder 32B Instruct | 0.96 | 0.87 | per 1M tokens | +| Code LLM | Mamba Codestral 7B v0.1 | 0.21 | 0.19 | per 1M tokens | +| Visual LLM | Mistral Small 3.2 24B Instruct 2506 | 0.10 | 0.09 | per 1M tokens | +| Visual LLM | Qwen2.5 VL 72B Instruct | 1.01 | 0.91 | per 1M tokens | +| Visual LLM | Llava Next Mistral 7B | 0.32 | 0.29 | per 1M tokens | | Embeddings | BGE Multilingual Gemma2 | 0.01 | 0.01 | per 1M tokens | | Embeddings | BGE-M3 | 0.01 | 0.01 | per 1M tokens | | Embeddings | BGE Base EN v1.5 | 0.01 | 0.005 | per 1M tokens | diff --git a/pages/public_cloud/ai_machine_learning/endpoints_guide_04_billing_concept/guide.en-au.md b/pages/public_cloud/ai_machine_learning/endpoints_guide_04_billing_concept/guide.en-au.md index e3cee80fd10..b1507535181 100644 --- a/pages/public_cloud/ai_machine_learning/endpoints_guide_04_billing_concept/guide.en-au.md +++ b/pages/public_cloud/ai_machine_learning/endpoints_guide_04_billing_concept/guide.en-au.md @@ -33,25 +33,21 @@ By following this model lifecycle process, OVHcloud ensures that customers are w Here is the model billing overview for AI Endpoints. -> [!primary] -> -> In appreciation of their continued support, our **Beta testers will have the possibility to keep using their existing API access keys and create new ones and won't be billed until 31th May**. 
After this date, the pricing will be implemented for them and clearly outlined in the table below, which details the categories, models, and their respective pricing information: -> - | Category | Model | Price ($) | Price (€) | Unit Price | | -------------- | --------------- | ------ | ------ | ---------- | -| Large Language Model (LLM) | Llama 3.3 70B Instruct | 0.70 | 0.67 | per 1M tokens | -| Large Language Model (LLM) | Llama 3.1 70B Instruct | 0.70 | 0.67 | per 1M tokens | -| Large Language Model (LLM) | Mixtral 8x7B Instruct v0.1 | 0.65 | 0.63 | per 1M tokens | +| Large Language Model (LLM) | Llama 3.3 70B Instruct | 0.74 | 0.67 | per 1M tokens | +| Large Language Model (LLM) | Llama 3.1 70B Instruct | 0.74 | 0.67 | per 1M tokens | +| Large Language Model (LLM) | Mixtral 8x7B Instruct v0.1 | 0.70 | 0.63 | per 1M tokens | | Large Language Model (LLM) | Mistral-Nemo-Instruct-2407 | 0.14 | 0.13 | per 1M tokens | | Large Language Model (LLM) | Llama 3.1 8B Instruct | 0.10 | 0.10 | per 1M tokens | -| Large Language Model (LLM) | Mistral 7B Instruct v0.3 | 0.10 | 0.10 | per 1M tokens | -| Reasoning LLM | DeepSeek R1 | Free | Free | per 1M tokens | -| Reasoning LLM | DeepSeek R1 Distill Llama 70B | 0.70 | 0.67 | per 1M tokens | -| Code LLM | Qwen2.5 Coder 32B Instruct | 0.90 | 0.87 | per 1M tokens | -| Code LLM | Mamba Codestral 7B v0.1 | 0.20 | 0.19 | per 1M tokens | -| Visual LLM | Qwen2.5 VL 72B Instruct | 0.95 | 0.91 | per 1M tokens | -| Visual LLM | Llava Next Mistral 7B | 0.30 | 0.29 | per 1M tokens | +| Large Language Model (LLM) | Mistral 7B Instruct v0.3 | 0.11 | 0.10 | per 1M tokens | +| Reasoning LLM | DeepSeek R1 Distill Llama 70B | 0.74 | 0.67 | per 1M tokens | +| Reasoning LLM | Qwen3 32B | 0.09 | 0.08 | per 1M tokens | +| Code LLM | Qwen2.5 Coder 32B Instruct | 0.96 | 0.87 | per 1M tokens | +| Code LLM | Mamba Codestral 7B v0.1 | 0.21 | 0.19 | per 1M tokens | +| Visual LLM | Mistral Small 3.2 24B Instruct 2506 | 0.10 | 0.09 | per 1M tokens | +| Visual LLM | Qwen2.5 VL 72B Instruct | 1.01 | 0.91 | per 1M tokens | +| Visual LLM | Llava Next Mistral 7B | 0.32 | 0.29 | per 1M tokens | | Embeddings | BGE Multilingual Gemma2 | 0.01 | 0.01 | per 1M tokens | | Embeddings | BGE-M3 | 0.01 | 0.01 | per 1M tokens | | Embeddings | BGE Base EN v1.5 | 0.01 | 0.005 | per 1M tokens | diff --git a/pages/public_cloud/ai_machine_learning/endpoints_guide_04_billing_concept/guide.en-ca.md b/pages/public_cloud/ai_machine_learning/endpoints_guide_04_billing_concept/guide.en-ca.md index e3cee80fd10..b1507535181 100644 --- a/pages/public_cloud/ai_machine_learning/endpoints_guide_04_billing_concept/guide.en-ca.md +++ b/pages/public_cloud/ai_machine_learning/endpoints_guide_04_billing_concept/guide.en-ca.md @@ -33,25 +33,21 @@ By following this model lifecycle process, OVHcloud ensures that customers are w Here is the model billing overview for AI Endpoints. -> [!primary] -> -> In appreciation of their continued support, our **Beta testers will have the possibility to keep using their existing API access keys and create new ones and won't be billed until 31th May**. 
After this date, the pricing will be implemented for them and clearly outlined in the table below, which details the categories, models, and their respective pricing information: -> - | Category | Model | Price ($) | Price (€) | Unit Price | | -------------- | --------------- | ------ | ------ | ---------- | -| Large Language Model (LLM) | Llama 3.3 70B Instruct | 0.70 | 0.67 | per 1M tokens | -| Large Language Model (LLM) | Llama 3.1 70B Instruct | 0.70 | 0.67 | per 1M tokens | -| Large Language Model (LLM) | Mixtral 8x7B Instruct v0.1 | 0.65 | 0.63 | per 1M tokens | +| Large Language Model (LLM) | Llama 3.3 70B Instruct | 0.74 | 0.67 | per 1M tokens | +| Large Language Model (LLM) | Llama 3.1 70B Instruct | 0.74 | 0.67 | per 1M tokens | +| Large Language Model (LLM) | Mixtral 8x7B Instruct v0.1 | 0.70 | 0.63 | per 1M tokens | | Large Language Model (LLM) | Mistral-Nemo-Instruct-2407 | 0.14 | 0.13 | per 1M tokens | | Large Language Model (LLM) | Llama 3.1 8B Instruct | 0.10 | 0.10 | per 1M tokens | -| Large Language Model (LLM) | Mistral 7B Instruct v0.3 | 0.10 | 0.10 | per 1M tokens | -| Reasoning LLM | DeepSeek R1 | Free | Free | per 1M tokens | -| Reasoning LLM | DeepSeek R1 Distill Llama 70B | 0.70 | 0.67 | per 1M tokens | -| Code LLM | Qwen2.5 Coder 32B Instruct | 0.90 | 0.87 | per 1M tokens | -| Code LLM | Mamba Codestral 7B v0.1 | 0.20 | 0.19 | per 1M tokens | -| Visual LLM | Qwen2.5 VL 72B Instruct | 0.95 | 0.91 | per 1M tokens | -| Visual LLM | Llava Next Mistral 7B | 0.30 | 0.29 | per 1M tokens | +| Large Language Model (LLM) | Mistral 7B Instruct v0.3 | 0.11 | 0.10 | per 1M tokens | +| Reasoning LLM | DeepSeek R1 Distill Llama 70B | 0.74 | 0.67 | per 1M tokens | +| Reasoning LLM | Qwen3 32B | 0.09 | 0.08 | per 1M tokens | +| Code LLM | Qwen2.5 Coder 32B Instruct | 0.96 | 0.87 | per 1M tokens | +| Code LLM | Mamba Codestral 7B v0.1 | 0.21 | 0.19 | per 1M tokens | +| Visual LLM | Mistral Small 3.2 24B Instruct 2506 | 0.10 | 0.09 | per 1M tokens | +| Visual LLM | Qwen2.5 VL 72B Instruct | 1.01 | 0.91 | per 1M tokens | +| Visual LLM | Llava Next Mistral 7B | 0.32 | 0.29 | per 1M tokens | | Embeddings | BGE Multilingual Gemma2 | 0.01 | 0.01 | per 1M tokens | | Embeddings | BGE-M3 | 0.01 | 0.01 | per 1M tokens | | Embeddings | BGE Base EN v1.5 | 0.01 | 0.005 | per 1M tokens | diff --git a/pages/public_cloud/ai_machine_learning/endpoints_guide_04_billing_concept/guide.en-gb.md b/pages/public_cloud/ai_machine_learning/endpoints_guide_04_billing_concept/guide.en-gb.md index e3cee80fd10..387598c897e 100644 --- a/pages/public_cloud/ai_machine_learning/endpoints_guide_04_billing_concept/guide.en-gb.md +++ b/pages/public_cloud/ai_machine_learning/endpoints_guide_04_billing_concept/guide.en-gb.md @@ -31,27 +31,23 @@ By following this model lifecycle process, OVHcloud ensures that customers are w ## Billing principles -Here is the model billing overview for AI Endpoints. - -> [!primary] -> -> In appreciation of their continued support, our **Beta testers will have the possibility to keep using their existing API access keys and create new ones and won't be billed until 31th May**. 
After this date, the pricing will be implemented for them and clearly outlined in the table below, which details the categories, models, and their respective pricing information: -> +Here is the model billing overview for AI Endpoints: | Category | Model | Price ($) | Price (€) | Unit Price | | -------------- | --------------- | ------ | ------ | ---------- | -| Large Language Model (LLM) | Llama 3.3 70B Instruct | 0.70 | 0.67 | per 1M tokens | -| Large Language Model (LLM) | Llama 3.1 70B Instruct | 0.70 | 0.67 | per 1M tokens | -| Large Language Model (LLM) | Mixtral 8x7B Instruct v0.1 | 0.65 | 0.63 | per 1M tokens | +| Large Language Model (LLM) | Llama 3.3 70B Instruct | 0.74 | 0.67 | per 1M tokens | +| Large Language Model (LLM) | Llama 3.1 70B Instruct | 0.74 | 0.67 | per 1M tokens | +| Large Language Model (LLM) | Mixtral 8x7B Instruct v0.1 | 0.70 | 0.63 | per 1M tokens | | Large Language Model (LLM) | Mistral-Nemo-Instruct-2407 | 0.14 | 0.13 | per 1M tokens | -| Large Language Model (LLM) | Llama 3.1 8B Instruct | 0.10 | 0.10 | per 1M tokens | -| Large Language Model (LLM) | Mistral 7B Instruct v0.3 | 0.10 | 0.10 | per 1M tokens | -| Reasoning LLM | DeepSeek R1 | Free | Free | per 1M tokens | -| Reasoning LLM | DeepSeek R1 Distill Llama 70B | 0.70 | 0.67 | per 1M tokens | -| Code LLM | Qwen2.5 Coder 32B Instruct | 0.90 | 0.87 | per 1M tokens | -| Code LLM | Mamba Codestral 7B v0.1 | 0.20 | 0.19 | per 1M tokens | -| Visual LLM | Qwen2.5 VL 72B Instruct | 0.95 | 0.91 | per 1M tokens | -| Visual LLM | Llava Next Mistral 7B | 0.30 | 0.29 | per 1M tokens | +| Large Language Model (LLM) | Llama 3.1 8B Instruct | 0.11 | 0.10 | per 1M tokens | +| Large Language Model (LLM) | Mistral 7B Instruct v0.3 | 0.11 | 0.10 | per 1M tokens | +| Reasoning LLM | DeepSeek R1 Distill Llama 70B | 0.74 | 0.67 | per 1M tokens | +| Reasoning LLM | Qwen3 32B | 0.09 | 0.08 | per 1M tokens | +| Code LLM | Qwen2.5 Coder 32B Instruct | 0.96 | 0.87 | per 1M tokens | +| Code LLM | Mamba Codestral 7B v0.1 | 0.21 | 0.19 | per 1M tokens | +| Visual LLM | Mistral Small 3.2 24B Instruct 2506 | 0.10 | 0.09 | per 1M tokens | +| Visual LLM | Qwen2.5 VL 72B Instruct | 1.01 | 0.91 | per 1M tokens | +| Visual LLM | Llava Next Mistral 7B | 0.32 | 0.29 | per 1M tokens | | Embeddings | BGE Multilingual Gemma2 | 0.01 | 0.01 | per 1M tokens | | Embeddings | BGE-M3 | 0.01 | 0.01 | per 1M tokens | | Embeddings | BGE Base EN v1.5 | 0.01 | 0.005 | per 1M tokens | diff --git a/pages/public_cloud/ai_machine_learning/endpoints_guide_04_billing_concept/guide.en-ie.md b/pages/public_cloud/ai_machine_learning/endpoints_guide_04_billing_concept/guide.en-ie.md index e3cee80fd10..b1507535181 100644 --- a/pages/public_cloud/ai_machine_learning/endpoints_guide_04_billing_concept/guide.en-ie.md +++ b/pages/public_cloud/ai_machine_learning/endpoints_guide_04_billing_concept/guide.en-ie.md @@ -33,25 +33,21 @@ By following this model lifecycle process, OVHcloud ensures that customers are w Here is the model billing overview for AI Endpoints. -> [!primary] -> -> In appreciation of their continued support, our **Beta testers will have the possibility to keep using their existing API access keys and create new ones and won't be billed until 31th May**. 
After this date, the pricing will be implemented for them and clearly outlined in the table below, which details the categories, models, and their respective pricing information: -> - | Category | Model | Price ($) | Price (€) | Unit Price | | -------------- | --------------- | ------ | ------ | ---------- | -| Large Language Model (LLM) | Llama 3.3 70B Instruct | 0.70 | 0.67 | per 1M tokens | -| Large Language Model (LLM) | Llama 3.1 70B Instruct | 0.70 | 0.67 | per 1M tokens | -| Large Language Model (LLM) | Mixtral 8x7B Instruct v0.1 | 0.65 | 0.63 | per 1M tokens | +| Large Language Model (LLM) | Llama 3.3 70B Instruct | 0.74 | 0.67 | per 1M tokens | +| Large Language Model (LLM) | Llama 3.1 70B Instruct | 0.74 | 0.67 | per 1M tokens | +| Large Language Model (LLM) | Mixtral 8x7B Instruct v0.1 | 0.70 | 0.63 | per 1M tokens | | Large Language Model (LLM) | Mistral-Nemo-Instruct-2407 | 0.14 | 0.13 | per 1M tokens | | Large Language Model (LLM) | Llama 3.1 8B Instruct | 0.10 | 0.10 | per 1M tokens | -| Large Language Model (LLM) | Mistral 7B Instruct v0.3 | 0.10 | 0.10 | per 1M tokens | -| Reasoning LLM | DeepSeek R1 | Free | Free | per 1M tokens | -| Reasoning LLM | DeepSeek R1 Distill Llama 70B | 0.70 | 0.67 | per 1M tokens | -| Code LLM | Qwen2.5 Coder 32B Instruct | 0.90 | 0.87 | per 1M tokens | -| Code LLM | Mamba Codestral 7B v0.1 | 0.20 | 0.19 | per 1M tokens | -| Visual LLM | Qwen2.5 VL 72B Instruct | 0.95 | 0.91 | per 1M tokens | -| Visual LLM | Llava Next Mistral 7B | 0.30 | 0.29 | per 1M tokens | +| Large Language Model (LLM) | Mistral 7B Instruct v0.3 | 0.11 | 0.10 | per 1M tokens | +| Reasoning LLM | DeepSeek R1 Distill Llama 70B | 0.74 | 0.67 | per 1M tokens | +| Reasoning LLM | Qwen3 32B | 0.09 | 0.08 | per 1M tokens | +| Code LLM | Qwen2.5 Coder 32B Instruct | 0.96 | 0.87 | per 1M tokens | +| Code LLM | Mamba Codestral 7B v0.1 | 0.21 | 0.19 | per 1M tokens | +| Visual LLM | Mistral Small 3.2 24B Instruct 2506 | 0.10 | 0.09 | per 1M tokens | +| Visual LLM | Qwen2.5 VL 72B Instruct | 1.01 | 0.91 | per 1M tokens | +| Visual LLM | Llava Next Mistral 7B | 0.32 | 0.29 | per 1M tokens | | Embeddings | BGE Multilingual Gemma2 | 0.01 | 0.01 | per 1M tokens | | Embeddings | BGE-M3 | 0.01 | 0.01 | per 1M tokens | | Embeddings | BGE Base EN v1.5 | 0.01 | 0.005 | per 1M tokens | diff --git a/pages/public_cloud/ai_machine_learning/endpoints_guide_04_billing_concept/guide.en-sg.md b/pages/public_cloud/ai_machine_learning/endpoints_guide_04_billing_concept/guide.en-sg.md index e3cee80fd10..b1507535181 100644 --- a/pages/public_cloud/ai_machine_learning/endpoints_guide_04_billing_concept/guide.en-sg.md +++ b/pages/public_cloud/ai_machine_learning/endpoints_guide_04_billing_concept/guide.en-sg.md @@ -33,25 +33,21 @@ By following this model lifecycle process, OVHcloud ensures that customers are w Here is the model billing overview for AI Endpoints. -> [!primary] -> -> In appreciation of their continued support, our **Beta testers will have the possibility to keep using their existing API access keys and create new ones and won't be billed until 31th May**. 
After this date, the pricing will be implemented for them and clearly outlined in the table below, which details the categories, models, and their respective pricing information: -> - | Category | Model | Price ($) | Price (€) | Unit Price | | -------------- | --------------- | ------ | ------ | ---------- | -| Large Language Model (LLM) | Llama 3.3 70B Instruct | 0.70 | 0.67 | per 1M tokens | -| Large Language Model (LLM) | Llama 3.1 70B Instruct | 0.70 | 0.67 | per 1M tokens | -| Large Language Model (LLM) | Mixtral 8x7B Instruct v0.1 | 0.65 | 0.63 | per 1M tokens | +| Large Language Model (LLM) | Llama 3.3 70B Instruct | 0.74 | 0.67 | per 1M tokens | +| Large Language Model (LLM) | Llama 3.1 70B Instruct | 0.74 | 0.67 | per 1M tokens | +| Large Language Model (LLM) | Mixtral 8x7B Instruct v0.1 | 0.70 | 0.63 | per 1M tokens | | Large Language Model (LLM) | Mistral-Nemo-Instruct-2407 | 0.14 | 0.13 | per 1M tokens | | Large Language Model (LLM) | Llama 3.1 8B Instruct | 0.10 | 0.10 | per 1M tokens | -| Large Language Model (LLM) | Mistral 7B Instruct v0.3 | 0.10 | 0.10 | per 1M tokens | -| Reasoning LLM | DeepSeek R1 | Free | Free | per 1M tokens | -| Reasoning LLM | DeepSeek R1 Distill Llama 70B | 0.70 | 0.67 | per 1M tokens | -| Code LLM | Qwen2.5 Coder 32B Instruct | 0.90 | 0.87 | per 1M tokens | -| Code LLM | Mamba Codestral 7B v0.1 | 0.20 | 0.19 | per 1M tokens | -| Visual LLM | Qwen2.5 VL 72B Instruct | 0.95 | 0.91 | per 1M tokens | -| Visual LLM | Llava Next Mistral 7B | 0.30 | 0.29 | per 1M tokens | +| Large Language Model (LLM) | Mistral 7B Instruct v0.3 | 0.11 | 0.10 | per 1M tokens | +| Reasoning LLM | DeepSeek R1 Distill Llama 70B | 0.74 | 0.67 | per 1M tokens | +| Reasoning LLM | Qwen3 32B | 0.09 | 0.08 | per 1M tokens | +| Code LLM | Qwen2.5 Coder 32B Instruct | 0.96 | 0.87 | per 1M tokens | +| Code LLM | Mamba Codestral 7B v0.1 | 0.21 | 0.19 | per 1M tokens | +| Visual LLM | Mistral Small 3.2 24B Instruct 2506 | 0.10 | 0.09 | per 1M tokens | +| Visual LLM | Qwen2.5 VL 72B Instruct | 1.01 | 0.91 | per 1M tokens | +| Visual LLM | Llava Next Mistral 7B | 0.32 | 0.29 | per 1M tokens | | Embeddings | BGE Multilingual Gemma2 | 0.01 | 0.01 | per 1M tokens | | Embeddings | BGE-M3 | 0.01 | 0.01 | per 1M tokens | | Embeddings | BGE Base EN v1.5 | 0.01 | 0.005 | per 1M tokens | diff --git a/pages/public_cloud/ai_machine_learning/endpoints_guide_04_billing_concept/guide.en-us.md b/pages/public_cloud/ai_machine_learning/endpoints_guide_04_billing_concept/guide.en-us.md index e3cee80fd10..b1507535181 100644 --- a/pages/public_cloud/ai_machine_learning/endpoints_guide_04_billing_concept/guide.en-us.md +++ b/pages/public_cloud/ai_machine_learning/endpoints_guide_04_billing_concept/guide.en-us.md @@ -33,25 +33,21 @@ By following this model lifecycle process, OVHcloud ensures that customers are w Here is the model billing overview for AI Endpoints. -> [!primary] -> -> In appreciation of their continued support, our **Beta testers will have the possibility to keep using their existing API access keys and create new ones and won't be billed until 31th May**. 
After this date, the pricing will be implemented for them and clearly outlined in the table below, which details the categories, models, and their respective pricing information: -> - | Category | Model | Price ($) | Price (€) | Unit Price | | -------------- | --------------- | ------ | ------ | ---------- | -| Large Language Model (LLM) | Llama 3.3 70B Instruct | 0.70 | 0.67 | per 1M tokens | -| Large Language Model (LLM) | Llama 3.1 70B Instruct | 0.70 | 0.67 | per 1M tokens | -| Large Language Model (LLM) | Mixtral 8x7B Instruct v0.1 | 0.65 | 0.63 | per 1M tokens | +| Large Language Model (LLM) | Llama 3.3 70B Instruct | 0.74 | 0.67 | per 1M tokens | +| Large Language Model (LLM) | Llama 3.1 70B Instruct | 0.74 | 0.67 | per 1M tokens | +| Large Language Model (LLM) | Mixtral 8x7B Instruct v0.1 | 0.70 | 0.63 | per 1M tokens | | Large Language Model (LLM) | Mistral-Nemo-Instruct-2407 | 0.14 | 0.13 | per 1M tokens | | Large Language Model (LLM) | Llama 3.1 8B Instruct | 0.10 | 0.10 | per 1M tokens | -| Large Language Model (LLM) | Mistral 7B Instruct v0.3 | 0.10 | 0.10 | per 1M tokens | -| Reasoning LLM | DeepSeek R1 | Free | Free | per 1M tokens | -| Reasoning LLM | DeepSeek R1 Distill Llama 70B | 0.70 | 0.67 | per 1M tokens | -| Code LLM | Qwen2.5 Coder 32B Instruct | 0.90 | 0.87 | per 1M tokens | -| Code LLM | Mamba Codestral 7B v0.1 | 0.20 | 0.19 | per 1M tokens | -| Visual LLM | Qwen2.5 VL 72B Instruct | 0.95 | 0.91 | per 1M tokens | -| Visual LLM | Llava Next Mistral 7B | 0.30 | 0.29 | per 1M tokens | +| Large Language Model (LLM) | Mistral 7B Instruct v0.3 | 0.11 | 0.10 | per 1M tokens | +| Reasoning LLM | DeepSeek R1 Distill Llama 70B | 0.74 | 0.67 | per 1M tokens | +| Reasoning LLM | Qwen3 32B | 0.09 | 0.08 | per 1M tokens | +| Code LLM | Qwen2.5 Coder 32B Instruct | 0.96 | 0.87 | per 1M tokens | +| Code LLM | Mamba Codestral 7B v0.1 | 0.21 | 0.19 | per 1M tokens | +| Visual LLM | Mistral Small 3.2 24B Instruct 2506 | 0.10 | 0.09 | per 1M tokens | +| Visual LLM | Qwen2.5 VL 72B Instruct | 1.01 | 0.91 | per 1M tokens | +| Visual LLM | Llava Next Mistral 7B | 0.32 | 0.29 | per 1M tokens | | Embeddings | BGE Multilingual Gemma2 | 0.01 | 0.01 | per 1M tokens | | Embeddings | BGE-M3 | 0.01 | 0.01 | per 1M tokens | | Embeddings | BGE Base EN v1.5 | 0.01 | 0.005 | per 1M tokens | diff --git a/pages/public_cloud/ai_machine_learning/endpoints_guide_04_billing_concept/guide.es-es.md b/pages/public_cloud/ai_machine_learning/endpoints_guide_04_billing_concept/guide.es-es.md index e3cee80fd10..b1507535181 100644 --- a/pages/public_cloud/ai_machine_learning/endpoints_guide_04_billing_concept/guide.es-es.md +++ b/pages/public_cloud/ai_machine_learning/endpoints_guide_04_billing_concept/guide.es-es.md @@ -33,25 +33,21 @@ By following this model lifecycle process, OVHcloud ensures that customers are w Here is the model billing overview for AI Endpoints. -> [!primary] -> -> In appreciation of their continued support, our **Beta testers will have the possibility to keep using their existing API access keys and create new ones and won't be billed until 31th May**. 
After this date, the pricing will be implemented for them and clearly outlined in the table below, which details the categories, models, and their respective pricing information: -> - | Category | Model | Price ($) | Price (€) | Unit Price | | -------------- | --------------- | ------ | ------ | ---------- | -| Large Language Model (LLM) | Llama 3.3 70B Instruct | 0.70 | 0.67 | per 1M tokens | -| Large Language Model (LLM) | Llama 3.1 70B Instruct | 0.70 | 0.67 | per 1M tokens | -| Large Language Model (LLM) | Mixtral 8x7B Instruct v0.1 | 0.65 | 0.63 | per 1M tokens | +| Large Language Model (LLM) | Llama 3.3 70B Instruct | 0.74 | 0.67 | per 1M tokens | +| Large Language Model (LLM) | Llama 3.1 70B Instruct | 0.74 | 0.67 | per 1M tokens | +| Large Language Model (LLM) | Mixtral 8x7B Instruct v0.1 | 0.70 | 0.63 | per 1M tokens | | Large Language Model (LLM) | Mistral-Nemo-Instruct-2407 | 0.14 | 0.13 | per 1M tokens | | Large Language Model (LLM) | Llama 3.1 8B Instruct | 0.10 | 0.10 | per 1M tokens | -| Large Language Model (LLM) | Mistral 7B Instruct v0.3 | 0.10 | 0.10 | per 1M tokens | -| Reasoning LLM | DeepSeek R1 | Free | Free | per 1M tokens | -| Reasoning LLM | DeepSeek R1 Distill Llama 70B | 0.70 | 0.67 | per 1M tokens | -| Code LLM | Qwen2.5 Coder 32B Instruct | 0.90 | 0.87 | per 1M tokens | -| Code LLM | Mamba Codestral 7B v0.1 | 0.20 | 0.19 | per 1M tokens | -| Visual LLM | Qwen2.5 VL 72B Instruct | 0.95 | 0.91 | per 1M tokens | -| Visual LLM | Llava Next Mistral 7B | 0.30 | 0.29 | per 1M tokens | +| Large Language Model (LLM) | Mistral 7B Instruct v0.3 | 0.11 | 0.10 | per 1M tokens | +| Reasoning LLM | DeepSeek R1 Distill Llama 70B | 0.74 | 0.67 | per 1M tokens | +| Reasoning LLM | Qwen3 32B | 0.09 | 0.08 | per 1M tokens | +| Code LLM | Qwen2.5 Coder 32B Instruct | 0.96 | 0.87 | per 1M tokens | +| Code LLM | Mamba Codestral 7B v0.1 | 0.21 | 0.19 | per 1M tokens | +| Visual LLM | Mistral Small 3.2 24B Instruct 2506 | 0.10 | 0.09 | per 1M tokens | +| Visual LLM | Qwen2.5 VL 72B Instruct | 1.01 | 0.91 | per 1M tokens | +| Visual LLM | Llava Next Mistral 7B | 0.32 | 0.29 | per 1M tokens | | Embeddings | BGE Multilingual Gemma2 | 0.01 | 0.01 | per 1M tokens | | Embeddings | BGE-M3 | 0.01 | 0.01 | per 1M tokens | | Embeddings | BGE Base EN v1.5 | 0.01 | 0.005 | per 1M tokens | diff --git a/pages/public_cloud/ai_machine_learning/endpoints_guide_04_billing_concept/guide.es-us.md b/pages/public_cloud/ai_machine_learning/endpoints_guide_04_billing_concept/guide.es-us.md index e3cee80fd10..b1507535181 100644 --- a/pages/public_cloud/ai_machine_learning/endpoints_guide_04_billing_concept/guide.es-us.md +++ b/pages/public_cloud/ai_machine_learning/endpoints_guide_04_billing_concept/guide.es-us.md @@ -33,25 +33,21 @@ By following this model lifecycle process, OVHcloud ensures that customers are w Here is the model billing overview for AI Endpoints. -> [!primary] -> -> In appreciation of their continued support, our **Beta testers will have the possibility to keep using their existing API access keys and create new ones and won't be billed until 31th May**. 
After this date, the pricing will be implemented for them and clearly outlined in the table below, which details the categories, models, and their respective pricing information: -> - | Category | Model | Price ($) | Price (€) | Unit Price | | -------------- | --------------- | ------ | ------ | ---------- | -| Large Language Model (LLM) | Llama 3.3 70B Instruct | 0.70 | 0.67 | per 1M tokens | -| Large Language Model (LLM) | Llama 3.1 70B Instruct | 0.70 | 0.67 | per 1M tokens | -| Large Language Model (LLM) | Mixtral 8x7B Instruct v0.1 | 0.65 | 0.63 | per 1M tokens | +| Large Language Model (LLM) | Llama 3.3 70B Instruct | 0.74 | 0.67 | per 1M tokens | +| Large Language Model (LLM) | Llama 3.1 70B Instruct | 0.74 | 0.67 | per 1M tokens | +| Large Language Model (LLM) | Mixtral 8x7B Instruct v0.1 | 0.70 | 0.63 | per 1M tokens | | Large Language Model (LLM) | Mistral-Nemo-Instruct-2407 | 0.14 | 0.13 | per 1M tokens | | Large Language Model (LLM) | Llama 3.1 8B Instruct | 0.10 | 0.10 | per 1M tokens | -| Large Language Model (LLM) | Mistral 7B Instruct v0.3 | 0.10 | 0.10 | per 1M tokens | -| Reasoning LLM | DeepSeek R1 | Free | Free | per 1M tokens | -| Reasoning LLM | DeepSeek R1 Distill Llama 70B | 0.70 | 0.67 | per 1M tokens | -| Code LLM | Qwen2.5 Coder 32B Instruct | 0.90 | 0.87 | per 1M tokens | -| Code LLM | Mamba Codestral 7B v0.1 | 0.20 | 0.19 | per 1M tokens | -| Visual LLM | Qwen2.5 VL 72B Instruct | 0.95 | 0.91 | per 1M tokens | -| Visual LLM | Llava Next Mistral 7B | 0.30 | 0.29 | per 1M tokens | +| Large Language Model (LLM) | Mistral 7B Instruct v0.3 | 0.11 | 0.10 | per 1M tokens | +| Reasoning LLM | DeepSeek R1 Distill Llama 70B | 0.74 | 0.67 | per 1M tokens | +| Reasoning LLM | Qwen3 32B | 0.09 | 0.08 | per 1M tokens | +| Code LLM | Qwen2.5 Coder 32B Instruct | 0.96 | 0.87 | per 1M tokens | +| Code LLM | Mamba Codestral 7B v0.1 | 0.21 | 0.19 | per 1M tokens | +| Visual LLM | Mistral Small 3.2 24B Instruct 2506 | 0.10 | 0.09 | per 1M tokens | +| Visual LLM | Qwen2.5 VL 72B Instruct | 1.01 | 0.91 | per 1M tokens | +| Visual LLM | Llava Next Mistral 7B | 0.32 | 0.29 | per 1M tokens | | Embeddings | BGE Multilingual Gemma2 | 0.01 | 0.01 | per 1M tokens | | Embeddings | BGE-M3 | 0.01 | 0.01 | per 1M tokens | | Embeddings | BGE Base EN v1.5 | 0.01 | 0.005 | per 1M tokens | diff --git a/pages/public_cloud/ai_machine_learning/endpoints_guide_04_billing_concept/guide.fr-ca.md b/pages/public_cloud/ai_machine_learning/endpoints_guide_04_billing_concept/guide.fr-ca.md index e3cee80fd10..b1507535181 100644 --- a/pages/public_cloud/ai_machine_learning/endpoints_guide_04_billing_concept/guide.fr-ca.md +++ b/pages/public_cloud/ai_machine_learning/endpoints_guide_04_billing_concept/guide.fr-ca.md @@ -33,25 +33,21 @@ By following this model lifecycle process, OVHcloud ensures that customers are w Here is the model billing overview for AI Endpoints. -> [!primary] -> -> In appreciation of their continued support, our **Beta testers will have the possibility to keep using their existing API access keys and create new ones and won't be billed until 31th May**. 
After this date, the pricing will be implemented for them and clearly outlined in the table below, which details the categories, models, and their respective pricing information: -> - | Category | Model | Price ($) | Price (€) | Unit Price | | -------------- | --------------- | ------ | ------ | ---------- | -| Large Language Model (LLM) | Llama 3.3 70B Instruct | 0.70 | 0.67 | per 1M tokens | -| Large Language Model (LLM) | Llama 3.1 70B Instruct | 0.70 | 0.67 | per 1M tokens | -| Large Language Model (LLM) | Mixtral 8x7B Instruct v0.1 | 0.65 | 0.63 | per 1M tokens | +| Large Language Model (LLM) | Llama 3.3 70B Instruct | 0.74 | 0.67 | per 1M tokens | +| Large Language Model (LLM) | Llama 3.1 70B Instruct | 0.74 | 0.67 | per 1M tokens | +| Large Language Model (LLM) | Mixtral 8x7B Instruct v0.1 | 0.70 | 0.63 | per 1M tokens | | Large Language Model (LLM) | Mistral-Nemo-Instruct-2407 | 0.14 | 0.13 | per 1M tokens | | Large Language Model (LLM) | Llama 3.1 8B Instruct | 0.10 | 0.10 | per 1M tokens | -| Large Language Model (LLM) | Mistral 7B Instruct v0.3 | 0.10 | 0.10 | per 1M tokens | -| Reasoning LLM | DeepSeek R1 | Free | Free | per 1M tokens | -| Reasoning LLM | DeepSeek R1 Distill Llama 70B | 0.70 | 0.67 | per 1M tokens | -| Code LLM | Qwen2.5 Coder 32B Instruct | 0.90 | 0.87 | per 1M tokens | -| Code LLM | Mamba Codestral 7B v0.1 | 0.20 | 0.19 | per 1M tokens | -| Visual LLM | Qwen2.5 VL 72B Instruct | 0.95 | 0.91 | per 1M tokens | -| Visual LLM | Llava Next Mistral 7B | 0.30 | 0.29 | per 1M tokens | +| Large Language Model (LLM) | Mistral 7B Instruct v0.3 | 0.11 | 0.10 | per 1M tokens | +| Reasoning LLM | DeepSeek R1 Distill Llama 70B | 0.74 | 0.67 | per 1M tokens | +| Reasoning LLM | Qwen3 32B | 0.09 | 0.08 | per 1M tokens | +| Code LLM | Qwen2.5 Coder 32B Instruct | 0.96 | 0.87 | per 1M tokens | +| Code LLM | Mamba Codestral 7B v0.1 | 0.21 | 0.19 | per 1M tokens | +| Visual LLM | Mistral Small 3.2 24B Instruct 2506 | 0.10 | 0.09 | per 1M tokens | +| Visual LLM | Qwen2.5 VL 72B Instruct | 1.01 | 0.91 | per 1M tokens | +| Visual LLM | Llava Next Mistral 7B | 0.32 | 0.29 | per 1M tokens | | Embeddings | BGE Multilingual Gemma2 | 0.01 | 0.01 | per 1M tokens | | Embeddings | BGE-M3 | 0.01 | 0.01 | per 1M tokens | | Embeddings | BGE Base EN v1.5 | 0.01 | 0.005 | per 1M tokens | diff --git a/pages/public_cloud/ai_machine_learning/endpoints_guide_04_billing_concept/guide.fr-fr.md b/pages/public_cloud/ai_machine_learning/endpoints_guide_04_billing_concept/guide.fr-fr.md index e3cee80fd10..b1507535181 100644 --- a/pages/public_cloud/ai_machine_learning/endpoints_guide_04_billing_concept/guide.fr-fr.md +++ b/pages/public_cloud/ai_machine_learning/endpoints_guide_04_billing_concept/guide.fr-fr.md @@ -33,25 +33,21 @@ By following this model lifecycle process, OVHcloud ensures that customers are w Here is the model billing overview for AI Endpoints. -> [!primary] -> -> In appreciation of their continued support, our **Beta testers will have the possibility to keep using their existing API access keys and create new ones and won't be billed until 31th May**. 
After this date, the pricing will be implemented for them and clearly outlined in the table below, which details the categories, models, and their respective pricing information: -> - | Category | Model | Price ($) | Price (€) | Unit Price | | -------------- | --------------- | ------ | ------ | ---------- | -| Large Language Model (LLM) | Llama 3.3 70B Instruct | 0.70 | 0.67 | per 1M tokens | -| Large Language Model (LLM) | Llama 3.1 70B Instruct | 0.70 | 0.67 | per 1M tokens | -| Large Language Model (LLM) | Mixtral 8x7B Instruct v0.1 | 0.65 | 0.63 | per 1M tokens | +| Large Language Model (LLM) | Llama 3.3 70B Instruct | 0.74 | 0.67 | per 1M tokens | +| Large Language Model (LLM) | Llama 3.1 70B Instruct | 0.74 | 0.67 | per 1M tokens | +| Large Language Model (LLM) | Mixtral 8x7B Instruct v0.1 | 0.70 | 0.63 | per 1M tokens | | Large Language Model (LLM) | Mistral-Nemo-Instruct-2407 | 0.14 | 0.13 | per 1M tokens | | Large Language Model (LLM) | Llama 3.1 8B Instruct | 0.10 | 0.10 | per 1M tokens | -| Large Language Model (LLM) | Mistral 7B Instruct v0.3 | 0.10 | 0.10 | per 1M tokens | -| Reasoning LLM | DeepSeek R1 | Free | Free | per 1M tokens | -| Reasoning LLM | DeepSeek R1 Distill Llama 70B | 0.70 | 0.67 | per 1M tokens | -| Code LLM | Qwen2.5 Coder 32B Instruct | 0.90 | 0.87 | per 1M tokens | -| Code LLM | Mamba Codestral 7B v0.1 | 0.20 | 0.19 | per 1M tokens | -| Visual LLM | Qwen2.5 VL 72B Instruct | 0.95 | 0.91 | per 1M tokens | -| Visual LLM | Llava Next Mistral 7B | 0.30 | 0.29 | per 1M tokens | +| Large Language Model (LLM) | Mistral 7B Instruct v0.3 | 0.11 | 0.10 | per 1M tokens | +| Reasoning LLM | DeepSeek R1 Distill Llama 70B | 0.74 | 0.67 | per 1M tokens | +| Reasoning LLM | Qwen3 32B | 0.09 | 0.08 | per 1M tokens | +| Code LLM | Qwen2.5 Coder 32B Instruct | 0.96 | 0.87 | per 1M tokens | +| Code LLM | Mamba Codestral 7B v0.1 | 0.21 | 0.19 | per 1M tokens | +| Visual LLM | Mistral Small 3.2 24B Instruct 2506 | 0.10 | 0.09 | per 1M tokens | +| Visual LLM | Qwen2.5 VL 72B Instruct | 1.01 | 0.91 | per 1M tokens | +| Visual LLM | Llava Next Mistral 7B | 0.32 | 0.29 | per 1M tokens | | Embeddings | BGE Multilingual Gemma2 | 0.01 | 0.01 | per 1M tokens | | Embeddings | BGE-M3 | 0.01 | 0.01 | per 1M tokens | | Embeddings | BGE Base EN v1.5 | 0.01 | 0.005 | per 1M tokens | diff --git a/pages/public_cloud/ai_machine_learning/endpoints_guide_04_billing_concept/guide.it-it.md b/pages/public_cloud/ai_machine_learning/endpoints_guide_04_billing_concept/guide.it-it.md index e3cee80fd10..b1507535181 100644 --- a/pages/public_cloud/ai_machine_learning/endpoints_guide_04_billing_concept/guide.it-it.md +++ b/pages/public_cloud/ai_machine_learning/endpoints_guide_04_billing_concept/guide.it-it.md @@ -33,25 +33,21 @@ By following this model lifecycle process, OVHcloud ensures that customers are w Here is the model billing overview for AI Endpoints. -> [!primary] -> -> In appreciation of their continued support, our **Beta testers will have the possibility to keep using their existing API access keys and create new ones and won't be billed until 31th May**. 
After this date, the pricing will be implemented for them and clearly outlined in the table below, which details the categories, models, and their respective pricing information: -> - | Category | Model | Price ($) | Price (€) | Unit Price | | -------------- | --------------- | ------ | ------ | ---------- | -| Large Language Model (LLM) | Llama 3.3 70B Instruct | 0.70 | 0.67 | per 1M tokens | -| Large Language Model (LLM) | Llama 3.1 70B Instruct | 0.70 | 0.67 | per 1M tokens | -| Large Language Model (LLM) | Mixtral 8x7B Instruct v0.1 | 0.65 | 0.63 | per 1M tokens | +| Large Language Model (LLM) | Llama 3.3 70B Instruct | 0.74 | 0.67 | per 1M tokens | +| Large Language Model (LLM) | Llama 3.1 70B Instruct | 0.74 | 0.67 | per 1M tokens | +| Large Language Model (LLM) | Mixtral 8x7B Instruct v0.1 | 0.70 | 0.63 | per 1M tokens | | Large Language Model (LLM) | Mistral-Nemo-Instruct-2407 | 0.14 | 0.13 | per 1M tokens | | Large Language Model (LLM) | Llama 3.1 8B Instruct | 0.10 | 0.10 | per 1M tokens | -| Large Language Model (LLM) | Mistral 7B Instruct v0.3 | 0.10 | 0.10 | per 1M tokens | -| Reasoning LLM | DeepSeek R1 | Free | Free | per 1M tokens | -| Reasoning LLM | DeepSeek R1 Distill Llama 70B | 0.70 | 0.67 | per 1M tokens | -| Code LLM | Qwen2.5 Coder 32B Instruct | 0.90 | 0.87 | per 1M tokens | -| Code LLM | Mamba Codestral 7B v0.1 | 0.20 | 0.19 | per 1M tokens | -| Visual LLM | Qwen2.5 VL 72B Instruct | 0.95 | 0.91 | per 1M tokens | -| Visual LLM | Llava Next Mistral 7B | 0.30 | 0.29 | per 1M tokens | +| Large Language Model (LLM) | Mistral 7B Instruct v0.3 | 0.11 | 0.10 | per 1M tokens | +| Reasoning LLM | DeepSeek R1 Distill Llama 70B | 0.74 | 0.67 | per 1M tokens | +| Reasoning LLM | Qwen3 32B | 0.09 | 0.08 | per 1M tokens | +| Code LLM | Qwen2.5 Coder 32B Instruct | 0.96 | 0.87 | per 1M tokens | +| Code LLM | Mamba Codestral 7B v0.1 | 0.21 | 0.19 | per 1M tokens | +| Visual LLM | Mistral Small 3.2 24B Instruct 2506 | 0.10 | 0.09 | per 1M tokens | +| Visual LLM | Qwen2.5 VL 72B Instruct | 1.01 | 0.91 | per 1M tokens | +| Visual LLM | Llava Next Mistral 7B | 0.32 | 0.29 | per 1M tokens | | Embeddings | BGE Multilingual Gemma2 | 0.01 | 0.01 | per 1M tokens | | Embeddings | BGE-M3 | 0.01 | 0.01 | per 1M tokens | | Embeddings | BGE Base EN v1.5 | 0.01 | 0.005 | per 1M tokens | diff --git a/pages/public_cloud/ai_machine_learning/endpoints_guide_04_billing_concept/guide.pl-pl.md b/pages/public_cloud/ai_machine_learning/endpoints_guide_04_billing_concept/guide.pl-pl.md index e3cee80fd10..b1507535181 100644 --- a/pages/public_cloud/ai_machine_learning/endpoints_guide_04_billing_concept/guide.pl-pl.md +++ b/pages/public_cloud/ai_machine_learning/endpoints_guide_04_billing_concept/guide.pl-pl.md @@ -33,25 +33,21 @@ By following this model lifecycle process, OVHcloud ensures that customers are w Here is the model billing overview for AI Endpoints. -> [!primary] -> -> In appreciation of their continued support, our **Beta testers will have the possibility to keep using their existing API access keys and create new ones and won't be billed until 31th May**. 
After this date, the pricing will be implemented for them and clearly outlined in the table below, which details the categories, models, and their respective pricing information: -> - | Category | Model | Price ($) | Price (€) | Unit Price | | -------------- | --------------- | ------ | ------ | ---------- | -| Large Language Model (LLM) | Llama 3.3 70B Instruct | 0.70 | 0.67 | per 1M tokens | -| Large Language Model (LLM) | Llama 3.1 70B Instruct | 0.70 | 0.67 | per 1M tokens | -| Large Language Model (LLM) | Mixtral 8x7B Instruct v0.1 | 0.65 | 0.63 | per 1M tokens | +| Large Language Model (LLM) | Llama 3.3 70B Instruct | 0.74 | 0.67 | per 1M tokens | +| Large Language Model (LLM) | Llama 3.1 70B Instruct | 0.74 | 0.67 | per 1M tokens | +| Large Language Model (LLM) | Mixtral 8x7B Instruct v0.1 | 0.70 | 0.63 | per 1M tokens | | Large Language Model (LLM) | Mistral-Nemo-Instruct-2407 | 0.14 | 0.13 | per 1M tokens | | Large Language Model (LLM) | Llama 3.1 8B Instruct | 0.10 | 0.10 | per 1M tokens | -| Large Language Model (LLM) | Mistral 7B Instruct v0.3 | 0.10 | 0.10 | per 1M tokens | -| Reasoning LLM | DeepSeek R1 | Free | Free | per 1M tokens | -| Reasoning LLM | DeepSeek R1 Distill Llama 70B | 0.70 | 0.67 | per 1M tokens | -| Code LLM | Qwen2.5 Coder 32B Instruct | 0.90 | 0.87 | per 1M tokens | -| Code LLM | Mamba Codestral 7B v0.1 | 0.20 | 0.19 | per 1M tokens | -| Visual LLM | Qwen2.5 VL 72B Instruct | 0.95 | 0.91 | per 1M tokens | -| Visual LLM | Llava Next Mistral 7B | 0.30 | 0.29 | per 1M tokens | +| Large Language Model (LLM) | Mistral 7B Instruct v0.3 | 0.11 | 0.10 | per 1M tokens | +| Reasoning LLM | DeepSeek R1 Distill Llama 70B | 0.74 | 0.67 | per 1M tokens | +| Reasoning LLM | Qwen3 32B | 0.09 | 0.08 | per 1M tokens | +| Code LLM | Qwen2.5 Coder 32B Instruct | 0.96 | 0.87 | per 1M tokens | +| Code LLM | Mamba Codestral 7B v0.1 | 0.21 | 0.19 | per 1M tokens | +| Visual LLM | Mistral Small 3.2 24B Instruct 2506 | 0.10 | 0.09 | per 1M tokens | +| Visual LLM | Qwen2.5 VL 72B Instruct | 1.01 | 0.91 | per 1M tokens | +| Visual LLM | Llava Next Mistral 7B | 0.32 | 0.29 | per 1M tokens | | Embeddings | BGE Multilingual Gemma2 | 0.01 | 0.01 | per 1M tokens | | Embeddings | BGE-M3 | 0.01 | 0.01 | per 1M tokens | | Embeddings | BGE Base EN v1.5 | 0.01 | 0.005 | per 1M tokens | diff --git a/pages/public_cloud/ai_machine_learning/endpoints_guide_04_billing_concept/guide.pt-pt.md b/pages/public_cloud/ai_machine_learning/endpoints_guide_04_billing_concept/guide.pt-pt.md index e3cee80fd10..b1507535181 100644 --- a/pages/public_cloud/ai_machine_learning/endpoints_guide_04_billing_concept/guide.pt-pt.md +++ b/pages/public_cloud/ai_machine_learning/endpoints_guide_04_billing_concept/guide.pt-pt.md @@ -33,25 +33,21 @@ By following this model lifecycle process, OVHcloud ensures that customers are w Here is the model billing overview for AI Endpoints. -> [!primary] -> -> In appreciation of their continued support, our **Beta testers will have the possibility to keep using their existing API access keys and create new ones and won't be billed until 31th May**. 
After this date, the pricing will be implemented for them and clearly outlined in the table below, which details the categories, models, and their respective pricing information: -> - | Category | Model | Price ($) | Price (€) | Unit Price | | -------------- | --------------- | ------ | ------ | ---------- | -| Large Language Model (LLM) | Llama 3.3 70B Instruct | 0.70 | 0.67 | per 1M tokens | -| Large Language Model (LLM) | Llama 3.1 70B Instruct | 0.70 | 0.67 | per 1M tokens | -| Large Language Model (LLM) | Mixtral 8x7B Instruct v0.1 | 0.65 | 0.63 | per 1M tokens | +| Large Language Model (LLM) | Llama 3.3 70B Instruct | 0.74 | 0.67 | per 1M tokens | +| Large Language Model (LLM) | Llama 3.1 70B Instruct | 0.74 | 0.67 | per 1M tokens | +| Large Language Model (LLM) | Mixtral 8x7B Instruct v0.1 | 0.70 | 0.63 | per 1M tokens | | Large Language Model (LLM) | Mistral-Nemo-Instruct-2407 | 0.14 | 0.13 | per 1M tokens | | Large Language Model (LLM) | Llama 3.1 8B Instruct | 0.10 | 0.10 | per 1M tokens | -| Large Language Model (LLM) | Mistral 7B Instruct v0.3 | 0.10 | 0.10 | per 1M tokens | -| Reasoning LLM | DeepSeek R1 | Free | Free | per 1M tokens | -| Reasoning LLM | DeepSeek R1 Distill Llama 70B | 0.70 | 0.67 | per 1M tokens | -| Code LLM | Qwen2.5 Coder 32B Instruct | 0.90 | 0.87 | per 1M tokens | -| Code LLM | Mamba Codestral 7B v0.1 | 0.20 | 0.19 | per 1M tokens | -| Visual LLM | Qwen2.5 VL 72B Instruct | 0.95 | 0.91 | per 1M tokens | -| Visual LLM | Llava Next Mistral 7B | 0.30 | 0.29 | per 1M tokens | +| Large Language Model (LLM) | Mistral 7B Instruct v0.3 | 0.11 | 0.10 | per 1M tokens | +| Reasoning LLM | DeepSeek R1 Distill Llama 70B | 0.74 | 0.67 | per 1M tokens | +| Reasoning LLM | Qwen3 32B | 0.09 | 0.08 | per 1M tokens | +| Code LLM | Qwen2.5 Coder 32B Instruct | 0.96 | 0.87 | per 1M tokens | +| Code LLM | Mamba Codestral 7B v0.1 | 0.21 | 0.19 | per 1M tokens | +| Visual LLM | Mistral Small 3.2 24B Instruct 2506 | 0.10 | 0.09 | per 1M tokens | +| Visual LLM | Qwen2.5 VL 72B Instruct | 1.01 | 0.91 | per 1M tokens | +| Visual LLM | Llava Next Mistral 7B | 0.32 | 0.29 | per 1M tokens | | Embeddings | BGE Multilingual Gemma2 | 0.01 | 0.01 | per 1M tokens | | Embeddings | BGE-M3 | 0.01 | 0.01 | per 1M tokens | | Embeddings | BGE Base EN v1.5 | 0.01 | 0.005 | per 1M tokens | diff --git a/pages/public_cloud/ai_machine_learning/training_tuto_01_train_your_first_model/guide.de-de.md b/pages/public_cloud/ai_machine_learning/training_tuto_01_train_your_first_model/guide.de-de.md index b0514f3c9a6..83c3aa0eaa3 100644 --- a/pages/public_cloud/ai_machine_learning/training_tuto_01_train_your_first_model/guide.de-de.md +++ b/pages/public_cloud/ai_machine_learning/training_tuto_01_train_your_first_model/guide.de-de.md @@ -48,7 +48,7 @@ You will follow different steps to export your data, train your model and save i This step is optional because you may load some open datasets through libraries, commands, etc., so you will not need to upload your own data to the cloud. -On the other hand, you can upload your data (dataset, python and requirements files, etc.) to the cloud, in the Object Storage. This can be done in two ways: either from the [OVHcloud Control Panel](https://www.ovh.com/manager/#/public-cloud/) or via the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/). +On the other hand, you can upload your data (dataset, python and requirements files, etc.) to the cloud, in the Object Storage. 
This can be done in two ways: either from the [OVHcloud Control Panel](https://www.ovh.com/manager/#/public-cloud/) or via the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli). **This data can be deleted at any time.** @@ -73,7 +73,7 @@ Once your object container is created, you will see it in the Object Storage lis #### 1.2 - Upload your data via CLI -To follow this part, make sure you have installed the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/) on your computer or on an instance. +To follow this part, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. As in the Control Panel, you will have to specify the `region`, the `name of your container` and the `path` where your data will be located. The creation of your object container can be done by the following command: @@ -93,7 +93,7 @@ This `.zip` file can now be accessed from all OVHcloud AI products, either with ### Step 2 - Launch your training job and attach your data to its environment -To launch your training job, you can also use the [OVHcloud Control Panel](https://www.ovh.com/manager/#/public-cloud/) or the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/). +To launch your training job, you can also use the [OVHcloud Control Panel](https://www.ovh.com/manager/#/public-cloud/) or the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli). #### 2.1 - Launch a training job via UI (Control Panel) diff --git a/pages/public_cloud/ai_machine_learning/training_tuto_01_train_your_first_model/guide.en-asia.md b/pages/public_cloud/ai_machine_learning/training_tuto_01_train_your_first_model/guide.en-asia.md index b0514f3c9a6..83c3aa0eaa3 100644 --- a/pages/public_cloud/ai_machine_learning/training_tuto_01_train_your_first_model/guide.en-asia.md +++ b/pages/public_cloud/ai_machine_learning/training_tuto_01_train_your_first_model/guide.en-asia.md @@ -48,7 +48,7 @@ You will follow different steps to export your data, train your model and save i This step is optional because you may load some open datasets through libraries, commands, etc., so you will not need to upload your own data to the cloud. -On the other hand, you can upload your data (dataset, python and requirements files, etc.) to the cloud, in the Object Storage. This can be done in two ways: either from the [OVHcloud Control Panel](https://www.ovh.com/manager/#/public-cloud/) or via the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/). +On the other hand, you can upload your data (dataset, python and requirements files, etc.) to the cloud, in the Object Storage. This can be done in two ways: either from the [OVHcloud Control Panel](https://www.ovh.com/manager/#/public-cloud/) or via the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli). **This data can be deleted at any time.** @@ -73,7 +73,7 @@ Once your object container is created, you will see it in the Object Storage lis #### 1.2 - Upload your data via CLI -To follow this part, make sure you have installed the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/) on your computer or on an instance. +To follow this part, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. As in the Control Panel, you will have to specify the `region`, the `name of your container` and the `path` where your data will be located. 
The creation of your object container can be done by the following command: @@ -93,7 +93,7 @@ This `.zip` file can now be accessed from all OVHcloud AI products, either with ### Step 2 - Launch your training job and attach your data to its environment -To launch your training job, you can also use the [OVHcloud Control Panel](https://www.ovh.com/manager/#/public-cloud/) or the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/). +To launch your training job, you can also use the [OVHcloud Control Panel](https://www.ovh.com/manager/#/public-cloud/) or the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli). #### 2.1 - Launch a training job via UI (Control Panel) diff --git a/pages/public_cloud/ai_machine_learning/training_tuto_01_train_your_first_model/guide.en-au.md b/pages/public_cloud/ai_machine_learning/training_tuto_01_train_your_first_model/guide.en-au.md index b0514f3c9a6..83c3aa0eaa3 100644 --- a/pages/public_cloud/ai_machine_learning/training_tuto_01_train_your_first_model/guide.en-au.md +++ b/pages/public_cloud/ai_machine_learning/training_tuto_01_train_your_first_model/guide.en-au.md @@ -48,7 +48,7 @@ You will follow different steps to export your data, train your model and save i This step is optional because you may load some open datasets through libraries, commands, etc., so you will not need to upload your own data to the cloud. -On the other hand, you can upload your data (dataset, python and requirements files, etc.) to the cloud, in the Object Storage. This can be done in two ways: either from the [OVHcloud Control Panel](https://www.ovh.com/manager/#/public-cloud/) or via the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/). +On the other hand, you can upload your data (dataset, python and requirements files, etc.) to the cloud, in the Object Storage. This can be done in two ways: either from the [OVHcloud Control Panel](https://www.ovh.com/manager/#/public-cloud/) or via the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli). **This data can be deleted at any time.** @@ -73,7 +73,7 @@ Once your object container is created, you will see it in the Object Storage lis #### 1.2 - Upload your data via CLI -To follow this part, make sure you have installed the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/) on your computer or on an instance. +To follow this part, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. As in the Control Panel, you will have to specify the `region`, the `name of your container` and the `path` where your data will be located. The creation of your object container can be done by the following command: @@ -93,7 +93,7 @@ This `.zip` file can now be accessed from all OVHcloud AI products, either with ### Step 2 - Launch your training job and attach your data to its environment -To launch your training job, you can also use the [OVHcloud Control Panel](https://www.ovh.com/manager/#/public-cloud/) or the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/). +To launch your training job, you can also use the [OVHcloud Control Panel](https://www.ovh.com/manager/#/public-cloud/) or the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli). 
#### 2.1 - Launch a training job via UI (Control Panel) diff --git a/pages/public_cloud/ai_machine_learning/training_tuto_01_train_your_first_model/guide.en-ca.md b/pages/public_cloud/ai_machine_learning/training_tuto_01_train_your_first_model/guide.en-ca.md index b0514f3c9a6..83c3aa0eaa3 100644 --- a/pages/public_cloud/ai_machine_learning/training_tuto_01_train_your_first_model/guide.en-ca.md +++ b/pages/public_cloud/ai_machine_learning/training_tuto_01_train_your_first_model/guide.en-ca.md @@ -48,7 +48,7 @@ You will follow different steps to export your data, train your model and save i This step is optional because you may load some open datasets through libraries, commands, etc., so you will not need to upload your own data to the cloud. -On the other hand, you can upload your data (dataset, python and requirements files, etc.) to the cloud, in the Object Storage. This can be done in two ways: either from the [OVHcloud Control Panel](https://www.ovh.com/manager/#/public-cloud/) or via the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/). +On the other hand, you can upload your data (dataset, python and requirements files, etc.) to the cloud, in the Object Storage. This can be done in two ways: either from the [OVHcloud Control Panel](https://www.ovh.com/manager/#/public-cloud/) or via the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli). **This data can be deleted at any time.** @@ -73,7 +73,7 @@ Once your object container is created, you will see it in the Object Storage lis #### 1.2 - Upload your data via CLI -To follow this part, make sure you have installed the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/) on your computer or on an instance. +To follow this part, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. As in the Control Panel, you will have to specify the `region`, the `name of your container` and the `path` where your data will be located. The creation of your object container can be done by the following command: @@ -93,7 +93,7 @@ This `.zip` file can now be accessed from all OVHcloud AI products, either with ### Step 2 - Launch your training job and attach your data to its environment -To launch your training job, you can also use the [OVHcloud Control Panel](https://www.ovh.com/manager/#/public-cloud/) or the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/). +To launch your training job, you can also use the [OVHcloud Control Panel](https://www.ovh.com/manager/#/public-cloud/) or the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli). #### 2.1 - Launch a training job via UI (Control Panel) diff --git a/pages/public_cloud/ai_machine_learning/training_tuto_01_train_your_first_model/guide.en-gb.md b/pages/public_cloud/ai_machine_learning/training_tuto_01_train_your_first_model/guide.en-gb.md index b0514f3c9a6..83c3aa0eaa3 100644 --- a/pages/public_cloud/ai_machine_learning/training_tuto_01_train_your_first_model/guide.en-gb.md +++ b/pages/public_cloud/ai_machine_learning/training_tuto_01_train_your_first_model/guide.en-gb.md @@ -48,7 +48,7 @@ You will follow different steps to export your data, train your model and save i This step is optional because you may load some open datasets through libraries, commands, etc., so you will not need to upload your own data to the cloud. -On the other hand, you can upload your data (dataset, python and requirements files, etc.) to the cloud, in the Object Storage. 
This can be done in two ways: either from the [OVHcloud Control Panel](https://www.ovh.com/manager/#/public-cloud/) or via the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/). +On the other hand, you can upload your data (dataset, python and requirements files, etc.) to the cloud, in the Object Storage. This can be done in two ways: either from the [OVHcloud Control Panel](https://www.ovh.com/manager/#/public-cloud/) or via the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli). **This data can be deleted at any time.** @@ -73,7 +73,7 @@ Once your object container is created, you will see it in the Object Storage lis #### 1.2 - Upload your data via CLI -To follow this part, make sure you have installed the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/) on your computer or on an instance. +To follow this part, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. As in the Control Panel, you will have to specify the `region`, the `name of your container` and the `path` where your data will be located. The creation of your object container can be done by the following command: @@ -93,7 +93,7 @@ This `.zip` file can now be accessed from all OVHcloud AI products, either with ### Step 2 - Launch your training job and attach your data to its environment -To launch your training job, you can also use the [OVHcloud Control Panel](https://www.ovh.com/manager/#/public-cloud/) or the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/). +To launch your training job, you can also use the [OVHcloud Control Panel](https://www.ovh.com/manager/#/public-cloud/) or the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli). #### 2.1 - Launch a training job via UI (Control Panel) diff --git a/pages/public_cloud/ai_machine_learning/training_tuto_01_train_your_first_model/guide.en-ie.md b/pages/public_cloud/ai_machine_learning/training_tuto_01_train_your_first_model/guide.en-ie.md index b0514f3c9a6..83c3aa0eaa3 100644 --- a/pages/public_cloud/ai_machine_learning/training_tuto_01_train_your_first_model/guide.en-ie.md +++ b/pages/public_cloud/ai_machine_learning/training_tuto_01_train_your_first_model/guide.en-ie.md @@ -48,7 +48,7 @@ You will follow different steps to export your data, train your model and save i This step is optional because you may load some open datasets through libraries, commands, etc., so you will not need to upload your own data to the cloud. -On the other hand, you can upload your data (dataset, python and requirements files, etc.) to the cloud, in the Object Storage. This can be done in two ways: either from the [OVHcloud Control Panel](https://www.ovh.com/manager/#/public-cloud/) or via the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/). +On the other hand, you can upload your data (dataset, python and requirements files, etc.) to the cloud, in the Object Storage. This can be done in two ways: either from the [OVHcloud Control Panel](https://www.ovh.com/manager/#/public-cloud/) or via the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli). **This data can be deleted at any time.** @@ -73,7 +73,7 @@ Once your object container is created, you will see it in the Object Storage lis #### 1.2 - Upload your data via CLI -To follow this part, make sure you have installed the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/) on your computer or on an instance. 
+To follow this part, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. As in the Control Panel, you will have to specify the `region`, the `name of your container` and the `path` where your data will be located. The creation of your object container can be done by the following command: @@ -93,7 +93,7 @@ This `.zip` file can now be accessed from all OVHcloud AI products, either with ### Step 2 - Launch your training job and attach your data to its environment -To launch your training job, you can also use the [OVHcloud Control Panel](https://www.ovh.com/manager/#/public-cloud/) or the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/). +To launch your training job, you can also use the [OVHcloud Control Panel](https://www.ovh.com/manager/#/public-cloud/) or the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli). #### 2.1 - Launch a training job via UI (Control Panel) diff --git a/pages/public_cloud/ai_machine_learning/training_tuto_01_train_your_first_model/guide.en-sg.md b/pages/public_cloud/ai_machine_learning/training_tuto_01_train_your_first_model/guide.en-sg.md index b0514f3c9a6..83c3aa0eaa3 100644 --- a/pages/public_cloud/ai_machine_learning/training_tuto_01_train_your_first_model/guide.en-sg.md +++ b/pages/public_cloud/ai_machine_learning/training_tuto_01_train_your_first_model/guide.en-sg.md @@ -48,7 +48,7 @@ You will follow different steps to export your data, train your model and save i This step is optional because you may load some open datasets through libraries, commands, etc., so you will not need to upload your own data to the cloud. -On the other hand, you can upload your data (dataset, python and requirements files, etc.) to the cloud, in the Object Storage. This can be done in two ways: either from the [OVHcloud Control Panel](https://www.ovh.com/manager/#/public-cloud/) or via the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/). +On the other hand, you can upload your data (dataset, python and requirements files, etc.) to the cloud, in the Object Storage. This can be done in two ways: either from the [OVHcloud Control Panel](https://www.ovh.com/manager/#/public-cloud/) or via the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli). **This data can be deleted at any time.** @@ -73,7 +73,7 @@ Once your object container is created, you will see it in the Object Storage lis #### 1.2 - Upload your data via CLI -To follow this part, make sure you have installed the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/) on your computer or on an instance. +To follow this part, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. As in the Control Panel, you will have to specify the `region`, the `name of your container` and the `path` where your data will be located. The creation of your object container can be done by the following command: @@ -93,7 +93,7 @@ This `.zip` file can now be accessed from all OVHcloud AI products, either with ### Step 2 - Launch your training job and attach your data to its environment -To launch your training job, you can also use the [OVHcloud Control Panel](https://www.ovh.com/manager/#/public-cloud/) or the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/). 
+To launch your training job, you can also use the [OVHcloud Control Panel](https://www.ovh.com/manager/#/public-cloud/) or the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli). #### 2.1 - Launch a training job via UI (Control Panel) diff --git a/pages/public_cloud/ai_machine_learning/training_tuto_01_train_your_first_model/guide.en-us.md b/pages/public_cloud/ai_machine_learning/training_tuto_01_train_your_first_model/guide.en-us.md index b0514f3c9a6..83c3aa0eaa3 100644 --- a/pages/public_cloud/ai_machine_learning/training_tuto_01_train_your_first_model/guide.en-us.md +++ b/pages/public_cloud/ai_machine_learning/training_tuto_01_train_your_first_model/guide.en-us.md @@ -48,7 +48,7 @@ You will follow different steps to export your data, train your model and save i This step is optional because you may load some open datasets through libraries, commands, etc., so you will not need to upload your own data to the cloud. -On the other hand, you can upload your data (dataset, python and requirements files, etc.) to the cloud, in the Object Storage. This can be done in two ways: either from the [OVHcloud Control Panel](https://www.ovh.com/manager/#/public-cloud/) or via the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/). +On the other hand, you can upload your data (dataset, python and requirements files, etc.) to the cloud, in the Object Storage. This can be done in two ways: either from the [OVHcloud Control Panel](https://www.ovh.com/manager/#/public-cloud/) or via the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli). **This data can be deleted at any time.** @@ -73,7 +73,7 @@ Once your object container is created, you will see it in the Object Storage lis #### 1.2 - Upload your data via CLI -To follow this part, make sure you have installed the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/) on your computer or on an instance. +To follow this part, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. As in the Control Panel, you will have to specify the `region`, the `name of your container` and the `path` where your data will be located. The creation of your object container can be done by the following command: @@ -93,7 +93,7 @@ This `.zip` file can now be accessed from all OVHcloud AI products, either with ### Step 2 - Launch your training job and attach your data to its environment -To launch your training job, you can also use the [OVHcloud Control Panel](https://www.ovh.com/manager/#/public-cloud/) or the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/). +To launch your training job, you can also use the [OVHcloud Control Panel](https://www.ovh.com/manager/#/public-cloud/) or the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli). 
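For readers following the CLI path these hunks now link to, the flow is: install the `ovhai` CLI from the linked guide, push your data to an object container, then submit the job with `ovhai job run`. A rough sketch under stated assumptions — the Docker image, container name and `GRA` region are placeholders, and the exact flags and volume syntax should be verified with `ovhai job run --help`:

```console
# Illustrative sketch only: <your-docker-image> and <your-container> are placeholders,
# GRA is an example region; check `ovhai job run --help` for the flags of your CLI version.
ovhai job run <your-docker-image> \
    --gpu 1 \
    --volume <your-container>@GRA:/workspace/data:RO \
    -- python /workspace/data/train.py
```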
#### 2.1 - Launch a training job via UI (Control Panel) diff --git a/pages/public_cloud/ai_machine_learning/training_tuto_01_train_your_first_model/guide.es-es.md b/pages/public_cloud/ai_machine_learning/training_tuto_01_train_your_first_model/guide.es-es.md index b0514f3c9a6..83c3aa0eaa3 100644 --- a/pages/public_cloud/ai_machine_learning/training_tuto_01_train_your_first_model/guide.es-es.md +++ b/pages/public_cloud/ai_machine_learning/training_tuto_01_train_your_first_model/guide.es-es.md @@ -48,7 +48,7 @@ You will follow different steps to export your data, train your model and save i This step is optional because you may load some open datasets through libraries, commands, etc., so you will not need to upload your own data to the cloud. -On the other hand, you can upload your data (dataset, python and requirements files, etc.) to the cloud, in the Object Storage. This can be done in two ways: either from the [OVHcloud Control Panel](https://www.ovh.com/manager/#/public-cloud/) or via the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/). +On the other hand, you can upload your data (dataset, python and requirements files, etc.) to the cloud, in the Object Storage. This can be done in two ways: either from the [OVHcloud Control Panel](https://www.ovh.com/manager/#/public-cloud/) or via the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli). **This data can be deleted at any time.** @@ -73,7 +73,7 @@ Once your object container is created, you will see it in the Object Storage lis #### 1.2 - Upload your data via CLI -To follow this part, make sure you have installed the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/) on your computer or on an instance. +To follow this part, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. As in the Control Panel, you will have to specify the `region`, the `name of your container` and the `path` where your data will be located. The creation of your object container can be done by the following command: @@ -93,7 +93,7 @@ This `.zip` file can now be accessed from all OVHcloud AI products, either with ### Step 2 - Launch your training job and attach your data to its environment -To launch your training job, you can also use the [OVHcloud Control Panel](https://www.ovh.com/manager/#/public-cloud/) or the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/). +To launch your training job, you can also use the [OVHcloud Control Panel](https://www.ovh.com/manager/#/public-cloud/) or the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli). #### 2.1 - Launch a training job via UI (Control Panel) diff --git a/pages/public_cloud/ai_machine_learning/training_tuto_01_train_your_first_model/guide.es-us.md b/pages/public_cloud/ai_machine_learning/training_tuto_01_train_your_first_model/guide.es-us.md index b0514f3c9a6..83c3aa0eaa3 100644 --- a/pages/public_cloud/ai_machine_learning/training_tuto_01_train_your_first_model/guide.es-us.md +++ b/pages/public_cloud/ai_machine_learning/training_tuto_01_train_your_first_model/guide.es-us.md @@ -48,7 +48,7 @@ You will follow different steps to export your data, train your model and save i This step is optional because you may load some open datasets through libraries, commands, etc., so you will not need to upload your own data to the cloud. -On the other hand, you can upload your data (dataset, python and requirements files, etc.) to the cloud, in the Object Storage. 
This can be done in two ways: either from the [OVHcloud Control Panel](https://www.ovh.com/manager/#/public-cloud/) or via the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/). +On the other hand, you can upload your data (dataset, python and requirements files, etc.) to the cloud, in the Object Storage. This can be done in two ways: either from the [OVHcloud Control Panel](https://www.ovh.com/manager/#/public-cloud/) or via the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli). **This data can be deleted at any time.** @@ -73,7 +73,7 @@ Once your object container is created, you will see it in the Object Storage lis #### 1.2 - Upload your data via CLI -To follow this part, make sure you have installed the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/) on your computer or on an instance. +To follow this part, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. As in the Control Panel, you will have to specify the `region`, the `name of your container` and the `path` where your data will be located. The creation of your object container can be done by the following command: @@ -93,7 +93,7 @@ This `.zip` file can now be accessed from all OVHcloud AI products, either with ### Step 2 - Launch your training job and attach your data to its environment -To launch your training job, you can also use the [OVHcloud Control Panel](https://www.ovh.com/manager/#/public-cloud/) or the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/). +To launch your training job, you can also use the [OVHcloud Control Panel](https://www.ovh.com/manager/#/public-cloud/) or the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli). #### 2.1 - Launch a training job via UI (Control Panel) diff --git a/pages/public_cloud/ai_machine_learning/training_tuto_01_train_your_first_model/guide.fr-ca.md b/pages/public_cloud/ai_machine_learning/training_tuto_01_train_your_first_model/guide.fr-ca.md index 4f5f9a72bcc..a6deedbd794 100644 --- a/pages/public_cloud/ai_machine_learning/training_tuto_01_train_your_first_model/guide.fr-ca.md +++ b/pages/public_cloud/ai_machine_learning/training_tuto_01_train_your_first_model/guide.fr-ca.md @@ -48,7 +48,7 @@ You will follow different steps to export your data, train your model and save i This step is optional because you may load some open datasets through libraries, commands, etc., so you will not need to upload your own data to the cloud. -On the other hand, you can upload your data (dataset, python and requirements files, etc.) to the cloud, in the Object Storage. This can be done in two ways: either from the [OVHcloud Control Panel](https://www.ovh.com/manager/#/public-cloud/) or via the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/). +On the other hand, you can upload your data (dataset, python and requirements files, etc.) to the cloud, in the Object Storage. This can be done in two ways: either from the [OVHcloud Control Panel](https://www.ovh.com/manager/#/public-cloud/) or via the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli). **This data can be deleted at any time.** @@ -73,7 +73,7 @@ Once your object container is created, you will see it in the Object Storage lis #### 1.2 - Upload your data via CLI -To follow this part, make sure you have installed the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/) on your computer or on an instance. 
+To follow this part, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. As in the Control Panel, you will have to specify the `region`, the `name of your container` and the `path` where your data will be located. The creation of your object container can be done by the following command: @@ -93,7 +93,7 @@ This `.zip` file can now be accessed from all OVHcloud AI products, either with ### Step 2 - Launch your training job and attach your data to its environment -To launch your training job, you can also use the [OVHcloud Control Panel](https://www.ovh.com/manager/#/public-cloud/) or the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/). +To launch your training job, you can also use the [OVHcloud Control Panel](https://www.ovh.com/manager/#/public-cloud/) or the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli). #### 2.1 - Launch a training job via UI (Control Panel) diff --git a/pages/public_cloud/ai_machine_learning/training_tuto_01_train_your_first_model/guide.fr-fr.md b/pages/public_cloud/ai_machine_learning/training_tuto_01_train_your_first_model/guide.fr-fr.md index 4f5f9a72bcc..a6deedbd794 100644 --- a/pages/public_cloud/ai_machine_learning/training_tuto_01_train_your_first_model/guide.fr-fr.md +++ b/pages/public_cloud/ai_machine_learning/training_tuto_01_train_your_first_model/guide.fr-fr.md @@ -48,7 +48,7 @@ You will follow different steps to export your data, train your model and save i This step is optional because you may load some open datasets through libraries, commands, etc., so you will not need to upload your own data to the cloud. -On the other hand, you can upload your data (dataset, python and requirements files, etc.) to the cloud, in the Object Storage. This can be done in two ways: either from the [OVHcloud Control Panel](https://www.ovh.com/manager/#/public-cloud/) or via the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/). +On the other hand, you can upload your data (dataset, python and requirements files, etc.) to the cloud, in the Object Storage. This can be done in two ways: either from the [OVHcloud Control Panel](https://www.ovh.com/manager/#/public-cloud/) or via the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli). **This data can be deleted at any time.** @@ -73,7 +73,7 @@ Once your object container is created, you will see it in the Object Storage lis #### 1.2 - Upload your data via CLI -To follow this part, make sure you have installed the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/) on your computer or on an instance. +To follow this part, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. As in the Control Panel, you will have to specify the `region`, the `name of your container` and the `path` where your data will be located. The creation of your object container can be done by the following command: @@ -93,7 +93,7 @@ This `.zip` file can now be accessed from all OVHcloud AI products, either with ### Step 2 - Launch your training job and attach your data to its environment -To launch your training job, you can also use the [OVHcloud Control Panel](https://www.ovh.com/manager/#/public-cloud/) or the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/). 
+To launch your training job, you can also use the [OVHcloud Control Panel](https://www.ovh.com/manager/#/public-cloud/) or the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli). #### 2.1 - Launch a training job via UI (Control Panel) diff --git a/pages/public_cloud/ai_machine_learning/training_tuto_01_train_your_first_model/guide.it-it.md b/pages/public_cloud/ai_machine_learning/training_tuto_01_train_your_first_model/guide.it-it.md index b0514f3c9a6..83c3aa0eaa3 100644 --- a/pages/public_cloud/ai_machine_learning/training_tuto_01_train_your_first_model/guide.it-it.md +++ b/pages/public_cloud/ai_machine_learning/training_tuto_01_train_your_first_model/guide.it-it.md @@ -48,7 +48,7 @@ You will follow different steps to export your data, train your model and save i This step is optional because you may load some open datasets through libraries, commands, etc., so you will not need to upload your own data to the cloud. -On the other hand, you can upload your data (dataset, python and requirements files, etc.) to the cloud, in the Object Storage. This can be done in two ways: either from the [OVHcloud Control Panel](https://www.ovh.com/manager/#/public-cloud/) or via the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/). +On the other hand, you can upload your data (dataset, python and requirements files, etc.) to the cloud, in the Object Storage. This can be done in two ways: either from the [OVHcloud Control Panel](https://www.ovh.com/manager/#/public-cloud/) or via the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli). **This data can be deleted at any time.** @@ -73,7 +73,7 @@ Once your object container is created, you will see it in the Object Storage lis #### 1.2 - Upload your data via CLI -To follow this part, make sure you have installed the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/) on your computer or on an instance. +To follow this part, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. As in the Control Panel, you will have to specify the `region`, the `name of your container` and the `path` where your data will be located. The creation of your object container can be done by the following command: @@ -93,7 +93,7 @@ This `.zip` file can now be accessed from all OVHcloud AI products, either with ### Step 2 - Launch your training job and attach your data to its environment -To launch your training job, you can also use the [OVHcloud Control Panel](https://www.ovh.com/manager/#/public-cloud/) or the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/). +To launch your training job, you can also use the [OVHcloud Control Panel](https://www.ovh.com/manager/#/public-cloud/) or the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli). 
#### 2.1 - Launch a training job via UI (Control Panel) diff --git a/pages/public_cloud/ai_machine_learning/training_tuto_01_train_your_first_model/guide.pl-pl.md b/pages/public_cloud/ai_machine_learning/training_tuto_01_train_your_first_model/guide.pl-pl.md index b0514f3c9a6..83c3aa0eaa3 100644 --- a/pages/public_cloud/ai_machine_learning/training_tuto_01_train_your_first_model/guide.pl-pl.md +++ b/pages/public_cloud/ai_machine_learning/training_tuto_01_train_your_first_model/guide.pl-pl.md @@ -48,7 +48,7 @@ You will follow different steps to export your data, train your model and save i This step is optional because you may load some open datasets through libraries, commands, etc., so you will not need to upload your own data to the cloud. -On the other hand, you can upload your data (dataset, python and requirements files, etc.) to the cloud, in the Object Storage. This can be done in two ways: either from the [OVHcloud Control Panel](https://www.ovh.com/manager/#/public-cloud/) or via the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/). +On the other hand, you can upload your data (dataset, python and requirements files, etc.) to the cloud, in the Object Storage. This can be done in two ways: either from the [OVHcloud Control Panel](https://www.ovh.com/manager/#/public-cloud/) or via the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli). **This data can be deleted at any time.** @@ -73,7 +73,7 @@ Once your object container is created, you will see it in the Object Storage lis #### 1.2 - Upload your data via CLI -To follow this part, make sure you have installed the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/) on your computer or on an instance. +To follow this part, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. As in the Control Panel, you will have to specify the `region`, the `name of your container` and the `path` where your data will be located. The creation of your object container can be done by the following command: @@ -93,7 +93,7 @@ This `.zip` file can now be accessed from all OVHcloud AI products, either with ### Step 2 - Launch your training job and attach your data to its environment -To launch your training job, you can also use the [OVHcloud Control Panel](https://www.ovh.com/manager/#/public-cloud/) or the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/). +To launch your training job, you can also use the [OVHcloud Control Panel](https://www.ovh.com/manager/#/public-cloud/) or the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli). #### 2.1 - Launch a training job via UI (Control Panel) diff --git a/pages/public_cloud/ai_machine_learning/training_tuto_01_train_your_first_model/guide.pt-pt.md b/pages/public_cloud/ai_machine_learning/training_tuto_01_train_your_first_model/guide.pt-pt.md index b0514f3c9a6..83c3aa0eaa3 100644 --- a/pages/public_cloud/ai_machine_learning/training_tuto_01_train_your_first_model/guide.pt-pt.md +++ b/pages/public_cloud/ai_machine_learning/training_tuto_01_train_your_first_model/guide.pt-pt.md @@ -48,7 +48,7 @@ You will follow different steps to export your data, train your model and save i This step is optional because you may load some open datasets through libraries, commands, etc., so you will not need to upload your own data to the cloud. -On the other hand, you can upload your data (dataset, python and requirements files, etc.) to the cloud, in the Object Storage. 
This can be done in two ways: either from the [OVHcloud Control Panel](https://www.ovh.com/manager/#/public-cloud/) or via the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/). +On the other hand, you can upload your data (dataset, python and requirements files, etc.) to the cloud, in the Object Storage. This can be done in two ways: either from the [OVHcloud Control Panel](https://www.ovh.com/manager/#/public-cloud/) or via the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli). **This data can be deleted at any time.** @@ -73,7 +73,7 @@ Once your object container is created, you will see it in the Object Storage lis #### 1.2 - Upload your data via CLI -To follow this part, make sure you have installed the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/) on your computer or on an instance. +To follow this part, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. As in the Control Panel, you will have to specify the `region`, the `name of your container` and the `path` where your data will be located. The creation of your object container can be done by the following command: @@ -93,7 +93,7 @@ This `.zip` file can now be accessed from all OVHcloud AI products, either with ### Step 2 - Launch your training job and attach your data to its environment -To launch your training job, you can also use the [OVHcloud Control Panel](https://www.ovh.com/manager/#/public-cloud/) or the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/). +To launch your training job, you can also use the [OVHcloud Control Panel](https://www.ovh.com/manager/#/public-cloud/) or the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli). #### 2.1 - Launch a training job via UI (Control Panel) diff --git a/pages/public_cloud/ai_machine_learning/training_tuto_09_train_model_export_onnx/guide.de-de.md b/pages/public_cloud/ai_machine_learning/training_tuto_09_train_model_export_onnx/guide.de-de.md index 2a0541c5919..6dccb84d630 100644 --- a/pages/public_cloud/ai_machine_learning/training_tuto_09_train_model_export_onnx/guide.de-de.md +++ b/pages/public_cloud/ai_machine_learning/training_tuto_09_train_model_export_onnx/guide.de-de.md @@ -39,7 +39,7 @@ You can create the bucket that will store your ONNX model at the end of the trai #### Create your bucket via ovhai CLI -To follow this part, make sure you have installed the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/) on your computer or on an instance. +To follow this part, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. As in the Control Panel, you will have to specify the `region`and the `name` (**cnn-model-onnx**) of your bucket. 
Create your Object Storage bucket as follows: diff --git a/pages/public_cloud/ai_machine_learning/training_tuto_09_train_model_export_onnx/guide.en-asia.md b/pages/public_cloud/ai_machine_learning/training_tuto_09_train_model_export_onnx/guide.en-asia.md index 2a0541c5919..6dccb84d630 100644 --- a/pages/public_cloud/ai_machine_learning/training_tuto_09_train_model_export_onnx/guide.en-asia.md +++ b/pages/public_cloud/ai_machine_learning/training_tuto_09_train_model_export_onnx/guide.en-asia.md @@ -39,7 +39,7 @@ You can create the bucket that will store your ONNX model at the end of the trai #### Create your bucket via ovhai CLI -To follow this part, make sure you have installed the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/) on your computer or on an instance. +To follow this part, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. As in the Control Panel, you will have to specify the `region`and the `name` (**cnn-model-onnx**) of your bucket. Create your Object Storage bucket as follows: diff --git a/pages/public_cloud/ai_machine_learning/training_tuto_09_train_model_export_onnx/guide.en-au.md b/pages/public_cloud/ai_machine_learning/training_tuto_09_train_model_export_onnx/guide.en-au.md index 2a0541c5919..6dccb84d630 100644 --- a/pages/public_cloud/ai_machine_learning/training_tuto_09_train_model_export_onnx/guide.en-au.md +++ b/pages/public_cloud/ai_machine_learning/training_tuto_09_train_model_export_onnx/guide.en-au.md @@ -39,7 +39,7 @@ You can create the bucket that will store your ONNX model at the end of the trai #### Create your bucket via ovhai CLI -To follow this part, make sure you have installed the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/) on your computer or on an instance. +To follow this part, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. As in the Control Panel, you will have to specify the `region`and the `name` (**cnn-model-onnx**) of your bucket. Create your Object Storage bucket as follows: diff --git a/pages/public_cloud/ai_machine_learning/training_tuto_09_train_model_export_onnx/guide.en-ca.md b/pages/public_cloud/ai_machine_learning/training_tuto_09_train_model_export_onnx/guide.en-ca.md index 2a0541c5919..6dccb84d630 100644 --- a/pages/public_cloud/ai_machine_learning/training_tuto_09_train_model_export_onnx/guide.en-ca.md +++ b/pages/public_cloud/ai_machine_learning/training_tuto_09_train_model_export_onnx/guide.en-ca.md @@ -39,7 +39,7 @@ You can create the bucket that will store your ONNX model at the end of the trai #### Create your bucket via ovhai CLI -To follow this part, make sure you have installed the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/) on your computer or on an instance. +To follow this part, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. As in the Control Panel, you will have to specify the `region`and the `name` (**cnn-model-onnx**) of your bucket. 
Create your Object Storage bucket as follows: diff --git a/pages/public_cloud/ai_machine_learning/training_tuto_09_train_model_export_onnx/guide.en-gb.md b/pages/public_cloud/ai_machine_learning/training_tuto_09_train_model_export_onnx/guide.en-gb.md index 2a0541c5919..6dccb84d630 100644 --- a/pages/public_cloud/ai_machine_learning/training_tuto_09_train_model_export_onnx/guide.en-gb.md +++ b/pages/public_cloud/ai_machine_learning/training_tuto_09_train_model_export_onnx/guide.en-gb.md @@ -39,7 +39,7 @@ You can create the bucket that will store your ONNX model at the end of the trai #### Create your bucket via ovhai CLI -To follow this part, make sure you have installed the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/) on your computer or on an instance. +To follow this part, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. As in the Control Panel, you will have to specify the `region`and the `name` (**cnn-model-onnx**) of your bucket. Create your Object Storage bucket as follows: diff --git a/pages/public_cloud/ai_machine_learning/training_tuto_09_train_model_export_onnx/guide.en-ie.md b/pages/public_cloud/ai_machine_learning/training_tuto_09_train_model_export_onnx/guide.en-ie.md index 2a0541c5919..6dccb84d630 100644 --- a/pages/public_cloud/ai_machine_learning/training_tuto_09_train_model_export_onnx/guide.en-ie.md +++ b/pages/public_cloud/ai_machine_learning/training_tuto_09_train_model_export_onnx/guide.en-ie.md @@ -39,7 +39,7 @@ You can create the bucket that will store your ONNX model at the end of the trai #### Create your bucket via ovhai CLI -To follow this part, make sure you have installed the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/) on your computer or on an instance. +To follow this part, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. As in the Control Panel, you will have to specify the `region`and the `name` (**cnn-model-onnx**) of your bucket. Create your Object Storage bucket as follows: diff --git a/pages/public_cloud/ai_machine_learning/training_tuto_09_train_model_export_onnx/guide.en-sg.md b/pages/public_cloud/ai_machine_learning/training_tuto_09_train_model_export_onnx/guide.en-sg.md index 2a0541c5919..6dccb84d630 100644 --- a/pages/public_cloud/ai_machine_learning/training_tuto_09_train_model_export_onnx/guide.en-sg.md +++ b/pages/public_cloud/ai_machine_learning/training_tuto_09_train_model_export_onnx/guide.en-sg.md @@ -39,7 +39,7 @@ You can create the bucket that will store your ONNX model at the end of the trai #### Create your bucket via ovhai CLI -To follow this part, make sure you have installed the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/) on your computer or on an instance. +To follow this part, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. As in the Control Panel, you will have to specify the `region`and the `name` (**cnn-model-onnx**) of your bucket. 
Create your Object Storage bucket as follows: diff --git a/pages/public_cloud/ai_machine_learning/training_tuto_09_train_model_export_onnx/guide.en-us.md b/pages/public_cloud/ai_machine_learning/training_tuto_09_train_model_export_onnx/guide.en-us.md index 2a0541c5919..6dccb84d630 100644 --- a/pages/public_cloud/ai_machine_learning/training_tuto_09_train_model_export_onnx/guide.en-us.md +++ b/pages/public_cloud/ai_machine_learning/training_tuto_09_train_model_export_onnx/guide.en-us.md @@ -39,7 +39,7 @@ You can create the bucket that will store your ONNX model at the end of the trai #### Create your bucket via ovhai CLI -To follow this part, make sure you have installed the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/) on your computer or on an instance. +To follow this part, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. As in the Control Panel, you will have to specify the `region`and the `name` (**cnn-model-onnx**) of your bucket. Create your Object Storage bucket as follows: diff --git a/pages/public_cloud/ai_machine_learning/training_tuto_09_train_model_export_onnx/guide.es-es.md b/pages/public_cloud/ai_machine_learning/training_tuto_09_train_model_export_onnx/guide.es-es.md index 2a0541c5919..6dccb84d630 100644 --- a/pages/public_cloud/ai_machine_learning/training_tuto_09_train_model_export_onnx/guide.es-es.md +++ b/pages/public_cloud/ai_machine_learning/training_tuto_09_train_model_export_onnx/guide.es-es.md @@ -39,7 +39,7 @@ You can create the bucket that will store your ONNX model at the end of the trai #### Create your bucket via ovhai CLI -To follow this part, make sure you have installed the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/) on your computer or on an instance. +To follow this part, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. As in the Control Panel, you will have to specify the `region`and the `name` (**cnn-model-onnx**) of your bucket. Create your Object Storage bucket as follows: diff --git a/pages/public_cloud/ai_machine_learning/training_tuto_09_train_model_export_onnx/guide.es-us.md b/pages/public_cloud/ai_machine_learning/training_tuto_09_train_model_export_onnx/guide.es-us.md index 2a0541c5919..6dccb84d630 100644 --- a/pages/public_cloud/ai_machine_learning/training_tuto_09_train_model_export_onnx/guide.es-us.md +++ b/pages/public_cloud/ai_machine_learning/training_tuto_09_train_model_export_onnx/guide.es-us.md @@ -39,7 +39,7 @@ You can create the bucket that will store your ONNX model at the end of the trai #### Create your bucket via ovhai CLI -To follow this part, make sure you have installed the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/) on your computer or on an instance. +To follow this part, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. As in the Control Panel, you will have to specify the `region`and the `name` (**cnn-model-onnx**) of your bucket. 
Create your Object Storage bucket as follows: diff --git a/pages/public_cloud/ai_machine_learning/training_tuto_09_train_model_export_onnx/guide.fr-ca.md b/pages/public_cloud/ai_machine_learning/training_tuto_09_train_model_export_onnx/guide.fr-ca.md index 82d09b0099c..b82f09a50d4 100644 --- a/pages/public_cloud/ai_machine_learning/training_tuto_09_train_model_export_onnx/guide.fr-ca.md +++ b/pages/public_cloud/ai_machine_learning/training_tuto_09_train_model_export_onnx/guide.fr-ca.md @@ -39,7 +39,7 @@ You can create the bucket that will store your ONNX model at the end of the trai #### Create your bucket via ovhai CLI -To follow this part, make sure you have installed the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/) on your computer or on an instance. +To follow this part, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. As in the Control Panel, you will have to specify the `region`and the `name` (**cnn-model-onnx**) of your bucket. Create your Object Storage bucket as follows: diff --git a/pages/public_cloud/ai_machine_learning/training_tuto_09_train_model_export_onnx/guide.fr-fr.md b/pages/public_cloud/ai_machine_learning/training_tuto_09_train_model_export_onnx/guide.fr-fr.md index 82d09b0099c..b82f09a50d4 100644 --- a/pages/public_cloud/ai_machine_learning/training_tuto_09_train_model_export_onnx/guide.fr-fr.md +++ b/pages/public_cloud/ai_machine_learning/training_tuto_09_train_model_export_onnx/guide.fr-fr.md @@ -39,7 +39,7 @@ You can create the bucket that will store your ONNX model at the end of the trai #### Create your bucket via ovhai CLI -To follow this part, make sure you have installed the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/) on your computer or on an instance. +To follow this part, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. As in the Control Panel, you will have to specify the `region`and the `name` (**cnn-model-onnx**) of your bucket. Create your Object Storage bucket as follows: diff --git a/pages/public_cloud/ai_machine_learning/training_tuto_09_train_model_export_onnx/guide.it-it.md b/pages/public_cloud/ai_machine_learning/training_tuto_09_train_model_export_onnx/guide.it-it.md index 2a0541c5919..6dccb84d630 100644 --- a/pages/public_cloud/ai_machine_learning/training_tuto_09_train_model_export_onnx/guide.it-it.md +++ b/pages/public_cloud/ai_machine_learning/training_tuto_09_train_model_export_onnx/guide.it-it.md @@ -39,7 +39,7 @@ You can create the bucket that will store your ONNX model at the end of the trai #### Create your bucket via ovhai CLI -To follow this part, make sure you have installed the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/) on your computer or on an instance. +To follow this part, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. As in the Control Panel, you will have to specify the `region`and the `name` (**cnn-model-onnx**) of your bucket. 
Create your Object Storage bucket as follows: diff --git a/pages/public_cloud/ai_machine_learning/training_tuto_09_train_model_export_onnx/guide.pl-pl.md b/pages/public_cloud/ai_machine_learning/training_tuto_09_train_model_export_onnx/guide.pl-pl.md index 2a0541c5919..6dccb84d630 100644 --- a/pages/public_cloud/ai_machine_learning/training_tuto_09_train_model_export_onnx/guide.pl-pl.md +++ b/pages/public_cloud/ai_machine_learning/training_tuto_09_train_model_export_onnx/guide.pl-pl.md @@ -39,7 +39,7 @@ You can create the bucket that will store your ONNX model at the end of the trai #### Create your bucket via ovhai CLI -To follow this part, make sure you have installed the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/) on your computer or on an instance. +To follow this part, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. As in the Control Panel, you will have to specify the `region`and the `name` (**cnn-model-onnx**) of your bucket. Create your Object Storage bucket as follows: diff --git a/pages/public_cloud/ai_machine_learning/training_tuto_09_train_model_export_onnx/guide.pt-pt.md b/pages/public_cloud/ai_machine_learning/training_tuto_09_train_model_export_onnx/guide.pt-pt.md index 2a0541c5919..6dccb84d630 100644 --- a/pages/public_cloud/ai_machine_learning/training_tuto_09_train_model_export_onnx/guide.pt-pt.md +++ b/pages/public_cloud/ai_machine_learning/training_tuto_09_train_model_export_onnx/guide.pt-pt.md @@ -39,7 +39,7 @@ You can create the bucket that will store your ONNX model at the end of the trai #### Create your bucket via ovhai CLI -To follow this part, make sure you have installed the [ovhai CLI](https://cli.bhs.ai.cloud.ovh.net/) on your computer or on an instance. +To follow this part, make sure you have installed the [ovhai CLI](/pages/public_cloud/ai_machine_learning/cli_10_howto_install_cli) on your computer or on an instance. As in the Control Panel, you will have to specify the `region`and the `name` (**cnn-model-onnx**) of your bucket. Create your Object Storage bucket as follows: