From da89fd3e832ac5d82527059e8410efb2a1dae4c3 Mon Sep 17 00:00:00 2001 From: Prabha Kylasamiyer Sundara Rajan Date: Wed, 17 Sep 2025 22:34:20 +0530 Subject: [PATCH 01/16] First commit Signed-off-by: Prabha Kylasamiyer Sundara Rajan --- docs/developer-lightspeed-guide/master.adoc | 4 +-- .../assembly_getting-started.adoc | 34 +++++++++++++++++++ .../con_developer-lightspeed-pathways.adoc | 10 ++---- .../con_intro-to-developer-lightspeed.adoc | 10 ------ 4 files changed, 39 insertions(+), 19 deletions(-) create mode 100644 docs/topics/developer-lightspeed/assembly_getting-started.adoc diff --git a/docs/developer-lightspeed-guide/master.adoc b/docs/developer-lightspeed-guide/master.adoc index d874d55f..df9d8e8d 100644 --- a/docs/developer-lightspeed-guide/master.adoc +++ b/docs/developer-lightspeed-guide/master.adoc @@ -11,11 +11,11 @@ include::topics/templates/document-attributes.adoc[] :context: mta-developer-lightspeed :mta-developer-lightspeed: -//Inclusive language statement -include::topics/making-open-source-more-inclusive.adoc[] include::topics/developer-lightspeed/con_intro-to-developer-lightspeed.adoc[leveloffset=+1] +include::topics/developer-lightspeed/assembly_getting-started.adoc[leveloffset=+1] + include::topics/developer-lightspeed/assembly_solution-server-configurations.adoc[leveloffset=+1] include::topics/developer-lightspeed/assembly_configuring_llm.adoc[leveloffset=+1] diff --git a/docs/topics/developer-lightspeed/assembly_getting-started.adoc b/docs/topics/developer-lightspeed/assembly_getting-started.adoc new file mode 100644 index 00000000..855dc46b --- /dev/null +++ b/docs/topics/developer-lightspeed/assembly_getting-started.adoc @@ -0,0 +1,34 @@ +:_newdoc-version: 2.18.3 +:_template-generated: 2025-05-28 + +ifdef::context[:parent-context-of-getting-started: {context}] + +:_mod-docs-content-type: ASSEMBLY + +ifndef::context[] +[id="getting-started"] +endif::[] +ifdef::context[] +[id="getting-started_{context}"] +endif::[] += Getting started with {mta-dl-plugin} + +:context: getting-started + +[role="_abstract"] +The Getting started section contains information to walk you through the prerequisites, persistent volume requirements, installation, and workflows that help you to decide how you want to use {mta-dl-full}. + +include::con_prerequisites.adoc[leveloffset=+1] + +include::con_persistent-volumes.adoc[leveloffset=+1] + +include::con_installation.adoc[leveloffset=+1] + +include::con_developer-lightspeed-pathways.adoc[leveloffset=+1] + +include::ref_example-code-suggestion.adoc[leveloffset=+1] + + +ifdef::parent-context-of-getting-started[:context: {parent-context-of-getting-started}] +ifndef::parent-context-of-getting-started[:!context:] + diff --git a/docs/topics/developer-lightspeed/con_developer-lightspeed-pathways.adoc b/docs/topics/developer-lightspeed/con_developer-lightspeed-pathways.adoc index 440804e9..39dac110 100644 --- a/docs/topics/developer-lightspeed/con_developer-lightspeed-pathways.adoc +++ b/docs/topics/developer-lightspeed/con_developer-lightspeed-pathways.adoc @@ -4,7 +4,7 @@ :_mod-docs-content-type: CONCEPT [id="how-to-use-developer-lightspeed_{context}"] -= Introducing {mta-dl-plugin} += How to use {mta-dl-plugin} [role="_abstract"] Starting with {ProductFullName} 8.0.0, you can run an application analysis using the {ProductShortName} Visual Studio (VS) Code plug-in. An {ProductShortName} analysis detects the issues in the code given a set of targets for migration. 
@@ -19,9 +19,7 @@ To make code changes by using the LLM, you must enable the generative AI option, If you make any change after enabling the generative AI settings in the extension, you must restart the extension for the change to take effect. ==== -[id="config-to-use-solution-server_{context}"] - -== Configurations to use the solution server +To use the solution server for code fix suggestions: * Enable the solution server in the Tackle custom resource (CR). @@ -31,9 +29,7 @@ If you make any change after enabling the generative AI settings in the extensio * Configure the profile settings for code fixes and activate the LLM provider in the `provider-settings.yaml` file. -[id="config-to-use-agent-mode_{context}"] - -== Configurations to use the agent mode +To use the agent mode for code fix suggestions: * Enable the generative AI and the agent mode in the {mta-dl-plugin} extension settings. diff --git a/docs/topics/developer-lightspeed/con_intro-to-developer-lightspeed.adoc b/docs/topics/developer-lightspeed/con_intro-to-developer-lightspeed.adoc index d5442d20..5e32dcd5 100644 --- a/docs/topics/developer-lightspeed/con_intro-to-developer-lightspeed.adoc +++ b/docs/topics/developer-lightspeed/con_intro-to-developer-lightspeed.adoc @@ -75,13 +75,3 @@ When you use the Solution Server mode, {mta-dl-plugin} delivers a solution for a * *Contextual code generation* - By leveraging AI for static code analysis, {mta-dl-plugin} breaks down complex problems into more manageable ones, providing the LLM with focused context to generate meaningful results. This helps overcome the limited context size of LLMs when dealing with large codebases. * *No fine tuning* - You also do not need to fine tune your model with a suitable data set for analysis which leaves you free to use and switch LLM models to respond to your requirements. * *Learning and Improvement* - As more parts of a codebase are migrated with {mta-dl-plugin}, it can use RAG to learn from the available data and provide better recommendations in subsequent application analysis. - -include::con_prerequisites.adoc[leveloffset=+1] - -include::con_persistent-volumes.adoc[leveloffset=+1] - -include::con_installation.adoc[leveloffset=+1] - -include::con_developer-lightspeed-pathways.adoc[leveloffset=+1] - -include::ref_example-code-suggestion.adoc[leveloffset=+1] From 56bb90c556b430d508d472236a617449a2e75da3 Mon Sep 17 00:00:00 2001 From: Prabha Kylasamiyer Sundara Rajan Date: Wed, 17 Sep 2025 23:17:12 +0530 Subject: [PATCH 02/16] Modified the abstract Signed-off-by: Prabha Kylasamiyer Sundara Rajan --- docs/developer-lightspeed-guide/master-docinfo.xml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/developer-lightspeed-guide/master-docinfo.xml b/docs/developer-lightspeed-guide/master-docinfo.xml index 3a8b94c3..a1391f71 100644 --- a/docs/developer-lightspeed-guide/master-docinfo.xml +++ b/docs/developer-lightspeed-guide/master-docinfo.xml @@ -3,7 +3,7 @@ {DocInfoProductNumber} Using the {ProductName} Developer Lightspeed to modernize your applications - you can use {ProductFullName} Developer Lightspeed for application modernization in your organization by running Artificial Intelligence-driven static code analysis for Java applications. + By using Developer Lightspeed for Migration Toolkit for Applications (MTA), you can modernize applications in your organization by applying LLM-driven code changes to resolve issues found through static code analysis. 
You can automate code fixes, review the suggested changes, and apply code changes with minimum manual effort. Red Hat Customer Content Services From f6ecbde4fd26a776430cac98d9f3dc0561e48380 Mon Sep 17 00:00:00 2001 From: Prabha Kylasamiyer Sundara Rajan Date: Sat, 20 Sep 2025 17:02:48 +0530 Subject: [PATCH 03/16] Modifying content based on final review - part 1 Signed-off-by: Prabha Kylasamiyer Sundara Rajan --- .../master-docinfo.xml | 2 +- .../assembly_configuring_llm.adoc | 10 +-- ...sembly_solution-server-configurations.adoc | 9 ++- .../con_intro-to-developer-lightspeed.adoc | 12 ++-- .../con_llm-service-openshift-ai.adoc | 16 ++--- .../con_prerequisites.adoc | 18 ++--- ...ing-developer-lightspeed-ide-settings.adoc | 54 +++++++------- ...onfiguring-developer-profile-settings.adoc | 7 +- .../proc_configuring-llm-podman-desktop.adoc | 6 +- .../ref_example-code-suggestion.adoc | 33 +++++---- .../ref_llm-provider-configurations.adoc | 70 +++++++++++-------- 11 files changed, 123 insertions(+), 114 deletions(-) diff --git a/docs/developer-lightspeed-guide/master-docinfo.xml b/docs/developer-lightspeed-guide/master-docinfo.xml index a1391f71..c80e7e56 100644 --- a/docs/developer-lightspeed-guide/master-docinfo.xml +++ b/docs/developer-lightspeed-guide/master-docinfo.xml @@ -3,7 +3,7 @@ {DocInfoProductNumber} Using the {ProductName} Developer Lightspeed to modernize your applications - By using Developer Lightspeed for Migration Toolkit for Applications (MTA), you can modernize applications in your organization by applying LLM-driven code changes to resolve issues found through static code analysis. You can automate code fixes, review the suggested changes, and apply code changes with minimum manual effort. + By using Developer Lightspeed for Migration Toolkit for Applications (MTA), you can modernize applications in your organization by applying LLM-driven code changes to resolve issues found through static code analysis. You can automate code fixes, review and apply the suggested code changes with minimum manual effort. Red Hat Customer Content Services diff --git a/docs/topics/developer-lightspeed/assembly_configuring_llm.adoc b/docs/topics/developer-lightspeed/assembly_configuring_llm.adoc index 3301b5dc..98abf27f 100644 --- a/docs/topics/developer-lightspeed/assembly_configuring_llm.adoc +++ b/docs/topics/developer-lightspeed/assembly_configuring_llm.adoc @@ -14,9 +14,9 @@ endif::[] = Configuring large language models for analysis :context: configuring-llm -To generate suggestions to resolves issues in the code, {mta-dl-plugin} provides the large language model (LLM) with the contextual prompt, migration hints, and solved examples to generate suggestions to resolve issues identified in the current code by running an analysis. +{mta-dl-plugin} provides the large language model (LLM) with the contextual prompt, migration hints, and solved examples to generate suggestions for resolving issues identified in the current code. -{mta-dl-plugin} is designed to be model agnostic. It works with LLMs that are run in different environments (in local containers, as local AI, as a shared service) to support analyzing Java applications in a wide range of scenarios. You can choose an LLM from well-known providers, local models that you run from Ollama or Podman desktop, and OpenAI API compatible models. +{mta-dl-plugin} is designed to be model agnostic. 
It works with LLMs that are run in different environments (in local containers, as local AI, or as a shared service) to support analyzing Java applications in a wide range of scenarios. You can choose an LLM from well-known providers, local models that you run from Ollama or Podman desktop, and OpenAI API compatible models. The code fix suggestions produced to resolve issues detected through an analysis depends on the LLM's capabilities. @@ -27,10 +27,10 @@ You can run an LLM from the following generative AI providers: * Google Gemini * Amazon Bedrock * Ollama -* Groq -* Anthropic -You can also run OpenAI API-compatible LLMs deployed as a service in your OpenShift AI cluster or deployed locally in the Podman AI Lab in your system. +You can also run OpenAI API-compatible LLMs deployed as: +* A service in your {ocp-name} AI cluster +* Locally in the Podman AI Lab in your system. include::con_llm-service-openshift-ai.adoc[leveloffset=+1] diff --git a/docs/topics/developer-lightspeed/assembly_solution-server-configurations.adoc b/docs/topics/developer-lightspeed/assembly_solution-server-configurations.adoc index a4d21813..b5d0fbb1 100644 --- a/docs/topics/developer-lightspeed/assembly_solution-server-configurations.adoc +++ b/docs/topics/developer-lightspeed/assembly_solution-server-configurations.adoc @@ -15,7 +15,7 @@ endif::[] :context: solution-server-configurations [role=_abstract] -Solution server is a component that allows {mta-dl-plugin} to build a collective memory of code changes from all analysis performed in an organization. Wen you request code fix for issues in the Visual Studio (VS) Code, the Solution Server augments previous patterns of how codes changed to resolve issues (also called solved examples) that were similar to those in the current file, and suggests a resolution that has a higher confidence level derived from previous solutions. After you accept a suggested code fix, the solution server works with the large language model (LLM) to improve the hints about the issue that becomes part of the context. An improved context enables the LLM to generate more reliable code fix suggestions in future cases. +Solution server is a component that allows {mta-dl-plugin} to build a collective memory of code changes from all analysis performed in an organization. When you request code fix for issues in the Visual Studio (VS) Code, the Solution Server augments previous patterns of how codes changed to resolve issues (also called solved examples) that were similar to those in the current file, and suggests a resolution that has a higher confidence level derived from previous solutions. After you accept a suggested code fix, the solution server works with the large language model (LLM) to improve the hints about the issue that becomes part of the context. An improved context enables the LLM to generate more reliable code fix suggestions in future cases. The Solution Server delivers two primary benefits to users: @@ -24,17 +24,16 @@ The Solution Server delivers two primary benefits to users: As {mta-dl-plugin} is an optional set of features in {ProductShortName}, you must complete the following configurations before you can access settings necessary to use AI analysis. 
-.Supported large language models and providers +.Configurable large language models and providers |=== -| LLM Provider (Tackle CR value) | Large language model in Tackle CR +| LLM Provider (Tackle CR value) | Large language model examples for Tackle CR configuration +|{ocp-name} AI platform| Models deployed in an OpenShift AI cluster that can be accessed by using Open AI-compatible API | Open AI (`openai`) | `gpt-4`, `gpt-4o`, `gpt-4o-mini`, `gpt-3.5-turbo` | Azure OpenAI (`azure_openai`) | `gpt-4`, `gpt-35-turbo` | Amazon Bedrock (`bedrock`) | `anthropic.claude-3-5-sonnet-20241022-v2:0`, `meta.llama3-1-70b-instruct-v1:0` | Google Gemini (`google`) | `gemini-2.0-flash-exp`, `gemini-1.5-pro` | Ollama (`ollama`) | `llama3.1`, `codellama`, `mistral` -| Groq (`groq`) | `llama-3.1-70b-versatile`, `mixtral-8x7b-32768` -| Anthropic (`anthropic`) | `claude-3-5-sonnet-20241022`, `claude-3-haiku-20240307` |=== diff --git a/docs/topics/developer-lightspeed/con_intro-to-developer-lightspeed.adoc b/docs/topics/developer-lightspeed/con_intro-to-developer-lightspeed.adoc index 5e32dcd5..3eab3090 100644 --- a/docs/topics/developer-lightspeed/con_intro-to-developer-lightspeed.adoc +++ b/docs/topics/developer-lightspeed/con_intro-to-developer-lightspeed.adoc @@ -6,7 +6,7 @@ [id="intro-to-the-developer-lightspeed_{context}"] = Introduction to the {mta-dl-plugin} -Starting from 8.0.0, you can use {mta-dl-full} to modernize applications in your organization by running Artificial Intelligence-driven code changes to resolve issues found through static code analysis of Java applications. +Starting from 8.0.0, {ProductFullName} integrates with large language models (LLM) through the {mta-dl-full} component in the Visual Studio (VS) Code extension. You can use {mta-dl-plugin} to apply LLM-driven code changes to resolve issues found through static code analysis of Java applications. [id="use-case-ai-code-fix_{context}"] == Use case for AI-driven code fixes @@ -33,11 +33,11 @@ The context is a combination of the following inputs that are shared with the LL + * A solved example is created when a Migrator accepts a resolution in a previous analysis that results in updated code or an unfamiliar issue in a legacy application that the Migrator manually fixed. Solved examples are stored in the Solution Server. + -More instances of solved examples for an issue enhances the context and improves the success metrics of rules that trigger the issue. A higher success metrics of an issue refers to the higher confidence level associated with the accepted resolutions for that issue in previous analyses. +More instances of solved examples for an issue enhances the context and improve the success metrics of rules that trigger the issue. A higher success metrics of an issue refers to the higher confidence level associated with the accepted resolutions for that issue in previous analyses. * (Optional) If you enable the solution server mode, the Solution Server extracts a pattern of solution that can be used by the LLM to generate a more accurate migration hint. + -The improvement in the quality of migration hints results in more accurate code resolutions. In turn, these updated code is stored in the solution server to generate a better migration hint in future. +The improvement in the quality of migration hints results in more accurate code resolutions. In turn, the updated code is stored in the solution server to generate a better migration hint in future. 
+ This cyclical improvement of resolution pattern from the solution server and improved migration hints lead to more reliable code changes as you migrate applications in different migration waves @@ -48,9 +48,9 @@ Thus, when you deploy {mta-dl-plugin} for analyzing your entire application port It also enables you to control the analysis through manual reviews of the suggested AI resolutions by accepting or rejecting the changes while reducing the overall time and effort required to prepare your application for migration. [id="modes-developer-lightspeed_{context}"] -== Modes in {mta-dl-plugin} +== Requesting code fixes in {mta-dl-plugin} -You can run an analysis for AI-assisted code fixes in two modes: the Agentic AI and the Retrieval Augmented Generation (RAG) solution delivered by the Solution Server. +You can request AI-assisted code resolutions in two ways: the Agentic AI and the Solution Server. If you enable the agentic AI mode, {mta-dl-plugin} streams an automated analysis of the code in a loop until all issues are resolved and changes the code with the updates. In the initial run, the AI agent: @@ -63,7 +63,7 @@ If you accept that the agentic AI must continue to make changes, it compiles the Agentic AI generates a new file in each round when it applies the suggestions in the code. The time taken by the agentic AI to complete several rounds of analysis depends on the size of the application, the number of issues, and the complexity of the code. -When you use the Solution Server mode, {mta-dl-plugin} delivers a solution for an issue that is based on solved examples or code changes in past analysis. When you fix code, you can view a diff of the updated portions of the code and the original source code to do a manual review. In such an analysis, the user has more control over the changes that must be applied to the code. +When you use the Solution Server, {mta-dl-plugin} delivers a solution for an issue that is based on solved examples or code changes in past analysis. When you fix code, you can view a diff of the updated portions of the code and the original source code to do a manual review. In such an analysis, the user has more control over the changes that must be applied to the code. //You can consider using the demo mode for running {mta-dl-plugin} when you need to perform analysis but have a limited network connection for {mta-dl-plugin} to sync with the LLM. The demo mode stores the input data as a hash and past LLM calls in a cache. The cache is stored in a chosen location in the your file system for later use. The hash of the inputs is used to determine which LLM call must be used in the demo mode. After you enable the demo mode and configure the path to your cached LLM calls in the {mta-dl-plugin} settings, you can rerun an analysis for the same set of files using the responses to a previous LLM call. diff --git a/docs/topics/developer-lightspeed/con_llm-service-openshift-ai.adoc b/docs/topics/developer-lightspeed/con_llm-service-openshift-ai.adoc index 995d4650..43e5dd5e 100644 --- a/docs/topics/developer-lightspeed/con_llm-service-openshift-ai.adoc +++ b/docs/topics/developer-lightspeed/con_llm-service-openshift-ai.adoc @@ -4,31 +4,31 @@ :_mod-docs-content-type: CONCEPT [id="llm-service-openshift-ai_{context}"] -= Deploying an LLM as a service in {ocp-short} AI += Deploying an LLM as a service in an {ocp-name} AI cluster [role="_abstract"] The code suggestions from {mta-dl-full} differ based on the large language model (LLM) that you use. 
Therefore, you may want to use an LLM that caters to your specific requirements. -{mta-dl-plugin} integrates with LLMs that are deployed as a scalable service on {ocp-full} clusters. These deployments provide you with a granular control over resources such as compute, cluster nodes, and auto-scaling Graphical Processing Units (GPUs) while enabling you to leverage LLMs to perform analysis at a large scale. +{mta-dl-plugin} integrates with LLMs that are deployed as a scalable service on {ocp-name} AI clusters. These deployments provide you with granular control over resources such as compute, cluster nodes, and auto-scaling Graphical Processing Units (GPUs) while enabling you to leverage LLMs to resolve code issues at a large scale. -An example workflow for configuring an LLM service on {ocp-short} AI broadly requires the following configurations: +An example workflow for configuring an LLM service on {ocp-name} AI broadly requires the following configurations: * Installing and configuring the following infrastructure resources: -** {ocp-short} cluster and installing the {ocp-short} AI Operator +** {ocp-short} cluster and installing the {ocp-name} AI Operator ** Configure a GPU machineset ** (Optional) Configure an auto scaler custom resource (CR) and a machine scaler CR -* Configuring {ocp-short} AI platform +* Configuring {ocp-name} AI platform ** Configure a data science project ** Configure a serving runtime ** Configure an accelerator profile -* Deploying the LLM through {ocp-short} AI +* Deploying the LLM through {ocp-name} AI ** Uploading your model to an AWS compatible bucket ** Add a data connection -** Deploy the LLM in your {ocp} AI data science project +** Deploy the LLM in your {ocp-name} AI data science project ** Export the SSL certificate, `OPENAI_API_BASE` URL and other environment variables to access the LLM * Preparing the LLM for analysis ** Configure an OpenAI API key ** Update the OpenAI API key and the base URL in `provider-settings.yaml`. //provide the link to the document after publishing -See Provider settings configuration to configure the base URL and LLM key in the {mta-dl-plugin} VS Code extension. \ No newline at end of file +See Provider settings configuration to configure the base URL and the LLM API key in the {mta-dl-plugin} VS Code extension. \ No newline at end of file diff --git a/docs/topics/developer-lightspeed/con_prerequisites.adoc b/docs/topics/developer-lightspeed/con_prerequisites.adoc index d390866d..0fe03204 100644 --- a/docs/topics/developer-lightspeed/con_prerequisites.adoc +++ b/docs/topics/developer-lightspeed/con_prerequisites.adoc @@ -17,28 +17,30 @@ Before you install {mta-dl-plugin}, you must: * Install Git and add it to the $PATH variable -* Install the {ProductShortName} command line 8.0.0 - * Install the {ProductShortName} Operator 8.0.0 + -The {ProductShortName} Operator is mandatory if you plan to enable the solution server to work with the large language model (LLM) for generating code changes. It enables you to log in to the `openshift-mta` project where you must deploy the Tackle custom resources (CR) required for running the Solution Server. +The {ProductShortName} Operator is mandatory if you plan to enable the solution server that works with the large language model (LLM) for generating code changes. It enables you to log in to the `openshift-mta` project where you must enable the Solution Server in the Tackle custom resources (CR). * Create an API key for an LLM. 
+ You must enter the provider value and model name in Tackle custom resource (CR) to enable generative AI configuration in the {ProductShortName} VS Code plugin. + -.Supported large language models and providers +.Configurable large language models and providers |=== -| LLM Provider (Tackle CR value) | Large language model in Tackle CR +| LLM Provider (Tackle CR value) | Large language model examples for Tackle CR configuration +| {ocp-name} AI platform| Models deployed in an {ocp-name} AI cluster that can be accessed by using Open AI-compatible API | Open AI (`openai`) | `gpt-4`, `gpt-4o`, `gpt-4o-mini`, `gpt-3.5-turbo` | Azure OpenAI (`azure_openai`) | `gpt-4`, `gpt-35-turbo` | Amazon Bedrock (`bedrock`) | `anthropic.claude-3-5-sonnet-20241022-v2:0`, `meta.llama3-1-70b-instruct-v1:0` | Google Gemini (`google`) | `gemini-2.0-flash-exp`, `gemini-1.5-pro` | Ollama (`ollama`) | `llama3.1`, `codellama`, `mistral` -| Groq (`groq`) | `llama-3.1-70b-versatile`, `mixtral-8x7b-32768` -| Anthropic (`anthropic`) | `claude-3-5-sonnet-20241022`, `claude-3-haiku-20240307` -|=== \ No newline at end of file +|=== + +[NOTE] +==== +The availability of public LLM models is maintained by the respective LLM provider. +==== \ No newline at end of file diff --git a/docs/topics/developer-lightspeed/proc_configuring-developer-lightspeed-ide-settings.adoc b/docs/topics/developer-lightspeed/proc_configuring-developer-lightspeed-ide-settings.adoc index 5846ebe1..3de0bc01 100644 --- a/docs/topics/developer-lightspeed/proc_configuring-developer-lightspeed-ide-settings.adoc +++ b/docs/topics/developer-lightspeed/proc_configuring-developer-lightspeed-ide-settings.adoc @@ -11,13 +11,9 @@ After you install the {ProductShortName} extension in Visual Studio (VS) Code, y .Prerequisites -* You installed the {ProductFullName} extension version 8.0.0 in VS Code. -* You completed the solution server configurations in Tackle custom resource if you opt to use solution server. -* You installed the {ProductShortName} version 8.0.0 in your system. -* You installed the latest version of Language Support for Java(TM) by Red Hat extension in VS Code. -* You installed Jave 17+ and Maven 3.9.9+ in your system. -* You installed Git and add it to the $PATH variable. +In addition to the overall prerequisites, you have configured the following: +* You completed the solution server configurations in Tackle custom resource if you opt to use the solution server. .Procedure @@ -29,34 +25,34 @@ After you install the {ProductShortName} extension in Visual Studio (VS) Code, y + . Configure the settings described in the following table: -.{mta-dl-plugin} settings +.{mta-dl-plugin} extension settings [cols="40%,60%a",options="header",] |==== |Settings |Description |Log level|Set the log level for the {ProductShortName} binary. The default log level is `debug`. The log level increases or decreases the verbosity of logs. -|RPC Server Path|Displays the path to the solution server binary. If you do not modify the path, {mta-dl-plugin} uses the bundled binary. -|Analyzer path|Specify a {ProductShortName} custom binary path. If you do not provide a path, {mta-dl-plugin} uses the default path to the binary. -|Solution Server:URL|Configure the URL of the Solution Server end point. This field has the default URL. -|Solution Server:enabled|Enable the Solution Server client ({ProductShortName} extension) to connect to the Solution Server to perform analysis. -|Solution Server:Auth| Enable authentication for the Solution Server. 
-|Solution Server:Auth Realm| Enter the name of the Keycloak realm for Solution Server.
-
-If you enabled authentication for the Solution Server, you must configure a link:https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html/server_administration_guide/red_hat_build_of_keycloak_features_and_concepts[Keycloak realm] to allow clients to connect to the Solution Server. An administrator can configure SSL for the realm.
-|Solution Server: Auth Insecure|This option is enabled by default to skip SSL certificate verification when clients connect to the Solution Server. Disable the setting to allow secure connections to the Solution Server.
-|Analyze on save|Enable this setting for {mta-dl-plugin} to run an analysis on a file that is saved after code modification. This setting is enabled automatically when you enable Agentic AI mode.
-|Diff editor type|Select from diff or merge view to review the suggested solutions after running an analysis. The diff view shows the old code and a copy of the code with changes side-by-side. The merge view overlays the changes in the code in a single view.
+|RPC Server Path|Displays the path to the solution server binary. If you do not modify the path, {mta-dl-plugin} uses the bundled binary.
+|Analyzer path|Specify a custom {ProductShortName} binary path. If you do not provide a path, {mta-dl-plugin} uses the default path to the binary.
+|Auto Accept on Save|This option is enabled by default. When you accept the changes suggested by the LLM, the updated code is saved automatically in a new file. Disable this option if you want to manually save the new file after accepting the suggested code changes.
 |Gen AI:Enabled|This option is enabled by default. It enables you to get code fixes by using {mta-dl-plugin} with a large language model.
-|Diff:Auto Accept On Save|This option is enabled by default. When you accept the changes suggested by the LLM, the updated code is saved automatically in a new file. Disable this option if you want to manually save the new file after accepting the suggested code changes.
-|Agent mode|Enable the experimental Agentic AI flow for analysis. {mta-dl-plugin} runs an automated analysis of a file to identify issues and suggest resolutions. After you accept the solutions, {mta-dl-plugin} makes the changes in the code and re-analyzes the file.
-|Excluded diagnostic sources|Add diagnostic sources in the `settings.json` file. The issues generated by such diagnostic sources are excluded from the automated Agentic AI analysis.
-|Cache directory|Specify the path to a directory in your filesystem to store cached responses from the LLM.
-|Demo mode|Enable to run {mta-dl-plugin} in demo mode that uses the LLM responses saved in the `cache` directory for analysis.
-|Trace enabled|Enable to trace {ProductShortName} communication with the LLM model. Traces are stored in the `/.vscode/konveyor-logs/traces` path in your IDE project.
+|Gen AI: Agent mode|Enable the experimental Agentic AI flow for analysis. {mta-dl-plugin} runs an automated analysis of a file to identify issues and suggest resolutions. After you accept the solutions, {mta-dl-plugin} makes the changes in the code and re-analyzes the file.
+|Gen AI: Excluded diagnostic sources|Add diagnostic sources in the `settings.json` file. The issues generated by such diagnostic sources are excluded from the automated Agentic AI analysis.
+|Cache directory|Specify the path to a directory in your filesystem to store cached responses from the LLM.
+|Trace directory|Configure the absolute path to the directory that contains the saved LLM interactions.
+|Trace enabled|Enable to trace {ProductShortName} communication with the LLM model. Traces are stored in the trace directory that you configured.
+|Demo mode|Enable to run {mta-dl-plugin} in demo mode that uses the LLM responses saved in the `cache` directory for analysis.
+|Solution Server|Edit the configurations for the solution server in `settings.json`:
+
+ * "enabled": Enter a boolean value. Set to `true` to connect the Solution Server client ({mta-dl-plugin} extension) to the Solution Server.
+
+ * "url": Configure the URL of the Solution Server endpoint. The default URL points to the localhost.
+
+ * "auth": The authentication settings allow you to configure a list of options to authenticate to the solution server.
+ ** "enabled": Set to `true` to enable authentication.
+
+ ** "insecure": Set to `true` to skip SSL certificate verification when clients connect to the Solution Server. Set to `false` to allow secure connections to the Solution Server.
+
+ ** "realm": Enter the name of the Keycloak realm for Solution Server. If you enabled authentication for the Solution Server, you must configure a link:https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html/server_administration_guide/red_hat_build_of_keycloak_features_and_concepts[Keycloak realm] to allow clients to connect to the Solution Server. An administrator can configure SSL for the realm.
 |Debug:Webview|Enable debug level logging for Webview message handling in VS Code.
-|Analyze dependencies|Enable {mta-dl-plugin} to analyze dependency-related errors detected by the LLM in your project.
-|Analyze known libraries|Enable {mta-dl-plugin} to analyze well-known open-source libraries in your code.
-|Code snip limit|Set the maximum number of lines of code that are included in incident reports.
-|Context lines|Configure the number of context lines included in incident reports. The greater the number, the more the LLM accuracy.
-|Incident limit|Specifies the maximum number of incidents to be reported. If you enter a higher value, it increases the coverage of incidents in your report.
+
 |====
diff --git a/docs/topics/developer-lightspeed/proc_configuring-developer-profile-settings.adoc b/docs/topics/developer-lightspeed/proc_configuring-developer-profile-settings.adoc
index 70664be3..ca1d7a8e 100644
--- a/docs/topics/developer-lightspeed/proc_configuring-developer-profile-settings.adoc
+++ b/docs/topics/developer-lightspeed/proc_configuring-developer-profile-settings.adoc
@@ -11,12 +11,7 @@ To generate code changes using {mta-dl-plugin}, you must configure a profile tha
 
 .Prerequisites
 
-* You installed the {ProductFullName} extension version 8.0.0 in VS Code.
-* You completed the solution server configurations in Tackle custom resource if you opt to use solution server.
-* You installed the {ProductShortName} version 8.0.0 in your system.
-* You installed the latest version of Language Support for Java(TM) by Red Hat extension in VS Code.
-* You installed Jave 17+ and Maven 3.9.9+ in your system.
-* You installed Git and add it to the $PATH variable.
+* You completed the solution server configurations in Tackle custom resource if you opt to use the solution server.
 * You opened a Java project in your VS Code workspace.
.Procedure diff --git a/docs/topics/developer-lightspeed/proc_configuring-llm-podman-desktop.adoc b/docs/topics/developer-lightspeed/proc_configuring-llm-podman-desktop.adoc index 88331677..d8f879f9 100644 --- a/docs/topics/developer-lightspeed/proc_configuring-llm-podman-desktop.adoc +++ b/docs/topics/developer-lightspeed/proc_configuring-llm-podman-desktop.adoc @@ -9,7 +9,7 @@ The Podman AI lab extension enables you to use an open-source model from a curated list of models and use it locally in your system. -The code fix suggestions generated by a model depends on the model's capabilities. Models deployed through the Podman AI Lab must were found to be insufficient for the complexity of code changes required to fix issues discovered by {ProductShortName}. You must not use such models in production environment. +The code fix suggestions generated by a model depends on the model's capabilities. Models deployed through the Podman AI Lab were found to be insufficient for the complexity of code changes required to fix issues discovered by {ProductShortName}. You must not use such models in a production environment. .Prerequisites @@ -39,13 +39,13 @@ You must configure these specifications in the {mta-dl-plugin} extension. export OPENAI_API_BASE= ---- + -. In the {mta-dl-plugin} extension, type `Open the GenAI model provider configuration file` in the Command Pallete to open the `provider-settings.yaml` file. +. In the {mta-dl-plugin} extension, type `Open the GenAI model provider configuration file` in the Command Palette to open the `provider-settings.yaml` file. . Enter the model details from Podman Desktop. For example, use the following configuration for a Mistral model. + [source, yaml] ---- -podman_mistral: +podman_mistral: provider: "ChatOpenAI" environment: OPENAI_API_KEY: "unused value" diff --git a/docs/topics/developer-lightspeed/ref_example-code-suggestion.adoc b/docs/topics/developer-lightspeed/ref_example-code-suggestion.adoc index 608eadbf..95c981a3 100644 --- a/docs/topics/developer-lightspeed/ref_example-code-suggestion.adoc +++ b/docs/topics/developer-lightspeed/ref_example-code-suggestion.adoc @@ -6,7 +6,7 @@ = Generating code fix suggestions example [role="_abstract"] -This example will walk you through generating code fixes for a Java application that must be migrated to `quarkus`. To generate resolutions for issues in the code, we use the Agentic AI mode and the `gpt-4` model from `OpenAI` as the large language model (LLM). +This example will walk you through generating code fixes for a Java application that must be migrated to `quarkus`. To generate resolutions for issues in the code, we use the Agentic AI mode and the `my-model` as the large language model (LLM) that you deployed in {ocp-name} AI. .Procedure @@ -19,11 +19,11 @@ This example will walk you through generating code fixes for a Java application .. Type `Ctrl+Shift+P` in Windows and Linux systems. .. Type `Cmd+Shift+P` in Mac systems. -. Type `Preferences: Open Settings (UI)` in the Command Palette to open the VS Code settings and select `Extension > {ProductShortName}`. +. Type `Preferences: Open Settings (UI)` in the Command Palette to open the VS Code settings and select `Extensions > {ProductShortName}`. -. Select `MTA:Agent Mode` and restart VS Code. +. Select `Gen AI:Agent Mode` and restart VS Code. -. In the {mta-dl-plugin} extension, click `Open {ProductShortName} Analysis View`. +. In the {mta-dl-plugin} extension, click `Open Analysis View`. . 
Type `MTA: Manage Analysis Profile` in the Command Palette to open the analysis profile page. @@ -33,7 +33,7 @@ This example will walk you through generating code fixes for a Java application .. *Target Technologies*: `quarkus` -.. *Use Default Rules*: Toggle the button to use default rules for `quarkus` +.. *Custom Rules*: Select custom rules if you want to include them while running the analysis. By default, {mta-dl-plugin} enables *Use Default Rules* for `quarkus`. . Close the profile manager. @@ -44,21 +44,30 @@ This example will walk you through generating code fixes for a Java application [source, yaml] ---- models: - OpenAI: &active + openshift-example-model: &active environment: - OPENAI_API_KEY: "__" # Required - provider: ChatOpenAI + OPENAI_API_KEY: "" + REQUESTS_CA_BUNDLE: "" + ALLOW_INSECURE: "true" + provider: "ChatOpenAI" args: - model: gpt-4 # Required + model: "my-model" + configuration: + base_url: "https://-.apps.konveyor-ai.migration.redhat.com/v1" ---- ++ +[NOTE] +==== +You must change the `provider-setting` configuration if you plan to use a different LLM provider. +==== -. Type `{ProductShortName}: Open {ProductShortName} Analysis View` in the Command Palette. +. Type `{ProductShortName}: Open Analysis View` in the Command Palette. . Click *Start* to start the {mta-dl-plugin} server. + -Starting the server activates the *Run Analysis* feature and the Agent mode. +Starting the server activates the *Run Analysis* feature. -. Select the profile you configured and toggle the *Agent Mode* on. +. Select the profile you configured. . Click *Run Analysis* to scan the Java application. + diff --git a/docs/topics/developer-lightspeed/ref_llm-provider-configurations.adoc b/docs/topics/developer-lightspeed/ref_llm-provider-configurations.adoc index dc008c01..652b667a 100644 --- a/docs/topics/developer-lightspeed/ref_llm-provider-configurations.adoc +++ b/docs/topics/developer-lightspeed/ref_llm-provider-configurations.adoc @@ -4,10 +4,10 @@ :_mod-docs-content-type: REFERENCE [id="llm-provider-settings_{context}"] -= Provider settings configuration += Configuring LLM provider settings [role="_abstract"] -{mta-dl-full} is large language model (LLM) agnostic and intergrates with an LLM of your choice. +{mta-dl-full} is large language model (LLM) agnostic and integrates with an LLM of your choice. To enable {mta-dl-full} to access your large language model (LLM), you must enter the LLM provider configurations in the `provider-settings.yaml` file. @@ -15,9 +15,39 @@ The `provider-settings.yaml` file contains a list of LLM providers that are supp The provider settings file is available in the {mta-dl-plugin} Visual Studio (VS) Code extension. -Access the `provider-settings.yaml` from the VS Code Command Pallete by typing `Open the GenAI model provider configuration file`. +Access the `provider-settings.yaml` from the VS Code Command Palette by typing `Open the GenAI model provider configuration file`. -You can select one provider from the list by using the `&active` anchor in the name of the provider. To use a model from another provider, move the `&active` anchor to the desired provider block and restart the solution server on the `Open {ProductShortName} Analysis View` screen. +[NOTE] +==== +You can select one provider from the list by using the `&active` anchor in the name of the provider. To use a model from another provider, move the `&active` anchor to _**one**_ of the desired provider blocks and restart the solution server on the `Open {ProductShortName} Analysis View` screen. 
+==== + +For a model named "my-model" deployed in {ocp-name} AI with "example-model" as the serving name: + +//check if openshift prefix is required for OpenShift AI model provider, like "openshift-example-model" or can it be just "example-model" +[source, yaml] +---- +models: + openshift-example-model: &active + environment: + CA_BUNDLE: "" + ALLOW_INSECURE: "true" + provider: "ChatOpenAI" + args: + model: "my-model" + configuration: + base_url: "https://-.apps.konveyor-ai.migration.redhat.com/v1" +---- + +[NOTE] +==== +When you change the `model` deployed in {ocp-name} AI, you must also change the `model` argument and the `base_url` endpoint. +==== + +[NOTE] +==== +If you want to select a public LLM provider, you must move the `&active` anchor to the desired block and change the provider arguments. +==== For an OpenAI model: @@ -35,7 +65,7 @@ For Azure OpenAI: [source, yaml] ---- -AzureChatOpenAI: +AzureChatOpenAI: &active environment: AZURE_OPENAI_API_KEY: "" # Required provider: AzureChatOpenAI @@ -48,7 +78,7 @@ For Amazon Bedrock: [source, yaml] ---- -AmazonBedrock: +AmazonBedrock: &active environment: ## May have to use if no global `~/.aws/credentials` AWS_DEFAULT_REGION: us-east-1 @@ -70,7 +100,7 @@ For Google Gemini: [source, yaml] ---- -GoogleGenAI: +GoogleGenAI: &active environment: GOOGLE_API_KEY: "" # Required provider: ChatGoogleGenerativeAI @@ -83,31 +113,9 @@ For Ollama: [source, yaml] ---- models: - ChatOllama: + ChatOllama: &active provider: "ChatOllama" args: model: "granite-code:8b-instruct" baseUrl: "127.0.0.1:11434" # example URL ----- - -For a model named "my-model" deployed in {ocp-short} AI with "example-model" as the serving name: - -//check if openshift prefix is required for OpenShift AI model provider, like "openshift-example-model" or can it be just "example-model" -[source, yaml] ----- -models: - openshift-example-model: - environment: - CA_BUNDLE: "" - ALLOW_INSECURE: "true" - provider: "ChatOpenAI" - args: - model: "my-model" - configuration: - base_url: "https://-.apps.konveyor-ai.migration.redhat.com/v1" ----- - -[NOTE] -==== -When you change the `model` deployed in {ocp-short} AI, you must also change the `base_url` endpoint. -==== \ No newline at end of file +---- \ No newline at end of file From ef9364a1b4a721cdb81d03ff043f807677c97416 Mon Sep 17 00:00:00 2001 From: Prabha Kylasamiyer Sundara Rajan Date: Sat, 20 Sep 2025 17:14:48 +0530 Subject: [PATCH 04/16] Minor formatting correction Signed-off-by: Prabha Kylasamiyer Sundara Rajan --- docs/topics/developer-lightspeed/assembly_configuring_llm.adoc | 1 + 1 file changed, 1 insertion(+) diff --git a/docs/topics/developer-lightspeed/assembly_configuring_llm.adoc b/docs/topics/developer-lightspeed/assembly_configuring_llm.adoc index 98abf27f..8f9ed094 100644 --- a/docs/topics/developer-lightspeed/assembly_configuring_llm.adoc +++ b/docs/topics/developer-lightspeed/assembly_configuring_llm.adoc @@ -29,6 +29,7 @@ You can run an LLM from the following generative AI providers: * Ollama You can also run OpenAI API-compatible LLMs deployed as: + * A service in your {ocp-name} AI cluster * Locally in the Podman AI Lab in your system. 
From 91c835aa5a21f7c60cc1b69d9f9574e2677d704f Mon Sep 17 00:00:00 2001 From: Prabha Kylasamiyer Sundara Rajan Date: Mon, 22 Sep 2025 11:37:10 +0530 Subject: [PATCH 05/16] Modified draft based on dev and QE feedback - part 2 Signed-off-by: Prabha Kylasamiyer Sundara Rajan --- .../con_developer-lightspeed-logs.adoc | 33 ++++++++----------- .../proc_apply-rag-resolution.adoc | 13 +++----- .../proc_running-agent-analysis.adoc | 21 ++++-------- .../proc_running-rag-analysis.adoc | 15 +++------ 4 files changed, 30 insertions(+), 52 deletions(-) diff --git a/docs/topics/developer-lightspeed/con_developer-lightspeed-logs.adoc b/docs/topics/developer-lightspeed/con_developer-lightspeed-logs.adoc index e170f1af..57e269af 100644 --- a/docs/topics/developer-lightspeed/con_developer-lightspeed-logs.adoc +++ b/docs/topics/developer-lightspeed/con_developer-lightspeed-logs.adoc @@ -11,33 +11,28 @@ Extension logs are stored as `extension.log` with automatic rotation. The maximum size of the log file is 10 MB and three files are retained. Analyzer RPC logs are stored as `analyzer.log` without rotation. -[id="dev-lightspeed-access-logs_{context}"] - -== Access the logs +[id="dev-lightspeed-archive-logs_{context}"] -You can access the extension logs in the following ways: +== Archiving the logs -* *Command Palette*: Type `Show Logs` or `Open Log` and select `Extension Host`. +To archive the logs as a zip file, type `{ProductShortName}: Generate Debug Archive` in the VS Code Command Palette and select the information type that must be archived as a log file. -* *Output panel*: Select `Extension Host` from the drop-down menu. +The archive command allows capturing all relevant log files in a zip archive at the specified location in your project. By default, you can access the archived logs in the .vscode directory of your project. -* *Log file*: Go to `.vscode/mta-logs` directory in your project and open the extension log file. +The archival feature helps you to save the following information: -To access the analyzer log file, go to `.vscode/mta-logs` directory in your project and open the analyzer log file. +* Large language model (LLM) provider configuration: Fields from the provider settings that can be included in the archive. All fields are redacted for security reasons by default. Ensure that you do not expose any secrets. +* LLM model arguments +* LLM traces: If you enabled tracing LLM interactions, you can choose to include LLM traces in the logs. -You can also inspect webview content by using the webview logs. To access the webview logs, type `Open Webview Developer Tools` in the VS Code Command Palette. +[id="dev-lightspeed-access-logs_{context}"] -[id="dev-lightspeed-archive-logs_{context}"] +== Accessing the logs -== Archive logs +You can access the logs in the following ways: -The archival feature helps you to save the following information: +* *Log file*: Type `Developer: Open Extension Logs Folder` and open the `redhat.mta-vscode-extension` directory that contains the extension log and the analyzer log. -* Large language model (LLM) provider configuration -* LLM model arguments -* Template configuration -* LLama header -* LLM retry attempts -* LLM retries delay +* *Output panel*: Select `{mta-dl-plugin}` from the drop-down menu. -Type `Generate Debug Archive` in the VS Code Command Palette and select the information type that must be archived as a log file. You can access the tar file of archived logs in the `.vscode` directory of your project. 
+* *Webview logs*: You can also inspect webview content by using the webview logs. To access the webview logs, type `Open Webview Developer Tools` in the VS Code Command Palette. \ No newline at end of file diff --git a/docs/topics/developer-lightspeed/proc_apply-rag-resolution.adoc b/docs/topics/developer-lightspeed/proc_apply-rag-resolution.adoc index 854a11ef..d62d7b9f 100644 --- a/docs/topics/developer-lightspeed/proc_apply-rag-resolution.adoc +++ b/docs/topics/developer-lightspeed/proc_apply-rag-resolution.adoc @@ -3,28 +3,23 @@ :_mod-docs-content-type: PROCEDURE [id="apply-rag-resolution_{context}"] -= Applying resolutions after a solution server analysis += Applying resolutions generated by the solution server [role="_abstract"] When you request a code fix by using {mta-dl-plugin}, you first get a stream of messages about what needs to be fixed to resolve the issues and the corresponding updates to your code in newly generated files. You can review the changes to the code in the new files and apply the resolutions. -In the solution server mode, an issue displays an associated success metrics. Success metrics indicate the confidence level in applying the fix suggestion from the LLM based on how many times the update was applied in past analysis. +When you enable the solution server, an issue displays the success metric when the metric becomes available. A success metric indicates the confidence level in applying the fix suggestion from the LLM based on how many times the update was applied in past analysis. {mta-dl-plugin} then triggers another round of analysis to check if more issues must be fixed in the code. .Prerequisites -* You installed the {ProductShortName} distribution version 8.0.0 in your system. -* You installed Jave 17+ and Maven 3.9.9+ in your system. -* You installed the {ProductFullName} extension version 8.0.0 in VS Code. -* You installed the latest version of Language Support for Java(TM) by Red Hat extension in Visual Studio (VS) Code. * You opened a Java project in your VS Code workspace. -//check what's the alternative for Konveyor Analysis View in the d/s build. -* You configured a profile on the *Konveyor Analysis View* page and ran an analysis. +* You configured a profile on the *{ProductShortName} Analysis View* page and ran an analysis. .Procedure -. Review the issues from the *Analysis results* space of the *Konveyor view analysis* page by the following tabs: +. Review the issues from the *Analysis results* space of the *{ProductShortName} view analysis* page by the following tabs: .. *All*: lists all incidents identified in your project. .. *Files*: lists all the files in your project for which the analysis identified issues that must be resolved. .. *Issues*: lists all issues across different files in your project. diff --git a/docs/topics/developer-lightspeed/proc_running-agent-analysis.adoc b/docs/topics/developer-lightspeed/proc_running-agent-analysis.adoc index d3d002ae..bb8319a4 100644 --- a/docs/topics/developer-lightspeed/proc_running-agent-analysis.adoc +++ b/docs/topics/developer-lightspeed/proc_running-agent-analysis.adoc @@ -3,24 +3,19 @@ :_mod-docs-content-type: PROCEDURE [id="running-agent-analysis_{context}"] -= Running an analysis in agent mode += Generating code resolutions in the agent mode [role="_abstract"] In the agent mode, the {mta-dl-plugin} planning agent creates the context for an issue and picks a sub-agent that is most suited to resolve the issue. 
The sub-agent runs an automated scan to describe how the issue can be resolved and generates files with the updated resolutions in one stream. You can review the updated files and approve or reject the changes to the code. The agent runs another automated analysis to detect new issues in the code that may have occurred because of the accepted changes or diagnostic issues that your tool may generate following a previous analysis. If you allow the process to continue, {mta-dl-plugin} runs the stream again and generates a new file with the latest updates. -When using the agent mode, you can reject the changes or discontinue the stream but cannot edit the updated files during the stream. +When using the agent mode, you can reject the changes or discontinue the stream but you cannot edit the updated files during the stream. .Prerequisites -* You installed the {ProductShortName} distribution version 8.0.0 in your system. -* You installed Java 17+ and Maven 3.9.9+ in your system. -* You installed the {ProductFullName} extension version 8.0.0 in VS Code. -* You installed the latest version of Language Support for Java(TM) by Red Hat extension in Visual Studio (VS) Code. * You opened a Java project in your VS Code workspace. -//check what's the alternative for Konveyor references in the d/s build. -* You configured an analysis profile on the *Konveyor Analysis View* page. +* You configured an analysis profile on the *{ProductShortName} Analysis View* page. .Procedure @@ -28,25 +23,23 @@ When using the agent mode, you can reject the changes or discontinue the stream + .. Type `Ctrl + Shift + P` in VS Code search (Linux/Windows system) and `Cmd + Shift + P` for Mac to go to the command palette. .. Enter `Preferences: Open User Settings (JSON)` to open the `settings.json` file. -//check later to see how Konveyor and kai references are changed -.. Ensure that `konveyor.kai.agentMode` is set to `true`. +.. Ensure that `mta-vscode-extensionkonveyor.genai.agentMode` is set to `true`. + OR + .. Go to *Extensions > {mta-dl-plugin} > settings* -//check the settings to see how Kai:Agent Mode is changed .. Click the *Agent Mode* option to enable the server. + -. Click the {mta-dl-plugin} extension and click *Open Konveyor Analysis View*. +. Click the {mta-dl-plugin} extension and click *Open {ProductShortName} Analysis View*. + . Select a profile for the analysis. + . Click *Start* to start the {ProductShortName} RPC server. + -. Click *Run Analysis* on the *Konveyor Analysis View* page. +. Click *Run Analysis* on the *{ProductShortName} Analysis View* page. The *Resolution Details* tab opens, where you can view the automated analysis that makes changes in applicable files. + -. Click *Review Changes* option to open the editor that shows the diff view of the modified file. +. Click the *Review Changes* option to open the editor that shows the diff view of the modified file. + . Review the changes and click *Apply* to update the file with all the changes or *Reject* to reject all changes. If you applied the changes, then {mta-dl-plugin} creates the updated file with code changes. 
+ diff --git a/docs/topics/developer-lightspeed/proc_running-rag-analysis.adoc b/docs/topics/developer-lightspeed/proc_running-rag-analysis.adoc index 0e2e232e..0dc92d3f 100644 --- a/docs/topics/developer-lightspeed/proc_running-rag-analysis.adoc +++ b/docs/topics/developer-lightspeed/proc_running-rag-analysis.adoc @@ -3,7 +3,7 @@ :_mod-docs-content-type: PROCEDURE [id="running-rag-analysis_{context}"] -= Running an analysis in solution server mode += Generating code resolutions from solution server [role="_abstract"] Solution server uses Retrieval Augmented Generation (RAG) to extract a pattern of resolution that improves the context. {mta-dl-plugin} derives the context from rules, past changes to in the codebase ro resolve issues, and from migration hits created by the solution server by working with the large language model (LLM). @@ -12,13 +12,8 @@ Solution server uses Retrieval Augmented Generation (RAG) to extract a pattern o .Prerequisites -* You installed the {ProductShortName} distribution version 8.0.0 in your system. -* You installed Jave 17+ and Maven 3.9.9+ in your system. -* You installed the {ProductFullName} extension version 8.0.0 in VS Code. -* You installed the latest version of Language Support for Java(TM) by Red Hat extension in Visual Studio (VS) Code. * You opened a Java project in your VS Code workspace. -//check what's the alternative for Konveyor references in the d/s build. -* You configured an analysis profile on the *Konveyor Analysis View* page. +* You configured an analysis profile on the *{ProductShortName} Analysis View* page. .Procedure @@ -26,19 +21,19 @@ Solution server uses Retrieval Augmented Generation (RAG) to extract a pattern o + .. Type `Ctrl + Shift + P` in VS Code search (Linux/Windows system) and `Cmd + Shift + P` for Mac to go to the command palette. .. Enter `Preferences: Open User Settings (JSON)` to open the `settings.json` file. -.. Ensure that `konveyor.solutionServer.enabled` is set to `true`. +.. Ensure that `mta-vscode-extension.solutionServer.enabled` is set to `true`. + OR + .. Go to *Extensions > {mta-dl-plugin} > settings* .. Click the *Solution Server:Enabled* option to enable the server. + -. Click the {mta-dl-plugin} extension and click *Open Konveyor Analysis View*. +. Click the {mta-dl-plugin} extension and click *Open {ProductShortName} Analysis View*. + . Select a profile for the analysis. + . Click *Start* to start the {ProductShortName} RPC server. + -. Click *Run Analysis* on the *Konveyor Analysis View* page. +. Click *Run Analysis* on the *{ProductShortName} Analysis View* page. To resolve the identified issues, see Applying resolutions after generating suggestions to change code. 
\ No newline at end of file From 3deda1d64ef5f255eceed5eb8d97131248a95eaf Mon Sep 17 00:00:00 2001 From: Prabha Kylasamiyer Sundara Rajan Date: Mon, 22 Sep 2025 11:46:55 +0530 Subject: [PATCH 06/16] Modified chapter 5.2 Signed-off-by: Prabha Kylasamiyer Sundara Rajan --- .../proc_configuring-developer-profile-settings.adoc | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/docs/topics/developer-lightspeed/proc_configuring-developer-profile-settings.adoc b/docs/topics/developer-lightspeed/proc_configuring-developer-profile-settings.adoc index ca1d7a8e..e2060a54 100644 --- a/docs/topics/developer-lightspeed/proc_configuring-developer-profile-settings.adoc +++ b/docs/topics/developer-lightspeed/proc_configuring-developer-profile-settings.adoc @@ -16,12 +16,12 @@ To generate code changes using {mta-dl-plugin}, you must configure a profile tha .Procedure -. Open the `Konveyor View Analysis` page in either of the following ways: +. Open the `{ProductShortName} View Analysis` page in either of the following ways: + -.. Click the screen icon on the `Konveyor Issues` pane of the {ProductShortName} extension. +.. Click the screen icon on the `{ProductShortName}: Issues` pane of the {ProductShortName} extension. .. Type `Ctrl + Shift + P` on the search bar to open the Command Palette and enter `Konveyor:Open Konveyor Analysis View`. + -. Click the settings button on the `Konveyor View Analysis` page to configure a profile for your project. +. Click the settings button on the `{ProductShortName} View Analysis` page to configure a profile for your project. The `Get Ready to Analyze` pane lists the follwoing basic configurations required for an analysis: + From ba5733bc018a2a3f9c2a6f0f8e6394cea4c6a07e Mon Sep 17 00:00:00 2001 From: Prabha Kylasamiyer Sundara Rajan Date: Mon, 22 Sep 2025 11:48:32 +0530 Subject: [PATCH 07/16] Modified chapter 5.2 again Signed-off-by: Prabha Kylasamiyer Sundara Rajan --- .../proc_configuring-developer-profile-settings.adoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/topics/developer-lightspeed/proc_configuring-developer-profile-settings.adoc b/docs/topics/developer-lightspeed/proc_configuring-developer-profile-settings.adoc index e2060a54..5525c207 100644 --- a/docs/topics/developer-lightspeed/proc_configuring-developer-profile-settings.adoc +++ b/docs/topics/developer-lightspeed/proc_configuring-developer-profile-settings.adoc @@ -19,7 +19,7 @@ To generate code changes using {mta-dl-plugin}, you must configure a profile tha . Open the `{ProductShortName} View Analysis` page in either of the following ways: + .. Click the screen icon on the `{ProductShortName}: Issues` pane of the {ProductShortName} extension. -.. Type `Ctrl + Shift + P` on the search bar to open the Command Palette and enter `Konveyor:Open Konveyor Analysis View`. +.. Type `Ctrl + Shift + P` on the search bar to open the Command Palette and enter `{ProductShortName}:Open Analysis View`. + . Click the settings button on the `{ProductShortName} View Analysis` page to configure a profile for your project. 
The `Get Ready to Analyze` pane lists the follwoing basic configurations required for an analysis: From 366ac9162baa65ab2c5e3cb229cf184d9fbd0a92 Mon Sep 17 00:00:00 2001 From: Prabha Kylasamiyer Sundara Rajan Date: Mon, 22 Sep 2025 11:50:15 +0530 Subject: [PATCH 08/16] Updated chapter 6.3 Signed-off-by: Prabha Kylasamiyer Sundara Rajan --- .../developer-lightspeed/proc_running-agent-analysis.adoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/topics/developer-lightspeed/proc_running-agent-analysis.adoc b/docs/topics/developer-lightspeed/proc_running-agent-analysis.adoc index bb8319a4..3d67699d 100644 --- a/docs/topics/developer-lightspeed/proc_running-agent-analysis.adoc +++ b/docs/topics/developer-lightspeed/proc_running-agent-analysis.adoc @@ -23,7 +23,7 @@ When using the agent mode, you can reject the changes or discontinue the stream + .. Type `Ctrl + Shift + P` in VS Code search (Linux/Windows system) and `Cmd + Shift + P` for Mac to go to the command palette. .. Enter `Preferences: Open User Settings (JSON)` to open the `settings.json` file. -.. Ensure that `mta-vscode-extensionkonveyor.genai.agentMode` is set to `true`. +.. Ensure that `mta-vscode-extension.genai.agentMode` is set to `true`. + OR + From bd47fa79d23a37b2f1fe42f72c77302fdb8c6de7 Mon Sep 17 00:00:00 2001 From: Prabha Kylasamiyer Sundara Rajan Date: Wed, 24 Sep 2025 11:52:52 +0530 Subject: [PATCH 09/16] Modified based on feedback from the Kai team Signed-off-by: Prabha Kylasamiyer Sundara Rajan --- .../assembly_getting-started.adoc | 2 +- .../con_developer-lightspeed-pathways.adoc | 3 +- .../con_intro-to-developer-lightspeed.adoc | 37 +++++++++---------- .../con_prerequisites.adoc | 2 +- .../ref_example-code-suggestion.adoc | 2 +- .../ref_llm-provider-configurations.adoc | 2 +- 6 files changed, 23 insertions(+), 25 deletions(-) diff --git a/docs/topics/developer-lightspeed/assembly_getting-started.adoc b/docs/topics/developer-lightspeed/assembly_getting-started.adoc index 855dc46b..576af6d8 100644 --- a/docs/topics/developer-lightspeed/assembly_getting-started.adoc +++ b/docs/topics/developer-lightspeed/assembly_getting-started.adoc @@ -16,7 +16,7 @@ endif::[] :context: getting-started [role="_abstract"] -The Getting started section contains information to walk you through the prerequisites, persistent volume requirements, installation, and workflows that help you to decide how you want to use {mta-dl-full}. +The Getting started section contains information to walk you through the prerequisites, persistent volume requirements, installation, and workflows that help you to decide how you want to use the {mta-dl-full}. include::con_prerequisites.adoc[leveloffset=+1] diff --git a/docs/topics/developer-lightspeed/con_developer-lightspeed-pathways.adoc b/docs/topics/developer-lightspeed/con_developer-lightspeed-pathways.adoc index 39dac110..96171076 100644 --- a/docs/topics/developer-lightspeed/con_developer-lightspeed-pathways.adoc +++ b/docs/topics/developer-lightspeed/con_developer-lightspeed-pathways.adoc @@ -11,8 +11,7 @@ Starting with {ProductFullName} 8.0.0, you can run an application analysis using You can opt to use {mta-dl-full} features to request a code fix suggestion. {mta-dl-plugin} augments the manual changes made to code throughout your organization in different migration waves and creates a context that is shared with a large language model (LLM). The LLM suggests code resolutions based on the issue description, context, and previous examples of code changes to resolve issues. 
-To make code changes by using the LLM, you must enable the generative AI option, along with either the Solution Server mode or the Agent mode. The configurations that you complete before you request code fixes depend on the mode you prefer. -//Is it ok for users to enable all three settings? Gen AI, Solution Server, and Agent mode. +To make code changes by using the LLM, you must enable the generative AI option, along with either the Solution Server or the Agent AI. The configurations that you complete before you request code fixes depend on the mode you prefer. [NOTE] ==== diff --git a/docs/topics/developer-lightspeed/con_intro-to-developer-lightspeed.adoc b/docs/topics/developer-lightspeed/con_intro-to-developer-lightspeed.adoc index 3eab3090..e273a99b 100644 --- a/docs/topics/developer-lightspeed/con_intro-to-developer-lightspeed.adoc +++ b/docs/topics/developer-lightspeed/con_intro-to-developer-lightspeed.adoc @@ -20,50 +20,49 @@ Migrators do duplicate work by resolving issues that are repeated across applica [id="how-developerlightspped-works_{context}"] == How does {mta-dl-plugin} work -{mta-dl-plugin} works by collecting and storing the changes in the code for a large collection of applications, finding context to generate prompts for the large language model (LLM) of your choice, and by generating migrating hints produced by the LLM to resolve specific issues. +{mta-dl-plugin} works by collecting and storing the changes in the code for a large collection of applications, finding context to generate prompts for the LLM of your choice, and by generating code resolutions produced by the LLM to address specific issues. -The LLM generates migration hints based on the context shared by the {mt-dl-plugin}. -The context allows the LLM to "reason" and generate the hints. This mechanism helps to overcome the limited context size in LLMs that prevents them from analyzing the entire source code of an application. +{mta-dl-plugin} uses Retrieval Augmented Generation for context-based resolutions of issues in code. By using RAG, {mta-dl-plugin} improves the context shared with the LLM to generate more accurate suggestions to fix the issue in the code. The context allows the LLM to "reason" and generate suggestions for issues detected in the code. This mechanism helps to overcome the limited context size in LLMs that prevents them from analyzing the entire source code of an application. -The context is a combination of the following inputs that are shared with the LLM: +The context is a combination of the source code, the issue description, and solved examples: * Description of issues detected by {ProductShortName} when you run a static code analysis for a given set of target technologies. -* (Optional) Extra information that you include in the rules. The default and custom rules may contain additional information that helps {mta-dl-plugin} to define the context. +* (Optional) The default and custom rules may contain additional information that you include which can help {mta-dl-plugin} to define the context. + -* A solved example is created when a Migrator accepts a resolution in a previous analysis that results in updated code or an unfamiliar issue in a legacy application that the Migrator manually fixed. Solved examples are stored in the Solution Server. +* Solved examples constitute code changes from other migrations and a pattern of resolution for an issue that can be used in future. 
A solved example is created when a Migrator accepts a resolution in a previous analysis that results in updated code or an unfamiliar issue in a legacy application that the Migrator manually fixed. Solved examples are stored in the Solution Server. + More instances of solved examples for an issue enhances the context and improve the success metrics of rules that trigger the issue. A higher success metrics of an issue refers to the higher confidence level associated with the accepted resolutions for that issue in previous analyses. -* (Optional) If you enable the solution server mode, the Solution Server extracts a pattern of solution that can be used by the LLM to generate a more accurate migration hint. +* (Optional) If you enable the Solution Server, it extracts a pattern of resolution, called the migration hint, that can be used by the LLM to generate a more accurate fix suggestion in a future analysis. + -The improvement in the quality of migration hints results in more accurate code resolutions. In turn, the updated code is stored in the solution server to generate a better migration hint in future. +The improvement in the quality of migration hints results in more accurate code resolutions. Accurate code resolutions from the LLM result in the user accepting an update to the code. The updated code is stored in the solution server to generate a better migration hint in future. + -This cyclical improvement of resolution pattern from the solution server and improved migration hints lead to more reliable code changes as you migrate applications in different migration waves +This cyclical improvement of resolution pattern from the Solution Server and improved migration hints lead to more reliable code changes as you migrate applications in different migration waves. -The Solution Server acts as an institutional memory that stores changes to source codes after analyzing applications in your organization. This helps you to leverage the recurring patterns of solutions for issues that are repeated in many applications. +[id="modes-developer-lightspeed_{context}"] +== Requesting code fixes in {mta-dl-plugin} -Thus, when you deploy {mta-dl-plugin} for analyzing your entire application portfolio, it enables you to be consistent with the common fixes you need to make in the source code of any Java application. +You can request AI-assisted code resolutions that obtain additional context from several potential sources, such as analysis issues, IDE diagnostic information, and past migration data via the Solution Server. -It also enables you to control the analysis through manual reviews of the suggested AI resolutions by accepting or rejecting the changes while reducing the overall time and effort required to prepare your application for migration. +The Solution Server acts as an institutional memory that stores changes to source codes after analyzing applications in your organization. This helps you to leverage the recurring patterns of solutions for issues that are repeated in many applications. -[id="modes-developer-lightspeed_{context}"] -== Requesting code fixes in {mta-dl-plugin} +When you use the Solution Server, {mta-dl-plugin} suggests a code resolution that is based on solved examples or code changes in past analysis. You can view a diff of the updated portions of the code and the original source code to do a manual review. -You can request AI-assisted code resolutions in two ways: the Agentic AI and the Solution Server. 
+It also enables you to control the analysis through manual reviews of the suggested AI resolutions: you can accept, reject or edit the suggested code changes while reducing the overall time and effort required to prepare your application for migration. -If you enable the agentic AI mode, {mta-dl-plugin} streams an automated analysis of the code in a loop until all issues are resolved and changes the code with the updates. In the initial run, the AI agent: +In the agentic AI mode, {mta-dl-plugin} streams an automated analysis of the code in a loop until all issues are resolved and changes the code with the updates. In the initial run, the AI agent: * Plans the context to define the issues. * Chooses a suitable sub agent for the analysis task. Works with the LLM to generate fix suggestions. The reasoning transcript and files to be changed are displayed to the user. * Applies the changes to the code once the user approves the updates. -If you accept that the agentic AI must continue to make changes, it compiles the code and runs a partial analysis. In this phase, the agentic AI can detect diagnostic issues (if any) generated by tools that you installed in the VS Code IDE. You can accept the agentic AI's suggestion to address these diagnostic issues too. After every phase of applying changes to the code, the agentic AI runs another round of automated analysis depending on your acceptance, until it has run through all the files in your project and resolved the issues in the code. +If you accept that the agentic AI must continue to make changes, it compiles the code and runs a partial analysis. In this iteration, the agentic AI attempts to fix diagnostic issues (if any) generated by tools that you installed in the VS Code IDE. You can review the changes and accept the agentic AI's suggestion to address these diagnostic issues. -Agentic AI generates a new file in each round when it applies the suggestions in the code. The time taken by the agentic AI to complete several rounds of analysis depends on the size of the application, the number of issues, and the complexity of the code. +After each iteration of applying changes to the code, the agentic AI asks if you want the agent to continue fixing more issue. When you accept, it runs another iteration of automated analysis until it has resolved all issues or it has made a maximum of two attempts to fix an issue. -When you use the Solution Server, {mta-dl-plugin} delivers a solution for an issue that is based on solved examples or code changes in past analysis. When you fix code, you can view a diff of the updated portions of the code and the original source code to do a manual review. In such an analysis, the user has more control over the changes that must be applied to the code. +Agentic AI generates a new preview in each iteration when it updates the code with the suggested resolutions. The time taken by the agentic AI to complete all iterations depends on the number of new diagnostic issues that are detected in the code. //You can consider using the demo mode for running {mta-dl-plugin} when you need to perform analysis but have a limited network connection for {mta-dl-plugin} to sync with the LLM. The demo mode stores the input data as a hash and past LLM calls in a cache. The cache is stored in a chosen location in the your file system for later use. The hash of the inputs is used to determine which LLM call must be used in the demo mode. 
After you enable the demo mode and configure the path to your cached LLM calls in the {mta-dl-plugin} settings, you can rerun an analysis for the same set of files using the responses to a previous LLM call. diff --git a/docs/topics/developer-lightspeed/con_prerequisites.adoc b/docs/topics/developer-lightspeed/con_prerequisites.adoc index 0fe03204..7a5a9955 100644 --- a/docs/topics/developer-lightspeed/con_prerequisites.adoc +++ b/docs/topics/developer-lightspeed/con_prerequisites.adoc @@ -31,7 +31,7 @@ You must enter the provider value and model name in Tackle custom resource (CR) |=== | LLM Provider (Tackle CR value) | Large language model examples for Tackle CR configuration -| {ocp-name} AI platform| Models deployed in an {ocp-name} AI cluster that can be accessed by using Open AI-compatible API +| {ocp-name} AI platform| Models deployed in an {ocp-name} AI cluster that can be accessed by using Open AI-compatible API. | Open AI (`openai`) | `gpt-4`, `gpt-4o`, `gpt-4o-mini`, `gpt-3.5-turbo` | Azure OpenAI (`azure_openai`) | `gpt-4`, `gpt-35-turbo` | Amazon Bedrock (`bedrock`) | `anthropic.claude-3-5-sonnet-20241022-v2:0`, `meta.llama3-1-70b-instruct-v1:0` diff --git a/docs/topics/developer-lightspeed/ref_example-code-suggestion.adoc b/docs/topics/developer-lightspeed/ref_example-code-suggestion.adoc index 95c981a3..331a6b60 100644 --- a/docs/topics/developer-lightspeed/ref_example-code-suggestion.adoc +++ b/docs/topics/developer-lightspeed/ref_example-code-suggestion.adoc @@ -21,7 +21,7 @@ This example will walk you through generating code fixes for a Java application . Type `Preferences: Open Settings (UI)` in the Command Palette to open the VS Code settings and select `Extensions > {ProductShortName}`. -. Select `Gen AI:Agent Mode` and restart VS Code. +. Select `Gen AI:Agent Mode`. . In the {mta-dl-plugin} extension, click `Open Analysis View`. diff --git a/docs/topics/developer-lightspeed/ref_llm-provider-configurations.adoc b/docs/topics/developer-lightspeed/ref_llm-provider-configurations.adoc index 652b667a..f424d898 100644 --- a/docs/topics/developer-lightspeed/ref_llm-provider-configurations.adoc +++ b/docs/topics/developer-lightspeed/ref_llm-provider-configurations.adoc @@ -19,7 +19,7 @@ Access the `provider-settings.yaml` from the VS Code Command Palette by typing ` [NOTE] ==== -You can select one provider from the list by using the `&active` anchor in the name of the provider. To use a model from another provider, move the `&active` anchor to _**one**_ of the desired provider blocks and restart the solution server on the `Open {ProductShortName} Analysis View` screen. +You can select one provider from the list by using the `&active` anchor in the name of the provider. To use a model from another provider, move the `&active` anchor to _**one**_ of the desired provider blocks. 
==== For a model named "my-model" deployed in {ocp-name} AI with "example-model" as the serving name: From 75d5e779632c63bc0ccb01948a253184313e86b7 Mon Sep 17 00:00:00 2001 From: Prabha Kylasamiyer Sundara Rajan Date: Wed, 24 Sep 2025 12:06:18 +0530 Subject: [PATCH 10/16] Removed duplicate entry in table 5.1 Signed-off-by: Prabha Kylasamiyer Sundara Rajan --- .../proc_configuring-developer-lightspeed-ide-settings.adoc | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/docs/topics/developer-lightspeed/proc_configuring-developer-lightspeed-ide-settings.adoc b/docs/topics/developer-lightspeed/proc_configuring-developer-lightspeed-ide-settings.adoc index 3de0bc01..b9c23834 100644 --- a/docs/topics/developer-lightspeed/proc_configuring-developer-lightspeed-ide-settings.adoc +++ b/docs/topics/developer-lightspeed/proc_configuring-developer-lightspeed-ide-settings.adoc @@ -30,8 +30,7 @@ In addition to the overall prerequisites, you have configured the following: |==== |Settings |Description |Log level|Set the log level for the {ProductShortName} binary. The default log level is `debug`. The log level increases or decreases the verbosity of logs. -|Analyzer path|Displays the path to the solution server binary. If you do not modify the path, {mta-dl-plugin} uses the bundled binary. -|Analyzer path|Specify a MTA custom binary path. If you do not provide a path, Developer Lightspeed for MTA uses the default path to the binary. +|Analyzer path|Specify an {ProductShortName} custom binary path. If you do not provide a path, Developer Lightspeed for MTA uses the default path to the binary. |Auto Accept on Save|This option is enabled by default. When you accept the changes suggested by the LLM, the updated code is saved automatically in a new file. Disable this option if you want to manually save the new file after accepting the suggested code changes. |Gen AI:Enabled|This option is enabled by default. It enables you to get code fixes by using {mta-dl-plugin} with a large language model. |Gen AI: Agent mode|Enable the experimental Agentic AI flow for analysis. {mta-dl-plugin} runs an automated analysis of a file to identify issues and suggest resolutions. After you accept the solutions, {mta-dl-plugin} makes the changes in the code and re-analyzes the file. From 65142db6f76a5692878cb77c1e957e6c48464836 Mon Sep 17 00:00:00 2001 From: Prabha Kylasamiyer Sundara Rajan Date: Wed, 24 Sep 2025 12:19:33 +0530 Subject: [PATCH 11/16] Modified the analysis and resolution modules Signed-off-by: Prabha Kylasamiyer Sundara Rajan --- .../proc_apply-rag-resolution.adoc | 11 +++++------ .../proc_running-rag-analysis.adoc | 17 ++--------------- 2 files changed, 7 insertions(+), 21 deletions(-) diff --git a/docs/topics/developer-lightspeed/proc_apply-rag-resolution.adoc b/docs/topics/developer-lightspeed/proc_apply-rag-resolution.adoc index d62d7b9f..e76e6730 100644 --- a/docs/topics/developer-lightspeed/proc_apply-rag-resolution.adoc +++ b/docs/topics/developer-lightspeed/proc_apply-rag-resolution.adoc @@ -6,16 +6,15 @@ = Applying resolutions generated by the solution server [role="_abstract"] -When you request a code fix by using {mta-dl-plugin}, you first get a stream of messages about what needs to be fixed to resolve the issues and the corresponding updates to your code in newly generated files. You can review the changes to the code in the new files and apply the resolutions. 
+When you request code resolutions by enabling the Solution Server, an issue displays the success metric when the metric becomes available. A success metric indicates the confidence level in applying the fix suggestion from the LLM based on how many times the update was applied in past analysis. -When you enable the solution server, an issue displays the success metric when the metric becomes available. A success metric indicates the confidence level in applying the fix suggestion from the LLM based on how many times the update was applied in past analysis. - -{mta-dl-plugin} then triggers another round of analysis to check if more issues must be fixed in the code. +You can review the code updates and edit the suggested code resolutions before accepting the suggestions. .Prerequisites * You opened a Java project in your VS Code workspace. -* You configured a profile on the *{ProductShortName} Analysis View* page and ran an analysis. +* You configured a profile on the *{ProductShortName} Analysis View* page +* You ran an analysis after enabling solution server. .Procedure @@ -27,5 +26,5 @@ When you enable the solution server, an issue displays the success metric when t . Click *Has Success Rate* to check how many times the same issue resolution was accepted in previous analysis. . Click the solution tool to trigger automated updates to your code. If you applied any category filter, code updates are made for all incidents, specific files, or specific issues based on the filter. {mta-dl-plugin} generates new files with the updated code. -. Review and (optionally) edit the code update in a *diff* or a *merge* view. +. Review and (optionally) edit the code. . Click *Apply all* in the *Resolutions* pane to permanently apply the changes to your code. \ No newline at end of file diff --git a/docs/topics/developer-lightspeed/proc_running-rag-analysis.adoc b/docs/topics/developer-lightspeed/proc_running-rag-analysis.adoc index 0dc92d3f..decdcabd 100644 --- a/docs/topics/developer-lightspeed/proc_running-rag-analysis.adoc +++ b/docs/topics/developer-lightspeed/proc_running-rag-analysis.adoc @@ -3,12 +3,10 @@ :_mod-docs-content-type: PROCEDURE [id="running-rag-analysis_{context}"] -= Generating code resolutions from solution server += Running an Analysis [role="_abstract"] -Solution server uses Retrieval Augmented Generation (RAG) to extract a pattern of resolution that improves the context. {mta-dl-plugin} derives the context from rules, past changes to in the codebase ro resolve issues, and from migration hits created by the solution server by working with the large language model (LLM). - -{mta-dl-plugin} generates a prompt for your large language model (LLM) based on the derived context. The LLM generates suggestions to resolve issues identified by running the static code analysis. +You can run a static code analysis of an application with or without enabling the generative AI features. The RPC server runs the analysis to detect all issues in the code for one or more target technologies to which you want to migrate the application. .Prerequisites @@ -17,17 +15,6 @@ Solution server uses Retrieval Augmented Generation (RAG) to extract a pattern o .Procedure -. Verify that solution server is enabled in one of the following ways: -+ -.. Type `Ctrl + Shift + P` in VS Code search (Linux/Windows system) and `Cmd + Shift + P` for Mac to go to the command palette. -.. Enter `Preferences: Open User Settings (JSON)` to open the `settings.json` file. -.. 
Ensure that `mta-vscode-extension.solutionServer.enabled` is set to `true`. -+ -OR -+ -.. Go to *Extensions > {mta-dl-plugin} > settings* -.. Click the *Solution Server:Enabled* option to enable the server. -+ . Click the {mta-dl-plugin} extension and click *Open {ProductShortName} Analysis View*. + . Select a profile for the analysis. From ce10e089af584825b74242fd7f9a102b07eae619 Mon Sep 17 00:00:00 2001 From: Prabha Kylasamiyer Sundara Rajan Date: Thu, 25 Sep 2025 13:32:39 +0530 Subject: [PATCH 12/16] Made changes based on Test Day feedback Signed-off-by: Prabha Kylasamiyer Sundara Rajan --- ...sembly_configuring-dev-lightspeed-ide.adoc | 2 + ...sembly_solution-server-configurations.adoc | 4 +- .../con_llm-service-openshift-ai.adoc | 3 +- ...ing-developer-lightspeed-ide-settings.adoc | 8 ++- ...onfiguring-developer-profile-settings.adoc | 6 +- .../proc_configuring-llm-podman-desktop.adoc | 2 +- ...iguring-solution-server-settings-file.adoc | 55 +++++++++++++++++++ .../proc_tackle-llm-secret.adoc | 13 +++-- .../ref_example-code-suggestion.adoc | 5 +- .../ref_llm-provider-configurations.adoc | 4 +- 10 files changed, 81 insertions(+), 21 deletions(-) create mode 100644 docs/topics/developer-lightspeed/proc_configuring-solution-server-settings-file.adoc diff --git a/docs/topics/developer-lightspeed/assembly_configuring-dev-lightspeed-ide.adoc b/docs/topics/developer-lightspeed/assembly_configuring-dev-lightspeed-ide.adoc index c85a00ea..a7a76044 100644 --- a/docs/topics/developer-lightspeed/assembly_configuring-dev-lightspeed-ide.adoc +++ b/docs/topics/developer-lightspeed/assembly_configuring-dev-lightspeed-ide.adoc @@ -23,6 +23,8 @@ You must configure the following settings in {mta-dl-full}: include::proc_configuring-developer-lightspeed-ide-settings.adoc[leveloffset=+1] +include::proc_configuring-solution-server-settings-file.adoc[leveloffset=+1] + include::proc_configuring-developer-profile-settings.adoc[leveloffset=+1] ifdef::parent-context-of-configuring-dev-lightspeed-ide[:context: {parent-context-of-configuring-dev-lightspeed-ide}] diff --git a/docs/topics/developer-lightspeed/assembly_solution-server-configurations.adoc b/docs/topics/developer-lightspeed/assembly_solution-server-configurations.adoc index b5d0fbb1..64954b45 100644 --- a/docs/topics/developer-lightspeed/assembly_solution-server-configurations.adoc +++ b/docs/topics/developer-lightspeed/assembly_solution-server-configurations.adoc @@ -22,9 +22,9 @@ The Solution Server delivers two primary benefits to users: * *Contextual Hints*: It surfaces examples of past migration solutions — including successful user modifications and accepted fixes — offering actionable hints for difficult or previously unsolved migration problems. * *Migration Success Metrics*: It exposes detailed success metrics for each migration rule, derived from real-world usage data. These metrics can be used by IDEs or automation tools to present users with a “confidence level” or likelihood of {mta-dl-plugin} successfully migrating a given code segment. -As {mta-dl-plugin} is an optional set of features in {ProductShortName}, you must complete the following configurations before you can access settings necessary to use AI analysis. +Solution Server is an optional component in {mta-dl-plugin}. You must complete the following configurations before you can place a code resolution request. 
-.Configurable large language models and providers +.Configurable large language models and providers in Tackle custom resource |=== | LLM Provider (Tackle CR value) | Large language model examples for Tackle CR configuration diff --git a/docs/topics/developer-lightspeed/con_llm-service-openshift-ai.adoc b/docs/topics/developer-lightspeed/con_llm-service-openshift-ai.adoc index 43e5dd5e..89762f89 100644 --- a/docs/topics/developer-lightspeed/con_llm-service-openshift-ai.adoc +++ b/docs/topics/developer-lightspeed/con_llm-service-openshift-ai.adoc @@ -30,5 +30,4 @@ An example workflow for configuring an LLM service on {ocp-name} AI broadly requ ** Configure an OpenAI API key ** Update the OpenAI API key and the base URL in `provider-settings.yaml`. -//provide the link to the document after publishing -See Provider settings configuration to configure the base URL and the LLM API key in the {mta-dl-plugin} VS Code extension. \ No newline at end of file +See xref:ref_llm-provider-configurations_context[Configuring LLM provider settings] to configure the base URL and the LLM API key in the {mta-dl-plugin} VS Code extension. \ No newline at end of file diff --git a/docs/topics/developer-lightspeed/proc_configuring-developer-lightspeed-ide-settings.adoc b/docs/topics/developer-lightspeed/proc_configuring-developer-lightspeed-ide-settings.adoc index b9c23834..43fd2fdb 100644 --- a/docs/topics/developer-lightspeed/proc_configuring-developer-lightspeed-ide-settings.adoc +++ b/docs/topics/developer-lightspeed/proc_configuring-developer-lightspeed-ide-settings.adoc @@ -21,7 +21,7 @@ In addition to the overall prerequisites, you have configured the following: + .. Click `Extensions > MTA CLI Extension for VSCode > Settings` + -.. Type `Ctrl + Shift + P` on the search bar to open the Command Palette and enter `Preferences: Open Settings (UI)`. Go to `Extensions > MTA` to open the settings page. +.. Type `Ctrl + Shift + P` or `Cmd + Shift + P` on the search bar to open the Command Palette and enter `Preferences: Open Settings (UI)`. Go to `Extensions > MTA` to open the settings page. + . Configure the settings described in the following table: @@ -43,14 +43,16 @@ In addition to the overall prerequisites, you have configured the following: * “enabled”: Enter a boolean value. Set `true` for connecting the Solution Server client ({mta-dl-plugin} extension) to the Solution Server. - * “url”: Configure the URL of the Solution Server end point. This field has the localhost as the default URL. + * “url”: Configure the URL of the Solution Server end point. * “auth”: The authentication settings allows you to configure a list of options to authenticate to the solution server. - ** "enabled": Set to `true` to enable authentication. + ** "enabled": Set to `true` to enable authentication. If you enable authentication, then you must configure the Solution Server realm. ** "insecure": Set to `true` to skip SSL certificate verification when clients connect to the Solution Server. Set to `false` to allow secure connections to the Solution Server. ** "realm": Enter the name of the Keycloak realm for Solution Server. If you enabled authentication for the Solution Server, you must configure a link:https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html/server_administration_guide/red_hat_build_of_keycloak_features_and_concepts[Keycloak realm] to allow clients to connect to the Solution Server. An administrator can configure SSL for the realm. 
+ + See xref:proc_configuring-solution-server-settings-file_context[Configuring the solution server settings] for an example configuration. |Debug:Webview|Enable debug level logging for Webview message handling in VS Code. |==== diff --git a/docs/topics/developer-lightspeed/proc_configuring-developer-profile-settings.adoc b/docs/topics/developer-lightspeed/proc_configuring-developer-profile-settings.adoc index 5525c207..726ac406 100644 --- a/docs/topics/developer-lightspeed/proc_configuring-developer-profile-settings.adoc +++ b/docs/topics/developer-lightspeed/proc_configuring-developer-profile-settings.adoc @@ -18,8 +18,8 @@ To generate code changes using {mta-dl-plugin}, you must configure a profile tha . Open the `{ProductShortName} View Analysis` page in either of the following ways: + -.. Click the screen icon on the `{ProductShortName}: Issues` pane of the {ProductShortName} extension. -.. Type `Ctrl + Shift + P` on the search bar to open the Command Palette and enter `{ProductShortName}:Open Analysis View`. +.. Click the book icon on the `{ProductShortName}: Issues` pane of the {ProductShortName} extension. +.. Type `Ctrl + Shift + P` or `Cmd + Shift + P` on the search bar to open the Command Palette and enter `{ProductShortName}:Open Analysis View`. + . Click the settings button on the `{ProductShortName} View Analysis` page to configure a profile for your project. The `Get Ready to Analyze` pane lists the follwoing basic configurations required for an analysis: @@ -45,5 +45,5 @@ If you mentioned a new target or a source technology in your custom rule, you ca You must configure either target or source tehcnologies before running an analysis. ==== |Set rules|Enable default rules and select your custom rule that you want {ProductShortName} to use for an analysis. You can use the custom rules in addition to the default rules. -|Configure generative AI|This option opens the `provider-settings.yaml` file that contains API keys and other parameters for all supported LLMs. By default, {mta-dl-plugin} is configured to use OpenAI LLM. To change the model, update the anchor `&active` to the desired block. Modify this file with the required arguments, such as the model and API key, to complete the setup. +|Configure generative AI|This option opens the `provider-settings.yaml` file that contains API keys and other parameters for all supported LLMs. By default, {mta-dl-plugin} is configured to use OpenAI LLM. To change the model, update the anchor `&active` to the desired block. Modify this file with the required arguments, such as the model and API key, to complete the setup. See xref:ref_llm-provider-configurations_context[Configuring LLM provider settings]. 
|==== diff --git a/docs/topics/developer-lightspeed/proc_configuring-llm-podman-desktop.adoc b/docs/topics/developer-lightspeed/proc_configuring-llm-podman-desktop.adoc index d8f879f9..9fdf4797 100644 --- a/docs/topics/developer-lightspeed/proc_configuring-llm-podman-desktop.adoc +++ b/docs/topics/developer-lightspeed/proc_configuring-llm-podman-desktop.adoc @@ -45,7 +45,7 @@ export OPENAI_API_BASE= + [source, yaml] ---- -podman_mistral: +podman_mistral: &active provider: "ChatOpenAI" environment: OPENAI_API_KEY: "unused value" diff --git a/docs/topics/developer-lightspeed/proc_configuring-solution-server-settings-file.adoc b/docs/topics/developer-lightspeed/proc_configuring-solution-server-settings-file.adoc new file mode 100644 index 00000000..694c1821 --- /dev/null +++ b/docs/topics/developer-lightspeed/proc_configuring-solution-server-settings-file.adoc @@ -0,0 +1,55 @@ +:_newdoc-version: 2.18.3 +:_template-generated: 2025-02-26 +:_mod-docs-content-type: PROCEDURE + +[id="configuring-solution-server-settings-file_{context}"] += Configuring the solution server settings + +[role="_abstract"] +You need a Keycloak realm and the solution server URL to connect {mta-dl-plugin} extension with the Solution Server. + +.Prerequisites + +* The Solution Server URL is available. + +* An administrator configured the Keycloak realm for the Solution Server. + +.Procedure + +. Type `Ctrl + Shift + P` or `Cmd + Shift + P` on the search bar and enter `Preferences:Open User Settings (JSON)`. + +. In the `settings.json` file, enter `Ctrl + SPACE` to enable the auto-complete for the Solution Server configurable fields. + +. Modify the following configuration as necessary: ++ + +[source, yaml] +---- +{ + "mta-vscode-extension.solutionServer": { + + "url": "https://mta-openshift-mta-kai.apps.konveyor-ai.example.com/hub/services/kai/api", + + "enabled": true, + "auth": { + + "enabled": true, #you must enter the username and password + "insecure": true, + "realm": "mta" + }, + + } +} +---- ++ + +[NOTE] +==== +When you enable Solution Server authentication for the first time, you must enter the `username` and `password` in the VS Code search bar. +==== ++ + +[TIP] +==== +Enter `MTA: Restart Solution Server` in the Command Palette to restart the Solution Server. +==== \ No newline at end of file diff --git a/docs/topics/developer-lightspeed/proc_tackle-llm-secret.adoc b/docs/topics/developer-lightspeed/proc_tackle-llm-secret.adoc index fe830a06..aaa5d330 100644 --- a/docs/topics/developer-lightspeed/proc_tackle-llm-secret.adoc +++ b/docs/topics/developer-lightspeed/proc_tackle-llm-secret.adoc @@ -8,6 +8,11 @@ [role="_abstract"] You must configure the Kubernetes secret for the large language model (LLM) provider in the {ocp-short} project where you installed the {ProductShortName} operator. +[NOTE] +==== +You can replace `oc` in the following commands with `kubectl`. +==== + .Procedure . Create a credentials secret named `kai-api-keys` in the `openshift-mta` project. 
@@ -16,7 +21,7 @@ You must configure the Kubernetes secret for the large language model (LLM) prov + [source, terminal] ---- -kubectl create secret generic aws-credentials \ +oc create secret generic aws-credentials \ --from-literal=AWS_ACCESS_KEY_ID= \ --from-literal=AWS_SECRET_ACCESS_KEY= ---- @@ -26,7 +31,7 @@ kubectl create secret generic aws-credentials \ + [source, terminal] ---- -kubectl create secret generic kai-api-keys -n openshift-mta \ +oc create secret generic kai-api-keys -n openshift-mta \ --from-literal=AZURE_OPENAI_API_KEY='' ---- + @@ -35,7 +40,7 @@ kubectl create secret generic kai-api-keys -n openshift-mta \ + [source, terminal] ---- -kubectl create secret generic kai-api-keys -n openshift-mta \ +oc create secret generic kai-api-keys -n openshift-mta \ --from-literal=GEMINI_API_KEY='' ---- + @@ -45,7 +50,7 @@ kubectl create secret generic kai-api-keys -n openshift-mta \ [source, terminal] ---- -kubectl create secret generic kai-api-keys -n openshift-mta \ +oc create secret generic kai-api-keys -n openshift-mta \ --from-literal=OPENAI_API_BASE='https://example.openai.com/v1' \ --from-literal=OPENAI_API_KEY='' ---- diff --git a/docs/topics/developer-lightspeed/ref_example-code-suggestion.adoc b/docs/topics/developer-lightspeed/ref_example-code-suggestion.adoc index 331a6b60..7eb2844c 100644 --- a/docs/topics/developer-lightspeed/ref_example-code-suggestion.adoc +++ b/docs/topics/developer-lightspeed/ref_example-code-suggestion.adoc @@ -47,13 +47,12 @@ models: openshift-example-model: &active environment: OPENAI_API_KEY: "" - REQUESTS_CA_BUNDLE: "" - ALLOW_INSECURE: "true" + CA_BUNDLE: "" provider: "ChatOpenAI" args: model: "my-model" configuration: - base_url: "https://-.apps.konveyor-ai.migration.redhat.com/v1" + baseURL: "https://-.apps.konveyor-ai.migration.redhat.com/v1" ---- + [NOTE] diff --git a/docs/topics/developer-lightspeed/ref_llm-provider-configurations.adoc b/docs/topics/developer-lightspeed/ref_llm-provider-configurations.adoc index f424d898..ef20ca16 100644 --- a/docs/topics/developer-lightspeed/ref_llm-provider-configurations.adoc +++ b/docs/topics/developer-lightspeed/ref_llm-provider-configurations.adoc @@ -31,12 +31,11 @@ models: openshift-example-model: &active environment: CA_BUNDLE: "" - ALLOW_INSECURE: "true" provider: "ChatOpenAI" args: model: "my-model" configuration: - base_url: "https://-.apps.konveyor-ai.migration.redhat.com/v1" + baseURL: "https://-.apps.konveyor-ai.migration.redhat.com/v1" ---- [NOTE] @@ -81,7 +80,6 @@ For Amazon Bedrock: AmazonBedrock: &active environment: ## May have to use if no global `~/.aws/credentials` - AWS_DEFAULT_REGION: us-east-1 AWS_ACCESS_KEY_ID: "" # Required if a global ~/.aws/credentials file is not present AWS_SECRET_ACCESS_KEY: "" # Required if a global ~/.aws/credentials file is not present AWS_DEFAULT_REGION: "" # Required From 64f8a4d0874b7fe5d37b44385ab1ab9328744c20 Mon Sep 17 00:00:00 2001 From: Prabha Kylasamiyer Sundara Rajan Date: Thu, 25 Sep 2025 13:41:38 +0530 Subject: [PATCH 13/16] Trying to fix internal links Signed-off-by: Prabha Kylasamiyer Sundara Rajan --- .../developer-lightspeed/con_llm-service-openshift-ai.adoc | 2 +- .../proc_configuring-developer-lightspeed-ide-settings.adoc | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/topics/developer-lightspeed/con_llm-service-openshift-ai.adoc b/docs/topics/developer-lightspeed/con_llm-service-openshift-ai.adoc index 89762f89..c47b5ece 100644 --- a/docs/topics/developer-lightspeed/con_llm-service-openshift-ai.adoc +++ 
b/docs/topics/developer-lightspeed/con_llm-service-openshift-ai.adoc @@ -30,4 +30,4 @@ An example workflow for configuring an LLM service on {ocp-name} AI broadly requ ** Configure an OpenAI API key ** Update the OpenAI API key and the base URL in `provider-settings.yaml`. -See xref:ref_llm-provider-configurations_context[Configuring LLM provider settings] to configure the base URL and the LLM API key in the {mta-dl-plugin} VS Code extension. \ No newline at end of file +See xref:ref_llm-provider-configurations[Configuring LLM provider settings] to configure the base URL and the LLM API key in the {mta-dl-plugin} VS Code extension. \ No newline at end of file diff --git a/docs/topics/developer-lightspeed/proc_configuring-developer-lightspeed-ide-settings.adoc b/docs/topics/developer-lightspeed/proc_configuring-developer-lightspeed-ide-settings.adoc index 43fd2fdb..cf796c03 100644 --- a/docs/topics/developer-lightspeed/proc_configuring-developer-lightspeed-ide-settings.adoc +++ b/docs/topics/developer-lightspeed/proc_configuring-developer-lightspeed-ide-settings.adoc @@ -52,7 +52,7 @@ In addition to the overall prerequisites, you have configured the following: ** "realm": Enter the name of the Keycloak realm for Solution Server. If you enabled authentication for the Solution Server, you must configure a link:https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html/server_administration_guide/red_hat_build_of_keycloak_features_and_concepts[Keycloak realm] to allow clients to connect to the Solution Server. An administrator can configure SSL for the realm. - See xref:proc_configuring-solution-server-settings-file_context[Configuring the solution server settings] for an example configuration. + See xref:proc_configuring-solution-server-settings-file[Configuring the solution server settings] for an example configuration. |Debug:Webview|Enable debug level logging for Webview message handling in VS Code. |==== From 4adfdb918806a26fbec38d2b2f73b0d287cfbc6e Mon Sep 17 00:00:00 2001 From: Prabha Kylasamiyer Sundara Rajan Date: Thu, 25 Sep 2025 14:02:19 +0530 Subject: [PATCH 14/16] Fixing internal xref links Signed-off-by: Prabha Kylasamiyer Sundara Rajan --- .../developer-lightspeed/con_llm-service-openshift-ai.adoc | 2 +- .../proc_configuring-developer-lightspeed-ide-settings.adoc | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/topics/developer-lightspeed/con_llm-service-openshift-ai.adoc b/docs/topics/developer-lightspeed/con_llm-service-openshift-ai.adoc index c47b5ece..1388dc24 100644 --- a/docs/topics/developer-lightspeed/con_llm-service-openshift-ai.adoc +++ b/docs/topics/developer-lightspeed/con_llm-service-openshift-ai.adoc @@ -30,4 +30,4 @@ An example workflow for configuring an LLM service on {ocp-name} AI broadly requ ** Configure an OpenAI API key ** Update the OpenAI API key and the base URL in `provider-settings.yaml`. -See xref:ref_llm-provider-configurations[Configuring LLM provider settings] to configure the base URL and the LLM API key in the {mta-dl-plugin} VS Code extension. \ No newline at end of file +See xref:ref_llm-provider-configurations.adoc[Configuring LLM provider settings] to configure the base URL and the LLM API key in the {mta-dl-plugin} VS Code extension. 
\ No newline at end of file diff --git a/docs/topics/developer-lightspeed/proc_configuring-developer-lightspeed-ide-settings.adoc b/docs/topics/developer-lightspeed/proc_configuring-developer-lightspeed-ide-settings.adoc index cf796c03..11ff6731 100644 --- a/docs/topics/developer-lightspeed/proc_configuring-developer-lightspeed-ide-settings.adoc +++ b/docs/topics/developer-lightspeed/proc_configuring-developer-lightspeed-ide-settings.adoc @@ -52,7 +52,7 @@ In addition to the overall prerequisites, you have configured the following: ** "realm": Enter the name of the Keycloak realm for Solution Server. If you enabled authentication for the Solution Server, you must configure a link:https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html/server_administration_guide/red_hat_build_of_keycloak_features_and_concepts[Keycloak realm] to allow clients to connect to the Solution Server. An administrator can configure SSL for the realm. - See xref:proc_configuring-solution-server-settings-file[Configuring the solution server settings] for an example configuration. + See xref:proc_configuring-solution-server-settings-file.adoc[Configuring the solution server settings] for an example configuration. |Debug:Webview|Enable debug level logging for Webview message handling in VS Code. |==== From 163243ec8d945b1bf3981afed23af3f772932b97 Mon Sep 17 00:00:00 2001 From: Prabha Kylasamiyer Sundara Rajan Date: Thu, 25 Sep 2025 14:05:54 +0530 Subject: [PATCH 15/16] Xrefs are not possible Signed-off-by: Prabha Kylasamiyer Sundara Rajan --- .../developer-lightspeed/con_llm-service-openshift-ai.adoc | 2 +- .../proc_configuring-developer-lightspeed-ide-settings.adoc | 2 -- 2 files changed, 1 insertion(+), 3 deletions(-) diff --git a/docs/topics/developer-lightspeed/con_llm-service-openshift-ai.adoc b/docs/topics/developer-lightspeed/con_llm-service-openshift-ai.adoc index 1388dc24..d6aa26ed 100644 --- a/docs/topics/developer-lightspeed/con_llm-service-openshift-ai.adoc +++ b/docs/topics/developer-lightspeed/con_llm-service-openshift-ai.adoc @@ -30,4 +30,4 @@ An example workflow for configuring an LLM service on {ocp-name} AI broadly requ ** Configure an OpenAI API key ** Update the OpenAI API key and the base URL in `provider-settings.yaml`. -See xref:ref_llm-provider-configurations.adoc[Configuring LLM provider settings] to configure the base URL and the LLM API key in the {mta-dl-plugin} VS Code extension. \ No newline at end of file +//See xref:ref_llm-provider-configurations.adoc[Configuring LLM provider settings] to configure the base URL and the LLM API key in the {mta-dl-plugin} VS Code extension. \ No newline at end of file diff --git a/docs/topics/developer-lightspeed/proc_configuring-developer-lightspeed-ide-settings.adoc b/docs/topics/developer-lightspeed/proc_configuring-developer-lightspeed-ide-settings.adoc index 11ff6731..512234af 100644 --- a/docs/topics/developer-lightspeed/proc_configuring-developer-lightspeed-ide-settings.adoc +++ b/docs/topics/developer-lightspeed/proc_configuring-developer-lightspeed-ide-settings.adoc @@ -51,8 +51,6 @@ In addition to the overall prerequisites, you have configured the following: ** "insecure": Set to `true` to skip SSL certificate verification when clients connect to the Solution Server. Set to `false` to allow secure connections to the Solution Server. ** "realm": Enter the name of the Keycloak realm for Solution Server. 
If you enabled authentication for the Solution Server, you must configure a link:https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html/server_administration_guide/red_hat_build_of_keycloak_features_and_concepts[Keycloak realm] to allow clients to connect to the Solution Server. An administrator can configure SSL for the realm. - - See xref:proc_configuring-solution-server-settings-file.adoc[Configuring the solution server settings] for an example configuration. |Debug:Webview|Enable debug level logging for Webview message handling in VS Code. |==== From 293f5bbe9bee401769ce91f4973f679b5a0b7252 Mon Sep 17 00:00:00 2001 From: Prabha Kylasamiyer Sundara Rajan Date: Thu, 25 Sep 2025 15:06:08 +0530 Subject: [PATCH 16/16] Removed another link Signed-off-by: Prabha Kylasamiyer Sundara Rajan --- .../proc_configuring-developer-profile-settings.adoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/topics/developer-lightspeed/proc_configuring-developer-profile-settings.adoc b/docs/topics/developer-lightspeed/proc_configuring-developer-profile-settings.adoc index 726ac406..7683d9ab 100644 --- a/docs/topics/developer-lightspeed/proc_configuring-developer-profile-settings.adoc +++ b/docs/topics/developer-lightspeed/proc_configuring-developer-profile-settings.adoc @@ -45,5 +45,5 @@ If you mentioned a new target or a source technology in your custom rule, you ca You must configure either target or source tehcnologies before running an analysis. ==== |Set rules|Enable default rules and select your custom rule that you want {ProductShortName} to use for an analysis. You can use the custom rules in addition to the default rules. -|Configure generative AI|This option opens the `provider-settings.yaml` file that contains API keys and other parameters for all supported LLMs. By default, {mta-dl-plugin} is configured to use OpenAI LLM. To change the model, update the anchor `&active` to the desired block. Modify this file with the required arguments, such as the model and API key, to complete the setup. See xref:ref_llm-provider-configurations_context[Configuring LLM provider settings]. +|Configure generative AI|This option opens the `provider-settings.yaml` file that contains API keys and other parameters for all supported LLMs. By default, {mta-dl-plugin} is configured to use OpenAI LLM. To change the model, update the anchor `&active` to the desired block. Modify this file with the required arguments, such as the model and API key, to complete the setup. |====
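+
+For example, a minimal `provider-settings.yaml` entry that marks an OpenAI model as the active provider can look like the following sketch. The block name, model, and API key are placeholders; replace them with values for your environment.
+
+[source, yaml]
+----
+models:
+  openai-example: &active
+    environment:
+      OPENAI_API_KEY: "<your-openai-api-key>"
+    provider: "ChatOpenAI"
+    args:
+      model: "gpt-4o"
+----
+
+Only one provider block can carry the `&active` anchor at a time; to switch models, move the anchor to the block that you want to use.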