2 changes: 1 addition & 1 deletion docs/developer-lightspeed-guide/master-docinfo.xml
@@ -3,7 +3,7 @@
<productnumber>{DocInfoProductNumber}</productnumber>
<subtitle>Using the {ProductName} Developer Lightspeed to modernize your applications</subtitle>
<abstract>
<para>you can use {ProductFullName} Developer Lightspeed for application modernization in your organization by running Artificial Intelligence-driven static code analysis for Java applications.</para>
<para>By using Developer Lightspeed for Migration Toolkit for Applications (MTA), you can modernize applications in your organization by applying LLM-driven code changes to resolve issues found through static code analysis. You can automate code fixes, and review and apply the suggested code changes with minimal manual effort.</para>
</abstract>
<authorgroup>
<orgname>Red Hat Customer Content Services</orgname>
4 changes: 2 additions & 2 deletions docs/developer-lightspeed-guide/master.adoc
@@ -11,11 +11,11 @@ include::topics/templates/document-attributes.adoc[]
:context: mta-developer-lightspeed
:mta-developer-lightspeed:

//Inclusive language statement
include::topics/making-open-source-more-inclusive.adoc[]

include::topics/developer-lightspeed/con_intro-to-developer-lightspeed.adoc[leveloffset=+1]

include::topics/developer-lightspeed/assembly_getting-started.adoc[leveloffset=+1]

include::topics/developer-lightspeed/assembly_solution-server-configurations.adoc[leveloffset=+1]

include::topics/developer-lightspeed/assembly_configuring_llm.adoc[leveloffset=+1]
@@ -23,6 +23,8 @@ You must configure the following settings in {mta-dl-full}:
include::proc_configuring-developer-lightspeed-ide-settings.adoc[leveloffset=+1]

include::proc_configuring-solution-server-settings-file.adoc[leveloffset=+1]

include::proc_configuring-developer-profile-settings.adoc[leveloffset=+1]

ifdef::parent-context-of-configuring-dev-lightspeed-ide[:context: {parent-context-of-configuring-dev-lightspeed-ide}]
11 changes: 6 additions & 5 deletions docs/topics/developer-lightspeed/assembly_configuring_llm.adoc
@@ -14,9 +14,9 @@ endif::[]
= Configuring large language models for analysis
:context: configuring-llm

To generate suggestions to resolves issues in the code, {mta-dl-plugin} provides the large language model (LLM) with the contextual prompt, migration hints, and solved examples to generate suggestions to resolve issues identified in the current code by running an analysis.
{mta-dl-plugin} provides the large language model (LLM) with the contextual prompt, migration hints, and solved examples to generate suggestions for resolving issues identified in the current code.

{mta-dl-plugin} is designed to be model agnostic. It works with LLMs that are run in different environments (in local containers, as local AI, as a shared service) to support analyzing Java applications in a wide range of scenarios. You can choose an LLM from well-known providers, local models that you run from Ollama or Podman desktop, and OpenAI API compatible models.
{mta-dl-plugin} is designed to be model agnostic. It works with LLMs that are run in different environments (in local containers, as local AI, or as a shared service) to support analyzing Java applications in a wide range of scenarios. You can choose an LLM from well-known providers, local models that you run from Ollama or Podman Desktop, and OpenAI API-compatible models.

The code fix suggestions produced to resolve issues detected through an analysis depend on the LLM's capabilities.

@@ -27,10 +27,11 @@ You can run an LLM from the following generative AI providers:
* Google Gemini
* Amazon Bedrock
* Ollama
* Groq
* Anthropic

You can also run OpenAI API-compatible LLMs deployed as a service in your OpenShift AI cluster or deployed locally in the Podman AI Lab in your system.
You can also run OpenAI API-compatible LLMs that are deployed in the following ways:

* As a service in your {ocp-name} AI cluster
* Locally in the Podman AI Lab on your system
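
To illustrate how a provider from the list above is activated, the following is a minimal sketch of a `provider-settings.yaml` entry for an OpenAI API-compatible model. The structure and field names (`environment`, `provider`, `args`) are assumptions used only for illustration, not the documented schema; use the settings described in the LLM configuration procedures for your release.

[source,yaml]
----
# Illustrative sketch only: the field names and values below are assumed,
# not the documented provider-settings.yaml schema.
openai-example:
  environment:
    OPENAI_API_KEY: "<your-api-key>"  # Keep secrets out of version control.
  provider: "ChatOpenAI"              # Assumed identifier for an OpenAI API-compatible provider.
  args:
    model: "gpt-4o"                   # Replace with the model that your provider serves.
----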

include::con_llm-service-openshift-ai.adoc[leveloffset=+1]

34 changes: 34 additions & 0 deletions docs/topics/developer-lightspeed/assembly_getting-started.adoc
@@ -0,0 +1,34 @@
:_newdoc-version: 2.18.3
:_template-generated: 2025-05-28

ifdef::context[:parent-context-of-getting-started: {context}]

:_mod-docs-content-type: ASSEMBLY

ifndef::context[]
[id="getting-started"]
endif::[]
ifdef::context[]
[id="getting-started_{context}"]
endif::[]
= Getting started with {mta-dl-plugin}

:context: getting-started

[role="_abstract"]
The Getting started section walks you through the prerequisites, persistent volume requirements, installation, and workflows that help you decide how you want to use {mta-dl-full}.

include::con_prerequisites.adoc[leveloffset=+1]

include::con_persistent-volumes.adoc[leveloffset=+1]

include::con_installation.adoc[leveloffset=+1]

include::con_developer-lightspeed-pathways.adoc[leveloffset=+1]

include::ref_example-code-suggestion.adoc[leveloffset=+1]


ifdef::parent-context-of-getting-started[:context: {parent-context-of-getting-started}]
ifndef::parent-context-of-getting-started[:!context:]

@@ -15,26 +15,25 @@ endif::[]
:context: solution-server-configurations

[role=_abstract]
Solution server is a component that allows {mta-dl-plugin} to build a collective memory of code changes from all analysis performed in an organization. Wen you request code fix for issues in the Visual Studio (VS) Code, the Solution Server augments previous patterns of how codes changed to resolve issues (also called solved examples) that were similar to those in the current file, and suggests a resolution that has a higher confidence level derived from previous solutions. After you accept a suggested code fix, the solution server works with the large language model (LLM) to improve the hints about the issue that becomes part of the context. An improved context enables the LLM to generate more reliable code fix suggestions in future cases.
The Solution Server is a component that allows {mta-dl-plugin} to build a collective memory of code changes from all analyses performed in an organization. When you request a code fix for issues in Visual Studio (VS) Code, the Solution Server augments the request with previous patterns of how code was changed to resolve issues similar to those in the current file (also called solved examples), and suggests a resolution with a higher confidence level derived from those previous solutions. After you accept a suggested code fix, the Solution Server works with the large language model (LLM) to improve the hints about the issue that become part of the context. An improved context enables the LLM to generate more reliable code fix suggestions in future cases.

The Solution Server delivers two primary benefits to users:

* *Contextual Hints*: It surfaces examples of past migration solutions — including successful user modifications and accepted fixes — offering actionable hints for difficult or previously unsolved migration problems.
* *Migration Success Metrics*: It exposes detailed success metrics for each migration rule, derived from real-world usage data. These metrics can be used by IDEs or automation tools to present users with a “confidence level” or likelihood of {mta-dl-plugin} successfully migrating a given code segment.

As {mta-dl-plugin} is an optional set of features in {ProductShortName}, you must complete the following configurations before you can access settings necessary to use AI analysis.
The Solution Server is an optional component of {mta-dl-plugin}. You must complete the following configurations before you can request a code resolution.

.Supported large language models and providers
.Configurable large language models and providers in the Tackle custom resource
|===
| LLM Provider (Tackle CR value) | Large language model in Tackle CR
| LLM Provider (Tackle CR value) | Large language model examples for Tackle CR configuration

| {ocp-name} AI platform | Models deployed in an OpenShift AI cluster that can be accessed by using an OpenAI-compatible API
| OpenAI (`openai`) | `gpt-4`, `gpt-4o`, `gpt-4o-mini`, `gpt-3.5-turbo`
| Azure OpenAI (`azure_openai`) | `gpt-4`, `gpt-35-turbo`
| Amazon Bedrock (`bedrock`) | `anthropic.claude-3-5-sonnet-20241022-v2:0`, `meta.llama3-1-70b-instruct-v1:0`
| Google Gemini (`google`) | `gemini-2.0-flash-exp`, `gemini-1.5-pro`
| Ollama (`ollama`) | `llama3.1`, `codellama`, `mistral`
| Groq (`groq`) | `llama-3.1-70b-versatile`, `mixtral-8x7b-32768`
| Anthropic (`anthropic`) | `claude-3-5-sonnet-20241022`, `claude-3-haiku-20240307`

|===
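
For illustration, the following sketch shows how a provider and model from this table might be set in the Tackle custom resource (CR). The keys under `spec` are assumptions used only for illustration; use the keys documented in the Solution Server configuration procedures for your release.

[source,yaml]
----
apiVersion: tackle.konveyor.io/v1alpha1
kind: Tackle
metadata:
  name: tackle
  namespace: openshift-mta
spec:
  # Assumed, illustrative keys for the Solution Server LLM settings.
  kai_llm_provider: openai   # A value from the "LLM Provider (Tackle CR value)" column.
  kai_llm_model: gpt-4o      # An example model for the selected provider.
----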

33 changes: 14 additions & 19 deletions docs/topics/developer-lightspeed/con_developer-lightspeed-logs.adoc
@@ -11,33 +11,28 @@

Extension logs are stored as `extension.log` with automatic rotation. The maximum size of the log file is 10 MB and three files are retained. Analyzer RPC logs are stored as `analyzer.log` without rotation.

[id="dev-lightspeed-access-logs_{context}"]

== Access the logs
[id="dev-lightspeed-archive-logs_{context}"]

You can access the extension logs in the following ways:
== Archiving the logs

* *Command Palette*: Type `Show Logs` or `Open Log` and select `Extension Host`.
To archive the logs as a zip file, type `{ProductShortName}: Generate Debug Archive` in the VS Code Command Palette and select the type of information to include in the archive.

* *Output panel*: Select `Extension Host` from the drop-down menu.
The archive command captures all relevant log files in a zip archive at the location that you specify in your project. By default, the archived logs are saved in the `.vscode` directory of your project.

* *Log file*: Go to `.vscode/mta-logs` directory in your project and open the extension log file.
The archival feature helps you to save the following information:

To access the analyzer log file, go to `.vscode/mta-logs` directory in your project and open the analyzer log file.
* Large language model (LLM) provider configuration: Fields from the provider settings that can be included in the archive. All fields are redacted for security reasons by default. Ensure that you do not expose any secrets.
* LLM model arguments
* LLM traces: If you enabled tracing of LLM interactions, you can choose to include the LLM traces in the archive.

You can also inspect webview content by using the webview logs. To access the webview logs, type `Open Webview Developer Tools` in the VS Code Command Palette.
[id="dev-lightspeed-access-logs_{context}"]

[id="dev-lightspeed-archive-logs_{context}"]
== Accessing the logs

== Archive logs
You can access the logs in the following ways:

The archival feature helps you to save the following information:
* *Log file*: Type `Developer: Open Extension Logs Folder` and open the `redhat.mta-vscode-extension` directory that contains the extension log and the analyzer log.

* Large language model (LLM) provider configuration
* LLM model arguments
* Template configuration
* LLama header
* LLM retry attempts
* LLM retries delay
* *Output panel*: Select `{mta-dl-plugin}` from the drop-down menu.

Type `Generate Debug Archive` in the VS Code Command Palette and select the information type that must be archived as a log file. You can access the tar file of archived logs in the `.vscode` directory of your project.
* *Webview logs*: You can also inspect webview content by using the webview logs. To access the webview logs, type `Open Webview Developer Tools` in the VS Code Command Palette.
@@ -4,24 +4,21 @@
:_mod-docs-content-type: CONCEPT

[id="how-to-use-developer-lightspeed_{context}"]
= Introducing {mta-dl-plugin}
= How to use {mta-dl-plugin}

[role="_abstract"]
Starting with {ProductFullName} 8.0.0, you can run an application analysis by using the {ProductShortName} Visual Studio (VS) Code plug-in. An {ProductShortName} analysis detects issues in the code for a given set of migration targets.

You can opt to use {mta-dl-full} features to request a code fix suggestion. {mta-dl-plugin} augments the manual changes made to code throughout your organization in different migration waves and creates a context that is shared with a large language model (LLM). The LLM suggests code resolutions based on the issue description, context, and previous examples of code changes to resolve issues.

To make code changes by using the LLM, you must enable the generative AI option, along with either the Solution Server mode or the Agent mode. The configurations that you complete before you request code fixes depend on the mode you prefer.
//Is it ok for users to enable all three settings? Gen AI, Solution Server, and Agent mode.
To make code changes by using the LLM, you must enable the generative AI option, along with either the Solution Server or the agent mode. The configurations that you complete before you request code fixes depend on the mode you prefer.

[NOTE]
====
If you make any change after enabling the generative AI settings in the extension, you must restart the extension for the change to take effect.
====

[id="config-to-use-solution-server_{context}"]

== Configurations to use the solution server
To use the solution server for code fix suggestions:

* Enable the solution server in the Tackle custom resource (CR).

@@ -31,9 +28,7 @@ If you make any change after enabling the generative AI settings

* Configure the profile settings for code fixes and activate the LLM provider in the `provider-settings.yaml` file.

[id="config-to-use-agent-mode_{context}"]

== Configurations to use the agent mode
To use the agent mode for code fix suggestions:

* Enable the generative AI and the agent mode in the {mta-dl-plugin} extension settings.
