diff --git a/assemblies/developer-lightspeed-guide/assembly_solution-server-configurations.adoc b/assemblies/developer-lightspeed-guide/assembly_solution-server-configurations.adoc
new file mode 100644
index 00000000..93c20d74
--- /dev/null
+++ b/assemblies/developer-lightspeed-guide/assembly_solution-server-configurations.adoc
@@ -0,0 +1,28 @@
+:_newdoc-version: 2.18.3
+:_template-generated: 2025-03-17
+
+ifdef::context[:parent-context-of-solution-server-configurations: {context}]
+
+:_mod-docs-content-type: ASSEMBLY
+
+ifndef::context[]
+[id="solution-server-configurations"]
+endif::[]
+ifdef::context[]
+[id="solution-server-configurations_{context}"]
+endif::[]
+= Solution Server configurations
+:context: solution-server-configurations
+
+[role=_abstract]
+The Solution Server is a component that enables {mta-dl-plugin} to build a collective memory of code changes from all analyses performed in an organization. In the local context of an analysis run in Visual Studio Code (VS Code), the Solution Server matches issues in the current file against previous patterns of how code was changed to resolve similar issues (also called solved examples), and suggests a resolution with a higher confidence level derived from those previous solutions.
+
+Because {mta-dl-plugin} is an optional set of features in {ProductShortName}, you must complete the following configurations before you can access the settings necessary for AI analysis.
+
+include::topics/developer-lightspeed/proc_tackle-llm-secret.adoc[leveloffset=+1]
+
+include::topics/developer-lightspeed/proc_tackle-enable-dev-lightspeed.adoc[leveloffset=+1]
+
+ifdef::parent-context-of-solution-server-configurations[:context: {parent-context-of-solution-server-configurations}]
+ifndef::parent-context-of-solution-server-configurations[:!context:]
+
diff --git a/assemblies/developer-lightspeed-guide/topics b/assemblies/developer-lightspeed-guide/topics
new file mode 120000
index 00000000..6ff2545a
--- /dev/null
+++ b/assemblies/developer-lightspeed-guide/topics
@@ -0,0 +1 @@
+/home/pkylasam/Documents/repos/mta-documentation/docs/topics
\ No newline at end of file
diff --git a/docs/developer-lightspeed-guide/assemblies b/docs/developer-lightspeed-guide/assemblies
new file mode 120000
index 00000000..51bb5102
--- /dev/null
+++ b/docs/developer-lightspeed-guide/assemblies
@@ -0,0 +1 @@
+../../assemblies/
\ No newline at end of file
diff --git a/docs/developer-lightspeed-guide/master-docinfo.xml b/docs/developer-lightspeed-guide/master-docinfo.xml
new file mode 100644
index 00000000..ecda6ac2
--- /dev/null
+++ b/docs/developer-lightspeed-guide/master-docinfo.xml
@@ -0,0 +1,13 @@
+<title>MTA with Developer Lightspeed Guide</title>
+<productname>{DocInfoProductName}</productname>
+<productnumber>{DocInfoProductNumber}</productnumber>
+<subtitle>Identify and resolve migration issues by running AI-driven analysis with {mta-dl-plugin}.</subtitle>
+<abstract>
+    <para>
+        This guide describes how to use {mta-dl-plugin} to accelerate large-scale application modernization efforts across hybrid cloud environments on Red Hat OpenShift.
+    </para>
+</abstract>
+<authorgroup>
+    <orgname>Red Hat Customer Content Services</orgname>
+</authorgroup>
+
diff --git a/docs/developer-lightspeed-guide/master.adoc b/docs/developer-lightspeed-guide/master.adoc
new file mode 100644
index 00000000..dfddaa47
--- /dev/null
+++ b/docs/developer-lightspeed-guide/master.adoc
@@ -0,0 +1,21 @@
+:mta:
+include::topics/templates/document-attributes.adoc[]
+:_mod-docs-content-type: ASSEMBLY
+[id="mta-developer-lightspeed"]
+= MTA with Developer Lightspeed
+
+:toc:
+:toclevels: 4
+:numbered:
+:imagesdir: topics/images
+:context: mta-developer-lightspeed
+:mta-developer-lightspeed:
+
+//Inclusive language statement
+include::topics/making-open-source-more-inclusive.adoc[]
+
+
+
+include::assemblies/developer-lightspeed-guide/assembly_solution-server-configurations.adoc[leveloffset=+1]
+
+:!mta-developer-lightspeed:
diff --git a/docs/developer-lightspeed-guide/topics b/docs/developer-lightspeed-guide/topics
new file mode 120000
index 00000000..5983d929
--- /dev/null
+++ b/docs/developer-lightspeed-guide/topics
@@ -0,0 +1 @@
+../topics
\ No newline at end of file
diff --git a/docs/topics/developer-lightspeed/proc_tackle-enable-dev-lightspeed.adoc b/docs/topics/developer-lightspeed/proc_tackle-enable-dev-lightspeed.adoc
new file mode 100644
index 00000000..1d9d2bcb
--- /dev/null
+++ b/docs/topics/developer-lightspeed/proc_tackle-enable-dev-lightspeed.adoc
@@ -0,0 +1,63 @@
+:_newdoc-version: 2.15.0
+:_template-generated: 2024-2-21
+:_mod-docs-content-type: PROCEDURE
+
+[id="tackle-enable-dev-lightspeed_{context}"]
+= Enabling {mta-dl-plugin} in the Tackle custom resource
+
+[role="_abstract"]
+The Solution Server integrates with the {ProductShortName} Hub backend component to use the database and volumes that are necessary to store and retrieve solved examples.
+
+To enable Solution Server and other AI configurations in the {ProductShortName} VS Code extension, you must modify the Tackle custom resource (CR) with additional parameters.
+
+.Prerequisites
+
+//the hard link must be changed to the same topic in 8.0.0 that has the `{mta-dl-plugin}-database` req.
+* To use {mta-dl-plugin}, you deployed an additional RWO volume for `{mta-dl-plugin}-database`. See link:https://docs.redhat.com/en/documentation/migration_toolkit_for_applications/7.3/html/user_interface_guide/mta-7-installing-web-console-on-openshift_user-interface-guide#openshift-persistent-volume-requirements_user-interface-guide[Persistent volume requirements] for more information.
+
+* You installed the {ProductShortName} Operator v8.0.0.
+
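+For reference, an RWO claim that satisfies the database volume prerequisite might look like the following sketch. The claim name is an illustrative assumption, not the final resource name; the linked persistent volume requirements describe the values that your deployment expects.
+
+[source, yaml]
+----
+# Hypothetical example; the claim name is a placeholder.
+kind: PersistentVolumeClaim
+apiVersion: v1
+metadata:
+  name: mta-dl-plugin-database
+  namespace: openshift-mta
+spec:
+  accessModes:
+    - ReadWriteOnce
+  resources:
+    requests:
+      storage: 5Gi
+----
+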
+
+.Procedure
+
+. Log in to the {ocp-short} cluster and switch to the `openshift-mta` project.
+
+. Edit the Tackle custom resource (CR) settings in the `tackle_hub.yaml` file to enable {mta-dl-plugin}, and then enter applicable values for the `kai_llm_provider` and `kai_llm_model` variables.
++
+[source, yaml]
+----
+---
+kind: Tackle
+apiVersion: tackle.konveyor.io/v1alpha1
+metadata:
+ name: mta
+ namespace: openshift-mta
+spec:
+ kai_solution_server_enabled: true
+ kai_llm_provider: ChatIBMGenAI
+ # optional, pick a suitable model for your provider
+ kai_llm_model:
+...
+----
++
+//How about Amazon Bedrock, Deepseek, and Azure OpenAI?
+[NOTE]
+====
+The default large language model (LLM) provider is `ChatIBMGenAI`. You can also enter `google` or `openai` as the provider.
+====
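++
+For example, a hypothetical OpenAI-compatible configuration might set the following values in the same `spec` section. The model name shown here is an illustrative assumption, not a recommendation:
++
+[source, yaml]
+----
+  kai_llm_provider: openai
+  kai_llm_model: gpt-4o  # placeholder; pick a model that your provider supports
+----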
+
+. Apply the Tackle CR by running the following command:
++
+[source, terminal]
+----
+kubectl apply -f tackle_hub.yaml
+----
+
+.Verification
+
+. Verify the {mta-dl-plugin} resources by entering the following command:
++
+[source, terminal]
+----
+kubectl get deploy,svc -n openshift-mta | grep -E 'kai-(api|db|importer)'
+----
\ No newline at end of file
diff --git a/docs/topics/developer-lightspeed/proc_tackle-llm-secret.adoc b/docs/topics/developer-lightspeed/proc_tackle-llm-secret.adoc
new file mode 100644
index 00000000..79aa1b55
--- /dev/null
+++ b/docs/topics/developer-lightspeed/proc_tackle-llm-secret.adoc
@@ -0,0 +1,47 @@
+:_newdoc-version: 2.15.0
+:_template-generated: 2024-2-21
+:_mod-docs-content-type: PROCEDURE
+
+[id="tackle-llm-secret_{context}"]
+= Configuring the model secret key
+
+
+[role="_abstract"]
+You must configure the Kubernetes secret for the large language model (LLM) provider in the {ocp-short} project where you installed the {ProductShortName} operator.
+
+.Procedure
+
+. Create a credentials secret named `kai-api-keys` in the `openshift-mta` project.
+.. For the default IBM GenAI provider, `ChatIBMGenAI`, type:
++
+[source, terminal]
+----
+kubectl create secret generic kai-api-keys -n openshift-mta \
+--from-literal=genai_key=''
+----
+
+.. For Google as the provider, type:
++
+[source, terminal]
+----
+kubectl create secret generic kai-api-keys -n openshift-mta \
+ --from-literal=google_key=''
+----
+
+.. For OpenAI-compatible providers, type:
++
+[source, terminal]
+----
+kubectl create secret generic kai-api-keys -n openshift-mta \
+ --from-literal=api_base='https://api.openai.com/v1' \
+ --from-literal=api_key=''
+----
+
+. (Optional) Force a reconcile so that the {ProductShortName} Operator picks up the secret immediately:
++
+//Is the double tackle needed in the command?
+[source, terminal]
+----
+kubectl patch tackle tackle -n openshift-mta --type=merge -p \
+'{"metadata":{"annotations":{"konveyor.io/force-reconcile":"'"$(date +%s)"'"}}}'
+----
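+
+.Verification
+
+. Confirm that the secret exists and contains the expected keys. This is a suggested check; the key names in the output depend on the provider that you configured. The `jsonpath` expression prints the base64-encoded data fields:
++
+[source, terminal]
+----
+kubectl get secret kai-api-keys -n openshift-mta -o jsonpath='{.data}'
+----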
\ No newline at end of file
diff --git a/docs/topics/mta-7-installing-web-console-on-openshift.adoc b/docs/topics/mta-7-installing-web-console-on-openshift.adoc
index 806870b7..68bcac44 100644
--- a/docs/topics/mta-7-installing-web-console-on-openshift.adoc
+++ b/docs/topics/mta-7-installing-web-console-on-openshift.adoc
@@ -48,6 +48,12 @@ To successfully deploy, the {ProductShortName} Operator requires 2 RWO persisten
|`100Gi`
|RWX
|Maven m2 cache; required if the `rwx_supported` configuration option is set to `true`
+
+|`{mta-dl-plugin} database`
+|`5Gi`
+|RWO
+|{mta-dl-plugin} database; required to run an AI-driven analysis
+//verify the final d/s name of this resource.
|====
[id="installing-mta-operator-and-ui_{context}"]
diff --git a/docs/topics/templates/document-attributes.adoc b/docs/topics/templates/document-attributes.adoc
index 67cef78d..8ca87e93 100644
--- a/docs/topics/templates/document-attributes.adoc
+++ b/docs/topics/templates/document-attributes.adoc
@@ -28,6 +28,7 @@ ifdef::mta[]
:WebConsoleBookName: {WebNameTitle} Guide
:ProductVersion: 7.3.1
:PluginName: MTA plugin
+:mta-dl-plugin: MTA with Developer Lightspeed
// :MavenProductVersion: 7.0.0.GA-redhat-00001
:ProductDistributionVersion: 7.3.1.GA-redhat
:ProductDistribution: mta-7.3.1.GA-cli-offline.zip