Merged
7 changes: 7 additions & 0 deletions .changeset/calm-eggs-type.md
@@ -0,0 +1,7 @@
---
"@llamaindex/readers": patch
"@llamaindex/core": patch
"@llamaindex/doc": patch
---

Export additional symbols to fix unavailable documentation links, and update the documentation to match the latest code.
10 changes: 7 additions & 3 deletions apps/next/scripts/validate-links.mts
@@ -162,7 +162,12 @@ async function validateLinks(): Promise<LinkValidationResult[]> {
const invalidLinks = links.filter(({ link }) => {
  // Check if the link exists in valid routes
  // First normalize the link (remove any query string or hash)
  const normalizedLink = link.split("#")[0].split("?")[0];
  const baseLink = link.split("?")[0].split("#")[0];
  // Remove the trailing slash if present.
  // This works with links like "api/interfaces/MetadataFilter#operator" and "api/interfaces/MetadataFilter/#operator".
  const normalizedLink = baseLink.endsWith("/")
    ? baseLink.slice(0, -1)
    : baseLink;

  // Remove llamaindex/ prefix if it exists as it's the root of the docs
  let routePath = normalizedLink;
@@ -192,8 +197,7 @@ async function main() {

try {
  // Check for invalid internal links
  const validationResults: LinkValidationResult[] = [];
  await validateLinks();
  const validationResults: LinkValidationResult[] = await validateLinks();
  // Check for relative links
  const relativeLinksResults = await findRelativeLinks();

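The normalization introduced in `validate-links.mts` can be sketched as a standalone helper (the `normalize` name is illustrative, not part of the script):

```typescript
// Illustrative helper mirroring the normalization above:
// strip the query string, then the hash fragment, then any trailing slash.
const normalize = (link: string): string => {
  const base = link.split("?")[0].split("#")[0];
  return base.endsWith("/") ? base.slice(0, -1) : base;
};

console.log(normalize("api/interfaces/MetadataFilter#operator"));  // "api/interfaces/MetadataFilter"
console.log(normalize("api/interfaces/MetadataFilter/#operator")); // "api/interfaces/MetadataFilter"
```

The key change is the trailing-slash trim, which makes `.../MetadataFilter#operator` and `.../MetadataFilter/#operator` resolve to the same route before lookup.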
@@ -35,7 +35,7 @@ Currently, the following readers are mapped to specific file types:

- [TextFileReader](/docs/api/classes/TextFileReader): `.txt`
- [PDFReader](/docs/api/classes/PDFReader): `.pdf`
- [PapaCSVReader](/docs/api/classes/PapaCSVReader): `.csv`
- [CSVReader](/docs/api/classes/CSVReader): `.csv`
- [MarkdownReader](/docs/api/classes/MarkdownReader): `.md`
- [DocxReader](/docs/api/classes/DocxReader): `.docx`
- [HTMLReader](/docs/api/classes/HTMLReader): `.htm`, `.html`
@@ -12,5 +12,5 @@ Check the [LlamaIndexTS Github](https://github.com/run-llama/LlamaIndexTS) for t

## API Reference

- [BaseChatStore](/docs/api/interfaces/BaseChatStore)
- [BaseChatStore](/docs/api/classes/BaseChatStore)

@@ -74,4 +74,4 @@ the response is not correct with a score of 2.5

## API Reference

- [CorrectnessEvaluator](/docs/api/classes/CorrectnessEvaluator)
- [CorrectnessEvaluator](/docs/api/classes/CorrectnessEvaluator)
13 changes: 10 additions & 3 deletions apps/next/src/content/docs/llamaindex/modules/prompt/index.mdx
@@ -28,14 +28,21 @@ Answer:`;

### 1. Customizing the default prompt on initialization

The first method is to create a new instance of `ResponseSynthesizer` (or the module you would like to update the prompt) and pass the custom prompt to the `responseBuilder` parameter. Then, pass the instance to the `asQueryEngine` method of the index.
The first method is to create a response synthesizer (or whichever module's prompt you want to update) using the `getResponseSynthesizer` function. Instead of passing the custom prompt to the deprecated `responseBuilder` parameter, call `getResponseSynthesizer` with the mode as the first argument and supply the new prompt via the options parameter.

```ts
// Create an instance of response synthesizer
// Create an instance of Response Synthesizer

// Deprecated usage:
const responseSynthesizer = new ResponseSynthesizer({
  responseBuilder: new CompactAndRefine(undefined, newTextQaPrompt),
});

// Current usage:
const responseSynthesizer = getResponseSynthesizer('compact', {
  textQATemplate: newTextQaPrompt
})

// Create index
const index = await VectorStoreIndex.fromDocuments([document]);

@@ -75,5 +82,5 @@ const response = await queryEngine.query({

## API Reference

- [ResponseSynthesizer](/docs/api/classes/ResponseSynthesizer)
- [Response Synthesizer](/docs/llamaindex/modules/response_synthesizer)
- [CompactAndRefine](/docs/api/classes/CompactAndRefine)
@@ -1,5 +1,5 @@
---
title: ResponseSynthesizer
title: Response Synthesizer
---

The ResponseSynthesizer is responsible for sending the query, nodes, and prompt templates to the LLM to generate a response. There are a few key modes for generating a response:
@@ -12,15 +12,17 @@ The ResponseSynthesizer is responsible for sending the query, nodes, and prompt
multiple compact prompts. The same as `refine`, but should result in fewer LLM calls.
- `TreeSummarize`: Given a set of text chunks and the query, recursively construct a tree
and return the root node as the response. Good for summarization purposes.
- `SimpleResponseBuilder`: Given a set of text chunks and the query, apply the query to each text
chunk while accumulating the responses into an array. Returns a concatenated string of all
responses. Good for when you need to run the same query separately against each text
chunk.
- `MultiModal`: Combines textual inputs with additional modality-specific metadata to generate an integrated response.
It leverages a text QA template to build a prompt that incorporates various input types and produces either streaming or complete responses.
This approach is ideal for use cases where enriching the answer with multi-modal context (such as images, audio, or other data)
can enhance the output quality.

```typescript
import { NodeWithScore, TextNode, ResponseSynthesizer } from "llamaindex";
import { NodeWithScore, TextNode, getResponseSynthesizer, responseModeSchema } from "llamaindex";

const responseSynthesizer = new ResponseSynthesizer();
// you can also use responseModeSchema.Enum.refine, responseModeSchema.Enum.tree_summarize, responseModeSchema.Enum.multi_modal
// or you can use the CompactAndRefine, Refine, TreeSummarize, or MultiModal classes directly
const responseSynthesizer = getResponseSynthesizer(responseModeSchema.Enum.compact);

const nodesWithScore: NodeWithScore[] = [
  {
@@ -55,8 +57,9 @@ for await (const chunk of stream) {

## API Reference

- [ResponseSynthesizer](/docs/api/classes/ResponseSynthesizer)
- [getResponseSynthesizer](/docs/api/functions/getResponseSynthesizer)
- [responseModeSchema](/docs/api/variables/responseModeSchema)
- [Refine](/docs/api/classes/Refine)
- [CompactAndRefine](/docs/api/classes/CompactAndRefine)
- [TreeSummarize](/docs/api/classes/TreeSummarize)
- [SimpleResponseBuilder](/docs/api/classes/SimpleResponseBuilder)
- [MultiModal](/docs/api/classes/MultiModal)
7 changes: 6 additions & 1 deletion apps/next/typedoc.json
@@ -1,8 +1,13 @@
{
  "plugin": ["typedoc-plugin-markdown", "typedoc-plugin-merge-modules"],
  "entryPoints": ["../../packages/**/src/index.ts"],
  "entryPoints": [
    "../../packages/{,**/}index.ts",
    "../../packages/readers/src/*.ts",
    "../../packages/cloud/src/{reader,utils}.ts"
  ],
  "exclude": [
    "../../packages/autotool/**/src/index.ts",
    "../../packages/cloud/src/client/index.ts",
    "**/node_modules/**",
    "**/dist/**",
    "**/test/**",
10 changes: 5 additions & 5 deletions packages/core/src/response-synthesizers/factory.ts
@@ -23,7 +23,7 @@ import {
} from "./base-synthesizer";
import { createMessageContent } from "./utils";

const responseModeSchema = z.enum([
export const responseModeSchema = z.enum([
  "refine",
  "compact",
  "tree_summarize",
@@ -35,7 +35,7 @@ export type ResponseMode = z.infer<typeof responseModeSchema>;
/**
* A response builder that uses the query to ask the LLM to generate a better response using multiple text chunks.
*/
class Refine extends BaseSynthesizer {
export class Refine extends BaseSynthesizer {
  textQATemplate: TextQAPrompt;
  refineTemplate: RefinePrompt;

@@ -213,7 +213,7 @@ class Refine extends BaseSynthesizer {
/**
* CompactAndRefine is a slight variation of Refine that first compacts the text chunks into the smallest possible number of chunks.
*/
class CompactAndRefine extends Refine {
export class CompactAndRefine extends Refine {
  async getResponse(
    query: MessageContent,
    nodes: NodeWithScore[],
@@ -267,7 +267,7 @@ class CompactAndRefine extends Refine {
/**
* TreeSummarize repacks the text chunks into the smallest possible number of chunks and then summarizes them, then recursively does so until there's one chunk left.
*/
class TreeSummarize extends BaseSynthesizer {
export class TreeSummarize extends BaseSynthesizer {
  summaryTemplate: TreeSummarizePrompt;

  constructor(
@@ -370,7 +370,7 @@ class TreeSummarize extends BaseSynthesizer {
}
}

class MultiModal extends BaseSynthesizer {
export class MultiModal extends BaseSynthesizer {
  metadataMode: MetadataMode;
  textQATemplate: TextQAPrompt;

10 changes: 9 additions & 1 deletion packages/core/src/response-synthesizers/index.ts
@@ -2,7 +2,15 @@ export {
  BaseSynthesizer,
  type BaseSynthesizerOptions,
} from "./base-synthesizer";
export { getResponseSynthesizer, type ResponseMode } from "./factory";
export {
  CompactAndRefine,
  MultiModal,
  Refine,
  TreeSummarize,
  getResponseSynthesizer,
  responseModeSchema,
  type ResponseMode,
} from "./factory";
export type {
  SynthesizeEndEvent,
  SynthesizeQuery,
1 change: 0 additions & 1 deletion packages/readers/package.json
@@ -230,7 +230,6 @@
    "mammoth": "^1.7.2",
    "mongodb": "^6.7.0",
    "notion-md-crawler": "^1.0.0",
    "papaparse": "^5.4.1",
    "unpdf": "^0.12.1"
  }
}
8 changes: 0 additions & 8 deletions pnpm-lock.yaml
