Replies: 25 comments 2 replies
+1

+1

+1
Any updates regarding this? This is a very important feature!
+1

+1

+1

+1
+1. @YanSte, is there any particular reason not to plan this? Could something similar already be achieved by adding additional metadata?
+1
The absence of this feature is the single reason that prevents me from using LightRAG...
@YanSte, is it possible you have changed your mind about this? Or could you provide the rationale for why you do not want to implement this feature?
Do you have any alternatives that you are currently using?
+1

+1

+1
@danielaskdd, could you share your vision on this?
While this is a valuable enhancement, LightRAG has more pressing and critical tasks that require our immediate attention. How the saved meta-content should be delivered to the query initiator needs extensive discussion and clear definition before implementation.
Ensuring source attribution for query results is a comprehensive engineering effort. It encompasses a range of tasks, including recognition of layouts, chapters, images, and tables within the source documents, as well as their subsequent preservation and retrieval. Adding metadata to chunks represents only a minor component of this entire source attribution system. Key ongoing developments for LightRAG include:

Upon the completion of these initiatives, our next major milestone will be the implementation of robust source attribution for query results. We welcome community members who are interested and possess the relevant skills to contribute to the LightRAG project. We encourage you to align your contributions with the development priorities outlined in this roadmap.
PR #2100 adds a new API endpoint, /query/data, designed to return raw retrieval data from the RAG process without LLM generation. This feature is crucial for data analysis and for use cases that require direct access to knowledge-graph retrieval results (entities, relationships, text chunks).
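For readers wanting to consume such an endpoint, here is a minimal sketch of post-processing a /query/data-style response. The endpoint path comes from PR #2100 as described above, but the exact response schema (the keys "entities", "relationships", "chunks") is an assumption for illustration only; check the PR for the real shape.

```python
import json

def summarize_retrieval(response_json: str) -> dict:
    """Count the raw retrieval objects in a /query/data-style response.

    Assumes top-level "entities", "relationships", and "chunks" lists,
    which is a guess based on the PR description, not a documented schema.
    """
    data = json.loads(response_json)
    return {
        "entities": len(data.get("entities", [])),
        "relationships": len(data.get("relationships", [])),
        "chunks": len(data.get("chunks", [])),
    }

# Example payload shaped like the retrieval results described above.
sample = json.dumps({
    "entities": [{"name": "LightRAG"}],
    "relationships": [],
    "chunks": [{"content": "some chunk text", "metadata": {"page": 3}}],
})
print(summarize_retrieval(sample))  # {'entities': 1, 'relationships': 0, 'chunks': 1}
```

A real client would obtain `response_json` from an HTTP POST to the running LightRAG server rather than a literal string.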
+1
I want to create multiple stores through an API, and each store should have its own knowledge graph. Retrieval should also happen using the store ID or name. Is that possible?
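One workaround, sketched below under stated assumptions: keep one LightRAG instance per store, each with its own `working_dir` (and hence its own knowledge graph), behind a small registry keyed by store ID. The `make_rag` factory here is a stand-in for constructing a real `LightRAG(working_dir=...)` instance; nothing in this sketch is an official LightRAG API.

```python
from pathlib import Path

class StoreRegistry:
    """Map store IDs to per-store RAG instances, one working dir each."""

    def __init__(self, root: str, make_rag=lambda wd: f"rag@{wd}"):
        self.root = Path(root)
        self.make_rag = make_rag  # stand-in for: LightRAG(working_dir=wd, ...)
        self._stores = {}

    def get(self, store_id: str):
        """Create the store lazily on first use; reuse it afterwards."""
        if store_id not in self._stores:
            wd = self.root / store_id  # isolated storage per store
            self._stores[store_id] = self.make_rag(str(wd))
        return self._stores[store_id]

reg = StoreRegistry("/tmp/stores")
a = reg.get("tenant-a")
b = reg.get("tenant-b")
assert a != b                      # separate knowledge graphs
assert reg.get("tenant-a") is a    # same instance on repeat lookup
```

An API layer could then route each request's store ID or name through `reg.get(...)` before querying.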
In my opinion, this idea would also be suitable for setting up an RBAC (role-based access control) system.
I am pretty sure that all the work required for citing sources can be done by preprocessing the documents before they are given to LightRAG for indexing, so this doesn't seem like a problem for the LightRAG development team. I work mostly with videos, and LightRAG is able to cite all the metadata, such as video title, video channel, video upload date, and video URL, when it responds to my queries. This information is extracted from YouTube into a JSON file when processing the audio to create a transcript; then rag.ainsert_custom_kg is used to insert these into the index. It also identifies who is speaking, cites the timestamp, and provides a link to the source video cued up to the moment where the information is sourced.

The trick is to add the labels for who is speaking and the timestamps for each line in the transcript before the transcript is indexed by LightRAG. I have scripts that do all this work before LightRAG ever sees the transcripts. Please let me know if I have missed something.
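A minimal sketch of the preprocessing described above: label each transcript line with speaker and a timestamped link, then bundle the video metadata into a custom-KG payload. The payload keys shown ("chunks", "content", "source_id") are assumptions about what rag.ainsert_custom_kg expects; verify against the LightRAG source before relying on them, and note the metadata values here are invented examples.

```python
def label_transcript(lines, video_url):
    """Prefix every transcript line with speaker and a timestamped seek link."""
    out = []
    for speaker, seconds, text in lines:
        link = f"{video_url}&t={seconds}s"  # YouTube-style seek URL
        out.append(f"[{speaker} @ {seconds}s]({link}) {text}")
    return "\n".join(out)

# Invented example metadata, standing in for the JSON extracted from YouTube.
meta = {
    "title": "Demo talk",
    "channel": "ExampleChannel",
    "upload_date": "2024-01-01",
    "url": "https://www.youtube.com/watch?v=VIDEO_ID",
}
chunk_text = label_transcript(
    [("Alice", 12, "Welcome everyone."), ("Bob", 95, "Thanks, Alice.")],
    meta["url"],
)
# Assumed payload shape, later passed to: await rag.ainsert_custom_kg(custom_kg)
custom_kg = {
    "chunks": [{
        "content": f"{meta['title']} ({meta['channel']}, "
                   f"{meta['upload_date']})\n{chunk_text}",
        "source_id": meta["url"],
    }],
}
print(custom_kg["chunks"][0]["content"].splitlines()[1])
```

Because the speaker labels and links live inside the chunk text itself, the LLM can quote them back verbatim when answering.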
+1
It would be great to add metadata (page number, document title, etc.) to the chunks and retrieve it along with the answer in JSON format for reference purposes. It would be a great check for grounding, and providing references along with the answers (especially for a larger set of documents) is classic RAG functionality, useful for follow-up actions and documentation.
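The idea above can be sketched without any LightRAG-specific support, assuming a simple home-grown convention: prepend a one-line JSON metadata header to each chunk before indexing, then strip it back out when building the references for an answer. The `HEADER` marker and both helper functions are hypothetical names, not part of any library.

```python
import json

HEADER = "METADATA: "  # hypothetical convention, not a LightRAG feature

def pack_chunk(text: str, meta: dict) -> str:
    """Prepend a one-line JSON metadata header to a chunk."""
    return HEADER + json.dumps(meta, sort_keys=True) + "\n" + text

def unpack_chunk(chunk: str):
    """Split a packed chunk back into (metadata, body) for references."""
    header, _, body = chunk.partition("\n")
    meta = json.loads(header[len(HEADER):]) if header.startswith(HEADER) else {}
    return meta, body

packed = pack_chunk("The quarterly results were strong.",
                    {"page": 7, "title": "Report"})
meta, body = unpack_chunk(packed)
print(meta)  # {'page': 7, 'title': 'Report'}
```

The recovered `meta` dicts could then be collected into the JSON reference list the post asks for, one entry per retrieved chunk.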