Releases: JetXu-LLM/llama-github
llama-github v0.1.3
We're excited to announce the release of llama-github v0.1.3! This version introduces performance optimizations and flexibility improvements, enhancing the overall user experience.
What's New
Simple Mode for Faster Initialization
- Introduced `simple_mode` parameter in `GithubRAG` class initialization
- Skip loading of embedding and reranker models when `simple_mode` is enabled, significantly reducing startup time
Enhanced Context Retrieval
- Updated `retrieve_context` method to support `simple_mode` functionality
- Added flexibility to override `simple_mode` on a per-call basis in `retrieve_context`
Improvements
- Faster initialization process when `simple_mode` is enabled
- More flexible usage of `simple_mode` in context retrieval operations
- Updated documentation to reflect new features and usage
Developer Notes
- When using `simple_mode=True`, be aware that embedding and reranking functionalities will not be available
- The `retrieve_context` method now uses a late-binding approach for the `simple_mode` parameter
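The late-binding behavior can be illustrated with a minimal sketch (class internals are simplified and illustrative, not the library's actual implementation): the per-call argument defaults to `None` and falls back to the instance-level setting only when the caller does not override it.

```python
class GithubRAG:
    """Simplified sketch of late binding for simple_mode (illustrative only)."""

    def __init__(self, simple_mode=False):
        self.simple_mode = simple_mode  # instance-level default

    def retrieve_context(self, query, simple_mode=None):
        # Late binding: resolve the effective mode at call time,
        # preferring the per-call override over the instance default.
        effective = self.simple_mode if simple_mode is None else simple_mode
        mode = "simple" if effective else "professional"
        return f"[{mode}] context for: {query}"


rag = GithubRAG(simple_mode=True)
rag.retrieve_context("How to sort a list?")                     # uses instance default
rag.retrieve_context("How to sort a list?", simple_mode=False)  # per-call override
```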
Installation
To install or upgrade to the latest version, run:
pip install --upgrade llama-github
llama-github v0.1.2 - PR Content Retrieval Enhancement
What's New
- Added `get_pr_content` method to `Repository` class
- Implemented singleton pattern for efficient PR data caching
- Enhanced support for LLM-assisted PR analysis
Features
- Comprehensive PR data retrieval including metadata, file changes, and comments
- Automatic caching to reduce API calls and improve performance
- Threaded comment and review retrieval
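The caching behavior described above can be sketched as a thread-safe pool that hands out one shared `Repository` object per repository name, with PR data cached per instance. This is an illustrative pattern under assumed names, not llama-github's actual implementation:

```python
import threading


class Repository:
    """Illustrative repository object that caches fetched PR content."""

    def __init__(self, full_name):
        self.full_name = full_name
        self._pr_cache = {}  # PR number -> fetched content

    def get_pr_content(self, number, fetch=lambda n: f"PR #{n} data"):
        # Cache PR data so repeated calls avoid extra GitHub API requests.
        if number not in self._pr_cache:
            self._pr_cache[number] = fetch(number)
        return self._pr_cache[number]


class RepositoryPool:
    """Thread-safe singleton pool: one cached Repository per full name."""

    _lock = threading.Lock()
    _cache = {}

    @classmethod
    def get_repository(cls, full_name):
        with cls._lock:
            if full_name not in cls._cache:
                cls._cache[full_name] = Repository(full_name)
            return cls._cache[full_name]


a = RepositoryPool.get_repository("JetXu-LLM/llama-github")
b = RepositoryPool.get_repository("JetXu-LLM/llama-github")
assert a is b  # same cached instance across lookups
```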
Usage
```python
from llama_github import GithubRAG

github_rag = GithubRAG(github_access_token=github_access_token)
repo = github_rag.RepositoryPool.get_repository("JetXu-LLM/llama-github")
pr_content = repo.get_pr_content(number=15)
```
llama-github v0.1.1
We're excited to announce the release of llama-github v0.1.1! This version brings significant improvements and new features to enhance your GitHub-based LLM interactions.
What's New
New Features 🚀
- Direct Answer Generation: Implemented `answer_with_context` method for streamlined response creation (closes #6)
- Mistral AI Support: Added integration with the Mistral AI LLM provider, expanding our LLM ecosystem
- Enhanced Context Retrieval: `retrieve_context` function now includes metadata (e.g., URLs) with each context string (closes #2)
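Conceptually, direct answer generation combines the retrieved context strings (each now carrying metadata such as its source URL) with the user's question before calling the LLM. A minimal sketch of that prompt-assembly step, where `build_prompt` is a hypothetical helper and not part of the library's API:

```python
def build_prompt(query, contexts):
    """Assemble a grounded prompt from (text, url) context pairs.

    Hypothetical helper illustrating the pattern; llama-github's
    answer_with_context handles this step internally.
    """
    sections = [f"[Source: {url}]\n{text}" for text, url in contexts]
    context_block = "\n\n".join(sections)
    return (
        "Answer the question using only the context below.\n\n"
        f"{context_block}\n\nQuestion: {query}\nAnswer:"
    )


prompt = build_prompt(
    "How to create a NumPy array?",
    [("Use numpy.array([1, 2, 3]).", "https://github.com/numpy/numpy")],
)
```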
Improvements 🛠️
- Advanced Reranking: Upgraded to jina-reranker-v2 for more accurate context retrieval
- Expanded LLM Support: Broadened compatibility for diverse use cases
- Overall Enhancements: Refined the context retrieval process for better performance
Bug Fixes 🐛
- Resolved warning during context retrieval process (closes #3)
Upgrading
To upgrade to this version, run:
pip install --upgrade llama-github==0.1.1
Llama-github Initial Release v0.1.0
We are excited to announce the initial release of llama-github!
Llama-github is an open-source Python library that empowers LLM Chatbots, AI Agents, and Auto-dev Agents to conduct Retrieval from actively selected GitHub public content (repositories, issues, code), Augment it through LLMs, and Generate context for any coding question, streamlining the development of sophisticated AI-driven applications.
Highlights:
- Intelligent GitHub Retrieval: Retrieve highly relevant code snippets, issues, and repository information from GitHub based on user queries.
- Repository Pool Caching: Innovative caching mechanism that stores repositories across threads, significantly accelerating GitHub search retrieval efficiency and minimizing GitHub API token consumption.
- LLM-Powered Question Analysis: Leverage state-of-the-art language models to analyze user questions and generate effective search strategies and criteria.
- Comprehensive Context Generation: Generate rich, contextually relevant answers by combining information from GitHub with advanced language model reasoning capabilities.
- Asynchronous Processing Excellence: Handle multiple requests concurrently with meticulously implemented asynchronous mechanisms, boosting overall performance.
- Flexible LLM Integration: Easily integrate with various LLM providers, embedding models, and reranking models to customize the library's capabilities.
- Robust Authentication Options: Support both personal access tokens and GitHub App authentication for flexible integration into different development setups.
- Logging and Error Handling: Equipped with comprehensive logging and error handling mechanisms for smooth operations and easy troubleshooting.
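The concurrent handling described above follows the standard asyncio fan-out pattern; a minimal sketch with a hypothetical fetch coroutine (not the library's internals):

```python
import asyncio


async def fetch_source(name):
    """Hypothetical stand-in for one GitHub retrieval task."""
    await asyncio.sleep(0.01)  # simulate network latency
    return f"results from {name}"


async def retrieve_all(sources):
    # Fan out all retrieval tasks and await them concurrently,
    # so total latency is roughly that of the slowest single task.
    return await asyncio.gather(*(fetch_source(s) for s in sources))


results = asyncio.run(retrieve_all(["repositories", "issues", "code"]))
```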
Installation
pip install llama-github
Usage
Here's a simple example of how to use llama-github:
```python
from llama_github import GithubRAG

# Initialize GithubRAG with your credentials
github_rag = GithubRAG(
    github_access_token="your_github_access_token",
    openai_api_key="your_openai_api_key",  # Optional in Simple Mode
    jina_api_key="your_jina_api_key"  # Optional - needed only for high-concurrency production deployments (the s.jina.ai API will be used in llama-github)
)

# Retrieve context for a coding question (simple_mode defaults to False)
query = "How to create a NumPy array in Python?"
context = github_rag.retrieve_context(
    query,  # In professional mode, one query takes nearly 1 minute to generate the final contexts. You can set the log level to INFO to monitor retrieval progress.
    # simple_mode=True
)
```
### Documentation:
For detailed documentation, please visit our [GitHub repository](https://github.com/JetXu-LLM/llama-github).
### Feedback:
We welcome feedback and contributions from the community. Please feel free to open issues or pull requests.
Thank you for using **llama-github**!