Predacons CLI

Welcome to the Predacons CLI! This command-line interface (CLI) allows you to interact with the Predacons library, providing a seamless way to load models, generate responses, and manage configurations directly from your terminal.

Features

  • Model Management: Load and manage different types of models including local, Hugging Face safetensor, PyTorch, and GGUF models.
  • Interactive Chat: Engage in interactive chat sessions with the loaded model.
  • Vector Store: Chat with any document or unstructured data source through a built-in vector store.
  • Web Scraper: Performs Google searches and answers questions based on the search results.
  • Configuration Management: Easily create, update, and clear configuration files.
  • Rich Output: Utilize rich text formatting for better readability and user experience.
  • Logging: Optionally enable logging for debugging and tracking purposes.

Installation

To install the Predacons CLI, you need Python installed on your system. Install it with pip:

pip install predacons-cli

Usage

Launching the CLI

To start the Predacons CLI, simply run:

predacons

Commands

Once the CLI is launched, you can use the following commands:

  • clear: Clear the chat history.
  • exit: Quit the CLI.
  • clear-config: Clear the current configuration file.
  • settings: Show and update settings.
  • version: Display the current version of the Predacons CLI.
  • help: Show help information.
  • update: Update the vector database with the latest documents.

Example Session

$ predacons
Welcome to the Predacons CLI!
No config file found. Creating one now...
Creating a new configuration file...
Please enter the following details to create a new configuration file
...
Welcome to the Predacons CLI! Model: Precacons/Pico-Lamma-3.2-1B-Reasoning-Instruct loaded successfully!
You can start chatting with Predacons now. Type 'clear' to clear history, 'exit' to quit, or 'help' for more options.
User: Hello!
Predacons: Hi there! How can I assist you today?

Configuration

The configuration file is stored at ~/.predacons_cli/predacon_cli_config.json. You can update the configuration settings by using the settings command within the CLI.

Configuration Options

  • model_path: Path to the model or Hugging Face model name.
  • trust_remote_code: Boolean to trust remote code.
  • use_fast_generation: Boolean to enable fast generation.
  • draft_model_name: Optional draft model name.
  • gguf_file: Path to the GGUF file.
  • auto_quantize: Boolean to enable auto quantization.
  • temperature: Temperature setting for response generation.
  • max_length: Maximum length for each response.
  • top_k: Top K value for response generation.
  • top_p: Top P value for response generation.
  • repetition_penalty: Repetition penalty value.
  • num_return_sequences: Number of return sequences.
  • print_as_markdown: Boolean to print response as markdown.
  • chat_with_data: Boolean to enable chatting with your documents via the vector database.
  • vector_db_path: Path to the vector store.
  • document_path: Path to the directory containing the documents.
  • embedding_model: Embedding model ID or path.
  • scrap_web: Boolean to enable web scraping.
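
The configuration file is plain JSON, so a predacon_cli_config.json using the keys above might look like the following sketch. All values here are illustrative placeholders, not documented defaults; check the file the CLI generates for the exact schema:

```json
{
  "model_path": "Precacons/Pico-Lamma-3.2-1B-Reasoning-Instruct",
  "trust_remote_code": false,
  "use_fast_generation": false,
  "draft_model_name": null,
  "gguf_file": null,
  "auto_quantize": false,
  "temperature": 0.7,
  "max_length": 512,
  "top_k": 50,
  "top_p": 0.95,
  "repetition_penalty": 1.1,
  "num_return_sequences": 1,
  "print_as_markdown": true,
  "chat_with_data": false,
  "vector_db_path": "~/.predacons_cli/vector_db",
  "document_path": "~/my_documents",
  "embedding_model": "sentence-transformers/all-MiniLM-L6-v2",
  "scrap_web": false
}
```

The settings command edits this same file in place, so manual edits and in-CLI updates both land in ~/.predacons_cli/predacon_cli_config.json.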

Logging

To enable logging, launch the CLI with the --logs flag:

predacons --logs

License

This project is licensed under the MIT License. See the LICENSE file for details.

Contributing

Contributions are welcome! Please open an issue or submit a pull request on GitHub.

Contact

For any questions or support, please open an issue on the GitHub repository.


Enjoy using the Predacons CLI! 🚀
