Welcome to the Predacons CLI! This command-line interface (CLI) allows you to interact with the Predacons library, providing a seamless way to load models, generate responses, and manage configurations directly from your terminal.
- Model Management: Load and manage different types of models including local, Hugging Face safetensor, PyTorch, and GGUF models.
- Interactive Chat: Engage in interactive chat sessions with the loaded model.
- Vector Store: Chat with any document or other unstructured data source through a built-in vector store.
- Web Scraper: Performs Google searches and answers questions based on the search results.
- Configuration Management: Easily create, update, and clear configuration files.
- Rich Output: Utilize rich text formatting for better readability and user experience.
- Logging: Optionally enable logging for debugging and tracking purposes.
To install the Predacons CLI, you need to have Python installed on your system. You can install the required dependencies using pip:
```shell
pip install predacons-cli
```

To start the Predacons CLI, simply run:

```shell
predacons
```

Once the CLI is launched, you can use the following commands:
- `clear`: Clear the chat history.
- `exit`: Quit the CLI.
- `clear-config`: Clear the current configuration file.
- `settings`: Show and update settings.
- `version`: Display the current version of the Predacons CLI.
- `help`: Show help information.
- `update`: Update the documents in the vector db.
```shell
$ predacons
Welcome to the Predacons CLI!
No config file found. Creating one now...
Creating a new configuration file...
Please enter the following details to create a new configuration file
...
Welcome to the Predacons CLI! Model: Precacons/Pico-Lamma-3.2-1B-Reasoning-Instruct loaded successfully!
You can start chatting with Predacons now. Type 'clear' to clear history, Type 'exit' to quit, Type 'help' for more options,
User: Hello!
Predacons: Hi there! How can I assist you today?
```

The configuration file is stored at `~/.predacons_cli/predacon_cli_config.json`. You can update the configuration settings by using the `settings` command within the CLI.
- `model_path`: Path to the model or Hugging Face model name.
- `trust_remote_code`: Boolean to trust remote code.
- `use_fast_generation`: Boolean to enable fast generation.
- `draft_model_name`: Optional draft model name.
- `gguf_file`: Path to the GGUF file.
- `auto_quantize`: Boolean to enable auto quantization.
- `temperature`: Temperature setting for response generation.
- `max_length`: Maximum length for each response.
- `top_k`: Top K value for response generation.
- `top_p`: Top P value for response generation.
- `repetition_penalty`: Repetition penalty value.
- `num_return_sequences`: Number of return sequences.
- `print_as_markdown`: Boolean to print the response as Markdown.
- `chat_with_data`: Enable the vector db.
- `vector_db_path`: Path to the vector store.
- `document_path`: Path to the directory containing the documents.
- `embedding_model`: Embedding model ID or path.
- `scrap_web`: Enable web scraping.
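To illustrate the keys above, a configuration file might look like the following sketch. All values here are hypothetical placeholders chosen for illustration (including the model and embedding-model names), not defaults shipped with the project:

```json
{
  "model_path": "Precacons/Pico-Lamma-3.2-1B-Reasoning-Instruct",
  "trust_remote_code": false,
  "use_fast_generation": false,
  "draft_model_name": null,
  "gguf_file": null,
  "auto_quantize": true,
  "temperature": 0.7,
  "max_length": 512,
  "top_k": 50,
  "top_p": 0.95,
  "repetition_penalty": 1.1,
  "num_return_sequences": 1,
  "print_as_markdown": true,
  "chat_with_data": false,
  "vector_db_path": "~/.predacons_cli/vector_db",
  "document_path": "~/documents",
  "embedding_model": "sentence-transformers/all-MiniLM-L6-v2",
  "scrap_web": false
}
```

In practice you rarely need to edit this file by hand; the `settings` command updates it for you.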
To enable logging, launch the CLI with the --logs flag:
```shell
python cli.py --logs
```

This project is licensed under the MIT License. See the LICENSE file for details.
Contributions are welcome! Please open an issue or submit a pull request on GitHub.
For any questions or support, please open an issue on the GitHub repository.
Enjoy using the Predacons CLI! 🚀