# Local Mini DeepSeek with Qwen3 & Ollama

This project uses the Qwen3:4B model served by Ollama to build a 100% local, ChatGPT-like app with a hybrid thinking UI built on Streamlit.


## 🚀 Installation and Setup

### 1. Set up Ollama

```shell
# Install Ollama on Linux
curl -fsSL https://ollama.com/install.sh | sh

# Pull the Qwen3:4B model
ollama pull qwen3:4b
```

### 2. Install Python Dependencies

```shell
pip install streamlit ollama
```
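With the dependencies installed, a minimal sketch of how an app like app.py could call the model through the `ollama` Python client. The helper names and the /think soft-switch handling are assumptions for illustration, not necessarily how the author's app.py works:

```python
def build_prompt(prompt: str, think: bool) -> str:
    # Qwen3 supports soft-switch tags inside the prompt: /think asks the
    # model for chain-of-thought, /no_think suppresses it. (Assumption:
    # the app toggles reasoning this way.)
    tag = "/think" if think else "/no_think"
    return f"{tag} {prompt}"


def ask(prompt: str, think: bool = False) -> str:
    # Imported lazily so build_prompt stays usable without the package.
    import ollama  # requires a running Ollama server with qwen3:4b pulled

    response = ollama.chat(
        model="qwen3:4b",
        messages=[{"role": "user", "content": build_prompt(prompt, think)}],
    )
    return response["message"]["content"]
```

For example, `ask("Why is the sky blue?", think=True)` returns the raw reply, including any reasoning the model emits.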

## ▶️ Running the App

```shell
streamlit run app.py
```

## ✨ Features

- **Local inference** using Ollama and Qwen3
- **Hybrid thinking UI**: enable /think mode to reveal chain-of-thought steps
- Toggle reasoning on/off directly in the chat input
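When reasoning is enabled, Qwen3 wraps its chain-of-thought in `<think>...</think>` tags, so a hybrid UI can render the reasoning and the answer separately. A sketch of how that split could be done (the function name is illustrative, not taken from the repository):

```python
import re

# Qwen3 emits its reasoning between <think> and </think> tags.
THINK_RE = re.compile(r"<think>(.*?)</think>", re.DOTALL)


def split_thinking(text: str) -> tuple[str, str]:
    """Separate the <think>...</think> block from the final answer."""
    match = THINK_RE.search(text)
    if match is None:
        # No reasoning block: the whole reply is the answer.
        return "", text.strip()
    thoughts = match.group(1).strip()
    answer = (text[: match.start()] + text[match.end() :]).strip()
    return thoughts, answer
```

In a Streamlit app, the first element of the pair could go into a collapsible expander and the second into the normal chat bubble.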

## 🗂 Project Structure

```
.
├── assets/
│   ├── logo_qwen3.png
│   └── ollama.jpg
├── app.py
└── README.md
```

πŸ“ Usage

  1. Launch the app with streamlit run app.py.
  2. Toggle Enable step-by-step reasoning 🧠 at the bottom.
  3. Type your question in the input box and press Enter.
  4. View answer with or without chain-of-thought.
