This project pairs the Qwen3:4B model with Ollama inference to create a 100% local ChatGPT-like app, with a hybrid thinking UI built on Streamlit.
Install Ollama and pull the model:

```bash
# Install Ollama on Linux
curl -fsSL https://ollama.com/install.sh | sh

# Pull the Qwen3:4B model
ollama pull qwen3:4b
```

Install the Python dependencies, then launch the app:

```bash
pip install streamlit ollama
streamlit run app.py
```

- Local inference using Ollama and Qwen3
- Hybrid Thinking UI: enable /think mode to reveal chain-of-thought steps
- Toggle reasoning on/off directly in the chat input
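The reasoning toggle can be wired to Qwen3's soft-switch tags, which turn thinking on or off per user turn. A minimal sketch of that mapping, assuming `build_messages` as a hypothetical helper (not code from `app.py`):

```python
# Hypothetical helper: map the UI toggle onto Qwen3's /think and
# /no_think soft switches by tagging the latest user message.

def build_messages(history, prompt, thinking_enabled):
    """Append the new user turn, tagged with Qwen3's soft switch."""
    switch = "/think" if thinking_enabled else "/no_think"
    return history + [{"role": "user", "content": f"{prompt} {switch}"}]


if __name__ == "__main__":
    msgs = build_messages([], "Why is the sky blue?", thinking_enabled=True)
    print(msgs[-1]["content"])  # → "Why is the sky blue? /think"
```

The resulting messages list can be passed straight to `ollama.chat(model="qwen3:4b", messages=...)`.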
```
.
├── assets/
│   ├── logo_qwen3.png
│   └── ollama.jpg
├── app.py
└── README.md
```

- Launch the app with `streamlit run app.py`.
- Toggle **Enable step-by-step reasoning 🧠** at the bottom.
- Type your question in the input box and press Enter.
- View the answer with or without the chain-of-thought.
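Showing the answer with or without the chain-of-thought comes down to splitting the raw completion: Qwen3 wraps its reasoning in `<think>...</think>` tags. A sketch of that split, assuming `split_thinking` as a hypothetical helper name:

```python
import re


def split_thinking(text):
    """Return (thinking, answer) from a raw Qwen3 completion.

    Qwen3 emits its reasoning inside <think>...</think>; everything
    after the closing tag is the final answer. If no tag is present
    (e.g. /no_think mode), the whole text is the answer.
    """
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if not match:
        return "", text.strip()
    return match.group(1).strip(), text[match.end():].strip()
```

In the UI, the first element can be rendered inside a collapsible `st.expander` when the reasoning toggle is on, and discarded otherwise.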