A bash shell script to run a single prompt against any or all of your locally installed ollama models, saving the output and performance statistics as easily navigable web pages.
Useful for comparing model responses, tracking performance, and documenting your local AI experiments.
Demo: https://attogram.github.io/ai_test_zone/
Repo: https://github.com/attogram/ollama-multirun
For help and discussions, please join the Attogram Discord Channel.
- Batch Processing: Run a single prompt across all models listed by `ollama list`.
- Comprehensive HTML Reports: Generates a full web-based report for each run, including:
  - An index page summarizing all model outputs and key statistics.
  - Dedicated pages for each model's output with raw text and stats.
  - Links for easy navigation between models and runs.
- Detailed Statistics: Captures and displays `ollama run --verbose` output, including:
  - Total duration
  - Load duration
  - Prompt evaluation count, duration, and rate
  - Generation evaluation count, duration, and rate
- Prompt Persistence: Saves the original prompt as a plain text file (`prompt.txt`) and in GitHub's prompt YAML format (`.prompt.yaml`) for easy re-use and documentation.
- Clean Slate: Automatically clears and stops Ollama models between runs for consistent results.
- Flexible Prompt Input:
  - Interactive prompt entry (default behavior) (e.g., `./multirun.sh`).
  - From the command line (e.g., `./multirun.sh "my prompt"`).
  - From a file (e.g., `./multirun.sh < prompt.txt`).
  - From a pipe (e.g., `echo "my prompt" | ./multirun.sh`).
- Safe Naming: Generates sanitized, timestamped directories for each run to keep your results organized.
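The "safe naming" step can be sketched as follows. This is a hypothetical illustration, not the script's actual code; the real sanitization rules, tag length, and timestamp format in multirun.sh may differ:

```shell
# Hypothetical sketch of safe naming: squeeze non-alphanumeric characters
# to underscores, truncate, and append a timestamp.
prompt="What's the capital of France?"
tag=$(printf '%s' "$prompt" | tr -cs '[:alnum:]' '_' | cut -c1-30)
dir="results/${tag}_$(date +%Y%m%d-%H%M%S)"
echo "$dir"
```

This yields a directory name like `results/What_s_the_capital_of_France__20250601-123456`, safe for any filesystem and unique per run.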
Before running ollama-multirun, ensure you have the following installed:
- ollama: The core large language model runner.
- bash: The default shell on most Linux and macOS systems.
- expect: For interacting with Ollama's `run` command (`sudo apt-get install expect` on Debian/Ubuntu, `brew install expect` on macOS).
- Standard Unix Utilities: `awk`, `sed`, `tr`, `wc` (typically pre-installed).
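To confirm the prerequisites are on your PATH before a first run, a quick check (a hypothetical helper, not part of the repository):

```shell
# Report any dependencies of multirun.sh that are missing from PATH
missing=""
for cmd in ollama bash expect awk sed tr wc; do
  command -v "$cmd" >/dev/null 2>&1 || missing="$missing $cmd"
done
if [ -n "$missing" ]; then
  echo "Missing:$missing"
else
  echo "All dependencies found"
fi
```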
- Clone the Repository:
  ```shell
  git clone https://github.com/attogram/ollama-multirun.git
  cd ollama-multirun
  ```
  Or just copy the latest version from: https://raw.githubusercontent.com/attogram/ollama-multirun/refs/heads/main/multirun.sh
- Make Executable:
  ```shell
  chmod +x multirun.sh
  ```
- Pull Some Ollama Models: If you don't have any models yet, you'll need to download them:
  ```shell
  ollama pull llama2
  ollama pull mistral
  ollama pull phi3
  # etc.
  ```
Run the script from your terminal. The results will be saved in a new directory inside the `results/` folder.
- Enter prompt interactively (default):
  ```shell
  ./multirun.sh
  ```
- Enter prompt from the command line:
  ```shell
  ./multirun.sh "Your prompt here"
  ./multirun.sh "Summarize this document: $(cat document.txt)"
  ```
- Enter prompt from a file:
  ```shell
  ./multirun.sh < my_prompt.txt
  ```
- Enter prompt from a pipe:
  ```shell
  echo "Your prompt here" | ./multirun.sh
  echo "Summarize this document: $(cat document.txt)" | ./multirun.sh
  ```
- Specify models to run: Use the -m option with a comma-separated list of model names (no spaces):
  ```shell
  ./multirun.sh -m model1name,model2name
  ```
- Include images:
  ```shell
  ./multirun.sh "Describe this image: ./vision/image.jpg"
  ```
- Specify response timeout: Use the -t option with the timeout given in seconds:
  ```shell
  ./multirun.sh -t 60
  ```
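The options above can presumably be combined with a command-line prompt in a single invocation; this is not shown explicitly in the examples, so verify the flag handling against multirun.sh itself:

```shell
# Hypothetical combined invocation: two specific models, a 120-second
# timeout, and a prompt on the command line (verify against multirun.sh)
./multirun.sh -m llama2,mistral -t 120 "Your prompt here"
```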
Once the script finishes, it will print the path to your newly created results directory (e.g., `results/your_prompt_tag_20250601-123456/`). Navigate to this directory and open `index.html` in your web browser to view the generated reports.
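To jump straight to the newest report, one convenience (a hypothetical snippet, assuming the run directories under `results/` sort newest-first by modification time):

```shell
# Open the most recent run's index page (macOS `open`; falls back
# to xdg-open on Linux desktops)
latest=$(ls -dt results/*/ | head -n 1)
open "${latest}index.html" 2>/dev/null || xdg-open "${latest}index.html"
```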
```
ollama-multirun/
├── multirun.sh                          # The main script
└── results/                             # Directory where all output runs are stored
    ├── index.html                       # Global index of all past runs
    ├── models.html                      # Global index of all models, with links to past run results
    └── your_prompt_tag_YYYYMMDD-HHMMSS/ # A directory for each specific run
        ├── index.html                   # HTML summary page for this run
        ├── models.html                  # HTML models info summary
        ├── prompt.txt                   # The raw prompt used
        ├── tag.prompt.yaml              # Prompt in GitHub YAML format
        ├── image.jpg                    # If an image was included in the prompt, it is saved here
        ├── model1.html                  # HTML page for model1's output and stats
        ├── model1.output.txt            # Raw text output from model1
        ├── model1.thinking.txt          # Raw thinking output from model1 (thinking models only)
        ├── model1.stats.txt             # Raw stats from model1
        ├── model1.info.txt              # Raw info from model1
        ├── model2.html                  # ... and so on for each model
        └── ...
```
We welcome contributions! Whether it's a bug report, a feature suggestion, or a code change, your input is valuable.
- Fork the repository.
- Clone your forked repository:
  ```shell
  git clone https://github.com/YOUR_USERNAME/ollama-multirun.git
  cd ollama-multirun
  ```
- Create a new branch: `git checkout -b feature/your-feature-name`
- Make your changes.
- Test your changes thoroughly.
- Commit your changes: `git commit -m "feat: Add a new feature (e.g., --output-json option)"`
- Push to your branch: `git push origin feature/your-feature-name`
- Open a Pull Request on the original repository, detailing your changes.
If you encounter any bugs or have feature requests, please open an issue on the GitHub Issues page.
This project is licensed under the MIT License - see the LICENSE file for details.
- Ollama for making local LLMs accessible.
- The open-source community for inspiration and tools.
| Project | Github Repo | Description |
|---|---|---|
| Multirun | ollama-multirun | Run a prompt against all, or some, of your models running on Ollama. Creates web pages with the output, performance statistics, and model info. All in a single Bash shell script. |
| Toolshed | ollama-bash-toolshed | Chat with tool-calling models. Sample tools included. Add new tools to your shed with ease. Runs on Ollama. All via Bash shell scripts. |
| LLM Council | llm-council | Start a chat room between all, or some, of your models running on Ollama. All in a single Bash shell script. |
| Ollama Bash Lib | ollama-bash-lib | A Bash library to interact with Ollama. |
| Small Models | small-models | Comparison of small open source LLMs (8B parameters or less). |
| AI Test Zone | ai_test_zone | Test results hosted on https://attogram.github.io/ai_test_zone/ |
Thanks goes to these wonderful people (emoji key):
- Attogram Project
- Yodo9000
- Ollama
- drumnix
This project follows the all-contributors specification. Contributions of any kind welcome!