
ollama-multirun


A Bash shell script to run a single prompt against any or all of your locally installed Ollama models, saving the output and performance statistics as easily navigable web pages.

Useful for comparing model responses, tracking performance, and documenting your local AI experiments.

Demo: https://attogram.github.io/ai_test_zone/

Repo: https://github.com/attogram/ollama-multirun

For help and discussions, please join the Attogram Discord Channel.


Screenshots

Run index page:

(Screenshot of the run index page)

Model output page:

(Screenshot of a model output page)

✨ Features

  • Batch Processing: Run a single prompt across all models listed by ollama list.
  • Comprehensive HTML Reports: Generates a full web-based report for each run, including:
    • An index page summarizing all model outputs and key statistics.
    • Dedicated pages for each model's output with raw text and stats.
    • Links for easy navigation between models and runs.
  • Detailed Statistics: Captures and displays ollama run --verbose output, including:
    • Total duration
    • Load duration
    • Prompt evaluation count, duration, and rate
    • Generation evaluation count, duration, and rate
  • Prompt Persistence: Saves the original prompt as a plain text file (prompt.txt) and in GitHub's prompt YAML format (.prompt.yaml) for easy re-use and documentation (see the replay example after this list).
  • Clean Slate: Automatically clears and stops Ollama models between runs for consistent results.
  • Flexible Prompt Input:
    • Interactive prompt entry (default behavior) (e.g., ./multirun.sh).
    • From the command line (e.g., ./multirun.sh "my prompt").
    • From a file (e.g., ./multirun.sh < prompt.txt).
    • From a pipe (e.g., echo "my prompt" | ./multirun.sh).
  • Safe Naming: Generates sanitized, timestamped directories for each run to keep your results organized.
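
Since prompt.txt is saved with every run and the script accepts prompts from a file, you can replay an earlier run's prompt against your current models (the run directory name below is illustrative):

    ./multirun.sh < results/your_prompt_tag_20250601-123456/prompt.txt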

🚀 Getting Started

Prerequisites

Before running ollama-multirun, ensure you have the following installed (a quick check script follows this list):

  • ollama: The core large language model runner.
  • bash: The default shell on most Linux and macOS systems.
  • expect: For interacting with Ollama's run command (sudo apt-get install expect on Debian/Ubuntu, brew install expect on macOS).
  • Standard Unix Utilities: awk, sed, tr, wc (typically pre-installed).
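
A minimal preflight check, assuming only that each required command should be on your PATH:

    for cmd in ollama bash expect awk sed tr wc; do
        command -v "$cmd" >/dev/null 2>&1 || echo "missing: $cmd"
    done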

Installation

  1. Clone the Repository:

    git clone https://github.com/attogram/ollama-multirun.git
    cd ollama-multirun

    or just copy the latest version from: https://raw.githubusercontent.com/attogram/ollama-multirun/refs/heads/main/multirun.sh

  2. Make Executable:

    chmod +x multirun.sh
  3. Pull Some Ollama Models: If you don't have any models yet, you'll need to download them:

    ollama pull llama2
    ollama pull mistral
    ollama pull phi3
    # etc.
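
To confirm which models the script will pick up, list them (multirun runs against the models reported by ollama list):

    ollama list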

💡 Usage

Run the script from your terminal. The results will be saved in a new directory inside the results/ folder.

  • Enter prompt interactively (default):

    ./multirun.sh
  • Enter prompt from the command line:

    ./multirun.sh "Your prompt here"
    ./multirun.sh "Summarize this document: $(cat document.txt)"
  • Enter prompt from a file:

    ./multirun.sh < my_prompt.txt
  • Enter prompt from a pipe:

    echo "Your prompt here" | ./multirun.sh
    echo "Summarize this document: $(cat document.txt)" | ./multirun.sh
  • Specify models to run: Use the -m option, with a comma-separated list of model names (no spaces):

    ./multirun.sh -m model1name,model2name
  • Include images:

    ./multirun.sh "Describe this image: ./vision/image.jpg"
  • Specify response timeout: Use the -t option, with the timeout specified in seconds (options can be combined; see the example after this list):

    ./multirun.sh -t 60
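
The -m and -t options should also combine with a command-line prompt. A sketch, assuming the flags can be given together in one invocation:

    ./multirun.sh -m model1name,model2name -t 120 "Your prompt here"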

After Running

Once the script finishes, it will print the path to your newly created results directory (e.g., results/your_prompt_tag_20250601-123456/).

Navigate to this directory and open index.html in your web browser to view the generated reports.
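
For example, to open the run's index page straight from the terminal (using the run directory from the example above):

    open results/your_prompt_tag_20250601-123456/index.html       # macOS
    xdg-open results/your_prompt_tag_20250601-123456/index.html   # most Linux desktops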

📂 Results Structure

ollama-multirun/
├── multirun.sh             # The main script
└── results/                # Directory where all output runs are stored
    ├── index.html          # Global index of all past runs
    ├── models.html         # Global index of all models, with links to past run results
    └── your_prompt_tag_YYYYMMDD-HHMMSS/ # A directory for each specific run
        ├── index.html          # HTML summary page for this run
        ├── models.html         # HTML models info summary
        ├── prompt.txt          # The raw prompt used
        ├── tag.prompt.yaml     # Prompt in GitHub YAML format
        ├── image.jpg           # If an image was included in the prompt, it is saved here
        ├── model1.html         # HTML page for model1's output and stats
        ├── model1.output.txt   # Raw text output from model1
        ├── model1.thinking.txt # Raw thinking text output from model1 (thinking models only)
        ├── model1.stats.txt    # Raw stats from model1
        ├── model1.info.txt     # Raw info from model1
        ├── model2.html         # ... and so on for each model
        └── ...
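
Because each model's raw output is a separate *.output.txt file, quick shell comparisons are straightforward. A minimal sketch (run directory name illustrative) that prints each model's word count:

    for f in results/your_prompt_tag_20250601-123456/*.output.txt; do
        printf '%s: %s words\n' "$(basename "$f" .output.txt)" "$(wc -w < "$f")"
    done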

🛠️ Development & Contribution

We welcome contributions! Whether it's a bug report, a feature suggestion, or a code change, your input is valuable.

  • Fork the repository.
  • Clone your forked repository: git clone https://github.com/YOUR_USERNAME/ollama-multirun.git
  • Change into the project directory: cd ollama-multirun
  • Create a new branch: git checkout -b feature/your-feature-name
  • Make your changes.
  • Test your changes thoroughly.
  • Commit your changes: git commit -m "feat: Add a new feature (e.g., --output-json option)"
  • Push to your branch: git push origin feature/your-feature-name
  • Open a Pull Request on the original repository, detailing your changes.

Reporting Issues

If you encounter any bugs or have feature requests, please open an issue on the GitHub Issues page.

📜 License

This project is licensed under the MIT License - see the LICENSE file for details.

πŸ™ Acknowledgements

  • Ollama for making local LLMs accessible.
  • The open-source community for inspiration and tools.

More AI from the Attogram Project

  • Multirun (ollama-multirun): Run a prompt against all, or some, of your models running on Ollama. Creates web pages with the output, performance statistics and model info. All in a single Bash shell script.
  • Toolshed (ollama-bash-toolshed): Chat with tool-calling models. Sample tools included. Add new tools to your shed with ease. Runs on Ollama. All via Bash shell scripts.
  • LLM Council (llm-council): Start a chat room between all, or some, of your models running on Ollama. All in a single Bash shell script.
  • Ollama Bash Lib (ollama-bash-lib): A Bash library to interact with Ollama.
  • Small Models (small-models): Comparison of small open source LLMs (8B parameters or less).
  • AI Test Zone (ai_test_zone): Test results hosted on https://attogram.github.io/ai_test_zone/

Contributors ✨

Thanks goes to these wonderful people (emoji key):

  • Attogram Project: 💻
  • Yodo9000: 🐛
  • Ollama: 🔧
  • drumnix: 🐛

This project follows the all-contributors specification. Contributions of any kind welcome!
