A lightweight, performance-focused Obsidian plugin that displays the estimated token count for LLMs (like GPT-4, Claude, Gemini) in your status bar.
Designed to be unobtrusive and efficient, it helps you manage context windows before sending your notes to AI tools.
- 🚀 Zero Lag: Uses a debounced update strategy (updates 500ms after you stop typing) to ensure your typing experience remains buttery smooth, even in large files.
- 📊 Real-time Status Bar: Integrates seamlessly into the Obsidian footer alongside word count and backlinks.
- ⚙️ Configurable Models: Switch between different tokenization strategies (GPT-4, GPT-3.5, Legacy).
- 🔋 Battery Friendly: Only calculates when necessary.
- Download `main.js`, `manifest.json`, and `styles.css` from the latest Release.
- Create a folder named `obsidian-llm-token-counter` in your vault's `.obsidian/plugins/` directory.
- Move the downloaded files into that folder.
- Reload Obsidian and enable the plugin in Community Plugins settings.
- Clone this repository.
- Run `npm install` to install dependencies.
- Run `npm run build` to compile the plugin.
- Copy the files to your plugin directory.
Once enabled, look at the bottom right of your Obsidian window. You will see a counter like:
1.2k tokens
- Hover over the counter to see which model encoding is currently active.
- Click (future feature) to copy the count or open settings.
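A compact display like `1.2k tokens` could be produced as sketched below. The ~4-characters-per-token ratio is a common rough heuristic for English text, not the plugin's actual tokenizer, and both function names are assumptions:

```typescript
// Rough token estimate: ~4 characters per token for English prose.
// A model-specific tokenizer would replace this heuristic.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Format the count for the status bar, e.g. 1234 -> "1.2k tokens".
function formatTokens(count: number): string {
  if (count >= 1000) {
    return `${(count / 1000).toFixed(1)}k tokens`;
  }
  return `${count} tokens`;
}
```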
- Basic Structure & Status Bar Integration
- Debounce Logic for Performance
- Integration with `js-tiktoken` for 100% accurate GPT-4 counting
- Support for Claude (Anthropic) and Gemini (Google) tokenizers
- "Warning Threshold" setting (turn red when approaching 8k/32k tokens)
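The planned "Warning Threshold" could work along these lines: classify the current count against a context-window limit so the status bar can switch colour as the limit approaches. The names and the 90% cutoff are assumptions for illustration:

```typescript
type CounterState = "ok" | "warning";

// Flag the counter once the estimate reaches 90% of the model's
// context window (e.g. 8k or 32k tokens). The cutoff would be
// user-configurable in the actual setting.
function counterState(tokens: number, limit: number): CounterState {
  return tokens >= limit * 0.9 ? "warning" : "ok";
}
```

The returned state would map to a CSS class (e.g. turning the status-bar text red) rather than being used directly.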
Contributions are welcome! Please see DOCUMENTATION.md for technical details on how the plugin is architected.
MIT License. See LICENSE for more information.
