A reliable distributed scheduler with pluggable storage backends for Async Python.
- Free software: MIT license
Minimal installation (just SQLite persistence):

```shell
pip install pyncette
```

Full installation (all the backends and Prometheus metrics exporter):

```shell
pip install pyncette[all]
```

You can also install the in-development version with:

```shell
pip install https://github.com/tibordp/pyncette/archive/master.zip
```

Documentation is available at https://tibordp.github.io/pyncette/
Simple in-memory scheduler (does not persist state):

```python
from pyncette import Pyncette, Context

app = Pyncette()

@app.task(schedule="* * * * *")
async def foo(context: Context):
    print("This will run every minute")

if __name__ == "__main__":
    app.main()
```

Persistent distributed cron using Redis (coordinates execution with parallel instances and survives restarts):
```python
from pyncette import Pyncette, Context
from pyncette.redis import redis_repository

app = Pyncette(repository_factory=redis_repository, redis_url="redis://localhost")

@app.task(schedule="* * * * * */10")
async def foo(context: Context):
    print("This will run every 10 seconds")

if __name__ == "__main__":
    app.main()
```

See the examples directory for more examples of usage.
Pyncette is designed for reliable (at-least-once or at-most-once) execution of recurring tasks (think cronjobs) whose lifecycles are managed dynamically, but it can work effectively for non-recurring tasks too.
Example use cases:
- You want to perform a database backup every day at noon
- You want a report to be generated daily for your 10M users at the time of their choosing
- You want currency conversion rates to be refreshed every 10 seconds
- You want to allow your users to schedule non-recurring emails to be sent at an arbitrary time in the future
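To make the last use case concrete, here is a hand-rolled, non-durable sketch of one-off delayed execution using only `asyncio` (all names here are illustrative, not Pyncette API). Its state lives only in process memory, so a crash or restart loses the scheduled work; persistent, dynamically scheduled tasks are the gap a durable scheduler like Pyncette fills:

```python
import asyncio
from datetime import datetime, timedelta, timezone

async def schedule_at(when: datetime, coro_factory):
    """Sleep until `when`, then run the given coroutine factory once.

    Non-durable: if the process dies before `when`, the task is lost.
    """
    delay = (when - datetime.now(timezone.utc)).total_seconds()
    await asyncio.sleep(max(delay, 0))
    await coro_factory()

async def send_email():
    print("email sent")

async def main():
    # Schedule a one-off "email" a fraction of a second in the future.
    when = datetime.now(timezone.utc) + timedelta(seconds=0.1)
    await schedule_at(when, send_email)

if __name__ == "__main__":
    asyncio.run(main())
```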
Pyncette might not be a good fit if:
- You want your tasks to be scheduled to run (ideally) once, as soon as possible. This is doable, but you would be better served by a general-purpose reliable queue like RabbitMQ or Amazon SQS.
- You need tasks to execute at sub-second intervals with low jitter. Pyncette coordinates execution on a per-task-instance basis, and this coordination can add overhead and jitter.
Pyncette comes with an implementation for the following backends (used for persistence and coordination) out-of-the-box:
- SQLite (included)
- Redis (`pip install pyncette[redis]`)
- PostgreSQL (`pip install pyncette[postgres]`)
- MySQL 8.0+ (`pip install pyncette[mysql]`)
- MongoDB (`pip install pyncette[mongodb]`)
- Amazon DynamoDB (`pip install pyncette[dynamodb]`)
Pyncette imposes few requirements on the underlying datastores, so it can be extended to support other databases or custom storage formats / integrations with existing systems. For best results, the backend needs to provide:
- Some sort of serialization mechanism, e.g. traditional transactions, atomic stored procedures or compare-and-swap
- Efficient range queries over a secondary index, which can be eventually consistent
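To illustrate the first requirement, here is a minimal compare-and-swap sketch using stdlib SQLite. The schema and function names are invented for illustration and are not Pyncette's actual storage format or repository interface; the point is only that a single atomic conditional update is enough to ensure that exactly one worker claims a task:

```python
import sqlite3
import uuid
from typing import Optional

# Illustrative schema: a task row carries a lease column. A worker claims
# the task by swapping in its own token only if no lease is currently held
# (a compare-and-swap with expected value NULL).
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE tasks (name TEXT PRIMARY KEY, locked_by TEXT)")
conn.execute("INSERT INTO tasks VALUES ('backup', NULL)")

def try_acquire(conn: sqlite3.Connection, task_name: str) -> Optional[str]:
    """Attempt to lease a task; returns a lease token on success, else None."""
    token = str(uuid.uuid4())
    cur = conn.execute(
        "UPDATE tasks SET locked_by = ? WHERE name = ? AND locked_by IS NULL",
        (token, task_name),
    )
    # rowcount == 1 means the conditional update won; 0 means another
    # worker already holds the lease.
    return token if cur.rowcount == 1 else None

first = try_acquire(conn, "backup")   # succeeds: returns a token
second = try_acquire(conn, "backup")  # lease already held: returns None
```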
Install uv for fast package management:

```shell
curl -LsSf https://astral.sh/uv/install.sh | sh
```

Sync dependencies and install the package in editable mode:

```shell
uv sync --extra all
```

Run the unit tests (fast, no external dependencies):

```shell
uv run pytest -m "not integration" tests
```

The integration tests require Redis, PostgreSQL, MySQL, MongoDB, and DynamoDB. Using Docker Compose to set up all backends:

```shell
docker-compose up -d
docker-compose run --rm shell
uv run pytest tests
```

Or manually, with the services running locally:

```shell
uv run pytest tests
```

Test on a specific Python version:

```shell
uv venv --python 3.11
uv sync --extra all
uv run pytest tests
```

Run linting and type checking:

```shell
uv run pre-commit run --all-files
uv run ty check src examples
```

Build the documentation:

```shell
uv run mkdocs build
# Or serve locally with live reload
uv run mkdocs serve
```

Build the package:

```shell
uv build
```