A lightweight Docker service that bridges Jira Service Management (JSM / OpsGenie) alerts to Home Assistant — with smart on-call routing, escalation detection, rich TTS announcements, and persistent dashboard notifications.
```text
JSM alert created / escalated
        │
        ▼
jsm-ha-notifier (Docker)
        │
        ├─ Parse alert payload
        ├─ Deduplicate (suppress retries within 60 s)
        ├─ Route decision:
        │     always_notify mode?  → NOTIFY
        │     escalated to me?     → NOTIFY
        │     I'm on-call?         → NOTIFY (JSM API, cached 5 min)
        │     none of the above    → DROP
        │
        ▼
Home Assistant REST API
        ├─ media_player.play_media (TTS with rich metadata / real alert title)
        └─ persistent_notification (visible in HA dashboard)
```

On Acknowledge / Close → the persistent notification is dismissed automatically.
Two webhook URLs, one for each routing mode:

| JSM Webhook URL | Behaviour |
|---|---|
| `https://your-host:8080/alert?key=YOUR_KEY` | Notify only when on-call |
| `https://your-host:8080/alert?mode=always&key=YOUR_KEY` | Always notify regardless of schedule |
- On-call aware — queries JSM in real time and caches results; only wakes you when you are actually on-call
- Escalation detection — `EscalateNext` events always notify regardless of on-call status or dedup window
- Always-notify mode — a separate webhook path for schedules that should always page you (e.g. infrastructure monitors)
- Rich TTS — spoken announcements include priority, alert title, system name, and a description excerpt
- Real media player title — uses `extra.metadata` so HA shows the actual alert title instead of "Playing Default Media Receiver"
- Persistent HA notifications — created on alert, auto-dismissed on Acknowledge or Close
- Configurable announcement formats — customise the detailed and terse TTS templates with placeholders
- Time-based quiet hours — silent windows (no TTS) and terse windows (short format), with cross-midnight support
- Priority override for silent mode — P1/P2 alerts can bypass silent windows so critical incidents always wake you
- Per-media-player routing — route TTS to different speakers by time of day (e.g. bedroom at night, office during the day)
- Volume control — set media player volume before TTS playback, with separate levels for full and terse modes
- Alert batching — combine multiple alerts arriving within a configurable window into one TTS announcement
- TTS repeat (pager mode) — repeat TTS at intervals for critical alerts until acknowledged or max repeats hit
- Acknowledge from HA — `POST /alert/{id}/acknowledge` endpoint lets HA automations ack alerts without opening JSM
- Token health check — daily background job verifies the Atlassian API token; fires an HA TTS warning if expired (TTS suppressed during quiet hours; persistent notification still created)
- Deep health check — `GET /healthz` verifies both JSM and HA API connectivity (returns 503 if either fails)
- Startup connectivity checks — verifies JSM and HA reachability at boot, logs warnings if unreachable
- HA automation webhooks — fire HA webhook triggers on Create, Escalate, Acknowledge, Close, Update, and SLA Breach events to control lights, scenes, scripts
- Incident state dashboard — optional SQLite-backed `GET /incidents` API with status/priority filters, summary endpoint, and Grafana JSON datasource compatibility
- JSM incident sync — optional background task to poll JSM for open alerts and keep the incident dashboard current
- Emoji toggle — `ENABLE_EMOJIS=false` strips all emojis from notifications, metadata, and incoming alert text
- Generic webhook support — any system that sends HTTP POST (Grafana, Uptime Kuma, shell scripts, HA automations) can trigger HA alerts
- API key authentication — optional API key via query parameter (`?key=`), HTTP header (`X-API-Key`), or URL path prefix (`/KEY/endpoint`)
- Webhook signature verification — optional HMAC-SHA256 validation via `X-Hub-Signature-256`
- Request body size limit — rejects payloads over 1 MB to prevent memory exhaustion
- Safe format templates — user-configurable announcement formats use a restricted formatter that blocks attribute/index access
- Prometheus metrics — `GET /metrics` exposes alert counters, credential check stats, rate limit hits, and uptime for Grafana/Prometheus dashboards
- Structured JSON logging — `LOG_FORMAT=json` for Datadog, Loki, CloudWatch, ELK; default is human-readable text
- Hot config reload — `POST /reload` re-reads `.env` and applies changes without container restart
- Per-IP rate limiting — 60 requests/minute on `/alert` to prevent webhook abuse
- Secure container — non-root user, read-only filesystem, tmpfs at `/tmp`, localhost-only port binding
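The cross-midnight quiet-hours logic described above can be sketched in a few lines. This is an illustrative helper under assumed semantics (half-open window, cross-midnight when start > end), not the service's actual code:

```python
from datetime import time

def in_window(now: time, start: time, end: time) -> bool:
    """Return True if `now` falls inside [start, end), supporting
    windows that cross midnight (e.g. 22:00-07:00)."""
    if start <= end:
        # Same-day window, e.g. 09:00-17:00
        return start <= now < end
    # Cross-midnight window, e.g. 22:00-07:00
    return now >= start or now < end

# Example: a 22:00-07:00 silent window
print(in_window(time(23, 30), time(22, 0), time(7, 0)))  # True
print(in_window(time(12, 0), time(22, 0), time(7, 0)))   # False
```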
- Docker + Docker Compose
- A server or device accessible from the internet (or from JSM's webhook delivery IPs)
- Home Assistant with a Long-Lived Access Token and a TTS service configured
- An Atlassian API token with access to JSM Ops (OpsGenie) schedules
```bash
git clone https://github.com/RealDougEubanks/JSM-HomeAssistant-Notifier.git
cd JSM-HomeAssistant-Notifier
cp .env.example .env
# Edit .env and fill in all required values (see Configuration below)
docker compose up -d
docker compose logs -f
```

Verify the service is running:

```bash
curl http://localhost:8080/health
# {"status":"ok"}
```

To build the Docker image locally instead of pulling from GHCR:
```bash
docker build -t ghcr.io/realdougeubanks/jsm-ha-notifier:latest .
docker compose up -d
docker compose logs -f
```

This tags the local build with the same image name the compose file expects.
If you've made code changes and Docker serves a cached layer, force a clean rebuild:
```bash
docker build --no-cache -t ghcr.io/realdougeubanks/jsm-ha-notifier:latest .
docker compose up -d
docker compose logs -f
```

Note: `.env` changes do not require a rebuild — just restart the container with `docker compose up -d`.
```bash
cp .env.example .env
```

Open `.env` and fill in each value. The file is fully commented with instructions for finding each value. The sections below expand on the key ones.
Your Cloud ID is a UUID that identifies your Atlassian organisation. Retrieve it with:
```bash
curl -s -u "you@yourcompany.com:YOUR_API_TOKEN" \
  https://your-org.atlassian.net/_edge/tenant_info \
  | python3 -m json.tool
```

Look for the `"cloudId"` field. Copy it into `JSM_CLOUD_ID`.
Your account ID (JSM_MY_USER_ID) is the UUID Atlassian uses internally for your user. The easiest way to find it:
```bash
curl -s -u "you@yourcompany.com:YOUR_API_TOKEN" \
  "https://api.atlassian.com/jsm/ops/api/YOUR_CLOUD_ID/v1/schedules/YOUR_SCHEDULE_ID/on-calls" \
  | python3 -m json.tool
```

Find your name in the `onCallParticipants` array; the `"id"` field is your account ID.
Schedule names are case-sensitive. List all schedules visible to your token:
```bash
curl -s -u "you@yourcompany.com:YOUR_API_TOKEN" \
  "https://api.atlassian.com/jsm/ops/api/YOUR_CLOUD_ID/v1/schedules" \
  | python3 -m json.tool | grep '"name"'
```

Copy the exact names into `ALWAYS_NOTIFY_SCHEDULE_NAMES` and/or `CHECK_ONCALL_SCHEDULE_NAMES` in `.env`.
- In Home Assistant, click your profile picture (bottom-left)
- Scroll to Security → Long-Lived Access Tokens
- Click Create token, give it a descriptive name (e.g. `JSM Notifier`)
- Copy the token into `HA_TOKEN` in `.env` — it is only shown once
In Home Assistant go to Developer Tools → States, filter by media_player. Copy the entity_id (e.g. media_player.living_room) into HA_MEDIA_PLAYER_ENTITY.
Once the container is running, check on-call status directly:
```bash
curl http://localhost:8080/status | python3 -m json.tool
```

You should see your schedules listed and an `on_call` field. If a schedule shows `"error": "not found"`, the name in `.env` doesn't match — compare carefully against the output of the schedule-listing curl above.
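The 5-minute on-call cache behaves like a simple TTL map: a lookup within the TTL is served from memory, an expired entry forces a fresh JSM API call. An illustrative sketch (the service's internal cache may differ):

```python
import time

class TTLCache:
    """Minimal TTL cache sketch for on-call lookups (illustrative only)."""

    def __init__(self, ttl_seconds: float = 300):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires = entry
        if time.monotonic() >= expires:
            del self._store[key]  # expired: evict and report a miss
            return None
        return value

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

cache = TTLCache(ttl_seconds=300)
cache.set("My Schedule", True)   # cache an on-call lookup result
print(cache.get("My Schedule"))  # True (served from cache within the TTL)
```

`POST /cache/invalidate` corresponds to clearing this map so the next alert triggers a fresh lookup.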
JSM's servers need to reach your webhook URL over the internet.
WARNING: Do not expose this container directly to the internet. The service runs plain HTTP without TLS. All traffic — including API keys, webhook payloads, and authentication tokens — is transmitted in cleartext. Always place a TLS-terminating proxy or tunnel in front of the service. Direct internet exposure risks credential interception, replay attacks, and unauthorized access to your Home Assistant instance.
A Cloudflare Tunnel creates an encrypted outbound connection from your network to Cloudflare's edge, with no inbound ports to open on your router or firewall. Cloudflare handles TLS termination and DDoS protection automatically.
For setup instructions, see the Cloudflare Tunnel documentation.
Run a TLS-terminating reverse proxy on the same host and forward traffic to http://127.0.0.1:8080. Detailed reverse proxy configuration is outside the scope of this README — consult your proxy's documentation for TLS certificate setup (e.g. Let's Encrypt via Certbot or Caddy's automatic HTTPS).
Configure two outgoing webhooks in JSM Ops — one for on-call schedules and one for always-notify schedules.
JSM project → Settings → Integrations → Add Integration → choose Webhook (under "Outgoing").
| Field | Value |
|---|---|
| Name | HA Notifier — On-Call |
| Webhook URL | https://your-host/alert?key=YOUR_API_KEY |
| Method | POST |
| Send alert payload | ✅ Enabled |
| Alert actions | Create, EscalateNext, Acknowledge, Close |
| Teams / Schedules filter | Your on-call schedule's team |
| Field | Value |
|---|---|
| Name | HA Notifier — Always Notify |
| Webhook URL | https://your-host/alert?mode=always&key=YOUR_API_KEY |
| Method | POST |
| Send alert payload | ✅ Enabled |
| Alert actions | Create, EscalateNext, Acknowledge, Close |
| Teams / Schedules filter | Your always-notify team/schedule |
The simplest way to secure your webhook endpoints. Set WEBHOOK_API_KEY in .env and pass the key using any of these methods:
| Method | Example | Best for |
|---|---|---|
| Query parameter | `https://your-host/alert?key=YOUR_KEY` | JSM webhooks (URL-only config) |
| Path prefix | `https://your-host/YOUR_KEY/alert` | Tools that can't add headers or query params |
| HTTP header | `X-API-Key: YOUR_KEY` | Scripts, HA automations, Grafana |

All three methods work on every authenticated endpoint. Generate a key: `openssl rand -hex 32`
Requests without a valid key are rejected — the service returns `404 Not Found` rather than `401` so that authenticated endpoints can't be discovered by probing (see Troubleshooting).
For additional security (or as an alternative to API keys), set WEBHOOK_SECRET in .env and add a custom header to each JSM webhook:
| Header name | Value |
|---|---|
| `X-Hub-Signature-256` | `sha256={{ hmac_sha256(body, "YOUR_SECRET") }}` |
You can use both WEBHOOK_API_KEY and WEBHOOK_SECRET together for defense in depth.
Check the Atlassian JSM documentation for the exact Jinja/template syntax supported in your version's outgoing webhook headers.
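Server-side, the signature check amounts to recomputing the HMAC over the raw request body and comparing in constant time. A sketch of that logic (illustrative, not the service's actual code):

```python
import hashlib
import hmac

def verify_signature(secret: str, body: bytes, header_value: str) -> bool:
    """Validate an X-Hub-Signature-256 header of the form 'sha256=<hexdigest>'.

    Illustrative sketch; the service's own validation may differ in detail."""
    expected = "sha256=" + hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    # compare_digest runs in constant time, which prevents timing attacks
    return hmac.compare_digest(expected, header_value)

body = b'{"action":"Create","alert":{"alertId":"sig-test"}}'
sig = "sha256=" + hmac.new(b"your-webhook-secret", body, hashlib.sha256).hexdigest()
print(verify_signature("your-webhook-secret", body, sig))  # True
```

Note the HMAC must be computed over the exact raw bytes of the body — re-serialising the JSON first would change whitespace and break the signature.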
```bash
curl -X POST http://localhost:8080/alert \
  -H "Content-Type: application/json" \
  -d '{
    "action": "Create",
    "alert": {
      "alertId": "test-001",
      "message": "Test Alert — please ignore",
      "priority": "P3",
      "entity": "dev-server",
      "description": "This is a test alert sent manually."
    }
  }'
```

If you are currently on-call, this will trigger a TTS announcement and create a persistent notification in HA.
```bash
curl -X POST "http://localhost:8080/alert?mode=always" \
  -H "Content-Type: application/json" \
  -d '{
    "action": "Create",
    "alert": {
      "alertId": "always-test-001",
      "message": "Infrastructure Monitor Test",
      "priority": "P2",
      "entity": "prod-server-01"
    }
  }'
```

This path always notifies regardless of on-call status.
```bash
curl -X POST "http://localhost:8080/alert?mode=always" \
  -H "Content-Type: application/json" \
  -d '{
    "action": "EscalateNext",
    "alert": {
      "alertId": "test-001",
      "message": "Test Alert — please ignore",
      "priority": "P1",
      "entity": "prod-db-01"
    }
  }'
```

Check current on-call status:

```bash
curl http://localhost:8080/status | python3 -m json.tool
```

Invalidate the on-call cache:

```bash
curl -X POST http://localhost:8080/cache/invalidate
```

If `WEBHOOK_SECRET` is set, generate the signature before sending:
```bash
SECRET="your-webhook-secret"
BODY='{"action":"Create","alert":{"alertId":"sig-test","message":"Signed test","priority":"P3"}}'
SIG="sha256=$(echo -n "$BODY" | openssl dgst -sha256 -hmac "$SECRET" | awk '{print $2}')"

curl -X POST http://localhost:8080/alert \
  -H "Content-Type: application/json" \
  -H "X-Hub-Signature-256: $SIG" \
  -d "$BODY"
```

The `/alert` endpoint accepts any JSON payload matching the OpsGenie webhook format. You don't need JSM — any monitoring system, script, or automation that can send HTTP POST requests can trigger HA alerts.
```json
{
  "action": "Create",
  "alert": {
    "alertId": "unique-id-123",
    "message": "Your alert title here",
    "priority": "P1",
    "entity": "optional-system-name",
    "description": "Optional longer description text"
  }
}
```

| Field | Required | Description |
|---|---|---|
| `action` | Yes | `Create`, `EscalateNext`, `Acknowledge`, or `Close` |
| `alert.alertId` | Yes | Unique identifier (used for dedup and notification tracking) |
| `alert.message` | Yes | Alert title / summary (spoken by TTS) |
| `alert.priority` | No | P1–P5 (default: P3) |
| `alert.entity` | No | System / host name |
| `alert.description` | No | Longer details (first 200 chars used in TTS) |
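The same payload can be assembled and POSTed from Python's standard library alone — a sketch for scripts that can't shell out to curl. The URL and key are placeholders:

```python
import json
import urllib.request

def build_alert(alert_id: str, message: str, priority: str = "P3",
                entity: str = "", description: str = "") -> dict:
    """Assemble a minimal OpsGenie-style payload per the field table above."""
    return {
        "action": "Create",
        "alert": {
            "alertId": alert_id,
            "message": message,
            "priority": priority,
            "entity": entity,
            "description": description,
        },
    }

def send_alert(url: str, payload: dict) -> int:
    """POST the payload as JSON and return the HTTP status code."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

payload = build_alert("script-001", "Disk nearly full", "P2", "nas-01")
# send_alert("http://localhost:8080/alert?mode=always&key=YOUR_KEY", payload)
```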
Configure a webhook notification in Uptime Kuma with the Notification Type set to "Webhook" / custom JSON:
```json
# Uptime Kuma → Settings → Notifications → Add → Webhook
# URL: http://your-notifier:8080/alert?mode=always&key=YOUR_KEY
# Method: POST
# Body:
{
  "action": "Create",
  "alert": {
    "alertId": "uptime-kuma-{{ monitorJSON.id }}",
    "message": "{{ monitorJSON.name }} is {{ heartbeatJSON.status == 1 ? 'UP' : 'DOWN' }}",
    "priority": "P2",
    "entity": "{{ monitorJSON.hostname }}"
  }
}
```

Use a Grafana "webhook" contact point with the OpsGenie payload format:
```bash
# Grafana → Alerting → Contact Points → New → Webhook
# URL: http://your-notifier:8080/alert?mode=always&key=YOUR_KEY
# Method: POST
#
# Or use curl to forward Grafana alerts via a script:
curl -X POST "http://your-notifier:8080/alert?mode=always&key=YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "action": "Create",
    "alert": {
      "alertId": "grafana-cpu-alert-prod01",
      "message": "CPU usage above 95% on prod-01",
      "priority": "P1",
      "entity": "prod-01",
      "description": "CPU has been above 95% for the last 5 minutes. Current: 98.2%."
    }
  }'
```

Use Alertmanager's webhook receiver to POST to the notifier:
```yaml
# alertmanager.yml
receivers:
  - name: ha-notifier
    webhook_configs:
      - url: "http://your-notifier:8080/alert?mode=always&key=YOUR_KEY"
        send_resolved: true
```

Then use a small relay script or Alertmanager template to transform alerts into the expected format.
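The heart of such a relay is a transform from Alertmanager's webhook payload to the notifier's format. A sketch under stated assumptions — the severity-to-priority mapping and the use of `fingerprint` as the `alertId` are illustrative choices, not requirements:

```python
def alertmanager_to_notifier(am_payload: dict) -> list[dict]:
    """Map an Alertmanager webhook payload to the notifier's
    OpsGenie-style format. Sketch only; adjust mappings to taste."""
    severity_to_priority = {"critical": "P1", "warning": "P2", "info": "P4"}
    out = []
    for alert in am_payload.get("alerts", []):
        labels = alert.get("labels", {})
        out.append({
            # Alertmanager's "resolved" state maps naturally onto Close
            "action": "Create" if alert.get("status") == "firing" else "Close",
            "alert": {
                "alertId": alert.get("fingerprint", labels.get("alertname", "unknown")),
                "message": labels.get("alertname", "Alertmanager alert"),
                "priority": severity_to_priority.get(labels.get("severity"), "P3"),
                "entity": labels.get("instance", ""),
                "description": alert.get("annotations", {}).get("summary", ""),
            },
        })
    return out
```

Using the alert's stable `fingerprint` as the `alertId` means a later "resolved" notification closes the same incident it created.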
Trigger an alert from HA itself (e.g. a sensor threshold):
```yaml
# HA automation action
service: rest_command.trigger_notifier_alert
data:
  alert_id: "ha-temp-alert-{{ now().isoformat() }}"
  message: "Temperature sensor above threshold"
  priority: "P2"
  entity: "sensor.living_room_temperature"
  description: "Current temperature: {{ states('sensor.living_room_temperature') }}°C"
```

```yaml
# configuration.yaml
rest_command:
  trigger_notifier_alert:
    url: "http://your-notifier:8080/alert?mode=always&key=YOUR_KEY"
    method: POST
    content_type: "application/json"
    payload: >
      {"action":"Create","alert":{"alertId":"{{ alert_id }}","message":"{{ message }}","priority":"{{ priority }}","entity":"{{ entity }}","description":"{{ description }}"}}
```

Trigger an alert from any script or cron job:
```bash
#!/bin/bash
# notify-ha.sh — send an alert to the JSM-HA Notifier
NOTIFIER_URL="http://your-notifier:8080/alert?mode=always&key=YOUR_KEY"

curl -s -X POST "$NOTIFIER_URL" \
  -H "Content-Type: application/json" \
  -d "{
    \"action\": \"Create\",
    \"alert\": {
      \"alertId\": \"script-$(date +%s)\",
      \"message\": \"$1\",
      \"priority\": \"${2:-P3}\",
      \"entity\": \"$(hostname)\"
    }
  }"
```

Usage: `./notify-ha.sh "Backup failed on NAS" P2`
To dismiss the persistent HA notification and stop TTS repeats, send a Close or Acknowledge action with the same alertId:
```bash
curl -X POST "http://your-notifier:8080/alert?mode=always&key=YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{"action": "Close", "alert": {"alertId": "the-original-alert-id", "message": "resolved"}}'
```

Or use the dedicated acknowledge endpoint:

```bash
curl -X POST "http://your-notifier:8080/alert/the-original-alert-id/acknowledge?key=YOUR_KEY"
```

JSM sends different action values for each alert lifecycle event. The notifier handles all of them:
| JSM Action | Notifier Behaviour | HA Webhook Config |
|---|---|---|
| `Create` | TTS + persistent notification (if on-call/always-notify) | `HA_WEBHOOK_ON_CREATE` |
| `EscalateNext` | TTS + persistent notification (always if targeted at you) | `HA_WEBHOOK_ON_ESCALATE` |
| `Acknowledge` | Dismiss HA notification, cancel TTS repeat | `HA_WEBHOOK_ON_ACKNOWLEDGE` |
| `Close` | Dismiss HA notification, cancel TTS repeat | `HA_WEBHOOK_ON_CLOSE` |
| `AddNote` | Fire HA webhook only (no TTS) | `HA_WEBHOOK_ON_UPDATE` |
| `UnAcknowledge` | Fire HA webhook only | `HA_WEBHOOK_ON_UPDATE` |
| `AssignOwnership` | Fire HA webhook only | `HA_WEBHOOK_ON_UPDATE` |
| `SlaBreached` | Fire HA webhook only | `HA_WEBHOOK_ON_SLA_BREACH` |
In JSM, configure your outgoing webhook to include all action types you want to handle:
| Field | Value |
|---|---|
| Alert actions | Create, EscalateNext, Acknowledge, Close, AddNote, UnAcknowledge, AssignOwnership |
| Webhook URL | https://your-host/alert?mode=always&key=YOUR_API_KEY |
These work with JSM, Uptime Kuma, Grafana, scripts, or any HTTP client:
New alert:

```json
{"action": "Create", "alert": {"alertId": "inc-001", "message": "Server down", "priority": "P1", "entity": "prod-01"}}
```

Escalation:

```json
{"action": "EscalateNext", "alert": {"alertId": "inc-001", "message": "Server down", "priority": "P1", "entity": "prod-01"}}
```

Acknowledged:

```json
{"action": "Acknowledge", "alert": {"alertId": "inc-001", "message": "Server down"}}
```

Resolved / Closed:

```json
{"action": "Close", "alert": {"alertId": "inc-001", "message": "Server down"}}
```

Updated (note added):

```json
{"action": "AddNote", "alert": {"alertId": "inc-001", "message": "Server down", "description": "Restarting services..."}}
```

SLA Breached:

```json
{"action": "SlaBreached", "alert": {"alertId": "inc-001", "message": "Server down", "priority": "P1"}}
```

Send a full alert lifecycle from the command line for testing:
```bash
URL="http://localhost:8080/alert?mode=always&key=YOUR_KEY"
ID="test-lifecycle-$(date +%s)"

# Create
curl -s -X POST "$URL" -H "Content-Type: application/json" \
  -d "{\"action\":\"Create\",\"alert\":{\"alertId\":\"$ID\",\"message\":\"Test lifecycle alert\",\"priority\":\"P2\",\"entity\":\"test-server\"}}"
sleep 5

# Acknowledge
curl -s -X POST "$URL" -H "Content-Type: application/json" \
  -d "{\"action\":\"Acknowledge\",\"alert\":{\"alertId\":\"$ID\",\"message\":\"Test lifecycle alert\"}}"
sleep 5

# Close
curl -s -X POST "$URL" -H "Content-Type: application/json" \
  -d "{\"action\":\"Close\",\"alert\":{\"alertId\":\"$ID\",\"message\":\"Test lifecycle alert\"}}"
```

The notifier can fire Home Assistant webhook triggers on each alert event, enabling you to control lights, scenes, scripts, or any HA automation in response to incidents.
- You define an HA automation with a `webhook` trigger
- You set the webhook ID in the notifier's `.env` file
- When the matching event occurs, the notifier POSTs the alert data to HA
- Your automation receives the data as `trigger.json.*` variables
```yaml
# automations.yaml
- alias: "Flash office light red on P1 alert"
  trigger:
    - platform: webhook
      webhook_id: "jsm_alert_created"
      allowed_methods: [POST]
      local_only: true
  condition:
    - condition: template
      value_template: "{{ trigger.json.priority == 'P1' }}"
  action:
    - service: light.turn_on
      target:
        entity_id: light.office_desk
      data:
        color_name: red
        brightness: 255
        flash: long
    - delay: "00:00:10"
    - service: light.turn_on
      target:
        entity_id: light.office_desk
      data:
        color_name: white
```

In `.env`:

```bash
HA_WEBHOOK_ON_CREATE=jsm_alert_created
```

In your HA automation templates, access the alert data via:
| Variable | Description |
|---|---|
| `trigger.json.event` | Action name (Create, EscalateNext, etc.) |
| `trigger.json.alert_id` | Unique alert identifier |
| `trigger.json.message` | Alert title / summary |
| `trigger.json.priority` | P1–P5 |
| `trigger.json.entity` | System / host name |
| `trigger.json.description` | First 200 chars of description |
| `trigger.json.source` | Alert source |
| `trigger.json.tags` | List of tags |
```yaml
- alias: "Flash all lights on escalation"
  trigger:
    - platform: webhook
      webhook_id: "jsm_escalation"
      allowed_methods: [POST]
      local_only: true
  action:
    - service: light.turn_on
      target:
        entity_id: all
      data:
        flash: long
        color_name: red
```

In `.env`:

```bash
HA_WEBHOOK_ON_ESCALATE=jsm_escalation
```

```yaml
- alias: "Status light green on resolve"
  trigger:
    - platform: webhook
      webhook_id: "jsm_alert_resolved"
      allowed_methods: [POST]
      local_only: true
  action:
    - service: light.turn_on
      target:
        entity_id: light.status_indicator
      data:
        color_name: green
        brightness: 200
    - delay: "00:01:00"
    - service: light.turn_off
      target:
        entity_id: light.status_indicator
```

In `.env`:

```bash
HA_WEBHOOK_ON_CLOSE=jsm_alert_resolved
```

```yaml
- alias: "SLA breach warning"
  trigger:
    - platform: webhook
      webhook_id: "jsm_sla_breached"
      allowed_methods: [POST]
      local_only: true
  action:
    - service: light.turn_on
      target:
        entity_id: light.office_desk
      data:
        color_name: yellow
        flash: short
```

In `.env`:

```bash
HA_WEBHOOK_ON_SLA_BREACH=jsm_sla_breached
```

```yaml
- alias: "Color code by priority"
  trigger:
    - platform: webhook
      webhook_id: "jsm_alert_created"
      allowed_methods: [POST]
      local_only: true
  action:
    - service: light.turn_on
      target:
        entity_id: light.status_indicator
      data:
        brightness: 255
        rgb_color: >
          {% if trigger.json.priority == 'P1' %}
            [255, 0, 0]
          {% elif trigger.json.priority == 'P2' %}
            [255, 165, 0]
          {% elif trigger.json.priority == 'P3' %}
            [255, 255, 0]
          {% else %}
            [0, 255, 0]
          {% endif %}
```

You can fire multiple webhooks for a single event:
```bash
HA_WEBHOOK_ON_CREATE=jsm_alert_created,flash_office_lights,send_mobile_notification
HA_WEBHOOK_ON_ESCALATE=jsm_escalation,flash_all_lights,play_siren
```

An optional SQLite-backed incident tracker that exposes a JSON API at `/incidents`. Useful for building Grafana dashboards, monitoring tools, or just quickly checking what's open.
```bash
INCIDENT_DASHBOARD_ENABLED=true
INCIDENT_DB_PATH=/data/incidents.db
INCIDENT_SYNC_INTERVAL_MINUTES=5
```

For persistent storage, mount a volume in `docker-compose.yml`:

```yaml
services:
  jsm-ha-notifier:
    volumes:
      - ./data:/data
```

List all incidents with optional filters:
```bash
# All incidents
curl http://localhost:8080/incidents

# Only open incidents
curl "http://localhost:8080/incidents?status=open"

# Only P1 incidents
curl "http://localhost:8080/incidents?priority=P1"

# Open P1 incidents
curl "http://localhost:8080/incidents?status=open&priority=P1"
```

Response:
```json
{
  "incidents": [
    {
      "alert_id": "abc-123",
      "message": "Database connection pool exhausted",
      "priority": "P1",
      "entity": "prod-db-01",
      "description": "All 200 connections in use...",
      "source": "Datadog",
      "status": "open",
      "action": "Create",
      "created_at": "2026-03-22T10:30:00+00:00",
      "updated_at": "2026-03-22T10:30:00+00:00",
      "acknowledged_at": null,
      "closed_at": null
    }
  ],
  "count": 1
}
```

Aggregate counts:

```bash
curl http://localhost:8080/incidents/summary
```

```json
{
  "total_open": 3,
  "total_closed": 12,
  "by_status": {"open": 2, "escalated": 1, "closed": 12},
  "by_priority": {"P1": 1, "P2": 2}
}
```

Single incident detail:

```bash
curl http://localhost:8080/incidents/abc-123
```

Force an immediate sync from JSM:

```bash
curl -X POST http://localhost:8080/incidents/sync
```

The `/incidents` endpoint is compatible with Grafana's JSON or Infinity datasource plugins:
- Install the Infinity datasource plugin in Grafana
- Add a new datasource:
  - Type: Infinity
  - URL: `http://your-notifier:8080`
- Create a dashboard panel:
  - Source: Infinity
  - Type: JSON
  - URL: `/incidents?status=open`
  - Root selector: `$.incidents`
  - Add columns: `alert_id`, `message`, `priority`, `status`, `entity`, `created_at`

For the summary endpoint, use `/incidents/summary` to build gauge or stat panels showing open incident counts by priority.
A ready-to-import Grafana dashboard is included in this repo: `grafana/incident-dashboard.json`

To import:

- In Grafana, go to Dashboards > Import
- Upload `grafana/incident-dashboard.json`
- Select your Infinity datasource
- Set the `api_key` variable to your `WEBHOOK_API_KEY` value (under dashboard Settings > Variables)
The dashboard includes:
- Stat panels: Total Open, Total Closed, Open P1, Open P2, Open P3
- Full incident table with priority/status color coding and column filters
- Pie charts: By Status, By Priority (open only)
- Auto-refresh every 30 seconds
Close a stale incident directly from the API (without waiting for JSM):
```bash
curl -X POST "http://localhost:8080/incidents/the-alert-id/close?key=YOUR_KEY"
```

This sets the status to closed, dismisses the HA persistent notification, and cancels any TTS repeats.
Automatically clean up old incidents to prevent unbounded database growth:
```bash
INCIDENT_RETENTION_OPEN_DAYS=30     # Delete stale open incidents after 30 days
INCIDENT_RETENTION_CLOSED_DAYS=90   # Delete resolved incidents after 90 days
```

Retention runs during each sync cycle (`INCIDENT_SYNC_INTERVAL_MINUTES`). Set to 0 to keep everything forever (default).
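The retention pass amounts to two deletes against the SQLite store, each with its own cutoff. A sketch of that logic — the table and column names here are assumptions for illustration, not the service's actual schema:

```python
import sqlite3
from datetime import datetime, timedelta, timezone

def purge_old_incidents(conn: sqlite3.Connection,
                        open_days: int, closed_days: int) -> int:
    """Delete stale open incidents and old closed incidents.

    Illustrative sketch; assumes an `incidents` table with
    `status`, `created_at`, and `closed_at` (ISO-8601 UTC) columns.
    A retention value of 0 disables that rule, matching the docs above."""
    now = datetime.now(timezone.utc)
    deleted = 0
    if open_days > 0:
        cutoff = (now - timedelta(days=open_days)).isoformat()
        deleted += conn.execute(
            "DELETE FROM incidents WHERE status != 'closed' AND created_at < ?",
            (cutoff,)).rowcount
    if closed_days > 0:
        cutoff = (now - timedelta(days=closed_days)).isoformat()
        deleted += conn.execute(
            "DELETE FROM incidents WHERE status = 'closed' AND closed_at < ?",
            (cutoff,)).rowcount
    conn.commit()
    return deleted
```

Comparing ISO-8601 timestamps lexicographically works here because every value is stored in the same UTC format.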
When the incident dashboard is enabled, the notifier automatically enriches new alerts by fetching full details from the JSM API on Create events. This adds:
- Tags — alert tags from JSM
- Teams — team assignments
- Responders — who the alert was sent to
- Custom details — any key/value pairs from the alert's `details` field
All enrichment data is stored in the SQLite database and returned in the /incidents API response.
```bash
# Build locally (until you push to GHCR)
docker compose up -d --build

# Or pull the pre-built image after the first CI release
docker compose pull
docker compose up -d

# Watch logs
docker compose logs -f jsm-ha-notifier
```

Or run the container directly with `docker run`:

```bash
docker run -d \
  --name jsm-ha-notifier \
  --restart unless-stopped \
  -p 8080:8080 \
  --env-file /path/to/.env \
  --read-only \
  --tmpfs /tmp \
  ghcr.io/realdougeubanks/jsm-ha-notifier:latest
```

The repository includes two workflows. Images are published to both GitHub Container Registry (GHCR) and Docker Hub.
Triggers on every push to main or develop and on pull requests. Runs:
- `ruff` (lint)
- `black` (format check)
- `mypy` (type check, advisory)
- `pip-audit` (dependency CVE scan)
- `bandit` (Python SAST, advisory)
- `pytest` with coverage (fails below 70%)
- Tests against Python 3.11, 3.12, 3.13
Triggers on push to main or any version tag (v*). Builds a multi-arch Docker image (linux/amd64 + linux/arm64) and pushes to both GHCR and Docker Hub. Also runs Trivy container vulnerability scanning and uploads results to GitHub Security.
No personal access tokens or manual secrets are needed — the workflow uses the built-in GITHUB_TOKEN that GitHub provides automatically to every Actions run, which already has packages: write permission as configured in the workflow.
| Git event | Image tags |
|---|---|
| Push to `main` | `latest`, `main`, `<short-sha>` |
| Push tag `v1.2.3` | `v1.2.3`, `<short-sha>` |
After the first successful release workflow run, your container image is private by default. To make it public so others (and your unRAID server) can pull it without authentication:
- Go to `https://github.com/RealDougEubanks?tab=packages`
- Click the `jsm-ha-notifier` package
- Click Package settings (right side)
- Under Danger Zone, click Change visibility → Public
Alternatively, link the package to your repository:
- On the package page, click Connect repository and select your repo
- The package inherits the repository's visibility
Once public, docker pull ghcr.io/realdougeubanks/jsm-ha-notifier:latest works without login from any machine.
The release workflow also pushes to Docker Hub if credentials are configured. To enable:
- Create a Docker Hub access token at https://hub.docker.com/settings/security
- In your GitHub repo, go to Settings > Secrets and variables > Actions
- Add these secrets/variables:
  - Variable `DOCKERHUB_USERNAME` = your Docker Hub username (e.g. `realdougeubanks`)
  - Secret `DOCKERHUB_TOKEN` = the access token from step 1
Once configured, every release pushes to both registries:
```bash
docker pull ghcr.io/realdougeubanks/jsm-ha-notifier:latest
docker pull realdougeubanks/jsm-ha-notifier:latest
```
If Docker Hub credentials are not set, the workflow gracefully skips the Docker Hub login and only pushes to GHCR.
Every image pushed by the release workflow is:
- Signed with cosign (Sigstore keyless / OIDC) — proves the image was built by this GitHub Actions workflow
- Attested with SLSA Build Level 2 — GitHub's native build provenance attestation
To verify an image before pulling:
```bash
# Install cosign: https://docs.sigstore.dev/cosign/system_config/installation/
cosign verify ghcr.io/realdougeubanks/jsm-ha-notifier:latest \
  --certificate-oidc-issuer https://token.actions.githubusercontent.com \
  --certificate-identity-regexp github.com/RealDougEubanks/JSM-HomeAssistant-Notifier
```

This confirms the image was built from this repository's GitHub Actions — not tampered with after the fact.
After the image has been published, edit docker-compose.yml:
```yaml
services:
  jsm-ha-notifier:
    image: ghcr.io/realdougeubanks/jsm-ha-notifier:latest
    # build: .   ← comment out or remove this line
```

Receives JSM webhook payloads.
| Query param | Values | Behaviour |
|---|---|---|
| `mode` | `always` | Skip on-call check; always notify |
| (absent) | — | Check on-call status before notifying |
Expected payload: standard OpsGenie / JSM Ops outgoing webhook JSON.
Acknowledges a JSM alert, dismisses the HA notification, and cancels TTS repeats. Intended for use from HA automations (see .env.example for a ready-to-use rest_command snippet).
Returns {"alert_id": "...", "acknowledged": true} on success, 502 if JSM rejects the request.
Returns {"status": "ok"}. Used by Docker health-check and external monitors.
Deep health check — verifies JSM and HA connectivity, validates configured schedules, and reports operational state. Returns 200 if core checks pass, 503 if any fail. Gated by API key (query param, header, or path prefix) when WEBHOOK_API_KEY is set.
{
"healthy": true,
"timestamp": "2026-03-25T14:32:01+00:00",
"started_at": "2026-03-25T14:00:00+00:00",
"uptime_seconds": 1921.0,
"version": "2.0.0",
"checks": { "jsm_api": "ok", "ha_api": "ok" },
"schedules": {
"check_oncall": {
"Cloud Engineering On-Call Schedule": {
"schedule_id": "abc-123",
"exists_in_jsm": true,
"on_call": true
}
},
"always_notify": ["Internal Systems_schedule"]
},
"cache": {
"schedule_id_entries": 2,
"oncall_entries": 1,
"dedup_entries": 0
},
"background_tasks": {
"batch_queue_size": 0,
"active_tts_repeats": 0,
"tts_repeat_alert_ids": []
},
"incident_dashboard": { "enabled": false },
"configuration": {
"oncall_cache_ttl_seconds": 300,
"alert_dedup_ttl_seconds": 60,
"token_check_interval_hours": 24,
"alert_batch_window_seconds": 0,
"tts_repeat_interval_seconds": 0,
"tts_repeat_max": 5,
"tts_repeat_priorities": "P1",
"silent_window": "(none)",
"terse_window": "(none)",
"webhook_secret_configured": true,
"webhook_api_key_configured": true,
"emojis_enabled": true
}
}

No tokens, secrets, URLs, or user IDs are included in the response.
Returns current on-call status for all watched schedules (forces a fresh JSM API lookup, bypasses cache).
{
"on_call_schedules": {
"Your On-Call Schedule": {
"schedule_id": "abc-123",
"on_call": true
}
},
"always_notify_schedules": ["Your Always-Notify Schedule"]
}

Prometheus-compatible metrics in text exposition format. Gated by API key when configured.
jsm_notifier_alerts_received_total 42
jsm_notifier_alerts_notified_total 38
jsm_notifier_alerts_deduplicated_total 3
jsm_notifier_alerts_dismissed_total 12
jsm_notifier_alerts_rate_limited_total 0
jsm_notifier_credential_checks_total 7
jsm_notifier_credential_checks_failed_total 0
jsm_notifier_healthz_requests_total 15
jsm_notifier_uptime_seconds 86412.3
Re-reads .env and applies configuration changes without restarting the container. Clears all caches (schedule ID, on-call, dedup) on reload. Gated by API key when configured.
Returns {"status": "reloaded"} on success, 500 if the new config is invalid (previous config remains active).
Clears the cached on-call status so the next alert forces a fresh JSM API check. Useful immediately after a rotation hand-off.
List incidents with optional ?status= and ?priority= filters. Requires INCIDENT_DASHBOARD_ENABLED=true.
Aggregate incident counts by status and priority.
Single incident detail by alert ID.
Force an immediate sync of open alerts from JSM into the incident store.
The spoken message includes:
- Escalation prefix ("Escalated alert!") when applicable
- Priority level in plain English ("Priority 1, Critical")
- Alert message / title
- System / entity name
- Truncated description (first 200 characters)
Example: "Attention! Priority 1, Critical alert from Jira Service Management. Alert: Database connection lost. System: prod-db-01. Details: All connections exhausted..."
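The assembly of that spoken string can be sketched as follows. This is an illustration of the format, not the service's actual code; the field names (`priority`, `message`, `entity`, `description`) are hypothetical:

```python
# Map JSM priority codes to spoken phrases (P3-P5 would follow the same pattern)
PRIORITY_WORDS = {"P1": "Priority 1, Critical", "P2": "Priority 2, High"}

def build_announcement(priority, message, entity=None, description=None, escalated=False):
    parts = []
    if escalated:
        parts.append("Escalated alert!")
    parts.append(
        f"Attention! {PRIORITY_WORDS.get(priority, priority)} alert "
        "from Jira Service Management."
    )
    parts.append(f"Alert: {message}.")
    if entity:
        parts.append(f"System: {entity}.")
    if description:
        # Only the first 200 characters of the description are spoken
        parts.append(f"Details: {description[:200]}...")
    return " ".join(parts)

print(build_announcement("P1", "Database connection lost", "prod-db-01",
                         "All connections exhausted"))
```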
Instead of "Playing Default Media Receiver", the HA media player will show:
🔴 P1: Database connection lost
Your Notifier Label
prod-db-01
This is set via the extra.metadata block in the media_player.play_media service call. The label shown as the artist is configurable via HA_NOTIFIER_LABEL in .env.
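A rough shape of the service-call payload with the `extra.metadata` block (illustrative only: the entity ID is an example, the TTS `media_content_id` is left abstract, and the exact metadata fields your player honours depend on the integration):

```python
# Hypothetical media_player.play_media payload showing where
# extra.metadata fits; values mirror the display example above.
payload = {
    "entity_id": "media_player.bedroom_speaker",  # example entity
    "media_content_type": "music",
    "media_content_id": "<tts-generated-url>",    # produced by your TTS integration
    "extra": {
        "metadata": {
            "title": "🔴 P1: Database connection lost",
            "artist": "Your Notifier Label",      # from HA_NOTIFIER_LABEL in .env
        }
    },
}
```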
A persistent notification is created in the HA dashboard with the full alert details. It is automatically dismissed when JSM sends an Acknowledge or Close action for that alert.
# Create a virtual environment
python3 -m venv .venv
source .venv/bin/activate
# Install all dependencies
pip install -r requirements-dev.txt
# Copy and edit config
cp .env.example .env
# Fill in .env before running tests or the server
# Run tests
pytest tests/ -v
# Run the service locally
uvicorn src.main:app --reload --port 8080

Schedule names in .env must match JSM exactly (case-sensitive). List your schedules:
curl -s -u "you@yourcompany.com:YOUR_API_TOKEN" \
"https://api.atlassian.com/jsm/ops/api/YOUR_CLOUD_ID/v1/schedules" \
  | python3 -m json.tool | grep '"name"'

Copy the exact name into .env and restart.
- Verify the HA token is valid:
  curl -H "Authorization: Bearer YOUR_HA_TOKEN" https://your-ha-url/api/
- Verify the media player entity ID:
  curl -H "Authorization: Bearer YOUR_HA_TOKEN" \
    https://your-ha-url/api/states \
    | python3 -m json.tool | grep media_player
- Check service logs:
docker compose logs -f jsm-ha-notifier
Your HA media player integration may not support the extra.metadata block. This is normal for some Google Cast / Chromecast firmware versions. The TTS audio itself will still play correctly — only the display label is affected.
The on-call cache may be stale. Force a refresh:
curl -X POST http://localhost:8080/cache/invalidate

When WEBHOOK_API_KEY is set, endpoints return 404 Not Found (not 401) if the API key is missing or wrong. This is intentional: it prevents attackers from discovering that authenticated endpoints exist. If you're getting unexpected 404s:
- Confirm WEBHOOK_API_KEY is set in your .env
- Confirm your request includes the key via one of:
  - Query parameter: ?key=YOUR_KEY
  - Path prefix: /YOUR_KEY/endpoint
  - HTTP header: X-API-Key: YOUR_KEY
- Verify the key value matches exactly (no extra whitespace)
The /health and /robots.txt endpoints are always unauthenticated.
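The three accepted key locations and the 404-on-failure behaviour can be sketched like this. This is an illustration of the scheme, not the service's actual implementation:

```python
import hmac

def extract_key(query_params: dict, path: str, headers: dict):
    """Return (key, normalized_path); the path-prefix form is /KEY/endpoint."""
    if "key" in query_params:
        return query_params["key"], path
    if "X-API-Key" in headers:
        return headers["X-API-Key"], path
    parts = path.lstrip("/").split("/", 1)
    if len(parts) == 2:
        return parts[0], "/" + parts[1]
    return None, path

def check(key, expected: str) -> int:
    # compare_digest avoids timing side-channels; returning 404 (not 401)
    # hides the existence of the authenticated endpoint
    if key is not None and hmac.compare_digest(key, expected):
        return 200
    return 404

key, path = extract_key({}, "/YOURKEY/metrics", {})
status = check(key, "YOURKEY")  # authenticated → 200
```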
Confirm that the WEBHOOK_SECRET in .env matches the secret configured in JSM exactly. Remember: the HMAC is computed over the raw request body, not the parsed JSON.
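A minimal sketch of verifying an HMAC over the raw body, assuming a hex-encoded SHA-256 digest (the header name and digest scheme JSM actually sends may differ; check your integration's webhook settings):

```python
import hashlib
import hmac

def verify_signature(raw_body: bytes, secret: str, received_sig: str) -> bool:
    # The digest must be computed over the raw bytes exactly as received;
    # re-serializing parsed JSON would change whitespace and break the match.
    expected = hmac.new(secret.encode(), raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, received_sig)

body = b'{"action": "Create", "alert": {"alertId": "abc-123"}}'
sig = hmac.new(b"my-webhook-secret", body, hashlib.sha256).hexdigest()
assert verify_signature(body, "my-webhook-secret", sig)
```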
- Confirm the URL is reachable from the internet:
  curl https://your-host/health
- Check JSM webhook delivery logs: JSM → Settings → Integrations → your webhook → Logs
The service checks token validity every TOKEN_CHECK_INTERVAL_HOURS hours (default: 24). If your token has expired:
- Create a new token at https://id.atlassian.com/manage-profile/security/api-tokens
- Update JSM_API_TOKEN in .env
- Restart the container:
docker compose restart
The persistent HA notification will be dismissed automatically on the next successful token check (within 30 seconds of startup).
Quiet hours: If the credential check fails during a SILENT_WINDOW, the TTS announcement is suppressed — only the persistent dashboard notification is created. This prevents the service from waking you up at night for a non-urgent token issue that can wait until morning.
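A cross-midnight window check like the one SILENT_WINDOW and TERSE_WINDOW rely on can be sketched as follows (the "HH:MM-HH:MM" format is assumed here for illustration):

```python
from datetime import time

def in_window(window: str, now: time) -> bool:
    """True if `now` falls inside an HH:MM-HH:MM window, crossing midnight if needed."""
    start_s, end_s = window.split("-")
    start = time(*map(int, start_s.split(":")))
    end = time(*map(int, end_s.split(":")))
    if start <= end:                       # same-day window, e.g. 09:00-17:00
        return start <= now < end
    return now >= start or now < end       # crosses midnight, e.g. 22:00-07:00

assert in_window("22:00-07:00", time(23, 30))   # late evening: inside
assert in_window("22:00-07:00", time(6, 0))     # early morning: inside
assert not in_window("22:00-07:00", time(12, 0))  # midday: outside
```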
The service is designed to wake you up — but nothing wakes you up if the service itself is down. Configure an external uptime monitor to poll GET /health and alert you if it stops responding.
NodePing (recommended):
- Create a new HTTP check
- URL: https://your-host/health (or your Cloudflare Tunnel URL)
- Expected status: 200
- Expected body contains: "ok"
- Check interval: 1 minute
- Notification contacts: your email, SMS, or PagerDuty
Other options: Uptime Kuma (self-hosted), UptimeRobot, Pingdom, or Cloudflare Health Checks.
The /health endpoint is always unauthenticated and has no external dependencies — it returns {"status": "ok"} as long as the process is alive.
The service runs a single uvicorn worker. This is sufficient for typical JSM webhook volume (a few alerts per hour), but be aware:
- If the process crashes, Docker's restart: unless-stopped policy will restart it automatically, but alerts arriving during the ~5 s restart window will be lost (JSM retries, so they'll typically arrive again)
- If the event loop blocks on a slow JSM/HA API call, other webhooks queue behind it; the async architecture mitigates this, but a truly hung connection could stall processing
For higher availability, run the service on a host with reliable uptime and use the external uptime monitor above to detect outages quickly.
By default, the incident database uses /tmp/incidents.db which is lost on container restart (tmpfs). For production use, mount a Docker volume:
# docker-compose.yml
services:
  jsm-ha-notifier:
    volumes:
      - ./data:/data

# .env
INCIDENT_DB_PATH=/data/incidents.db

- Atlassian API token created with minimum necessary permissions (JSM Ops schedule access)
- .env is in .gitignore and was never committed
- WEBHOOK_API_KEY is set (openssl rand -hex 32), or WEBHOOK_SECRET is set, or both
- Service runs as non-root user (handled in Dockerfile)
- Container filesystem is read-only (read_only: true in compose)
- Port 8080 is behind a TLS-terminating reverse proxy or Cloudflare Tunnel before reaching the internet
- HA long-lived token was created specifically for this service (not shared with other integrations)
jsm-ha-notifier/
├── .github/
│ └── workflows/
│ ├── ci.yml # Lint, test, coverage
│ └── release.yml # Build & push multi-arch Docker image to GHCR
├── src/
│ ├── __init__.py
│ ├── main.py # FastAPI app, routes, signature verification
│ ├── config.py # Pydantic settings (all from .env)
│ ├── models.py # JSM webhook payload models
│ ├── jsm_client.py # Async JSM Ops API client with caching
│ ├── ha_client.py # Async Home Assistant REST API client
│ ├── alert_processor.py # Core routing / dedup / notification logic
│ ├── incident_store.py # SQLite-backed incident state tracker
│ └── time_windows.py # Time-window parsing and media player routing
├── tests/
│ ├── conftest.py # Shared fixtures
│ ├── test_models.py
│ ├── test_config.py
│ ├── test_ha_client.py
│ ├── test_alert_processor.py
│ ├── test_announcement_format.py # Format, time windows, priority override, repeat
│ ├── test_robustness.py # Security: sanitization, safe formatter, emoji toggle
│ ├── test_incident_store.py # Incident store, webhooks, force-close, retention
│ └── test_time_windows.py # Window parsing, player routing
├── grafana/
│ └── incident-dashboard.json # Pre-built Grafana dashboard (import-ready)
├── .env.example # Template — copy to .env and fill in values
├── .gitignore
├── CHANGELOG.md
├── docker-compose.yml
├── Dockerfile
├── pyproject.toml # black, ruff, pytest, mypy config
├── requirements.txt
├── requirements-dev.txt
└── README.md
This project was designed and built by Doug Eubanks to solve a real on-call alerting problem. The architecture, requirements, testing, and deployment decisions were driven by him throughout.
Claude (Anthropic's AI assistant) was used as a collaborative engineering tool during development — writing and iterating on code, debugging issues, and helping document the project. All code was reviewed, tested in a live environment, and validated by the author before use.
This disclosure is provided in the spirit of transparency. The use of AI assistance does not diminish the engineering decisions, debugging work, or operational responsibility that went into this project.
Apache License 2.0 — see LICENSE for details.