7 changes: 7 additions & 0 deletions .env.example
Original file line number Diff line number Diff line change
@@ -1,3 +1,10 @@
# Example env file for Mijn API
# Use a managed Postgres in production/staging
# Use the postgresql:// scheme; SQLAlchemy 1.4+ rejects the legacy postgres:// form
DATABASE_URL=postgresql://postgres:yourpassword@db-host:5432/mijn_api
JWT_SECRET_KEY=replace_with_secure_secret
DATA_DIR=/tmp
# Optional: override cookie secure mode in dev
# COOKIE_SECURE=0
# Example environment variables for development
DATABASE_URL=postgresql://user:password@localhost:5432/mijn_api_db
JWT_SECRET_KEY=change-me-for-production
5 changes: 5 additions & 0 deletions .github/copilot-instructions.md
@@ -27,6 +27,11 @@ This file tells AI coding agents how this small FastAPI service is structured an
- SECRET_KEY is read from `JWT_SECRET_KEY` env var; hard-coded fallback exists for dev only. New changes must keep this env override.
- Avoid leaking internal exceptions in HTTP responses (the code intentionally maps errors to 4xx/5xx messages).

Additional notes (2026-02-26):
- The project now stores refresh tokens in a DB table `refresh_tokens` when a `DATABASE_URL` is provided. Alembic migration `20260226_add_refresh_tokens` was added; run `alembic upgrade heads` to apply it.
- For serverless deployments (Vercel), use a managed Postgres/Supabase and run migrations outside functions (CI/admin job). Do not rely on local filesystem persistence.
- Add `httpx` to test/runtime requirements for `TestClient` usage.

## Typical change examples
- Add a new protected endpoint: use `Depends(get_current_user)` for authenticated or `Depends(require_admin)` for admin-only.
- Return public user views: match the pattern used in `list_users` / `get_user` (hide `password` field and return `id`, `name`, `role`).
28 changes: 28 additions & 0 deletions .github/workflows/ci-deploy-migrations-placeholder.yml
@@ -0,0 +1,28 @@
name: "CI: Deploy Migrations (placeholder)"

on:
  workflow_dispatch:
  push:
    branches: [ main ]

jobs:
  check-and-trigger:
    name: Check secrets and trigger migration workflow
    runs-on: ubuntu-latest
    permissions:
      actions: write  # required for GITHUB_TOKEN to dispatch another workflow
    steps:
      - name: Check required secrets
        run: |
          if [ -z "${{ secrets.DATABASE_URL }}" ] || [ -z "${{ secrets.JWT_SECRET_KEY }}" ]; then
            echo "ERROR: Required secrets DATABASE_URL and JWT_SECRET_KEY are not set in repository secrets."
            echo "Set them via GitHub → Settings → Secrets and variables → Actions."
            exit 1
          fi
          echo "Secrets present. This job can trigger the 'deploy_migrations' workflow."

      - name: Trigger deploy_migrations workflow (optional)
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        # The gh CLI is preinstalled on GitHub-hosted runners
        run: gh workflow run deploy_migrations.yml --ref main
76 changes: 76 additions & 0 deletions .github/workflows/ci.yml
@@ -0,0 +1,76 @@
name: CI

on:
  push:
    branches: [ main, master ]
  pull_request:
    branches: [ main, master ]

jobs:
  test:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:15
        env:
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: postgres
          POSTGRES_DB: mijn_api_test
        ports:
          - 5432:5432
        options: >-
          --health-cmd "pg_isready -U postgres"
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    env:
      # SQLAlchemy 1.4+ rejects the legacy postgres:// scheme; use postgresql://
      DATABASE_URL: postgresql://postgres:postgres@127.0.0.1:5432/mijn_api_test
      # Fall back to a throwaway value so PRs from forks (which get no secrets) still run
      JWT_SECRET_KEY: ${{ secrets.JWT_SECRET_KEY || 'ci-only-test-secret' }}
      DATA_DIR: /tmp

    steps:
      - uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.11'

      - name: Install system packages
        run: sudo apt-get update && sudo apt-get install -y libpq-dev

      - name: Install Python dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
          pip install psycopg2-binary

      - name: Wait for Postgres
        run: |
          for i in {1..30}; do pg_isready -h 127.0.0.1 -p 5432 -U postgres && break || sleep 1; done

      - name: Run Alembic migrations
        run: alembic upgrade heads

      - name: Start app servers for integration tests
        run: |
          nohup uvicorn main:app --host 127.0.0.1 --port 8000 > uvicorn8000.log 2>&1 &
          nohup uvicorn main:app --host 127.0.0.1 --port 8001 > uvicorn8001.log 2>&1 &
          python - <<'PY'
          import socket, time

          def wait(port, retries=30):
              for _ in range(retries):
                  try:
                      s = socket.socket()
                      s.settimeout(1)
                      s.connect(('127.0.0.1', port))
                      s.close()
                      print('port', port, 'ready')
                      return True
                  except OSError:
                      time.sleep(1)
              raise SystemExit('port %s not ready' % port)

          wait(8000)
          wait(8001)
          PY

      - name: Run tests
        run: |
          PYTHONPATH=. pytest -q
28 changes: 28 additions & 0 deletions .github/workflows/deploy_migrations.yml
@@ -0,0 +1,28 @@
name: Deploy Migrations

on:
  workflow_dispatch:

jobs:
  migrate:
    runs-on: ubuntu-latest
    env:
      DATABASE_URL: ${{ secrets.DATABASE_URL }}
      JWT_SECRET_KEY: ${{ secrets.JWT_SECRET_KEY }}
    steps:
      - uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.11'

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
          pip install psycopg2-binary

      - name: Run Alembic migrations
        run: |
          alembic upgrade heads
90 changes: 90 additions & 0 deletions DEPLOYMENT_SWITCH_TO_POSTGRES.md
@@ -0,0 +1,90 @@
Switch staging to a managed Postgres
===================================

This document describes the minimal steps to move the Mijn API staging environment
from SQLite/local to a managed Postgres instance (e.g., Supabase, AWS RDS, Railway).

1) Provision a managed Postgres instance
- Create a database and user. Note the connection string, e.g.:
  postgresql://<user>:<password>@<host>:5432/<database>
  (use the `postgresql://` scheme; SQLAlchemy 1.4+ no longer accepts the legacy `postgres://` form)

2) Set production/staging environment variables
- Set `DATABASE_URL` to the Postgres connection string.
- Set `JWT_SECRET_KEY` to a strong secret (use your cloud provider's secret manager or GitHub Secrets).
- Optionally set `DATA_DIR` to a writable directory for server-local artifacts.

3) Configure connection pooling
- For server-based deployments with multiple workers, size the SQLAlchemy pool for your worker count, or put PgBouncer in front of Postgres.
- Tune the pool via env vars (`DB_POOL_SIZE`, `DB_MAX_OVERFLOW`, `DB_POOL_TIMEOUT`), and keep workers × (pool_size + max_overflow) below the server's `max_connections`.

- Example SQLAlchemy engine configuration (use these env vars to tune in production):

```python
from sqlalchemy import create_engine
import os

DATABASE_URL = os.environ["DATABASE_URL"]
engine = create_engine(
    DATABASE_URL,
    pool_size=int(os.environ.get("DB_POOL_SIZE", "5")),
    max_overflow=int(os.environ.get("DB_MAX_OVERFLOW", "10")),
    pool_timeout=int(os.environ.get("DB_POOL_TIMEOUT", "30")),
    pool_pre_ping=True,
    future=True,
)
```

- Example Gunicorn + Uvicorn command for production (4 workers as an example):

```bash
gunicorn -k uvicorn.workers.UvicornWorker main:app -w 4 --bind 0.0.0.0:8000
```
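With 4 Gunicorn workers and the pool defaults above, the theoretical peak is workers × (pool_size + max_overflow) connections. A quick sanity check (the 100 here is Postgres's default `max_connections`, an assumption about your instance):

```python
workers = 4          # -w 4 in the Gunicorn command
pool_size = 5        # DB_POOL_SIZE default
max_overflow = 10    # DB_MAX_OVERFLOW default

# Worst case: every worker exhausts its pool plus overflow at once
peak_connections = workers * (pool_size + max_overflow)
print(peak_connections)  # 60

# Assumed server limit (Postgres default); leave headroom for admin sessions
assert peak_connections < 100
```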

4) Run Alembic migrations
- Run `alembic upgrade heads` against the managed Postgres to create required tables.
- We provide a `deploy_migrations` workflow in `.github/workflows/deploy_migrations.yml` which can
be used from GitHub Actions (workflow_dispatch) and reads `DATABASE_URL` from secrets.

5) Verify
- Start the app and run smoke tests. Ensure `/health` (if you have one) and the auth flows behave as expected.
- Check that refresh tokens persist in the `refresh_tokens` table and rotation works.
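Rotation is easy to reason about as an invariant: issuing a replacement refresh token must revoke the old one in the same step. A minimal in-memory sketch of that invariant (the real implementation persists to the `refresh_tokens` table; all names here are illustrative):

```python
import secrets

# token -> revoked flag; stands in for the refresh_tokens table
store = {}

def issue():
    token = secrets.token_urlsafe(32)
    store[token] = False  # freshly issued, not revoked
    return token

def rotate(old_token):
    """Revoke old_token and hand out a replacement in one step."""
    if store.get(old_token) is not False:
        raise ValueError("unknown or already-revoked token")
    store[old_token] = True  # revoked; a second rotate() with it must fail
    return issue()

first = issue()
second = rotate(first)
print(store[first], store[second])  # True False
```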

Notes and recommendations
- Do NOT use file-based `users.json` for production state; migrate any existing users to the DB.
- Use TLS for all traffic and set `COOKIE_SECURE=1` in production.
- Add monitoring and alerts for DB connection exhaustion and failed migrations.

Environment variables summary (suggested additions):

```env
# Database connection and pooling
DATABASE_URL=postgresql://user:password@host:5432/mijn_api
DB_POOL_SIZE=5
DB_MAX_OVERFLOW=10
DB_POOL_TIMEOUT=30

# Security
JWT_SECRET_KEY=replace_with_strong_random
COOKIE_SECURE=1 # set in production
```
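`COOKIE_SECURE` arrives as a string, so the app needs some truthiness convention when reading it. A hedged sketch of one common pattern (the `env_flag` helper and the unset `COOKIE_HTTPONLY` variable are illustrative, not the project's actual code):

```python
import os

def env_flag(name: str, default: str = "0") -> bool:
    """Treat '1', 'true', and 'yes' (any case) as enabled."""
    return os.environ.get(name, default).strip().lower() in ("1", "true", "yes")

os.environ["COOKIE_SECURE"] = "1"
print(env_flag("COOKIE_SECURE"))         # True
print(env_flag("COOKIE_HTTPONLY", "0"))  # False (unset, so the default applies)
```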

Setting GitHub Secrets
----------------------

You should set `DATABASE_URL` and `JWT_SECRET_KEY` as repository secrets so CI and the `deploy_migrations` workflow can run safely.

Using the GitHub CLI:

```bash
# replace values before running
gh secret set DATABASE_URL --body "postgresql://user:password@host:5432/mijn_api"
gh secret set JWT_SECRET_KEY --body "$(openssl rand -hex 32)"
```

Or via the GitHub web UI:

1. Go to your repository → Settings → Secrets and variables → Actions.
2. Click "New repository secret" and add `DATABASE_URL` and `JWT_SECRET_KEY`.

After adding secrets, you can run the Actions → "CI: Deploy Migrations (placeholder)" workflow or trigger the `deploy_migrations` workflow directly.
5 changes: 4 additions & 1 deletion alembic/versions/0001_add_merchant_id.py
@@ -9,7 +9,10 @@

# revision identifiers, used by Alembic.
revision = '0001_add_merchant_id'
down_revision = None
# This migration expects an `invoices` table to exist. Ensure the invoices
# creation migration runs before this one by setting its down_revision to the
# invoices creation revision added on 2026-02-26.
down_revision = '20260226_create_invoices_table'
branch_labels = None
depends_on = None

30 changes: 30 additions & 0 deletions alembic/versions/20260226_add_refresh_tokens.py
@@ -0,0 +1,30 @@
"""add refresh_tokens table

Revision ID: 20260226_add_refresh_tokens
Revises: 20260131_create_core_tables
Create Date: 2026-02-26 00:00:00.000000
"""
from alembic import op
import sqlalchemy as sa

# revision identifiers, used by Alembic.
revision = '20260226_add_refresh_tokens'
down_revision = '20260131_create_core_tables'
branch_labels = None
depends_on = None


def upgrade():
    op.create_table(
        'refresh_tokens',
        sa.Column('id', sa.Integer, primary_key=True, index=True),
        sa.Column('user_id', sa.Integer, sa.ForeignKey('users.id', ondelete='CASCADE'), nullable=False, index=True),
        sa.Column('token', sa.String, nullable=False, unique=True, index=True),
        sa.Column('issued_at', sa.DateTime(timezone=True), server_default=sa.text('CURRENT_TIMESTAMP')),
        sa.Column('expires_at', sa.DateTime(timezone=True), nullable=True),
        sa.Column('revoked', sa.Boolean, nullable=False, server_default=sa.text('false')),
    )


def downgrade():
    op.drop_table('refresh_tokens')
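The shape of the table this migration creates can be smoke-tested locally. A stdlib-only sketch against in-memory SQLite (the real deployment targets Postgres; the foreign key to `users` is omitted here since no `users` table exists in the sketch):

```python
import sqlite3

# Trimmed mirror of the refresh_tokens schema from the migration above
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE refresh_tokens (
        id INTEGER PRIMARY KEY,
        user_id INTEGER NOT NULL,
        token TEXT NOT NULL UNIQUE,
        issued_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
        expires_at TIMESTAMP,
        revoked BOOLEAN NOT NULL DEFAULT 0
    )
""")
conn.execute("INSERT INTO refresh_tokens (user_id, token) VALUES (?, ?)", (1, "tok-abc"))
conn.execute("UPDATE refresh_tokens SET revoked = 1 WHERE token = ?", ("tok-abc",))
revoked, = conn.execute(
    "SELECT revoked FROM refresh_tokens WHERE token = ?", ("tok-abc",)
).fetchone()
print(revoked)  # 1
```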
32 changes: 32 additions & 0 deletions alembic/versions/20260226_create_invoices_table.py
@@ -0,0 +1,32 @@
"""create invoices table

Revision ID: 20260226_create_invoices_table
Revises:
Create Date: 2026-02-26 00:00:00.000000
"""
from alembic import op
import sqlalchemy as sa

# revision identifiers, used by Alembic.
revision = '20260226_create_invoices_table'
# Make this a base revision so it can be applied before older revisions that
# assumed `invoices` existed (we set dependent migrations to reference this).
down_revision = None
branch_labels = None
depends_on = None


def upgrade():
    op.create_table(
        'invoices',
        sa.Column('id', sa.Integer, primary_key=True, index=True),
        sa.Column('invoice_number', sa.String(), nullable=True),
        sa.Column('amount', sa.Numeric(12, 2), nullable=False, server_default='0'),
        sa.Column('status', sa.String(), nullable=False, server_default=sa.text("'draft'")),
        sa.Column('due_date', sa.DateTime(timezone=True), nullable=True),
        sa.Column('created_at', sa.DateTime(timezone=True), server_default=sa.text('CURRENT_TIMESTAMP')),
    )


def downgrade():
    op.drop_table('invoices')
21 changes: 20 additions & 1 deletion app/db/session.py
@@ -17,7 +17,26 @@
        future=True,
    )
else:
    engine = create_engine(DATABASE_URL, echo=True, future=True)
    # Configure pooling for non-SQLite databases. Pool sizing can be tuned
    # via environment variables (sensible defaults are provided).
    pool_size = int(os.environ.get("DB_POOL_SIZE", "5"))
    max_overflow = int(os.environ.get("DB_MAX_OVERFLOW", "10"))
    pool_timeout = int(os.environ.get("DB_POOL_TIMEOUT", "30"))
    pool_pre_ping = os.environ.get("DB_POOL_PRE_PING", "1").lower() in ("1", "true", "yes")

    # For file-backed SQLite use the plain create_engine. For real DBs use pooling.
    if DATABASE_URL.startswith("sqlite"):
        engine = create_engine(DATABASE_URL, echo=True, future=True)
    else:
        engine = create_engine(
            DATABASE_URL,
            echo=True,
            future=True,
            pool_size=pool_size,
            max_overflow=max_overflow,
            pool_timeout=pool_timeout,
            pool_pre_ping=pool_pre_ping,
        )

SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
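Callers typically consume `SessionLocal` through a request-scoped dependency. A sketch of that FastAPI-style pattern (`get_db` and the stand-in session class are hypothetical, shown so the snippet runs without a database):

```python
class FakeSession:
    """Stand-in for a SQLAlchemy Session; tracks whether close() ran."""
    closed = False

    def close(self):
        self.closed = True

def SessionLocal():  # stand-in for the sessionmaker(...) factory
    return FakeSession()

def get_db():
    """Dependency: yield a session for the request, always close it after."""
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()

gen = get_db()
db = next(gen)
gen.close()  # simulates end of request; runs the finally block
print(db.closed)  # True
```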

14 changes: 14 additions & 0 deletions app/models/refresh_token.py
@@ -0,0 +1,14 @@
from sqlalchemy import Column, Integer, String, DateTime, Boolean, ForeignKey
from sqlalchemy.sql import func
from app.db.session import Base


class RefreshToken(Base):
    __tablename__ = "refresh_tokens"

    id = Column(Integer, primary_key=True, index=True)
    user_id = Column(Integer, ForeignKey("users.id", ondelete="CASCADE"), nullable=False, index=True)
    token = Column(String, unique=True, nullable=False, index=True)
    # timezone=True matches the 20260226_add_refresh_tokens migration
    issued_at = Column(DateTime(timezone=True), server_default=func.now())
    expires_at = Column(DateTime(timezone=True), nullable=True)
    revoked = Column(Boolean, default=False, nullable=False)
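A token row like this is usable only while not revoked and not expired. A hedged helper sketch for that check (illustrative, not part of the model file; a NULL `expires_at` is read as "no expiry"):

```python
from datetime import datetime, timedelta, timezone

def is_active(revoked: bool, expires_at) -> bool:
    """True only if the token is not revoked and not past expires_at."""
    if revoked:
        return False
    if expires_at is None:  # NULL expires_at means the token never expires
        return True
    return datetime.now(timezone.utc) < expires_at

future = datetime.now(timezone.utc) + timedelta(hours=1)
past = datetime.now(timezone.utc) - timedelta(hours=1)
print(is_active(False, future), is_active(False, past), is_active(True, future))
# True False False
```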