This is a project of SF Civic Tech https://www.sfcivictech.org/
This is a hybrid Next.js + Python app that uses Next.js as the frontend and FastAPI as the API backend. It also uses PostgreSQL as the database.
You can work on this app entirely locally, entirely using Docker, or, if you prefer to focus on the front end or back end, a combination of the two.
We use GitHub Secrets to store sensitive environment variables. To be able to run the app with all features enabled, users will need write access to the repository to manually trigger the Generate .env File workflow, which creates and uploads an encrypted .env file as an artifact. You can ask a lead or fellow team member for both write access and the decryption passphrase as described here.
For additional environment variables just for your local machine, please create a .env.local file in the root folder and add the variables there. Note that these will override any variables in .env of the same name. An example of an additional environment variable you may need to configure is BROWSER for Storybook.
- If you are contributing from a fork, you do not need to follow the workflow below for core contributors.
- The CI pipeline for forked PRs will automatically use the provided `.env.example`.
- If you don't have `.env`, `.env.example` will be used automatically instead. This allows you to run the app, but with limited functionality (since the real secrets are not included).
Before starting work on the project, make sure to:
- Get write access to the repository. Accept the invitation after you have been invited.
- Get the decryption passphrase from other devs or in the Slack Engineering channel.
- Navigate to the workflow on the repository's Actions page.
- Click on the `Generate .env File` workflow.
- Trigger the workflow with the `Run workflow` button.
- Click on the job to navigate to the workflow run page.
- Download the artifact at the bottom of the page and unzip it.
- Decrypt the env file using OpenSSL. In the folder with the artifact, run `openssl aes-256-cbc -d -salt -pbkdf2 -k <YOUR_PASSPHRASE> -in .env.enc -out env` in the terminal. This creates a decrypted file named `env`.
- Place the decrypted file in the root folder of the project and rename it to `.env`.
The file is organized into four main sections:
- Postgres Environment Variables. This section contains the credentials to connect to the PostgreSQL database, such as the username, password, and the name of the database.
- Backend Environment Variables. These variables are used by the backend (i.e., FastAPI) to configure its behavior and to connect to the database and the frontend application.
- Frontend Environment Variables. This section contains the base URL for API calls to the backend, the `NODE_ENV` variable that determines in which environment the Node.js application is running, and the token needed to access Mapbox APIs.
- Monitoring and Analytics Variables. This section contains variables for Sentry and PostHog.
⚠️ If you add a new variable to the `Settings` class in the backend, you must also add a dummy value for it in `.env.example`. Otherwise, PRs from forks will fail, since the CI depends on `.env.example` when secrets are unavailable.
This project uses Docker and Docker Compose to run the application, which includes the frontend, backend, and postgres database.
Docker is configured so that any changes you make should trigger recompilation by the appropriate service. This includes changes to pyproject.toml, .env, and package.json. If a change is not being picked up, you may need to restart the individual service, but you should not have to rebuild everything.
- Docker: Make sure Docker is installed and running on your machine. Get Docker.
- Docker Compose: Ensure Docker Compose is installed (usually included with Docker Desktop).
- Build images if not yet created: From the project root directory (where the docker-compose.yml file is located), run:
  `docker compose build`
- Run Docker Compose: From the project root directory (where the docker-compose.yml file is located), run:
  `docker compose up` or `docker compose up -d`
  This will:
  - Start all services defined in the docker-compose.yml file (e.g., frontend, backend, database)
  - Start Postgres
- Access the Application:
- The app is running at http://localhost:3000. Note that this may conflict with your local dev server. If so, one will be running on port 3000 and the other on port 3001.
- The API is accessible at http://localhost:8000.
- The Postgres instance with the PostGIS extension is accessible at localhost:5432.
- To interact with a running container, use `docker exec [OPTIONS] CONTAINER COMMAND [ARG...]`
  - To run a database query, run `docker exec -it my_postgis_db psql -U postgres -d qsdatabase`
  - To execute a Python script, run `docker exec -it datasci-earthquake-backend-1 python <path/to/script>`
Note: If you modify the `Dockerfile` or other build contexts (e.g., `.env`, `package.json`), you should run `docker compose up -d --build` to rebuild the images and apply the changes! You do not need to restart `npm run fastapi-dev` when doing so.
To stop and shut down the application:
- Stop Docker Compose: Type `docker compose stop`.
- Bring Down the Containers: If you want to stop and remove the containers completely, run `docker compose down`. This will:
  - Stop all services.
  - Remove the containers, but not delete volumes (so the database data will persist).

Note: If you want to start with a clean slate and remove all data inside the containers, run `docker compose down -v`.
Replace `<service_name>` with `datasci-earthquake-frontend`, `datasci-earthquake-backend`, `datasci-earthquake-db`, or `datasci-earthquake-db_test`:
docker compose build <service_name>
Replace `<service_name>` with `frontend`, `backend`, `db`, or `db_test`:
docker compose up -d <service_name>
docker compose stop <service_name>
- You can find the list of all containers (their IDs, names, status, and ports) by running `docker ps -a`.
- If a container failed, start troubleshooting by inspecting its logs with `docker logs <container-id>`.
- Containers fail when their build contexts were modified but the containers weren't rebuilt. If logs show that some dependencies are missing, this can likely be solved by rebuilding the containers.
- Many problems are caused by disk usage issues. Run `docker system df` to show disk usage. Use pruning commands such as `docker system prune` to clean up unused resources.
- `Error response from daemon: network not found` occurs when Docker tries to use a network that has already been deleted or is dangling (not associated with any container). Prune unused networks to resolve this issue: `docker network prune -f`. If this doesn't help, run `docker system prune`.
- First update code and/or rebuild any containers as necessary. Otherwise you may get false results.
- Run the containers: `docker compose up -d`
- Run pytest to test the Docker container: `docker exec -it datasci-earthquake-backend-1 pytest backend` or `docker compose exec backend pytest backend`
- To get code coverage, run `docker exec -w /backend datasci-earthquake-backend-1 pytest --cov=backend`
Docker development is recommended because the containerized configuration is more consistent and reliable across machines.
uv: Install the uv package manager:

On macOS/Linux: `curl -LsSf https://astral.sh/uv/install.sh | sh`

On Windows: `powershell -c "irm https://astral.sh/uv/install.ps1 | iex"`

Alternative (all platforms): `pip install uv`

PostgreSQL:
- Install Java 1.8 or later if your PostgreSQL installer requires it (e.g., the EDB installer).
- Install PostgreSQL locally with the PostGIS extension; select the PostGIS extension when prompted by the installer.
- If PostgreSQL was already installed, add the PostGIS extension if it is not already included.
Note: The backend dependencies are installed automatically when you run the development server (npm run fastapi-dev). If you need to run backend commands manually (e.g., running pytest), run:
`(cd backend && uv sync --extra dev)`

To manually activate the virtual environment from the project root, run:

On macOS/Linux: `source backend/.venv/bin/activate`

On Windows: `backend\.venv\Scripts\activate`
- Set the nvm version:
  `nvm use 18`
- Install the front end dependencies:
  `npm install` (or `yarn`, or `pnpm install`)
- Run the development server:
  `npm run dev` (or `yarn dev`, or `pnpm dev`)
Open http://localhost:3000 with your browser to see the result.
The FastAPI server will be running on http://127.0.0.1:8000 – feel free to change the port in package.json (you'll also need to update it in next.config.js).
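If you change the backend port, the frontend proxy configuration has to point at the same address. As a rough sketch only (the exact rewrite rules in this project's next.config.js may differ; the `/api` path pattern below is an assumption, not the repo's confirmed config):

```js
// next.config.js (sketch; confirm against the actual file in this repo)
/** @type {import('next').NextConfig} */
const nextConfig = {
  async rewrites() {
    return [
      {
        // Assumed pattern: proxy frontend /api calls to the FastAPI dev server
        source: "/api/:path*",
        destination: "http://127.0.0.1:8000/api/:path*", // update 8000 if you change the port in package.json
      },
    ];
  },
};

module.exports = nextConfig;
```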
Please refer to Troubleshooting front end.
If you will be working exclusively on the front end or back end, you can run the Docker containers for the part of the stack you won't be doing development on, and then run the rest of the stack locally. A handful of NPM scripts have been provided to make this a bit easier (npm run dev-* and npm run docker-*, described below).
After going through the steps below for either front end-focused or back end-focused development, you should be able to access the servers at the following URLs:
- The app is running at http://localhost:3000, which you can open in your browser.
- The API is accessible at http://localhost:8000.
- Docker: Make sure Docker is installed and running on your machine. Get Docker.
- Docker Compose: Ensure Docker Compose is installed (usually included with Docker Desktop).
For front end-focused development, first run npm install, and then you can run npm run dev-front, which will:
- build and restart your backend (and database) Docker containers
- install dependencies
- start up your Next.js development server locally (on port 3000 by default)
- start up Storybook locally (on port 6006 in Chrome by default)
If you need to rebuild the containers, run npm run docker-back.
If you would prefer to skip starting Storybook, run `npm run dev-front-no-storybook` instead.
To start up our Storybook component workshop, run npm run storybook. This will:
- start up an instance of Storybook in Google Chrome (on port 6006 by default)
If you would like to use a different browser than Chrome, then edit your .env.local file to include this line where "firefox" is the name of the browser you are targeting:
BROWSER="firefox"For context, when we work on components, we also write Storybook Stories for them so that we can more easily create and document each component in isolation. If you create or edit a component or part of our theme, you will need to create one or more Stories for it if they don't yet exist, or update them if they do.
For UI components, styling, and theming, most initial setup is done via Chakra UI v3 within theme.ts. Docs are located at https://chakra-ui.com/. For default theme values, which are automatically used by Chakra along with the overrides in theme.ts, you can refer to https://github.com/chakra-ui/chakra-ui/tree/main/packages/react/src/theme (note that sizing is a superset of spacing). Be aware that there are a lot of out-of-date resources and articles online for Chakra UI v2 that you should ignore in favor of v3.
While doing development, note that style prop autocompletion relies on theme typings generated from `styles/theme.ts`, which is where SafeHome's Chakra theme overrides are defined. The overrides are merged with Chakra's default theme to give us the final SafeHome theme. To see a full list of theme values and tokens, check your browser console.
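For orientation, the override/merge pattern in Chakra UI v3 generally looks like the sketch below; the token name and value are made up for illustration, and the real overrides live in `styles/theme.ts`:

```ts
// Sketch of the Chakra v3 override/merge pattern (not the project's actual tokens)
import { createSystem, defaultConfig, defineConfig } from "@chakra-ui/react";

const overrides = defineConfig({
  theme: {
    tokens: {
      colors: {
        // hypothetical token for illustration only
        brand: { value: "#2B6CB0" },
      },
    },
  },
});

// Chakra merges the overrides into its default theme to produce the final system
export const system = createSystem(defaultConfig, overrides);
```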
It's not obvious how to see a theme's tokens and values for reference during development. To make this easier and improve the experience, there are two ways you can currently view the theme:
- Browser console log: While running `npm run dev-front`, on page load, we log the theme as an object to the browser console.
- The `theme` folder in your local filesystem: If you run `npm run gen:tokens`, the theme will be output to a temporary `theme` folder. WARNING: the `gen:tokens` npm script that creates the `theme` folder does not currently work as expected; see the related comment in package.json.
For autocompletion of theme tokens in JSX, make sure you are running npm run dev-front. With this npm script running, theme typings will be regenerated whenever theme.ts is modified.
There are plans to introduce a third way of viewing our theme: via Storybook!
Chakra prefers strings as prop values as opposed to numbers.
Instead of:
gap={1.5}
Use:
gap="1.5"
Shorthand notation (e.g., "{spacing.4} {spacing.2}"; see Chakra's token reference syntax) doesn't quite work, unfortunately. Although it works in development mode, when strictTokens is set to "true" in theme.ts it trips up the theme typings: red squiggly lines appear in the editor (e.g., Visual Studio Code) and the TypeScript build will fail. The alternative is not to use shorthand at all and to use individual props with single values instead:
Instead of this shorthand:
// works in dev mode, but build will fail
p="{spacing.8} {spacing.16} {spacing.8} {spacing.16}"
Use individual props and single values:
// works
py="8"
px="16"
For responsive props, instead of the aforementioned token reference syntax:
// works in dev mode, but build will fail
p={{
base: "{spacing.6}",
md: "{spacing.7}",
lg: "{spacing.7} {spacing.8} {spacing.7} {spacing.8}",
xl: "{spacing.7} {spacing.9} {spacing.7} {spacing.9]",
}}
And instead of trying to convert the shorthand string into an array (will NOT work … only the last value is utilized):
// doesn't work at all (only last value of array is used)
p={{
base: "6",
md: "7",
lg: ["7", "8", "7", "8"],
xl: ["7", "9", "7", "9"],
}}
Use individual props combined with prop-based object syntax:
// works
pt={{ base: "6", md: "7", lg: "7", xl: "7" }}
pr={{ base: "6", md: "7", lg: "8", xl: "9" }}
pb={{ base: "6", md: "7", lg: "7", xl: "7" }}
pl={{ base: "6", md: "7", lg: "8", xl: "9" }}
Which can be simplified to:
// works
pt={{ base: "6", md: "7" }}
pr={{ base: "6", md: "7", lg: "8", xl: "9" }}
pb={{ base: "6", md: "7" }}
pl={{ base: "6", md: "7", lg: "8", xl: "9" }}
And further simplified to:
// works
py={{ base: "6", md: "7" }}
px={{ base: "6", md: "7", lg: "8", xl: "9" }}
The package-lock.json file plays a crucial role in ensuring that the exact versions of dependencies installed in node_modules remain consistent across different environments.
Deleting this file can lead to unintended side effects, especially when both package-lock.json and the node_modules folder are removed and then npm install is run. In such cases, a new package-lock.json will be generated based solely on package.json, which might diverge significantly from the committed lock file and cause unexpected behavior or bugs.
In the event of merge conflicts involving package-lock.json, please do not manually fix or delete the file. Instead, run npm install to automatically resolve and repair the lock file.
If you are experimenting locally, you may delete package-lock.json, but make sure not to commit the regenerated file to the repository to avoid affecting others.
Because package-lock.json changes are sometimes overlooked during code reviews, it’s important to pay close attention and avoid accidentally committing problematic versions.
If you face any issues related to package-lock.json, please raise them with the team before making changes.
Thank you for helping keep the project stable!
Suspense boundary missing around useSearchParams(), causing entire page to deopt into client-side rendering (CSR)
You may run into the following Next.js error when you run `npm run build`¹:

useSearchParams() should be wrapped in a suspense boundary at page "/<PAGE_NAME>". Read more: https://nextjs.org/docs/messages/missing-suspense-with-csr-bailout

> [!NOTE]
> `<PAGE_NAME>` refers to a `page.tsx` file in the app, either the root page at `app/page.tsx` or a non-root page at, for example, `app/<PAGE_NAME>/page.tsx`.
The fix is to wrap any component that references useSearchParams() with React's <Suspense>. Read further to understand why.
This error message can be highly misleading because it refers directly to `<PAGE_NAME>` even though it's more likely that its `page.tsx` file contains zero usages of `useSearchParams()`². This can make debugging difficult.
The error doesn't make a distinction between <PAGE_NAME> and its descendant components, unfortunately, which is what causes the confusion. If you can't find usages of useSearchParams() directly in page.tsx, then you can search for usages in its descendant components instead. Once you find a component with a usage, you can fix the error by wrapping any references to the component with <Suspense> (and, ideally, providing a fallback) and npm run build again. More details can be found at https://nextjs.org/docs/messages/missing-suspense-with-csr-bailout.
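As a minimal sketch of the fix (the page path and component name here are hypothetical, not from this repo):

```tsx
// app/<PAGE_NAME>/page.tsx -- hypothetical example
import { Suspense } from "react";
import SearchResults from "./SearchResults"; // a descendant that calls useSearchParams()

export default function Page() {
  return (
    // The boundary isolates the useSearchParams() consumer so the rest of the
    // page can still be prerendered instead of bailing out to client-side rendering.
    <Suspense fallback={<p>Loading…</p>}>
      <SearchResults />
    </Suspense>
  );
}
```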
There may be some instances where a usage of useSearchParams() does not appear even in a descendant component, but rather in, say, a provider. And, anecdotally, there may be other hooks that trigger a similar error message. Extensive discussion about several variants of this error message and workarounds can be found in this Github Issue: vercel/next.js#61654.
Error occurred prerendering page "/<PAGE_NAME>". Read more: https://nextjs.org/docs/messages/prerender-error

If there are issues with layers showing data, you can add the following snippet in map.tsx under the other addLayer() calls to see outlines of each MultiPolygon:
map.addLayer({
  id: "tsunamiLayerOutline",
  source: "tsunami",
  type: "line",
  paint: {
    "line-color": "rgba(255, 0, 0, 1)",
    "line-width": 4,
  },
});

> [!CAUTION]
> This section is currently undergoing review and correction. Do not use this method. Instead, please develop using Docker.
- Docker: Make sure Docker is installed and running on your machine. Get Docker.
- Docker Compose: Ensure Docker Compose is installed (usually included with Docker Desktop).
- PostgreSQL: Ensure PostgreSQL is installed to run the database locally (instead of in a Docker container).
For back end-focused development, you can run npm run dev-back, which will:
- install dependencies and start up your FastAPI server locally
- build and restart your frontend Docker containers
If you need to rebuild the container, run npm run docker-front.
NOTE: You will need to run PostgreSQL locally or in a Docker container as well.
This repository uses Black for Python and ESLint for JS/TS to enforce code style standards. We also use MyPy to perform static type checking on Python code. The pre-commit hook runs the formatters automatically before each commit, helping maintain code consistency across the project. It runs only on staged files. If you have edited unstaged files in your repository and want to make them comply with the CI pipeline, run `black .` and `mypy .` for Python code and `npm run lint` for JavaScript code.
- Pre-commit is included in the project dependencies and will be installed with `uv sync --extra dev`.
- Run the following command to install the pre-commit hooks defined in the configuration file `.pre-commit-config.yaml`:
  `pre-commit install`
  This command sets up pre-commit to automatically run ESLint, Black, and MyPy before each commit.
- Running Black Automatically: After setup, every time you attempt to commit code, Black will check the staged files and apply formatting if necessary. If files are reformatted, the commit will be stopped, and you'll need to review the changes before committing again.
- Bypassing the Hook: If you want to skip the pre-commit hook for a specific commit, use the --no-verify flag with your commit command: `git commit -m "your commit message" --no-verify`.
  Note: The `--no-verify` flag is helpful in cases where you need to make a quick commit without running the pre-commit checks, but it should be used sparingly to maintain code quality. The CI pipeline will fail during the pull request action if the code is not formatted.
- Running Pre-commit on All Files: If you want to format all files in the repository, use:
  `pre-commit run --all-files`
If you have changed the models in backend/api/models, then you must migrate the database from its current models to the new ones with the following two commands:
docker exec -it BACKEND_CONTAINER_NAME bash -c "cd backend && alembic revision --autogenerate -m 'MIGRATION NAME'"
and
docker exec -it BACKEND_CONTAINER_NAME bash -c "cd backend && alembic upgrade head"

where BACKEND_CONTAINER_NAME may change with the project and perhaps its deployment but, for local Docker development, is datasci-earthquake-backend-1, and MIGRATION NAME is your choice and should describe the migration.

The first command generates a migration script in backend/alembic/versions, and the second command runs it.
Developers should only branch from develop, pull updates to develop, and ensure their work is merged into develop via Pull Requests. main is the safe production branch.
When opening a pull request, please:
- aim the pull request at the `develop` branch rather than `main`
- add reviewers
- use draft/WIP if it turns out to be not ready for review
- link the relevant issue so it is automatically closed when the PR is merged
- run `npm run build` locally if you have changed any frontend code or dependencies to catch potential build errors.
Ideally, we maintain a readable, clean, and linear commit history. To that end, when merging a pull request, please use Squash and Merge¹.
¹ You can optionally use Rebase and Merge if and only if the following conditions are met on your branch:
- commits are atomic, no WIP
- there is more than one commit
- ideally, there are no more than 3 commits
- commit messages are useful
NOTE: An interactive rebase (e.g., `git rebase -i`) can help you rewrite your branch's local history to meet the criteria above.
New issues can be created in the Issues tab using the New issue button.
When creating an issue, please:
- use the correct template (default, feature request, bug, etc.)
- add the SafeHome Project as a project to the issue. If this is your first issue, you will likely need to request access to be added to the project and have write access. You can ask in Slack.
- add the relevant label (front end, back end, etc.) so it can easily be filtered by team
To learn more about Next.js, take a look at the following resources:
- Next.js Documentation - learn about Next.js features and API.
- Learn Next.js - an interactive Next.js tutorial.
- FastAPI Documentation - learn about FastAPI features and API.
You can check out the Next.js GitHub repository.