git-pages is a static site server for use with Git forges (i.e. a GitHub Pages replacement). It is written with efficiency in mind, scaling horizontally to any number of machines and serving sites up to multiple gigabytes in size, while being equally suitable for small single-user deployments.
It is implemented in Go and has no other mandatory dependencies, although it is designed to be used together with the Caddy server for TLS termination. Site data may be stored on the filesystem or in an Amazon S3 compatible object store.
The included Docker container provides everything needed to deploy a Pages service, including zero-configuration on-demand provisioning of TLS certificates from Let's Encrypt, and runs on any commodity cloud infrastructure.
> **Tip:** If you want to publish a site using git-pages to an existing service like Codeberg Pages or Grebedoc, consider using the CLI tool or the Forgejo Action.
You will need Go 1.25 or newer. Run:
```console
$ mkdir -p data
$ cp conf/config.example.toml config.toml
$ PAGES_INSECURE=1 go run .
```

These commands start an HTTP server on 0.0.0.0:3000 and use the `data` directory for persistence. Authentication is disabled via `PAGES_INSECURE=1` to avoid the need to set up a DNS server as well; never enable `PAGES_INSECURE=1` in production.
To publish a site, run the following commands (consider also using the git-pages-cli tool):
```console
$ curl http://localhost:3000/ -X PUT --data https://codeberg.org/git-pages/git-pages.git
b70644b523c4aaf4efd206a588087a1d406cb047
```

The `pages` branch of the repository is now available at http://localhost:3000/!
The first-party container supports running git-pages either standalone or together with Caddy.
To run git-pages standalone and use the filesystem to store site data:
```console
$ docker run -u $(id -u):$(id -g) --mount type=bind,src=$(pwd)/data,dst=/app/data -p 3000:3000 codeberg.org/git-pages/git-pages:latest
```

To run git-pages with Caddy and use an S3-compatible endpoint to store site data and TLS key material:

```console
$ docker run -e PAGES_STORAGE_TYPE -e PAGES_STORAGE_S3_ENDPOINT -e PAGES_STORAGE_S3_REGION -e PAGES_STORAGE_S3_ACCESS_KEY_ID -e PAGES_STORAGE_S3_SECRET_ACCESS_KEY -e PAGES_STORAGE_S3_BUCKET -e ACME_EMAIL -p 80:80 -p 443:443 codeberg.org/git-pages/git-pages:latest supervisord
```

- In response to a `GET` or `HEAD` request, the server selects an appropriate site and responds with files from it. A site is a combination of the hostname and (optionally) the project name.
  - The site is selected as follows:
    - If the URL matches `https://<hostname>/<project-name>/...` and a site was published at `<project-name>`, this project-specific site is selected.
    - If the URL matches `https://<hostname>/...` and the previous rule did not apply, the index site is selected.
  - Site URLs that have a path starting with `.git-pages/...` are reserved for git-pages itself.
    - The `.git-pages/health` URL returns `ok` with the `Last-Modified:` header set to the manifest modification time.
    - The `.git-pages/manifest.json` URL returns a ProtoJSON representation of the deployed site manifest with the `Last-Modified:` header set to the manifest modification time. It enumerates site structure, redirect rules, and errors that were not severe enough to abort publishing. Note that the manifest JSON format is not stable and will change without notice.
    - The `.git-pages/archive.tar` URL returns a tar archive of all site contents, including `_redirects` and `_headers` files (reconstructed from the manifest), with the `Last-Modified:` header set to the manifest modification time. Compression can be enabled using the `Accept-Encoding:` HTTP header (only).
- In response to a
`PUT` or `POST` request, the server updates a site with new content. The URL of the request must be the root URL of the site that is being published.
  - If the `PUT` method receives an `application/x-www-form-urlencoded` body, it contains a repository URL to be shallowly cloned. The `Branch` header contains the branch to be checked out; the `pages` branch is used if the header is absent.
  - If the `PUT` method receives an `application/x-tar`, `application/x-tar+gzip`, `application/x-tar+zstd`, or `application/zip` body, it contains an archive to be extracted.
  - The `POST` method requires an `application/json` body containing a Forgejo/Gitea/Gogs/GitHub webhook event payload. Requests where the `ref` key contains anything other than `refs/heads/pages` are ignored, and only the `pages` branch is used. The `repository.clone_url` key contains a repository URL to be shallowly cloned.
  - If the received content is empty, the server performs the same action as `DELETE`.
- In response to a
`PATCH` request, the server partially updates a site with new content. The URL of the request must be the root URL of the site that is being published.
  - The request must have an `application/x-tar`, `application/x-tar+gzip`, or `application/x-tar+zstd` body, whose contents are merged with the existing site contents as follows:
    - A character device entry with major 0 and minor 0 is treated as a "whiteout marker" (following unionfs): it causes any existing file or directory with the same name to be deleted.
    - A directory entry replaces any existing file or directory with the same name (if any), recursively removing the old contents.
    - A file or symlink entry replaces any existing file or directory with the same name (if any).
    - If there is no `Create-Parents:` header, or a `Create-Parents: no` header is present, the parent path of an entry must exist and refer to a directory.
    - If a `Create-Parents: yes` header is present, any missing segments in the parent path of an entry will be created (like `mkdir -p`). Any existing segments must refer to directories.
  - The request must have an `Atomic: yes` or `Atomic: no` header. Not every backend configuration makes it possible to perform atomic compare-and-swap operations; on backends without atomic CAS support, `Atomic: yes` requests will fail, while `Atomic: no` requests will provide a best-effort approximation.
  - If a `PATCH` request loses a race against another content update request, it may return `409 Conflict`. This is true regardless of the `Atomic:` header value. Whenever this happens, resubmit the request as-is.
  - If the site has no contents after the update is applied, the server performs the same action as `DELETE`.
- In response to a
`DELETE` request, the server unpublishes a site. The URL of the request must be the root URL of the site that is being unpublished. Site data remains stored for an indeterminate period of time, but becomes completely inaccessible.
- If a `Dry-Run: yes` header is provided with a `PUT`, `PATCH`, `DELETE`, or `POST` request, only the authorization checks are run; no destructive updates are made.
- All updates to site content are atomic (subject to the consistency guarantees of the storage backend). That is, there is an instantaneous moment during an update before which the server will return the old content and after which it will return the new content.
- Files with a certain name, when placed in the root of a site, have special functions:
  - A Netlify `_redirects` file can be used to specify HTTP redirect and rewrite rules. The git-pages implementation currently does not support placeholders, query parameters, or conditions, and may differ from Netlify in other minor ways. If you find that a supported `_redirects` file feature does not work the same as on Netlify, please file an issue. (Note that git-pages does not perform URL normalization; `/foo` and `/foo/` are not the same, unlike with Netlify.)
  - A Netlify `_headers` file can be used to specify custom HTTP response headers (if allowlisted by configuration). In particular, this is useful to enable CORS requests. The git-pages implementation may differ from Netlify in minor ways; if you find that a `_headers` file feature does not work the same as on Netlify, please file an issue.
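As an illustration, a minimal `_redirects` and `_headers` pair might look like the following (paths, destinations, and header values are hypothetical; the examples stay within the static subset described above, with no placeholders or conditions):

```
# _redirects: <source> <destination> [status]
/home       /            301
/old-docs   /docs.html   302
```

```
# _headers: a URL path followed by indented "Name: value" lines
/index.html
  Access-Control-Allow-Origin: *
```

Remember that custom headers only take effect if the corresponding header names are allowlisted in the server configuration.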
- Incremental updates can be made using
`PUT` or `PATCH` requests where the body contains an archive (both tar and zip are supported).
  - Any archive entry that is a symlink to `/git/pages/<git-sha256>` is replaced with an existing manifest entry for the same site whose git blob hash matches `<git-sha256>`. If there is no existing manifest entry with the specified git hash, the update fails with a `422 Unprocessable Entity`.
  - For this error response only, if the negotiated content type is `application/vnd.git-pages.unresolved`, the response will contain the `<git-sha256>` of each unresolved reference, one per line.
- Support for SHA-256 Git hashes is limited by go-git; once go-git implements the required features, git-pages will automatically gain support for SHA-256 Git hashes. Note that shallow clones (used by git-pages to conserve bandwidth if available) aren't supported yet in the Git protocol as of 2025.
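To form the symlink targets for such an incremental update, a client needs the Git blob hash of each unchanged file. Assuming `<git-sha256>` is the standard Git SHA-256 object ID (the SHA-256 digest of `"blob <size>\0"` followed by the file contents), it can be computed like this:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// gitBlobSHA256 computes the Git blob object ID for the SHA-256 object
// format: SHA256("blob " + decimal length + NUL + contents), hex-encoded.
func gitBlobSHA256(contents []byte) string {
	h := sha256.New()
	fmt.Fprintf(h, "blob %d\x00", len(contents))
	h.Write(contents)
	return fmt.Sprintf("%x", h.Sum(nil))
}

func main() {
	// An incremental update would add a tar symlink entry whose link
	// target is this path, letting the server reuse the deployed blob.
	fmt.Println("/git/pages/" + gitBlobSHA256([]byte("hello\n")))
}
```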
DNS is the primary authorization method, using either TXT records or wildcard matching. In certain cases, git forge authorization is used in addition to DNS.
The authorization flow for content updates (`PUT`, `PATCH`, `DELETE`, and `POST` requests) proceeds sequentially in the following order, with the first applicable rule taking precedence:
- Development Mode: If the environment variable `PAGES_INSECURE` is set to a truthy value like `1`, the request is authorized.
- DNS Challenge: If the method is `PUT`, `PATCH`, `DELETE`, or `POST`, and a well-formed `Authorization:` header is provided containing a `<token>`, and a TXT record lookup at `_git-pages-challenge.<host>` returns a record whose concatenated value equals `SHA256("<host> <token>")`, the request is authorized.
  - `Pages` scheme: the request includes an `Authorization: Pages <token>` header.
  - `Basic` scheme: the request includes an `Authorization: Basic <basic>` header, where `<basic>` is equal to `Base64("Pages:<token>")`. (Useful for non-Forgejo forges.)
- DNS Allowlist: If the method is `PUT` or `POST`, and the request URL is `scheme://<user>.<host>/`, and a TXT record lookup at `_git-pages-repository.<host>` returns a set of well-formed absolute URLs, and (for `PUT` requests) the body contains a repository URL, and the requested clone URL is contained in this set of URLs, the request is authorized.
- Wildcard Match (content): If the method is `PUT` or `POST`, and a `[[wildcard]]` configuration section exists where the suffix of the hostname (compared label-wise) is equal to `[[wildcard]].domain`, and (for `PUT` requests) the body contains a repository URL, and the requested clone URL is a matching clone URL, the request is authorized.
  - Index repository: If the request URL is `scheme://<user>.<host>/`, a matching clone URL is computed by templating `[[wildcard]].clone-url` with `<user>` and `<project>`, where `<project>` is computed by templating each element of `[[wildcard]].index-repos` with `<user>`, and `[[wildcard]]` is the section where the match occurred.
  - Project repository: If the request URL is `scheme://<user>.<host>/<project>/`, a matching clone URL is computed by templating `[[wildcard]].clone-url` with `<user>` and `<project>`, and `[[wildcard]]` is the section where the match occurred.
- Forge Authorization: If the method is `PUT` or `PATCH`, and the body contains an archive, and a `[[wildcard]]` configuration section exists where the suffix of the hostname (compared label-wise) is equal to `[[wildcard]].domain`, and `[[wildcard]].authorization` is non-empty, and the request includes a `Forge-Authorization:` header, and that header (when forwarded as `Authorization:`) grants push permissions to a repository at the matching clone URL (as defined above), as determined by an API call to the forge, the request is authorized. (This enables publishing a site for a private repository.)
- Default Deny: Otherwise, the request is not authorized.
The authorization flow for metadata retrieval (`GET` requests with site paths starting with `.git-pages/`) proceeds sequentially in the following order, with the first applicable rule taking precedence:
- Development Mode: Same as for content updates.
- DNS Challenge: Same as for content updates.
- Wildcard Match (metadata): If a `[[wildcard]]` configuration section exists where the suffix of the hostname (compared label-wise) is equal to `[[wildcard]].domain`, the request is authorized.
- Default Deny: Otherwise, the request is not authorized.
git-pages has robust observability features built in:
- The metrics endpoint (bound to `:3002` by default) returns Go, pages server, and storage backend metrics in the Prometheus format.
- Optional Sentry integration allows greater visibility into the application. The `ENVIRONMENT` environment variable configures the deploy environment name (`development` by default).
  - If the `SENTRY_DSN` environment variable is set, panics are reported to Sentry.
  - If the `SENTRY_DSN` and `SENTRY_LOGS=1` environment variables are set, logs are uploaded to Sentry.
  - If the `SENTRY_DSN` and `SENTRY_TRACING=1` environment variables are set, traces are uploaded to Sentry.
- Optional syslog integration allows transmitting application logs to a syslog daemon. When present, the `SYSLOG_ADDR` environment variable enables the integration, and its value is used to configure the syslog destination. The value must follow the format `family/address` and is usually one of the following:
  - a Unix datagram socket: `unixgram//dev/log`;
  - TLS over TCP: `tcp+tls/host:port`;
  - plain TCP: `tcp/host:port`;
  - UDP: `udp/host:port`.
An object store (filesystem, S3, ...) is used as the sole mechanism for state storage. The object store is expected to provide atomic operations; where necessary, the backend adapter ensures this.
- Repositories themselves never reach the object store; they are cloned to an ephemeral location and discarded immediately after their contents is extracted.
- The `blob/` prefix contains file data organized by the hash of its contents (indiscriminately of the repository it belongs to).
  - Very small files are stored inline in the manifest.
- The `site/` prefix contains site manifests organized by domain and project name (e.g. `site/example.org/myproject` or `site/example.org/.index`).
  - The manifest is a Protobuf object containing a flat mapping of paths to entries. An entry comprises a type (file, directory, symlink, etc.) and data, which may be stored inline or refer to a blob.
- A small amount of internal metadata within a manifest allows attributing deployments to their source and computing quotas.
- Additionally, the object store contains staged manifests, representing an in-progress update operation.
- An update first creates a staged manifest, then uploads blobs, then replaces the deployed manifest with the staged one. This avoids TOCTTOU race conditions during garbage collection.
- Stable marshalling allows addressing staged manifests by the hash of their contents.
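The update sequence can be sketched against a toy in-memory store. The `store` type, `hashKey` helper, `publish` function, and the exact key layout below are illustrative assumptions, not the real implementation (which stores Protobuf manifests and performs the final swap as an atomic operation on the backend):

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"sort"
)

// store stands in for the object store; keys mimic the blob/, site/,
// and staged-manifest layout described above.
type store map[string][]byte

// hashKey content-addresses data under a prefix.
func hashKey(prefix string, data []byte) string {
	return fmt.Sprintf("%s/%x", prefix, sha256.Sum256(data))
}

// publish sketches the v2 update order: stage the manifest first, then
// upload blobs, then swap the deployed manifest into place.
func publish(s store, site string, files map[string][]byte) {
	// Marshal a toy manifest deterministically (stable marshalling),
	// so staged manifests can be addressed by their content hash.
	paths := make([]string, 0, len(files))
	for p := range files {
		paths = append(paths, p)
	}
	sort.Strings(paths)
	var manifest []byte
	for _, p := range paths {
		manifest = append(manifest, p+"\x00"+hashKey("blob", files[p])+"\n"...)
	}

	staged := hashKey("staged", manifest)
	s[staged] = manifest // 1. create the staged manifest
	for _, p := range paths {
		s[hashKey("blob", files[p])] = files[p] // 2. upload blobs
	}
	s["site/"+site] = manifest // 3. swap in (atomic on the real backend)
	delete(s, staged)          // simplification: GC handles this in reality
}

func main() {
	s := store{}
	publish(s, "example.org/.index",
		map[string][]byte{"index.html": []byte("<h1>hi</h1>")})
	fmt.Println(len(s))
}
```

Staging before uploading blobs means the garbage collector can always tell which blobs an in-progress update still needs, avoiding the TOCTTOU race mentioned above.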
This approach, unlike the v1 one, cannot be easily introspected with normal Unix commands, but is very friendly to S3-style object storage services, as it does not rely on operations these services cannot support (subtree rename, directory stat, symlink/readlink).
The S3 backend, intended for (relatively) high latency connections, caches both manifests and blobs in memory. Since a manifest is necessary and sufficient to return 304 Not Modified responses for a matching ETag, this drastically reduces navigation latency. Blobs are content-addressed and are an obvious target for a last level cache.
This was the original architecture and it is no longer used. Migration to v2 was last available in commit 7e9cd17b.
Filesystem is used as the sole mechanism for state storage.
- The `data/tree/` directory contains working trees organized by commit hash (indiscriminately of the repository they belong to). Repositories themselves are never stored on disk; they are cloned in-memory and discarded immediately after their contents is extracted.
  - The presence of a working tree directory under the appropriate commit hash is considered an indicator of its completeness. Checkouts are first done into a temporary directory and then atomically moved into place.
- Currently a working tree is never removed, but a practical system would need to have a way to discard orphaned ones.
- The `data/www/` directory contains symlinks to working trees organized by domain and project name (e.g. `data/www/example.org/myproject` or `data/www/example.org/.index`).
  - The presence of a symlink at the appropriate location is considered an indicator of completeness as well. Updating to a new content version is done by creating a new symlink at a temporary location and then atomically moving it into place.
- This structure is simple enough that it may be served by e.g. Nginx instead of the Go application.
- `openat2(RESOLVE_IN_ROOT)` is used to confine `GET` requests strictly under the `data/` directory.
This approach has the benefit of being easy to explore and debug, but places a lot of faith in the filesystem implementation; partial data loss, write reordering, or incomplete journalling will result in confusing and persistent caching issues. This is probably fine, but needs to be understood.
The specific arrangement used is clearly not optimal; at a minimum it is likely worth it to deduplicate files under data/tree/ using hardlinks, or perhaps to put objects in a flat, content-addressed store with data/www/ linking to each individual file. The key practical constraint will likely be the need to attribute excessively large trees to repositories they were built from (and to perform GC), which suggests adding structure and not removing it.