# brianbondy.com

This is the source code for brianbondy.com, a personal website built with Go.
## Prerequisites

- Go 1.19+
- Python 3.7+ (for image processing scripts)
- `cwebp` tool (for WebP image conversion)
## Setup

- Clone the repository
- Install the `cwebp` tool:

```bash
# macOS
brew install webp

# Ubuntu/Debian
sudo apt-get install webp

# CentOS/RHEL
sudo yum install libwebp-tools
```
## Running locally

```bash
go run .
```

The site will be available at http://localhost:8080.
## Testing

```bash
make test
```

## Formatting

```bash
make format
```

## Cheatsheets

To fetch the latest cheatsheets markdown from https://github.com/bbondy/cheatsheets and generate the local manifest:
```bash
make cheatsheets
```

This writes `data/cheatsheetsManifest.json` and `data/markdown/cheatsheets/*.md`.
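For reference, the manifest-generation step amounts to something like the sketch below. This is not the actual target's implementation, and the manifest field names here are assumptions; check `data/cheatsheetsManifest.json` for the real schema:

```python
import json
from pathlib import Path

# Hypothetical sketch: list the fetched cheatsheet markdown files
# and write a simple manifest. Field names are assumptions, not
# the actual schema produced by `make cheatsheets`.
CHEATSHEETS_DIR = Path("data/markdown/cheatsheets")
MANIFEST_PATH = Path("data/cheatsheetsManifest.json")

entries = [
    {"slug": md.stem, "file": md.name}
    for md in sorted(CHEATSHEETS_DIR.glob("*.md"))
]
MANIFEST_PATH.write_text(json.dumps(entries, indent=2))
```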
## Deployment

```bash
make deploy
```

## Adding a blog post

- Create a new markdown file in `data/markdown/blog/` with the next available ID
- Add the blog post metadata to `data/blogPostManifest.json` (see the sketch after this list)
- Add images to the `static/img/blogpost_[ID]/` directory
- Process the images for WebP optimization:

```bash
make blog-images [ID]
```

Or process all blog post images:

```bash
make blog-images
```

- Test locally: `go run .`
- Run tests: `make test`
- Deploy: `make deploy`
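As a rough illustration of the metadata step, a sketch that appends an entry to the manifest, assuming it is a JSON array. The field names here are hypothetical; mirror an existing entry in `data/blogPostManifest.json` for the real schema:

```python
import json

MANIFEST = "data/blogPostManifest.json"

with open(MANIFEST) as f:
    posts = json.load(f)

# Hypothetical fields -- copy whatever the existing entries use.
posts.append({
    "id": 217,                    # the next available ID
    "title": "My new post",
    "created": "2024-01-01",
})

with open(MANIFEST, "w") as f:
    json.dump(posts, f, indent=2)
```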
## Image optimization

The website automatically optimizes images for better performance by:
- Converting images to WebP format
- Adding lazy loading
- Adding async decoding
- Providing responsive image support
Convert all images to WebP:

```bash
make webp
```

Force convert all images (even if WebP already exists):

```bash
make webp-force
```

Process images for a specific blog post:

```bash
python3 scripts/process_new_blog_images.py [blog_post_id]
```

### Scripts

- `scripts/convert_images_to_webp.py` - Main WebP conversion script (sketched below)
- `scripts/process_new_blog_images.py` - Blog post specific image processing
- `scripts/download_strava_images.py` - Download images from Strava activities
- `scripts/generate_books.py` - Generate book data from Goodreads export
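At its core, WebP conversion shells out to the `cwebp` binary. A simplified sketch of that idea, not the actual contents of `scripts/convert_images_to_webp.py`; the quality setting and skip logic are assumptions:

```python
import subprocess
from pathlib import Path

def convert_to_webp(src: Path, quality: int = 80) -> None:
    """Convert one image to a WebP file next to the original,
    skipping files that already have a .webp sibling."""
    dst = src.with_suffix(".webp")
    if dst.exists():
        return  # a "force" mode would skip this check
    subprocess.run(
        ["cwebp", "-q", str(quality), str(src), "-o", str(dst)],
        check=True,
    )

for img in Path("static/img").rglob("*"):
    if img.suffix.lower() in {".png", ".jpg", ".jpeg"}:
        convert_to_webp(img)
```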
## Project structure

- `data/` - Blog posts, projects, and other content
- `static/` - CSS, images, and other static assets
- `templates/` - HTML templates
- `scripts/` - Utility scripts for content management
- `handlers.go` - HTTP request handlers
- `routes.go` - URL routing
- `utils.go` - Utility functions including image optimization
## Development

### Prerequisites

- Go 1.19+
- Python 3 (for scripts)
- golangci-lint (`brew install golangci-lint`)
- Google Cloud SDK (for deployment)
### Adding a blog post

- Create a new markdown file in `data/markdown/blog/`.
- Add an entry to `data/blogPostManifest.json` with the new post's metadata.
- (Optional) Add images to `static/img/blogpost_<id>/`.
### Linting

To check for linting issues without fixing them:

```bash
make lint
```

To automatically format and fix linting issues:

```bash
make format
```

### Tests

To run all tests:

```bash
make test
```
### Deployment

Authenticate with Google Cloud (if you haven't already):

```bash
make auth
```

Then deploy:

```bash
make deploy
```
## Books

Download an export from https://www.goodreads.com/review/import and save it to `data/goodreads_library_export.csv`, then run:

```bash
python3 scripts/generate_books.py
```
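A minimal sketch of the kind of transformation `scripts/generate_books.py` performs. The column names match Goodreads' standard export format, but the output path and field names here are assumptions:

```python
import csv
import json

# Read the Goodreads export and keep a few fields per book.
# "Title", "Author", and "My Rating" are standard Goodreads
# export columns; the output path and shape are guesses.
books = []
with open("data/goodreads_library_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        books.append({
            "title": row["Title"],
            "author": row["Author"],
            "rating": int(row["My Rating"] or 0),
        })

with open("data/books.json", "w") as f:
    json.dump(books, f, indent=2)
```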
## Sharing

Share new blog posts to:

- Facebook page and related groups (adjust visibility to Public)
- Strava (if about running)
## License

See LICENSE.
## Memorable runs

The script `scripts/fetch_memorable_runs.py` helps automate adding elapsed time (hours and minutes) to each running activity in `data/memorableRuns.json`.
Features:
- Extracts time from the description if present (and cleans up duplicates)
- If time is missing, fetches the elapsed time from the public Strava activity web page (no API credentials required)
- Updates the manifest with a `"time"` field for each activity
- Cleans up the description to avoid duplicate time display
Requirements:
- Python 3
- `requests` and `beautifulsoup4` libraries (install with `pip install requests beautifulsoup4`)
Usage:
```bash
python3 scripts/fetch_memorable_runs.py
```

After running, your `data/memorableRuns.json` will be updated with time fields for each activity. Activities without a Strava activity URL or with non-standard pages will be flagged for manual review.
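The page-scraping step is roughly this shape. This is a sketch, not the script's actual logic; in particular, the regex scan below is a placeholder for however the real script locates the elapsed time in Strava's markup:

```python
import re
import requests
from bs4 import BeautifulSoup

def fetch_elapsed_time(activity_url: str) -> str | None:
    """Fetch a public Strava activity page and pull out an
    H:MM-style elapsed time, if one appears in the page text."""
    resp = requests.get(activity_url, timeout=30)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    # Placeholder: scan visible text for "1:42" or "1:42:07".
    match = re.search(r"\b\d+:\d{2}(?::\d{2})?\b", soup.get_text())
    return match.group(0) if match else None
```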
## Running contribution graph

The contribution graph on the running page is generated from `data/stravaRunManifest.json`.
Requirements:
- Strava API access (either `STRAVA_ACCESS_TOKEN` or client credentials)
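For context, a minimal sketch of pulling run activities with an access token. The endpoint and activity fields are standard Strava API v3; the manifest's output shape below is an assumption:

```python
import json
import requests

TOKEN = "..."  # e.g. the value of STRAVA_ACCESS_TOKEN

# Page through the athlete's activities and keep only runs.
runs, page = [], 1
while True:
    resp = requests.get(
        "https://www.strava.com/api/v3/athlete/activities",
        headers={"Authorization": f"Bearer {TOKEN}"},
        params={"per_page": 200, "page": page},
        timeout=30,
    )
    resp.raise_for_status()
    batch = resp.json()
    if not batch:
        break
    runs.extend(a for a in batch if a["type"] == "Run")
    page += 1

# The manifest shape here is a guess; the real file is written
# by `make strava-run-manifest`.
with open("data/stravaRunManifest.json", "w") as f:
    json.dump(
        [{"date": a["start_date"], "distance": a["distance"]} for a in runs],
        f, indent=2,
    )
```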
Usage:
```bash
make strava-run-manifest
```

If the token is missing or expired, set one of the following before running:

```bash
STRAVA_ACCESS_TOKEN=... make strava-run-manifest
```

Or use OAuth client credentials (this opens a browser for login):

```bash
STRAVA_CLIENT_ID=... STRAVA_CLIENT_SECRET=... make strava-run-manifest
```

## GitHub project stats

The script `scripts/fetch_github_stats.py` automates fetching commit and pull request counts for your projects from GitHub and updates `data/projectManifest.json`.
Features:
- Scrapes GitHub search pages for commit and PR counts by author (bbondy)
- Supports keyword-based filtering for subprojects (see `searchKeywords` in the manifest)
- Handles abbreviated numbers (e.g., "2.3k" → 2300; see the sketch after this list)
- Retries on rate limiting with exponential backoff
- Waits 2 seconds between all requests to avoid rate limits
- Only includes real fetched data (removes stats if not available)
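The abbreviated-count handling amounts to something like this (a sketch, not the script's exact code):

```python
def parse_count(text: str) -> int:
    """Turn GitHub's abbreviated counts ("2.3k", "1,204") into ints."""
    text = text.strip().lower().replace(",", "")
    if text.endswith("k"):
        return int(float(text[:-1]) * 1_000)
    if text.endswith("m"):
        return int(float(text[:-1]) * 1_000_000)
    return int(text)

assert parse_count("2.3k") == 2300
assert parse_count("1,204") == 1204
```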
Requirements:
- Python 3
- `requests` library (install with `pip install requests`)
Usage:
```bash
python3 scripts/fetch_github_stats.py
```

After running, your `data/projectManifest.json` will be updated with the latest commit and PR counts for each project. If a project's data can't be fetched (e.g., due to rate limiting), it will be omitted from the stats until a successful fetch.