
Brian Bondy's Website

This repository (bbondy/go-brianbondy) contains the source code for brianbondy.com, a personal website built with Go.

Development

Prerequisites

  • Go 1.19+
  • Python 3.7+ (for image processing and data scripts)
  • cwebp tool (for WebP image conversion)
  • golangci-lint (brew install golangci-lint)
  • Google Cloud SDK (for deployment)

Installation

  1. Clone the repository
  2. Install the cwebp tool:
    # macOS
    brew install webp
    
    # Ubuntu/Debian
    sudo apt-get install webp
    
    # CentOS/RHEL
    sudo yum install libwebp-tools

Running Locally

go run .

The site will be available at http://localhost:8080

Testing

make test

Formatting and Linting

To check for lint issues without fixing them:

make lint

To automatically format and fix lint issues:

make format

Cheatsheets

To fetch the latest cheatsheets markdown from https://github.com/bbondy/cheatsheets and generate the local manifest:

make cheatsheets

This writes data/cheatsheetsManifest.json and data/markdown/cheatsheets/*.md.

Deployment

Authenticate with Google Cloud (if you haven't already):

make auth

Then deploy:

make deploy

Blog Post Workflow

Adding a New Blog Post

  1. Create a new markdown file in data/markdown/blog/ with the next available ID
  2. Add the blog post metadata to data/blogPostManifest.json
  3. Add images to static/img/blogpost_[ID]/ directory
  4. Process the images for WebP optimization:
    make blog-images [ID]
    Or process all blog post images:
    make blog-images
  5. Test locally: go run .
  6. Run tests: make test
  7. Deploy: make deploy
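For step 2, a new manifest entry might look roughly like the following (all field names here are illustrative guesses; mirror an existing entry in data/blogPostManifest.json rather than this sketch):

```json
{
  "id": 123,
  "title": "My New Post",
  "published": "2024-01-15"
}
```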

Image Processing

The website automatically optimizes images for better performance by:

  • Converting images to WebP format
  • Adding lazy loading
  • Adding async decoding
  • Providing responsive image support
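The lazy-loading and async-decoding rewrite can be sketched as follows (in Python for brevity; the site's actual logic lives in utils.go, and this function name is hypothetical):

```python
import re

def optimize_img_tags(html: str) -> str:
    """Add lazy loading and async decoding to every <img> tag."""
    def add_attrs(match: re.Match) -> str:
        tag = match.group(0)
        # Only add each attribute if the tag doesn't already set it.
        if "loading=" not in tag:
            tag = tag.replace("<img ", '<img loading="lazy" ', 1)
        if "decoding=" not in tag:
            tag = tag.replace("<img ", '<img decoding="async" ', 1)
        return tag

    return re.sub(r"<img\s[^>]*>", add_attrs, html)
```

Tags that already carry a `loading` or `decoding` attribute are left untouched, so the rewrite is safe to run on already-optimized markup.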

Manual Image Processing

Convert all images to WebP:

make webp

Force convert all images (even if WebP already exists):

make webp-force
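Under the hood, WebP conversion shells out to the cwebp tool. A minimal sketch of such a conversion loop (paths and the quality setting are illustrative, not necessarily what scripts/convert_images_to_webp.py does):

```python
import subprocess
from pathlib import Path

def convert_to_webp(root: str, force: bool = False) -> None:
    """Convert PNG/JPEG images under `root` to WebP using the cwebp CLI."""
    for src in Path(root).rglob("*"):
        if src.suffix.lower() not in {".png", ".jpg", ".jpeg"}:
            continue
        dst = src.with_suffix(".webp")
        # Like `make webp`, skip images that already have a WebP copy;
        # force=True mirrors `make webp-force`.
        if dst.exists() and not force:
            continue
        subprocess.run(["cwebp", "-q", "80", str(src), "-o", str(dst)], check=True)
```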

Process images for a specific blog post:

python3 scripts/process_new_blog_images.py [blog_post_id]

Image Processing Scripts

  • scripts/convert_images_to_webp.py - Main WebP conversion script
  • scripts/process_new_blog_images.py - Blog post specific image processing
  • scripts/download_strava_images.py - Download images from Strava activities
  • scripts/generate_books.py - Generate book data from Goodreads export

Project Structure

  • data/ - Blog posts, projects, and other content
  • static/ - CSS, images, and other static assets
  • templates/ - HTML templates
  • scripts/ - Utility scripts for content management
  • handlers.go - HTTP request handlers
  • routes.go - URL routing
  • utils.go - Utility functions including image optimization

Updating book list

Download an export from https://www.goodreads.com/review/import and save it to data/goodreads_library_export.csv.

Then run:

python3 scripts/generate_books.py
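The CSV-to-JSON conversion can be sketched as follows ("Title", "Author", and "My Rating" are real Goodreads export columns; the output shape here is illustrative, not necessarily what scripts/generate_books.py emits):

```python
import csv
import json

def generate_books(csv_path: str, out_path: str) -> None:
    """Convert a Goodreads library export CSV into a simple JSON book list."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        books = [
            {"title": row["Title"], "author": row["Author"],
             "rating": int(row["My Rating"])}
            for row in csv.DictReader(f)
        ]
    with open(out_path, "w", encoding="utf-8") as f:
        json.dump(books, f, indent=2)
```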

Where to publish blog posts

  • Facebook page and related groups (Adjust visibility to Public)
  • Twitter
  • LinkedIn
  • Strava (if about running)

License

See LICENSE.

Running Data Automation

Auto-Fetching Run Times from Strava

The script scripts/fetch_memorable_runs.py helps automate the process of adding elapsed time (hours and minutes) to each running activity in data/memorableRuns.json.

Features:

  • Extracts time from the description if present (and cleans up duplicates)
  • If time is missing, fetches the elapsed time from the public Strava activity web page (no API credentials required)
  • Updates the manifest with a "time" field for each activity
  • Cleans up the description to avoid duplicate time display
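The description-parsing step can be sketched as follows (the regular expression is illustrative; the real script may accept other time formats):

```python
import re
from typing import Optional

# Illustrative pattern: matches elapsed times like "1:23:45" or "47:12".
TIME_RE = re.compile(r"\b(?:(\d{1,2}):)?([0-5]?\d):([0-5]\d)\b")

def extract_time(description: str) -> Optional[str]:
    """Return the first H:MM:SS or MM:SS time found in a description, if any."""
    match = TIME_RE.search(description)
    return match.group(0) if match else None
```

When no time is found, the script falls back to scraping the public Strava activity page, as described above.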

Requirements:

  • Python 3
  • requests and beautifulsoup4 libraries (install with pip install requests beautifulsoup4)

Usage:

python3 scripts/fetch_memorable_runs.py

After running, your data/memorableRuns.json will be updated with time fields for each activity. Activities without a Strava activity URL or with non-standard pages will be flagged for manual review.

Update Strava Run Manifest (Contribution Graph)

The contribution graph on the running page is generated from data/stravaRunManifest.json.

Requirements:

  • Strava API access (either STRAVA_ACCESS_TOKEN or client credentials)

Usage:

make strava-run-manifest

If the token is missing or expired, set one of the following before running:

STRAVA_ACCESS_TOKEN=... make strava-run-manifest

Or use OAuth client credentials (this opens a browser for login):

STRAVA_CLIENT_ID=... STRAVA_CLIENT_SECRET=... make strava-run-manifest

Auto-Fetching GitHub Project Stats

The script scripts/fetch_github_stats.py automates fetching commit and pull request counts for your projects from GitHub and updates data/projectManifest.json.

Features:

  • Scrapes GitHub search pages for commit and PR counts by author (bbondy)
  • Supports keyword-based filtering for subprojects (see searchKeywords in the manifest)
  • Handles abbreviated numbers (e.g., "2.3k" → 2300)
  • Retries on rate limiting with exponential backoff
  • Waits 2 seconds between all requests to avoid rate limits
  • Only includes real fetched data (removes stats if not available)
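The abbreviated-number handling ("2.3k" → 2300) can be sketched as follows (a minimal version, not necessarily the script's exact logic):

```python
def parse_count(text: str) -> int:
    """Parse a GitHub-style count like '2.3k', '1,204', or '15' into an int."""
    text = text.strip().lower().replace(",", "")
    multipliers = {"k": 1_000, "m": 1_000_000}
    if text and text[-1] in multipliers:
        # round() guards against float artifacts, e.g. 2.3 * 1000 == 2299.999...
        return round(float(text[:-1]) * multipliers[text[-1]])
    return int(text)
```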

Requirements:

  • Python 3
  • requests library (install with pip install requests)

Usage:

python3 scripts/fetch_github_stats.py

After running, your data/projectManifest.json will be updated with the latest commit and PR counts for each project. If a project's data can't be fetched (e.g., due to rate limiting), it will be omitted from the stats until a successful fetch.
