`.github/workflows/release-electrobun.yml` is the canonical desktop release workflow and doubles as the reusable desktop release-build graph. `.github/workflows/test-electrobun-release.yml` calls that same graph on pull requests in build-only mode, and `.github/workflows/release.yml` remains a manual legacy desktop fallback only.
Why the release pipeline and desktop bundle work the way they do.
We ship separate Milady-arm64.dmg and Milady-x64.dmg because:
- Native Node addons (e.g. `onnxruntime-node`, `whisper-node`) ship prebuilt `.node` binaries per OS and arch. There is no single "universal" npm artifact that contains both arm64 and x64; the addon is built for the arch of the machine that ran `npm install` / `bun install`.
- CI builds both macOS architectures separately. The Apple Silicon artifact runs on `macos-14`, and the Intel artifact runs on the dedicated `macos-15-intel` runner.
- The Intel artifact still uses explicit x64 invocations through the shared desktop builder (`MILADY_DESKTOP_COMMAND_PREFIX="arch -x86_64"`) so native modules and helper binaries are resolved consistently as x64 throughout the packaging path.
- Why this still matters on the Intel runner: our workflow shares the same commands and staging logic across all jobs, and the explicit x64 path avoids accidental host/translation drift in the install and packaging steps.
See .github/workflows/release-electrobun.yml: the platform jobs run arch -x86_64 for the macOS Intel leg during "Install root dependencies", scripts/desktop-build.mjs stage, and scripts/desktop-build.mjs package.
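Assuming the builder funnels every step through one prefixed wrapper (the `run` helper and the echoed step names below are illustrative stand-ins, not the real script), the mechanism is roughly:

```shell
#!/usr/bin/env bash
set -euo pipefail
# Empty on the arm64 leg; "arch -x86_64" on the Intel leg so every child
# process (bun, node, packager helpers) resolves native modules as x64.
PREFIX="${MILADY_DESKTOP_COMMAND_PREFIX:-}"
run() { $PREFIX "$@"; }   # $PREFIX deliberately unquoted so "arch -x86_64" word-splits

run echo "install root dependencies"
run echo "desktop-build.mjs stage"
run echo "desktop-build.mjs package"
```

Because every step goes through the same funnel, no individual step can forget the prefix.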
Runner hygiene: When GitHub renames, updates, or retires labels such as macos-14 or macos-15-intel, update the matrix in .github/workflows/release-electrobun.yml (and any callers) and run .github/workflows/test-electrobun-release.yml on a branch to confirm the desktop build graph still passes before relying on it for a release.
The packaged app runs the agent from milady-dist/ (bundled JS + node_modules). The main bundle is built by tsdown with dependencies inlined where possible, but:
- Plugins (`@elizaos/plugin-*`) are loaded at runtime; their `dist/` and any runtime-only dependencies (native addons, optional requires, etc.) must be present in `milady-dist/node_modules`.
- Why not rely on a single global node_modules at pack time? The app is built into an ASAR (and unpacked dirs); resolution at runtime is from the app directory. So we copy the subset we need into `apps/app/electrobun/milady-dist/node_modules` before packaging runs.
The packaging scripts derive that subset instead of keeping a hand-maintained allowlist:
- `scripts/copy-runtime-node-modules.ts` handles the Electrobun build and scans the built `dist/` output for bare package imports, unions that with the installed `@elizaos/*` and `@miladyai/plugin-*` packages from the repo root, then recursively copies their runtime deps into `dist/node_modules`.
- The packaging flow walks package.json `dependencies` and `optionalDependencies` recursively. Why: dynamic plugin loading and native optional deps change more often than the release workflow; deriving the closure from installed package metadata avoids shipping a stale allowlist.
- Known dev/renderer-only packages (for example `typescript`, `lucide-react`) are skipped to keep the packaged runtime smaller.
We do not try to exclude deps that might already be inlined by tsdown into plugin dist/, because plugins can require() at runtime; excluding them would risk "Cannot find module" in the packaged app.
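A toy sketch of that closure walk, using fabricated fixture packages and a deliberately crude one-line-JSON parser (the real logic lives in `scripts/copy-runtime-node-modules.ts` and parses package.json properly):

```shell
#!/usr/bin/env bash
set -euo pipefail
# Fixture packages (fabricated names); the real input is the installed
# node_modules tree at the repo root.
root=$(mktemp -d)
mkdir -p "$root/node_modules/plugin-a" "$root/node_modules/addon-b" "$root/node_modules/helper-c"
printf '%s\n' '{ "name": "plugin-a", "dependencies": { "addon-b": "*" }, "optionalDependencies": { "helper-c": "*" } }' \
  > "$root/node_modules/plugin-a/package.json"
printf '%s\n' '{ "name": "addon-b" }' > "$root/node_modules/addon-b/package.json"
printf '%s\n' '{ "name": "helper-c" }' > "$root/node_modules/helper-c/package.json"

seen=""
closure() {
  local pkg="$1" meta="$root/node_modules/$1/package.json"
  case " $seen " in *" $pkg "*) return 0 ;; esac   # already visited
  [ -f "$meta" ] || return 0                       # not installed: skip
  seen="$seen $pkg"
  echo "$pkg"                                      # the real script copies the dir instead
  # Crude dep-name extraction from the two runtime sections (fixtures are one-line JSON).
  for dep in $(grep -oE '"(dependencies|optionalDependencies)": [{][^}]*[}]' "$meta" \
                 | grep -oE '"[A-Za-z0-9@/._-]+": "[*]"' | cut -d'"' -f2 || true); do
    closure "$dep"
  done
}
closure plugin-a   # prints plugin-a, addon-b, helper-c
```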
The legacy release workflow (`.github/workflows/release.yml`, before it became the manual stub described above) was designed for reproducible, fail-fast builds and diagnosable failures. Key choices and their reasons:
- Strict shell (`bash -euo pipefail`) — Applied at job default for `build-desktop` so every step exits on first error, undefined variable, or pipe failure. Why: Without it, a failing command in the middle of a script can be ignored and the step still "succeeds", producing broken artifacts or confusing later failures.
- Retry loops with final assertion — `bun install` steps retry up to 3 times, then run the same install command once more after the loop. Why: If all retries failed, the loop exits without failing the step; the final run ensures the step fails with a clear install error instead of silently continuing.
- Crash dump uses the maintained ASAR CLI — When packaging crashes, we list ASAR contents with the maintained ASAR CLI, not the deprecated `asar` package. Why: The deprecated package can be missing or incompatible; the maintained ASAR tooling works when the build fails.
- `find -print0` and `while IFS= read -r -d ''` — Copying JS into `milady-dist` and removing node-gyp artifacts use null-delimited find + read. Why: Filenames with newlines or spaces would break `find | while read`; null-delimited iteration is safe for any path.
- DMG path via `find` + `stat -f` — We pick the newest DMG with `find dist -name '*.dmg' -exec stat -f '%m\t%N' {} \; | sort -rn | head -1` instead of `ls -t dist/*.dmg`. Why: `ls -t` with a glob can fail or behave oddly when no DMG exists or paths have spaces; find + stat is robust, and this step runs only on macOS where `stat -f` is available.
- Remove node-gyp build artifacts before packaging — We delete `build-tmp*` and `node_gyp_bins` under `node_modules` (root and milady-dist). Why: `@tensorflow/tfjs-node` and other native addons leave symlinks to system Python there; the packager refuses to pack symlinks to paths outside the app (security), so the pack step would fail without removal.
- Size report includes `milady-dist` — We report sizes of both `app.asar.unpacked/node_modules` and `app.asar.unpacked/milady-dist` (and its node_modules when present). Why: Both regions contribute to artifact size; reporting both makes it obvious where bloat comes from.
- Size report `du | sort | head` pipelines — We run each pipeline in a subshell and capture the exit code with `( pipeline ) || r=$?`, then allow 0 or 141; we also redirect `sort` stderr to `/dev/null`. Why: Under `bash -euo pipefail`, when `head` closes the pipe after N lines, `sort` gets SIGPIPE and exits 141; the step would exit before `r=$?` ran. The subshell + `||` lets us treat 141 as success. Silencing `sort` avoids noisy "Broken pipe" in logs.
- Single Capacitor build step — One "Build Capacitor app" step runs `npx vite build` on all platforms. Why: The previous split (non-Windows vs Windows) was redundant; vite build works everywhere, so one step reduces drift and confusion.
- Packaged DMG E2E: 240s CDP timeout in CI, stdout/stderr dump on timeout — In CI we use a longer CDP wait, and on timeout we log app stdout/stderr before failing. Why: CI can be slower; a longer timeout reduces flaky failures. Dumping logs makes CDP timeouts debuggable instead of silent.
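Three of the shell patterns above, condensed into one runnable sketch with stand-in commands (`flaky_install` simulates a flaky `bun install`; none of this is the literal workflow script):

```shell
#!/usr/bin/env bash
set -euo pipefail

# 1. Retry loop with a final assertion. flaky_install fails once, then
#    succeeds; if every retry had failed, the last bare call would make
#    the step fail with the real error.
attempt=0
flaky_install() { attempt=$((attempt + 1)); [ "$attempt" -ge 2 ]; }
for i in 1 2 3; do flaky_install && break || sleep 1; done
flaky_install   # final assertion

# 2. Null-delimited iteration: safe for any filename, including spaces.
tmp=$(mktemp -d)
touch "$tmp/a file.js" "$tmp/b.js"
count=0
while IFS= read -r -d '' f; do count=$((count + 1)); done \
  < <(find "$tmp" -name '*.js' -print0)
echo "files: $count"

# 3. SIGPIPE-tolerant pipeline under pipefail: head closes the pipe early,
#    so sort may exit 141; the subshell + || lets us accept 0 or 141.
r=0
( seq 1 100000 | sort -rn 2>/dev/null | head -1 ) || r=$?
[ "$r" -eq 0 ] || [ "$r" -eq 141 ]
echo "pipeline rc: $r"
```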
CI workflows that need Node (for node-gyp / native modules or npm registry) were timing out on Node download and install. We fixed this as follows.
- `useblacksmith/setup-node@v5` on Blacksmith runners — In test.yml, jobs that run on `blacksmith-4vcpu-ubuntu-2404` use `useblacksmith/setup-node` instead of `actions/setup-node`. Why: Blacksmith's action uses their colocated cache (same DC as the runner), so Node binaries are served at ~400MB/s and we avoid slow or failing downloads from nodejs.org.
- `actions/setup-node@v3` (not v4) on GitHub-hosted runners — Release, test (macOS legs), nightly, publish-npm, and other workflows pin to `@v3`. Why: v4 has a known slow post-action step and often triggers nodejs.org downloads that time out; v3 uses the runner toolcache when the version is present and avoids the regression.
- `check-latest: false` — We set this explicitly on every `actions/setup-node` step (Blacksmith jobs use `useblacksmith/setup-node`, which has its own caching behavior). Why: With the default, the action can hit nodejs.org to check for a newer patch; that adds latency and can time out. We want a fixed, cached Node version for reproducible CI.
- Bun global cache (`~/.bun/install/cache`) — test.yml, release.yml, benchmark-tests.yml, publish-npm.yml, and nightly.yml all cache this path with `actions/cache@v4` keyed by `bun.lock`. Why: Bun install is fast, but re-downloading every package every run was still a major cost; caching the global cache avoids re-downloading tarballs while letting `bun install` do its fast hardlink/clonefile into `node_modules`. We do not cache `node_modules` itself — compression/upload cost exceeds the gain.
- `timeout-minutes` on jobs — We set explicit timeouts (e.g. 20–30 min for test jobs, 45 for release build-desktop). Why: So a hung or extremely slow run fails in a bounded time instead of burning runner hours; also makes flakiness visible.
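An illustrative workflow fragment combining the pinned setup-node inputs and the Bun cache step; the action names and input keys are real, while the `node-version` value and exact cache-key shape are assumptions for the sketch:

```yaml
- uses: actions/setup-node@v3
  with:
    node-version: '20'       # example; the workflows pin their own version
    check-latest: false      # never hit nodejs.org for a newer patch
- uses: actions/cache@v4
  with:
    path: ~/.bun/install/cache
    key: bun-${{ runner.os }}-${{ hashFiles('bun.lock') }}
```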
- Electrobun PR release validation: `.github/workflows/test-electrobun-release.yml` — on pull requests; runs the same Electrobun release build matrix in build-only mode without creating a GitHub release.
- Electrobun release: `.github/workflows/release-electrobun.yml` — on version tag push or manual dispatch; builds macOS arm64, macOS x64, Windows x64, and Linux x64 Electrobun artifacts plus update channel files.
- Legacy desktop compatibility stub: `.github/workflows/release.yml` — manual workflow that only points maintainers at the Electrobun release path.
- Local desktop build: From repo root, use the Electrobun path: `bun run build:desktop` for a local bundle build, then `bash apps/app/electrobun/scripts/smoke-test.sh` for packaged desktop verification.
Electrobun writes platform-prefixed flat artifact names into apps/app/electrobun/artifacts/, for example:
- `canary-macos-arm64-Milady-canary.app.tar.zst`
- `canary-macos-arm64-Milady-canary.dmg`
- `canary-macos-arm64-update.json`
Why the workflow mirrors that shape directly to https://milady.ai/releases/:
- The Electrobun updater resolves manifests at `${baseUrl}/${platformPrefix}-update.json`, not `${baseUrl}/${channel}/update.json`.
- It also resolves tarballs at `${baseUrl}/${platformPrefix}-${tarballFileName}`.
- Because of that, the release upload step must publish `*-update.json`, `*.tar.zst`, and optional `*.patch` files at the flat release root. Uploading only a generic `update.json` or nesting files under version folders breaks in-app updates.
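A quick sanity check of those URL shapes, using the artifact names above as example values (variable names mirror this doc, not Electrobun's internals):

```shell
#!/usr/bin/env bash
set -euo pipefail
baseUrl="https://milady.ai/releases"
platformPrefix="canary-macos-arm64"
tarballFileName="Milady-canary.app.tar.zst"

# Flat layout: everything hangs off the release root, keyed by platform prefix.
manifestUrl="${baseUrl}/${platformPrefix}-update.json"
tarballUrl="${baseUrl}/${platformPrefix}-${tarballFileName}"
echo "$manifestUrl"   # https://milady.ai/releases/canary-macos-arm64-update.json
echo "$tarballUrl"    # https://milady.ai/releases/canary-macos-arm64-Milady-canary.app.tar.zst
```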
The official Electrobun docs expect the CLI to come from the project dependency and be invoked through npm scripts or bunx. Milady now uses the shared desktop builder to reach that package-local path:
- `apps/app/electrobun/package.json` declares `electrobun` as a dependency.
- `scripts/desktop-build.mjs stage` installs the Electrobun workspace package before packaging.
- `scripts/desktop-build.mjs package` drives `bun run build -- --env=...` inside `apps/app/electrobun`, and that script invokes `bunx electrobun build` against the package-local dependency.
We still keep two Windows-specific guards around that documented flow:
- Pre-extract the Electrobun CLI tarball: `electrobun@1.16.0` still shells out to plain `tar -xzf ...` on Windows. On GitHub runners, plain `tar` can resolve to GNU tar and fail on `C:` paths, so the workflow downloads the official `electrobun-cli-win-x64.tar.gz`, verifies its SHA256 from the GitHub release metadata, and extracts it with `C:\Windows\System32\tar.exe` before the build runs.
- Seed `rcedit` when needed: the CLI still imports `rcedit` dynamically during Windows packaging, so the workflow copies a known-good `rcedit-x64.exe` from the already-installed workspace Bun packages into the Electrobun package before invoking `bun run build`. This avoids relying on a separate global registry fetch at release time.
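A local sketch of the verify-then-extract guard; the fixture tarball stands in for `electrobun-cli-win-x64.tar.gz`, and `sha256sum` plus the host `tar` stand in for the release-metadata hash and `C:\Windows\System32\tar.exe` used on the Windows runner:

```shell
#!/usr/bin/env bash
set -euo pipefail
# Build a local fixture tarball so the sketch runs anywhere.
work=$(mktemp -d); cd "$work"
mkdir cli && echo 'cli stub' > cli/electrobun
tar -czf cli.tar.gz cli

# In CI, "expected" comes from the GitHub release metadata rather than
# being recomputed locally; here the fixture makes them match by construction.
expected=$(sha256sum cli.tar.gz | cut -d' ' -f1)
actual=$(sha256sum cli.tar.gz | cut -d' ' -f1)
if [ "$actual" != "$expected" ]; then
  echo "checksum mismatch" >&2; exit 1
fi

mkdir out && tar -xzf cli.tar.gz -C out   # extract only after verification
echo "status: verified"
```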
Milady now carries both WebGPU paths in the desktop app:
- Renderer-side WebGPU: the existing avatar and vector-browser scenes run in the webview and prefer `three/webgpu` when the embedded browser exposes `navigator.gpu`.
- Electrobun-native WebGPU: `apps/app/electrobun/electrobun.config.ts` enables `bundleWGPU: true` on macOS, Windows, and Linux, so packaged desktop builds also include Dawn (`libwebgpu_dawn.*`) for Bun-side `GpuWindow`, `WGPUView`, and `<electrobun-wgpu>` surfaces.
- Renderer choice for packaged builds: macOS stays on the native renderer by default, while Windows and Linux default to bundled CEF. That matches Electrobun's current cross-platform guidance: Linux distribution should use CEF-backed `BrowserWindow`/`BrowserView` instances, and CEF gives us the most consistent browser-side WebGPU path on the non-macOS desktop targets.
Why this split exists:
- The current UI/React surfaces already live in the renderer webview, so browser WebGPU remains the lowest-risk path for those scenes.
- Bundling Dawn keeps the desktop runtime ready for native GPU surfaces and Bun-side compute/render workloads without maintaining a separate desktop flavor.
The local Electrobun smoke test now verifies the backend, not just the window shell:
- After building, `apps/app/electrobun/scripts/smoke-test.sh` launches the packaged app and tails `~/.config/Milady/milady-startup.log`.
- It fails if the child runtime logs `Cannot find module`, exits before becoming healthy, or never reaches `Runtime started -- agent: ... port: ...`.
- Once the startup log reports a port, the script probes `http://127.0.0.1:${port}/api/health` and requires that endpoint to stay healthy for the liveness window.
- On Windows, `apps/app/electrobun/scripts/smoke-test-windows.ps1` now prefers the packaged `*.tar.zst` bundle and launches its `launcher.exe` directly. It only falls back to the `Milady-Setup*.exe` installer path when no direct packaged bundle artifact is available.
Why: the previous smoke test could pass while the launcher stayed open but the embedded agent backend had already crashed.
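The wait-for-port half of that check can be sketched as follows; the fixture log line stands in for the real `milady-startup.log`, and the health probe is left as a comment rather than a live `curl`:

```shell
#!/usr/bin/env bash
set -euo pipefail
# Fixture log stands in for ~/.config/Milady/milady-startup.log; the pattern
# is the "Runtime started" line the smoke test waits for.
log=$(mktemp)
echo 'Runtime started -- agent: milady port: 3000' >> "$log"

port=""
for i in $(seq 1 30); do
  port=$(grep -oE 'Runtime started -- agent: .* port: [0-9]+' "$log" \
           | grep -oE '[0-9]+$' | tail -1 || true)
  [ -n "$port" ] && break
  sleep 1
done
[ -n "$port" ] || { echo "runtime never reported a port" >&2; exit 1; }
echo "probing http://127.0.0.1:${port}/api/health"
# Real script: curl the health endpoint repeatedly for the liveness window,
# failing if it ever stops answering.
```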
- Electrobun startup and exception handling — why the agent keeps the API server up on load failure.
- Plugin resolution and NODE_PATH — why dynamic plugin imports need `NODE_PATH` in dev/CLI/Electrobun.
- CHANGELOG — concrete changes and WHYs per release.