Fix Pulumi deployment due to a partial apply #6
Conversation
There was an issue where a partial apply caused these resources not to be recreated, meaning the contents were not copied to or created on the remote machine.
Walkthrough
Updated trigger/dependency wiring in main.go: many remote copy/command resources now include the droplet ID in their triggers.
🍹 The Update (preview) for holochain/nomad-server/nomad-server (at ed1921b) was successful.

✨ Neo Explanation

A change to the Nomad server configuration file is triggering a full re-provisioning of the Nomad server setup, including service restarts and ACL re-bootstrapping, alongside a refactor of how job variables are injected into the server environment.

Root Cause Analysis

The Nomad configuration file (…)

Dependency Chain

The updated Nomad config file is the root trigger. Because nearly all remote commands use (…)

Risk Analysis

Medium risk. The entire Nomad server setup sequence will be torn down and re-run, including service restarts (…)

Resource Changes

| Name | Type | Operation |
| --- | --- | --- |
| create-etc-nomad-dir | command:remote:Command | replaced |
| copy-nomad-config | command:remote:CopyToRemote | replaced |
| apply-job-runner-policy | command:remote:Command | replaced |
| copy-ca-key | command:remote:Command | replaced |
| copy-job-runner-policy | command:remote:CopyToRemote | replaced |
| copy-ca-cert | command:remote:CopyToRemote | replaced |
| create-server-cert | command:remote:Command | replaced |
| chown-etc-nomad-dir | command:remote:Command | replaced |
| enable-nomad-service | command:remote:Command | replaced |
| start-nomad-service | command:remote:Command | replaced |
| copy-nomad-service-config | command:remote:CopyToRemote | replaced |
| wait-for-nomad-user | command:remote:Command | replaced |
| add-nomad-jobs-vars | command:remote:Command | create |
| add-influx-db-token-var | command:remote:Command | delete |
| create-opt-nomad-data-dir | command:remote:Command | replaced |
| chown-etc-nomad-dir-before-server-cert | command:remote:Command | replaced |
| print-server-cert | command:remote:Command | replaced |
| acl-bootstrap | command:remote:Command | replaced |
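The "replaced" column above follows Pulumi's trigger-diff rule: a remote command or copy resource is replaced whenever any value in its `Triggers` list differs from the previous state. A minimal, self-contained Go illustration of that rule (not the SDK's actual implementation; the hash strings are made up):

```go
package main

import "fmt"

// needsReplace mimics the trigger-diff rule reflected in the table above:
// a remote.Command or remote.CopyToRemote is replaced when any trigger
// value differs between the previous and the current deployment.
func needsReplace(oldTriggers, newTriggers []string) bool {
	if len(oldTriggers) != len(newTriggers) {
		return true
	}
	for i := range oldTriggers {
		if oldTriggers[i] != newTriggers[i] {
			return true
		}
	}
	return false
}

func main() {
	// The Nomad config file's content hash changed, so every resource
	// triggered on it is torn down and re-run.
	fmt.Println(needsReplace(
		[]string{"nomad.hcl@sha256:aaaa"},
		[]string{"nomad.hcl@sha256:bbbb"},
	)) // true
}
```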
These resources already use the reserved IP in the conn, so it's not needed.
407e446 to 571839b
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@main.go`:
- Around lines 276-289: The Create command string passed to remote.NewCommand
currently embeds the literal text "LC_UNYT_DURABLE_OBJECTS_SECRET" instead of
expanding the environment variable. Update the Create string in the
CommandArgs so UNYT_DURABLE_OBJECTS_SECRET is set from the environment
variable (i.e., reference "$LC_UNYT_DURABLE_OBJECTS_SECRET", consistent with
the other vars) so the secret stored by nomad var put comes from
unytDurableObjectsSecret rather than a literal.
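The fix amounts to prefixing the variable reference with `$` so the remote shell expands it instead of storing the name as a literal. A sketch of how such a Create string could be assembled (the helper name and var path are illustrative, not the repo's actual code):

```go
package main

import "fmt"

// buildVarPutCmd assembles the shell command used as the Create string of a
// remote.NewCommand. Each value references the exported LC_-prefixed
// environment variable with a leading "$" so the remote shell expands it;
// omitting the "$" stores the literal variable *name* (the reported bug).
func buildVarPutCmd(varPath string, keys []string) string {
	cmd := "nomad var put " + varPath
	for _, k := range keys {
		cmd += fmt.Sprintf(` %s="$LC_%s"`, k, k)
	}
	return cmd
}

func main() {
	fmt.Println(buildVarPutCmd("nomad/jobs", []string{"UNYT_DURABLE_OBJECTS_SECRET"}))
	// nomad var put nomad/jobs UNYT_DURABLE_OBJECTS_SECRET="$LC_UNYT_DURABLE_OBJECTS_SECRET"
}
```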
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 2dd70cce-b966-4cd2-8d59-f58a04f238c0
📒 Files selected for processing (2)
Pulumi.nomad-server.yaml
main.go
571839b to ed1921b
✔️ fddaddb...ed1921b - Conventional commits check succeeded.
After a partial/failed apply, some resources are not re-created when the droplet changes, meaning that some files are missing from the droplet. I've added the droplet ID as a trigger to all resources, so if the droplet changes, all resources are recreated, causing all the commands and copies to run again.
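A sketch of what that trigger wiring could look like with the pulumi-command Go SDK, assuming a DigitalOcean droplet resource; resource names, image, region, and paths are illustrative, not the repo's actual code:

```go
// Sketch: include the droplet ID in Triggers so replacing the droplet
// replaces every remote command/copy that provisions it.
// Assumes the pulumi, pulumi-command, and pulumi-digitalocean Go SDKs.
droplet, err := digitalocean.NewDroplet(ctx, "nomad-server", &digitalocean.DropletArgs{
	Image:  pulumi.String("ubuntu-24-04-x64"), // illustrative
	Region: pulumi.String("fra1"),
	Size:   pulumi.String("s-1vcpu-1gb"),
})
if err != nil {
	return err
}

_, err = remote.NewCommand(ctx, "create-etc-nomad-dir", &remote.CommandArgs{
	Connection: conn, // an existing remote.ConnectionArgs value
	Create:     pulumi.String("mkdir -p /etc/nomad.d"),
	// A new droplet has a new ID, so this trigger forces the command
	// (and every resource wired the same way) to be recreated.
	Triggers: pulumi.Array{droplet.ID()},
})
```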
All the Nomad variables were missing or out of date too, so I've updated them, as the new server will need them.