refactor(vuln): use database-upgrade entrypoint opt #20
base: master
Conversation
cc @ehelms
Tests blocked on #18
Force-pushed a4c11cf to 36c5013
I moved the upgrade out to a oneshot service (https://github.com/ehelms/puppet-iop/blob/master/manifests/service_vulnerability.pp#L60). If you rebase this and modify it there, I think it will pass.
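The oneshot approach referenced above could look roughly like the sketch below: a `Type=oneshot` unit that runs the migration container to completion, which dependent services can then order themselves after. The unit name, image, and migration command are assumptions for illustration, not taken from the actual manifest.

```ini
# iop-vulnerability-migrate.service (hypothetical sketch)
[Unit]
Description=Vulnerability engine database migration
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
# Keep the unit "active" after the migration finishes so other units can
# order After= it without it being re-run on every dependency activation.
RemainAfterExit=yes
# Image and migrate command are placeholders.
ExecStart=/usr/bin/podman run --rm quay.io/example/vulnerability-engine:latest manage.py db upgrade

[Install]
WantedBy=multi-user.target
```

The long-running vuln-manager unit would then declare `After=iop-vulnerability-migrate.service` and `Requires=iop-vulnerability-migrate.service` so it never starts against an unmigrated schema.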
Force-pushed 36c5013 to 877936e
Rebased
The rebase is changing more than it needs to:
@ehelms the point of this is to drop the one-off container in favor of handling database upgrades in the entrypoint.
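The entrypoint option being argued for here could look roughly like the following sketch: run any pending migrations on every container start, then hand off to the service process. Both command names are placeholders; the actual vuln-manager CLI is not shown in this thread.

```shell
#!/bin/sh
# Hypothetical database-upgrade entrypoint sketch.
set -e

# Apply pending database migrations before the service accepts work
# (placeholder command; the real migration invocation is an assumption).
manage.py db upgrade

# exec replaces the shell so the service becomes PID 1 and receives
# signals (SIGTERM on `podman stop`) directly.
exec vuln-manager "$@"
```

The trade-off discussed in this PR is visible here: the container reports "started" to the supervisor as soon as the entrypoint launches, even though migrations may still be running, which is what the readiness-signaling concern below is about.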
Force-pushed 877936e to 388ac12
I think the problem is that the vuln-manager service starts but does not wait to signal that it is running, so execution of the other services continues. I think we would have to ensure
@ehelms We could technically use the health status of pods, which podman supports. I wanted to look into whether systemd can adopt the health status of an underlying container.
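On the question of systemd adopting container health: newer podman versions do support this via `--sdnotify=healthy` (exposed in quadlet files as `Notify=healthy`), which delays the sd_notify READY message until the container's health check first succeeds, so units ordered after this one wait for a healthy vuln-manager. A hypothetical quadlet sketch, with image and health command as placeholder assumptions:

```ini
# vuln-manager.container (hypothetical quadlet sketch)
[Container]
Image=quay.io/example/vulnerability-engine:latest
# Health check command is a placeholder for a real readiness probe.
HealthCmd=curl -fsS http://localhost:8000/health
HealthInterval=10s
# Report READY to systemd only once the health check passes.
Notify=healthy

[Service]
# Allow time for entrypoint migrations to finish before systemd gives up.
TimeoutStartSec=300

[Install]
WantedBy=multi-user.target
```

With this, the race described above would be closed without a separate migration container, at the cost of requiring a podman version new enough to support health-based sd_notify.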
I think for now we do one of the following:
@ehelms Switched to draft. I think we need to have this figured out before any new database migrations within the service need to be applied, since this standalone migration step is a one-off.
Is that an issue for the FDW setup?
It does not appear to be.