
Conversation

@vkrizan (Contributor) commented Jun 30, 2025

No description provided.

@vkrizan (Contributor, Author) commented Jun 30, 2025

cc @ehelms

@vkrizan (Contributor, Author) commented Jun 30, 2025

Tests blocked on #18

@vkrizan force-pushed the refactor-vuln-entry branch from a4c11cf to 36c5013 on July 1, 2025 07:50
@ehelms (Member) commented Jul 3, 2025

I moved the upgrade out to a oneshot service (https://github.com/ehelms/puppet-iop/blob/master/manifests/service_vulnerability.pp#L60). If you rebase this and apply your change there, I think it will pass.
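For orientation, the quadlet unit generated from those manifest settings would look roughly like the sketch below. This is assembled from the Container keys in the linked manifest, not the literal generated unit; the image reference is a placeholder filled in from the manifest's $image parameter.

[Unit]
Description=IOP vulnerability database upgrade (one-off)

[Container]
Image=<vulnerability-engine image>
ContainerName=iop-service-vuln-dbupgrade
Network=iop-core-network
Exec=bash -c /engine/dbupgrade.sh
Volume=/var/run/postgresql:/var/run/postgresql:rw

[Service]
# run once and exit; keep the unit "active" so dependents can order after it
Type=oneshot
RemainAfterExit=yes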

@vkrizan force-pushed the refactor-vuln-entry branch from 36c5013 to 877936e on July 4, 2025 15:14
@vkrizan (Contributor, Author) commented Jul 4, 2025

Rebased

@ehelms (Member) commented Jul 7, 2025

The rebase is changing more than it needs to:

diff --git a/manifests/service_vulnerability.pp b/manifests/service_vulnerability.pp
index 41cbbbc..6366f5f 100644
--- a/manifests/service_vulnerability.pp
+++ b/manifests/service_vulnerability.pp
@@ -78,7 +78,7 @@ class iop::service_vulnerability (
         'Image'         => $image,
         'ContainerName' => 'iop-service-vuln-dbupgrade',
         'Network'       => 'iop-core-network',
-        'Exec'          => 'bash -c /engine/dbupgrade.sh',
+        'Exec'          => '/engine/entrypoint.sh database-upgrade',
         'Volume'        => [
           '/var/run/postgresql:/var/run/postgresql:rw',
         ],

@vkrizan (Contributor, Author) commented Jul 8, 2025

@ehelms The point of this change is to drop the one-off container in favor of handling database upgrades in the entrypoint.
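A minimal sketch of what such an entrypoint could look like. Only /engine/entrypoint.sh, /engine/dbupgrade.sh, and the database-upgrade argument come from this change; the default serve branch and its start command are assumptions for illustration.

#!/bin/bash
set -euo pipefail

case "${1:-serve}" in
  database-upgrade)
    # one-off mode: apply migrations and exit
    exec /engine/dbupgrade.sh
    ;;
  serve)
    # normal mode: apply any pending migrations, then start the service
    /engine/dbupgrade.sh
    exec /engine/run-service.sh    # hypothetical start command
    ;;
  *)
    exec "$@"
    ;;
esac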

@vkrizan force-pushed the refactor-vuln-entry branch from 877936e to 388ac12 on July 8, 2025 08:19
@ehelms (Member) commented Jul 11, 2025

I think the problem is that the vuln-manager service starts but does not wait to signal that it is running, so the other services continue starting before it is ready. I think we would need sd_notify support to do this dependency chaining at the systemd level. I will try it at the Puppet level.
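A sketch of what that systemd-level chaining could look like, assuming the image actually emits sd_notify readiness; the unit names and the Notify= forwarding are assumptions, not something this PR ships.

# iop-service-vuln-manager.container (quadlet), assumed unit name
[Container]
Image=<vulnerability-engine image>
Exec=/engine/entrypoint.sh
# forward the container's own READY=1 instead of conmon's, so the unit
# only counts as started once the application says it is ready
Notify=true

# any dependent unit then genuinely waits on readiness
[Unit]
Requires=iop-service-vuln-manager.service
After=iop-service-vuln-manager.service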

@vkrizan (Contributor, Author) commented Jul 11, 2025

@ehelms We can technically utilize the health status of the pods, which podman supports. I wanted to look into it more closely to see whether systemd can adopt the health status of an underlying container.
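One possible shape of that, assuming a podman/quadlet version that supports health-check-based readiness: define a health check on the container and delay READY=1 until it passes. The health command and endpoint below are made up for illustration.

[Container]
Image=<vulnerability-engine image>
ContainerName=iop-service-vuln-manager
Exec=/engine/entrypoint.sh
# hypothetical probe; the real one depends on what the engine exposes
HealthCmd=curl -fsS http://localhost:8000/health
HealthInterval=10s
# send READY=1 to systemd only once the health check reports healthy,
# so systemd effectively adopts the container's health status
Notify=healthy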

@ehelms marked this pull request as draft on July 11, 2025 16:23
@ehelms marked this pull request as ready for review on July 11, 2025 16:23
@ehelms (Member) commented Jul 11, 2025

I think for now we do one of the following:

  1. Update the migrate service exec to use the new command
  2. Leave this code alone, put this PR into draft for now, and revisit when we have time

@vkrizan marked this pull request as draft on July 14, 2025 12:42
@vkrizan (Contributor, Author) commented Jul 14, 2025

@ehelms Switched to draft. I think we need to figure this out before the service requires any new database migrations to be applied, since this standalone migration step is a one-off.

I think the problem is that the vuln-manager service is starting but does not wait to signal running and thus execution continues of the other services.

Is that an issue for the FDW setup?

@ehelms (Member) commented Jul 14, 2025

Is that an issue for the FDW setup?

Does not appear to be so.
