We likely need two containers (possibly three):
- One to run `ami api` on every server that needs to run an instance of the ML processing backend
- One to run the headless CLI processing tools, especially for working through a queue in a shared database across multiple compute nodes
- Optionally, one to run the Gradio demo (`ami gradio`)?
Each role has the same dependencies, so they can all use the same image with different commands. Can the `ami` CLI tool be the entrypoint?
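A minimal sketch of what that single-image approach could look like. This is an assumption, not a working build: the base image, install step, and the `ami api` / `ami gradio` subcommands are taken from the notes above, and any other subcommand names would need to match whatever the real CLI exposes.

```dockerfile
# Sketch only — base image and install step are placeholders.
FROM python:3.11-slim

WORKDIR /app
COPY . .
RUN pip install --no-cache-dir .

# With the ami CLI as the entrypoint, one image serves every role;
# the command passed at run time selects the role.
ENTRYPOINT ["ami"]

# Default role, overridden per container.
CMD ["api"]
```

Then each deployment would pick its role by overriding the command, e.g. `docker run -p 8000:8000 IMAGE api` for the backend, `docker run -p 7860:7860 IMAGE gradio` for the demo, and the queue-processing subcommand (whatever it is named) on the compute nodes. Because `ENTRYPOINT` is set, the override replaces only the `CMD` arguments, never the `ami` executable itself.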