How would you want an AI autoscaler to reason about scaling? #1
hwclass announced in Announcements
Replies: 1 comment
As mentioned in https://www.linkedin.com/feed/update/urn:li:ugcPost:7387501476232900609?commentUrn=urn%3Ali%3Acomment%3A%28ugcPost%3A7387501476232900609%2C7389598203366772736%29&dashCommentUrn=urn%3Ali%3Afsd_comment%3A%287389598203366772736%2Curn%3Ali%3AugcPost%3A7387501476232900609%29, scaling based on NATS queue backlog (count of pending messages) would be great.
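A minimal sketch of how that backlog signal could drive a replica count, assuming a JetStream consumer and the nats-py client; the `ORDERS` stream, `workers` consumer, server URL, and messages-per-replica target are all placeholders:

```python
# Hypothetical sketch: turn a NATS JetStream backlog into a replica count.
import asyncio
import nats  # pip install nats-py

async def backlog(stream: str, consumer: str) -> int:
    nc = await nats.connect("nats://localhost:4222")
    try:
        jsm = nc.jsm()
        info = await jsm.consumer_info(stream, consumer)
        # num_pending = messages not yet delivered to this consumer
        return info.num_pending
    finally:
        await nc.close()

def desired_replicas(pending: int, per_replica: int = 100, max_replicas: int = 10) -> int:
    # One replica per `per_replica` pending messages, clamped to [1, max_replicas].
    return max(1, min(max_replicas, -(-pending // per_replica)))

if __name__ == "__main__":
    pending = asyncio.run(backlog("ORDERS", "workers"))
    print(f"pending={pending} -> replicas={desired_replicas(pending)}")
```

An LLM-based scaler could take that backlog number as one more signal alongside CPU rather than applying the ratio mechanically.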
Hey everyone 👋
I’ve been experimenting with Docker’s new AI stack — Model Runner, cagent, and MCP — to build something I call Docktor 🐳🩺, an AI-native autoscaler for Docker Compose.
Instead of static scaling rules or thresholds, Docktor uses local LLM reasoning to decide when to scale up or down, and why, based on live container metrics like CPU usage.
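To make that concrete, here is a minimal sketch of the kind of loop I mean, not Docktor's actual code: it samples per-replica CPU through the Docker SDK, asks a locally served model (any OpenAI-compatible endpoint, such as the one Model Runner exposes) for a replica count plus a reason, and applies the decision with `docker compose`. The endpoint URL, model name, and `web` service name are placeholders.

```python
# Hypothetical sketch of an LLM-driven scaling loop (not Docktor's actual code).
import json
import subprocess

import docker              # pip install docker
from openai import OpenAI  # pip install openai

# Placeholder endpoint: any OpenAI-compatible server, e.g. Docker Model Runner.
llm = OpenAI(base_url="http://localhost:12434/engines/v1", api_key="not-needed")

def cpu_percent(stats: dict) -> float:
    # Docker's one-shot stats snapshot embeds the previous sample, so a delta works.
    cpu = stats["cpu_stats"]["cpu_usage"]["total_usage"] - stats["precpu_stats"]["cpu_usage"]["total_usage"]
    system = stats["cpu_stats"].get("system_cpu_usage", 0) - stats["precpu_stats"].get("system_cpu_usage", 0)
    ncpus = stats["cpu_stats"].get("online_cpus", 1)
    return 100.0 * cpu / system * ncpus if system > 0 else 0.0

def sample(service: str) -> list[float]:
    # Find the service's replicas via the label Compose puts on its containers.
    client = docker.from_env()
    containers = client.containers.list(
        filters={"label": f"com.docker.compose.service={service}"}
    )
    return [cpu_percent(c.stats(stream=False)) for c in containers]

def decide(service: str, cpus: list[float]) -> dict:
    prompt = (
        f"Service '{service}' runs {len(cpus)} replicas with CPU% {cpus}. "
        'Reply with JSON only: {"replicas": <int 1-10>, "reason": "<one sentence>"}'
    )
    resp = llm.chat.completions.create(
        model="ai/llama3.2",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    # Assumes the model returned valid JSON; real code would validate and retry.
    return json.loads(resp.choices[0].message.content)

if __name__ == "__main__":
    service = "web"  # placeholder Compose service
    decision = decide(service, sample(service))
    print("reason:", decision["reason"])
    subprocess.run(
        ["docker", "compose", "up", "-d", "--no-recreate",
         "--scale", f"{service}={decision['replicas']}"],
        check=True,
    )
```

The interesting part is the `reason` field: the scaler can log why it changed the replica count, which a bare CPU threshold never tells you.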
I’d love to hear from the community:
If you had an AI autoscaler running in your Docker setup, how would you want it to think?
Some questions to spark ideas:
If you’ve used Kubernetes Horizontal Pod Autoscalers, Prometheus, or even custom scripts, what’s something you wish they did better?
Let’s discuss how we could make scaling intelligent, explainable, and local-first.
Feel free to drop your thoughts below 👇 🤗
📎 Context / Links
Docker Model Runner, cagent, MCP