Hey — really interesting approach with static CI scanning for MCP servers. I've been thinking about the same problem from a different angle.
Your approach: Scan the server code/config for risk signals before deployment.
My question: What about runtime behavior? A server might pass all static checks but still fail 20% of the time in production.
I've been experimenting with a lightweight middleware that logs actual tool executions (success/fail, latency) and gives a simple trust verdict based on real history:
```
can_I_trust("my-server") → { verdict: "yes", success_rate: 99.2% }
```
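Roughly what I mean, as a minimal Python sketch (this is hypothetical, not veridict's actual API; the `TrustLog` class and the 95% threshold are just placeholders for illustration):

```python
import time
from collections import defaultdict

class TrustLog:
    """Hypothetical middleware: wrap tool calls, record outcome + latency,
    and derive a trust verdict from the accumulated history."""

    def __init__(self):
        # server name -> list of (succeeded, latency_seconds) records
        self.records = defaultdict(list)

    def wrap(self, server, fn):
        """Return a wrapper that logs each call's success/failure and latency."""
        def wrapped(*args, **kwargs):
            start = time.monotonic()
            try:
                result = fn(*args, **kwargs)
                self.records[server].append((True, time.monotonic() - start))
                return result
            except Exception:
                self.records[server].append((False, time.monotonic() - start))
                raise
        return wrapped

    def can_i_trust(self, server, threshold=0.95):
        """Simple verdict: 'yes' if the observed success rate meets the threshold."""
        history = self.records[server]
        if not history:
            return {"verdict": "unknown", "success_rate": None}
        rate = sum(1 for ok, _ in history if ok) / len(history)
        return {
            "verdict": "yes" if rate >= threshold else "no",
            "success_rate": round(rate * 100, 1),
        }
```

The point isn't this exact shape, just that the verdict comes from observed executions rather than anything inferable from the code ahead of time.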
It feels like static analysis (what you're doing) + runtime verification could be really complementary.
Curious:
- Have you thought about incorporating runtime data into trust scoring?
- Would a "runtime trust score" that feeds into your risk assessment be useful?
Here's my experiment if you want to look: https://github.com/xkumakichi/veridict
No pressure at all — just exploring this space and your project caught my eye.