The smart edge and AI gateway for agents. Arch is a proxy server that handles the low-level work of building agents, such as applying guardrails, routing prompts to the right agent, and unifying access to LLMs. It's framework-agnostic, natively understands prompts, and helps you build agents faster.
A robust, configurable LLM proxy server built with Node.js, Express, and PostgreSQL. It acts as an intermediary between your applications and various Large Language Model (LLM) providers.
A personal LLM gateway providing fault-tolerant calls to models from any provider with an OpenAI-compatible API. Advanced features such as retries, model sequencing, and body parameter injection are also available. Especially useful when working with AI coding assistants like Cline and RooCode and providers like OpenRouter.
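The retry-plus-model-sequencing idea above can be sketched in TypeScript. This is a minimal, hypothetical illustration, not the gateway's actual code: `callWithFallback` and `CallFn` are names invented here, and the real project presumably adds timeouts, backoff, and provider-specific error handling.

```typescript
// Hypothetical sketch: try each model in sequence, retrying each a fixed
// number of times before falling through to the next, as a fault-tolerant
// gateway might for OpenAI-compatible providers.
type CallFn = (model: string) => Promise<string>;

async function callWithFallback(
  models: string[],          // model sequence to try, in priority order
  call: CallFn,              // performs one request against a given model
  retriesPerModel = 2,       // attempts per model before moving on
): Promise<string> {
  if (models.length === 0) throw new Error("no models configured");
  let lastError: unknown;
  for (const model of models) {
    for (let attempt = 0; attempt < retriesPerModel; attempt++) {
      try {
        return await call(model);        // first success wins
      } catch (err) {
        lastError = err;                 // remember the failure, keep going
      }
    }
  }
  throw lastError;                       // every model in the sequence failed
}
```

With `callWithFallback(["gpt-4o", "gpt-4o-mini"], doRequest)`, the second model is only consulted after the first has exhausted its retries.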
Configurable, interactive proxy server for all LLM hackers. Features API key rotation, protocol conversion, and piping API traffic through locally installed CLI apps like gemini-cli. Route any app to any remote LLM model or backend, and override hardcoded models.
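API key rotation, one of the features listed above, can be as simple as a round-robin selector over the configured keys. A minimal sketch (the `makeKeyRotator` name is illustrative, not taken from the project):

```typescript
// Hypothetical round-robin API key rotation: each call to the returned
// function yields the next configured key, wrapping around at the end,
// so upstream traffic is spread evenly across credentials.
function makeKeyRotator(keys: string[]): () => string {
  if (keys.length === 0) throw new Error("no API keys configured");
  let next = 0;
  return () => {
    const key = keys[next];
    next = (next + 1) % keys.length;   // advance, wrapping around
    return key;
  };
}
```

A proxy would call the rotator once per outgoing request and set the chosen key in the `Authorization` header; smarter schemes also skip keys that have recently hit rate limits.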
A TypeScript wrapper that seamlessly routes across multiple Vercel AI providers by model name, offering flexibility and extensibility for managing diverse AI services. Inspired by litellm's provider-routing architecture, but optimized for TypeScript/Vercel workflows.
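Routing by model name, litellm-style, typically means parsing a `provider/model` identifier and dispatching to the matching provider. A hedged sketch of that core step; the `Provider` shape and `routeByModel` function are assumptions for illustration, not this wrapper's actual API:

```typescript
// Hypothetical provider routing by model-name prefix, in the spirit of
// litellm's "provider/model" identifiers (e.g. "openai/gpt-4o").
type Provider = {
  name: string;
  invoke: (model: string, prompt: string) => Promise<string>;
};

function routeByModel(
  providers: Record<string, Provider>,  // registry keyed by prefix
  modelId: string,                      // e.g. "openai/gpt-4o"
): { provider: Provider; model: string } {
  const slash = modelId.indexOf("/");
  if (slash < 0) throw new Error(`model id "${modelId}" lacks a provider prefix`);
  const prefix = modelId.slice(0, slash);
  const provider = providers[prefix];
  if (!provider) throw new Error(`unknown provider "${prefix}"`);
  // Hand the bare model name to the selected provider.
  return { provider, model: modelId.slice(slash + 1) };
}
```

Keeping the registry a plain map makes the scheme extensible: registering a new provider under a new prefix requires no changes to the routing logic.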