MCPizer lets your AI assistant (Claude, VS Code, etc.) call any REST API or gRPC service by automatically converting their schemas into MCP (Model Context Protocol) tools.
Key Features:
- 🚀 GitHub integration - fetch schemas directly with `github://` URLs
- 📄 .proto file support - use gRPC without reflection enabled
- 🔐 Private repo support - automatic authentication via `gh` CLI
- 🌐 Connect-RPC support - HTTP/JSON and gRPC modes
- 🔧 Auto-discovery - finds OpenAPI/Swagger endpoints automatically
MCPizer is a server that:
- Auto-discovers API schemas from your services (OpenAPI/Swagger, gRPC reflection, .proto files)
- Converts them into tools your AI can use
- Handles all the API calls with proper types and error handling
Works with any framework that exposes OpenAPI schemas (FastAPI, Spring Boot, Express, etc.) or gRPC services (with reflection or .proto files). No code changes needed in your APIs - just point MCPizer at them!
sequenceDiagram
participant AI as AI Assistant<br/>(Claude/VS Code)
participant MCP as MCPizer
participant API as Your APIs<br/>(REST/gRPC)
Note over AI,API: Initial Setup
MCP->>API: Auto-discover schemas
API-->>MCP: OpenAPI/gRPC reflection
MCP->>MCP: Convert to MCP tools
Note over AI,API: Runtime Usage
AI->>MCP: List available tools
MCP-->>AI: Tools from all APIs
AI->>MCP: Call tool "create_user"
MCP->>API: POST /users
API-->>MCP: {"id": 123, "name": "Alice"}
MCP-->>AI: Tool result
graph TB
subgraph "AI Assistants"
Claude[Claude Desktop]
VSCode[VS Code Extensions]
Other[Other MCP Clients]
end
subgraph "MCPizer"
Transport{Transport Layer}
Discovery[Schema Discovery]
Converter[Tool Converter]
Invoker[API Invoker]
Transport -->|STDIO/SSE| Discovery
Discovery --> Converter
Converter --> Invoker
end
subgraph "Your APIs"
FastAPI[FastAPI<br/>Auto-discovery]
Spring[Spring Boot<br/>Auto-discovery]
gRPC[gRPC Services<br/>Reflection/.proto]
Custom[Custom APIs<br/>Direct schema URL]
end
Claude --> Transport
VSCode --> Transport
Other --> Transport
Invoker --> FastAPI
Invoker --> Spring
Invoker --> gRPC
Invoker --> Custom
style MCPizer fill:#e1f5e1
style Transport fill:#fff2cc
style Discovery fill:#fff2cc
style Converter fill:#fff2cc
style Invoker fill:#fff2cc
# Install MCPizer
go install github.com/i2y/mcpizer/cmd/mcpizer@latest
# Verify installation
mcpizer --help
# Use default config file (configs/mcpizer.yaml)
mcpizer
# Specify config file via command line (highest priority)
mcpizer -config=/path/to/config.yaml
# Use GitHub-hosted config
mcpizer -config=github://myorg/configs/mcpizer-prod.yaml
# Or via environment variable
export MCPIZER_CONFIG_FILE=/path/to/config.yaml
mcpizer
# STDIO mode with custom config
mcpizer -transport=stdio -config=./my-config.yaml
Note: Make sure `$GOPATH/bin` is in your `PATH`. If Go is not installed, install Go first.
Create a config file with your API endpoints:
schema_sources:
# Production APIs with HTTPS
- https://api.mycompany.com # Auto-discovers OpenAPI
- https://api.example.com/openapi.json # Direct schema URL
# GitHub-hosted schemas (NEW: use github:// URLs)
- github://myorg/api-specs/main/user-api.yaml # Uses gh CLI auth
- github://OAI/OpenAPI-Specification/examples/v3.0/petstore.yaml@master
- https://raw.githubusercontent.com/myorg/api-specs/main/user-api.yaml # Direct URL also works
# Internal services (FastAPI, Spring Boot, etc.)
- http://my-fastapi-app:8000 # Auto-discovers at /openapi.json, /docs
- http://spring-service:8080 # Auto-discovers at /v3/api-docs
# gRPC services (must have reflection enabled)
- grpc://my-grpc-service:50051
# gRPC with .proto files (NEW! - no reflection needed)
- url: https://raw.githubusercontent.com/myorg/protos/main/service.proto
server: grpc://production.example.com:50051
# Or use github:// for private repos (uses gh CLI)
- url: github://myorg/protos/service.proto@main
server: grpc://production.example.com:50051
# Connect-RPC services (NEW!)
# If the service supports gRPC reflection:
- grpc://connect.example.com:50051
# Connect-RPC with HTTP/JSON mode:
- url: github://connectrpc/examples/eliza/eliza.proto
server: https://demo.connectrpc.com
type: connect
mode: http # Use HTTP/JSON for easier debugging
# Local development
- http://localhost:3000
- grpc://localhost:50052
# Public test APIs
- https://petstore3.swagger.io/api/v3/openapi.json
- grpc://grpcb.in:9000
MCPizer supports two transport modes:
Used by clients that start MCPizer as a subprocess and communicate via standard input/output.
Example: Claude Desktop
Add to your configuration file:
- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
- Linux: `~/.config/Claude/claude_desktop_config.json`
{
"mcpServers": {
"mcpizer": {
"command": "mcpizer",
"args": ["-transport=stdio", "-config=/path/to/your/config.yaml"]
}
}
}
The client will start MCPizer automatically when needed.
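To see what that subprocess conversation looks like, here is a minimal Go sketch (not MCPizer or Claude code) that spawns the binary and sends the same `tools/list` request used in the quick test further down; the config path is a placeholder.

```go
package main

import (
    "bufio"
    "fmt"
    "os/exec"
)

func main() {
    // Spawn MCPizer the way an MCP client would (config path is a placeholder).
    cmd := exec.Command("mcpizer", "-transport=stdio", "-config=/path/to/your/config.yaml")
    stdin, _ := cmd.StdinPipe()
    stdout, _ := cmd.StdoutPipe()
    if err := cmd.Start(); err != nil {
        panic(err)
    }

    // One JSON-RPC message per line on stdin.
    fmt.Fprintln(stdin, `{"jsonrpc":"2.0","method":"tools/list","id":1}`)

    // Read the JSON-RPC response from stdout.
    sc := bufio.NewScanner(stdout)
    sc.Buffer(make([]byte, 1024*1024), 1024*1024) // tool lists can be large
    if sc.Scan() {
        fmt.Println(sc.Text())
    }
    stdin.Close()
    cmd.Wait()
}
```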
Used by clients that connect to a running MCPizer server via HTTP.
# Start MCPizer server (if your client doesn't start it automatically)
mcpizer
# Server runs at http://localhost:8080/sse
Configure your MCP client to connect to http://localhost:8080/sse
Note: Some clients may start the server automatically, while others require manual startup.
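If you just want to confirm the SSE endpoint is up without an MCP client, a minimal Go sketch like the following (assuming the default `:8080` listen address) opens the stream and prints whatever the server sends.

```go
package main

import (
    "bufio"
    "fmt"
    "net/http"
)

func main() {
    // Open the SSE stream exposed by a running `mcpizer` server.
    resp, err := http.Get("http://localhost:8080/sse")
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    // SSE is plain text: "event:" and "data:" lines separated by blank lines.
    sc := bufio.NewScanner(resp.Body)
    for sc.Scan() {
        fmt.Println(sc.Text())
    }
}
```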
# Quick test - list available tools
mcpizer -transport=stdio << 'EOF'
{"jsonrpc":"2.0","method":"tools/list","id":1}
EOF
# Interactive mode
mcpizer -transport=stdio
| I want to... | Do this... |
|---|---|
| Use my API with Claude Desktop | Add config to `claude_desktop_config.json` (see Quick Start) |
| Test if my API works with MCP | Run `mcpizer -transport=stdio` and check the tool list |
| Run as a background service | Use SSE mode with `mcpizer` (no args) |
| Debug connection issues | Set `MCPIZER_LOG_LEVEL=debug` |
| Use a private GitHub repo | Use `github://` URLs (requires `gh` CLI) |
| Use gRPC without reflection | Use .proto files with the `server` field |
| Multiple environments, same API | Use the same schema file with different `server` values |
MCPizer looks for config in this order:
1. `-config` command line flag (highest priority)
2. `$MCPIZER_CONFIG_FILE` environment variable
3. `configs/mcpizer.yaml` (default)
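Expressed as a small Go sketch, the lookup order is simply flag, then environment variable, then default (an illustration of the rule above, not MCPizer's actual startup code):

```go
package main

import (
    "flag"
    "fmt"
    "os"
)

func main() {
    // Illustration of the lookup order: flag, then env var, then the bundled default.
    configFlag := flag.String("config", "", "path to config file")
    flag.Parse()

    path := *configFlag
    if path == "" {
        path = os.Getenv("MCPIZER_CONFIG_FILE")
    }
    if path == "" {
        path = "configs/mcpizer.yaml"
    }
    fmt.Println("using config:", path)
}
```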
REST APIs (OpenAPI/Swagger)
schema_sources:
# Auto-discovery from base URL
- https://api.production.com # Tries /openapi.json, /swagger.json, etc.
- http://internal-api:8000 # For internal services
# Direct schema URLs
- https://api.example.com/v3/openapi.yaml
- https://raw.githubusercontent.com/company/api-specs/main/openapi.json
Connect-RPC Services (NEW!)
schema_sources:
# Connect-RPC with gRPC reflection (if supported)
- grpc://connect.example.com:50051
# Connect-RPC with HTTP/JSON mode
- url: github://connectrpc/examples/eliza/eliza.proto
server: https://demo.connectrpc.com
type: connect
mode: http # HTTP/JSON mode (default)
# Connect-RPC with gRPC mode
- url: https://raw.githubusercontent.com/myorg/protos/service.proto
server: grpc://connect.example.com:50051
type: connect
mode: grpc # Use gRPC transport
Connect-RPC features:
- HTTP/JSON mode: Human-readable, works with curl and browser tools
- gRPC mode: Binary protocol, more efficient
- Dual support: Same service can be accessed via both modes
- No proxy needed: Direct HTTP/JSON communication
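To make HTTP/JSON mode concrete: calling the Eliza demo service referenced above is just an HTTP POST with a JSON body. The sketch below assumes the demo's published route `connectrpc.eliza.v1.ElizaService/Say`; adjust the path for your own services.

```go
package main

import (
    "fmt"
    "io"
    "net/http"
    "strings"
)

func main() {
    // Connect-RPC in HTTP/JSON mode is POST <base>/<package.Service>/<Method> with a JSON body.
    url := "https://demo.connectrpc.com/connectrpc.eliza.v1.ElizaService/Say"
    resp, err := http.Post(url, "application/json", strings.NewReader(`{"sentence": "Hello"}`))
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    body, _ := io.ReadAll(resp.Body)
    fmt.Println(string(body)) // JSON response from the service
}
```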
MCPizer supports OpenAPI schema files that are hosted separately from the actual API server. This is useful when:
- The API doesn't expose its own schema - You can write an OpenAPI spec for any API
- Schema is managed separately - Documentation team maintains schemas independently
- Multiple environments - One schema file for dev/staging/production APIs
How it works:
schema_sources:
# Schema file points to production API
- https://docs.company.com/api/v1/openapi.yaml
# Local schema file for external API
- ./schemas/third-party-api.yaml
The OpenAPI spec contains server URLs:
servers:
- url: https://api.production.com
description: Production server
- url: https://api.staging.com
description: Staging server
MCPizer will:
- Fetch the schema from the schema_sources URL
- Read the `servers` section from the OpenAPI spec
- Use the first available server URL for actual API calls
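A simplified Go sketch of that "first server wins" behaviour (an illustration, not MCPizer's actual code; it assumes the `gopkg.in/yaml.v3` package):

```go
package main

import (
    "fmt"

    "gopkg.in/yaml.v3"
)

// Only the part of the OpenAPI document we care about here.
type openAPIDoc struct {
    Servers []struct {
        URL         string `yaml:"url"`
        Description string `yaml:"description"`
    } `yaml:"servers"`
}

func main() {
    spec := []byte(`
servers:
  - url: https://api.production.com
    description: Production server
  - url: https://api.staging.com
    description: Staging server
`)
    var doc openAPIDoc
    if err := yaml.Unmarshal(spec, &doc); err != nil {
        panic(err)
    }
    // The first entry in the servers list becomes the base URL for tool calls.
    fmt.Println("calling API at:", doc.Servers[0].URL)
}
```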
Example: Creating OpenAPI spec for an API without documentation
If you have an API at `https://internal-api.company.com` that doesn't provide OpenAPI:
- Write your own OpenAPI spec:
openapi: 3.0.0
info:
title: Internal API
version: 1.0.0
servers:
- url: https://internal-api.company.com
paths:
/users:
get:
summary: List users
responses:
'200':
description: Success
content:
application/json:
schema:
type: array
items:
type: object
properties:
id: {type: integer}
name: {type: string}
- Host it anywhere:
  - GitHub: `https://raw.githubusercontent.com/yourorg/specs/main/api.yaml`
  - S3/CDN: `https://cdn.company.com/api-specs/v1/openapi.json`
  - Local file: `./schemas/third-party-api.yaml`
- Point MCPizer to your schema file
graph TD
Start["Base URL provided:<br/>http://your-api:8000"]
Try1["/openapi.json<br/>FastAPI default"]
Try2["/docs/openapi.json<br/>FastAPI alt"]
Try3["/swagger.json<br/>Swagger 2.0"]
Try4["/v3/api-docs<br/>Spring Boot"]
Try5["...more paths..."]
Found["✓ Schema found!<br/>Parse and convert"]
NotFound["✗ Not found<br/>Try direct URL"]
Start --> Try1
Try1 -->|404| Try2
Try2 -->|404| Try3
Try3 -->|404| Try4
Try4 -->|404| Try5
Try1 -->|200| Found
Try2 -->|200| Found
Try3 -->|200| Found
Try4 -->|200| Found
Try5 -->|All fail| NotFound
style Start fill:#e3f2fd
style Found fill:#c8e6c9
style NotFound fill:#ffcdd2
Supported frameworks:
- FastAPI: `/openapi.json`, `/docs/openapi.json`
- Spring Boot: `/v3/api-docs`, `/swagger-ui/swagger.json`
- Express/NestJS: `/api-docs`, `/swagger.json`
- Rails: `/api/v1/swagger.json`, `/apidocs`
- See full list
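The probe loop behind the diagram boils down to trying these paths in order until one answers 200. A simplified sketch (the real path list and ordering live in MCPizer itself):

```go
package main

import (
    "fmt"
    "net/http"
)

// discoverSchemaURL probes well-known schema paths and returns the first one that answers 200 OK.
func discoverSchemaURL(base string) (string, error) {
    candidates := []string{
        "/openapi.json", "/docs/openapi.json", // FastAPI
        "/v3/api-docs", "/swagger-ui/swagger.json", // Spring Boot
        "/api-docs", "/swagger.json", // Express/NestJS, Swagger 2.0
    }
    for _, p := range candidates {
        resp, err := http.Get(base + p)
        if err != nil {
            continue
        }
        resp.Body.Close()
        if resp.StatusCode == http.StatusOK {
            return base + p, nil
        }
    }
    return "", fmt.Errorf("no schema found under %s", base)
}

func main() {
    url, err := discoverSchemaURL("http://localhost:8000")
    if err != nil {
        fmt.Println(err)
        return
    }
    fmt.Println("schema found at:", url)
}
```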
gRPC Services
schema_sources:
# Using gRPC reflection (requires reflection enabled on server)
- grpc://your-grpc-host:50051 # Your service
- grpc://grpcb.in:9000 # Public test service
# Using .proto files (NEW! - no reflection needed)
- url: https://raw.githubusercontent.com/grpc/grpc-go/master/examples/helloworld/helloworld/helloworld.proto
server: grpc://production.example.com:50051
# Private GitHub .proto files (uses gh CLI authentication)
- url: github://myorg/protos/user-service.proto
server: grpc://user-service:50051
# With specific branch/tag
- url: github://grpc/grpc-go/examples/helloworld/helloworld/[email protected]
server: grpc://production.example.com:50051
Option 1: gRPC Reflection (requires reflection enabled):
// In your gRPC server
import "google.golang.org/grpc/reflection"
reflection.Register(grpcServer)
Option 2: .proto Files (NEW! - more secure, no reflection needed):
- Host your `.proto` files anywhere (GitHub, S3, CDN, etc.)
- GitHub URLs (`github://`) automatically use `gh` CLI authentication
- Specify the `server` endpoint separately
- Perfect for production where reflection is disabled
- Allows schema versioning and CI/CD validation
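For a feel of what "no reflection needed" means, the service surface can be read straight from the `.proto` file. The sketch below uses `github.com/jhump/protoreflect` as one possible parser and a hypothetical local `helloworld.proto`; MCPizer's own parsing may differ.

```go
package main

import (
    "fmt"

    "github.com/jhump/protoreflect/desc/protoparse"
)

func main() {
    // Parse a local .proto file and list its services/methods - the same information
    // gRPC reflection would expose, but taken from the schema file instead.
    parser := protoparse.Parser{}
    files, err := parser.ParseFiles("helloworld.proto") // hypothetical file in the working directory
    if err != nil {
        panic(err)
    }
    for _, fd := range files {
        for _, svc := range fd.GetServices() {
            for _, m := range svc.GetMethods() {
                fmt.Printf("%s/%s\n", svc.GetFullyQualifiedName(), m.GetName())
            }
        }
    }
}
```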
For alternative reflection implementations, see:
- connectrpc/grpcreflect-go - Connect-Go's reflection implementation
Local Files
schema_sources:
- ./api-spec.json
- /path/to/openapi.yaml
MCPizer can fetch schemas directly from GitHub repositories using the `gh` CLI tool - including both OpenAPI and .proto files:
schema_sources:
# OpenAPI schemas from GitHub
- github://owner/repo/path/to/openapi.yaml
- github://microsoft/api-guidelines/graph/[email protected]
# .proto files from GitHub (NEW!)
- url: github://grpc/grpc-go/examples/helloworld/helloworld/helloworld.proto@master
server: grpc://production.example.com:50051
# Private repositories (uses gh CLI authentication)
- github://myorg/private-apis/user-api.yaml
- url: github://myorg/private-protos/[email protected]
server: grpc://internal-service:50051
# Load MCPizer config itself from GitHub!
# Set MCPIZER_CONFIG_FILE=github://myorg/configs/mcpizer.yaml
Benefits:
- ✅ Works with private repositories (uses `gh` authentication)
- ✅ Specify branches/tags with `@ref` syntax
- ✅ No need to manage raw GitHub URLs or tokens
- ✅ Supports both OpenAPI and .proto files
- ✅ Config files can also be stored in GitHub
Requirements:
- Install GitHub CLI: `brew install gh` (macOS) or see docs
- Authenticate: `gh auth login`
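One way a `github://` source can be resolved with nothing but the `gh` CLI (an illustration of the mechanism, not MCPizer's internals):

```go
package main

import (
    "fmt"
    "os/exec"
)

// fetchFromGitHub downloads a file via `gh api`, reusing your `gh auth login` session.
// The "Accept: application/vnd.github.raw" header makes the API return the raw file body.
func fetchFromGitHub(owner, repo, path, ref string) ([]byte, error) {
    endpoint := fmt.Sprintf("repos/%s/%s/contents/%s", owner, repo, path)
    if ref != "" {
        endpoint += "?ref=" + ref
    }
    return exec.Command("gh", "api", "-H", "Accept: application/vnd.github.raw", endpoint).Output()
}

func main() {
    // Equivalent of github://OAI/OpenAPI-Specification/examples/v3.0/petstore.yaml@master
    data, err := fetchFromGitHub("OAI", "OpenAPI-Specification", "examples/v3.0/petstore.yaml", "master")
    if err != nil {
        panic(err)
    }
    fmt.Printf("fetched %d bytes\n", len(data))
}
```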
| Variable | Default | When to use |
|---|---|---|
| `MCPIZER_CONFIG_FILE` | `~/.mcpizer.yaml` | Different config per environment. Can be a `github://` URL! |
| `MCPIZER_LOG_LEVEL` | `info` | Set to `debug` for troubleshooting |
| `MCPIZER_LOG_FILE` | `/tmp/mcpizer.log` | Change log location (STDIO mode) |
| `MCPIZER_LISTEN_ADDR` | `:8080` | Change port (SSE mode) |
| `MCPIZER_HTTP_CLIENT_TIMEOUT` | `30s` | Slow APIs need more time |
# 1. Your FastAPI runs on port 8000
python -m uvicorn main:app
# 2. Install MCPizer
go install github.com/i2y/mcpizer/cmd/mcpizer@latest
# 3. Configure (~/.mcpizer.yaml)
printf 'schema_sources:\n  - http://localhost:8000\n' > ~/.mcpizer.yaml
# 4. Add to Claude Desktop config and restart
# Now ask Claude: "What endpoints are available?"
# Quick check - what tools are available?
mcpizer -transport=stdio << 'EOF'
{"jsonrpc":"2.0","method":"tools/list","id":1}
EOF
# Should list all your API endpoints as tools
# For APIs that require authentication headers
schema_sources:
# Object format with headers (for fetching schemas)
- url: https://api.example.com/openapi.json
headers:
Authorization: "Bearer YOUR_API_TOKEN"
X-API-Key: "YOUR_API_KEY"
# GitHub private repos (automatic auth via gh CLI)
- github://myorg/private-apis/openapi.yaml # No headers needed!
- url: github://myorg/private-protos/api.proto # gh handles auth
server: grpc://api.example.com:50051
# Simple format (no auth required)
- https://public-api.example.com/swagger.json
Note: These headers are used when fetching the schema files. Headers required for actual API calls should be defined in the OpenAPI spec itself.
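In Go terms, the `headers` entry corresponds to something like the following when the schema is fetched; the URL and token values are the placeholders from the config above (a sketch, not MCPizer's fetcher):

```go
package main

import (
    "fmt"
    "io"
    "net/http"
)

func main() {
    // The headers from the config entry are attached to the schema fetch only,
    // not to the API calls generated from that schema.
    req, err := http.NewRequest(http.MethodGet, "https://api.example.com/openapi.json", nil)
    if err != nil {
        panic(err)
    }
    req.Header.Set("Authorization", "Bearer YOUR_API_TOKEN")
    req.Header.Set("X-API-Key", "YOUR_API_KEY")

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    schema, _ := io.ReadAll(resp.Body)
    fmt.Printf("fetched schema: %d bytes\n", len(schema))
}
```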
# 1. Check if your API is running
curl http://localhost:8000/openapi.json # Should return JSON
# 2. Run with debug logging
MCPIZER_LOG_LEVEL=debug mcpizer -transport=stdio
# 3. Check the log file
tail -f /tmp/mcpizer.log
Option 1: If reflection is enabled
# Simple - just point to the service
schema_sources:
- grpc://my-service:50051
Option 2: Using .proto files (recommended)
# More secure - no reflection needed in production
schema_sources:
# From GitHub (private repos supported)
- url: github://mycompany/protos/[email protected]
server: grpc://user-service.prod:443
# From any HTTPS URL
- url: https://cdn.mycompany.com/schemas/order-service.proto
server: grpc://order-service.prod:443
Option 1: Direct binary execution
# Run in background with specific config
mcpizer -config /etc/mcpizer/production.yaml &
# Or use systemd (create /etc/systemd/system/mcpizer.service)
[Unit]
Description=MCPizer MCP Server
After=network.target
[Service]
Type=simple
ExecStart=/usr/local/bin/mcpizer
Environment="MCPIZER_CONFIG_FILE=/etc/mcpizer/production.yaml"
Restart=always
User=mcpizer
[Install]
WantedBy=multi-user.target
# See what's happening
MCPIZER_LOG_LEVEL=debug mcpizer -transport=stdio
# Watch logs (STDIO mode)
tail -f /tmp/mcpizer.log
# Test your API is accessible
curl http://your-api-host:8000/openapi.json
# Test gRPC reflection
grpcurl -plaintext your-grpc-host:50051 list
| Problem | Solution |
|---|---|
| "No tools available" | • Check API is running • Try direct schema URL • Check debug logs |
| "Connection refused" | • Wrong port? • Check if API is running • Firewall blocking? |
| "String should have at most 64 characters" | Update MCPizer - this is fixed in the latest version |
| gRPC "connection refused" | • Enable reflection in your gRPC server • Check with `grpcurl` • Or use the .proto file approach instead |
| "Schema not found at base URL" | • Specify exact schema path • Check if API exposes OpenAPI |
| ".proto file missing server" | • Add `server: grpc://host:port` to your config • Required for .proto files |
Here's how MCPizer works with a FastAPI service:
flowchart LR
subgraph "Your FastAPI App"
API[FastAPI Service<br/>Port 8000]
Schema["/openapi.json<br/>Auto-generated"]
API --> Schema
end
subgraph "MCPizer Config"
Config["~/.mcpizer.yaml<br/>schema_sources:<br/>http://my-fastapi:8000"]
end
subgraph "MCPizer Process"
Discover["(1) Discover schema<br/>at /openapi.json"]
Convert["(2) Convert endpoints<br/>to MCP tools"]
Register["(3) Register tools<br/>with MCP protocol"]
Discover --> Convert
Convert --> Register
end
subgraph "AI Assistant"
List["List tools:<br/>• get_item<br/>• create_item<br/>• update_item"]
Call["Call: get_item<br/>{item_id: 123}"]
Result["Result:<br/>{id: 123, name: 'Test'}"]
List --> Call
Call --> Result
end
Config --> Discover
Schema --> Discover
Register --> List
Call -->|HTTP GET /items/123| API
API -->|JSON Response| Result
style API fill:#e8f4fd
style Config fill:#fff4e6
style Register fill:#e8f5e9
style Result fill:#f3e5f5
# main.py
from fastapi import FastAPI
app = FastAPI()
@app.get("/items/{item_id}")
def get_item(item_id: int, q: str = None):
return {"item_id": item_id, "q": q}
# MCPizer auto-discovers at http://localhost:8000/openapi.json
Option 1: Using Reflection
// Enable reflection for MCPizer
import (
    "net"

    "google.golang.org/grpc"
    "google.golang.org/grpc/reflection"
)

func main() {
    lis, _ := net.Listen("tcp", ":50051")
    s := grpc.NewServer()
    pb.RegisterYourServiceServer(s, &server{})
    reflection.Register(s) // This line enables MCPizer support
    s.Serve(lis)
}
Option 2: Using .proto Files (Recommended for Production)
# config.yaml
schema_sources:
# Your .proto file in version control
- url: github://myorg/protos/[email protected]
server: grpc://user-service.prod.example.com:443
# Multiple environments, same schema
- url: github://myorg/protos/[email protected]
server: grpc://user-service.staging.example.com:443
Benefits:
- ✅ No reflection needed in production
- ✅ Version-controlled schemas
- ✅ CI/CD can validate schemas
- ✅ Same .proto for multiple environments
# Run tests
go test ./...
# Run integration tests (requires internet connection)
go test -tags=integration ./...
# Build locally
go build -o mcpizer ./cmd/mcpizer
# Run with example services (includes Petstore, gRPC test service, Jaeger)
docker compose up
# Run individual examples
cd examples/fastapi && pip install -r requirements.txt && python main.py
See examples/ for more complete examples:
- proto-config.yaml - Using .proto files with multiple environments
- fastapi/ - FastAPI integration example
- grpc-service/ - gRPC service with reflection
Contributions welcome! Please:
- Check existing issues first
- Fork and create a feature branch
- Add tests for new functionality
- Submit a PR
MIT - see LICENSE