⚠️ This is an experimental feature. The agent runtime, deployment model, and APIs described here are under active development and subject to change.
Today, you can build custom agents with GitHub Copilot. You define your agent's personality and behavior in a markdown file (AGENTS.md), add skills as knowledge files, and configure MCP servers for live data and actions. All of that just works in VS Code or Copilot CLI — locally, on your machine.
This repo demonstrates an experimental new runtime that lets you deploy the same markdown-based agent project to Azure Functions with zero code changes. The agent runs in the cloud, behind an HTTP API, and can be called from anywhere.
Key features
- Deploy markdown-based agents as an Azure Functions app
- Choose from GitHub models or Microsoft Foundry models to power your agent
- Built-in HTTP APIs for chatting with your agent (`POST /agent/chat`, `POST /agent/chatstream`)
- Built-in MCP server endpoint for remote MCP clients (`/runtime/webhooks/mcp`)
- Built-in single-page chat UI
- Automatic session persistence with Azure Files
- Run prompts on a schedule using timer triggers
- Give your agent custom tools written in plain Python
Hosting your agent in Azure Functions
Azure Functions is a serverless compute platform that already supports runtimes like JavaScript, Python, and .NET. An agent project with AGENTS.md, skills, and MCP servers is just another workload. This experiment adds a new runtime to Azure Functions that natively understands and runs markdown-based agent projects.
Development workflow:
- Define and test your agent in VS Code as a standard Copilot project
- Deploy the same project to Azure Functions with `azd up`
- Your agent is now a cloud-hosted HTTP API — no rewrites needed
This repo includes a sample Microsoft expert agent that helps developers and architects look up Azure pricing, estimate costs, and answer questions using official Microsoft Learn documentation.
```
src/                          # Your agent — a pure Copilot project, no cloud knowledge
├── AGENTS.md                 # Agent instructions and behavior (+ optional frontmatter)
├── .github/skills/
│   └── azure-pricing/        # Skill: fetch real-time Azure retail pricing
│       └── SKILL.md
├── .vscode/mcp.json          # MCP servers (Microsoft Learn)
└── tools/
    └── cost_estimator.py     # Tool: estimate monthly/annual costs from a unit price
```
The src folder contains only your agent definition — no Copilot SDK, no Azure Functions code, no cloud infrastructure concerns. It's just a standard markdown-based agent project. The agent format is the programming model.
AGENTS.md supports optional YAML frontmatter. The frontmatter can be used to take your agent beyond HTTP or a chat interface by integrating with Azure Functions' event-driven programming model. For example, you can define timer-triggered functions that run on a schedule without needing to write any Azure Functions code.
- Open the `src` folder in VS Code
- Enable the experimental setting: `chat.useAgentSkills`
- Enable built-in tools in Copilot Chat
- Start chatting with your agent in Copilot Chat
Your agent's instructions from AGENTS.md, skills from .github/skills/, and MCP servers from .vscode/mcp.json are all automatically loaded.
The Azure Functions deployment requires a GitHub token with Copilot permissions. GitHub Copilot SDK (which is used by Functions to run your agent) currently requires authentication to persist and resume sessions (even though sessions are stored locally). If you choose a GitHub model to power your agent (see Model Selection), the token is also used to access the model.
- Go to https://github.com/settings/personal-access-tokens/new
- Under Permissions, click + Add permissions
- In the Select account permissions dropdown, check Copilot Requests (Read-only)
- Click Generate token and save it securely
From the terminal, run the following command:
```bash
azd up
```

Within minutes, you have a fully deployed agent behind an HTTP API and a built-in chat UI. The same source code that runs locally in Copilot Chat now runs remotely on Azure Functions.
During deployment, you'll be prompted for:
| Prompt | Description |
|---|---|
| Azure Location | Azure region for deployment |
| GitHub Token | Your GitHub PAT with Copilot Requests permission (required — used for session persistence and GitHub model access) |
| Model Selection | Which model to use (see below) |
| VNet Enabled | Whether to deploy with VNet integration |
You can choose from two categories of models:
- GitHub models (`github:` prefix) — Use the GitHub Copilot model API. No additional Azure infrastructure is deployed. Examples: `github:claude-sonnet-4.6`, `github:claude-opus-4.6`, `github:gpt-5.2`
- Microsoft Foundry models (`foundry:` prefix) — Deploys a Microsoft Foundry account and model in your subscription. Examples: `foundry:gpt-4.1-mini`, `foundry:claude-opus-4-6`, `foundry:o4-mini`
To change the model after initial deployment:
```bash
azd env set MODEL_SELECTION "github:gpt-5.2"
azd up
```

When running in Azure, agent sessions are automatically persisted to an Azure Files share mounted into the function app. This means conversation state survives across function app restarts and is shared across all instances, enabling multi-turn conversations with session resumption.
Locally, sessions are stored in ~/.copilot/session-state/.
You can define scheduled agent runs directly in src/AGENTS.md frontmatter using a functions array.
```yaml
---
functions:
  - name: timerAgent
    trigger: timer
    schedule: "0 */2 * * * *"
    prompt: "What's the price of a Standard_D4s_v5 VM in East US?"
    logger: true
---
```

Current behavior:
- Only `trigger: timer` is supported right now. Other trigger types are explicitly rejected at startup.
- The `functions` section is optional.
- `schedule` and `prompt` are required for timer entries.
- `name` is optional (a safe unique name is generated if omitted).
- `logger` is optional and defaults to `true`.
When `logger: true`, the timer logs full agent output via `logging.info`, including:
- `session_id`
- final `response`
- `response_intermediate`
- `tool_calls`
Timer functions are registered at startup from frontmatter and run in the same runtime as /agent/chat.
You can add custom tools by dropping plain Python files into src/tools/.
Example:
```python
from pydantic import BaseModel, Field


class CostEstimatorParams(BaseModel):
    unit_price: float = Field(description="Retail price per unit")
    unit_of_measure: str = Field(description="Unit of measure, e.g. '1 Hour'")
    quantity: float = Field(description="Monthly quantity")


async def cost_estimator(params: CostEstimatorParams) -> str:
    """Estimate monthly and annual costs from unit price and usage."""
    monthly_cost = params.unit_price * params.quantity
    annual_cost = monthly_cost * 12
    return f"Monthly: ${monthly_cost:.4f} | Annual: ${annual_cost:.4f}"
```

How tool discovery works:
- At runtime, the function app scans `tools/*.py` for tool definitions (a sketch of these rules follows this list).
- It loads module-level functions defined in each module and filters out names that start with `_`.
- The function docstring becomes the tool description (fallback: `Tool: <function_name>` if no docstring).
- It registers only one function per file (the first function returned from discovery, which is name-sorted).
- If a tool module fails to import/load, the runtime logs the error and continues.
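
The loader itself isn't shown here, but the rules above can be illustrated with a short sketch. This is a minimal approximation, not the actual runtime code; the helper name `discover_tools` and the return shape are assumptions for illustration only.

```python
import importlib.util
import inspect
from pathlib import Path


def discover_tools(tools_dir: str = "tools") -> dict[str, dict]:
    """Illustrative sketch of the discovery rules described above (not the real loader)."""
    registered: dict[str, dict] = {}
    for path in sorted(Path(tools_dir).glob("*.py")):
        try:
            spec = importlib.util.spec_from_file_location(path.stem, path)
            module = importlib.util.module_from_spec(spec)
            spec.loader.exec_module(module)
        except Exception as exc:
            # A module that fails to import is logged and skipped; discovery continues.
            print(f"Skipping {path.name}: {exc}")
            continue

        # Module-level functions only, excluding names that start with "_"
        # and anything imported from other modules.
        candidates = sorted(
            (name, fn)
            for name, fn in inspect.getmembers(module, inspect.isfunction)
            if not name.startswith("_") and fn.__module__ == module.__name__
        )
        if not candidates:
            continue

        # Only the first (name-sorted) function per file is registered.
        name, fn = candidates[0]
        registered[name] = {
            "fn": fn,
            "description": inspect.getdoc(fn) or f"Tool: {name}",
        }
    return registered
```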
Guidelines:
- Keep tool functions focused and deterministic.
- Prefer a typed params model (for example, a Pydantic `BaseModel`) and pass it as the function argument.
- Use clear type hints and docstrings.
- Add any Python dependencies your tools need to `src/requirements.txt`.
Important: custom Python tools run in the cloud runtime (Azure Functions). They are not executed in local Copilot Chat.
After deployment, open your function app root URL:
https://<your-app>.azurewebsites.net/
The root route serves a built-in single-page chat UI.
At first load, enter:
- Base URL (typically your function app URL)
- Chat function key (see next section for how to get this)
These values are stored in browser local storage. You can reopen/edit them later via the gear icon.
You can also prefill both values via URL hash:
https://<your-app>.azurewebsites.net/#baseUrl=https%3A%2F%2F<your-app>.azurewebsites.net&key=<url-encoded-key>
On load, the page reads and stores these values, then removes the hash from the address bar.
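
If you generate that link from a script, remember to URL-encode both values. A minimal sketch in Python, assuming you already have the function app host and the chat function key (the `baseUrl` and `key` fragment names come from the URL format above):

```python
from urllib.parse import quote


def build_chat_ui_url(app_host: str, function_key: str) -> str:
    """Build a chat UI link that prefills Base URL and function key via the URL hash."""
    base_url = f"https://{app_host}"
    # Both values are URL-encoded so they survive the hash fragment intact.
    return f"{base_url}/#baseUrl={quote(base_url, safe='')}&key={quote(function_key, safe='')}"


print(build_chat_ui_url("<your-app>.azurewebsites.net", "<function-key>"))
```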
The function app also exposes an MCP server endpoint:
https://<your-app>.azurewebsites.net/runtime/webhooks/mcp
By default, this endpoint requires the MCP extension system key in the x-functions-key header.
```bash
# Get the function app name from azd
FUNC_NAME=$(azd env get-value AZURE_FUNCTION_NAME)

# Get the resource group
RG=$(az functionapp list --query "[?name=='$FUNC_NAME'].resourceGroup" -o tsv)

# Get the MCP extension system key
MCP_KEY=$(az functionapp keys list --name "$FUNC_NAME" --resource-group "$RG" --query systemKeys.mcp_extension -o tsv)
echo "$MCP_KEY"
```

Use `inputs` with `password: true` so the MCP key isn't hardcoded in the file.
```json
{
  "inputs": [
    {
      "type": "promptString",
      "id": "functions-mcp-extension-system-key",
      "description": "Azure Functions MCP Extension System Key",
      "password": true
    },
    {
      "type": "promptString",
      "id": "functionapp-host",
      "description": "Function app host, e.g. func-api-xxxx.azurewebsites.net"
    }
  ],
  "servers": {
    "remote-mcp-function": {
      "type": "http",
      "url": "https://${input:functionapp-host}/runtime/webhooks/mcp",
      "headers": {
        "x-functions-key": "${input:functions-mcp-extension-system-key}"
      }
    }
  }
}
```

Once deployed, your agent is available as an HTTP API with two chat endpoints:
- `POST /agent/chat` for standard JSON responses
- `POST /agent/chatstream` for streaming Server-Sent Events (SSE)
```bash
curl -X POST "https://<your-app>.azurewebsites.net/agent/chat?code=<function-key>" \
  -H "Content-Type: application/json" \
  -d '{"prompt": "What is the price of a Standard_D4s_v5 VM in East US?"}'
```

Example response:

```json
{
  "session_id": "abc123-def456-...",
  "response": "The agent's final response text",
  "response_intermediate": "Any intermediate responses",
  "tool_calls": ["list of tools invoked during the response"]
}
```

The response always includes a `session_id` (also returned in the `x-ms-session-id` response header). Use this ID to continue the conversation.
To resume an existing session, pass the session ID in the x-ms-session-id request header:
```bash
# Follow-up — resumes the same session with full conversation history
curl -X POST "https://<your-app>.azurewebsites.net/agent/chat?code=<function-key>" \
  -H "Content-Type: application/json" \
  -H "x-ms-session-id: abc123-def456-..." \
  -d '{"prompt": "If I run that VM 24/7 for a month, what would it cost?"}'
```

If you omit `x-ms-session-id`, a new session is created automatically and its ID is returned in the response. See `test/test.cloud.http` for more examples.
Use POST /agent/chatstream to receive responses incrementally as SSE events.
```bash
curl -N -X POST "https://<your-app>.azurewebsites.net/agent/chatstream?code=<function-key>" \
  -H "Content-Type: application/json" \
  -H "Accept: text/event-stream" \
  -d '{"prompt": "Give me a quick summary of Azure Functions pricing in 3 bullets."}'
```

To resume an existing session, pass `x-ms-session-id` the same way as `/agent/chat`.
Typical streamed event types include:
- `session` (contains `session_id`)
- `delta` (incremental text chunks)
- `intermediate` (intermediate reasoning/response snippets)
- `tool_start` / `tool_end` (tool execution lifecycle metadata)
- `message` (final full response)
- `done` (stream completion)
Example SSE payload sequence:
data: {"type":"session","session_id":"..."}
data: {"type":"delta","content":"Hello"}
data: {"type":"tool_start","tool_name":"bash","tool_call_id":"..."}
data: {"type":"message","content":"Hello...final"}
data: {"type":"done"}
After deployment, get the function app hostname and the chat function key using the Azure CLI:
```bash
# Get the function app name from azd
FUNC_NAME=$(azd env get-value AZURE_FUNCTION_NAME)

# Get the resource group
RG=$(az functionapp list --query "[?name=='$FUNC_NAME'].resourceGroup" -o tsv)

# Get the base URL
HOST=$(az functionapp show --name "$FUNC_NAME" --resource-group "$RG" --query defaultHostName -o tsv)
echo "https://$HOST"

# Get the chat function key
az functionapp function keys list --name "$FUNC_NAME" --resource-group "$RG" --function-name chat --query default -o tsv
```

Use these values to populate `@baseUrl` and `@defaultKey` in `test/test.cloud.http`.
- Python tools in `src/tools/` do not work locally since they're not natively supported by Copilot. They are fully functional after deploying with `azd up`.
- Use `azd up`, not `azd provision` + `azd deploy` separately. The pre-package hook scripts don't run in the correct sequence when provision and deploy are executed independently.
- Windows is not supported. The packaging hooks are shell scripts (`.sh`) and require macOS, Linux, or WSL.
- Clone this repo
- Open `src` in VS Code and chat with the agent locally (MCP and skills work; Python tools require cloud deployment)
- Explore the `src` folder to see the agent definition
- Run `azd up` to deploy to Azure Functions
- Open your cloud-hosted chat UI at `/`
- Optionally call `/agent/chat` (JSON) or `/agent/chatstream` (SSE) directly (see `test/test.cloud.http` for examples)