Python + Agents livestream series: Resources #331
Replies: 58 comments 1 reply
2026/02/24: Discord OH Q&A: How does middleware work in the Agent Framework? 📹 0:01 The Agent Framework supports three types of middleware, each operating at a different level:
All three middleware types let you mutate the result if needed. You can define middleware using simple functions or classes.
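The shape of a function-style middleware can be sketched in plain Python. This is illustrative only, not the Agent Framework's actual API: a middleware receives the request plus the next handler in the chain, runs code before and after it, and can mutate the result on the way out.

```python
# Plain-Python sketch of the middleware idea (names are illustrative,
# not the Agent Framework API).
import asyncio

async def agent_handler(request: str) -> dict:
    # Stand-in for the actual agent/LLM call at the innermost level.
    return {"text": f"echo: {request}"}

async def logging_middleware(request: str, next_handler) -> dict:
    print(f"-> request: {request!r}")        # runs before the agent
    result = await next_handler(request)     # call the next level down
    result["text"] = result["text"].upper()  # mutate the result on the way out
    return result

result = asyncio.run(logging_middleware("hi", agent_handler))
print(result["text"])  # ECHO: HI
```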
2026/02/24: Discord OH Q&A: Why do the tools in the demos have hard-coded return values? 📹 4:01 The demo tools return hard-coded values so they work without requiring API keys. For a real implementation, you'd replace the hard-coded returns with actual API calls (e.g., …).
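The contrast can be sketched like this. Both the `get_weather` name and the commented-out API call are hypothetical, not the repo's actual code:

```python
# Hedged sketch: a demo-style tool with a hard-coded return, next to what
# a real implementation might look like (all names are illustrative).
def get_weather(city: str) -> str:
    """Demo version: hard-coded so it runs without any API keys."""
    return f"The weather in {city} is sunny and 22°C."

# def get_weather(city: str) -> str:
#     """Real version (sketch): swap the hard-coded return for an API call."""
#     import httpx
#     resp = httpx.get("https://example.com/weather", params={"q": city})
#     resp.raise_for_status()
#     return resp.json()["summary"]

print(get_weather("Lisbon"))  # The weather in Lisbon is sunny and 22°C.
```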
2026/02/24: Discord OH Q&A: How does "context" differ across frameworks? 📹 5:11 The word "context" is extremely overloaded in the AI/agent space. In the Agent Framework specifically:
Every framework uses "context" differently, and even within a single framework it can mean different things depending on where it appears.
2026/02/24: Discord OH Q&A: What should I do if I get an "unavailable model" error with GPT-5 Mini? 📹 6:52 GPT-5 Mini access may be more restricted for some users on GitHub Models. Workarounds:
All the examples in the repo check for a …
2026/02/24: Discord OH Q&A: Is it possible to see the full information sent to the LLM? 📹 9:52 Yes — set the logging level to debug:

```python
import logging

logging.basicConfig(level=logging.DEBUG)
```

This shows the full HTTP request being sent to the chat completions endpoint, including the JSON data with the conversation, model, streaming settings, and tool definitions. Since the Agent Framework wraps the OpenAI SDK, setting debug logging will show what's sent to the LLM. Seeing the response body is harder — the repo's AGENTS.md file has tips for how to inspect response bodies with various SDKs. OpenTelemetry tracing (covered in the Thursday session) provides another way to see this information.
2026/02/24: Discord OH Q&A: Were these examples hand-coded or vibe-coded? 📹 13:54 A mix. The earlier examples shown in the session were mostly hand-coded. For later, more complex examples, the process was collaborative with GitHub Copilot:
It's described as a collaborative process rather than pure "vibe coding."
2026/02/24: Discord OH Q&A: Do you recommend starting with a deployed model (Azure Foundry) for learning agents? 📹 15:53 Yes, deploying sooner is better because:
Even $20 worth of credits goes a long way. You can use Azure, OpenAI directly, or both. The repo's README has instructions for deploying to Azure with …
2026/02/24: Discord OH Q&A: Can you use local Ollama models with the Agent Framework? 📹 17:49 Yes, technically. The question is whether they work well. Tips:
A live demo showed Llama 3.1 successfully handling a basic agent example through Ollama.
2026/02/24: Discord OH Q&A: Are all the models you're using free? 📹 25:11 No. The cost breakdown:
2026/02/24: Discord OH Q&A: Does the tracing in Agent Framework work with OpenAI tracing? 📹 28:00 Probably not directly. Agent Framework uses OpenTelemetry for tracing, while OpenAI tracing appears to be its own thing (built specifically for the OpenAI Agents SDK). Since the Agent Framework wraps the OpenAI client, there might theoretically be a way to pass tracing info through, but it would likely not work out of the box. This topic is covered more in the Thursday session on OpenTelemetry.
2026/02/24: Discord OH Q&A: How does the supervisor agent pattern work? 📹 29:06 A supervisor agent manages multiple specialist agents by wrapping them as tools:
Key observations from the live demo:
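The agents-as-tools idea can be sketched in plain Python. All names here are illustrative, not the Agent Framework API, and a real supervisor would let the LLM choose the tool rather than matching keywords:

```python
# Plain-Python sketch of the supervisor pattern: each specialist agent is
# exposed to the supervisor as a callable tool (names are illustrative).
def flight_agent(query: str) -> str:
    return f"[flights] best option for: {query}"

def hotel_agent(query: str) -> str:
    return f"[hotels] best option for: {query}"

SPECIALISTS = {"flight": flight_agent, "hotel": hotel_agent}

def supervisor(query: str) -> str:
    # A real supervisor would let the LLM pick the tool; keyword routing
    # here just shows the shape of the pattern.
    for name, agent in SPECIALISTS.items():
        if name in query.lower():
            return agent(query)
    return "No specialist matched; answering directly."

print(supervisor("Find me a hotel in Paris"))
```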
2026/02/24: Discord OH Q&A: Can you use GitHub Copilot models with the Agent Framework? 📹 36:53 Yes. The Agent Framework has a GitHub Copilot provider:
It works by wrapping the Copilot CLI binary. In the live demo, it was tricky to get working inside a dev container (it required installing the Copilot CLI and logging in within the container). Once set up, you just swap …. The GitHub Copilot team considers their agent runtime to be among the best available. Note that the Copilot CLI's agentic loop is actually different from VS Code's Copilot agentic loop — they implement things differently despite sharing the product name.
Links shared:
2026/02/24: Discord OH Q&A: Do you always use Codespaces or only for demos? 📹 42:20 Lately, it's been more local development instead of Codespaces. The main reason is that …
2026/02/24: Discord OH Q&A: What is YOLO mode in Copilot? 📹 50:39 YOLO mode auto-approves all tool/command executions without confirmation. It's available both in the Copilot CLI and VS Code (search for "auto approve" in settings). Caution: Even inside dev containers and Codespaces, authenticated tools (like the GitHub MCP server) can still perform real actions. The recommendation is to approve commands per session (per chat thread) rather than enabling full YOLO mode globally, since authenticated access to services like GitHub means an agent could make real changes.
2026/02/25: Discord OH Q&A: How can we configure the GitHub workspace to call paid, authenticated Azure LLMs? 📹 0:54 The python-agentframework-demos repo README has instructions for configuring model providers. If you're not using GitHub Models in a Codespace, you need to set up a …
Links shared:
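As a rough sketch, such provider configuration usually lives in a local env file. The variable names below are hypothetical; check the repo README for the actual ones:

```shell
# .env (hypothetical variable names; see the repo README for the real ones)
AZURE_OPENAI_ENDPOINT="https://YOUR-RESOURCE.openai.azure.com"
AZURE_OPENAI_CHAT_DEPLOYMENT="gpt-4o-mini"
```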
2026/03/03: What LLMs or SLMs do you recommend for running workflows locally? This question was asked in Discord chat near the end of the session but the recording cut out before it could be answered on stream. Foundry Local is a tool for running models locally, available on GitHub.
2026/03/03: Where can I find all the course series? This question was asked in Discord chat. Links shared:
2026/03/03: What is the status of Foundry Local for Linux? Currently, Foundry Local supports only macOS and Windows; Linux support is in private preview. Fill out this form.
2026/03/04: How do you increase the performance (latency) of a multi-agent system? 📹 0:30 The first step is to set up quality evaluations with a ground truth baseline, so you can make performance changes and confirm quality doesn't regress. Once evaluations are in place, you can try several optimizations:
Always check evaluations after each change — sometimes a quality improvement causes latency to spike, making it not worth it. You need evaluations to reason about the latency/quality trade-off.
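A minimal ground-truth check of the kind described can be sketched like this. It is illustrative, not a specific eval framework; the questions, answers, and stand-in "agents" are all made up:

```python
# Sketch: score the system against fixed expected answers before and after
# a latency optimization, so a quality regression is visible immediately.
GROUND_TRUTH = [
    ("capital of France?", "paris"),
    ("2 + 2?", "4"),
]

def exact_match_score(answer_fn) -> float:
    hits = sum(1 for question, expected in GROUND_TRUTH
               if expected in answer_fn(question).lower())
    return hits / len(GROUND_TRUTH)

# Stand-ins for the agent before/after an optimization (e.g. a smaller model).
baseline = lambda q: "Paris" if "France" in q else "4"
optimized = lambda q: "I think it's Paris" if "France" in q else "four"

print(exact_match_score(baseline))   # 1.0
print(exact_match_score(optimized))  # 0.5 (faster, but quality regressed)
```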
2026/03/04: How do you incorporate A2A protocols into model orchestrations to integrate other agent providers into Foundry orchestration? 📹 3:47 Microsoft Agent Framework has a sub-package specifically for A2A (Agent-to-Agent) integration. It lets you connect to an A2A agent and get responses from it. The documentation on hosting your agent with A2A protocol covers this integration. However, the specifics depend on whether you're trying to communicate with an A2A agent or host one yourself.
2026/03/04: Are there limitations in workflow evaluation using Microsoft Foundry if you deploy your agent as a hosted agent? 📹 5:48 Pamela hasn't personally deployed an agent as a hosted agent yet and couldn't speak to specific limitations. If anyone has experience with hosted agents and evaluation, they were encouraged to share. A follow-up series about hosting agents may provide the opportunity to explore this.
2026/03/04: What evaluation framework do you recommend — DeepEval, Ragas, others? 📹 6:42 For workflows, the same principles as agent evaluation apply: evaluate the final output of the workflow against ground truth, but also evaluate each individual agent along the way. Recommended options:
For .NET developers, Agent Eval (a .NET evaluation framework) was recommended by another attendee — it includes latency and cost checking plus built-in red teaming. Pamela also recommended subscribing to Hamel Husain's blog for everything related to LLM evaluation.
2026/03/04: Is there a more native way to access workflow context from middleware, rather than manually injecting it? 📹 10:50 This question was about middleware needing to save data to workflow shared state, but middleware doesn't have access to the workflow context. The attendee was manually injecting the context. Pamela acknowledged this is a deep question and suggested posting it as a discussion on the Agent Framework GitHub repo, since the middleware story for workflows specifically may need improvement. The discussions and issues on the Agent Framework repo have been very helpful for getting answers from the team.
2026/03/04: How does tracing/logging work with workflows? 📹 12:14 It works the same way as with agents — just call the same OTel setup function. The traces show parent spans for each workflow step: the agent execution, the edge (transition between agents), and each subsequent agent, so you can see the full workflow flow in the trace viewer. This also works with Aspire or App Insights. All you need is:

```python
from agent_framework import configure_otel_providers

configure_otel_providers()
```
2026/03/04: Will Microsoft Agent Framework be submitted to the AI Foundation (which has MCP, Goose, and agents.md)? 📹 17:54 Pamela hasn't heard anything about this and isn't sure how projects get added to the foundation. Microsoft Agent Framework has a lot of Microsoft/Azure-specific integration, so it's unclear whether it would fit. Her observation is that protocols (A2A from Google, MCP from Anthropic) tend to come from companies developing frontier LLM models, and Microsoft doesn't have its own frontier models yet. Agent Framework tends to adopt emerging industry patterns (A2A, AGUI, MCP) rather than originating them. It would be nice if the industry agreed on common terminology, but terms like "magentic" originated from Microsoft (via AutoGen), while other frameworks like LangChain have their own orchestration concepts (e.g., Deep Agents).
2026/03/04: How should you version prompts and tool descriptions for agent systems? 📹 23:21 Since tool descriptions are in code and are part of what the LLM processes, it's hard to separate prompt versioning from code versioning. Pamela's recommendation: keep prompts in your codebase. You can put system prompts in separate files (markdown or Jinja2 templates) and pull them in, but version them alongside your code.
Tie evaluations to your PRs — Pamela showed a GitHub Actions workflow that can be triggered on PRs to run evaluations against the local app inside the runner, upload results as artifacts, and summarize them. This way, changes to prompts or tools get evaluated as part of the normal code review process.
Links shared:
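Keeping the system prompt in a separate, versioned file can look like this minimal sketch. The file name and prompt text are illustrative; a temp directory stands in for a `prompts/` folder that would live in the repo:

```python
# Sketch: a system prompt stored in a file that is reviewed and versioned
# alongside the code, then loaded and filled in at runtime.
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as d:
    # In a real repo this would be e.g. prompts/researcher.txt, checked in.
    prompt_file = Path(d) / "researcher.txt"
    prompt_file.write_text("You are a {role}. Answer concisely.")

    # Plain str.format works for simple cases; Jinja2 is one option for more.
    system_prompt = prompt_file.read_text().format(role="research assistant")

print(system_prompt)  # You are a research assistant. Answer concisely.
```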
2026/03/04: What real-world problems do these workflow patterns solve architecturally? 📹 33:31 Pamela acknowledged that while she can show the patterns, she can only speak to scenarios from her own job as an advocate. She plans to automate more of her own workflows with agent-framework in the future, possibly in conjunction with the Copilot SDK for coding tasks. Known strong use cases:
She encouraged attendees using workflows and agents in production to share what works and what doesn't to help inspire others.
2026/03/04: Can you use the OpenAI real-time API with Microsoft Agent Framework? 📹 36:41 Pamela hasn't played with the newest OpenAI real-time models yet. Another advocate, Bruno Capuano, has a sample that combines real-time audio with Agent Framework in .NET, using Whisper for speech-to-text and text-to-speech with voice activity detection. Pamela suggested reaching out to Bruno on LinkedIn for additional advice or samples, and noted that showing the overlap of Agent Framework with different communication modalities (WhatsApp, real-time audio) is a common request. Links shared:
2026/03/04: Can we get the full stack code for the AI finance agent? 📹 40:50 Yes — the Agentic AI Investment Analysis Sample is the full-stack repo. It uses React (with React Flow) and Tailwind for the frontend, and FastAPI for the backend. Important caveat: the repo currently uses an old version of Agent Framework and does not pin the version in requirements, so it's hard to run right now unless you change requirements.txt to specify the old version.
2026/03/04: Can you do breakpoint debugging with workflows in VS Code? 📹 27:34 Yes! Pamela demonstrated this live. Key tips:
Tip: You could even ask GitHub Copilot to write VS Code debug middleware for you.
Join us for our 6-part live stream series on using Python with the Microsoft Agent Framework to build AI agents and agentic workflows!
Register for the series
Livestreams
Tune in for the live streams from February 24th through March 5th, or watch them after:
Code samples
The majority of examples will be shown from this repository:
https://github.com/Azure-Samples/python-agentframework-demos
You can either open that in GitHub Codespaces and run most examples with GitHub Models, or log in to Azure and use a Microsoft Foundry model deployment.
Office hours
After every session, at 11:30AM PT, we'll hold office hours in the Foundry Discord. Bring your questions there, or post them in this discussion!
We're also recording the OH as unlisted videos on YouTube and uploading the Q&A as comments to this thread:
Related resources
If you are enjoying this series, you might also like: