
Commit 9f612c8

viniciusdsmello authored and gustavocidornelas committed
feat(examples): add async LangChain callback handler notebook
1 parent f1b9761 commit 9f612c8

File tree

examples/tracing/langchain/async_langchain_callback.ipynb

1 file changed: +343 −0 lines changed
Lines changed: 343 additions & 0 deletions
@@ -0,0 +1,343 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/openlayer-ai/openlayer-python/blob/main/examples/tracing/langchain/async_langchain_callback.ipynb)\n",
    "\n",
    "# <a id=\"top\">Openlayer Async LangChain Callback Handler</a>\n",
    "\n",
    "This notebook demonstrates how to use Openlayer's **AsyncOpenlayerHandler** to monitor async LLMs, chains, tools, and agents built with LangChain.\n",
    "\n",
    "The AsyncOpenlayerHandler provides:\n",
    "- Full async/await support for non-blocking operations\n",
    "- Proper trace management in async environments\n",
    "- Support for concurrent LangChain operations\n",
    "- Comprehensive monitoring of async chains, tools, and agents\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 1. Installation\n",
    "\n",
    "Install the required packages:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%pip install openlayer langchain langchain_openai langchain_community\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2. Environment Setup\n",
    "\n",
    "Configure your API keys and Openlayer settings:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "import asyncio\n",
    "\n",
    "# OpenAI API key\n",
    "os.environ[\"OPENAI_API_KEY\"] = \"\"\n",
    "\n",
    "# Openlayer configuration\n",
    "os.environ[\"OPENLAYER_API_KEY\"] = \"\"\n",
    "os.environ[\"OPENLAYER_INFERENCE_PIPELINE_ID\"] = \"\"\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 3. Instantiate the AsyncOpenlayerHandler\n",
    "\n",
    "Create the async callback handler:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from openlayer.lib.integrations import langchain_callback\n",
    "\n",
    "# Create the async callback handler\n",
    "async_openlayer_handler = langchain_callback.AsyncOpenlayerHandler(\n",
    "    # Optional: Add custom metadata that will be attached to all traces\n",
    "    user_id=\"demo_user\",\n",
    "    environment=\"development\",\n",
    "    session_id=\"async_langchain_demo\"\n",
    ")\n",
    "\n",
    "print(\"AsyncOpenlayerHandler created successfully!\")\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 4. Basic Async Chat Example\n",
    "\n",
    "Let's start with a simple async chat completion:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain_openai import ChatOpenAI\n",
    "from langchain.schema import HumanMessage, SystemMessage\n",
    "\n",
    "async def basic_async_chat():\n",
    "    \"\"\"Demonstrate basic async chat with tracing.\"\"\"\n",
    "\n",
    "    # Create async chat model with callback\n",
    "    chat = ChatOpenAI(\n",
    "        model=\"gpt-3.5-turbo\",\n",
    "        max_tokens=100,\n",
    "        temperature=0.7,\n",
    "        callbacks=[async_openlayer_handler]\n",
    "    )\n",
    "\n",
    "    # Single async invocation\n",
    "    print(\"🤖 Single async chat completion...\")\n",
    "    messages = [\n",
    "        SystemMessage(content=\"You are a helpful AI assistant.\"),\n",
    "        HumanMessage(content=\"What are the benefits of async programming in Python?\")\n",
    "    ]\n",
    "\n",
    "    response = await chat.ainvoke(messages)\n",
    "    print(f\"Response: {response.content}\")\n",
    "\n",
    "    return response\n",
    "\n",
    "# Run the basic example (top-level await is supported in Jupyter)\n",
    "response = await basic_async_chat()\n",
    "print(\"\\n✅ Basic async chat completed and traced!\")\n"
   ]
  },
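  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a variation on the cell above, here is a minimal sketch of passing the handler per invocation instead of at construction time, via the standard LangChain `config` argument. This is useful when only some calls should be traced. (`chat_no_callbacks` is an illustrative name for this sketch.)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Minimal sketch: attach the handler per call via the Runnable `config`\n",
    "# argument instead of at model construction. `chat_no_callbacks` is an\n",
    "# illustrative name, not part of the sections above.\n",
    "chat_no_callbacks = ChatOpenAI(model=\"gpt-3.5-turbo\", max_tokens=50)\n",
    "\n",
    "per_call_response = await chat_no_callbacks.ainvoke(\n",
    "    [HumanMessage(content=\"In one sentence, what is asyncio?\")],\n",
    "    config={\"callbacks\": [async_openlayer_handler]},\n",
    ")\n",
    "print(per_call_response.content)\n"
   ]
  },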
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 5. Concurrent Async Operations\n",
    "\n",
    "Demonstrate the power of async with concurrent operations:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import time\n",
    "\n",
    "async def concurrent_chat_operations():\n",
    "    \"\"\"Demonstrate concurrent async chat operations with individual tracing.\"\"\"\n",
    "\n",
    "    chat = ChatOpenAI(\n",
    "        model=\"gpt-3.5-turbo\",\n",
    "        max_tokens=75,\n",
    "        temperature=0.5,\n",
    "        callbacks=[async_openlayer_handler]\n",
    "    )\n",
    "\n",
    "    # Define multiple questions to ask concurrently\n",
    "    questions = [\n",
    "        \"What is machine learning?\",\n",
    "        \"Explain quantum computing in simple terms.\",\n",
    "        \"What are the benefits of renewable energy?\",\n",
    "        \"How does blockchain technology work?\"\n",
    "    ]\n",
    "\n",
    "    print(f\"🚀 Starting {len(questions)} concurrent chat operations...\")\n",
    "\n",
    "    # Create one coroutine per question\n",
    "    coros = []\n",
    "    for i, question in enumerate(questions):\n",
    "        messages = [\n",
    "            SystemMessage(content=f\"You are expert #{i+1}. Give a concise answer.\"),\n",
    "            HumanMessage(content=question)\n",
    "        ]\n",
    "        coros.append(chat.ainvoke(messages))\n",
    "\n",
    "    # Execute all requests concurrently\n",
    "    start_time = time.time()\n",
    "    results = await asyncio.gather(*coros)\n",
    "    end_time = time.time()\n",
    "\n",
    "    # Display results\n",
    "    print(f\"\\n⚡ Completed {len(questions)} operations in {end_time - start_time:.2f} seconds\")\n",
    "    for i, (question, result) in enumerate(zip(questions, results)):\n",
    "        print(f\"\\n❓ Q{i+1}: {question}\")\n",
    "        print(f\"💡 A{i+1}: {result.content[:100]}...\")\n",
    "\n",
    "    return results\n",
    "\n",
    "# Run concurrent operations\n",
    "concurrent_results = await concurrent_chat_operations()\n",
    "print(\"\\n✅ Concurrent operations completed and all traced separately!\")\n"
   ]
  },
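  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "By default, `asyncio.gather` propagates the first exception it encounters, so one failed request (e.g., a rate limit) can mask the other results. A minimal error-handling sketch, assuming the same model setup as above, uses `return_exceptions=True` (`robust_chat` is an illustrative name):\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Minimal sketch: gather with return_exceptions=True so one failed call\n",
    "# does not discard the successful ones. `robust_chat` is an illustrative\n",
    "# name; the setup mirrors the cells above.\n",
    "robust_chat = ChatOpenAI(\n",
    "    model=\"gpt-3.5-turbo\",\n",
    "    max_tokens=50,\n",
    "    callbacks=[async_openlayer_handler]\n",
    ")\n",
    "\n",
    "prompts = [\"Define 'coroutine' in one sentence.\", \"Define 'event loop' in one sentence.\"]\n",
    "outcomes = await asyncio.gather(\n",
    "    *[robust_chat.ainvoke([HumanMessage(content=p)]) for p in prompts],\n",
    "    return_exceptions=True,\n",
    ")\n",
    "\n",
    "for p, outcome in zip(prompts, outcomes):\n",
    "    if isinstance(outcome, Exception):\n",
    "        print(f\"{p} -> failed: {outcome!r}\")\n",
    "    else:\n",
    "        print(f\"{p} -> {outcome.content[:80]}\")\n"
   ]
  },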
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 6. Async Streaming Example\n",
    "\n",
    "Demonstrate async streaming with token-by-token generation:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "async def async_streaming_example():\n",
    "    \"\"\"Demonstrate async streaming with tracing.\"\"\"\n",
    "\n",
    "    # Create streaming chat model\n",
    "    streaming_chat = ChatOpenAI(\n",
    "        model=\"gpt-3.5-turbo\",\n",
    "        max_tokens=200,\n",
    "        temperature=0.7,\n",
    "        streaming=True,\n",
    "        callbacks=[async_openlayer_handler]\n",
    "    )\n",
    "\n",
    "    print(\"🌊 Starting async streaming...\")\n",
    "\n",
    "    messages = [\n",
    "        SystemMessage(content=\"You are a creative storyteller.\"),\n",
    "        HumanMessage(content=\"Tell me a short story about a robot learning to paint.\")\n",
    "    ]\n",
    "\n",
    "    # Stream the response token by token\n",
    "    full_response = \"\"\n",
    "    async for chunk in streaming_chat.astream(messages):\n",
    "        if chunk.content:\n",
    "            print(chunk.content, end=\"\", flush=True)\n",
    "            full_response += chunk.content\n",
    "\n",
    "    print(\"\\n\")\n",
    "    return full_response\n",
    "\n",
    "# Run streaming example\n",
    "streaming_result = await async_streaming_example()\n",
    "print(\"\\n✅ Async streaming completed and traced!\")\n"
   ]
  },
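  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A stalled connection can leave an async stream hanging indefinitely. As a minimal sketch, the consumption loop can be wrapped in `asyncio.wait_for` to bound the total streaming time. (`stream_with_timeout` is a hypothetical helper written for this notebook, not an Openlayer or LangChain API.)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Minimal sketch: bound total streaming time with asyncio.wait_for.\n",
    "# `stream_with_timeout` is a hypothetical helper, not a library API.\n",
    "async def stream_with_timeout(chat_model, msgs, timeout_s=30.0):\n",
    "    async def _consume():\n",
    "        parts = []\n",
    "        async for chunk in chat_model.astream(msgs):\n",
    "            if chunk.content:\n",
    "                parts.append(chunk.content)\n",
    "        return \"\".join(parts)\n",
    "    # Raises asyncio.TimeoutError if the stream takes longer than timeout_s\n",
    "    return await asyncio.wait_for(_consume(), timeout=timeout_s)\n",
    "\n",
    "stream_model = ChatOpenAI(\n",
    "    model=\"gpt-3.5-turbo\",\n",
    "    streaming=True,\n",
    "    callbacks=[async_openlayer_handler]\n",
    ")\n",
    "haiku = await stream_with_timeout(\n",
    "    stream_model,\n",
    "    [HumanMessage(content=\"Write a haiku about event loops.\")]\n",
    ")\n",
    "print(haiku)\n"
   ]
  },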
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 7. Async Chain Example\n",
    "\n",
    "Create and run an async chain with proper tracing:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.chains import LLMChain\n",
    "from langchain.prompts import PromptTemplate\n",
    "from langchain_openai import OpenAI\n",
    "\n",
    "async def async_chain_example():\n",
    "    \"\"\"Demonstrate async LLM chain with tracing.\"\"\"\n",
    "\n",
    "    # Create LLM with callback\n",
    "    llm = OpenAI(\n",
    "        model=\"gpt-3.5-turbo-instruct\",\n",
    "        max_tokens=150,\n",
    "        temperature=0.8,\n",
    "        callbacks=[async_openlayer_handler]\n",
    "    )\n",
    "\n",
    "    # Create a prompt template\n",
    "    prompt = PromptTemplate(\n",
    "        input_variables=[\"topic\", \"audience\"],\n",
    "        template=\"\"\"\n",
    "        Write a brief explanation about {topic} for {audience}.\n",
    "        Make it engaging and easy to understand.\n",
    "\n",
    "        Topic: {topic}\n",
    "        Audience: {audience}\n",
    "\n",
    "        Explanation:\n",
    "        \"\"\"\n",
    "    )\n",
    "\n",
    "    # Create the chain\n",
    "    chain = LLMChain(\n",
    "        llm=llm,\n",
    "        prompt=prompt,\n",
    "        callbacks=[async_openlayer_handler]\n",
    "    )\n",
    "\n",
    "    print(\"🔗 Running async chain...\")\n",
    "\n",
    "    # Run the chain asynchronously\n",
    "    result = await chain.arun(\n",
    "        topic=\"artificial intelligence\",\n",
    "        audience=\"high school students\"\n",
    "    )\n",
    "\n",
    "    print(f\"Chain result: {result}\")\n",
    "    return result\n",
    "\n",
    "# Run the chain example\n",
    "chain_result = await async_chain_example()\n",
    "print(\"\\n✅ Async chain completed and traced!\")\n"
   ]
  },
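  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`LLMChain` and `arun` are deprecated in recent LangChain releases. As a minimal sketch, assuming the packages installed above are recent enough to include LCEL, the same prompt-to-model flow can be composed with the `|` operator, with the handler passed through the per-call `config` (`lcel_prompt` and `lcel_chain` are illustrative names):\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Minimal sketch: the same prompt -> model flow using LCEL composition\n",
    "# instead of the deprecated LLMChain/arun. Names are illustrative.\n",
    "lcel_prompt = PromptTemplate.from_template(\n",
    "    \"Write a brief explanation about {topic} for {audience}.\"\n",
    ")\n",
    "lcel_chain = lcel_prompt | ChatOpenAI(model=\"gpt-3.5-turbo\", max_tokens=150)\n",
    "\n",
    "lcel_result = await lcel_chain.ainvoke(\n",
    "    {\"topic\": \"artificial intelligence\", \"audience\": \"high school students\"},\n",
    "    config={\"callbacks\": [async_openlayer_handler]},\n",
    ")\n",
    "print(lcel_result.content)\n"
   ]
  }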
 ],
 "metadata": {
  "kernelspec": {
   "display_name": ".venv",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.16"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
