Commit a989f82

core: expand docstring for RunnableParallel (langchain-ai#16600)
- **Description:** expand docstring for RunnableParallel
- **Issue:** langchain-ai#16462

Feel free to modify this or let me know how it can be improved!
1 parent e30c666 commit a989f82

File tree

1 file changed: +78 −4 lines changed

  • libs/core/langchain_core/runnables


libs/core/langchain_core/runnables/base.py

Lines changed: 78 additions & 4 deletions
@@ -1804,7 +1804,7 @@ def mul_two(x: int) -> int:
             # Or equivalently:
             # sequence = RunnableSequence(first=runnable_1, last=runnable_2)
             sequence.invoke(1)
-            await runnable.ainvoke(1)
+            await sequence.ainvoke(1)

             sequence.batch([1, 2, 3])
             await sequence.abatch([1, 2, 3])
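The hunk above corrects the docstring example to call `ainvoke` on the `sequence` variable actually defined in the example, rather than an undefined `runnable`. For readers unfamiliar with pipe-style composition, here is a minimal self-contained sketch of the idea using a toy `Runnable` class with an overloaded `|` operator (hypothetical names, not langchain's implementation):

```python
from typing import Any, Callable


class Runnable:
    """Toy stand-in for a runnable: wraps a one-argument function."""

    def __init__(self, func: Callable[[Any], Any]):
        self.func = func

    def invoke(self, value: Any) -> Any:
        return self.func(value)

    def __or__(self, other: "Runnable") -> "Runnable":
        # a | b yields a new runnable that feeds a's output into b.
        return Runnable(lambda value: other.invoke(self.invoke(value)))


add_one = Runnable(lambda x: x + 1)
mul_two = Runnable(lambda x: x * 2)

sequence = add_one | mul_two
print(sequence.invoke(1))  # (1 + 1) * 2 -> 4
```

Real langchain runnables additionally expose async (`ainvoke`) and batch (`batch`, `abatch`) variants of the same call, which is what the corrected docstring lines demonstrate.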
@@ -2451,9 +2451,83 @@ async def input_aiter() -> AsyncIterator[Input]:


 class RunnableParallel(RunnableSerializable[Input, Dict[str, Any]]):
-    """
-    A runnable that runs a mapping of runnables in parallel,
-    and returns a mapping of their outputs.
+    """A runnable that runs a mapping of runnables in parallel, and returns a mapping
+    of their outputs.
+
+    RunnableParallel is one of the two main composition primitives for the LCEL,
+    alongside RunnableSequence. It invokes runnables concurrently, providing the same
+    input to each.
+
+    A RunnableParallel can be instantiated directly or by using a dict literal within a
+    sequence.
+
+    Here is a simple example that uses functions to illustrate the use of
+    RunnableParallel:
+
+        .. code-block:: python
+
+            from langchain_core.runnables import RunnableLambda
+
+            def add_one(x: int) -> int:
+                return x + 1
+
+            def mul_two(x: int) -> int:
+                return x * 2
+
+            def mul_three(x: int) -> int:
+                return x * 3
+
+            runnable_1 = RunnableLambda(add_one)
+            runnable_2 = RunnableLambda(mul_two)
+            runnable_3 = RunnableLambda(mul_three)
+
+            sequence = runnable_1 | {  # this dict is coerced to a RunnableParallel
+                "mul_two": runnable_2,
+                "mul_three": runnable_3,
+            }
+            # Or equivalently:
+            # sequence = runnable_1 | RunnableParallel(
+            #     {"mul_two": runnable_2, "mul_three": runnable_3}
+            # )
+            # Also equivalently:
+            # sequence = runnable_1 | RunnableParallel(
+            #     mul_two=runnable_2,
+            #     mul_three=runnable_3,
+            # )
+
+            sequence.invoke(1)
+            await sequence.ainvoke(1)
+
+            sequence.batch([1, 2, 3])
+            await sequence.abatch([1, 2, 3])
+
+    RunnableParallel makes it easy to run Runnables in parallel. In the below example,
+    we simultaneously stream output from two different Runnables:
+
+        .. code-block:: python
+
+            from langchain_core.prompts import ChatPromptTemplate
+            from langchain_core.runnables import RunnableParallel
+            from langchain_openai import ChatOpenAI
+
+            model = ChatOpenAI()
+            joke_chain = (
+                ChatPromptTemplate.from_template("tell me a joke about {topic}")
+                | model
+            )
+            poem_chain = (
+                ChatPromptTemplate.from_template("write a 2-line poem about {topic}")
+                | model
+            )
+
+            runnable = RunnableParallel(joke=joke_chain, poem=poem_chain)
+
+            # Display stream
+            output = {key: "" for key, _ in runnable.output_schema()}
+            for chunk in runnable.stream({"topic": "bear"}):
+                for key in chunk:
+                    output[key] = output[key] + chunk[key].content
+                print(output)
+
     """

     steps: Mapping[str, Runnable[Input, Any]]
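The new docstring describes `RunnableParallel` as invoking a mapping of runnables concurrently with the same input. As a rough conceptual sketch of that behavior (a toy `run_parallel` helper using a thread pool; hypothetical, not langchain's actual implementation), the core idea can be shown in a few lines:

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Any, Callable, Dict, Mapping


def run_parallel(
    steps: Mapping[str, Callable[[Any], Any]], value: Any
) -> Dict[str, Any]:
    """Invoke every step with the same input concurrently; return name -> output."""
    with ThreadPoolExecutor() as pool:
        # Submit all steps first so they run concurrently, then collect results.
        futures = {name: pool.submit(func, value) for name, func in steps.items()}
        return {name: fut.result() for name, fut in futures.items()}


result = run_parallel({"mul_two": lambda x: x * 2, "mul_three": lambda x: x * 3}, 5)
print(result)  # {'mul_two': 10, 'mul_three': 15}
```

This mirrors the docstring's dict-literal example: each key maps to a step, every step receives the same input, and the output is a dict keyed by step name.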

0 commit comments
