
Commit 958923b

Update Arbor Tutorials (#9007)
* Update DSPy docs
* Remove `lm_local_arbor.py` provider (moved to https://github.com/Ziems/arbor for cleanliness and faster development)
1 parent: b67732d
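
The user-visible effect of the provider move is an import swap in both notebooks below: `ArborProvider` and the GRPO optimizer now come from the `arbor` package rather than from DSPy itself.

```python
# Before this commit (imports dropped by the diffs below):
# from dspy.clients.lm_local_arbor import ArborProvider
# from dspy.teleprompt.grpo import GRPO

# After this commit (import added by the diffs below):
from arbor import ArborGRPO, ArborProvider
```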

3 files changed: +86 / -573 lines

docs/docs/tutorials/rl_multihop/index.ipynb

Lines changed: 50 additions & 29 deletions
```diff
@@ -8,7 +8,7 @@
 "\n",
 "WARNING: This feature is new and extremely EXPERIMENTAL. Unlike almost everything else in DSPy, it's currently in pure proof of concept and development mode, but we release it to encourage community involvement.\n",
 "\n",
-"For this tutorial, you will also need DSPy's Arbor RL server.\n",
+"For this tutorial, you will also need [DSPy's Arbor RL framework](https://github.com/Ziems/arbor) which you can install with:\n",
 "\n",
 "```bash\n",
 "> pip install -U arbor-ai\n",
@@ -22,23 +22,25 @@
 "outputs": [],
 "source": [
 "import dspy\n",
-"from dspy.clients.lm_local_arbor import ArborProvider\n",
-"\n",
 "import arbor\n",
+"from arbor import ArborGRPO, ArborProvider\n",
 "arbor_server_info = arbor.init() # Initialize the Arbor server in the background\n",
 "\n",
 "port = 7453\n",
-"local_lm_name = \"Qwen/Qwen2.5-7B-Instruct\"\n",
+"local_lm_name = \"Qwen/Qwen2.5-1.5B-Instruct\"\n",
 "local_lm = dspy.LM(\n",
 " model=f\"openai/arbor:{local_lm_name}\",\n",
 " provider=ArborProvider(),\n",
-" temperature=0.7,\n",
-" api_base=arbor_server_info[\"api_base\"],\n",
+" api_base=arbor_server_info[\"base_url\"],\n",
+" # Arbor checks to make sure these match the training config\n",
+" temperature=1.0,\n",
+" top_p=1.0,\n",
+" top_k=-1,\n",
+" repetition_penalty=1.0,\n",
+" max_tokens=2048,\n",
 ")\n",
 "\n",
-"dspy.configure(lm=local_lm)\n",
-"\n",
-"openai_lm = dspy.LM(model=\"openai/gpt-4.1-mini\")"
+"dspy.configure(lm=local_lm)"
 ]
 },
 {
@@ -97,7 +99,12 @@
 "source": [
 "### Load the HoVer dataset.\n",
 "\n",
-"Let's load a dataset for our task. We'll load examples from the HoVer multi-hop task, where the input is a (really!) complex claim and the output we're seeking is the set of Wikipedia pages that are required to fact-check that claim."
+"Let's load a dataset for our task. We'll load examples from the HoVer multi-hop task, where the input is a (really!) complex claim and the output we're seeking is the set of Wikipedia pages that are required to fact-check that claim.\n",
+"\n",
+"You may have to install an older version of the dataset to get it working properly...\n",
+"```shell\n",
+"> pip install datasets==3.6.0\n",
+"```"
 ]
 },
 {
@@ -226,47 +233,61 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"from dspy.teleprompt.grpo import GRPO\n",
-"\n",
 "program = ResearchHop(num_docs=4, num_hops=2)\n",
 "program.set_lm(local_lm)\n",
 "\n",
-"# NOTE: Training on 6 GPUs.\n",
+"# NOTE: Training on 4 GPUs.\n",
 "train_kwargs = {\n",
 " \"per_device_train_batch_size\": 2,\n",
-" \"gradient_accumulation_steps\": 8,\n",
+" \"gradient_accumulation_steps\": 24/6,\n",
 " \"temperature\": 1.0,\n",
-" \"beta\": 0.04,\n",
-" \"learning_rate\": 1e-5,\n",
+" \"top_k\": -1,\n",
+" \"top_p\": 1.0,\n",
+" \"repetition_penalty\": 1.0,\n",
+" \"beta\": 0.00,\n",
+" \"learning_rate\": 1e-6,\n",
 " \"gradient_checkpointing\": True,\n",
-" \"gradient_checkpointing_kwargs\": {\"use_reentrant\": False},\n",
 " \"bf16\": True,\n",
 " \"lr_scheduler_type\": \"constant_with_warmup\",\n",
+" \"loss_type\": \"dapo\",\n",
+" \"max_steps\": 1000,\n",
+" \"report_to\": \"wandb\",\n",
+" \"log_completions\": True,\n",
+" \"logging_steps\": 1,\n",
 " \"max_prompt_length\": None,\n",
 " \"max_completion_length\": None,\n",
-" \"scale_rewards\": True,\n",
-" \"max_grad_norm\": 0.5,\n",
-" \"lora\": True,\n",
+" \"scale_rewards\": False,\n",
+" \"max_grad_norm\": 1.0,\n",
+" \"lora_config\": {\n",
+" \"lora_alpha\": 16,\n",
+" \"lora_dropout\": 0.05,\n",
+" \"r\": 8,\n",
+" \"target_modules\": [\"q_proj\", \"k_proj\", \"v_proj\", \"o_proj\", \"up_proj\", \"down_proj\", \"gate_proj\"],\n",
+" },\n",
+" \"num_training_gpus\": 3,\n",
+" \"num_inference_gpus\": 1,\n",
+" \"weight_decay\": 0.001,\n",
 "}\n",
 "\n",
-"compiler = GRPO(\n",
+"compiler = ArborGRPO(\n",
 " metric=recall,\n",
 " num_dspy_examples_per_grpo_step=6,\n",
-" num_rollouts_per_grpo_step=4,\n",
+" num_rollouts_per_grpo_step=24,\n",
 " exclude_demos=True,\n",
-" num_train_steps=100,\n",
+" num_train_steps=1000,\n",
 " num_threads=16,\n",
 " use_train_as_val=False,\n",
-" num_steps_for_val=10,\n",
+" num_steps_for_val=50,\n",
 " train_kwargs=train_kwargs,\n",
-" report_train_scores=False,\n",
+" checkpoint=\"single-best\",\n",
 ")\n",
 "\n",
 "optimized_program = compiler.compile(\n",
 " student=program,\n",
 " trainset=trainset,\n",
 " valset=devset,\n",
-")\n"
+")\n",
+"\n"
 ]
 },
 {
@@ -290,13 +311,13 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"In our preliminary experiments, training above for about 18 hours boosts the recall (devset) from 61.8% to 66.2%. This is _typically_ worse on cost/quality basis than you'd get from running prompt optimizers dspy.MIPROv2 or dspy.SIMBA, but it's still a very solid start for online RL over arbitrary LM programs for small LMs."
+"In our preliminary experiments, training about 18 hours boosts the recall (devset) from 61.8% to 66.2%. This is _typically_ worse on cost/quality basis than you'd get from running prompt optimizers dspy.MIPROv2 or dspy.SIMBA, but it's still a very solid start for online RL over arbitrary LM programs for small LMs."
 ]
 }
 ],
 "metadata": {
 "kernelspec": {
-"display_name": "jun2024_py310",
+"display_name": "arbor-exps",
 "language": "python",
 "name": "python3"
 },
@@ -310,7 +331,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.10.14"
+"version": "3.11.13"
 }
 },
 "nbformat": 4,
```

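Read together, the added lines above amount to the following updated setup for the rl_multihop notebook. This is only a sketch assembled from the diff: `ResearchHop`, `recall`, `trainset`, and `devset` are defined in earlier notebook cells, and `train_kwargs` is abridged here (the full dictionary is in the diff above).

```python
import dspy
import arbor
from arbor import ArborGRPO, ArborProvider

arbor_server_info = arbor.init()  # starts the Arbor server in the background

local_lm = dspy.LM(
    model="openai/arbor:Qwen/Qwen2.5-1.5B-Instruct",
    provider=ArborProvider(),
    api_base=arbor_server_info["base_url"],
    # Arbor checks that these sampling settings match the training config
    temperature=1.0,
    top_p=1.0,
    top_k=-1,
    repetition_penalty=1.0,
    max_tokens=2048,
)
dspy.configure(lm=local_lm)

# Abridged; see the diff above for the complete dictionary.
train_kwargs = {
    "per_device_train_batch_size": 2,
    "temperature": 1.0,
    "beta": 0.00,
    "learning_rate": 1e-6,
    "scale_rewards": False,
    "lora_config": {
        "r": 8,
        "lora_alpha": 16,
        "lora_dropout": 0.05,
        "target_modules": ["q_proj", "k_proj", "v_proj", "o_proj",
                           "up_proj", "down_proj", "gate_proj"],
    },
    "num_training_gpus": 3,
    "num_inference_gpus": 1,
}

program = ResearchHop(num_docs=4, num_hops=2)  # defined earlier in the notebook
program.set_lm(local_lm)

compiler = ArborGRPO(
    metric=recall,  # recall metric defined earlier in the notebook
    num_dspy_examples_per_grpo_step=6,
    num_rollouts_per_grpo_step=24,
    exclude_demos=True,
    num_train_steps=1000,
    num_threads=16,
    use_train_as_val=False,
    num_steps_for_val=50,
    train_kwargs=train_kwargs,
    checkpoint="single-best",
)

optimized_program = compiler.compile(student=program, trainset=trainset, valset=devset)
```
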
docs/docs/tutorials/rl_papillon/index.ipynb

Lines changed: 36 additions & 25 deletions
```diff
@@ -8,12 +8,11 @@
 "\n",
 "WARNING: This feature is new and extremely EXPERIMENTAL. Unlike almost everything else in DSPy, it's currently in pure proof of concept and development mode, but we release it to encourage community involvement.\n",
 "\n",
-"In this tutorial, we optimize the LM weights of [PAPILLON](https://dspy.ai/tutorials/papillon/) with `dspy.GRPO`, a generalization of the popular GRPO online RL algorithm of LLMs to sophisticated multi-module LM programs.\n",
+"In this tutorial, we optimize the LM weights of [PAPILLON](https://dspy.ai/tutorials/papillon/) with `ArborGRPO`, a generalization of the popular GRPO online RL algorithm of LLMs to sophisticated multi-module LM programs.\n",
 "\n",
-"PAPILLON is a system for privacy-preserving delegation, where we will teach a tiny model (1.7B parameters) to use an \"untrusted\" external LLM, which is more powerful but may save your private data, to balance high-quality and private chat.\n",
-"\n",
-"For this tutorial, you will also need the Arbor RL server.\n",
+"PAPILLON is a system for privacy-preserving delegation, where we will teach a tiny model (1.5B parameters) to use an \"untrusted\" external LLM, which is more powerful but may save your private data, to balance high-quality and private chat.\n",
 "\n",
+"For this tutorial, you will also need [DSPy's Arbor RL framework](https://github.com/Ziems/arbor) which you can install with:\n",
 "```bash\n",
 "> pip install -U arbor-ai\n",
 "```"
@@ -26,18 +25,22 @@
 "outputs": [],
 "source": [
 "import dspy\n",
-"from dspy.clients.lm_local_arbor import ArborProvider\n",
-"\n",
 "import arbor\n",
+"from arbor import ArborGRPO, ArborProvider\n",
 "arbor_server_info = arbor.init() # Initialize the Arbor server in the background\n",
 "\n",
 "port = 7453\n",
-"local_lm_name = \"Qwen/Qwen2.5-7B-Instruct\"\n",
+"local_lm_name = \"Qwen/Qwen2.5-1.5B-Instruct\"\n",
 "local_lm = dspy.LM(\n",
 " model=f\"openai/arbor:{local_lm_name}\",\n",
 " provider=ArborProvider(),\n",
-" temperature=0.7,\n",
-" api_base=arbor_server_info[\"api_base\"],\n",
+" api_base=arbor_server_info[\"base_url\"],\n",
+" # Arbor checks to make sure these match the training config\n",
+" temperature=1.0,\n",
+" top_p=1.0,\n",
+" top_k=-1,\n",
+" repetition_penalty=1.0,\n",
+" max_tokens=2048,\n",
 ")\n",
 "\n",
 "dspy.configure(lm=local_lm)\n",
@@ -255,30 +258,43 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"from dspy.teleprompt.grpo import GRPO\n",
-"\n",
 "papillon = PAPILLON(untrusted_model=openai_lm)\n",
 "papillon.set_lm(local_lm)\n",
 "\n",
-"# NOTE: Training on 3 GPUs.\n",
+"# NOTE: Training on 4 GPUs.\n",
 "train_kwargs = {\n",
 " \"per_device_train_batch_size\": 8,\n",
 " \"gradient_accumulation_steps\": 4,\n",
 " \"temperature\": 1.0,\n",
-" \"beta\": 0.04,\n",
-" \"learning_rate\": 2e-6,\n",
+" \"top_k\": -1,\n",
+" \"top_p\": 1.0,\n",
+" \"repetition_penalty\": 1.0,\n",
+" \"beta\": 0.00,\n",
+" \"learning_rate\": 1e-6,\n",
 " \"gradient_checkpointing\": True,\n",
-" \"gradient_checkpointing_kwargs\": {\"use_reentrant\": False},\n",
 " \"bf16\": True,\n",
 " \"lr_scheduler_type\": \"constant_with_warmup\",\n",
+" \"loss_type\": \"dapo\",\n",
+" \"max_steps\": 1000,\n",
+" \"report_to\": \"wandb\",\n",
+" \"log_completions\": True,\n",
+" \"logging_steps\": 1,\n",
 " \"max_prompt_length\": None,\n",
 " \"max_completion_length\": None,\n",
-" \"scale_rewards\": True,\n",
-" \"max_grad_norm\": 0.5,\n",
-" \"lora\": True,\n",
+" \"scale_rewards\": False,\n",
+" \"max_grad_norm\": 1.0,\n",
+" \"lora_config\": {\n",
+" \"lora_alpha\": 16,\n",
+" \"lora_dropout\": 0.05,\n",
+" \"r\": 8,\n",
+" \"target_modules\": [\"q_proj\", \"k_proj\", \"v_proj\", \"o_proj\", \"up_proj\", \"down_proj\", \"gate_proj\"],\n",
+" },\n",
+" \"num_training_gpus\": 3,\n",
+" \"num_inference_gpus\": 1,\n",
+" \"weight_decay\": 0.001,\n",
 "}\n",
 "\n",
-"compiler = GRPO(\n",
+"compiler = ArborGRPO(\n",
 " metric=compute_overall_score,\n",
 " multitask=True,\n",
 " num_dspy_examples_per_grpo_step=4,\n",
@@ -320,13 +336,8 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"In our preliminary experiments, training above for three hours boosts the composite score (devset) from 54.6% to 60.0%. This is _typically_ worse on cost/quality basis than you'd get from running prompt optimizers like dspy.MIPROv2 or dspy.SIMBA, but it's still a very solid start for online RL over arbitrary LM programs for tiny LMs."
+"In our preliminary experiments, training three hours boosts the composite score (devset) from 54.6% to 60.0%. This is _typically_ worse on cost/quality basis than you'd get from running prompt optimizers like dspy.MIPROv2 or dspy.SIMBA, but it's still a very solid start for online RL over arbitrary LM programs for tiny LMs."
 ]
-},
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": []
 }
 ],
 "metadata": {
```

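The rl_papillon notebook follows the same recipe. The pieces visible in this diff assemble into the sketch below; `PAPILLON`, `openai_lm`, and `compute_overall_score` come from earlier cells the diff does not touch, and everything after `num_dspy_examples_per_grpo_step` falls outside the shown hunk, so those parts are marked as assumptions.

```python
papillon = PAPILLON(untrusted_model=openai_lm)  # untrusted but more capable external LM
papillon.set_lm(local_lm)                       # local Qwen2.5-1.5B model whose weights get trained

# train_kwargs matches the rl_multihop notebook except that
# per_device_train_batch_size is 8 and gradient_accumulation_steps is 4.
compiler = ArborGRPO(
    metric=compute_overall_score,
    multitask=True,
    num_dspy_examples_per_grpo_step=4,
    train_kwargs=train_kwargs,  # assumed: this argument falls outside the hunk shown above
)

# Assumed to mirror the rl_multihop compile call; not shown in this diff.
optimized_papillon = compiler.compile(student=papillon, trainset=trainset, valset=devset)
```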