Releases: jupyterlab/jupyter-ai
v3.0.0b8
See this issue for more info. We're presenting this release at JupyterCon 2025 in San Diego two days from now! More details to follow.
v3.0.0b7
This release notably upgrades to `jupyterlab-chat==0.17.0`, which is in the process of being published on conda-forge. This release is targeted to be the first v3 release published on conda-forge!
This release also fixes a bug that prevented some users from starting Jupyter AI locally on v3.0.0b6. Thank you @andreyvelich for contributing that fix so quickly!
Finally, we've also added some enhancements & fixes for the magic commands & the model parameters UI.
Enhancements made
- Upgrade to `jupyterlab-chat` v0.17.0 #1480 (@dlqqq)
- Add `api_base` to common model parameters #1478 (@jonahjung22)
- [magics] Add options to include the API url & key with alias #1477 (@srdas)
- Simplify model parameter REST API #1475 (@jonahjung22)
- Add model parameter type dropdown #1473 (@jonahjung22)
- [magics] Add `--api-base` and `--api-key-name` arguments #1471 (@srdas)
- Show the AI settings in the right area with Jupyter Notebook #1470 (@jtpio)
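For illustration, the new magics arguments might be used in a notebook cell like this (the model ID, URL, and variable name below are placeholders; the `provider:model` syntax follows the existing `%%ai` magic convention — see #1471 and #1477 for the exact behavior):

```
%%ai openai-chat:gpt-4o-mini --api-base https://llm-proxy.example.com/v1 --api-key-name MY_PROXY_KEY
Summarize the function defined in the previous cell.
```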
 
Bugs fixed
- Fix empty directory for Jupyter AI config #1472 (@andreyvelich)
 
Maintenance and upkeep improvements
- fixes directory of pr template #1474 (@jonahjung22)
 
Contributors to this release
(GitHub contributors page for this release)
@andreyvelich | @brichet | @dlqqq | @ellisonbg | @jonahjung22 | @jtpio | @srdas
v3.0.0b6
This release includes several major upgrades to Jupyter AI v3, most notably migrating from LangChain to LiteLLM.
- Jupyter AI now provides >1000 LLMs out-of-the-box, without requiring an optional dependency for most providers. The only optional dependency that you may need is `boto3`, which is required for Amazon Bedrock models.
- Jupyter AI is significantly faster to install and start. The Jupyter AI server extension startup time has been reduced from ~10000ms to ~2500ms (a ~75% reduction). The remaining startup latency mostly comes from the time it takes to import `jupyter_ai`. We plan to improve this further by iterating on #1115.
- We have completely overhauled the AI settings page & simplified the model configuration process. The new AI settings page allows you to type in any LiteLLM model ID, without being restricted to the suggestions that appear in a popup. This allows you to use the latest LLMs as soon as they are released, even if they have not yet been added to the model lists in our source code.
  - By v3, users will also be able to define custom model parameters, which are passed directly as keyword arguments to `litellm.acompletion()`. Users will no longer have to ask maintainers to add fields to models.
- Finally, we've greatly simplified the process of providing your API keys. All API keys can now be defined as environment variables passed directly to `jupyter-lab`. You may also define API keys locally in the `.env` file at your workspace root, which is used throughout all of Jupyter AI. You can edit the `.env` file directly, or use the UI we provide in the AI settings page.
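As a sketch, a workspace-root `.env` might look like the following (the variable names are examples only; use whichever environment variables your chosen LiteLLM provider reads):

```
# .env at the workspace root
# Example: read by OpenAI models
OPENAI_API_KEY=sk-...
# Example: read by Anthropic models
ANTHROPIC_API_KEY=...
```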
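To illustrate the "custom parameters as keyword arguments" idea, here is a minimal sketch in plain Python. This is not Jupyter AI's actual code; the helper name `build_completion_kwargs` is hypothetical, and only the final commented-out line touches the real `litellm.acompletion()` API:

```python
# Sketch: user-defined model parameters are forwarded verbatim as keyword
# arguments to the completion call, so no per-field support is needed.
def build_completion_kwargs(model_id, messages, model_parameters):
    """Merge user-supplied parameters into the completion call's kwargs."""
    kwargs = {"model": model_id, "messages": messages}
    kwargs.update(model_parameters)  # user-supplied values pass through as-is
    return kwargs

kwargs = build_completion_kwargs(
    "anthropic/claude-3-5-sonnet-20241022",  # any LiteLLM model ID
    [{"role": "user", "content": "Hello!"}],
    {"temperature": 0.2, "api_base": "https://proxy.example.com/v1"},
)
# The real call would then be: await litellm.acompletion(**kwargs)
```

Because the parameters are merged rather than whitelisted, a newly released provider option works immediately without a code change.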
There are some minor breaking changes:
- The path local personas are loaded from has moved from `.jupyter/` to `.jupyter/personas/`.
- The new "model parameters" section has a couple of bugs that will be fixed in future pre-releases.
- We have temporarily hidden the "inline completion model" section until we refactor the backend to work with LiteLLM. That work is being tracked in #1431. Contributions welcome.
- We have also hidden the "embedding model" section. We plan for Jupyternaut to automatically gather the context it needs entirely through agentic tool-calling, which may remove the need for a vector store & embedding model. This may change in the future depending on the results of this effort.
 
Enhancements made
- PR Template #1446 (@jonahjung22)
- Load local personas from `.jupyter/personas` instead of `.jupyter/` #1443 (@andrii-i)
- Migrate from LangChain to LiteLLM (major upgrade) #1426 (@dlqqq)
 
Contributors to this release
(GitHub contributors page for this release)
@andrii-i | @cszhbo | @dlqqq | @jonahjung22 | @srdas
v3.0.0b5
Enhancements made
- Add file attachment directly to JupyternautPersona when file is included in message #1419 (@joadoumie)
 - Add VertexAI model provider #1417 (@anthonyhungnguyen)
 
v2.31.6
Enhancements made
- Add VertexAI model provider #1417 (@anthonyhungnguyen)
 - Refresh the list of supported Gemini models. #1381 (@haofan)
 
v3.0.0b4
Bugs fixed
- Bump `@jupyter/chat` dependency and regenerate `yarn.lock`, pin `cohere` to `<5.16` #1412 (@andrii-i)
- Return error message when the completion model is not specified for the Jupyternaut persona #1408 (@srdas)
 
Contributors to this release
(GitHub contributors page for this release)
@andrii-i | @dlqqq | @ellisonbg | @srdas
v3.0.0b3
Enhancements made
- Bump jupyterlab-chat version to v0.16.0 #1406 (@andrii-i)
 - Update user message routing rules #1399 (@3coins)
- Use `uv`, overhaul dev setup, update contributor docs #1392 (@dlqqq)
Contributors to this release
(GitHub contributors page for this release)
@3coins | @andrii-i | @dlqqq | @ellisonbg
v3.0.0b2
Enhancements made
- Add error handling for persona loading failures #1397 (@ellisonbg)
 - Add ignore globs for hidden files in CM config #1396 (@ellisonbg)
- Hide backslashes in `@file` paths with spaces #1390 (@andrii-i)
- Load personas dynamically from `.jupyter` dir #1380 (@fperez)
v3.0.0b1
Enhancements made
- Upgrade to Jupyter Chat v0.15.0 #1389 (@dlqqq)
 - Add MCP config to the .jupyter directory #1385 (@ellisonbg)
 - Added toolkit models #1382 (@3coins)
 - Refresh the list of supported Gemini models. #1381 (@haofan)
 - Allow personas to get chat path and directory #1379 (@dlqqq)
 - Add functions for finding the .jupyter directory or the workspace directory #1376 (@ellisonbg)
 
Contributors to this release
(GitHub contributors page for this release)
@3coins | @dlqqq | @ellisonbg | @haofan | @pre-commit-ci
v3.0.0b0
This is the first beta release of Jupyter AI v3! We've completed a majority of the new APIs & integrations that we plan to use in v3.0.0. It's now time for us to build features, fix bugs, (greatly) improve the UI, and make Jupyternaut a powerful default AI agent. We plan to move very quickly in the next couple of weeks to make v3.0.0 available to users as soon as we can. If everything works out, we will release v3.0.0 by the end of June.
This release notably implements the "stop streaming" button that existed in Jupyter AI v2 & improves performance by removing thousands of lines of old v2 code. Aside from the slash command capabilities (which will be implemented as agent tools during the beta), Jupyter AI v3 now has feature parity with Jupyter AI v2.
Maintenance and upkeep improvements
- Raise `jupyterlab-chat` version ceiling #1373 (@dlqqq)
- Remove unused code from v3 `main` branch #1369 (@dlqqq)