In the xonsh shell it's possible to replace almost everything. So let's imagine that we have a local LLM (e.g. llama) with low latency. We could use it for everything:
- Parse the command and colorize it based on context (How to make 🌈 lolcat colors style for input in xonsh? xonsh#6050).
- Suggest the rest of the command while typing, using the suggester (How to manage prompt suggestion for AI/LLM features? (grey colored command that is suggested from history) xonsh#6044).
- Check for errors while typing.
- Use it in a completer (https://xon.sh/tutorial_completers.html).
- Explain a command in the user's own language.
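As a rough illustration of the completer idea, here is a minimal sketch of a completer in the legacy function shape described in the completers tutorial (`(prefix, line, begidx, endidx, ctx)` returning a set of strings). The `query_llm` helper is purely hypothetical, a stand-in for whatever client the local model exposes:

```python
def query_llm(line: str) -> list[str]:
    # Hypothetical stub: a real implementation would call the local
    # low-latency model here (e.g. llama served over a local socket).
    canned = {"git ch": ["git checkout", "git cherry-pick"]}
    return canned.get(line, [])

def llm_completer(prefix, line, begidx, endidx, ctx):
    """Ask the local LLM for candidate commands and return the
    part that replaces `prefix` as a set of strings."""
    suggestions = query_llm(line)
    return {s[begidx:] for s in suggestions if s.startswith(line)}
```

In xonsh this would then be registered with something like `completer add llm llm_completer`; latency would decide whether it runs on every keystroke or only on explicit Tab.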
For community
⬇️ Please click the 👍 reaction instead of leaving a +1 or 👍 comment