Leela Chess Zero (lc0) Lens (`lczerolens`): a set of utilities to make interpretability of lc0 networks easy and framework-agnostic in PyTorch. Use it with `tdhook`, `captum`, `zennit`, or `nnsight`.
```bash
pip install lczerolens
```
Add the `viz` extra to render heatmaps and the `backends` extra to use the `lc0` backends.
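For example, assuming standard pip extras syntax, both extras can be installed at once:

```bash
pip install "lczerolens[viz,backends]"
```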
Get the best move predicted by a model:
```python
from lczerolens import LczeroBoard, LczeroModel

model = LczeroModel.from_hf("lczerolens/maia-1100")
board = LczeroBoard()
output = model(board)
best_move_idx = output["policy"].argmax()
print(board.decode_move(best_move_idx))
```
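For example, to list a few top candidate moves from the same output (a minimal sketch assuming a single-board batch, so the policy tensor can be flattened):

```python
import torch

# Flatten the policy logits for the single board and take the 5 highest-scoring indices.
top_moves = torch.topk(output["policy"].flatten(), k=5)
for idx in top_moves.indices:
    print(board.decode_move(idx))
```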
Use `lczerolens` with your preferred PyTorch interpretability framework (`tdhook`, `captum`, `zennit`, `nnsight`). More examples are available in the framework-agnostic interpretability notebook.
```python
from lczerolens import LczeroBoard, LczeroModel

model = LczeroModel.from_hf("lczerolens/maia-1100")
board = LczeroBoard()
# Minimal sketch, assuming LczeroModel is a standard torch.nn.Module: capture
# the forward output with a plain hook (the same interface the frameworks use).
captured = {}
handle = model.register_forward_hook(lambda module, args, out: captured.update(out=out))
output = model(board)
handle.remove()
```
Tutorial notebooks cover the main features:

- Encode Boards
- Load Models
- Move Prediction
- Run Models on GPU (minimal sketch below)
- Evaluate Models on Puzzles
- Convert Official Weights
- Visualise Heatmaps
- Probe Concepts
- Walkthrough
- Framework-Agnostic Interpretability
- More to come...
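A minimal sketch for the GPU workflow listed above, assuming `LczeroModel` follows standard `torch.nn.Module` device semantics (`.to()`) and handles the board encoding on the model's device; the Run Models on GPU notebook covers the supported workflow in detail:

```python
import torch
from lczerolens import LczeroBoard, LczeroModel

# Assumption: LczeroModel behaves like a standard torch.nn.Module, so .to()
# moves its parameters and the board encoding follows the model's device.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = LczeroModel.from_hf("lczerolens/maia-1100").to(device)

board = LczeroBoard()
output = model(board)
print(board.decode_move(output["policy"].argmax()))
```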
Some Hugging Face Spaces are available to try out the library. The demo (:red_circle: under construction) will showcase some of the library's features, and the backends demo makes it easy to convert lc0 models to `onnx`.
Additionally, you can run the Gradio demos locally. First, clone the spaces (after cloning the repo):
```bash
git clone https://huggingface.co/spaces/lczerolens/demo spaces/demo
```
And optionally the backends demo:
```bash
git clone https://huggingface.co/spaces/lczerolens/backends-demo spaces/backends-demo
```
And then launch the demo (running on port `8000`):
```bash
just demo
```
To test the backends, use:

```bash
just demo-backends
```
See the full documentation.
See the guidelines in CONTRIBUTING.md.