2 changes: 2 additions & 0 deletions getting-started.md
@@ -58,6 +58,7 @@ Stamp currently supports the following feature extractors:
- [mSTAR][mstar]
- [MUSK][musk]
- [PLIP][plip]
+- [TICON][ticon]


As some of the above require you to request access to the model on huggingface,
@@ -158,6 +159,7 @@ meaning that it was ignored during feature extraction.
[EAGLE]: https://github.com/KatherLab/EAGLE
[MADELEINE]: https://huggingface.co/MahmoodLab/madeleine
[PRISM]: https://huggingface.co/paige-ai/Prism
+[TICON]: https://cvlab-stonybrook.github.io/TICON/ "TICON: A Slide-Level Tile Contextualizer for Histopathology Representation Learning"



27 changes: 14 additions & 13 deletions mcp/README.md
@@ -1,23 +1,24 @@
# STAMP MCP Server

-A FastMCP-based Model Context Protocol server wrapping [STAMP](https://github.com/KatherLab/STAMP)’s CLI, enabling seamless integration of STAMP preprocessing, training, encoding, evaluation, and inference into LLM-based pipelines.
+A FastMCP-based Model Context Protocol server wrapping [STAMP](https://github.com/KatherLab/STAMP)'s tools, enabling seamless integration of STAMP preprocessing, training, encoding, evaluation, and inference into LLM-based pipelines.

## Overview

This server lets LLM agents invoke STAMP tools via structured calls. It exposes the following tools:

-- `preprocess_stamp(...)`: tile & extract WSI features
-- `train_stamp(...)`: train weakly supervised models
-- `crossval_stamp(...)`: k-fold cross‑validation
-- `deploy_stamp(...)`: inference on held‑out data
-- `encode_slides_stamp(...)`: slide-level feature encoding
-- `encode_patients_stamp(...)`: patient-level feature encoding
-- `heatmaps_stamp(...)`: model-based heatmap visualization
-- `statistics_stamp(...)`: compute classification metrics
-- `read_file(...)` & `list_files(...)`: safe disk access
-- `check_available_devices()`: query Torch/Platform device availability
-
-Each tool serializes config into YAML, launches `stamp <mode>`, streams logs back, and returns stdout/stderr.
+- `preprocess_stamp()`: tile & extract WSI features
+- `train_stamp()`: train weakly supervised models
+- `crossval_stamp()`: k-fold cross‑validation
+- `deploy_stamp()`: inference on held‑out data
+- `encode_slides_stamp()`: slide-level feature encoding
+- `encode_patients_stamp()`: patient-level feature encoding
+- `heatmaps_stamp()`: model-based heatmap visualization
+- `statistics_stamp()`: compute classification metrics
+- `read_file()` & `list_files()`: safe disk access
+- `check_available_devices()`: query Torch/Platform device availability
+- `analyze_csv()` & `list_column_values()`: useful for clinical and slide tables
+
+Each tool serializes its config into YAML and directly calls STAMP's internal `_run_cli()` function, streaming logs back in real time and returning the execution results.
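The serialize-then-dispatch pattern described above can be sketched as follows. This is a minimal illustration, not the server's actual code: the `dump_yaml` helper and the parameter names are hypothetical stand-ins, and the hand-off to STAMP's internal `_run_cli()` is shown only as a comment so the sketch runs without STAMP installed.

```python
from pathlib import Path
import tempfile


def dump_yaml(config: dict) -> str:
    """Minimal flat-dict YAML emitter (a stand-in for a real YAML library)."""
    return "".join(f"{key}: {value}\n" for key, value in config.items())


def train_stamp(output_dir: str, clini_table: str, target_label: str) -> str:
    """Sketch of one MCP tool: collect arguments, write a YAML config,
    then hand the config to STAMP. Parameter names are illustrative."""
    config = {
        "output_dir": output_dir,
        "clini_table": clini_table,
        "target_label": target_label,
    }
    config_path = Path(tempfile.mkdtemp()) / "config.yaml"
    config_path.write_text(dump_yaml(config))
    # In the real server, this path would be passed to STAMP's internal
    # _run_cli(), with its log output streamed back to the MCP client.
    return str(config_path)
```

In the actual server, each such function would additionally be registered as a FastMCP tool so that LLM agents can invoke it through structured MCP calls.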

## Installation
Running the MCP server is as simple as installing STAMP as explained in the main README.md file, but with `--extra mcp` added to the install command (the same applies to a GPU installation).