A unified API for quickly and easily trying 50+ (and growing!) image matching models.
Jump to: Install | Use | Models | Add a Model / Contributing | Acknowledgements | Cite
Compare matching models across various scenes. For example, we show SIFT-LightGlue and LoFTR matches on six types of image pairs:
(1) outdoor, (2) indoor, (3) satellite remote sensing, (4) paintings, (5) a false positive, and (6) spherical images.
You can also extract keypoints and associated descriptors.
Clone the repository recursively and install the packages:
git clone --recursive https://github.com/gmberton/image-matching-models
cd image-matching-models
pip install -r requirements.txt # required to support editable dependencies
pip install .

Some models require additional optional dependencies that are not included in the default list, such as torch-geometric (required by SphereGlue) and tensorflow (required by OmniGlue). To install these, use
pip install .[all]
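To quickly check that the optional extras are available, you can try importing them (a sketch; torch_geometric and tensorflow are the upstream packages' import names, not part of this repo's API):

# Sanity check that the optional extras installed correctly
import torch_geometric  # needed by SphereGlue
import tensorflow as tf  # needed by OmniGlue
print(torch_geometric.__version__, tf.__version__)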
You can use any of the 50+ matchers as simply as this. You never need to download weights manually; everything is handled automatically by the code.
from matching import get_matcher
from matching.viz import plot_matches, plot_kpts
# Choose any of the 50+ matchers listed below
matcher = get_matcher("superpoint-lightglue", device="cuda")
img_size = 512 # optional
img0 = matcher.load_image("assets/example_pairs/outdoor/montmartre_close.jpg", resize=img_size)
img1 = matcher.load_image("assets/example_pairs/outdoor/montmartre_far.jpg", resize=img_size)
result = matcher(img0, img1)
# result.keys() = ["num_inliers", "H", "all_kpts0", "all_kpts1", "all_desc0", "all_desc1", "matched_kpts0", "matched_kpts1", "inlier_kpts0", "inlier_kpts1"]
# This will plot visualizations for matches as shown in the figures above
plot_matches(img0, img1, result, save_path="plot_matches.png")
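# (Optional sketch, not part of the library API.) Assuming result["H"] is the
# 3x3 homography estimated from the inlier correspondences (mapping img0
# coordinates to img1 coordinates) and the loaded images are CHW float tensors
# in [0, 1], you could warp img0 onto img1 with OpenCV:
import cv2
import numpy as np
H = result["H"]
H = H.cpu().numpy() if hasattr(H, "cpu") else np.asarray(H)
img0_np = (img0.permute(1, 2, 0).cpu().numpy() * 255).astype(np.uint8)
img1_np = (img1.permute(1, 2, 0).cpu().numpy() * 255).astype(np.uint8)
warped = cv2.warpPerspective(img0_np, H, (img1_np.shape[1], img1_np.shape[0]))
cv2.imwrite("warp_img0_onto_img1.png", cv2.cvtColor(warped, cv2.COLOR_RGB2BGR))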
# Or you can extract and visualize keypoints as easily as
result = matcher.extract(img0)
# result.keys() = ["all_kpts0", "all_desc0"]
plot_kpts(img0, result, save_path="plot_kpts.png")
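Since the extraction result is just keypoints and descriptors, you can cache it to disk and reuse it later. A minimal sketch, assuming the returned values are torch tensors or NumPy-convertible arrays (the to_numpy helper below is defined here for illustration, it is not part of the library):

import numpy as np

def to_numpy(x):
    # Handle both torch tensors (possibly on GPU) and plain arrays
    return x.cpu().numpy() if hasattr(x, "cpu") else np.asarray(x)

np.save("kpts0.npy", to_numpy(result["all_kpts0"]))
np.save("desc0.npy", to_numpy(result["all_desc0"]))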
You can also run matching or extraction as standalone scripts and get the same results as above.

Matching:

python imm_match.py --matcher superpoint-lightglue --out_dir outputs_superpoint-lightglue --input assets/example_pairs/outdoor/montmartre_close.jpg assets/example_pairs/outdoor/montmartre_far.jpg

Keypoint extraction:
python imm_extract.py --matcher superpoint-lightglue --out_dir outputs_superpoint-lightglue --input assets/example_pairs/outdoor/montmartre_close.jpg

These scripts accept single images, folders with multiple images (or multiple pairs of images), or text files listing pairs of image paths. To see all available parameters, run
python imm_match.py -h
# or
python imm_extract.py -h
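For example, to run matching on a whole folder of images (a sketch based on the options shown above, using the example data shipped in this repo's assets):

python imm_match.py --matcher superpoint-lightglue --out_dir outputs_outdoor --input assets/example_pairs/outdoor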
We support the following methods:

Dense: roma, tiny-roma, duster, master, minima-roma, ufm
Semi-dense: loftr, eloftr, se2loftr, xoftr, minima-loftr, aspanformer, matchformer, xfeat-star, xfeat-star-steerers[-perm/-learned], edm, rdd-star, topicfm[-plus]
Sparse: [sift, superpoint, disk, aliked, dedode, doghardnet, gim, xfeat]-lightglue, dedode, steerers, affine-steerers, xfeat-steerers[-perm/-learned], dedode-kornia, [sift, orb, doghardnet]-nn, patch2pix, superglue, r2d2, d2net, gim-dkm, xfeat, omniglue, [dedode, xfeat, aliked]-subpx, [sift, superpoint]-sphereglue, minima-superpoint-lightglue, liftfeat, rdd-[sparse, lightglue, aliked], ripe, lisrd
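Any name from the lists above can be passed to get_matcher. As a small sketch (device and resize values chosen arbitrarily), here is how you might compare one dense, one semi-dense, and one sparse method on the same pair:

from matching import get_matcher

pair = ("assets/example_pairs/outdoor/montmartre_close.jpg",
        "assets/example_pairs/outdoor/montmartre_far.jpg")

# Compare a dense, a semi-dense, and a sparse method on the same image pair
for name in ["roma", "loftr", "sift-lightglue"]:
    matcher = get_matcher(name, device="cpu")
    img0 = matcher.load_image(pair[0], resize=512)
    img1 = matcher.load_image(pair[1], resize=512)
    result = matcher(img0, img1)
    print(f"{name}: {result['num_inliers']} inliers")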
See Model Details for the runtime, supported devices, and source of each model.
See CONTRIBUTING.md for details. We follow the first principle of PyTorch: Usability over Performance.
Special thanks to the authors of all the models included in this repo (links in Model Details), and to the authors of the other libraries we wrap, such as the Image Matching Toolbox and Kornia.
This repo was created as part of the EarthMatch paper. Please cite EarthMatch if this repo is helpful to you!
@InProceedings{Berton_2024_EarthMatch,
author = {Berton, Gabriele and Goletto, Gabriele and Trivigno, Gabriele and Stoken, Alex and Caputo, Barbara and Masone, Carlo},
title = {EarthMatch: Iterative Coregistration for Fine-grained Localization of Astronaut Photography},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2024},
}