This repository contains the official PyTorch implementation of our BMVC 2025 paper:
"PrIINeR: Towards Prior-Informed Implicit Neural Representations for Accelerated MRI"
Ziad Al-Haj Hemidi, Eytan Kats, Mattias P. Heinrich
📍 To appear at the British Machine Vision Conference (BMVC) 2025, Sheffield, UK
Acceleration in Magnetic Resonance Imaging (MRI) is essential to reduce acquisition time but typically degrades image quality due to undersampling artifacts. Implicit Neural Representations (INRs) have recently emerged as a promising instance-specific alternative for image reconstruction. However, their performance deteriorates under high acceleration factors due to limited structural priors and a lack of generalizability.
PrIINeR addresses this limitation by integrating population-level prior knowledge from pre-trained deep learning models into the INR-based reconstruction process. Specifically, we propose a hybrid framework that performs instance-specific INR optimization, guided by global anatomical priors, while enforcing dual data consistency with both the acquired k-space and prior-based reconstructions. The proposed approach significantly improves structural fidelity, reduces artifacts, and outperforms existing INR-based baselines on the NYU fastMRI dataset.
The PrIINeR framework consists of three key components:
- Prior-Guided Supervision: Incorporation of deep priors extracted from pre-trained models (e.g., UNet, Transformer-based, or generative INR models).
- Instance-Level INR Optimization: An implicit network parameterized over spatial coordinates and optimized per subject, using only the undersampled k-space and, if available, a deep prior.
- Dual Data Consistency Constraints: Enforced with respect to both the acquired k-space measurements and the deep-prior-based reconstruction, promoting reconstructions with high structural fidelity and reduced artifacts.
This formulation allows PrIINeR to act as a plug-and-play framework with interchangeable priors, adaptable across different prior architectures and acceleration scenarios.
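To make the dual data-consistency idea concrete, below is a minimal PyTorch sketch of how such an objective could be composed for a single-coil, complex-valued image. The function name, the centered-FFT convention, and the weighting `lam` are illustrative assumptions; the repository's actual implementation lives in `src/priiner.py`:

```python
import torch

def dual_consistency_loss(x, y, mask, x_prior, lam=1.0):
    """Illustrative dual data-consistency objective (not the repo's code).

    x       : current INR reconstruction, complex image of shape (H, W)
    y       : acquired undersampled k-space, complex, shape (H, W)
    mask    : binary sampling mask, shape (H, W)
    x_prior : reconstruction produced by the pre-trained prior model
    lam     : weight of the prior-consistency term (hypothetical)
    """
    # Consistency with the acquired measurements, enforced in k-space
    # on the sampled locations only.
    k = torch.fft.fftshift(torch.fft.fft2(torch.fft.ifftshift(x)))
    loss_kspace = ((mask * k - y).abs() ** 2).mean()

    # Consistency with the prior-based reconstruction, in image space.
    loss_prior = ((x - x_prior).abs() ** 2).mean()

    return loss_kspace + lam * loss_prior
```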
```
.
├── create_env.sh                  # Environment setup script
├── data/                          # Example k-space file
│   └── kspace_knee_slice.h5
├── models/                        # Pre-trained prior models
│   ├── unet.pth
│   ├── genINR.pth
│   └── reconFormer.pth
├── src/                           # Core implementation
│   ├── priiner.py
│   ├── genINR.py
│   ├── Recurrent_Transformer.py
│   ├── RS_attention.py
│   ├── configs.py
│   └── utils.py
├── run_piiner_no_prior.py         # INR-only baseline
├── run_piiner_unet.py             # UNet-based prior integration
├── run_piiner_genINR.py           # genINR-based prior integration
├── run_piiner_reconFormer.py      # Transformer-based prior integration
└── README.md
```
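The bundled example slice can be inspected before running any script. A minimal sketch, assuming `h5py` is installed and that the file follows the fastMRI convention of storing raw data under the key `"kspace"` (neither is guaranteed by this README):

```python
import h5py

# Open the example file and list its datasets before assuming a layout.
with h5py.File("data/kspace_knee_slice.h5", "r") as f:
    print(list(f.keys()))
    kspace = f["kspace"][()]  # complex-valued k-space array (fastMRI convention)
    print(kspace.shape, kspace.dtype)
```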
We recommend using Mamba for faster dependency resolution. To set up the environment:

```bash
# Clone the repository
git clone https://github.com/multimodallearning/PrIINeR.git

# Navigate to the repository
cd PrIINeR

# Create and activate the environment
bash create_env.sh
```

This script creates a new environment named `priiner` and installs all required packages.
Dependencies:
- Python ≥ 3.11
- PyTorch
- MONAI
- torchmetrics
- timm
- tinycudann (requires an NVIDIA GPU with CUDA support)
💡 If you don’t have mamba, replace `mamba` with `conda` in the script.
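After running the setup script, a quick sanity check (illustrative, not part of the repository) confirms that the key dependencies import and that a CUDA device is visible, which tinycudann requires:

```python
# Run inside the activated `priiner` environment.
import torch
import tinycudann  # fails without a working CUDA-capable NVIDIA GPU setup

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
```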
We provide four scripts corresponding to different variants of the PrIINeR reconstruction method; choose one according to which prior model, if any, you want to use:
🔹 INR-only baseline (no prior)

```bash
python run_piiner_no_prior.py
```

🔹 UNet-based prior

```bash
python run_piiner_unet.py
```

🔹 Generative INR-based prior

```bash
python run_piiner_genINR.py
```

🔹 Transformer-based prior (ReconFormer)

```bash
python run_piiner_reconFormer.py
```
Each script will:
- Load the undersampled k-space slice.
- Instantiate the INR and (if applicable) the prior model.
- Optimize the INR under the dual data-consistency constraints.
- Save the reconstructed images and performance metrics.
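Since torchmetrics is among the dependencies, the reported metrics can be computed along the lines of the sketch below. The placeholder tensors and the `data_range` value are assumptions for demonstration, not the repository's actual evaluation code:

```python
import torch
from torchmetrics.image import PeakSignalNoiseRatio, StructuralSimilarityIndexMeasure

# Placeholder tensors standing in for a reconstruction and its reference
# slice, both normalized to [0, 1] and shaped (N, C, H, W).
recon = torch.rand(1, 1, 320, 320)
target = torch.rand(1, 1, 320, 320)

psnr = PeakSignalNoiseRatio(data_range=1.0)
ssim = StructuralSimilarityIndexMeasure(data_range=1.0)

print("PSNR:", psnr(recon, target).item())
print("SSIM:", ssim(recon, target).item())
```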
If you use this work in your research, please cite:
```bibtex
@inproceedings{priiner2025,
  title     = {PrIINeR: Towards Prior-Informed Implicit Neural Representations for Accelerated MRI},
  author    = {Ziad Al-Haj Hemidi and Eytan Kats and Mattias P. Heinrich},
  booktitle = {British Machine Vision Conference (BMVC)},
  year      = {2025}
}
```
This project is licensed under the MIT License. See the LICENSE file for details.
For questions or collaborations, please contact:
- Ziad Al-Haj Hemidi — [email protected]
- Eytan Kats — [email protected]
- Mattias P. Heinrich — [email protected]