Reliable, minimal and scalable library for evaluating and conducting world model research
Experiments in Joint Embedding Predictive Architectures (JEPAs).
👆PyTorch Implementation of JEDi Metric described in "Beyond FVD: Enhanced Evaluation Metrics for Video Generation Quality"
GenBio-PathFM is a histopathology foundation model from GenBio AI.
Joint Embedding Predictive Architecture for World Models, written in Rust.
This VL-JEPA implementation takes direct inspiration from the original VL-JEPA paper
An open-source attempt at training a variant of LeCun's energy-based models (EBM) to reason in latent space and solve Sudoku.
Seeing Beyond Words: Self-Supervised Visual Learning for Multimodal Large Language Models
A Video Joint Embedding Predictive Architecture (JEPA) that runs on a personal computer.
38M-param time-series world model: FSQ tokenizer → Mamba-2 JEPA → OT-CFM → TD-MPC2 agent. 838M tokens, TPU v6e, JAX/Flax.
A deterministic execution spine and cognitive memory layer for orchestrating Predictive World Models (JEPAs) without text-based prompting.
A simple and efficient implementation of Self-Supervised Learning from Images with a Joint-Embedding Predictive Architecture (I-JEPA)
Training backend for Cell Observatory models
"Predict and Reconstruct: Joint Objectives for Self-Supervised Language Representation Learning" — hybrid JEPA + MLM pre-training for text encoders with GLUE evaluation : https://doi.org/10.13140/RG.2.2.17818.30404
Using the JEPA architecture for multimodal language translation
Train a JEPA world model on a set of pre-collected trajectories from an environment involving an agent in two rooms.
Learn JEPA from scratch -- 10 notebooks from representations to world models
Project for Yann LeCun's Deep Learning class. In this project, we train a JEPA world model on a set of pre-collected trajectories from a toy environment involving an agent in two rooms.
A PyTorch implementation of Latent Embedding JEPA for learning world models in continuous environments without latent collapse.