
Commit 0f82252

Authored by kew6688 and DuangZhu
[Fix] Add changelog and Update task name to IIGN (#229)
* add changelog
* polish changelog 0.3.0
* refine changelog formatting
* change all 'iion' to 'iign'
* Fix grammar in changelog.md for v0.3.0 release: updated changelog for version 0.3.0 with highlights, new features, improvements, and bug fixes.
* Update changelog.md

Co-authored-by: DuangZhu <shaohao9.zhu@gmail.com>
1 parent 5f3246c commit 0f82252

File tree: 6 files changed, +81 −18 lines

README.md

Lines changed: 0 additions & 4 deletions

@@ -164,8 +164,6 @@ Please refer to the [documentation](https://internrobotics.github.io/user_guide/
 | InternVLA-N1 (Dual System)<span style="color: #28a745; font-size: 0.9em"> with NavDP*</span> | RGB-D | 4.70 | 59.7 | 50.6 | 69.7 |
 | InternVLA-N1 (Dual System)<span style="color: #28a745; font-size: 0.9em"> DualVLN </span> | RGB | **4.58** | **61.4** | **51.8** | **70.0** |
 
----
-
 #### <u>VLN-PE Benchmarks</u>
 
 **📍 Flash Controller on R2R Unseen**
@@ -203,8 +201,6 @@ Please refer to the [documentation](https://internrobotics.github.io/user_guide/
 | ViPlanner | 54.3 | 52.5 |
 | NavDP <InternVLA-N1 (System 1)> | **65.7** | **60.7** |
 
----
-
 ## 🔧 Customization
 
 Please refer to the [tutorial](https://internrobotics.github.io/user_guide/internnav/tutorials/index.html) for advanced usage of InternNav, including customization of datasets, models and experimental settings.

docs/changelog.md

Lines changed: 67 additions & 0 deletions (new file)

# Changelog

All notable changes to this project will be documented in this file.

## Unreleased

Upcoming changes will be tracked in this section.

## Changelog of v0.3.0 (2026/01/05)

### Highlights
- Support training of InternVLA-N1 and evaluation on RxR (#184)
- Support training and evaluation for the [VL-LN benchmark](https://arxiv.org/html/2512.22342v2) (#193, #198)
- Add a new Flash-without-Collision controller (#189)

### New Features
- Add training code for InternVLA-N1 (#184)
- Support evaluation on the RxR dataset (#184)
- Add training code for the VL-LN benchmark baseline (#198)
- Support evaluation on the VL-LN benchmark (#193)
- Add a Flash-without-Collision controller (#189)

### Improvements
- Decouple the System 2 and Dual-System evaluation functions in the Habitat evaluator for better readability (#184)
- Update the InternVLA-N1 agent in VLN-PE to align with the updated InternVLA-N1 policy interface (#184)
- Enhance the Habitat evaluation pipeline to handle NaN values in results (#217)
- Update the README to include community tutorials (#217)

### Bug Fixes
- Fix the version of diffusers in the requirements (#184)
- Fix the result JSON saving path in VLN-PE (#217)
- Fix a bug in RxR evaluation result collection (#217)
- Remove legacy code in scripts/demo (#217)

### Contributors
@kellyiss @DuangZhu @0309hws @kew6688

Full changelog: https://github.com/InternRobotics/InternNav/compare/release/v0.2.0...release/v0.3.0

## Changelog of v0.2.0 (2025/12/04)

### Highlights
- Support distributed evaluation for VLN-PE, reducing full benchmark runtime to ~1.6 hours using 16 GPUs (≈13× speedup over single-GPU evaluation) (#168)
- Enhance the Habitat evaluation flow with `DistributedEvaluator` and `HabitatEnv` integrated into the InternNav framework (#168)
- Support install flags for dependency isolation: `[habitat]`, `[isaac]`, `[model]` (#135)

### New Features
- Support distributed evaluation for VLN-PE (#168)
- Support a unified evaluation script `eval.py`, with new Habitat evaluation configs in `scripts/eval/configs` (#168)
- Support install flags for dependency isolation (#168)

### Improvements
- Add `HabitatEnv` with episode pool management (#168)
- Update `InternUtopiaEnv` for distributed execution and episode pool management (#168)
- Enhance `episode_loader` in VLN-PE with new distributed-mode compatibility (#168)
- Update `data_collector` to support progress checkpointing and incremental result aggregation in distributed evaluation (#168)

### Bug Fixes
- Fix the logger being disabled after Isaac Sim initialization during evaluator bootstrap (#168)
- Fix a dataloader bug where `revise_one_data()` was incorrectly applied to all datasets (#168)
- Fix a visualization image dimension mismatch during InternVLA-N1 evaluation (#168)
- Fix a distributed evaluation crash in the `rdp` policy (#168)
- Fix GitHub CI tests (#168)

### Contributors
A total of 3 developers contributed to this release.
@kew6688, @Gariscat, @yuqiang-yang

Full changelog: https://github.com/InternRobotics/InternNav/compare/release/v0.1.0...release/v0.2.0

internnav/dataset/internvla_n1_lerobot_dataset.py

Lines changed: 1 addition & 1 deletion

@@ -1371,7 +1371,7 @@ def __getitem__(self, i):
 def make_supervised_data_module(tokenizer: transformers.PreTrainedTokenizer, data_args) -> Dict:
     """Make dataset and collator for supervised fine-tuning."""
     train_datasets = []
-    if data_args.iion_dataset_use:
+    if data_args.iign_dataset_use:
         train_datasets.append(VLLNDataset(tokenizer=tokenizer, data_args=data_args))
     if data_args.vln_dataset_use:
         train_datasets.append(NavPixelGoalDataset(tokenizer=tokenizer, data_args=data_args))
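The renamed flag changes only which attribute gates dataset construction: each `*_dataset_use` string independently enables one dataset when non-empty. A minimal sketch of that gating logic, with hypothetical stub classes standing in for the real `VLLNDataset` and `NavPixelGoalDataset` implementations:

```python
from types import SimpleNamespace

# Hypothetical stand-ins; the real classes take the same keyword arguments
# but build actual training datasets.
class VLLNDataset:
    def __init__(self, tokenizer=None, data_args=None):
        self.name = "vlln"

class NavPixelGoalDataset:
    def __init__(self, tokenizer=None, data_args=None):
        self.name = "nav_pixel_goal"

def make_train_datasets(tokenizer, data_args):
    """Mirror the gating in make_supervised_data_module: a non-empty
    flag string enables the corresponding dataset."""
    train_datasets = []
    if data_args.iign_dataset_use:  # renamed from iion_dataset_use
        train_datasets.append(VLLNDataset(tokenizer=tokenizer, data_args=data_args))
    if data_args.vln_dataset_use:
        train_datasets.append(NavPixelGoalDataset(tokenizer=tokenizer, data_args=data_args))
    return train_datasets

args = SimpleNamespace(iign_dataset_use="iign_split1,iign_split2", vln_dataset_use="")
print([d.name for d in make_train_datasets(None, args)])  # ['vlln']
```

Because the gate is a plain truthiness check on the string, the rename had to be applied consistently across the dataclass field, the dataset module, and the launch script, or the flag would silently select nothing.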

internnav/dataset/vlln_lerobot_dataset.py

Lines changed: 10 additions & 10 deletions

@@ -15,31 +15,31 @@
 from .rope2d import get_rope_index_2, get_rope_index_25
 
 # Define placeholders for dataset paths
-IION_split1 = {
+IIGN_split1 = {
     "data_path": "projects/VL-LN-Bench/traj_data/mp3d_split1",
     "height": 125,
     "pitch_1": 0,
     "pitch_2": 30,
 }
 
-IION_split2 = {
+IIGN_split2 = {
     "data_path": "projects/VL-LN-Bench/traj_data/mp3d_split2",
     "height": 125,
     "pitch_1": 0,
     "pitch_2": 30,
 }
 
-IION_split3 = {
+IIGN_split3 = {
     "data_path": "projects/VL-LN-Bench/traj_data/mp3d_split3",
     "height": 125,
     "pitch_1": 0,
     "pitch_2": 30,
 }
 
 data_dict = {
-    "iion_split1": IION_split1,
-    "iion_split2": IION_split2,
-    "iion_split3": IION_split3,
+    "iign_split1": IIGN_split1,
+    "iign_split2": IIGN_split2,
+    "iign_split3": IIGN_split3,
 }
 
 IGNORE_INDEX = -100
@@ -55,14 +55,14 @@
 
 class VLLNDataset(Dataset):
     """
-    Dataset for 'Vision-Language'-'Language-Navigation' (VL-LN) / IION-style training.
+    Dataset for 'Vision-Language'-'Language-Navigation' (VL-LN) / IIGN-style training.
 
     Args:
         tokenizer (transformers.PreTrainedTokenizer): Tokenizer used to encode
             the chat template and produce `input_ids` / `labels`.
         data_args: A config-like object that must provide at least:
-            - iion_dataset_use (str): comma-separated dataset names, optionally
-              with sampling rate suffix like `iion_split1%50`.
+            - iign_dataset_use (str): comma-separated dataset names, optionally
+              with sampling rate suffix like `iign_split1%50`.
             - model_type (str): decides which rope-index function to use.
            - sample_step (int): stride for sampling start frames.
            - pixel_goal_only (bool): whether to keep only pixel-goal samples.
@@ -74,7 +74,7 @@ class VLLNDataset(Dataset):
 
     def __init__(self, tokenizer: transformers.PreTrainedTokenizer, data_args):
         super(VLLNDataset, self).__init__()
-        dataset = data_args.iion_dataset_use.split(",")
+        dataset = data_args.iign_dataset_use.split(",")
         dataset_list = data_list(dataset)
         rank0_print(f"Loading datasets: {dataset_list}")
         self.video_max_total_pixels = getattr(data_args, "video_max_total_pixels", 1664 * 28 * 28)
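The docstring notes that `iign_dataset_use` accepts comma-separated dataset names, optionally carrying a sampling-rate suffix such as `iign_split1%50`. A minimal sketch of how such a spec string could be parsed, assuming `%N` means "sample N percent of the dataset"; `parse_dataset_spec` is a hypothetical helper, not the repo's actual `data_list` function:

```python
def parse_dataset_spec(spec: str):
    """Split a comma-separated dataset spec into (name, sampling_rate) pairs.

    Assumption: a '%N' suffix selects N percent of the dataset; entries
    without a suffix use the full dataset (rate 1.0).
    """
    parsed = []
    for entry in spec.split(","):
        entry = entry.strip()
        if "%" in entry:
            name, rate = entry.split("%", 1)
            parsed.append((name, int(rate) / 100.0))
        else:
            parsed.append((entry, 1.0))
    return parsed

print(parse_dataset_spec("iign_split1%50,iign_split2"))
# [('iign_split1', 0.5), ('iign_split2', 1.0)]
```

The parsed names would then be looked up in `data_dict` (`iign_split1` → `IIGN_split1`, and so on), which is why the rename had to cover both the dict keys and the constant names.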

internnav/trainer/internvla_n1_argument.py

Lines changed: 1 addition & 1 deletion

@@ -29,7 +29,7 @@ class DataArguments:
     video_min_frame_pixels: int = field(default=4 * 28 * 28)
 
     vln_dataset_use: str = field(default="")
-    iion_dataset_use: str = field(default="")
+    iign_dataset_use: str = field(default="")
     sample_step: int = field(default=4)
     num_history: Optional[int] = field(default=8)
     predict_step_num: Optional[int] = field(default=32)

scripts/train/qwenvl_train/train_system2_vlln.sh

Lines changed: 2 additions & 2 deletions

@@ -27,7 +27,7 @@ max_pixels=313600
 min_pixels=3136
 
 # Dataset configuration (replace with public dataset names)
-iion_datasets=iion_split1,iion_split2 #,iion_split3
+iign_datasets=iign_split1,iign_split2 #,iign_split3
 
 # Output configuration
 run_name=InternVLA-N1-vlln
@@ -38,7 +38,7 @@ srun torchrun --nnodes=$SLURM_NNODES --nproc_per_node=8 \
     internnav/trainer/internvla_vlln_trainer.py \
     --deepspeed ${deepspeed} \
     --model_name_or_path "${llm}" \
-    --iion_dataset_use ${iion_datasets} \
+    --iign_dataset_use ${iign_datasets} \
     --data_flatten False \
     --tune_mm_vision True \
     --tune_mm_mlp True \
