Commit 26a4bf9

[Fix] update docker image for InternNav v0.3.0 (#8)
* fix docker image 030
* add config note for agent server
1 parent 2922cc6 commit 26a4bf9

File tree

2 files changed (+28 lines, −7 lines)

source/en/user_guide/internnav/quick_start/evaluation.md

Lines changed: 25 additions & 3 deletions
````diff
@@ -11,10 +11,25 @@ Model weights of InternVLA-N1 (Dual System) can be downloaded from [InternVLA-N1
 ### Evaluation on Isaac Sim
 Before evaluation, download the robot assets from [InternUTopiaAssets](https://huggingface.co/datasets/InternRobotics/Embodiments) and move them to the `data/` directory.
 
-[UPDATE] We support using local model and isaac sim in one process now. Evaluate on Single-GPU:
+InternNav supports two execution modes for running the model during evaluation.
+
+#### 1) In-Process Mode (use_agent_server = False)
+We now support running the local model and Isaac Sim in the same process, enabling single-GPU evaluation without a separate agent service; multi-process execution is also supported, with each process hosting its own simulator and local model.
 
 ```bash
 python scripts/eval/eval.py --config scripts/eval/configs/h1_internvla_n1_async_cfg.py
+
+# set the config with the following fields
+eval_cfg = EvalCfg(
+    task=TaskCfg(
+        task_settings={
+            'use_distributed': False,  # disable Ray-based distributed evaluation
+        }
+    ),
+    eval_settings={
+        'use_agent_server': False,  # run the model in the same process as the simulator
+    },
+)
 ```
 
 For multi-GPU inference, we currently support environments that expose a torchrun-compatible runtime (e.g., Torchrun or Aliyun DLC).
@@ -30,8 +45,8 @@ For multi-gpu inference, currently we support inference on environments that exp
     --config scripts/eval/configs/h1_internvla_n1_async_cfg.py
 ```
 
-The main architecture of the whole-system evaluation adopts a client-server model. In the client, we specify the corresponding configuration (*.cfg), which includes settings such as the scenarios to be evaluated, robots, models, and parallelization parameters. The client sends requests to the server, which then submits tasks to the Ray distributed framework based on the corresponding cfg file, enabling the entire evaluation process to run.
-
+#### 2) Agent Server Mode (use_agent_server = True)
+We also support running the model in a separate process from the simulator.
 First, change the `model_path` in the cfg file to the path of the InternVLA-N1 weights. Start the evaluation server:
 ```bash
 # from one process
@@ -44,6 +59,13 @@ Then, start the client to run evaluation:
 # from another process
 conda activate <internutopia>
 MESA_GL_VERSION_OVERRIDE=4.6 python scripts/eval/eval.py --config scripts/eval/configs/h1_internvla_n1_async_cfg.py
+
+# set the config with the following fields
+eval_cfg = EvalCfg(
+    eval_settings={
+        'use_agent_server': True,  # run the model in a separate agent-server process
+    },
+)
 ```
 
 The evaluation results will be saved to the `eval_results.log` file in the `output_dir` of the config file. The whole evaluation takes about 10 hours on an RTX 4090 GPU.
````
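The two modes above differ only in a couple of config fields. As a minimal, self-contained sketch of that shape, using hypothetical stand-in dataclasses (the real `EvalCfg`/`TaskCfg` come from InternNav and carry many more fields):

```python
from dataclasses import dataclass, field

# Hypothetical stand-ins for InternNav's EvalCfg/TaskCfg, for illustration only.
@dataclass
class TaskCfg:
    task_settings: dict = field(default_factory=dict)

@dataclass
class EvalCfg:
    task: TaskCfg = field(default_factory=TaskCfg)
    eval_settings: dict = field(default_factory=dict)

# In-process mode: model and Isaac Sim share one process, no Ray workers.
in_process = EvalCfg(
    task=TaskCfg(task_settings={'use_distributed': False}),
    eval_settings={'use_agent_server': False},
)

# Agent-server mode: the model runs in a separate server process.
agent_server = EvalCfg(
    eval_settings={'use_agent_server': True},
)
```

Only `use_agent_server` (and, for in-process runs, `use_distributed`) changes between the two setups; everything else in the cfg file can stay the same.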

source/en/user_guide/internnav/quick_start/simulation.md

Lines changed: 3 additions & 4 deletions
````diff
@@ -14,7 +14,7 @@ To help you get started quickly, we've prepared a **Docker image** pre-configure
 
 You can pull the image (~17GB) and run evaluations in the container using the following command:
 ```bash
-docker pull crpi-mdum1jboc8276vb5.cn-beijing.personal.cr.aliyuncs.com/internrobotics/internnav:v0.2.0
+docker pull crpi-mdum1jboc8276vb5.cn-beijing.personal.cr.aliyuncs.com/internrobotics/internnav:v0.3.0
 ```
 
 Run the container by:
@@ -40,12 +40,11 @@ docker run --name internnav -it --rm --gpus all --network host \
     -v ${HOME}/docker/isaac-sim/data:/root/.local/share/ov/data:rw \
     -v ${HOME}/docker/isaac-sim/documents:/root/Documents:rw \
     -v ${PWD}/data/scene_data/mp3d_pe:/isaac-sim/Matterport3D/data/v1/scans:rw \
-    crpi-mdum1jboc8276vb5.cn-beijing.personal.cr.aliyuncs.com/internrobotics/internnav:v0.2.0
+    crpi-mdum1jboc8276vb5.cn-beijing.personal.cr.aliyuncs.com/internrobotics/internnav:v0.3.0
 ```
-After the container started, you can quickly start the env and install the InternNav:
+After the container starts, you can use InternNav directly in the pre-installed conda environment:
 ```bash
 conda activate internutopia
-pip install -e .[isaac,model]
 ```
 <!-- To help you get started quickly, we've prepared a Docker image pre-configured with Isaac Sim 4.5 and InternUtopia. You can pull the image and run evaluations in the container using the following command:
 ```bash
````
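Since this commit exists mainly to bump the image tag, a tiny shell guard can catch a stale tag when scripting the pull; a sketch (the `IMAGE` variable is illustrative, not part of the docs):

```shell
# Registry path from the docs above; the tag is everything after the last ':'.
IMAGE="crpi-mdum1jboc8276vb5.cn-beijing.personal.cr.aliyuncs.com/internrobotics/internnav:v0.3.0"
TAG="${IMAGE##*:}"
echo "$TAG"  # prints v0.3.0
```

Checking `$TAG` before `docker pull "$IMAGE"` in CI scripts avoids silently evaluating against the older v0.2.0 image.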
