`source/en/user_guide/internnav/quick_start/evaluation.md`
Model weights of InternVLA-N1 (Dual System) can be downloaded from [InternVLA-N1…
### Evaluation on Isaac Sim
Before evaluation, we should download the robot assets from [InternUTopiaAssets](https://huggingface.co/datasets/InternRobotics/Embodiments) and move them to the `data/` directory.
InternNav supports two execution modes for running the model during evaluation. The local model and Isaac Sim can now run in the same process, enabling single-GPU evaluation without launching a separate agent service; multi-process execution is also supported, where each process hosts its own simulator and local model.

#### 1) Local Mode (use_agent_server = False)

Set `use_agent_server` to `False` in the evaluation cfg (excerpt; the fields above the flag are elided in this diff):

```python
        'use_agent_server': False,  # run the model in the same process as the simulator
    },
)
```
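With `use_agent_server` set to `False`, evaluation then runs as a single process. The command below is a sketch only, assuming a `scripts/eval/eval.py` entry point and cfg path (the actual CLI is not shown in this diff):

```bash
# Hypothetical single-process launch: the simulator and the model
# run together, so no separate agent server is needed.
python scripts/eval/eval.py --config path/to/eval_cfg.py
```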
For multi-GPU inference, we currently support environments that expose a torchrun-compatible runtime (e.g., torchrun or Aliyun DLC).
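For illustration, a launch under such a runtime might look like the sketch below; the entry-point script and flag values are assumptions, not InternNav's documented command:

```bash
# Hypothetical torchrun launch across 4 GPUs on one node;
# script path and arguments are illustrative only.
torchrun --nproc_per_node=4 scripts/eval/eval.py --config path/to/eval_cfg.py
```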
The whole-system evaluation adopts a client-server architecture. In the client, we specify the corresponding configuration (`*.cfg`), which includes settings such as the scenarios to be evaluated, robots, models, and parallelization parameters. The client sends requests to the server, which then submits tasks to the Ray distributed framework based on the corresponding cfg file, driving the entire evaluation process.
#### 2) Agent Server Mode (use_agent_server = True)
We also support running the model in a separate process.
First, change the `model_path` in the cfg file to the path of the InternVLA-N1 weights, then start the evaluation server:
```bash
# from one process
# ... (server launch command truncated in this diff)
```
Then, start the client to run evaluation (cfg excerpt as above; surrounding fields elided):

```python
        'use_agent_server': True,  # run the model in a separate agent server process
    },
)
```
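Taken together, agent-server evaluation is a two-terminal flow. The commands below are placeholders: the real server command is truncated above, and the client entry point is an assumption carried over from the earlier sketch:

```bash
# Terminal 1: start the evaluation server (command truncated in the diff above).
# Terminal 2: run the client with the use_agent_server=True cfg.
python scripts/eval/eval.py --config path/to/eval_cfg.py  # hypothetical entry point
```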
The evaluation results will be saved in the `eval_results.log` file in the `output_dir` specified in the config file. The whole evaluation process takes about 10 hours on an RTX 4090.
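To check on a run, you can tail the log; `OUTPUT_DIR` below is a placeholder for whatever `output_dir` your cfg sets:

```bash
# OUTPUT_DIR is a placeholder; substitute the output_dir from your cfg file.
OUTPUT_DIR=path/to/output_dir
tail -n 20 "$OUTPUT_DIR/eval_results.log"
```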
After the container starts, you can activate the conda environment and install InternNav:
```bash
conda activate internutopia
pip install -e .[isaac,model]
```
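To confirm the editable install worked, a quick import check is enough; note that `internnav` as the import name is an assumption based on the project name:

```bash
# Assumes the package is importable as `internnav` after `pip install -e .`.
python -c "import internnav; print('InternNav import OK')"
```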
<!-- To help you get started quickly, we've prepared a Docker image pre-configured with Isaac Sim 4.5 and InternUtopia. You can pull the image and run evaluations in the container using the following command: