
Could you give me some clues on reproducing the CLIP feature alignment from DR.Robot? #3

@qrcat

Description


I'm trying to reimplement the "Text to Robot Pose with CLIP" part of the paper, but I haven't been able to achieve the same results.

I've been trying to match the conditions described in the paper. To train the shadow hand model, I ran:

python generate_robot_data.py --model_xml_dir mujoco_menagerie/shadow_hand --camera_distance_factor 0.4
python train.py --dataset_path data/shadow_hand --experiment_name shadow_hand --canonical_training_iterations 5000 --pose_conditioned_training_iterations 30_000

I also wrote a script for aligning CLIP features using the 🤗 openai/clip-vit-base-patch32 encoder.
The initial pose comes from get_canonical_pose in utils.mujoco_utils, and I optimize it with Adam.
The loss is the negative dot product between the image and text embeddings:

loss = -torch.matmul(image_embedding, text_features.T.detach())
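For reference, here is the loss as a small standalone helper (my own sketch, not code from the repo). One thing I noticed while writing it: whether the embeddings are L2-normalized, and whether CLIP's logit_scale is applied, changes the magnitude of the loss completely, which might be the source of the -24 vs. below -30 discrepancy.

```python
import torch
import torch.nn.functional as F

def clip_alignment_loss(image_embedding: torch.Tensor,
                        text_features: torch.Tensor) -> torch.Tensor:
    """Negative cosine similarity between CLIP image and text embeddings."""
    # L2-normalize so the dot product is a cosine similarity in [-1, 1].
    # Skipping this step, or multiplying by CLIP's learned logit_scale
    # (exp(logit_scale) is roughly 100 for the pretrained model), puts the
    # raw loss values in a completely different range.
    image_embedding = F.normalize(image_embedding, dim=-1)
    text_features = F.normalize(text_features, dim=-1)
    return -torch.matmul(image_embedding, text_features.T.detach()).mean()
```

With normalized embeddings the loss is bounded in [-1, 1], so a reported value around -24 suggests some scaling is applied before the dot product.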

I've noticed some oddities. First, the loss starts off much lower than what was reported in the paper: the webpage shows an initial value of around -24, but my reproduction yields values below -30. I suspect this has something to do with the prompts. Second, it is difficult to optimize to the desired pose.
(Attached: loss curve and rendered pose.)
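To make the second issue concrete, here is a stripped-down, runnable version of my optimization loop. The linear layer is a toy stand-in for the differentiable renderer plus CLIP image encoder; the dimensions and names are placeholders, not the real pipeline.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy stand-in (assumption): a frozen linear map plays the role of
# "render the pose, then encode the image with CLIP".
embed = torch.nn.Linear(24, 512)   # pose vector -> "image embedding"
embed.requires_grad_(False)
text_features = F.normalize(torch.randn(1, 512), dim=-1)  # fake text embedding

# Start from a canonical pose (all zeros here) and optimize it with Adam,
# mirroring get_canonical_pose + torch.optim.Adam in my script.
pose = torch.zeros(24, requires_grad=True)
optimizer = torch.optim.Adam([pose], lr=1e-2)

losses = []
for step in range(200):
    optimizer.zero_grad()
    image_embedding = F.normalize(embed(pose).unsqueeze(0), dim=-1)
    loss = -torch.matmul(image_embedding, text_features.T.detach()).mean()
    loss.backward()
    optimizer.step()
    losses.append(loss.item())
```

On this toy problem the loss decreases smoothly, so the plateau I see in practice presumably comes from the rendering/encoding stage rather than the optimizer itself.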

Could you share more implementation details for this part, such as the optimizer settings or any additional tricks that were used?

Looking forward to your reply.
