I am looking to add sensory/depth perception to an agent so it can 'see' and navigate the environment in front of it. However, Brax does not seem to have a built-in method for computing depth, such as a range-finder function.
My current idea is to have the agent cast rays in every direction, as in a LiDAR scan, and concatenate each ray's angle and hit distance into the observation. In other words, compute the distance and angle from the agent to the environment geometry, which is a single mesh object. I will then need to adapt the input size of the policy network so that it matches the size of the new observation space.
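To make the idea concrete, here is a rough JAX sketch of the ray casting I have in mind. It approximates obstacles as spheres rather than intersecting the actual environment mesh (a real implementation would need ray-triangle tests against the mesh), and the names `ray_sphere_distance` and `lidar_observation` are my own placeholders, not Brax API:

```python
import jax
import jax.numpy as jnp

def ray_sphere_distance(origin, direction, center, radius, max_range):
    # Solve |origin + t*direction - center|^2 = radius^2 for the smallest t >= 0.
    oc = origin - center
    b = jnp.dot(oc, direction)
    c = jnp.dot(oc, oc) - radius**2
    disc = b**2 - c
    t = -b - jnp.sqrt(jnp.maximum(disc, 0.0))
    hit = (disc >= 0.0) & (t >= 0.0)
    # Rays that miss (or hit behind the agent) report the sensor's max range.
    return jnp.where(hit, t, max_range)

def lidar_observation(origin, heading, centers, radii,
                      num_rays=16, max_range=10.0):
    # Ray angles spread evenly around the agent, relative to its heading.
    angles = heading + jnp.linspace(0.0, 2.0 * jnp.pi, num_rays, endpoint=False)
    dirs = jnp.stack([jnp.cos(angles), jnp.sin(angles)], axis=-1)  # (num_rays, 2)

    def one_ray(d):
        # Closest hit over all obstacles for this ray direction.
        dists = jax.vmap(ray_sphere_distance, in_axes=(None, None, 0, 0, None))(
            origin, d, centers, radii, max_range)
        return jnp.min(dists)

    depths = jax.vmap(one_ray)(dirs)  # (num_rays,)
    # Concatenate angle and distance per ray, as described above;
    # the policy input size would then be 2 * num_rays.
    return jnp.concatenate([angles, depths])
```

For example, an agent at the origin with heading 0 and a single sphere of radius 1 centered at (5, 0) should see a depth of 4 along its forward ray, and `max_range` along rays that miss.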
Does anyone have experience adding depth perception in Brax, or a clever idea for how it could be done? Thanks!
