scale from Equirectangular images #644
Unanswered
varunkudan asked this question in Q&A
Replies: 2 comments
@varunkudan Could you provide the video you are testing on?
A complete stab-in-the-dark guess: in monocular usage (i.e. fisheye and equirectangular), the system simply guesses the scale. It has nothing to ground its coordinates to any real-world measurement unless you have a source of depth information. So I'd speculate that the scale difference you observe is a byproduct of how the system guesses/initialises scale at 360° vs 180° FOV for monocular cameras.
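To see why monocular scale is unobservable, here is a minimal sketch (a hypothetical pinhole camera with identity rotation and an illustrative focal length, not stella_vslam's actual camera model): scaling every 3D point and every camera translation by the same factor leaves all image projections unchanged, so no choice of scale is preferred by the images alone.

```python
import numpy as np

def project(point_3d, cam_t, f=500.0):
    """Pinhole projection of a world point seen from a camera at cam_t
    (identity rotation; f is an arbitrary illustrative focal length)."""
    p = point_3d - cam_t           # point in the camera frame
    return f * p[:2] / p[2]        # perspective divide -> pixel coords

point = np.array([1.0, 2.0, 10.0])
cam = np.array([0.5, 0.0, 0.0])

# Globally rescaling the whole scene (points AND camera positions)
# produces the exact same pixels for every scale factor s.
for s in (1.0, 2.0, 0.5):
    print(project(s * point, s * cam))
```

Because the projections are identical, the solver has to pick some scale during initialisation, and that choice can differ between camera models.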
Hi all, I am unable to resolve the following issue and would appreciate your suggestions. I am using equirectangular images from an Insta360 camera and running stella_vslam. I then take the resulting pose file, along with accelerometer and gyroscope values extracted using gyroflow-python, and estimate the scale with an optimization approach.
When I use the poses obtained from only the front-facing camera, I get the correct scale. But when I do the same with equirectangular images, I get half that scale. I am having trouble understanding this; could you help me reason about why it happens?
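For reference, one common form of that scale optimization is a least-squares fit between accelerations differentiated from the SLAM trajectory and gravity-removed IMU acceleration magnitudes. This is a hypothetical sketch (the function name, the uniform timestep, and the synthetic inputs are all assumptions, not the actual pipeline):

```python
import numpy as np

def estimate_scale(positions, dt, imu_accel_mag):
    """Least-squares metric scale s minimising ||s * a_vis - a_imu||^2,
    where a_vis is acceleration numerically differentiated from the
    (unit-less) SLAM positions and a_imu is the gravity-removed IMU
    acceleration magnitude, assumed time-aligned and uniformly sampled."""
    vel = np.gradient(positions, dt, axis=0)       # first derivative
    acc = np.gradient(vel, dt, axis=0)             # second derivative
    a_vis = np.linalg.norm(acc, axis=1)
    # Closed-form 1-D least squares: s = <a_vis, a_imu> / <a_vis, a_vis>
    return np.dot(a_vis, imu_accel_mag) / np.dot(a_vis, a_vis)

# Synthetic check: unit circular motion has |a| = 1 m/s^2; a trajectory
# reported at half the metric size should recover a scale of ~2.
t = np.arange(0.0, 10.0, 0.01)
p_metric = np.stack([np.sin(t), np.cos(t), np.zeros_like(t)], axis=1)
s = estimate_scale(p_metric / 2.0, 0.01, np.ones_like(t))
print(s)  # close to 2.0
```

Whatever the exact formulation, a fit like this only recovers a single global factor, so if the equirectangular run initialises at a different internal scale than the front-facing run, the estimated factor will differ accordingly.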
Thank you very much for your time.