I wanted to scope out a planned update that will make it easier to script and automate the calibration workflow. Along with this I'll be renaming some things and generally trying to get the code a bit more "ergonomic".
The basic idea is that you can install the calibration pipeline tools with `pip install caliscope` to use it as a library. Alternatively, you can use `pip install caliscope[gui]` to add the full graphical application. Does the API below make sense for your use case?
Planned workflow example

Creating cameras

Cameras can be initialized several ways depending on what you have:
```python
# From video files (reads resolution from metadata)
intrinsic_videos = {
    0: project / "intrinsic" / "cam_0.mp4",
    1: project / "intrinsic" / "cam_1.mp4",
    2: project / "intrinsic" / "cam_2.mp4",
}
cameras = CameraArray.from_videos(intrinsic_videos)
```
```python
# From known image sizes (no video files needed)
cameras = CameraArray.from_image_sizes({
    0: (1920, 1080),
    1: (1920, 1080),
    2: (1920, 1080),
})

# From a previously saved calibration
cameras = CameraArray.from_toml(project / "camera_array.toml")
```
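The snippets here assume `project` is a `pathlib.Path`, so the `/` operator builds file paths relative to the project directory. A minimal setup sketch (the directory name is illustrative):

```python
from pathlib import Path

# Root of the calibration project; every path in the examples hangs off this.
project = Path("my_project")

# `/` on a Path joins components in an OS-independent way.
video_path = project / "intrinsic" / "cam_0.mp4"
```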
Intrinsic calibration
```python
for cam_id, video in intrinsic_videos.items():
    points = extract_image_points({cam_id: video}, tracker)
    output = calibrate_intrinsics(points, cameras[cam_id])
    cameras[cam_id] = output.camera
    print(f"Camera {cam_id}: RMSE={output.report.rmse:.3f}px")

# Save intrinsics for reuse
cameras.to_toml(project / "camera_array.toml")
```
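For context, the RMSE printed above is the root-mean-square reprojection error in pixels: how far, on average, the detected image points land from where the calibrated camera model reprojects them. A standalone numpy sketch of the metric (variable names are illustrative, not part of the caliscope API):

```python
import numpy as np

def reprojection_rmse(detected: np.ndarray, reprojected: np.ndarray) -> float:
    """Root-mean-square 2D reprojection error in pixels.

    Both arrays are (N, 2): detected corner locations vs. where the
    calibrated model projects the corresponding object points.
    """
    residuals = detected - reprojected       # per-point pixel offsets, (N, 2)
    sq_dist = np.sum(residuals**2, axis=1)   # squared distance per point
    return float(np.sqrt(np.mean(sq_dist)))

# A point off by (3, 4) px is 5 px away, so a single-point RMSE is 5.0
print(reprojection_rmse(np.array([[3.0, 4.0]]), np.array([[0.0, 0.0]])))  # 5.0
```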
Extrinsic calibration
```python
extrinsic_videos = {
    0: project / "extrinsic" / "cam_0.mp4",
    1: project / "extrinsic" / "cam_1.mp4",
    2: project / "extrinsic" / "cam_2.mp4",
}

points = extract_image_points(extrinsic_videos, tracker)
capture_volume = CaptureVolume.bootstrap(points, cameras)  # relative poses estimated from linked stereopairs

# Refine
capture_volume = capture_volume.optimize()  # bundle adjustment performed
capture_volume = capture_volume.filter_by_percentile_error(2.5)
capture_volume = capture_volume.optimize()

# Set real-world scale and origin using the charuco board visible at frame 0
capture_volume = capture_volume.align_to_object(sync_index=0)

# Optionally rotate to match your lab coordinate system
capture_volume = capture_volume.rotate("x", 90)

# Save everything (camera_array.toml + image_points.csv + world_points.csv)
capture_volume.save(project / "capture_volume")
```
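For intuition on the rotate step: rotating the capture volume 90° about the x-axis amounts to applying the standard x-axis rotation matrix to every world point. A plain numpy sketch, independent of the caliscope API:

```python
import numpy as np

def rotation_x(degrees: float) -> np.ndarray:
    """3x3 rotation matrix for a rotation about the x-axis."""
    theta = np.radians(degrees)
    c, s = np.cos(theta), np.sin(theta)
    return np.array([
        [1.0, 0.0, 0.0],
        [0.0, c, -s],
        [0.0, s, c],
    ])

# A 90° rotation about x sends the +y axis to +z
point = np.array([0.0, 1.0, 0.0])
rotated = rotation_x(90) @ point
print(np.round(rotated, 6))  # [0. 0. 1.]
```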
Reusing intrinsics with a new extrinsic calibration
```python
# Load cameras with existing intrinsics
cameras = CameraArray.from_toml(project / "camera_array.toml")

# New extrinsic videos from the new setup
new_extrinsic_videos = {
    0: project / "new_session" / "cam_0.mp4",
    1: project / "new_session" / "cam_1.mp4",
    2: project / "new_session" / "cam_2.mp4",
}

points = extract_image_points(new_extrinsic_videos, tracker)

# bootstrap() uses the existing intrinsics, computes new extrinsics
capture_volume = CaptureVolume.bootstrap(points, cameras)
capture_volume = capture_volume.optimize()
capture_volume.save(project / "new_session" / "capture_volume")
```
The key: `bootstrap()` reads intrinsics from the cameras you pass in, but computes extrinsics fresh from the new observations. Your lens calibration is preserved.
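To make the intrinsic/extrinsic split concrete: in the standard pinhole model a world point projects as x ~ K (R X + t), where K (focal lengths, principal point) is the per-lens intrinsic matrix that survives across sessions, while the pose [R | t] is what gets re-estimated for each new setup. A minimal numpy sketch of that separation, unrelated to caliscope internals:

```python
import numpy as np

def project(K, R, t, X):
    """Project a 3D world point with a pinhole camera: x ~ K (R X + t)."""
    cam = R @ X + t        # world -> camera coordinates (extrinsics)
    u, v, w = K @ cam      # camera -> homogeneous pixel coordinates (intrinsics)
    return np.array([u / w, v / w])

# Intrinsics: property of the lens/sensor, reused across sessions
K = np.array([[1000.0,    0.0, 960.0],
              [   0.0, 1000.0, 540.0],
              [   0.0,    0.0,   1.0]])

# Extrinsics: the camera pose for one particular session
R1, t1 = np.eye(3), np.array([0.0, 0.0, 0.0])

X = np.array([0.0, 0.0, 2.0])   # point 2 m straight ahead on the optical axis
print(project(K, R1, t1, X))    # lands at the principal point: [960. 540.]
```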
Checkpointing
Every intermediate result can be saved and loaded independently. This means you can stop and resume at any point, or re-run just the expensive step that changed:
```python
# Save after tracking (the slow part)
points.to_csv(project / "checkpoints" / "extrinsic_points.csv")

# Later, resume from saved points
points = ImagePoints.from_csv(project / "checkpoints" / "extrinsic_points.csv")
capture_volume = CaptureVolume.bootstrap(points, cameras)
```
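The same save/load pattern generalizes to a cache-or-compute guard, so re-running a script skips any stage whose checkpoint already exists. A stdlib-only sketch with a stand-in compute step (the real pipeline would call `extract_image_points` there):

```python
import json
import tempfile
from pathlib import Path

def cached(path: Path, compute):
    """Load a JSON checkpoint if present; otherwise compute and save it."""
    if path.exists():
        return json.loads(path.read_text())
    result = compute()
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(result))
    return result

with tempfile.TemporaryDirectory() as tmp:
    ckpt = Path(tmp) / "checkpoints" / "points.json"
    first = cached(ckpt, lambda: {"points": [1, 2, 3]})  # computed and saved
    second = cached(ckpt, lambda: {"points": []})        # loaded; compute skipped
    print(second)  # {'points': [1, 2, 3]}
```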