A consolidated toolkit for turning 360° video captures into photogrammetry (PGM) datasets and optimized point clouds that seed 3D Gaussian Splatting (3DGS) projects. The scripts cover every stage from frame extraction to PLY refinement, and the desktop GUI orchestrates the full flow for multiple photogrammetry and 3DGS apps.
Project status: actively edited and expanded. Expect interface polish, richer presets, and additional documentation updates in upcoming revisions. The code in this repository was generated with the assistance of OpenAI Codex.
- Python 3.7 or newer
- `pip` for installing Python packages
- Python dependencies listed in `requirements.txt`
- FFmpeg available on your `PATH`
- PyTorch (`torch` 1.10+ and matching `torchvision` build) for the human masking tool
- GPU/CUDA is optional for masking, but accelerates large batches.
- Clone the repository

  ```bash
  git clone https://github.com/Mistral-Yu/360Cam-PGM-3DGS-Tools.git
  cd 360Cam-PGM-3DGS-Tools
  ```

- Create and activate a Conda environment

  ```bash
  conda create -n gs360 python=3.7
  conda activate gs360
  ```

  Tip: pick any supported Python version (3.7+) that matches your GPU drivers/CUDA toolkit.

- Install Python dependencies

  ```bash
  pip install -r requirements.txt
  ```

  Note: PyTorch wheels are platform-specific. If `pip` cannot find a build for your OS/Python combo, install `torch`/`torchvision` manually inside the Conda environment before rerunning the command.

- Verify FFmpeg

  ```bash
  ffmpeg -version
  ```
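To confirm the whole environment in one shot, the checks above can be run programmatically. The `check_environment` helper below is a hypothetical sketch, not part of the toolkit; it only reports what is visible from the current interpreter:

```python
import shutil
import subprocess

def check_environment():
    """Report which external dependencies are visible (hypothetical helper)."""
    report = {}
    ffmpeg = shutil.which("ffmpeg")  # FFmpeg must be on PATH for frame extraction
    report["ffmpeg"] = ffmpeg is not None
    if ffmpeg:
        out = subprocess.run([ffmpeg, "-version"], capture_output=True, text=True)
        report["ffmpeg_version"] = out.stdout.splitlines()[0] if out.stdout else ""
    try:
        import torch  # only needed for the human masking tool
        report["torch"] = torch.__version__
        report["cuda"] = torch.cuda.is_available()  # optional, speeds up masking
    except ImportError:
        report["torch"] = None  # masking tool unavailable until torch is installed
    return report

print(check_environment())
```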
GUI Explanation: `gs360_360GUI.py` (tabs: Video2Frames, FrameSelector, 360PerspCut, SegmentationMaskTool, PointCloudOptimizer, MS360xmlToPerspCams, DualFisheyePipeline).

- `Video2Frames` tab: Sample equirectangular frames from a 360° video.
- `FrameSelector` tab: Score and retain the sharpest frames for Structure-from-Motion (SfM).
- `360PerspCut` tab: Convert panoramas into perspective views. Use the exported views for alignment in your photogrammetry software (RealityScan, Metashape, etc.), then export the PLY point cloud plus camera metadata.
- `SegmentationMaskTool` tab: Preview and refine masks for unwanted subjects or rig elements before reconstruction, reducing ghosts and cleanup artifacts.
- `PointCloudOptimizer` tab: Optimize the initial point cloud; downsample or merge the PLY or COLMAP points3D for your 3DGS tool (PostShot, gsplat, etc.). Delete sky points generated by photogrammetry software and add a clean sky point cloud.
- `MS360xmlToPerspCams` tab: Experimental tool for converting Metashape spherical camera XML into perspective camera parameters (COLMAP, Metashape XML, transforms.json, RealityScan XMP).
- `DualFisheyePipeline` tab: Experimental tool for correcting dual-fisheye distortion from Metashape XML calibration and exporting perspective views.
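The `FrameSelector` idea, keep only the sharpest frames for SfM, is commonly implemented as variance of the Laplacian; the sketch below assumes that metric (the tool's actual scoring function may differ):

```python
import numpy as np

def sharpness_score(gray: np.ndarray) -> float:
    """Variance of the discrete Laplacian: blur flattens second derivatives,
    so sharper frames score higher."""
    g = gray.astype(np.float64)
    lap = (
        -4.0 * g[1:-1, 1:-1]
        + g[:-2, 1:-1] + g[2:, 1:-1]
        + g[1:-1, :-2] + g[1:-1, 2:]
    )
    return float(lap.var())

def select_sharpest(frames: dict, keep: int) -> list:
    """Rank frames (name -> grayscale array) and return the `keep` sharpest names."""
    ranked = sorted(frames, key=lambda n: sharpness_score(frames[n]), reverse=True)
    return ranked[:keep]
```

Ranking all candidate frames and keeping the top-k approximates the tab's "score and retain" step; a threshold on the score works too when the frame budget is not fixed.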
- Create a 360° video using DJI Studio or similar software. (This repository has been tested with the Osmo 360. Recommended settings if correct color values are required under a linear workflow: apply D-Log only, keep all other settings at their default value of 0, and turn RockSteady off.)
- Launch `gs360_360GUI.py`.
- In the `360PerspCut` tab, click Browse Video and select a video file.
- Choose a preset.
- Under Video (direct export), set the FPS.
- Click Run Export to write the images.
- RealityScan: launch the app, import the images, then select all. Set Prior Calibration -> Prior to Fixed or Prior to Approximate, and change Focal Length to the value shown in the `360PerspCut` log (preset default: 12 mm, fisheyelike: 17 mm, full360coverage: 14 mm). Set Prior Lens Distortion -> Prior to Fixed.
- Or Metashape: launch the app, import the images, then go to Tools -> Camera Calibration (Initial tab). Set Type to Precalibrated and update f to the value shown in the `360PerspCut` log (preset default: 533.33333, fisheyelike: 755.55556, full360coverage: 622.22222). Then click the Fixed parameters Select button and either check all parameters or check the distortion parameters except f.
- Bring the RealityScan or Metashape alignment results into a 3DGS tool such as PostShot.
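The two sets of preset numbers are mutually consistent under one conversion: pixel focal length = focal (mm) / sensor width (mm) × image width (px). The sketch below assumes a 1600 px output width and a 36 mm-equivalent sensor width; this is an inference from the values quoted above, not a documented spec, so always prefer the value printed in the `360PerspCut` log:

```python
def focal_mm_to_px(f_mm, image_width_px=1600, sensor_width_mm=36.0):
    """Convert a 35 mm-equivalent focal length to the pixel focal length
    Metashape expects: f_px = f_mm / sensor_width_mm * image_width_px."""
    return f_mm / sensor_width_mm * image_width_px

for f in (12, 17, 14):  # default / fisheyelike / full360coverage presets
    print(f, "mm ->", round(focal_mm_to_px(f), 5), "px")
```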
- Create a 360° video using DJI Studio or similar software.
- Launch `gs360_360GUI.py`.
- In the `Video2Frames` tab, extract 360° frames by setting the FPS and running the export.
- In Metashape, import the frames, then go to Tools -> Camera Calibration and set Camera Type = Spherical.
- Align the cameras.
- Export from Metashape: File -> Export -> Export Cameras (Agisoft XML) and File -> Export -> Point Cloud (PLY).
- Back in `gs360_360GUI.py`, open the `MS360xmlToPerspCams` tab and set Input XML to the exported XML.
- Set Format = transforms, enable PerspCut, set the PerspCut input to the 360° image folder, and set Points PLY to the exported PLY. Run the tool.
- Use the PerspCut output, `transforms.json`, and the rotated PLY in PostShot (or similar 3DGS tools).
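For reference, a minimal NeRF-style `transforms.json` can be assembled as below. The field names follow the common instant-ngp/nerfstudio convention; the exact set the tool emits may differ, so treat this as a sketch of the format rather than the tool's output:

```python
import json
import math

def write_transforms(path, fov_deg, width, height, frames):
    """frames: list of (file_path, 4x4 camera-to-world matrix as nested lists)."""
    # Pixel focal length from the horizontal field of view.
    fl = 0.5 * width / math.tan(0.5 * math.radians(fov_deg))
    data = {
        "fl_x": fl, "fl_y": fl,
        "cx": width / 2.0, "cy": height / 2.0,
        "w": width, "h": height,
        "frames": [
            {"file_path": fp, "transform_matrix": mat} for fp, mat in frames
        ],
    }
    with open(path, "w") as f:
        json.dump(data, f, indent=2)
```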
TODO
- Metashape Pro Multi-Camera System support. (Known bug: if you import `perspective_cams_Multi-Camera-System.xml` with MS360xmlToPerspCams and no tie points appear after alignment, which happens occasionally, re-importing the XML file fixes the issue.)
- RealityCapture or Metashape Standard support with a spherical Metashape `Alignment.xml`.
- Hybrid scan combining mirrorless camera and 360-degree camera.
NOTES
- After converting to the RealityScan format (XMP) using MS360xmlToPerspCams, be careful with the Position Accuracy setting when running alignment in RealityScan: the default value is too large, since it is intended for aerial capture.
This project is released under the MIT License. Copyright (c) 2025 Yu.
- Dual fisheye extraction → undistort, apply D-Log M to sRGB, and correct color differences → convert to 5 perspective images → update the Metashape Multi-Camera System workflow
- CameraFormatConverter: RealityScan (CSV, PLY, XMP) ←→ Metashape (XML, PLY) ←→ COLMAP
- Flesh out a full GUI walkthrough (tab descriptions, launch parameters, screenshot gallery).
- Redesign the GUI using PySide or a similar framework
- Create the workflow using Mermaid notation
- Fisheye lens calibration and color correction, plus extracting perspective images directly from fisheye images