A ROS2-based lightweight LiDAR mapping tool integrating the Velodyne VLP-16 LiDAR and Wheeltec N100 IMU for GPS-denied 3D mapping and localization. Designed for challenging environments like cave surveying, this project demonstrates practical hardware integration, sensor fusion, and implementation of state-of-the-art open-source SLAM algorithms.
This repository documents my journey building a professional-grade perception stack from the ground up, focusing on real-world sensor integration and SLAM algorithm implementation. Through iterative hardware testing and field deployments, I've developed practical expertise in robotic perception systems while exploring applications from vehicle-mounted mapping to aerial surveying.
Key Learning Areas:
- Multi-sensor hardware integration and calibration
- ROS2 ecosystem and sensor drivers
- SLAM algorithm evaluation and tuning
- Field data collection and post-processing pipelines
- System packaging for mobile platforms
Handheld SLAM platform built around a SmallRig cage, which provides easy attachment points for the Pi, HMI, batteries, and VLP-16:
Vehicle-mounted FAST-LIO mapping down a residential street:
SLAM.mp4
- LiDAR: Velodyne VLP-16 (used, sourced via eBay)
- IMU: Wheeltec N100 9-axis IMU
- Compute: Raspberry Pi 5 (8GB) running ROS2 Humble
Through testing multiple IMU units, I learned firsthand why sensor quality is critical for SLAM performance:
| IMU Model | Result | Key Learnings |
|---|---|---|
| Bosch BNO085 | Poor SLAM convergence | Low-cost MEMS sensors suffer from significant drift and noise in dynamic environments |
| Witmotion WT901B | Unstable pose estimation | Calibration software was extremely buggy; the unit could never be properly calibrated |
| Wheeltec N100 | Successful mapping | Worked well, though a higher-grade IMU would further improve performance |
Why IMU Quality Matters for SLAM:
- Motion Prediction: High-rate IMU data (>100Hz) provides inter-frame motion estimates, crucial for scan matching initialization
- Drift Compensation: Superior bias stability reduces integration drift during feature-sparse environments
- Tightly-Coupled Fusion: Algorithms like FAST-LIO rely on accurate IMU pre-integration; noise directly degrades optimization convergence
- Dynamic Motion Handling: Better gyroscope performance maintains pose accuracy during aggressive vehicle maneuvers or walking motion
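The drift-compensation point above can be made concrete with a toy integration. This sketch shows how a constant, uncorrected gyro bias accumulates into heading error during a feature-sparse stretch; the bias figures are illustrative assumptions, not measured specs of the IMUs in the table.

```python
# Toy demo: heading drift from integrating a constant gyro bias with no
# scan-matching corrections. Bias values are hypothetical examples only.
def heading_drift_deg(bias_deg_s: float, duration_s: float) -> float:
    """Heading error (degrees) from naive integration of a constant bias."""
    dt = 0.01  # 100 Hz IMU update rate, matching the >100 Hz note above
    n = int(duration_s / dt)
    heading = 0.0
    for _ in range(n):
        heading += bias_deg_s * dt  # rectangular integration, no corrections
    return heading

# After 5 minutes with no corrections:
print(heading_drift_deg(0.05, 300))   # ~0.05 deg/s bias (consumer MEMS) -> ~15 deg
print(heading_drift_deg(0.005, 300))  # 10x better bias stability -> ~1.5 deg
```

The linear growth is the key point: an order-of-magnitude improvement in bias stability buys an order of magnitude less drift between scan-matching corrections, which is exactly where tightly-coupled filters like FAST-LIO benefit.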
Advantages:
- Stable platform for algorithm validation
- Higher speeds test scan matching robustness
- Easy to collect large-scale mapping datasets
Lessons Learned:
- Vibration isolation critical for IMU mounting
- Vehicle velocity aids loop closure detection
- Suction mount provides surprising rigidity at <30mph
Field Setup:
Challenges Encountered:
- Walking-induced IMU oscillations cause scan distortion
- Handheld variability exposes IMU quality limitations
- Power management more complex than anticipated
The initial development roadmap follows a phased approach: first establishing aerial mapping capabilities with the VLP-16 stack, then transitioning to interior cave surveying with a more compact sensor package.
Phase 1: Heavy-Lift Platform
- X650 hexacopter carrying VLP-16 + N100 IMU + Pi5
- General aerial surveying
- Proof-of-concept for aerial LiDAR workflows
- Data fusion: LiDAR point clouds + RGB photogrammetry (RealityScan) for textured, high-fidelity 3D models
Phase 2: Compact Cave Interior Platform (Future)
To enable drone-based mapping inside caves, I plan to downsize to a Livox AVIA LiDAR, which features:
- Integrated IMU (eliminating separate N100)
- Significantly reduced weight and form factor
- Non-repetitive scanning pattern (ideal for SLAM)
- Mountable on 5-7" FPV quadcopter frames
Hybrid Cave Surveying Strategy: The compact FPV platform would have limited flight time (~5-10 minutes), making it ideal for mapping large cave chambers and rooms, while handheld recording handles smaller passages and crawlways where drone operation isn't feasible. This two-pronged approach maximizes coverage while working within the constraints of each platform.
Target Platform: X650 hexacopter
Payload: VLP-16 + Pi5 + battery
- Offline Recording: Raspberry Pi 5 logs `/velodyne_points` and `/imu` topics to SSD
- Post-Processing Pipeline: FAST-LIO batch processing on workstation for high-fidelity maps
- Multi-Modal Fusion: RGB camera integration for textured point clouds
- Photogrammetry Complement: Already experimenting with RealityScan for hybrid surveying workflows
- Cave Survey Workflow: Aerial entrance + large room mapping -> handheld interior mapping -> registration and fusion of datasets
Rationale: Aerial platforms enable comprehensive mapping of cave entrances, surrounding terrain, and large rooms/passages, while handheld systems handle interior passages. While commercial solutions like Inertial Labs RESEPI exist, building from scratch maximizes learning in sensor fusion, embedded systems, and autonomous flight integration—skills directly applicable to challenging mapping environments.
FAST-LIO:
- Best overall performance with N100 IMU
- Tightly-coupled iterated Kalman filter approach
- Excellent in feature-sparse environments
Geometry-only LiDAR odometry:
- Lightweight, geometry-only approach
- Good for initial testing without IMU dependency
- Limited performance in long corridors
Cartographer:
- Google's proven solution for indoor mapping
- Heavier computational requirements
- Strong loop closure but requires tuning for outdoor use
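The geometry-only approach noted above boils down to repeatedly finding the rigid transform that best aligns one scan to another. A minimal sketch of that inner alignment step, using the standard SVD-based (Kabsch) solution and assuming known point correspondences (real ICP pipelines must also find the correspondences):

```python
import numpy as np

def align_scans(source: np.ndarray, target: np.ndarray):
    """Best-fit rigid transform (R, t) mapping source points onto target
    points with known correspondences -- the inner step of ICP-style
    geometry-only scan matching."""
    src_c = source.mean(axis=0)
    tgt_c = target.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (source - src_c).T @ (target - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Toy check: rotate a synthetic scan 10 degrees about z, shift it, recover the motion.
theta = np.radians(10.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 0.0])
scan = np.random.default_rng(0).uniform(-5, 5, (200, 3))
R_est, t_est = align_scans(scan, scan @ R_true.T + t_true)
```

With perfect correspondences the recovery is exact; the corridor failure mode above comes from real scans where geometry alone can't constrain motion along the corridor axis.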
Current Workflow:
- Mount stack on platform (vehicle/drone)
- Launch ROS2 drivers and record sensor topics
- Transfer bag files to workstation
- Run SLAM algorithm offline for optimized maps
- Export PCD/PLY for analysis and visualization
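For the final export step, a minimal ASCII PCD writer is enough to get SLAM output into CloudCompare or PCL tools. This is a sketch assuming xyz-only float points (real exports would typically also carry intensity and use binary encoding for size):

```python
import numpy as np

def export_pcd(points: np.ndarray, path: str) -> None:
    """Write an Nx3 array of xyz points as an ASCII PCD v0.7 file."""
    header = "\n".join([
        "# .PCD v0.7 - Point Cloud Data file format",
        "VERSION 0.7",
        "FIELDS x y z",
        "SIZE 4 4 4",
        "TYPE F F F",
        "COUNT 1 1 1",
        f"WIDTH {len(points)}",
        "HEIGHT 1",                  # 1 => unorganized cloud
        "VIEWPOINT 0 0 0 1 0 0 0",
        f"POINTS {len(points)}",
        "DATA ascii",
    ])
    with open(path, "w") as f:
        f.write(header + "\n")
        for x, y, z in points:
            f.write(f"{x} {y} {z}\n")
```

Usage is just `export_pcd(cloud_xyz, "map.pcd")` on the accumulated map points after the offline SLAM run.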
This project serves multiple objectives:
- Cave Mapping Application: Create a practical tool for 3D surveying of caves and underground environments where GPS is unavailable
- Lightweight & Portable: Design a system that can be deployed handheld, on drones, or mounted on ground vehicles for versatility
- Skill Development: Hands-on experience with perception systems, sensor fusion, and robotics middleware
- Algorithm Understanding: Practical evaluation of SLAM approaches in GPS-denied, feature-limited environments
- System Integration: Real-world hardware challenges (power, mounting, calibration, environmental factors)
- Portfolio Building: Demonstrate technical capabilities in robotic perception systems
- Open Learning: Document successes and failures for community benefit
- Complete heavy-lift drone build for aerial surveying
- Develop ruggedized handheld mapping configuration
- Test SLAM algorithms in GPS-denied, low-feature environments
- Integrate RGB camera for textured mapping in low-light conditions
- Develop automated data processing pipeline
- Create survey stitching workflow: aerial mapping → handheld mapping → combined dataset
- Develop lightweight drone with Livox AVIA for surveying
- Deploy system in actual cave environments for field validation
- Integrate real-time SLAM for autonomous drone navigation and GPS-denied waypoint missions
- Investigate visual-inertial odometry for redundancy in feature-sparse areas
- Compare LiDAR vs. photogrammetry reconstruction quality in caves
- Explore integration with flight controllers for autonomous obstacle avoidance
- Develop waterproofing solutions for wet environments
- Hardware matters more than expected: Algorithm performance ceiling is set by sensor quality
- Iterative development is key: Cheaper components taught valuable lessons before investing in quality hardware
- Real-world != simulation: Vibration, power management, and environmental factors dominate practical deployments
- Data collection is hard: Robust recording systems and backup power are non-negotiable
Note: This project represents ongoing learning and development in robotic perception systems. Feedback and collaboration opportunities are always welcome.
