Localization and Mapping in Autonomous Driving

Knowledge Tree for SDC Localization and Mapping

Autonomous driving is a brand-new area for me to explore. It is a technology that combines many of today's cutting-edge technologies, from 5G and the cloud to CV and AI, and I think it will be one of the most ground-breaking products of our time. It can be viewed as a very primitive step toward intelligent machines; looking from human history toward the future, maybe the ultimate task of our generation is to deliver such technology. So it seemed reasonable and necessary for me to spend a weekend understanding how self-driving cars work. Here is what I learned.

Self-driving Basics

  1. Levels of SDC: I used to misunderstand this concept. I thought the level was determined by the technology used, but it is actually defined by how much human involvement the driving process requires. The industry is currently targeting L4 SDCs.
  2. SDCs are also classified into 3 different use cases:
    1. Passenger car: safety first, large-scale operation, traffic rules, and complex road scenarios.
    2. Delivery car: fewer safety and comfort concerns; lower-accuracy LM is acceptable.
    3. Cleaning car: aims to cover the ground of a whole area.
  3. Besides the car itself, an SDC basically adds 5 new technology areas:
    1. Perception: perceive dynamic objects on the road in real time, like pedestrians and other cars.
    2. Localization and Mapping: sensor fusion, SLAM, map building, HD maps.
    3. Planning and Control: route planning, control and navigation, safety and comfort.
    4. Simulation: a physically based simulation environment; a test environment for all components of the SDC.
    5. In-car user experience: the next generation of in-car experience, covering safety monitoring and entertainment. This is very important: once we achieve L5, we need to find something for humans to do in the car.
  4. ADAS: Advanced Driver Assistance Systems, the traditional smart-driving technology. ADASIS is an interface spec for ADAS. SDC technology is an add-on to ADAS.

Localization and Mapping (LM)

  1. LM is a broad concept in robotics; it applies to any kind of robotic system. Here are some basic differences between AR-glasses SLAM and SDC SLAM:
    1. AR: the map is relatively small and unknown; uses VO/VIO; the frontend renders based on the real-time head pose while the backend optimizes the map; FPS is the top requirement.
    2. SDC LM: the map is large-scale and comes as a prior cloud HD map; life-long SLAM; uses LiDAR; safety is the top requirement; the frontend localizes within the HD map. There is usually no need to optimize/update the map online.
  2. SDC LM sensors:
    1. GNSS and RTK: GPS is not always guaranteed to be stable.
    2. IMU, wheel speed, and wheel rotation: dead reckoning, but with accumulated drift error (see the sketch after this list).
    3. Camera: vision for detecting semantic loop closures and road segmentation, using deep learning for semantic segmentation.
    4. LiDAR: 360-degree point clouds; works poorly in fog or rain.
    5. Ultrasonic and radar: work well in fog or rain.
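
As a side note on the dead-reckoning item above: the idea is just integrating wheel speed and IMU yaw rate over time, and the integration error is what accumulates as drift. A minimal Python sketch (the sensor values and time step below are made up for illustration):

```python
import math

def dead_reckon(pose, wheel_speed, yaw_rate, dt):
    """One crude Euler step of dead reckoning.

    pose: (x [m], y [m], heading [rad]); wheel_speed in m/s (wheel odometry),
    yaw_rate in rad/s (IMU gyro). Sensor noise and integration error
    accumulate, which is why DR drifts without GNSS/LiDAR corrections.
    """
    x, y, theta = pose
    theta += yaw_rate * dt                 # heading from the gyro
    x += wheel_speed * math.cos(theta) * dt
    y += wheel_speed * math.sin(theta) * dt
    return (x, y, theta)

# Toy usage: 10 seconds at 10 m/s with a slight constant turn.
pose = (0.0, 0.0, 0.0)
for _ in range(100):
    pose = dead_reckon(pose, wheel_speed=10.0, yaw_rate=0.01, dt=0.1)
print(pose)
```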

Mapping

  1. HD Map: High-Definition Map, with resolution < 10 cm. It contains different levels of detail:
    1. Road Topology Map: used for basic navigation.
    2. Lane Map: a high-accuracy geometric model of the lanes; used for path planning.
    3. Landmark Map: used for storing signs, traffic lights, and other road info.
    4. Localization Feature Map: LiDAR point clouds or features for localization, like the corners of lane dividers.
    5. Dynamic Map: stores info like weather and construction sites.
  2. Maps need annotation: signs on the road, bumps, and any other road information that cannot be obtained directly from the point cloud. Some ways of labeling:
    1. Manual labelling
    2. Satellite yaw-scan mapping
    3. AI annotation
  3. HD map annotation content:
    1. Road boundary estimation
    2. Lane detection
    3. Crosswalk estimation
    4. Lane/road topology
  4. HD maps are stored following the OpenDRIVE spec.
  5. Map compression: the map can be very big and cannot be downloaded on the fly as-is, so we compress away redundant info:
    1. OctoMap: divide the point-cloud space into a tree-structured grid, like a BVH.
    2. Point cloud compression (voxel filtering): within a single grid cell, compress all points into one point, shrinking the feature map (see the voxel sketch after this list).
    3. Occupancy grid map: each cell is occupied, free, or unknown, tracked as the probability that the cell is occupied (see the log-odds sketch after this list).
  6. Download on the fly or at the user's home:
    1. Map engine: takes care of map assembly.
    2. Download on the fly over 5G.
    3. If downloading at home, only download the HD map for the planned route.
  7. 3 ways to build the map:
    1. Use professional LiDAR-based collection cars; the data is more trustworthy.
    2. Use any car that consumes the HD map; camera-based.
    3. A mix of both.
  8. Map alignment: sometimes, due to drift, two slices of the map have a certain offset. We might need to add more constraints when optimizing the map to remove the offset, or just have a human align the slices manually.
  9. Map building: offline, in the cloud, with heavy computing.
    1. Offline SfM / pose-graph optimization: optimize both the car poses and the features (see the pose-graph sketch after this list).
      1. Partial/global BA updates.
    2. If the feature-matching result is bad, fall back to consistent data only.
  10. The raw LiDAR point cloud needs pruning, typically with DL-based methods:
    1. Rain/fog removal.
    2. Dynamic-object removal: cars, people.
    3. Point clouds of certain objects, like road signs, need to be rectified.
  11. The map always needs verification through simulation before publishing.
  12. Testing:
    1. Fleet tests on actual roads.
    2. Structured tests on a mock road.
    3. Simulation:
      1. Driving-log playback.
      2. Designed test cases.
  13. When starting service in a new city, always collect and test the map for 30 days first.
  14. Map update strategy:
    1. Use data that is consistent across different cars in the fleet and across different times (loop closure), so that we get an unsupervised map-update mechanism.
  15. When downloading slices on the fly, we might need some overlapping area between slices, so that the car can close the loop across the gap:
    1. We should know the transform between adjacent slices.
    2. Each slice has its own local coordinate system.
  16. The HD map is segmented into slices, and slices are downloaded on request.
  17. What about indoor maps for parking structures?
  18. LiDAR feature points have no descriptors.
  19. We can use a filter to remove isolated points in the raw map.
  20. HD map updates need to be delivered OTA.
  21. Other constraints when BA-optimizing the pose graph:
    1. Height: the car should always be on the ground.
    2. Use 3D geometric constraints to make things obey the rules.
    3. Loop closure.
    4. Multilevel loop closure: seeing the same feature from a far distance.
    5. Global constraints.
    6. Manually labeled constraints.
  22. Should the map only store keyframe data?
  23. When collecting data, the scanner car needs to make sure loop closures happen.
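
Here are a few sketches for the items above. First, the voxel-filtering idea from the map-compression item: bucket points by voxel index and replace each bucket with its centroid. A minimal NumPy sketch (the voxel size is an arbitrary illustrative value):

```python
import numpy as np

def voxel_downsample(points, voxel_size=0.2):
    """Collapse all points that fall into the same voxel into their centroid.

    points: (N, 3) array of XYZ coordinates.
    Returns an (M, 3) array with one representative point per occupied voxel.
    """
    # Integer voxel index for every point.
    idx = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel id; `inverse` maps each point to its voxel.
    _, inverse, counts = np.unique(idx, axis=0, return_inverse=True,
                                   return_counts=True)
    inverse = inverse.ravel()
    # Sum the coordinates per voxel, then divide by the count -> centroid.
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)
    return sums / counts[:, None]

cloud = np.random.rand(10000, 3) * 50.0   # fake 50 m x 50 m x 50 m cloud
print(voxel_downsample(cloud, voxel_size=1.0).shape)
```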
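
Second, the occupancy grid map: the per-cell occupancy probability is usually maintained as log-odds, so that each sensor update is a simple addition. A minimal sketch (the increment and clamp values are illustrative, not tuned):

```python
import numpy as np

# Log-odds increments for a "hit" (a beam endpoint lands in the cell) and a
# "miss" (a beam passes through the cell). Illustrative values.
L_HIT, L_MISS = 0.85, -0.4
L_MIN, L_MAX = -4.0, 4.0          # clamp so cells can recover later

grid = np.zeros((100, 100))       # log-odds 0 == p = 0.5 == "unknown"

def update_cell(grid, i, j, hit):
    """Bayesian log-odds update of one cell: just add and clamp."""
    grid[i, j] = np.clip(grid[i, j] + (L_HIT if hit else L_MISS),
                         L_MIN, L_MAX)

def probability(grid):
    """Convert log-odds back to occupancy probability (a sigmoid)."""
    return 1.0 - 1.0 / (1.0 + np.exp(grid))

update_cell(grid, 10, 12, hit=True)
print(probability(grid)[10, 12])  # > 0.5, i.e. leaning "occupied"
```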
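
Third, pose-graph optimization. A toy 2D version with SciPy: poses are (x, y, theta), odometry-style edges chain consecutive poses, and a loop-closure edge ties the last pose back to the first. Real map building also refines the features (full BA); this sketch only optimizes poses, and every measurement in it is fabricated for illustration:

```python
import numpy as np
from scipy.optimize import least_squares

# Edge (i, j, dx, dy, dtheta): pose j as measured from pose i's frame.
# Four edges trace a unit square; the last one is the loop closure.
edges = [
    (0, 1, 1.0, 0.0, np.pi / 2),
    (1, 2, 1.0, 0.0, np.pi / 2),
    (2, 3, 1.0, 0.0, np.pi / 2),
    (3, 0, 1.0, 0.0, np.pi / 2),   # loop closure back to pose 0
]
n_poses = 4

def residuals(x):
    poses = x.reshape(n_poses, 3)
    res = list(poses[0])                  # anchor pose 0 at the origin
    for i, j, dx, dy, dth in edges:
        xi, yi, thi = poses[i]
        xj, yj, thj = poses[j]
        # Express pose j in pose i's frame and compare to the measurement.
        c, s = np.cos(thi), np.sin(thi)
        rel_x = c * (xj - xi) + s * (yj - yi)
        rel_y = -s * (xj - xi) + c * (yj - yi)
        ang = thj - thi - dth             # wrap the angle error to (-pi, pi]
        res += [rel_x - dx, rel_y - dy, np.arctan2(np.sin(ang), np.cos(ang))]
    return np.array(res)

# Start from a noisy guess near the true square and optimize.
true = np.array([[0, 0, 0], [1, 0, np.pi/2], [1, 1, np.pi], [0, 1, -np.pi/2]])
x0 = (true + np.random.default_rng(0).normal(0, 0.1, true.shape)).ravel()
sol = least_squares(residuals, x0)
print(sol.x.reshape(n_poses, 3).round(3))
```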

Localization

  1. Use the real-time LiDAR data to match features in the ground-truth HD map, in order to infer the car's pose:
    1. Lateral localization: using the lane dividers.
    2. Longitudinal localization: using the road signs.
  2. Map building mostly does pose-graph optimization over the features; localization uses ICP and particle filters to estimate the car's pose.
  3. ICP, Iterative Closest Point: 3D-to-3D feature matching.
    1. The good thing about ICP is that, given fixed correspondences, the SVD solution (a closed-form linear solution) gives the global minimum of the loss, so there is no real need to optimize over the pose graph, unless it is for map building, which also needs to refine the feature points and other states (see the sketch after this list).
    2. For the SVD solution of ICP, check slambook p. 173.
  4. Other methods for localization:
    1. NDT: Normal Distributions Transform.
    2. Particle filter (see the sketch after this list).
    3. Grid search?
  5. LiDAR odometry: fuse LiDAR with RTK and IMU.
  6. GPS data might be bad at runtime; we can remove the GPS outliers.
  7. During localization, if the LiDAR is degraded by rain or fog, or blocked by objects on the side, DR (dead reckoning) takes more weight in the output prediction.
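
A sketch of the SVD solution mentioned in the ICP item (the Kabsch/Umeyama method): given known point correspondences, it returns, in closed form, the rotation and translation that globally minimize the squared alignment error. Full ICP alternates this step with re-estimating correspondences by nearest-neighbor search, which is the part that can get stuck in local minima:

```python
import numpy as np

def align_svd(src, dst):
    """Closed-form rigid alignment: find R, t minimizing
    sum_i || R @ src[i] + t - dst[i] ||^2, with known correspondences.

    src, dst: (N, 3) arrays of matched points.
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Toy check: recover a known rotation about z plus a translation.
rng = np.random.default_rng(0)
src = rng.random((100, 3))
a = 0.3
R_true = np.array([[np.cos(a), -np.sin(a), 0],
                   [np.sin(a),  np.cos(a), 0],
                   [0,          0,         1]])
t_true = np.array([1.0, 2.0, 0.5])
dst = src @ R_true.T + t_true
R, t = align_svd(src, dst)
print(np.allclose(R, R_true), np.allclose(t, t_true))
```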
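
And a toy particle filter for map-based localization, reduced to a 1D corridor with landmarks at known map positions; the measurement is the vector of noisy distances to each landmark. Every number here is illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

landmarks = np.array([10.0, 35.0, 70.0])   # known map positions
true_pos, speed, sigma = 5.0, 1.0, 0.5     # car state and range noise

particles = rng.uniform(0.0, 100.0, size=1000)   # unknown start: spread out

for _ in range(50):
    # Predict: move the true car and every particle, with process noise.
    true_pos += speed
    particles += speed + rng.normal(0.0, 0.2, size=particles.size)
    # Update: weight each particle by the likelihood of the measurement.
    z = np.abs(landmarks - true_pos) + rng.normal(0.0, sigma, size=3)
    expected = np.abs(landmarks[None, :] - particles[:, None])   # (N, 3)
    weights = np.exp(-0.5 * np.sum(((z - expected) / sigma) ** 2, axis=1))
    weights = (weights + 1e-300) / (weights + 1e-300).sum()
    # Resample: draw a fresh particle set proportional to the weights.
    particles = rng.choice(particles, size=particles.size, p=weights)

print(f"true position: {true_pos:.1f}  estimate: {particles.mean():.1f}")
```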

Some Thoughts on SDC

  1. Since we still have a long road ahead before L5, I can see some controversial concepts in SDC. They are reasonable for the current stage, but ultimately they may block the evolution of truly self-driving cars.
    1. I see cars adding in-car cameras to monitor the driver's attention on the road. But if this is a self-driving car, why does the driver need to pay attention to the road?
    2. Also, most companies rely on the HD map to guide the car. If we want the car to drive itself like a human driver, it should learn the road by itself.
    3. Also, when the user arrives in an unmapped area, the HD map will not work; someone needs to drive a scanner car there and scan it first. This makes SDC difficult to extend to a larger scale.
    4. Assume we have had L5 driving technology for decades: in some special scenarios the user still needs to take over, which raises the question of whether users who haven't driven for a long time will still be capable of driving.
    5. The HD map requires a lot of effort to label, which makes it feel like humans are working for the machine instead.

Resources

  1. https://github.com/JakobEngel/dso
  2. https://github.com/RobustFieldAutonomyLab/LeGO-LOAM
  3. https://www.bilibili.com/video/BV1i54y1v7jt
  4. https://github.com/qiaoxu123/Self-Driving-Cars
  5. https://www.youtube.com/watch?v=Q0nGo2-y0xY&ab_channel=LexFridman