[Hello World!] – Localisation (NDT)


In this new series of fortnightly technical blogs, StreetDrone is going to lift the LiD(-AR) on one aspect of autonomous driving software development. Our “Hello World!” series is written by software engineer Efimia Panagiotaki, a tech-trotter with engineering experience ranging from the National Technical University of Athens to ETH Zurich to Williams Formula 1. Efimia is leading the development of StreetDrone’s autonomous driving software by leveraging Autoware, a ROS-based, open-source self-driving stack.

You know the feeling of waking up after a long, sweet afternoon nap... you don't know where you are, and it takes a moment to remember who you are. You are lost and completely disoriented. But clarity slowly gathers as you start recognising your surroundings, and gradually you find your place within a familiar environment.

Just like that! As soon as you figure out where you are, all other functional capability follows.

In self-driving car terms, this familiar process is called localisation, and it is the most essential step of all.

Before a vehicle makes any decisions, it needs to figure out where it is in the world. The ‘world’ in our case is a pre-recorded map of the environment. Self-driving vehicles simplify the localisation problem by using highly detailed map data collected in advance.

It’s worth mentioning that there are great localisation techniques in which the vehicle localises and navigates successfully without a map being available in advance. This is called SLAM (Simultaneous Localisation and Mapping) and refers to constructing or updating a map of an unknown environment while simultaneously navigating and keeping track of the vehicle within it.

Overview

For the vehicle to accurately estimate its position within an environment, we use a point cloud map-based localisation method, which we evaluate in terms of accuracy, resilience to error and noise, coverage and cost. The point cloud map is extracted from a LiDAR (light detection and ranging) sensor placed at the top of our vehicles. Our localisation approach uses a point cloud matching method, the Normal Distributions Transform (NDT), in which a LiDAR scan from the vehicle is compared to the point cloud map in order to calculate the vehicle’s position within that map.

Mapping

LiDARs are 3D laser scanning devices that measure the distance to a surface by illuminating it with laser light and calculating the differences in return times and wavelengths of the light pulses that bounce back to the receiver. In simple terms, the sensor throws rapid pulses of laser light onto a surface and measures how long the light takes to return in order to work out relative distances and positions.
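
As a back-of-the-envelope illustration of that time-of-flight principle, the distance to a surface follows directly from the round-trip time of a pulse. The numbers below are assumed for illustration, not taken from our sensors:

```cpp
#include <iostream>

int main() {
  // Time-of-flight: distance = (speed of light * round-trip time) / 2.
  // The factor of 2 accounts for the pulse travelling to the surface and back.
  const double speed_of_light = 299792458.0;  // m/s
  const double round_trip_time = 200e-9;      // 200 ns, an assumed example return time

  const double distance = speed_of_light * round_trip_time / 2.0;
  std::cout << "Surface is roughly " << distance << " m away" << std::endl;  // ~30 m
  return 0;
}
```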

By repeating this process, we extract a collection of data points in 3D space, which form the point cloud of the area the vehicle is in. By processing these points we extract the final map of the area.


Localisation

As previously mentioned, the NDT matching localisation technique refers to matching the point clouds from the vehicle’s LiDAR scans against the pre-recorded map.

Naturally, though, the scans cannot be identical on each run of the vehicle. Self-driving cars operate in dynamic, constantly changing environments under non-ideal, imperfect conditions. Even though dynamic objects are not taken into account when mapping or localising, measurement errors in the point clouds cause a slight misalignment between the points. In real-world scenarios, the environment has probably changed between the time we recorded the map and the time the vehicle starts localising.

A great example in an urban environment is parked cars: they are perceived as static objects and consequently become part of the map, but by the time we start a new scan, those cars may have moved.


Digging into the theory of NDT a bit more, this technique does not look for exact point clouds to match, as identical scans cannot be achieved. NDT[1] subdivides the plane, similarly to an occupancy grid, but where the occupancy grid represents the probability of a cell being occupied, NDT represents the probability of measuring a sample at each position within the cell. The result is a description of the plane in the form of a probability density.
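
As a minimal sketch of that idea (assuming a cell’s mean and covariance have already been estimated, as described below), the score of a single scan point against one cell’s distribution can be evaluated like this; it is an illustration of the density in [1], not our implementation:

```cpp
#include <Eigen/Dense>
#include <cmath>

// Unnormalised score of a scan point x against the normal distribution stored
// in one NDT cell: exp(-0.5 * (x - mean)^T * covariance^-1 * (x - mean)).
// NDT sums this score over all scan points to rate a candidate vehicle pose.
double cellScore(const Eigen::Vector3d& x,
                 const Eigen::Vector3d& mean,
                 const Eigen::Matrix3d& covariance) {
  const Eigen::Vector3d d = x - mean;
  return std::exp(-0.5 * d.dot(covariance.inverse() * d));
}
```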

In our case, the point cloud is divided into a uniform 3D grid, and each cell (voxel) keeps the mean and covariance of the sub-point cloud assigned to it, representing a normal distribution. The evaluation metrics[2] of NDT are iteration, fitness score and transformation probability. Iteration refers to the number of cycles needed to achieve matching convergence. Fitness score represents the average sum of distances between closest points. Transformation probability is the per-point score, obtained by dividing the fitness score by the total number of points in the input point cloud.
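
A minimal sketch of how such a grid of per-voxel means and covariances could be built with PCL point types and Eigen is shown below. This mirrors what NDT implementations do internally; it is not StreetDrone’s production code, and the key packing is just an illustrative choice:

```cpp
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <Eigen/Dense>
#include <cmath>
#include <cstdint>
#include <unordered_map>
#include <vector>

struct VoxelStats {
  std::vector<Eigen::Vector3d> points;
  Eigen::Vector3d mean;
  Eigen::Matrix3d covariance;
};

// Pack integer voxel indices into one key (assumes indices fit in 21 bits each).
static std::int64_t voxelKey(int ix, int iy, int iz) {
  return (static_cast<std::int64_t>(ix & 0x1FFFFF) << 42) |
         (static_cast<std::int64_t>(iy & 0x1FFFFF) << 21) |
         static_cast<std::int64_t>(iz & 0x1FFFFF);
}

std::unordered_map<std::int64_t, VoxelStats>
buildNdtGrid(const pcl::PointCloud<pcl::PointXYZ>& cloud, double resolution) {
  std::unordered_map<std::int64_t, VoxelStats> grid;

  // Assign every point to the voxel that contains it.
  for (const auto& p : cloud) {
    const int ix = static_cast<int>(std::floor(p.x / resolution));
    const int iy = static_cast<int>(std::floor(p.y / resolution));
    const int iz = static_cast<int>(std::floor(p.z / resolution));
    grid[voxelKey(ix, iy, iz)].points.emplace_back(p.x, p.y, p.z);
  }

  // Compute the mean and covariance of the sub-point cloud in each voxel.
  for (auto& [key, voxel] : grid) {
    Eigen::Vector3d sum = Eigen::Vector3d::Zero();
    for (const auto& q : voxel.points) sum += q;
    voxel.mean = sum / static_cast<double>(voxel.points.size());

    Eigen::Matrix3d cov = Eigen::Matrix3d::Zero();
    for (const auto& q : voxel.points) {
      const Eigen::Vector3d d = q - voxel.mean;
      cov += d * d.transpose();
    }
    // Sample covariance; voxels with very few points are usually discarded in practice.
    voxel.covariance = cov / std::max<double>(1.0, voxel.points.size() - 1.0);
  }
  return grid;
}
```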

We have performed a series of static and dynamic tests in real-world dynamic environments in order to evaluate the performance of the algorithm. The results gave us less than 7.5 cm of positional error, demonstrating that this technique is accurate, robust and very resilient to noise and errors in the input scans. The factors worth mentioning that affect the precision and performance of NDT are the downsampling method applied to the input point cloud (voxel grid filter[3]), the resolution of the point clouds and the initialisation of the matching. Initialisation refers to defining the initial position and pose of the vehicle in the map.
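
For readers who want to experiment, here is a hedged sketch of a single NDT matching step using the NDT implementation available in the Point Cloud Library (PCL). The file names, leaf size, NDT parameters and initial pose guess are illustrative assumptions, not our production settings:

```cpp
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/filters/voxel_grid.h>
#include <pcl/registration/ndt.h>
#include <Eigen/Geometry>
#include <iostream>

int main() {
  using Cloud = pcl::PointCloud<pcl::PointXYZ>;

  Cloud::Ptr map(new Cloud), scan(new Cloud), filtered(new Cloud), aligned(new Cloud);
  pcl::io::loadPCDFile("map.pcd", *map);    // pre-recorded point cloud map (assumed file)
  pcl::io::loadPCDFile("scan.pcd", *scan);  // live LiDAR scan (assumed file)

  // Downsample the input scan with a voxel grid filter; the leaf size affects
  // both precision and runtime, as noted above.
  pcl::VoxelGrid<pcl::PointXYZ> voxel_filter;
  voxel_filter.setLeafSize(0.5f, 0.5f, 0.5f);
  voxel_filter.setInputCloud(scan);
  voxel_filter.filter(*filtered);

  // Configure NDT: grid resolution, optimisation step size and stopping criteria.
  pcl::NormalDistributionsTransform<pcl::PointXYZ, pcl::PointXYZ> ndt;
  ndt.setResolution(1.0);
  ndt.setStepSize(0.1);
  ndt.setTransformationEpsilon(0.01);
  ndt.setMaximumIterations(35);
  ndt.setInputSource(filtered);
  ndt.setInputTarget(map);

  // Initialisation: a rough initial position and pose of the vehicle in the map
  // frame (an assumed guess here; in practice it comes from the previous pose or GNSS).
  Eigen::AngleAxisf init_yaw(0.5f, Eigen::Vector3f::UnitZ());
  Eigen::Translation3f init_translation(10.0f, 5.0f, 0.0f);
  Eigen::Matrix4f init_guess = (init_translation * init_yaw).matrix();

  ndt.align(*aligned, init_guess);

  // Evaluation metrics discussed above: iterations to convergence, fitness score,
  // and the per-point score (fitness score divided by the number of input points).
  std::cout << "converged: " << ndt.hasConverged() << "\n"
            << "iterations: " << ndt.getFinalNumIteration() << "\n"
            << "fitness score: " << ndt.getFitnessScore() << "\n"
            << "score per point: "
            << ndt.getFitnessScore() / static_cast<double>(filtered->size()) << "\n"
            << "pose:\n" << ndt.getFinalTransformation() << std::endl;
  return 0;
}
```

In a real run, the initial guess would be seeded from the previous pose estimate or a GNSS fix, and the resolution and leaf size would be tuned against the accuracy and runtime requirements discussed above.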

[1] Biber, Peter & Straßer, Wolfgang. (2003). The Normal Distributions Transform: A New Approach to Laser Scan Matching. IEEE International Conference on Intelligent Robots and Systems. 3. 2743 – 2748 vol.3. 10.1109/IROS.2003.1249285

[2] Carballo, Alexander & Monrroy, Abraham & Wong, David & Narksri, Patiphon & Lambert, Jacob & Kitsukawa, Yuki & Takeuchi, Eijiro & Kato, Shinpei & Takeda, Kazuya. (2020). Characterization of Multiple 3D LiDARs for localization and Mapping using Normal Distributions Transform.

[3] Han, Xian-Feng & Jin, Jesse & Wang, Ming-Jie & Jiang, Wei & Gao, Lei & Xiao, Liping. (2017). A review of algorithms for filtering the 3D point cloud. Signal Processing: Image Communication. 57. 10.1016/j.image.2017.05.009.

To get in touch with StreetDrone, send an email to info@streetdrone.com and we’ll get back to you ASAP.
