Extraction of Corresponding Point Cloud Features Based on Least Squares Deviation Analysis

Authors

  • Zhenhao Xing
  • Xiayu Zhao
  • Hongmei Qu

DOI:

https://doi.org/10.54691/rhdwg968

Keywords:

Multi-line LiDAR, frame-to-frame registration, least squares method, deviation analysis, feature matching.

Abstract

Accurate extraction of corresponding feature points between adjacent frames is crucial for improving pose estimation accuracy in multi-line LiDAR frame-to-frame registration. This paper proposes a least squares deviation analysis method to optimize the matching of pole-like object features. First, the point cloud is preprocessed, including denoising and grid projection, to improve the stability of the extracted feature points. Candidate feature points are then selected using point cloud clustering and bounding-box methods. The least squares method is applied to analyze the deviation of corresponding features between adjacent frames, and points with large matching errors are eliminated. Finally, the optimized feature matches improve the accuracy of frame-to-frame registration. Experimental results demonstrate that the proposed method efficiently and accurately extracts corresponding pole-like object features between adjacent frames and improves registration stability.
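
The abstract does not spell out the concrete form of the least squares deviation analysis, so the following is only a minimal sketch of one plausible reading: matched pole-like feature points (e.g., cluster centroids) from two adjacent frames are aligned with a least-squares rigid transform, per-point residuals are computed, and correspondences whose deviation exceeds a multiple of the RMS residual are discarded. The array shapes, the function names fit_rigid_transform and filter_by_deviation, and the threshold factor k are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fit_rigid_transform(src, dst):
    """Least-squares rigid transform (Kabsch) mapping src -> dst.
    src, dst: (N, 3) arrays of matched candidate feature points."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                                   # proper rotation (det = +1)
    t = c_dst - R @ c_src
    return R, t

def filter_by_deviation(src, dst, k=2.0):
    """Keep correspondences whose residual is within k times the RMS deviation."""
    R, t = fit_rigid_transform(src, dst)
    residuals = np.linalg.norm((src @ R.T + t) - dst, axis=1)
    rms = np.sqrt(np.mean(residuals ** 2))
    return residuals <= k * rms, residuals

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    prev = rng.uniform(-20, 20, size=(12, 3))            # pole centroids, frame k (synthetic)
    angle = np.deg2rad(3.0)
    R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                       [np.sin(angle),  np.cos(angle), 0.0],
                       [0.0, 0.0, 1.0]])
    curr = prev @ R_true.T + np.array([0.5, 0.2, 0.0])   # frame k+1 after a small motion
    curr += rng.normal(scale=0.02, size=curr.shape)      # measurement noise
    curr[3] += np.array([1.5, -1.0, 0.0])                # one deliberately bad match
    keep, res = filter_by_deviation(prev, curr, k=2.0)
    print("rejected correspondence indices:", np.where(~keep)[0])
```

Running the script prints the index of the deliberately perturbed correspondence; in a pipeline like the one described, the surviving matches would then feed the frame-to-frame registration step.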

Published

19-03-2025