Adaptive Robust Homography Estimation via Hybrid RANSAC and Iteratively Reweighted Least Squares

Authors

  • Bilel Zerouali, Hassiba Benbouali University of Chlef (Author)
    Competing Interests

    The author declares no competing interests related to the research subject.

  • Ahmed M. Osman, Suez University (Author)
    Competing Interests

    The author declares no competing interests related to the research subject.

  • Enas Selem, Suez University (Author)
    Competing Interests

    The author declares no competing interests related to the research subject.


DOI:

https://doi.org/10.66279/f6kjr057

Keywords:

Robust Estimation, Adaptive Hybrid IRLS, RANSAC Optimization, HPatches Benchmark, Homography Estimation.

Abstract

Homography estimation under heavy illumination variation and viewpoint changes remains a challenging problem because of high outlier ratios and noise with non-Gaussian statistics. This paper introduces the Adaptive Hybrid Iteratively Reweighted Least Squares (AH-IRLS) algorithm, which leverages both robust initialization via RANSAC and iterative refinement through the IRLS procedure.
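The IRLS half of such a hybrid can be illustrated on a generic linear system. The following is a minimal sketch, not the authors' implementation: it starts from an ordinary least-squares fit and repeatedly reweights residuals with a robust weight function (Huber weights are used here for concreteness; the scale delta = 1 is an assumed setting).

```python
import numpy as np

def huber_weights(r, delta):
    # Huber M-estimator weights: 1 inside the threshold, delta/|r| outside,
    # so large residuals contribute with bounded influence.
    a = np.abs(r)
    return np.where(a <= delta, 1.0, delta / np.maximum(a, 1e-12))

def irls(A, b, delta=1.0, iters=50):
    # Start from ordinary least squares, then iterate weighted solves
    # until the estimate stops changing.
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(iters):
        w = huber_weights(A @ x - b, delta)
        sw = np.sqrt(w)
        x_new = np.linalg.lstsq(sw[:, None] * A, sw * b, rcond=None)[0]
        if np.linalg.norm(x_new - x) < 1e-10:
            x = x_new
            break
        x = x_new
    return x
```

With this weighting, a single gross outlier is downweighted by roughly delta/|r| and barely perturbs the final fit, whereas it would visibly bias the initial least-squares estimate.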
Two adaptive mechanisms are embedded in the algorithm.
First, the inlier threshold is determined automatically using the Median Absolute Deviation (MAD), eliminating the need for manual parameter tuning. Second, at every iteration, the robust loss function is selected adaptively based on the skewness and kurtosis of the current residual distribution. The core idea is not to invent new estimation primitives but to combine established robust statistical tools intelligently within a single homography estimation pipeline. Numerical stability is ensured through Hartley normalization and weighted singular-value decomposition (SVD).
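The two adaptive mechanisms can be sketched in a few lines. Note that the cutoff c = 2.5 and the skewness/kurtosis decision thresholds below are illustrative assumptions, not the paper's actual settings; only the 1.4826 factor (which makes the MAD a consistent estimate of the Gaussian standard deviation) is standard.

```python
import numpy as np

def mad_threshold(residuals, c=2.5):
    # Robust inlier threshold from the Median Absolute Deviation:
    # 1.4826 * MAD estimates sigma for a Gaussian core, and c * sigma
    # (c = 2.5 here, an assumed cutoff) becomes the threshold.
    med = np.median(residuals)
    sigma = 1.4826 * np.median(np.abs(residuals - med))
    return c * sigma

def choose_loss(residuals):
    # Pick a robust loss from the shape of the residual distribution.
    # The thresholds (excess kurtosis > 3, |skewness| > 1) are illustrative.
    r = residuals - np.mean(residuals)
    std = np.std(r)
    skew = np.mean(r**3) / std**3
    kurt = np.mean(r**4) / std**4 - 3.0  # excess kurtosis
    if kurt > 3.0:
        return "cauchy"  # very heavy tails: strongest downweighting
    if abs(skew) > 1.0:
        return "tukey"   # markedly asymmetric residuals
    return "huber"       # near-Gaussian residuals
```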
Experiments on the HPatches benchmark show consistent improvements over classical RANSAC; advanced sampling-based baselines (PROSAC, MAGSAC++) are also considered for additional validation. On the v_grace and i_toy sequences, AH-IRLS reduces the root-mean-square error (RMSE) by 56.8% and 52.8%, respectively, while achieving a mean inlier rate of 94.2%. These results confirm that combining adaptive statistical modeling with robust geometric estimation is highly effective under challenging real-world conditions.
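For reference, the building block the abstract mentions (a weighted direct linear transform with Hartley normalization) might be sketched as follows; this is a generic textbook formulation, not the authors' code, and the per-correspondence weights are whatever the surrounding IRLS loop supplies.

```python
import numpy as np

def hartley_normalize(pts):
    # Translate points to their centroid and scale so the mean distance
    # from the origin is sqrt(2); return homogeneous points and the transform.
    c = pts.mean(axis=0)
    s = np.sqrt(2) / np.mean(np.linalg.norm(pts - c, axis=1))
    T = np.array([[s, 0, -s * c[0]],
                  [0, s, -s * c[1]],
                  [0, 0, 1.0]])
    ph = np.column_stack([pts, np.ones(len(pts))])
    return (T @ ph.T).T, T

def weighted_dlt(src, dst, w):
    # Weighted DLT: each correspondence contributes two rows of A scaled
    # by sqrt(w); H is the right singular vector of the smallest singular
    # value, then denormalized.
    ns, Ts = hartley_normalize(src)
    nd, Td = hartley_normalize(dst)
    rows = []
    for (x, y, _), (u, v, _), wi in zip(ns, nd, w):
        sw = np.sqrt(wi)
        rows.append(sw * np.array([-x, -y, -1, 0, 0, 0, u * x, u * y, u]))
        rows.append(sw * np.array([0, 0, 0, -x, -y, -1, v * x, v * y, v]))
    _, _, Vt = np.linalg.svd(np.array(rows))
    Hn = Vt[-1].reshape(3, 3)
    H = np.linalg.inv(Td) @ Hn @ Ts  # undo both normalizations
    return H / H[2, 2]
```

With noise-free correspondences and unit weights this recovers the true homography up to numerical precision; with outliers, the IRLS weights shrink the offending rows of A before the SVD is taken.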


References

[1] R. Hartley and A. Zisserman, Multiple view geometry in computer vision. Cambridge University Press, 2003. DOI: https://doi.org/10.1017/CBO9780511811685

[2] O. Chum and J. Matas, “Matching with PROSAC - progressive sample consensus,” in 2005 IEEE computer society conference on computer vision and pattern recognition (CVPR’05), vol. 1, pp. 220–226, IEEE, 2005. DOI: https://doi.org/10.1109/CVPR.2005.221

[3] M. A. Fischler and R. C. Bolles, “Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography,” Communications of the ACM, vol. 24, no. 6, pp. 381–395, 1981. DOI: https://doi.org/10.1145/358669.358692

[4] D. Barath, J. Noskova, M. Ivashechkin, and J. Matas, “MAGSAC++, a fast, reliable and accurate robust estimator,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 1304–1312, 2020. DOI: https://doi.org/10.1109/CVPR42600.2020.00138

[5] R. Raguram, O. Chum, M. Pollefeys, J. Matas, and J.-M. Frahm, “USAC: A universal framework for random sample consensus,” IEEE transactions on pattern analysis and machine intelligence, vol. 35, no. 8, pp. 2022–2038, 2012. DOI: https://doi.org/10.1109/TPAMI.2012.257

[6] P. W. Holland and R. E. Welsch, “Robust regression using iteratively reweighted least-squares,” Communications in Statistics-theory and Methods, vol. 6, no. 9, pp. 813–827, 1977. DOI: https://doi.org/10.1080/03610927708827533

[7] C. V. Stewart, “Robust parameter estimation in computer vision,” SIAM review, vol. 41, no. 3, pp. 513–537, 1999. DOI: https://doi.org/10.1137/S0036144598345802

[8] D. Barath and J. Matas, “Graph-cut RANSAC,” in Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 6733–6741, 2018. DOI: https://doi.org/10.1109/CVPR.2018.00704

[9] P. H. Torr, S. J. Nasuto, and J. M. Bishop, “NAPSAC: High noise, high dimensional robust estimation - it’s in the bag,” in British Machine Vision Conference (BMVC), vol. 2, p. 3, 2002.

[10] V. Balntas, K. Lenc, A. Vedaldi, and K. Mikolajczyk, “HPatches: A benchmark and evaluation of handcrafted and learned local descriptors,” in Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 5173–5182, 2017. DOI: https://doi.org/10.1109/CVPR.2017.410

[11] P. H. Torr and A. Zisserman, “MLESAC: A new robust estimator with application to estimating image geometry,” Computer vision and image understanding, vol. 78, no. 1, pp. 138–156, 2000. DOI: https://doi.org/10.1006/cviu.1999.0832

[12] G. Nousias, K. K. Delibasis, and I. G. Maglogiannis, “Intelligent sampling consensus for homography estimation in football videos using featureless unpaired points,” IEEE Access, vol. 13, pp. 187843–187857, 2025. DOI: https://doi.org/10.1109/ACCESS.2025.3627538

[13] D. Barath, J. Matas, and J. Noskova, “MAGSAC: Marginalizing sample consensus,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 10197–10205, 2019. DOI: https://doi.org/10.1109/CVPR.2019.01044

[14] K. Aftab and R. Hartley, “Convergence of iteratively re-weighted least squares to robust m-estimators,” in 2015 IEEE Winter Conference on Applications of Computer Vision, pp. 480–487, IEEE, 2015. DOI: https://doi.org/10.1109/WACV.2015.70

[15] L. Peng, C. Kümmerle, and R. Vidal, “On the convergence of IRLS and its variants in outlier-robust estimation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 17808–17818, 2023. DOI: https://doi.org/10.1109/CVPR52729.2023.01708

[16] J. Li, Q. Hu, and M. Ai, “Robust geometric model estimation based on scaled Welsch q-norm,” IEEE Transactions on Geoscience and Remote Sensing, vol. 58, no. 8, pp. 5908–5921, 2020. DOI: https://doi.org/10.1109/TGRS.2020.2972982

[17] J. Li, Q. Hu, M. Ai, and S. Wang, “A geometric estimation technique based on adaptive m-estimators: Algorithm and applications,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 14, pp. 5613–5626, 2021. DOI: https://doi.org/10.1109/JSTARS.2021.3078516

[18] I. Daubechies, R. DeVore, M. Fornasier, and C. S. Güntürk, “Iteratively reweighted least squares minimization for sparse recovery,” Communications on Pure and Applied Mathematics: A Journal Issued by the Courant Institute of Mathematical Sciences, vol. 63, no. 1, pp. 1–38, 2010. DOI: https://doi.org/10.1002/cpa.20303

[19] H. Le, A. Eriksson, M. Milford, T. T. Do, T. J. Chin, and D. Suter, “Non-smooth m-estimator for maximum consensus estimation,” in Proceedings of the British Machine Vision Conference (BMVC) 2018, pp. 1–12, BMVA Press, 2018.

[20] F. Wen, R. Ying, Z. Gong, and P. Liu, “Efficient algorithms for maximum consensus robust fitting,” IEEE Transactions on Robotics, vol. 36, no. 1, pp. 92–106, 2019. DOI: https://doi.org/10.1109/TRO.2019.2943061

[21] H. Le, T.-J. Chin, A. Eriksson, T.-T. Do, and D. Suter, “Deterministic approximate methods for maximum consensus robust fitting,” IEEE transactions on pattern analysis and machine intelligence, vol. 43, no. 3, pp. 842–857, 2019. DOI: https://doi.org/10.1109/TPAMI.2019.2939307

[22] T.-J. Chin and D. Suter, “The maximum consensus problem,” in The Maximum Consensus Problem: Recent Algorithmic Advances, pp. 1–19, Springer, 2022. DOI: https://doi.org/10.1007/978-3-031-01818-3_1

[23] P. Antonante, V. Tzoumas, H. Yang, and L. Carlone, “Outlier-robust estimation: Hardness, minimally tuned algorithms, and applications,” IEEE Transactions on Robotics, vol. 38, no. 1, pp. 281–301, 2021. DOI: https://doi.org/10.1109/TRO.2021.3094984

[24] L. Carlone, “Estimation contracts for outlier-robust geometric perception,” Foundations and Trends® in Robotics, vol. 11, no. 2-3, pp. 90–224, 2023. DOI: https://doi.org/10.1561/2300000077

[25] K. Lebeda, “Robust sampling consensus,” CMP, Czech Technical University in Prague, no. 1, pp. 1–67, 2013.

[26] D. DeTone, T. Malisiewicz, and A. Rabinovich, “Deep image homography estimation,” arXiv preprint arXiv:1606.03798, 2016.

[27] J. Bian, W.-Y. Lin, Y. Matsushita, S.-K. Yeung, T.-D. Nguyen, and M.-M. Cheng, “GMS: Grid-based motion statistics for fast, ultra-robust feature correspondence,” in Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4181–4190, 2017. DOI: https://doi.org/10.1109/CVPR.2017.302

[28] L. Wang, X. Zhang, Z. Jiang, K. Dai, T. Xie, L. Yang, W. Yu, Y. Shen, B. Xu, and J. Li, “FMRT: Learning accurate feature matching with reconciliatory transformer,” IEEE Transactions on Automation Science and Engineering, vol. 22, pp. 11826–11842, 2025. DOI: https://doi.org/10.1109/TASE.2025.3538919

[29] K. Dai, Z. Zhou, Z. Jiang, Q. Sun, T. Xie, H. Gao, T. An, R. Li, and L. Zhao, “VD-Matcher: A very deep local feature matcher with weight recycling and keypoint detection,” IEEE Transactions on Circuits and Systems for Video Technology, 2025.

[30] W. Zhong and J. Jiang, “LGFCTR: Local and global feature convolutional transformer for image matching,” Expert Systems with Applications, vol. 270, p. 126393, 2025. DOI: https://doi.org/10.1016/j.eswa.2025.126393

[31] X. Lu and S. Du, “JamMa: Ultra-lightweight local feature matching with joint Mamba,” in Proceedings of the Computer Vision and Pattern Recognition Conference, pp. 14934–14943, 2025. DOI: https://doi.org/10.1109/CVPR52734.2025.01391

[32] S. Zhang, Z. Li, K. Zhang, Y. Lu, Y. Deng, L. Tang, X. Jiang, and J. Ma, “Deep learning reforms image matching: A survey and outlook,” arXiv preprint arXiv:2506.04619, 2025.

[33] Z. Zhang, “A flexible new technique for camera calibration,” IEEE Transactions on pattern analysis and machine intelligence, vol. 22, no. 11, pp. 1330–1334, 2000. DOI: https://doi.org/10.1109/34.888718

[34] T. Nguyen, S. W. Chen, S. S. Shivakumar, C. J. Taylor, and V. Kumar, “Unsupervised deep homography: A fast and robust homography estimation model,” IEEE Robotics and Automation Letters, vol. 3, no. 3, pp. 2346–2353, 2018. DOI: https://doi.org/10.1109/LRA.2018.2809549

[35] S.-Y. Cao, R. Zhang, L. Luo, B. Yu, Z. Sheng, J. Li, and H.-L. Shen, “Recurrent homography estimation using homography-guided image warping and focus transformer,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 9833–9842, 2023. DOI: https://doi.org/10.1109/CVPR52729.2023.00948

[36] J. Liu and X. Li, “Geometrized transformer for self-supervised homography estimation,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 9556–9565, 2023. DOI: https://doi.org/10.1109/ICCV51070.2023.00876

[37] J. Zhang, C. Wang, S. Liu, L. Jia, N. Ye, J. Wang, J. Zhou, and J. Sun, “Content-aware unsupervised deep homography estimation,” in European conference on computer vision, pp. 653–669, Springer, 2020. DOI: https://doi.org/10.1007/978-3-030-58452-8_38

[38] C. Ruby and M. Lakshmanan, “Liénard type nonlinear oscillators and quantum solvability,” Physica Scripta, vol. 99, no. 6, p. 062004, 2024. DOI: https://doi.org/10.1088/1402-4896/ad40dc

[39] P. Huber and E. Ronchetti, Robust Statistics. Wiley Series in Probability and Statistics, Wiley, 2011.

[40] R. I. Hartley, “In defense of the eight-point algorithm,” IEEE Transactions on pattern analysis and machine intelligence, vol. 19, no. 6, pp. 580–593, 1997. DOI: https://doi.org/10.1109/34.601246

[41] P. J. Rousseeuw and C. Croux, “Alternatives to the median absolute deviation,” Journal of the American Statistical association, vol. 88, no. 424, pp. 1273–1283, 1993. DOI: https://doi.org/10.1080/01621459.1993.10476408

[42] R. Fisher, Statistical Methods for Research Workers. Biological monographs and manuals, Oliver and Boyd, 1925.

[43] A. E. Beaton and J. W. Tukey, “The fitting of power series, meaning polynomials, illustrated on band-spectroscopic data,” Technometrics, vol. 16, no. 2, pp. 147–185, 1974. DOI: https://doi.org/10.1080/00401706.1974.10489171

[44] M. J. Black and P. Anandan, “The robust estimation of multiple motions: Parametric and piecewise-smooth flow fields,” Computer vision and image understanding, vol. 63, no. 1, pp. 75–104, 1996. DOI: https://doi.org/10.1006/cviu.1996.0006

[45] D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” International journal of computer vision, vol. 60, no. 2, pp. 91–110, 2004. DOI: https://doi.org/10.1023/B:VISI.0000029664.99615.94


Published

25-04-2026

Data Availability Statement

The data that support the findings of this study are available on request from the corresponding authors.

How to Cite

Adaptive Robust Homography Estimation via Hybrid RANSAC and Iteratively Reweighted Least Squares. (2026). Computational Discovery and Intelligent Systems (CDIS), 3(2), 132-149. https://doi.org/10.66279/f6kjr057
