Binocular Stereo Matching Network Based on Two-Stage Cost Volume and Dynamic Attention
CSTR:
Author:
Affiliation:

College of Electronics and Information Engineering, Tongji University, Shanghai 201804, China

Author profile:

WANG Zhicheng, Professor, Ph.D. in Engineering; main research interests: pattern recognition and computer vision. E-mail: zhichengwang@tongji.edu.cn

Corresponding author:

WANG Zehao, Master's student; main research interest: binocular stereo matching. E-mail: 2033063@tongji.edu.cn

CLC number:

TP391.4

Funding:

National Defense Basic Research Program of China (JCKY 2020206B03)


EDNet++: Improving Stereo Matching with Two-Stage Combined Cost Volume and Multiscale Dynamic Attention
Author:
Affiliation:

College of Electronics and Information Engineering, Tongji University, Shanghai 201804, China

    Abstract:

    Most state-of-the-art binocular stereo matching networks construct a 4D cost volume to preserve the semantic information of the images, which adds considerable computational overhead. To address this problem, EDNet++, a network with a two-stage combined cost volume and multi-scale dynamic attention, is proposed. First, a similarity-based cost volume built over the global, coarse-grained disparity search range serves as a guide for constructing a fine-grained combined cost volume over a local search range. Second, a residual-based dynamic attention mechanism is proposed that adaptively generates a spatial attention distribution from intermediate results; transfer experiments verify its effectiveness. Finally, comparative experiments on major public datasets show that, relative to other methods, EDNet++ strikes a good balance between accuracy and real-time performance.
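The two-stage idea in the abstract can be pictured with a small sketch: a coarse similarity (correlation) volume over the full disparity range yields an initial estimate, and a fine volume is then built only over a narrow window around that estimate. The following is a minimal NumPy illustration of the idea, not the network's actual implementation: EDNet++ operates on learned CNN features and aggregates the volumes with further convolutions, and all function names here are illustrative.

```python
import numpy as np

def correlation_volume(feat_l, feat_r, max_disp):
    """Stage 1: coarse similarity-based cost volume over the full
    disparity range. feat_l, feat_r: (C, H, W) feature maps; the score
    at disparity d is the channel-wise mean correlation."""
    c, h, w = feat_l.shape
    vol = np.zeros((max_disp, h, w), dtype=np.float32)
    for d in range(max_disp):
        if d == 0:
            vol[0] = (feat_l * feat_r).mean(axis=0)
        else:
            vol[d, :, d:] = (feat_l[:, :, d:] * feat_r[:, :, :-d]).mean(axis=0)
    return vol

def soft_argmax(vol):
    """Turn a similarity volume into a disparity map via a
    softmax-weighted average over the disparity candidates."""
    e = np.exp(vol - vol.max(axis=0, keepdims=True))
    p = e / e.sum(axis=0, keepdims=True)
    d = np.arange(vol.shape[0], dtype=np.float32)[:, None, None]
    return (p * d).sum(axis=0)

def local_combined_volume(feat_l, feat_r, coarse_disp, radius):
    """Stage 2: fine-grained volume over a window of +-radius disparities
    around the per-pixel coarse estimate. In a 'combined' volume each
    candidate would keep both the correlation and the concatenated
    features; only the correlation part is shown for brevity."""
    c, h, w = feat_l.shape
    vol = np.zeros((2 * radius + 1, h, w), dtype=np.float32)
    cols = np.arange(w)[None, :].repeat(h, axis=0)
    rows = np.arange(h)[:, None]
    for i, off in enumerate(range(-radius, radius + 1)):
        d = np.rint(coarse_disp + off).astype(int)   # per-pixel candidate
        src = np.clip(cols - d, 0, w - 1)            # sample right feature
        warped = feat_r[:, rows, src]
        vol[i] = (feat_l * warped).mean(axis=0)
    return vol
```

Restricting stage 2 to 2·radius+1 candidates instead of the full range is what keeps a fine, concatenation-style volume affordable.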

    Abstract:

    Most state-of-the-art stereo matching networks construct a 4D cost volume to preserve the semantic information of the image, which increases the computational cost of the network. To solve this problem, a network named EDNet++ with a two-stage combined cost volume and multi-scale dynamic attention is proposed. First, a correlation cost volume is constructed over a global, coarse-grained disparity search range and used as a guide to build a fine-grained combined cost volume over a local disparity search range. Then, a residual-based dynamic attention mechanism adaptively generates a spatial attention distribution from intermediate results, and its effectiveness is demonstrated by transfer experiments. Comparison experiments on various public datasets show that EDNet++ achieves a good balance between accuracy and real-time performance compared with other methods.

    References
    [1] KENDALL A, MARTIROSYAN H, DASGUPTA S, et al. End-to-end learning of geometry and context for deep stereo regression[C]//2017 IEEE International Conference on Computer Vision (ICCV). Venice: IEEE, 2017: 66-75.
    [2] CHANG J, CHEN Y. Pyramid stereo matching network[C]//2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City: IEEE, 2018: 5410-5418.
    [3] GUO X, YANG K, YANG W, et al. Group-wise correlation stereo network[C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach: IEEE, 2019: 3268-3277.
    [4] ZHANG F, PRISACARIU V, YANG R, et al. Ga-net: Guided aggregation net for end-to-end stereo matching[C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach: IEEE, 2019: 185-194.
    [5] MAYER N, ILG E, HÄUSSER P, et al. A large dataset to train convolutional networks for disparity, optical flow, and scene flow estimation[C]//2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas: IEEE, 2016: 4040-4048.
    [6] LIANG Z, FENG Y, GUO Y, et al. Learning for disparity estimation through feature constancy[C]//2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City: IEEE, 2018: 2811-2820.
    [7] YANG G, ZHAO H, SHI J, et al. Segstereo: Exploiting semantic information for disparity estimation[C]//Proceedings of the European Conference on Computer Vision (ECCV). Munich: IEEE, 2018: 636-651.
    [8] PANG J, SUN W, REN J S, et al. Cascade residual learning: A two-stage convolutional neural network for stereo matching[C]//2017 IEEE International Conference on Computer Vision Workshops (ICCVW). Venice: IEEE, 2017: 878-886.
    [9] SCHARSTEIN D, SZELISKI R. A taxonomy and evaluation of dense two-frame stereo correspondence algorithms[J]. International Journal of Computer Vision, 2002, 47: 7.
    [10] ZBONTAR J, LECUN Y. Stereo matching by training a convolutional neural network to compare image patches[J]. Journal of Machine Learning Research, 2016, 17(65): 1.
    [11] DOSOVITSKIY A, FISCHER P, ILG E, et al. Flownet: Learning optical flow with convolutional networks[C]//2015 IEEE International Conference on Computer Vision (ICCV). Santiago: IEEE, 2015: 2758-2766.
    [12] CHEN Z, SUN X, WANG L, et al. A deep visual correspondence embedding model for stereo matching costs[C]//2015 IEEE International Conference on Computer Vision (ICCV). Santiago: IEEE, 2015: 972-980.
    [13] FENG Y, LIANG Z, LIU H. Efficient deep learning for stereo matching with larger image patches[C]//2017 10th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI). Shanghai: IEEE, 2017: 1-5.
    [14] NIE G, CHENG M, LIU Y, et al. Multi-level context ultra-aggregation for stereo matching[C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach: IEEE, 2019: 3278-3286.
    [15] HUANG G, LIU Z, VAN DER MAATEN L, et al. Densely connected convolutional networks[C]//2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Honolulu: IEEE, 2017: 2261-2269.
    [16] GU X, FAN Z, ZHU S, et al. Cascade cost volume for high-resolution multi-view stereo and stereo matching[C]//2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle: IEEE, 2020: 2492-2501.
    [17] SHEN Z, DAI Y, RAO Z. Cfnet: Cascade and fused cost volume for robust stereo matching[C]//2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Nashville: IEEE, 2021: 13901-13910.
    [18] KHAMIS S, FANELLO S, RHEMANN C, et al. Stereonet: Guided hierarchical refinement for real-time edge-aware depth prediction[C]//Proceedings of the European Conference on Computer Vision (ECCV). Munich: Springer, 2018: 573-590.
    [19] GIDARIS S, KOMODAKIS N. Detect, replace, refine: Deep structured prediction for pixelwise labeling[C]//2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Honolulu: IEEE, 2017: 7187-7196.
    [20] JIE Z, WANG P, LING Y, et al. Left-right comparative recurrent model for stereo matching[C]//2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City: IEEE, 2018: 3838-3846.
    [21] YU A, GUO W, LIU B, et al. Attention aware cost volume pyramid based multi-view stereo network for 3d reconstruction[J]. ISPRS Journal of Photogrammetry and Remote Sensing, 2021, 175: 448.
    [22] STUCKER C, SCHINDLER K. Resdepth: Learned residual stereo reconstruction[C]//2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). Seattle: IEEE, 2020: 707-716.
    [23] RONNEBERGER O, FISCHER P, BROX T. U-net: convolutional networks for biomedical image segmentation[C]//2015 Medical Image Computing and Computer-Assisted Intervention (MICCAI). Munich: Springer Nature, 2015: 234-241.
    [24] WANG Y, LAI Z, HUANG G, et al. Anytime stereo image depth estimation on mobile devices[C]//2019 International Conference on Robotics and Automation (ICRA). Montreal: IEEE, 2019: 5893-5900.
    [25] CHENG X, ZHONG Y, HARANDI M, et al. Hierarchical neural architecture search for deep stereo matching[J]. Advances in Neural Information Processing Systems, 2020, 33: 22158.
    [26] PASZKE A, GROSS S, MASSA F, et al. Pytorch: An imperative style, high-performance deep learning library[C]//Advances in Neural Information Processing Systems: volume 32. Vancouver: Curran Associates, Inc., 2019: 8026-8037.
    [27] ZHANG S, WANG Z, WANG Q, et al. Ednet: Efficient disparity estimation with cost volume combination and attention-based spatial residual[C]//2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Nashville: IEEE, 2021: 5433-5442.
    [28] CHABRA R, STRAUB J, SWEENY C, et al. Stereodrnet: Dilated residual stereo net[C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach: IEEE, 2019: 11786-11795.
    [29] WANG Q, SHI S, ZHENG S, et al. FADNet: A fast and accurate network for disparity estimation[C]//2020 IEEE International Conference on Robotics and Automation (ICRA). Paris: IEEE, 2020: 101-107.
    [30] XU H, ZHANG J. Aanet: Adaptive aggregation network for efficient stereo matching[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle: IEEE, 2020: 1956-1965.
    [31] XU G, CHENG J, GUO P, et al. Attention concatenation volume for accurate and efficient stereo matching[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. New Orleans: IEEE, 2022: 12981-12990.
    [32] ZBONTAR J, LECUN Y. Computing the stereo matching cost with a convolutional neural network[C]//2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Boston: IEEE, 2015: 1592-1599.
    [33] YIN Z, DARRELL T, YU F. Hierarchical discrete distribution decomposition for match density estimation[C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach: IEEE, 2019: 6037-6046.
    [34] DUGGAL S, WANG S, MA W, et al. Deeppruner: Learning efficient stereo matching via differentiable patchmatch[C]//2019 IEEE/CVF International Conference on Computer Vision (ICCV). Seoul: IEEE, 2019: 4383-4392.
    [35] BADKI A, TROCCOLI A, KIM K, et al. Bi3d: Stereo depth estimation via binary classifications[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle: IEEE, 2020: 1597-1605.
    [36] YANG M, WU F, LI W. Rlstereo: Real-time stereo matching based on reinforcement learning[J]. IEEE Transactions on Image Processing, 2021, 30: 9442.
    [37] TONIONI A, TOSI F, POGGI M, et al. Real-time self-adaptive deep stereo[C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach: IEEE, 2019: 195-204.
Cite this article:

WANG Zhicheng, WANG Zehao. Binocular stereo matching network based on two-stage cost volume and dynamic attention[J]. Journal of Tongji University (Natural Science), 2024, 52(10): 1640-1648.

History
  • Received: 2022-11-21
  • Published online: 2024-11-01