Blurry dense object extraction based on buffer parsing network for high-resolution satellite remote sensing imagery
Dingyuan Chen; Yanfei Zhong; Ailong Ma; Liangpei Zhang
From: ISPRS Journal of Photogrammetry and Remote Sensing
2024, Vol. 207, Pages 122–140 (doi: 10.1016/j.isprsjprs.2023.11.007)

Abstract: Despite the remarkable progress of deep learning-based object extraction in revealing the number and boundary location of geo-objects for high-resolution satellite imagery, it still faces challenges in accurately extracting blurry dense objects. Unlike general objects, blurry dense objects have limited spatial resolution, leading to inaccurate and connected boundaries. Even with improved spatial resolution and recent boundary refinement methods for general object extraction, connected boundaries may remain undetected in blurry dense object extraction if the gap between object boundaries is less than the spatial resolution. This paper proposes a blurry dense object extraction method named the buffer parsing network (BPNet) for satellite imagery. To solve the connected boundary problem, a buffer parsing module is designed for dense boundary separation. Its essential component is a buffer parsing architecture that comprises a boundary buffer generator and an interior/boundary parsing step. This architecture is instantiated as a dual-task mutual learning head that co-learns the mutual information between the interior and the boundary buffer, which estimates the dependence between the dual-task outputs. Specifically, the boundary buffer head generates a buffer region that overlaps with the interior, enabling the architecture to learn the dual-task bias and assign a reliable semantic in the overlapping region through high-confidence voting. To alleviate the inaccurate boundary location problem, BPNet incorporates a high-frequency refinement module for blurry boundary refinement. This module includes a high-frequency enhancement unit that enhances high-frequency signals at the blurry boundaries and a cascade buffer parsing refinement unit that integrates the buffer parsing architecture in a coarse-to-fine manner to recover the boundary details progressively. The proposed BPNet framework is validated on two representative blurry dense object datasets for small vehicle and agricultural greenhouse object extraction. The results indicate the superior performance of the BPNet framework, which achieves 25.25% and 73.51% in the AP50 (segm) metric on the two datasets, compared with 21.92% and 63.95% for the state-of-the-art PointRend method. Furthermore, the ablation analysis of the super-resolution and building extraction methods demonstrates the significance of high-quality boundary details for subsequent practical applications, such as building vectorization. The code is available at: https://github.com/Dingyuan-Chen/BPNet.
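
The abstract only outlines the dual-task buffer parsing idea; the authors' actual implementation is in the linked repository. As a minimal, hypothetical sketch (in NumPy, with made-up names such as resolve_overlap and an assumed 0.5 confidence threshold, none of which are taken from the paper), the high-confidence voting step in the interior/buffer overlap region could look roughly like this:

import numpy as np

def resolve_overlap(interior_prob: np.ndarray,
                    buffer_prob: np.ndarray,
                    thresh: float = 0.5) -> np.ndarray:
    """Hypothetical high-confidence voting between the interior head and the
    boundary-buffer head (a sketch of the idea described in the abstract,
    not the authors' implementation).

    interior_prob, buffer_prob: HxW per-pixel probabilities from the two heads.
    Returns a label map: 0 = background, 1 = object interior, 2 = boundary buffer.
    """
    interior_mask = interior_prob > thresh
    buffer_mask = buffer_prob > thresh

    labels = np.zeros(interior_prob.shape, dtype=np.uint8)
    labels[interior_mask] = 1
    labels[buffer_mask] = 2

    # In the overlapping region (pixels claimed by both heads), keep the
    # semantics of whichever head is more confident, so touching instances
    # are separated by the buffer only where the buffer head is surer.
    overlap = interior_mask & buffer_mask
    labels[overlap] = np.where(interior_prob[overlap] >= buffer_prob[overlap], 1, 2)
    return labels

Under this sketch, separation of connected instances would come from the buffer label (2) forming a thin gap between interior regions, which is the role the abstract assigns to the boundary buffer.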
Keywords: Blurry dense object extraction; Dense boundary separation; Blurry boundary refinement; Buffer parsing architecture; High-resolution remote sensing imagery

Related articles:
1.Knowledge evolution learning: A cost-free weakly supervised semantic segmentation framework for high-resolution land cover classification
2.Multi-echo hyperspectral reflectance extraction method based on full waveform hyperspectral LiDAR
3.geeSEBAL-MODIS: Continental-scale evapotranspiration based on the surface energy balance for South America
4.Suaeda salsa spectral index for Suaeda salsa mapping and fractional cover estimation in intertidal wetlands
5.Better localized predictions with Out-of-Scope information and Explainable AI: One-Shot SAR backscatter nowcast framework with data from neighboring region
6.Rapid survey method for large-scale outdoor surveillance cameras using binary space partitioning
7.Edge aware depth inference for large-scale aerial building multi-view stereo
8.An efficient point cloud place recognition approach based on transformer in dynamic environment
9.Unsupervised domain adaptation for SAR target classification based on domain- and class-level alignment: From simulated to real data
