
ePrints — Open Access

Automatic Extraction of Water and Shadow from SAR Images Based on a Multi-Resolution Dense Encoder and Decoder Network

Lookup NU author(s): Dr Lifu Chen, Professor Zhenhong Li, Dr Jin Xing

Licence

This work is licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0).


Abstract

The water and shadow areas in SAR images contain rich information for various applications, but at present this information cannot be extracted automatically and precisely. To address this problem, a new framework called the Multi-Resolution Dense Encoder and Decoder (MRDED) network is proposed, which integrates a Convolutional Neural Network (CNN), Residual Network (ResNet), Dense Convolutional Network (DenseNet), Global Convolutional Network (GCN), and Convolutional Long Short-Term Memory (ConvLSTM). MRDED contains three parts: the Gray Level Gradient Co-occurrence Matrix (GLGCM), the Encoder network, and the Decoder network. The GLGCM extracts low-level features, which are further processed by the Encoder. The Encoder network employs ResNet to extract features at different resolutions. The Decoder network has two components, namely Multi-level Features Extraction and Fusion (MFEF) and Score maps Fusion (SF). Two versions of MFEF are implemented, MFEF1 and MFEF2, each generating a separate score map. They differ in their pooling module: MFEF2 uses the Chained Residual Pooling (CRP) module, whereas MFEF1 replaces it with an Improved Chained Residual Pooling (ICRP) module formed by adopting ConvLSTM. The two score maps generated by MFEF1 and MFEF2 are fused with different weights to produce a fused score map, to which the Softmax function is applied to generate the final extraction results for water and shadow areas. To evaluate the proposed framework, MRDED is trained and tested on large SAR images, and its classification performance is compared with that of eight other frameworks. MRDED outperformed them all, reaching 80.12% Pixel Accuracy (PA) and 73.88% Intersection over Union (IoU) for water, 88% PA and 77.11% IoU for shadow, and 95.16% PA and 90.49% IoU for background.
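As a minimal sketch of the final stage described in the abstract — weighted fusion of two score maps followed by a softmax — together with the per-class PA and IoU metrics used for evaluation, the following NumPy code illustrates the computations. The fusion weights, map shapes, and function names here are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def fuse_and_classify(score1, score2, w1=0.5, w2=0.5):
    """Fuse two (H, W, C) score maps with scalar weights (assumed values),
    apply a numerically stable softmax over the class axis,
    and return the per-pixel argmax label map."""
    fused = w1 * score1 + w2 * score2
    e = np.exp(fused - fused.max(axis=-1, keepdims=True))
    probs = e / e.sum(axis=-1, keepdims=True)
    return probs.argmax(axis=-1)

def per_class_metrics(pred, truth, cls):
    """Pixel Accuracy and Intersection over Union for one class."""
    p, t = (pred == cls), (truth == cls)
    pa = (p & t).sum() / t.sum()          # correctly labelled / true pixels
    iou = (p & t).sum() / (p | t).sum()   # intersection / union
    return pa, iou
```

With three classes (e.g. water, shadow, background), `fuse_and_classify` yields a label map over which `per_class_metrics` can be evaluated per class, mirroring the PA/IoU figures reported above.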


Publication metadata

Author(s): Zhang P, Chen L, Li Z, Xing J, Xing X, Yuan Z

Publication type: Article

Publication status: Published

Journal: Sensors

Year: 2019

Volume: 19

Issue: 16

Online publication date: 16/08/2019

Acceptance date: 12/08/2019

Date deposited: 15/10/2019

ISSN (print): 1424-8220

ISSN (electronic): 1424-8220

Publisher: MDPI

URL: https://doi.org/10.3390/s19163576

DOI: 10.3390/s19163576

PubMed id: 31426396

