Search Results (33,767)

Search Parameters:
Journal = Remote Sensing

17 pages, 6882 KiB  
Article
A New Retrieval Algorithm of Fractional Snow over the Tibetan Plateau Derived from AVH09C1
by Hang Yin, Liyan Xu and Yihang Li
Remote Sens. 2024, 16(13), 2260; https://doi.org/10.3390/rs16132260 - 21 Jun 2024
Abstract
Snow cover products are primarily derived from the Moderate-resolution Imaging Spectrometer (MODIS) and Advanced Very-High-Resolution Radiometer (AVHRR) datasets. MODIS achieves both snow/non-snow discrimination and snow cover fractional retrieval, while early AVHRR-based snow cover products only focused on snow/non-snow discrimination. The AVHRR Climate Data Record (AVHRR-CDR) provides a nearly 40-year global dataset that has the potential to fill the gap in long-term snow cover fractional monitoring. Our study selects the Qinghai–Tibet Plateau as the experimental area, utilizing AVHRR-CDR surface reflectance data (AVH09C1) and calibrating with the MODIS snow product MOD10A1. The snow cover percentage retrieval from the AVHRR dataset is performed using Surface Reflectance at 0.64 μm (SR1) and Surface Reflectance at 0.86 μm (SR2), along with a simulated Normalized Difference Snow Index (NDSI) model. Also, in order to detect the effects of land-cover type and topography on snow inversion, we tested the accuracy of the algorithm with and without these influences, respectively (vanilla algorithm and improved algorithm). The accuracy of the AVHRR snow cover percentage data product is evaluated using MOD10A1, ground snow-depth measurements and ERA5. The results indicate that the logic model based on NDSI has the best fitting effect, with R-square and RMSE values of 0.83 and 0.10, respectively. Meanwhile, the accuracy was improved after taking into account the effects of land-cover type and topography. The model is validated using MOD10A1 snow-covered areas, showing snow cover area differences of less than 4% across 6 temporal phases. The improved algorithm results in better consistency with MOD10A1 than with the vanilla algorithm. Moreover, the RMSE reaches greater levels when the elevation is below 2000 m or above 6000 m and is lower when the slope is between 16° and 20°. Using ground snow-depth measurements as ground truth, the multi-year recall rates are mostly above 0.7, with an average recall rate of 0.81. The results also show a high degree of consistency with ERA5. The validation results demonstrate that the AVHRR snow cover percentage remote sensing product proposed in this study exhibits high accuracy in the Tibetan Plateau region, also demonstrating that land-cover type and topographic factors are important to the algorithm. Our study lays the foundation for a global snow cover percentage product based on AVHRR-CDR and furthermore lays a basic work for generating a long-term AVHRR-MODIS fractional snow cover dataset. Full article
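
The core of the retrieval described above is a mapping from an NDSI-style index, simulated from the two AVHRR reflectance bands, to fractional snow cover (FSC) through a logistic ("logic") model calibrated against MOD10A1. The sketch below only illustrates that idea; the band combination, the logistic form, and the synthetic calibration data are assumptions for illustration, not the authors' fitted model.

```python
# Illustrative sketch: fit a logistic FSC model to an NDSI-style index simulated from
# AVHRR SR1/SR2, calibrated against MOD10A1-like reference values. All data and the
# exact index/model forms are assumptions, not the paper's calibrated retrieval.
import numpy as np
from scipy.optimize import curve_fit

def simulated_ndsi(sr1, sr2):
    """Pseudo-NDSI from AVH09C1 reflectance at 0.64 um (SR1) and 0.86 um (SR2)."""
    return (sr1 - sr2) / (sr1 + sr2 + 1e-6)

def logistic(x, a, b, c):
    """Logistic model mapping the index to fractional snow cover."""
    return c / (1.0 + np.exp(-(a * x + b)))

rng = np.random.default_rng(0)
sr1, sr2 = rng.uniform(0.05, 0.9, 500), rng.uniform(0.05, 0.7, 500)
# synthetic stand-in for coincident MOD10A1 FSC used as the calibration target
fsc_ref = np.clip(0.5 * (sr1 - sr2) + 0.3 + rng.normal(0, 0.05, 500), 0, 1)

ndsi = simulated_ndsi(sr1, sr2)
params, _ = curve_fit(logistic, ndsi, fsc_ref, p0=[5.0, 0.0, 1.0], maxfev=10000)
fsc_pred = np.clip(logistic(ndsi, *params), 0, 1)
rmse = np.sqrt(np.mean((fsc_pred - fsc_ref) ** 2))
print(f"fitted params: {params}, RMSE: {rmse:.3f}")
```
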
Show Figures

Figure 1: Study area and test areas for regression.
Figure 2: Flowchart of AVHRR-FSC generation and accuracy analysis in this study.
Figure 3: Linear fitting of SR1, SR2 and NDSI_AVHRR with MOD10A1 for the training data; the blue line is the fit line and the dashed line is the perfect-prediction line. (a) SR1; (b) SR2; (c) NDSI_AVHRR.
Figure 4: Scatter density plots of the logistic fitting results after accounting for elevation, slope and BT4, compared with MOD10A1, for the uncategorized, grass, forest and bareland vegetation cover types. The dashed line is the perfect-prediction line.
Figure 5: Comparison of AVHRR_FSC (vanilla algorithm) and MODIS_FSC mapping; both stand for total snow cover area, and RMSE is computed per pixel. 1, 2, and 3 correspond to the three test areas.
Figure 6: Comparison of AVHRR_FSC* (improved algorithm) and MODIS_FSC mapping; both stand for total snow cover area, and RMSE is computed per pixel. 1, 2, and 3 correspond to the three test areas.
Figure 7: Trends in RMSE of the AVHRR FSC inversion results with (a) elevation and (b) slope, using MOD10A1 as ground truth. RMSE and AVHRR_FSC refer to the vanilla algorithm; RMSE* and AVHRR_FSC* to the improved algorithm.
Figure 8: Trends in RMSE of the AVHRR FSC inversion results for different vegetation types, using MOD10A1 as ground truth. RMSE and AVHRR_FSC refer to the vanilla algorithm; RMSE* and AVHRR_FSC* to the improved algorithm.
Figure 9: Snow cover pixel count of AVHRR-FSC* (improved algorithm) compared with ERA5: (a) daily snow cover pixel count; (b) consistency between the two datasets.
17 pages, 2420 KiB  
Article
Estimation of the Wind Field with a Single High-Frequency Radar
by Abïgaëlle Dussol and Cédric Chavanne
Remote Sens. 2024, 16(13), 2258; https://doi.org/10.3390/rs16132258 - 21 Jun 2024
Abstract
Over several decades, high-frequency (HF) radars have been employed for remotely measuring various ocean surface parameters, encompassing surface currents, waves, and winds. Wind direction and speed are usually estimated from both first-order and second-order Bragg-resonant scatter from two or more HF radars monitoring the same area of the ocean surface. This limits the observational domain to the common area where second-order scatter is available from at least two radars. Here, we propose to estimate wind direction and speed from the first-order scatter of a single HF radar, yielding the same spatial coverage as for surface radial currents. Wind direction is estimated using the ratio of the positive and negative first-order Bragg peaks intensity, with a new simple algorithm to remove the left/right directional ambiguity from a single HF radar. Wind speed is estimated from wind direction and de-tided surface radial currents using an artificial neural network which has been trained with in situ wind speed observations. Radar-derived wind estimations are compared with in situ observations in the Lower Saint-Lawrence Estuary (Quebec, Canada). The correlation coefficients between radar-estimated and in situ wind directions range from 0.84 to 0.95 for Wellen Radars (WERAs) and from 0.79 to 0.97 for Coastal Ocean Dynamics Applications Radars (CODARs), while the root mean square differences range from 8° to 12° for WERAs and from 10° to 19° for CODARs. Correlation coefficients between the radar-estimated and the in situ wind speeds range from 0.89 to 0.93 for WERAs and from 0.81 to 0.93 for CODARs, while the root mean square differences range from 1.3 m.s−1 to 2.3 m.s−1 for WERAs and from 1.6 m.s−1 to 3.9 m.s−1 for CODARs. Full article
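
Two pieces of the method above lend themselves to a quick numerical illustration: the location of the first-order Bragg peaks for a given radar frequency, and the way the ratio of the approaching and receding peak intensities constrains the wind direction relative to the radar look direction (up to the left/right ambiguity the paper resolves with its histogram-based scheme). The sketch below uses a standard cos^(2s) half-angle spreading model as a stand-in; the exponent and the inversion by lookup are illustrative assumptions, not the authors' calibrated algorithm.

```python
# Back-of-the-envelope sketch of (1) the first-order Bragg frequency and (2) a simple
# directional-spreading relation between the Bragg peak ratio and wind direction.
import numpy as np

def bragg_frequency(radar_freq_hz, g=9.81, c=3.0e8):
    """Deep-water Bragg frequency: f_B = sqrt(g * f_radar / (pi * c))."""
    return np.sqrt(g * radar_freq_hz / (np.pi * c))

print(f"WERA 16.15 MHz -> f_B ~ {bragg_frequency(16.15e6):.2f} Hz")  # ~0.41 Hz

def peak_ratio_db(theta_wind_rad, s=4.0):
    """Ratio (dB) of approaching to receding Bragg peaks for wind at angle theta
    from the radar look direction, under an assumed cos^(2s) half-angle spreading."""
    toward = np.cos(theta_wind_rad / 2.0) ** (2 * s)
    away = np.sin(theta_wind_rad / 2.0) ** (2 * s)
    return 10.0 * np.log10((toward + 1e-9) / (away + 1e-9))

# Invert a measured ratio by table lookup; only |theta| is recovered, leaving the
# left/right ambiguity that the single-radar method must still remove.
thetas = np.radians(np.linspace(0, 180, 721))
measured_db = 8.0
theta_abs = thetas[np.argmin(np.abs(peak_ratio_db(thetas) - measured_db))]
print(f"|wind angle from look direction| ~ {np.degrees(theta_abs):.1f} deg (left or right)")
```
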
Show Figures

Graphical abstract
Figure 1: An example of an HF radar sea echo Doppler spectrum from the WERA W2 (16.15 MHz) showing the first-order Bragg peaks. f_Bragg is the Bragg frequency, and Δf is the offset of the first-order peak relative to the Bragg frequency. The SNRs of the first-order peaks are indicated.
Figure 2: Distributions of surface wave energy relative to the wind direction angle for wind blowing towards (left), at a right angle to (middle), and away from (right) the radar look direction. The backscatter spectra below illustrate the relative heights of the approaching (+f_B) and receding (−f_B) Bragg peaks for each case. Adapted from Fernandez et al. (1997) [3].
Figure 3: Schematic of the removal of the wind directional ambiguity using a single HF radar. The gray area is used to obtain the histogram of all possible wind directions.
Figure 4: Histograms of (a) the left solutions, (b) the right solutions and (c) both solutions (left and right); the modal direction is indicated by the vertical line in (c).
Figure 5: Training time and SI (relative to buoy PMZA-Riki observations) for different ANN architectures.
Figure 6: ANN structure used for wind field estimation with a single HF radar.
Figure 7: (a) Map of the Gulf and (b) the lower Estuary of Saint Lawrence. The black-outlined rectangle in (a) delimits the study area in the lower Saint Lawrence Estuary; instrument locations are indicated, the wind field predicted by a numerical model for 1 January 2017 at 23h00 is shown by blue arrows, and black lines indicate isobaths.
Figure 8: Correlation between radar-estimated and in situ wind direction (blue lines) and number of data points (red lines) as a function of SNR threshold, for W2 WERA (solid lines) and C1 CODAR (dashed lines).
Figure 9: Scatterplots of (a,c) wind direction and (b,d) wind speed estimated from (a,b) W1 WERA and (c,d) C2 CODAR versus in situ measurements in summer 2013.
Figure 10: Time series of (a) wind direction and (b) wind speed in winter 2016–2017 from in situ observations (red), the HRDPS numerical model (blue) and W2 WERA (black).
Figure 11: Spatial distribution of (a) modulus and (b) phase of the complex correlation coefficient R between wind velocities estimated from W2 WERA and simulated by HRDPS for winter 2016–2017.
Figure 12: Winds estimated from W1 (blue arrows), W2 (red arrows), C1 (green arrows) and C2 (black arrows), measured at Bic station (magenta arrow), and simulated by the HRDPS model (gray arrows) at 11h00, 27 January 2017.
Figure 13: Wind speed projected in the radar direction, shown in black for the meteorological station and in red for the ANN, versus residual radial current from W2 in winter 2016–2017.
Figure 14: Median absolute difference between in situ and radar wind directions versus wind speed in winter 2017 (black line) and summer 2013 (red line); the dashed line indicates a threshold of 5 degrees.
26 pages, 9310 KiB  
Article
Discrimination of Degraded Pastures in the Brazilian Cerrado Using the PlanetScope SuperDove Satellite Constellation
by Angela Gabrielly Pires Silva, Lênio Soares Galvão, Laerte Guimarães Ferreira Júnior, Nathália Monteiro Teles, Vinícius Vieira Mesquita and Isadora Haddad
Remote Sens. 2024, 16(13), 2256; https://doi.org/10.3390/rs16132256 - 21 Jun 2024
Abstract
Pasture degradation poses significant economic, social, and environmental impacts in the Brazilian savanna ecosystem. Despite these impacts, effectively detecting varying intensities of agronomic and biological degradation through remote sensing remains challenging. This study explores the potential of the eight-band PlanetScope SuperDove satellite constellation to discriminate between five classes of pasture degradation: non-degraded pasture (NDP); pastures with low- (LID) and moderate-intensity degradation (MID); severe agronomic degradation (SAD); and severe biological degradation (SBD). Using a set of 259 cloud-free images acquired in 2022 across five sites located in central Brazil, the study aims to: (i) identify the most suitable period for discriminating between various degradation classes; (ii) evaluate the Random Forest (RF) classification performance of different SuperDove attributes; and (iii) compare metrics of accuracy derived from two predicted scenarios of pasture degradation: a more challenging one involving five classes (NDP, LID, MID, SAD, and SBD), and another considering only non-degraded and severely degraded pastures (NDP, SAD, and SBD). The study assessed individual and combined sets of SuperDove attributes, including band reflectance, vegetation indices, endmember fractions from spectral mixture analysis (SMA), and image texture variables from Gray-level Co-occurrence Matrix (GLCM). The results highlighted the effectiveness of the transition from the rainy to the dry season and the period towards the beginning of a new seasonal rainy cycle in October for discriminating pasture degradation. In comparison to the dry season, more favorable discrimination scenarios were observed during the rainy season. In the dry season, increased amounts of non-photosynthetic vegetation (NPV) complicate the differentiation between NDP and SBD, which is characterized by high soil exposure. Pastures exhibiting severe biological degradation showed greater sensitivity to water stress, manifesting earlier reflectance changes in the visible and near-infrared bands of SuperDove compared to other classes. Reflectance-based classification yielded higher overall accuracy (OA) than the approaches using endmember fractions, vegetation indices, or texture metrics. Classifications using combined attributes achieved an OA of 0.69 and 0.88 for the five-class and three-class scenarios, respectively. In the five-class scenario, the highest F1-scores were observed for NDP (0.61) and classes of agronomic (0.71) and biological (0.88) degradation, indicating the challenges in separating low and moderate stages of pasture degradation. An initial comparison of RF classification results for the five categories of degraded pastures, utilizing reflectance data from MultiSpectral Instrument (MSI)/Sentinel-2 (400–2500 nm) and SuperDove (400–900 nm), demonstrated an enhanced OA (0.79 versus 0.66) with Sentinel-2 data. This enhancement is likely to be attributed to the inclusion of shortwave infrared (SWIR) spectral bands in the data analysis. Our findings highlight the potential of satellite constellation data, acquired at high spatial resolution, for remote identification of pasture degradation. Full article
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)
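
A minimal sketch of the Random Forest classification workflow outlined above, stacking SuperDove band reflectance with a vegetation index and reporting per-class scores, is shown below. The band positions, the use of a single NDVI index, the synthetic samples, and all hyperparameters are placeholder assumptions rather than the authors' configuration.

```python
# Hedged sketch of an RF classification of the five pasture degradation classes from
# eight-band reflectance plus NDVI. Data here are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(1)
X_bands = rng.uniform(0.01, 0.6, (1000, 8))   # (samples, 8 SuperDove bands)
y = rng.integers(0, 5, 1000)                  # synthetic labels 0..4 = NDP, LID, MID, SAD, SBD

red, nir = X_bands[:, 5], X_bands[:, 7]       # assumed positions of red and NIR bands
ndvi = (nir - red) / (nir + red + 1e-6)
X = np.column_stack([X_bands, ndvi])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
rf = RandomForestClassifier(n_estimators=500, random_state=0)
rf.fit(X_tr, y_tr)
print(classification_report(y_te, rf.predict(X_te),
      target_names=["NDP", "LID", "MID", "SAD", "SBD"]))
```
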
Show Figures

Graphical abstract
Figure 1: Summary of the methodology used to discriminate pasture degradation with SuperDove satellite constellation data.
Figure 2: Location of the five sites (15 × 15 km each) in the southeastern region of the Brazilian state of Goiás, within a climatically homogeneous region. The insets show photographs representative of non-degraded pasture (NDP) and of pastures with low-intensity (LID), moderate-intensity (MID), severe agronomic (SAD), and severe biological (SBD) degradation. Sites are numbered by municipality: 1. Bela Vista de Goiás; 2. Caldas Novas; 3. Piracanjuba; 4. Pontalina; 5. Trindade. Long-term monthly precipitation (2001–2021 averages) and the dry season period are indicated for reference.
Figure 3: Frequency of cloud-free SuperDove images in 2022 for each of the five selected sites; the dry season period is indicated for reference.
Figure 4: False color composites from SuperDove imagery illustrating visual distinctions between biologically degraded (SBD) and non-degraded (NDP) pasture plots across seasonal stages: the rainy season (DOY 62 and 324 in (a,e)), the rainy-to-dry transition (DOY 153 in (b)), the middle of the dry season (DOY 227 in (c)), and the dry-to-rainy transition (DOY 273 in (d)). SuperDove bands 8 (NIR), 7 (red-edge) and 6 (red) are shown in red, green and blue.
Figure 5: Seasonal variation in mean reflectance for the (a) red and (b) near-infrared (NIR) bands (SuperDove bands 6 and 8) across the pasture degradation classes; symbols within the profiles denote 2022 acquisitions. Class abbreviations are defined in the text.
Figure 6: Seasonal variation in the Mahalanobis distance for discriminating Severe Agronomic Degradation (SAD) areas from Low- (LID) and Moderate-intensity (MID) degradation areas using the eight-band SuperDove reflectance data.
Figure 7: Endmember reflectance spectra derived from the Sequential Maximum Angle Convex Cone (SMACC) for SuperDove data acquired on 2 June (DOY 153) over areas with varying degrees of pasture degradation across the five studied sites in central Brazil.
Figure 8: Scatterplots of the relationships between (a) NDVI and GRND and (b) EVI and REND for three field-sampled classes of pasture degradation: NDP, SAD and SBD.
Figure 9: False color composites (SuperDove bands 8, 7 and 6 in RGB) illustrating the five degradation classes (NDP, LID, MID, SAD and SBD) on the left, color composites of the green vegetation (GV1), GV2 and soil (S) fraction images in RGB in the middle, and NDVI images on the right.
Figure 10: Variations in GLCM texture metrics, (a) texture mean and (b) texture variance, calculated from the NIR band 8 of SuperDove for the five classes of pasture degradation.
Figure 11: Precision, Recall, F1-score and Overall Accuracy (OA) from the Random Forest classification of the five degradation classes using individual attributes: the eight SuperDove band reflectances, five vegetation indices (EVI, GRND, MPRI, NDVI and REND), and four-endmember SMA fractions (GV1, GV2, soil and shade). GLCM texture results were excluded for clearer graphical representation. Results refer to the validation dataset.
Figure 12: Percentage importance assigned to each variable in the Random Forest classification of the five degradation classes.
Figure 13: (a) Ground truth map and (b) Random Forest classification of degraded and non-degraded pastures; classification uncertainties are shown in (c). Abbreviations are defined in the text.
Figure 14: F1-score and Overall Accuracy (OA) from the Random Forest classification of the five degradation classes using the combined attribute sets, for the rainy-to-dry transition (DOY 153, June; blue) and the middle of the dry season (DOY 227, August; red). Results refer to the validation dataset.
Figure 15: Average OLI/Landsat-8 reflectance spectra over field-surveyed plots of non-degraded pastures and pastures with severe agronomic or biological degradation, for 2021 dates during (a) the rainy-to-dry transition, (b) the middle of the dry season, and (c) after the first rainfall events of the new seasonal cycle in October.
Figure 16: F1-score and Overall Accuracy (OA) from the Random Forest classification of the five degradation classes using reflectance of 10 bands (10 m and 20 m resolution) from MSI/Sentinel-2 (400–2500 nm) and the eight SuperDove bands (400–900 nm), acquired on approximately coincident dates (2 and 4 June 2022). Blue: SuperDove; red: MSI/Sentinel-2. Results refer to the validation dataset.
30 pages, 20731 KiB  
Article
Automatic Classification of Submerged Macrophytes at Lake Constance Using Laser Bathymetry Point Clouds
by Nike Wagner, Gunnar Franke, Klaus Schmieder and Gottfried Mandlburger
Remote Sens. 2024, 16(13), 2257; https://doi.org/10.3390/rs16132257 - 21 Jun 2024
Abstract
Submerged aquatic vegetation, also referred to as submerged macrophytes, provides important habitats and serves as a significant ecological indicator for assessing the condition of water bodies and for gaining insights into the impacts of climate change. In this study, we introduce a novel approach for the classification of submerged vegetation captured with bathymetric LiDAR (Light Detection And Ranging) as a basis for monitoring their state and change, and we validated the results against established monitoring techniques. Employing full-waveform airborne laser scanning, which is routinely used for topographic mapping and forestry applications on dry land, we extended its application to the detection of underwater vegetation in Lake Constance. The primary focus of this research lies in the automatic classification of bathymetric 3D LiDAR point clouds using a decision-based approach, distinguishing the three vegetation classes, (i) Low Vegetation, (ii) High Vegetation, and (iii) Vegetation Canopy, based on their height and other properties like local point density. The results reveal detailed 3D representations of submerged vegetation, enabling the identification of vegetation structures and the inference of vegetation types with reference to pre-existing knowledge. While the results within the training areas demonstrate high precision and alignment with the comparison data, the findings in independent test areas exhibit certain deficiencies that are likely addressable through corrective measures in the future. Full article
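
The decision-based classification described above can be pictured as a set of per-point rules on height above the detected bottom and local point density. The sketch below is a minimal stand-in; the class thresholds and the density criterion are invented for illustration, since the paper derives its rules from training areas.

```python
# Hedged sketch of a rule-based labelling of bathymetric LiDAR points into the three
# vegetation classes named above. Thresholds are placeholders, not the paper's values.
import numpy as np

def classify_points(height_above_bottom, local_density,
                    low_max=0.3, high_max=2.0, canopy_density=50.0):
    """Return one label per point: 0 = Low Vegetation, 1 = High Vegetation, 2 = Vegetation Canopy."""
    labels = np.zeros(height_above_bottom.shape, dtype=int)
    labels[height_above_bottom > low_max] = 1
    # dense returns near the top of tall plants are treated as canopy
    labels[(height_above_bottom > high_max) & (local_density > canopy_density)] = 2
    return labels

heights = np.array([0.1, 0.8, 2.5, 3.0])      # metres above the lake bottom
densities = np.array([20.0, 30.0, 80.0, 40.0])  # points per unit area in a local neighbourhood
print(classify_points(heights, densities))    # e.g. [0 1 2 1]
```
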
24 pages, 9744 KiB  
Article
iblueCulture: Data Streaming and Object Detection in a Real-Time Video Streaming Underwater System
by Apostolos Vlachos, Eleftheria Bargiota, Stelios Krinidis, Kimon Papadimitriou, Angelos Manglis, Anastasia Fourkiotou and Dimitrios Tzovaras
Remote Sens. 2024, 16(13), 2254; https://doi.org/10.3390/rs16132254 - 21 Jun 2024
Abstract
The rich and valuable underwater cultural heritage present in the Mediterranean is often overlooked, if not completely unknown, due to the inherent difficulties in using physical approaches. The iblueCulture project was created to bridge that gap by introducing a real-time texturing and streaming system. The system captures video streams from eight underwater cameras and manipulates it to texture and colorize the underwater cultural heritage site and its immediate surroundings in a virtual reality environment. The system can analyze incoming data and, by detecting newly introduced objects in sight, use them to enhance the user experience (such as displaying a school of fish as they pass by) or for site security. This system has been installed in some modern and ancient shipwrecks in Greece and was used for in situ viewing. It can also be modified to work remotely, for example, in museums or educational institutions, to make the sites more accessible and raise public awareness. It can potentially be used in any underwater site, both for presentation and education, as well as for monitoring and security purposes. Full article
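
For the object-detection component, a typical YOLOv8 inference loop over video frames looks like the sketch below. The weights file, video source, and confidence threshold are placeholder assumptions; the project trains its own segmentation model on a custom underwater dataset.

```python
# Hedged sketch of YOLOv8 detection on frames of an underwater video stream.
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n-seg.pt")                    # placeholder weights, not the project's model
cap = cv2.VideoCapture("peristera_stream.mp4")    # hypothetical video source

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame, conf=0.25, verbose=False)
    for box in results[0].boxes:
        cls_name = model.names[int(box.cls)]
        x1, y1, x2, y2 = map(int, box.xyxy[0])
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.putText(frame, f"{cls_name} {float(box.conf):.2f}", (x1, y1 - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    # the annotated frame could now be passed to the texturing/VR front end or written to disk
cap.release()
```
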
Show Figures

Figure 1: 3D model of one of the Peristera Byzantine shipwrecks.
Figure 2: The Unity application while running on the workstation.
Figure 3: The user character, just before the dive.
Figure 4: The user character reaches the bottom and sees the UCH site in the distance.
Figure 5: The user navigating the UCH site.
Figure 6: Close-up camera views.
Figure 7: Peristera frame/tests in ArtGAN, SAM and Track Anything.
Figure 8: YOLOv8 performance on frames of the Peristera video with v9-Instance-con-sam (bueno IS).
Figure 9: Confusion matrix of fish and background.
Figure 10: Leaky ReLU graph.
Figure 11: Loss curves after we trained our model.
Figure 12: Measurements of the detection and segmentation metrics for the various classes of the dataset.
Figure 13: Confusion matrix of YOLOv8 with our custom dataset.
Figure 14: Confusion matrix between fish and background.
Figure 15: Examples of detection in frames of the video from the wreck of Peristera.
Figure 16: Detections in the video of the Peristera shipwreck; the numbers next to the labels correspond to confidence scores.
Figure 17: Masking in the video from the Peristera shipwreck.
7 pages, 175 KiB  
Editorial
Remote Sensing of the Interaction between Human and Natural Ecosystems in Asia
by Bing Xue, Yaotian Xu and Jun Yang
Remote Sens. 2024, 16(13), 2255; https://doi.org/10.3390/rs16132255 - 21 Jun 2024
Abstract
Human and natural ecosystems refer to human–social–economic subsystems and natural–ecological subsystems and their interactions. Understanding the interactions between human and natural ecosystems is essential for regional sustainability. However, the coupled human–nature ecosystem is usually highly heterogeneous and both spatially and temporally complex, so it is difficult to accurately identify and quantify the interaction between human and natural ecosystems at a large scale. This results in a poor understanding and evaluation of its impact on regional sustainability. Therefore, given the increasing interaction between humans and the natural ecosystem, our Special Issue collated 11 contributions from Asian scholars focusing on the latest research advances in remote sensing technologies and their application to observing, understanding, modeling, and explaining the interaction between human and natural ecosystems. This research involves the development of innovative methods, indicators, and frameworks implementing different perspectives and spatio-temporal scales, covering urban, arid, plateau, watershed, and marine regions in Asia and promoting the sustainable development of regional human and natural ecosystems. Full article
(This article belongs to the Section Ecological Remote Sensing)
21 pages, 52503 KiB  
Article
Study on the Identification, Failure Mode, and Spatial Distribution of Bank Collapses after the Initial Impoundment in the Head Section of Baihetan Reservoir in Jinsha River, China
by Chuangchuang Yao, Lingjing Li, Xin Yao, Renjiang Li, Kaiyu Ren, Shu Jiang, Ximing Chen and Li Ma
Remote Sens. 2024, 16(12), 2253; https://doi.org/10.3390/rs16122253 - 20 Jun 2024
Abstract
After the initial impoundment of the Baihetan Reservoir in April 2021, the water level in front of the dam rose about 200 m. The mechanical properties and effects of the bank slopes in the reservoir area changed significantly, resulting in many bank collapses. This study systematically analyzed the bank slope of the head section of the reservoir, spanning 30 km from the dam to Baihetan Bridge, through a comprehensive investigation conducted after the initial impoundment. The analysis utilized UAV flights and ground surveys to interpret the bank slope’s distribution characteristics and failure patterns. A total of 276 bank collapses were recorded, with a geohazard development density of 4.6/km. The slope gradient of 26% of the collapsed banks experienced an increase ranging from 5 to 20° after impoundment, whereas the remaining sites’ inclines remained unchanged. According to the combination of lithology and movement mode, the bank failure mode is divided into six types, which are the surface erosion type, surface collapse type, surface slide type, bedding slip type of clastic rock, toppling type of clastic rock, and cavity corrosion type of carbonate rock. It was found that the collapsed banks in the reservoir area of 85% developed in the reactivation of old landslide deposits, while 15% in the clastic and carbonate rock. This study offers guidance for the next phase of bank collapse regulations and future geohazards prevention strategies in the Baihetan Reservoir area. Full article
Show Figures

Figure 1: (a,b) Location of the study area. (c) Precipitation and water level fluctuations.
Figure 2: Geological map of the study area.
Figure 3: Structural map of bank slopes in the study area: (a) cataclinal and anaclinal slopes; (b) orthoclinal slope; (c) bank slope structure of the study area; (d) cataclinal slope; (e) orthoclinal slope; (f) anaclinal slope.
Figure 4: Interpretation of bank collapses in the head section: (a) distribution of bank collapses in the study area; (b) surface erosion type; (c) toppling type; (d) surface slide type; (e) surface collapse type; (f) cavity corrosion type; (g) bedding slip type.
Figure 5: Surface erosion type bank collapse failure model: (a) field survey diagram; (b) sectional diagram.
Figure 6: Surface collapse type bank collapse failure model: (a) field survey diagram; (b) sectional diagram.
Figure 7: Surface slide type bank collapse failure model: (a) field survey diagram; (b) sectional diagram.
Figure 8: Bedding slide type bank collapse failure model: (a) field survey diagram; (b) sectional diagram.
Figure 9: Toppling type bank collapse failure model: (a) field survey diagram; (b) sectional diagram.
Figure 10: Toppling type bank collapse failure model: (a) field survey diagram; (b) sectional diagram.
Figure 11: Statistics of bank collapses: (a) location; (b) lithological group; (c) bank collapse type; (d) area; (e) slope gradient; (f) bank slope structure.
Figure 12: Bank collapse geometric parameters: (a) schematic diagram of the parameters; (b,c) statistical diagrams of the parameters.
Figure 13: Photos of five sites with threats to roads, tunnels and settlements along the river: (a–c) bank collapse threatening roads and tunnels; (d) cracks in a highway; (e) bank collapse threatening a storeroom; (f) cracks around the storeroom; (g,h) bank collapse threatening bridges and residential buildings.
23 pages, 76599 KiB  
Article
SRBPSwin: Single-Image Super-Resolution for Remote Sensing Images Using a Global Residual Multi-Attention Hybrid Back-Projection Network Based on the Swin Transformer
by Yi Qin, Jiarong Wang, Shenyi Cao, Ming Zhu, Jiaqi Sun, Zhicheng Hao and Xin Jiang
Remote Sens. 2024, 16(12), 2252; https://doi.org/10.3390/rs16122252 - 20 Jun 2024
Abstract
Remote sensing images usually contain abundant targets and complex information distributions. Consequently, networks are required to model both global and local information in the super-resolution (SR) reconstruction of remote sensing images. The existing SR reconstruction algorithms generally focus on only local or global features, neglecting effective feedback for reconstruction errors. Therefore, a Global Residual Multi-attention Fusion Back-projection Network (SRBPSwin) is introduced by combining the back-projection mechanism with the Swin Transformer. We incorporate a concatenated Channel and Spatial Attention Block (CSAB) into the Swin Transformer Block (STB) to design a Multi-attention Hybrid Swin Transformer Block (MAHSTB). SRBPSwin develops dense back-projection units to provide bidirectional feedback for reconstruction errors, enhancing the network’s feature extraction capabilities and improving reconstruction performance. SRBPSwin consists of the following four main stages: shallow feature extraction, shallow feature refinement, dense back projection, and image reconstruction. Firstly, for the input low-resolution (LR) image, shallow features are extracted and refined through the shallow feature extraction and shallow feature refinement stages. Secondly, multiple up-projection and down-projection units are designed to alternately process features between high-resolution (HR) and LR spaces, obtaining more accurate and detailed feature representations. Finally, global residual connections are utilized to transfer shallow features during the image reconstruction stage. We propose a perceptual loss function based on the Swin Transformer to enhance the detail of the reconstructed image. Extensive experiments demonstrate the significant reconstruction advantages of SRBPSwin in quantitative evaluation and visual quality. Full article
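
The dense back-projection stage described above alternates up-projection and down-projection units that feed reconstruction errors back and forth between LR and HR feature spaces. The sketch below shows a generic back-projection up-projection unit built from plain (de)convolutions; the paper's units are built from its Swin-based MAHSTB blocks, so the kernel sizes, channel counts, and convolutional layers here are simplifying assumptions.

```python
# Hedged PyTorch sketch of an up-projection unit with error feedback, in the spirit of
# the dense back-projection mechanism; not the authors' Swin-based implementation.
import torch
import torch.nn as nn

class UpProjectionUnit(nn.Module):
    def __init__(self, channels=64, scale=2):
        super().__init__()
        k, s, p = 6, scale, 2                      # typical settings for 2x up/down sampling
        self.up1 = nn.ConvTranspose2d(channels, channels, k, s, p)
        self.down = nn.Conv2d(channels, channels, k, s, p)
        self.up2 = nn.ConvTranspose2d(channels, channels, k, s, p)
        self.act = nn.PReLU()

    def forward(self, lr_feat):
        hr0 = self.act(self.up1(lr_feat))          # project LR features to HR space
        lr0 = self.act(self.down(hr0))             # project back down to LR space
        err = lr0 - lr_feat                        # reconstruction error (feedback signal)
        hr1 = self.act(self.up2(err))              # re-project the error to HR space
        return hr0 + hr1                           # corrected HR features

x = torch.randn(1, 64, 48, 48)
print(UpProjectionUnit()(x).shape)                 # torch.Size([1, 64, 96, 96])
```
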
Show Figures

Graphical abstract
Figure 1: The overall architecture of SRBPSwin; ⊕ indicates the element-wise sum.
Figure 2: (a) Multi-attention Hybrid Swin Transformer Block (MAHSTB); (b) Channel- and Spatial-attention Block (CSAB); (c) channel attention (CA) block; (d) spatial attention (SA) block. ⊕ indicates the element-wise sum; ⊗ indicates the element-wise product.
Figure 3: (a) Up-projection Swin Unit (UPSU); (b) Down-projection Swin Unit (DPSU). ⊕ indicates the element-wise sum; ⊖ indicates the element-wise difference.
Figure 4: PSNR curves of our method with and without CSAB. Base refers to the network using only STB, while Base + CSAB denotes MAHSTB. Results are compared on the validation dataset at the 2× scale factor during the overall training phase.
Figure 5: Visual comparison for the ablation study verifying the effectiveness of MAHSTB; Base refers to the network using only STB, while Base + CSAB denotes MAHSTB. A red box marks the area enlarged from the HR image on the left; the corresponding HR image and the reconstructions by the different methods are shown on the right.
Figure 6: PSNR curves of our method with and without L_Swin, compared on the validation dataset at the 2× scale factor during the overall training phase.
Figure 7: Visual comparison for the ablation study verifying the effectiveness of L_Swin. A red box marks the area enlarged from the HR image on the left; the corresponding HR image and the reconstructions with the different loss functions are shown on the right.
Figure 8: PSNR comparison of different methods on the validation dataset at the 2× scale factor during training.
Figure 9: PSNR comparison of different methods on the validation dataset at the 4× scale factor during training.
Figure 10: Visual comparison of representative SR methods and our model at the 2× scale factor.
Figure 11: Visual comparison of representative SR methods and our model at the 4× scale factor.
19 pages, 6650 KiB  
Technical Note
Innovative Rotating SAR Mode for 3D Imaging of Buildings
by Yun Lin, Ying Wang, Yanping Wang, Wenjie Shen and Zechao Bai
Remote Sens. 2024, 16(12), 2251; https://doi.org/10.3390/rs16122251 - 20 Jun 2024
Abstract
Three-dimensional SAR imaging of urban buildings is currently a hotspot in the research area of remote sensing. Synthetic Aperture Radar (SAR) offers all-time, all-weather, high-resolution capacity, and is an important tool for the monitoring of building health. Buildings have geometric distortion in conventional 2D SAR images, which brings great difficulties to the interpretation of SAR images. This paper proposes a novel Rotating SAR (RSAR) mode, which acquires 3D information of buildings from two different angles in a single rotation. This new RSAR mode takes the center of a straight track as its rotation center, and obtains images of the same facade of a building from two different angles. By utilizing the differences in geometric distortion of buildings in the image pair, the 3D structure of the building is reconstructed. Compared to the existing tomographic SAR or circular SAR, this method does not require multiple flights in different elevations or observations from varying aspect angles, and greatly simplifies data acquisition. Furthermore, both simulation analysis and actual data experiment have verified the effectiveness of the proposed method. Full article
(This article belongs to the Special Issue Advances in Synthetic Aperture Radar Data Processing and Application)
Show Figures

Figure 1: The geometric model of Rotating SAR.
Figure 2: Schematic diagram of BP imaging at different angles in the same coordinate system.
Figure 3: RD projection model of the building.
Figure 4: Schematic of the RD projection model.
Figure 5: Geometric projection model at various rotation angles.
Figure 6: RD geometric projection relationship.
Figure 7: Diagram of distance offset among projection points.
Figure 8: Flowchart of the main algorithm for 3D imaging.
Figure 9: Schematics of different hypothetical elevations.
Figure 10: Flowchart of image neighborhood matching with n hypothetical heights.
Figure 11: Simulation results for point targets: (a) SAR image of the point targets (A1–G1) at angle θ1; (b) SAR image of the point targets (A2–G2) at angle θ2; (c) curve of the relationship between height and the offset of the image pair; (d) maximum correlation coefficient curve; (e) side view of the 3D point clouds; (f) top view of the 3D point clouds.
Figure 12: The experimental equipment and scene.
Figure 13: Observation area.
Figure 14: Practical scene experiment results: (a) 2D SAR image at angle θ1; (b) 2D SAR image at angle θ2; (c) mask image of the building; (d) maximum correlation coefficient curve.
Figure 15: Three-dimensional SAR images from the proposed algorithm: (a) side view; (b) top view; (c) different target heights in the 2D image; (d) color mapping of point cloud height.
27 pages, 6641 KiB  
Article
Biomass Estimation and Saturation Value Determination Based on Multi-Source Remote Sensing Data
by Rula Sa, Yonghui Nie, Sergey Chumachenko and Wenyi Fan
Remote Sens. 2024, 16(12), 2250; https://doi.org/10.3390/rs16122250 - 20 Jun 2024
Abstract
Forest biomass estimation is undoubtedly one of the most pressing research subjects at present. Combining multi-source remote sensing information can give full play to the advantages of different remote sensing technologies, providing more comprehensive and rich information for aboveground biomass (AGB) estimation research. Based on Landsat 8, Sentinel-2A, and ALOS2 PALSAR data, this paper takes the artificial coniferous forests in the Saihanba Forest of Hebei Province as the object of study, fully explores and establishes remote sensing factors and information related to forest structure, gives full play to the advantages of spectral signals in detecting the horizontal structure and multi-dimensional synthetic aperture radar (SAR) data in detecting the vertical structure, and combines environmental factors to carry out multivariate synergistic methods of estimating the AGB. This paper uses three variable selection methods (Pearson correlation coefficient, random forest significance, and the least absolute shrinkage and selection operator (LASSO)) to establish the variable sets, combining them with three typical non-parametric models to estimate AGB, namely, random forest (RF), support vector regression (SVR), and artificial neural network (ANN), to analyze the effect of forest structure on biomass estimation, explore the suitable AGB of artificial coniferous forests estimation of machine learning models, and develop the method of quantifying saturation value of the combined variables. The results show that the horizontal structure is more capable of explaining the AGB compared to the vertical structure information, and that combining the multi-structure information can improve the model results and the saturation value to a great extent. In this study, different sets of variables can produce relatively superior results in different models. The variable set selected using LASSO gives the best results in the SVR model, with an R2 values of 0.9998 and 0.8792 for the training and the test set, respectively, and the highest saturation value obtained is 185.73 t/ha, which is beyond the range of the measured data. The problem of saturation in biomass estimation in boreal medium- and high-density forests was overcome to a certain extent, and the AGB of the Saihanba area was better estimated. Full article
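
The best-performing combination reported above, LASSO variable selection followed by support vector regression, can be sketched as a two-step scikit-learn pipeline. The synthetic predictors, scaling choices, and hyperparameters below are placeholders, not the study's calibrated setup.

```python
# Hedged sketch of LASSO-based variable selection followed by SVR for AGB (t/ha).
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 30))                    # candidate spectral/SAR/terrain variables
agb = 80 + 20 * X[:, 0] - 15 * X[:, 3] + rng.normal(0, 10, 300)   # synthetic stand-in AGB

# 1) LASSO-based variable selection
lasso = make_pipeline(StandardScaler(), LassoCV(cv=5, random_state=0)).fit(X, agb)
selected = np.flatnonzero(lasso[-1].coef_ != 0)
print(f"LASSO kept {selected.size} of {X.shape[1]} candidate variables")

# 2) SVR fitted on the selected variables only
X_tr, X_te, y_tr, y_te = train_test_split(X[:, selected], agb, test_size=0.3, random_state=0)
svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100, epsilon=1.0)).fit(X_tr, y_tr)
print(f"test R2 = {r2_score(y_te, svr.predict(X_te)):.3f}")
```
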
Show Figures

Figure 1: Location map of the study area: (a) location of the study area; (b) HV polarization data of the study area; (c) true color Sentinel-2 image, with the actual sample locations indicated by green dots.
Figure 2: Relationships between forest structural parameters at the sample site level: (a) mean DBH vs. S; (b) CC vs. BA; (c) mean DBH vs. mean forest height, where the size and color shade of the dots vary with biomass.
Figure 3: Flowchart of the methodology.
Figure 4: Determination of the number of model leaves and decision trees.
Figure 5: Parameter optimization for the three models (RF, SVR and ANN, from left to right). From top to bottom: results for the horizontal structure indices (V1), vertical structure indices (V2), horizontal + vertical structure indices (V3), horizontal + vertical structure indices + topographical variables (V4), Pearson-selected variables (V5), RF importance-selected variables (V6), and LASSO-selected variables (V7).
Figure 6: Summary of the training and test set results of the three models for estimating AGB. For each model, the first and second rows are the training and test set results; R² is shown on the left and RMSE on the right.
Figure 7: Summary of model results. From left to right, the variable sets V1–V7 (as defined in Figure 5) are introduced into the three models; from top to bottom are the RF, SVR and ANN results, with the first and second rows of each model showing the training and test sets. The horizontal axis is the measured data, the vertical axis the predictions, the blue line is the 1:1 line, and the green line is the fitted line.
Figure 8: AGB map of the study area estimated by the LASSO-based SVR model: (a) AGB map of the study area; (b) histogram of the AGB distribution.
Figure 9: Spherical model curves for each structure index and for the different variable sets under the different ML models. The left side shows the horizontal structure index figures (RVI, CTI, MTI, PTI) in order, with the individual index figures for CC, S and BA on the right. The right side shows the vertical structure index figures and a summary plot of the spherical model curves for the different variable sets under the RF, SVR and ANN models.
23 pages, 2885 KiB  
Article
Exploring Spatial Patterns of Tropical Peatland Subsidence in Selangor, Malaysia Using the APSIS-DInSAR Technique
by Betsabé de la Barreda-Bautista, Martha J. Ledger, Sofie Sjögersten, David Gee, Andrew Sowter, Beth Cole, Susan E. Page, David J. Large, Chris D. Evans, Kevin J. Tansey, Stephanie Evers and Doreen S. Boyd
Remote Sens. 2024, 16(12), 2249; https://doi.org/10.3390/rs16122249 - 20 Jun 2024
Abstract
Tropical peatlands in Southeast Asia have experienced widespread subsidence due to forest clearance and drainage for agriculture, oil palm and pulp wood production, causing concerns about their function as a long-term carbon store. Peatland drainage leads to subsidence (lowering of peatland surface), an indicator of degraded peatlands, while stability/uplift indicates peatland accumulation and ecosystem health. We used the Advanced Pixel System using the Intermittent SBAS (ASPIS-DInSAR) technique with biophysical and geographical data to investigate the impact of peatland drainage and agriculture on spatial patterns of subsidence in Selangor, Malaysia. Results showed pronounced subsidence in areas subjected to drainage for agricultural and oil palm plantations, while stable areas were associated with intact forests. The most powerful predictors of subsidence rates were the distance from the drainage canal or peat boundary; however, other drivers such as soil properties and water table levels were also important. The maximum subsidence rate detected was lower than that documented by ground-based methods. Therefore, whilst the APSIS-DInSAR technique may underestimate absolute subsidence rates, it gives valuable information on the direction of motion and spatial variability of subsidence. The study confirms widespread and severe peatland degradation in Selangor, highlighting the value of DInSAR for identifying priority zones for restoration and emphasising the need for conservation and restoration efforts to preserve Selangor peatlands and prevent further environmental impacts. Full article
(This article belongs to the Section Biogeosciences Remote Sensing)
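
The driver analysis above ranks predictors of subsidence (distance to drainage canals, distance to the peat boundary, water table, soil properties) by random forest importance. The sketch below illustrates that step only; the driver list, synthetic values, and importance measure are assumptions for illustration, not the study's data or model.

```python
# Hedged sketch: rank candidate drivers of APSIS-DInSAR subsidence rates by RF importance.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
drivers = pd.DataFrame({
    "dist_to_canal_m": rng.uniform(0, 3000, 500),
    "dist_to_peat_boundary_m": rng.uniform(0, 5000, 500),
    "water_table_depth_m": rng.uniform(0.1, 1.2, 500),
    "peat_thickness_m": rng.uniform(0.5, 6.0, 500),
})
# synthetic subsidence (mm/yr): more negative near canals and with deeper water tables
subsidence = -(20 - 0.004 * drivers["dist_to_canal_m"]
               + 10 * drivers["water_table_depth_m"]) + rng.normal(0, 2, 500)

rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(drivers, subsidence)
for name, imp in sorted(zip(drivers.columns, rf.feature_importances_), key=lambda t: -t[1]):
    print(f"{name:28s} {imp:.3f}")
```
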
Show Figures

Graphical abstract
Figure 1: Study area, North and South Selangor. Red points represent the points where pixels were extracted for the analysis; peatlands are enclosed in the black polygon, and rivers and canals are represented by blue lines.
Figure 2: Methodology framework.
Figure 3: (a) Land cover map of North Selangor; (b) subsidence over North Selangor; (c) number of coherent pairs per pixel over North Selangor (coherence count); (d) land cover map of South Selangor; (e) subsidence over South Selangor; (f) number of coherent pairs per pixel over South Selangor. The subsidence data are in mm yr−1 between 2017 and 2019; a greater negative value (red) indicates a greater subsidence rate. Coherence count ranges from 71 to 1335, whereby the higher the value, the greater the number of consistently coherent pairs for that pixel. Black and blue lines represent the peatland extent; areas of notable interest are marked with a red square.
Figure 4: Rates of subsidence in mm yr−1 computed from the surface motion velocity (2017–2019) among different land cover classes; mean and SD are shown, and a greater negative value indicates a greater subsidence rate. (a) North Selangor; (b) South Selangor.
Figure 5: (a) Variable importance based on MSE; (b) variable importance based on node purity; (c) variable importance of the optimum selected variables, based on MSE; (d) variable importance of the optimum selected variables, based on node purity.
Figure A1: Residual plots for the multiple regression.
18 pages, 5061 KiB  
Article
Generating 10-Meter Resolution Land Use and Land Cover Products Using Historical Landsat Archive Based on Super Resolution Guided Semantic Segmentation Network
by Dawei Wen, Shihao Zhu, Yuan Tian, Xuehua Guan and Yang Lu
Remote Sens. 2024, 16(12), 2248; https://doi.org/10.3390/rs16122248 - 20 Jun 2024
Abstract
Generating high-resolution land cover maps using relatively lower-resolution remote sensing images is of great importance for subtle analysis. However, the domain gap between real lower-resolution and synthetic images has not been permanently resolved. Furthermore, super-resolution information is not fully exploited in semantic segmentation models. By solving the aforementioned issues, a deeply fused super resolution guided semantic segmentation network using 30 m Landsat images is proposed. A large-scale dataset comprising 10 m Sentinel-2, 30 m Landsat-8 images, and 10 m European Space Agency (ESA) Land Cover Product is introduced, facilitating model training and evaluation across diverse real-world scenarios. The proposed Deeply Fused Super Resolution Guided Semantic Segmentation Network (DFSRSSN) combines a Super Resolution Module (SRResNet) and a Semantic Segmentation Module (CRFFNet). SRResNet enhances spatial resolution, while CRFFNet leverages super-resolution information for finer-grained land cover classification. Experimental results demonstrate the superior performance of the proposed method in five different testing datasets, achieving 68.17–83.29% and 39.55–75.92% for overall accuracy and kappa, respectively. When compared to ResUnet with up-sampling block, increases of 2.16–34.27% and 8.32–43.97% were observed for overall accuracy and kappa, respectively. Moreover, we proposed a relative drop rate of accuracy metrics to evaluate the transferability. The model exhibits improved spatial transferability, demonstrating its effectiveness in generating accurate land cover maps for different cities. Multi-temporal analysis reveals the potential of the proposed method for studying land cover and land use changes over time. In addition, a comparison of the state-of-the-art full semantic segmentation models indicates that spatial details are fully exploited and presented in semantic segmentation results by the proposed method. Full article
(This article belongs to the Special Issue AI-Driven Mapping Using Remote Sensing Data)
Figure 1. The network structure of the deeply fused super resolution guided semantic segmentation network (DFSRSSN): Super Resolution Residual Network (SRResNet) and Cross-Resolution Feature Fusion Network (CRFFNet).
Figure 2. The 10 m land cover results using Landsat-8 images: (a) Landsat-8 RGB images; (b) super-resolution image; (c) ResUnet with up-sampling block (ResUnet_UP); (d) Deeply Fused Super Resolution Guided Semantic Segmentation Network (DFSRSSN); and (e) ESA 2020.
Figure 3. Comparison of user’s accuracy (UA), producer’s accuracy (PA), and F1 score for the different testing datasets: (a) Dataset I, (b) Dataset II, (c) Dataset III, (d) Dataset IV, and (e) Dataset V. Note: 1 tree cover, 2 shrubland, 3 grassland, 4 cropland, 5 built-up, 6 bare/sparse vegetation, 7 permanent water bodies, 8 herbaceous wetland.
Figure 4. The land use and land cover map of Wuhan in (a) 2013 and (b) 2020.
Figure 5. Representative scenes of 10 m multi-temporal results for Wuhan: (a) Landsat image in 2013; (b) land cover map in 2013; (c) Landsat image in 2020; (d) land cover map in 2020; and (e) ESA 2020.
21 pages, 11309 KiB  
Article
LiDAR Point Cloud Augmentation for Adverse Conditions Using Conditional Generative Model
by Yuxiao Zhang, Ming Ding, Hanting Yang, Yingjie Niu, Maoning Ge, Kento Ohtani, Chi Zhang and Kazuya Takeda
Remote Sens. 2024, 16(12), 2247; https://doi.org/10.3390/rs16122247 - 20 Jun 2024
Viewed by 235
Abstract
The perception systems of autonomous vehicles face significant challenges under adverse conditions, with issues such as obscured objects and false detections due to environmental noise. Traditional approaches, which typically focus on noise removal, often fall short in such scenarios. Addressing the lack of [...] Read more.
The perception systems of autonomous vehicles face significant challenges under adverse conditions, with issues such as obscured objects and false detections due to environmental noise. Traditional approaches, which typically focus on noise removal, often fall short in such scenarios. Addressing the lack of diverse adverse weather data in existing automotive datasets, we propose a novel data augmentation method that integrates realistically simulated adverse weather effects into clear-condition datasets. This method not only addresses the scarcity of data but also effectively bridges domain gaps between different driving environments. Our approach centers on a conditional generative model that uses segmentation maps as a guiding mechanism to ensure the authentic generation of adverse effects, which greatly enhances the robustness of perception and object detection systems in autonomous vehicles operating under varied and challenging conditions. In addition to accurately and naturally recreating over 90% of the adverse effects, we demonstrate that this model significantly improves the performance and accuracy of deep learning algorithms for autonomous driving, particularly in adverse weather scenarios. In experiments employing our augmentation approach, we achieved a 2.46% increase in 3D average precision, a marked enhancement in detection accuracy and system reliability, substantiating the model's efficacy with quantifiable improvements in 3D object detection compared to models without augmentation. This work not only enhances autonomous vehicle perception systems under adverse conditions but also marks an advance in deep learning research on adverse conditions. Full article
(This article belongs to the Special Issue Remote Sensing Advances in Urban Traffic Monitoring)
Figure 1. Point cloud in a clear driving scenario (a) and corresponding snow-augmented results (b) expected from augmentation models. Red boxes denote locations where snow effects were generated. Color encoded by height.
Figure 2. Workflow of the condition-guided adverse effects augmentation model based on novel segmentation map production and early data fusion techniques. The clear data input is obtained from filtered raw adverse data to establish an intrinsic correlation for optimal training. The cluster segmentation map serves as a conditional guide, which is input into the generative model through early data fusion. Data with adverse conditions are generated under the guidance of the segmentation map.
Figure 3. Examples of segmentation maps of the CADC dataset in a depth image format for visualization. Images are rendered with pixel values multiplied by 64 under the OpenCV BGR environment for better illustration. Red points denote snow clusters, blue denotes scattered snow points, green denotes all objects, and black means void (no signal).
Figure 4. Diagram of the early fusion process for conditional augmentation in point clouds.
Figure 5. Architecture of the condition-guided adverse effects augmentation model based on CycleGAN [13]. Clear A and Snow B, along with their segmentation maps, are the input data. The condition-guided conversions are conducted by 6-channel generators, while the reconstructions are completed by 3-channel generators. D_A and D_B are discriminators.
Figure 6. Set (a) augmentation results in the Canadian driving scenario. First row: BEV scenes, colored by height; middle row: clustered results, colored by cluster group; bottom row: enlarged third-person view of the center part around the ego vehicle, colored by height. Red boxes and arrows: locations where snow effects are reproduced.
Figure 7. Set (b) augmentation results in the Canadian driving scenario. First row: BEV scenes, colored by height; middle row: clustered results, colored by cluster group; bottom row: enlarged third-person view of the center part around the ego vehicle, colored by height. Red boxes and arrows: locations where snow effects are reproduced.
Figure 8. Precision and recall rate comparisons of adverse effects generation based on snow clusters.
Figure 9. Qualitative comparison of detection results on samples from CADC containing fierce adverse conditions. The top row shows the corresponding forward 180° RGB images. The rest show the LiDAR point clouds with ground-truth boxes and predictions using the baseline ("no augmentation"), our augmentation, and DROR. Red dots denote pedestrians, and black boxes with red dots in the center denote cars (or trucks). Point cloud colors are encoded by height.
Figure 10. Set (a) augmentation results in the Nagoya driving scenario. Red boxes and arrows: locations where adverse effects are synthesized.
Figure 11. Set (b) augmentation results in the Nagoya driving scenario. Red boxes and arrows: locations where adverse effects are synthesized.
25 pages, 11675 KiB  
Article
An Ensemble Machine Learning Model to Estimate Urban Water Quality Parameters Using Unmanned Aerial Vehicle Multispectral Imagery
by Xiangdong Lei, Jie Jiang, Zifeng Deng, Di Wu, Fangyi Wang, Chengguang Lai, Zhaoli Wang and Xiaohong Chen
Remote Sens. 2024, 16(12), 2246; https://doi.org/10.3390/rs16122246 - 20 Jun 2024
Viewed by 443
Abstract
Urban reservoirs contribute significantly to human survival and ecological balance. Machine learning-based remote sensing techniques for monitoring water quality parameters (WQPs) have gained increasing prominence in recent years. However, these techniques still face challenges such as inadequate band selection, weak machine learning model [...] Read more.
Urban reservoirs contribute significantly to human survival and ecological balance. Machine learning-based remote sensing techniques for monitoring water quality parameters (WQPs) have gained increasing prominence in recent years. However, these techniques still face challenges such as inadequate band selection, weak machine learning model performance, and the limited retrieval of non-optical active parameters (NOAPs). This study focuses on an urban reservoir, utilizing unmanned aerial vehicle (UAV) multispectral remote sensing and ensemble machine learning (EML) methods to monitor optically active parameters (OAPs, including Chla and SD) and non-optically active parameters (including CODMn, TN, and TP), exploring spatial and temporal variations of WQPs. A framework of Feature Combination and Genetic Algorithm (FC-GA) is developed for feature band selection, along with two frameworks of EML models for WQP estimation. Results indicate FC-GA’s superiority over popular methods such as the Pearson correlation coefficient and recursive feature elimination, achieving higher performance with no multicollinearity between bands. The EML model demonstrates superior estimation capabilities for WQPs like Chla, SD, CODMn, and TP, with an R2 of 0.72–0.86 and an MRE of 7.57–42.06%. Notably, the EML model exhibits greater accuracy in estimating OAPs (MRE ≤ 19.35%) compared to NOAPs (MRE ≤ 42.06%). Furthermore, spatial and temporal distributions of WQPs reveal nitrogen and phosphorus nutrient pollution in the upstream head and downstream tail of the reservoir due to human activities. TP, TN, and Chla are lower in the dry season than in the rainy season, while clarity and CODMn are higher in the dry season than in the rainy season. This study proposes a novel approach to water quality monitoring, aiding in the identification of potential pollution sources and ecological management. Full article
(This article belongs to the Special Issue Remote Sensing in Natural Resource and Water Environment II)
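The abstract notes that FC-GA selects feature bands "with no multicollinearity between bands", and Figure 3 mentions a VIF check. A minimal sketch of such a screen follows, assuming a variance inflation factor (VIF) test as the collinearity criterion; the helper name and the threshold of 10 are illustrative choices, not the paper's implementation.

```python
# A minimal sketch (assumed helper, not the FC-GA implementation) of a multicollinearity
# screen: a candidate set of feature bands is accepted only if every variance inflation
# factor (VIF) stays below a threshold (10 here, an illustrative choice).
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor

def passes_vif(features: pd.DataFrame, threshold: float = 10.0) -> bool:
    """Return True when no candidate feature band exceeds the VIF threshold."""
    X = features.to_numpy()
    vifs = [variance_inflation_factor(X, i) for i in range(X.shape[1])]
    return max(vifs) < threshold

# Toy reflectance features standing in for FC-GA band combinations.
rng = np.random.default_rng(3)
bands = pd.DataFrame({"b1": rng.random(100), "b2": rng.random(100)})
bands["b1_plus_b2"] = bands["b1"] + bands["b2"] + 0.01 * rng.random(100)  # near-collinear

print(passes_vif(bands[["b1", "b2"]]))  # True: the two raw bands pass the screen
print(passes_vif(bands))                # False: the collinear combination is rejected
```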
Figure 1. Flowchart of the research framework. The study consists of four main parts: data collection and preprocessing, feature band selection, ensemble machine learning model development, and retrieval of the spatial and temporal distribution of WQPs.
Figure 2. The study area. (a,b) Locations of the Longdong Reservoir. (c) Sampling points of water quality in the Longdong Reservoir. In panel (c), the sampling points are marked in green, and the RGB image is composited from the three UAV remote sensing bands of Red, Blue, and Green.
Figure 3. Flowchart of feature band selection based on FC-GA. FC-GA consists of two parts: feature combination (arithmetic operations and random combination) and band selection based on the genetic algorithm and VIF.
Figure 4. The modeling framework of EML-1 and EML-2. (a,c) The training and prediction of EML-1, respectively; (b,d) the training and prediction of EML-2, respectively. Panel (b) differs from panel (a) in that the level-0 training and testing datasets are re-entered into the level-1 meta-model, and panel (d) differs from panel (c) in the input of new data into the level-1 meta-model.
Figure 5. Performance evaluation of five water quality parameters using EML-1, EML-2, BRR, SVR, NNR, CART, RF, LightGBM, and MLP. Panels (a–e) correspond to Chla, SD, CODMn, TN, and TP, respectively. The black dotted line indicates the 45° (1:1) line.
Figure 6. Maps of WQP concentration distribution. (A–E) Chla, SD, CODMn, TN, and TP, respectively; (a–f) the retrieval results of the six periods, respectively. CODMn, TN, and TP are classified according to the grade ranges of China's Environmental Quality Standards for Surface Water (GB3838-2002 [8]), while Chla and SD are divided equally across their total ranges. A small number of holes in the maps are caused by image mosaicking deviation.
Figure 7. Trends of water quality changes in the six periods from the best model retrievals. (a–e) The five WQPs of Chla, SD, CODMn, TN, and TP, respectively; A–F on the horizontal axis represent the six periods of 4 January 2022, 7 April 2022, 31 July 2022, 26 April 2023, 27 May 2023, and 11 June 2023, respectively. The error bars represent the degree of dispersion of each water quality parameter.
26 pages, 8482 KiB  
Article
Adaptive Background Endmember Extraction for Hyperspectral Subpixel Object Detection
by Lifeng Yang, Xiaorui Song, Bin Bai and Zhuo Chen
Remote Sens. 2024, 16(12), 2245; https://doi.org/10.3390/rs16122245 - 20 Jun 2024
Viewed by 430
Abstract
Subpixel object detection presents a significant challenge within the domain of hyperspectral image (HSI) processing, primarily due to the inherently limited spatial resolution of imaging spectrometers. For subpixel object detection, the dimensional extent of the object of interest is smaller than an individual [...] Read more.
Subpixel object detection presents a significant challenge within the domain of hyperspectral image (HSI) processing, primarily due to the inherently limited spatial resolution of imaging spectrometers. In subpixel object detection, the dimensional extent of the object of interest is smaller than an individual pixel, which significantly diminishes the utility of spatial information pertaining to the object. Therefore, the efficacy of detection algorithms depends heavily on the spectral data inherent in the image. The detection of subpixel objects in hyperspectral imagery primarily relies on the suppression of the background and the enhancement of the object of interest. Hence, acquiring accurate background information from HSIs is a crucial step. In this study, an adaptive background endmember extraction method for hyperspectral subpixel object detection is proposed. An adaptive scale constraint is incorporated into the background spectral endmember learning process to improve the adaptability of background endmember extraction, further enhancing the algorithm's generalizability and applicability in diverse analytical scenarios. Experimental results demonstrate that the adaptive endmember extraction-based subpixel object detection algorithm consistently outperforms existing state-of-the-art algorithms in terms of detection efficacy on both simulated and real-world datasets. Full article
(This article belongs to the Special Issue Advances in Hyperspectral Remote Sensing Image Processing)
Graphical abstract
Figure 1. Methodological framework adopted for the proposed hyperspectral subpixel object detection.
Figure 2. Reference spectrum of the object to be detected in the simulated dataset.
Figure 3. The simulated hyperspectral dataset: (a) with SNR = 30 dB; (b) ground truth.
Figure 4. ROC curves of different hyperspectral subpixel detection approaches on the simulated dataset: (a) SNR = 25 dB; (b) SNR = 30 dB; and (c) SNR = 35 dB.
Figure 5. The experimental Urban dataset: (a) true color composite of the hyperspectral imagery; (b) ground truth.
Figure 6. The reference background endmembers of the experimental Urban dataset.
Figure 7. Object detection score images of various subpixel detection methodologies applied to the Urban dataset: (a) 2D plot of SACE; (b) 3D plot of SACE; (c) 2D plot of CSCR; (d) 3D plot of CSCR; (e) 2D plot of hCEM; (f) 3D plot of hCEM; (g) 2D plot of SPSMF; (h) 3D plot of SPSMF; (i) 2D plot of PALM; (j) 3D plot of PALM; (k) 2D plot of HSPRD; (l) 3D plot of HSPRD; (m) 2D plot of the proposed method; and (n) 3D plot of the proposed method. (The X-axis and Y-axis represent the spatial coordinates of the column pixel index and the row pixel index, respectively.)
Figure 8. ROC curves of different hyperspectral subpixel detection approaches on the experimental Urban dataset.
Figure 9. The endmember estimates obtained via HSPRD and the proposed method: (a) Asphalt; (b) Grass; (c) Trees; and (d) Roofs.
Figure 10. The MUUFL Gulfport dataset: (a) true color composite of the hyperspectral imagery; (b) ground truth of the Solid Brown panels.
Figure 11. Reference spectrum of the Solid Brown panel in the MUUFL Gulfport dataset.
Figure 12. Results of various subpixel detection methods applied to the MUUFL Gulfport dataset: (a) SACE; (b) CSCR; (c) hCEM; (d) SPSMF; (e) PALM; (f) HSPRD; and (g) the proposed method.
Figure 13. ROC curves of different hyperspectral subpixel detection approaches on the MUUFL Gulfport dataset.
Figure 14. The HOSD hyperspectral data: (a) false-color composite of the hyperspectral imagery; (b) ground truth.
Figure 15. Results of various subpixel detection methodologies applied to the HOSD dataset: (a) SACE; (b) CSCR; (c) hCEM; (d) SPSMF; (e) PALM; (f) HSPRD; and (g) the proposed method.
Figure 16. ROC curves of different subpixel detection approaches on the HOSD dataset.