Search Results (33,578)

Search Parameters:
Journal = Remote Sensing

15 pages, 5275 KiB  
Article
Spatially Explicit Individual Tree Height Growth Models from Bi-Temporal Aerial Laser Scanning
by Serajis Salekin, David Pont, Yvette Dickinson and Sumedha Amarasena
Remote Sens. 2024, 16(13), 2270; https://doi.org/10.3390/rs16132270 - 21 Jun 2024
Abstract
Individual-tree-based models (IBMs) have emerged to provide finer-scale operational simulations of stand dynamics by accommodating and/or representing tree-to-tree interactions and competition. Like stand-level growth model development, IBMs need an array of detailed data from individual trees in a stand, collected through repeated measurement. Conventionally, these data have been collected through forest mensuration by establishing permanent sample plots or temporary measurement plots. With the evolution of remote sensing technology, it is now possible to collect detailed information reflecting the heterogeneity of the whole forest stand more efficiently than before. Among many techniques, airborne laser scanning (ALS) has proved reliable and has been reported to have the potential to provide unparalleled input data for growth models. This study utilized repeated ALS data to develop a model projecting the annualized individual tree height increment (ΔHT) in a conifer plantation, accounting for spatially explicit competition through a mixed-effects modelling approach. The ALS data acquisitions showed statistical and biological consistency over time in terms of both the response and important explanatory variables, with correlation coefficients ranging from 0.65 to 0.80. The height increment model showed high precision (RMSE = 0.92) and minimal bias (0.03) in model fitting. Overall, the model was highly consistent with the current biological understanding of individual tree growth in a monospecific Pinus radiata plantation. The approach used in this study provided a robust model of annualized individual tree height growth, suggesting that such an approach to modelling will be useful for future forest management.
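The two quantities at the core of the abstract — an annualized height increment from two ALS acquisitions and a spatially explicit competition index — can be sketched in a few lines. This is an illustrative reconstruction, not the paper's actual model: the Hegyi-style distance-weighted index, the 10 m search radius, and the tree-record schema are all assumptions.

```python
import math

def annualized_height_increment(ht1, ht2, years_between):
    """Annualized height growth (m/yr) between two ALS acquisitions."""
    return (ht2 - ht1) / years_between

def hegyi_competition(subject, neighbours, radius=10.0):
    """Distance-weighted size-ratio competition index (Hegyi-style; assumed
    here for illustration). Trees are dicts with 'x', 'y', 'ht' keys."""
    ci = 0.0
    for nb in neighbours:
        d = math.hypot(nb["x"] - subject["x"], nb["y"] - subject["y"])
        if 0.0 < d <= radius:
            ci += (nb["ht"] / subject["ht"]) / d
    return ci

# Hypothetical trees: a 20 m subject with two neighbours
subject = {"x": 0.0, "y": 0.0, "ht": 20.0}
neighbours = [{"x": 3.0, "y": 4.0, "ht": 25.0},   # 5 m away
              {"x": 0.0, "y": 2.0, "ht": 10.0}]   # 2 m away
dht = annualized_height_increment(20.0, 22.4, 4.0)  # growth over 4 years
ci = hegyi_competition(subject, neighbours)
```

In a mixed-effects setting, ΔHT would then be modelled as a function of tree size and indices like `ci`, with random effects per plot.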
Figures:
Figure 1: Study site and experimental plots.
Figure 2: Matching treetops detected in lidar with ground tree numbers.
Figure 3: Distributions of tree height (HT), crown height (CH), crown volume (CV_F) and surface area (CS) from ALS1 and ALS2. The mean values of changes (Δ) and bi-temporal correlations (r) are also labelled.
Figure 4: Residual distribution and residuals against predicted plots for the generalized mixed-effects model (GLMM) (A,B) and the augmented empirical model (C,D). Red lines show the models' fitting trends.
Figure 5: Effect of spatially explicit competition indices (i.e., crown volume and neighborhood stem density) on annualized height.
19 pages, 4394 KiB  
Article
Ionosphere-Weighted Network Real-Time Kinematic Server-Side Approach Combined with Single-Differenced Observations of GPS, GAL, and BDS
by Yi Ma, Hongjin Xu, Yifan Wang, Yunbin Yuan, Xingyu Chen, Zelin Dai and Qingsong Ai
Remote Sens. 2024, 16(13), 2269; https://doi.org/10.3390/rs16132269 - 21 Jun 2024
Abstract
Currently, network real-time kinematic (NRTK) technology is one of the primary approaches used to achieve real-time dynamic high-precision positioning, and virtual reference station (VRS) technology, with its high accuracy and compatibility, has become the most important type of network RTK solution. The key to its successful implementation lies in correctly fixing integer ambiguities and extracting spatially correlated errors. This paper first introduces the real-time data processing flow on the VRS server side. Subsequently, an improved ionosphere-weighted VRS approach is proposed based on single-differenced observations of GPS, GAL, and BDS. While preserving the estimable integer properties of the ambiguities, it directly estimates the single-differenced ionospheric and tropospheric delays between reference stations, reducing the double-differenced (DD) observation noise introduced by conventional models and accelerating system initialization. Building on this, we provide an equation for generating virtual observations directly from single-differenced atmospheric corrections without specifying a pivot satellite, which further simplifies the calculation process and enhances the efficiency of the solution. In tests with Australian CORS data, the average initialization time on the server side was 40 epochs, and the average number of available satellites reached 23 (with an elevation greater than 20°). Two positioning modes, ‘Continuous’ (CONT) and ‘Instantaneous’ (INST), were employed to evaluate VRS user positioning accuracy, with distances between the user and the master station of 20 to 50 km. In CONT mode, the average positioning errors in the E/N/U directions were 0.67/0.82/1.98 cm, with an average success fixed rate of 98.76% (errors in all three directions within 10 cm). In INST mode, the average positioning errors in the E/N/U directions were 1.29/1.29/2.13 cm, with an average success fixed rate of 89.56%. The experiments in this study demonstrate that the proposed approach facilitates efficient ambiguity resolution (AR) and atmospheric parameter extraction on the server side, enabling users to achieve centimeter-level positioning accuracy instantly.
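The between-receiver single difference that the approach is built on cancels satellite clock error for each satellite common to two reference stations, and — as the abstract notes — requires no pivot satellite. A minimal sketch (simplified to one frequency, meters; all observation values are hypothetical):

```python
def single_difference(obs_a, obs_b):
    """Between-receiver single differences per satellite (meters).

    obs_a, obs_b: dicts mapping satellite IDs (e.g. 'G01' for GPS,
    'E02' for Galileo, 'C06' for BeiDou) to phase observations in meters.
    Only satellites common to both stations are kept, so no pivot
    satellite needs to be selected.
    """
    common = obs_a.keys() & obs_b.keys()
    return {sat: obs_a[sat] - obs_b[sat] for sat in sorted(common)}

# Hypothetical observations at two reference stations of a baseline
obs_a = {"G01": 20123456.78, "E02": 21000000.10, "C06": 22500000.00}
obs_b = {"G01": 20123450.78, "E02": 21000001.10}
sd = single_difference(obs_a, obs_b)   # C06 is dropped: not common
```

The server-side model in the paper then estimates single-differenced ionospheric and tropospheric delays directly from such differences, rather than forming double differences first.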
Figures:
Figure 1: Principle of the network RTK algorithm based on a real-time data stream.
Figure 2: Distribution map of stations. Red triangles indicate reference stations; blue dots represent user stations.
Figure 3: DOP and number of satellites for BL06 on DOY 339, 2023. (a) PDOP, (b) VDOP, (c) ADOP; the threshold of 0.12 cycles is delineated by the orange dashed line. (d) The blue line represents the total number of G+E+C satellites, while the red line denotes the number of fixed ambiguities.
Figure 4: Time series of differential clock bias, differential phase bias, and differential code bias between receivers for baseline BL06 on DOY 339, 2023. Columns represent GPS, Galileo, and BeiDou receiver bias terms, respectively, with mean and standard deviation values annotated in nanoseconds (ns).
Figure 5: Availability of satellites and initialization time on DOY 339, 2023. The orange boxplot depicts the number of satellites satisfying initialization conditions with elevation above 20 degrees. The blue line denotes the average initialization time, computed at four-hour intervals. The x-axis is annotated with the user station names representing their respective subnetworks.
Figure 6: Error bars of ionospheric interpolation errors at user stations on DOY 339, 2023: GPS (top row), GAL (middle row), and BDS (bottom row); the x-axis gives the names of the user stations and their distances to the nearest master reference station.
Figure 7: Time series of ADOP at user station WSEA on DOY 339, 2023. The left panel depicts the “Continuous” mode; the right panel, the “Instantaneous” mode.
Figure 8: Statistics of user station positioning results on DOY 339, 2023. Red bars show positioning accuracy in “CONT” mode; blue bars, in “INST” mode. The red line depicts the success fixed rate in “CONT” mode; the blue line, in “INST” mode.
Figure 9: Error distribution at user station WSEA (east/north/up directions) on DOY 339, 2023. (a) “CONT” mode; (b) “INST” mode.
14 pages, 2069 KiB  
Article
Massive Sea-Level-Driven Marsh Migration and Implications for Coastal Resilience along the Texas Coast
by Nathalie W. Jung, Thomas A. Doe, Yoonho Jung and Timothy M. Dellapenna
Remote Sens. 2024, 16(13), 2268; https://doi.org/10.3390/rs16132268 - 21 Jun 2024
Abstract
Tidal salt marshes offer crucial ecosystem services in the form of carbon sequestration, fisheries, property and recreational values, and protection from storm surges, and are therefore considered one of the most valuable and fragile ecosystems worldwide; sea-level rise and direct human modifications have resulted in the loss of vast regions of today’s marshland. The extent of salt marshes therefore relies heavily on the interplay between upland migration and edge erosion. We measured changes in marsh size along the Texas coast, which is home to three of the largest estuaries in North America (Galveston, Corpus Christi, and Matagorda Bays), based on historical topographic sheets from the 1850s and 2019 satellite imagery. We further distinguished between changes in high and low marsh based on local elevation data to estimate changes in local ecosystem services. Our results showed that approximately 410 km2 (58%) of salt marsh was lost to coastal erosion and marsh ponding, while nearly 510 km2 (72%) was created, likely through upland submergence. Statistical analyses showed a significant relationship between marsh migration and upland slope, suggesting that today’s marshland formed through submergence of barren uplands along gently sloping coastal plains. Although the overall areal extent of Texas marshes increased over the last century (~100 km2 or 14%), economic gains through upland migration of high marshes (mostly in the form of property value, USD 0.7–1.0 trillion) were too small to offset sea-level-driven losses of the crucial ecosystem services of Texan low marshes (in the form of storm protection and fisheries, USD 2.1–2.7 trillion). Together, our results suggest that despite significant increases in marsh area, the loss of crucial ecosystem services underscores the importance of considering not only quantity but also quality in marshland conservation efforts.
(This article belongs to the Section Ecological Remote Sensing)
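The area figures quoted in the abstract are mutually consistent, which is worth a quick arithmetic check. Percentages are read relative to the historical (1850s) marsh extent implied by the 58% loss figure; this back-of-the-envelope reconstruction is ours, not the authors':

```python
# Figures from the abstract: ~410 km2 lost (58% of historical extent),
# ~510 km2 gained (72%), net gain ~100 km2 (~14%).
loss_km2, loss_frac = 410.0, 0.58
historical_km2 = loss_km2 / loss_frac        # implied ~707 km2 historical extent
gain_km2 = 510.0
net_km2 = gain_km2 - loss_km2                # net gain in km2
gain_pct = 100 * gain_km2 / historical_km2   # consistent with ~72%
net_pct = 100 * net_km2 / historical_km2     # consistent with ~14%
```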
Figures:
Figure 1: Study area map of the Texas coast, showing the mapping extent, the locations of the major estuaries, and the tide gauge stations displayed in Figure 2.
Figure 2: Relative sea-level trends at Galveston Pleasure Pier and Freeport Harbor, downloaded from NOAA (http://tidesandcurrents.noaa.gov/sltrends/sltrends.html, accessed on 1 November 2023). The sudden shift in the vertical location of the relative sea-level trend at Freeport Harbor results from a datum shift. The difference between the two sites is likely due to enhanced land subsidence from extensive groundwater extraction and the expanding population of the Houston region [4].
Figure 3: Marsh areal change along the Texas coast. (a) Marsh gain (blue), marsh loss (brown), and net change (gray; gain minus loss) for each embayment. (b) Gain and loss of high and low marsh for each embayment; green shows marsh gain, red shows marsh loss.
Figure 4: Comparison of marsh migration rates with the slope of the adjacent upland. Each marker represents one T-sheet. Upland slope is based on high-resolution LiDAR within a 100 m buffer around the present-day marsh. Migration rates are significantly correlated with upland slope (y = 0.002/x; r = −0.74; p << 0.05). Migration rates for Chesapeake Bay are from [8], the Florida Big Bend from [7], the US Mid-Atlantic Coast from [19], and Delaware Bay from [49].
Figure 5: Map of marsh area change along the United States Atlantic and Gulf coasts. Bar heights show the magnitude of salt marsh change (%); colors show the direction of change (i.e., gain or loss). Florida salt marsh change is from [7], North Carolina from [19], Chesapeake Bay (VA + MD) from [8], and Rhode Island, Massachusetts, New Hampshire, and Maine from [69].
18 pages, 9058 KiB  
Article
Semi-Supervised FMCW Radar Hand Gesture Recognition via Pseudo-Label Consistency Learning
by Yuhang Shi, Lihong Qiao, Yucheng Shu, Baobin Li, Bin Xiao, Weisheng Li and Xinbo Gao
Remote Sens. 2024, 16(13), 2267; https://doi.org/10.3390/rs16132267 - 21 Jun 2024
Abstract
Hand gesture recognition is pivotal in facilitating human–machine interaction within the Internet of Things. Nevertheless, it encounters challenges, including labeling expense and robustness. To tackle these issues, we propose a semi-supervised learning framework guided by pseudo-label consistency. The framework uses a dual-branch structure with a mean-teacher network. Within this setup, a globally and locally guided self-supervised learning encoder acts as the feature extractor in a teacher–student network, maximizing data utilization to enhance feature representation. Additionally, we introduce a pseudo-label Consistency-Guided Mean-Teacher model, in which simulated noise is incorporated to generate new unlabeled samples for the teacher model before advancing to the subsequent stage. By enforcing consistency constraints between the outputs of the teacher and student models, we alleviate the accuracy degradation caused by individual differences and interference from other body parts, thereby bolstering the network’s robustness. Finally, the teacher model is refined through exponential moving averages to achieve stable weights. We evaluate our semi-supervised method on two publicly available hand gesture datasets and compare it with several state-of-the-art fully supervised algorithms. The results demonstrate the robustness of our method, which achieves an accuracy exceeding 99% on both datasets.
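The exponential-moving-average teacher update at the heart of any mean-teacher setup can be sketched framework-free in a few lines. The decay value and the flat weight lists are illustrative assumptions, not taken from the paper:

```python
def ema_update(teacher_weights, student_weights, decay=0.99):
    """Update teacher weights as an exponential moving average of the
    student's weights: t <- decay * t + (1 - decay) * s."""
    return [decay * t + (1.0 - decay) * s
            for t, s in zip(teacher_weights, student_weights)]

teacher = [0.0, 1.0]
student = [1.0, 1.0]
for _ in range(3):                  # a few steps with a fixed student
    teacher = ema_update(teacher, student, decay=0.5)
# teacher[0] moves toward the student value: 0.5, 0.75, 0.875
```

With a decay near 1 (e.g. 0.99), the teacher changes slowly, giving the stable weights the abstract describes.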
33 pages, 6438 KiB  
Article
Deep-Learning-Based Automatic Sinkhole Recognition: Application to the Eastern Dead Sea
by Osama Alrabayah, Danu Caus, Robert Alban Watson, Hanna Z. Schulten, Tobias Weigel, Lars Rüpke and Djamil Al-Halbouni
Remote Sens. 2024, 16(13), 2264; https://doi.org/10.3390/rs16132264 - 21 Jun 2024
Abstract
Sinkholes can cause significant damage to infrastructure and agriculture and endanger lives in active karst regions like the Dead Sea’s eastern shore at Ghor Al-Haditha. Common sinkhole mapping methods often require costly high-resolution data and manual, time-consuming expert analysis. This study introduces an efficient deep learning model designed to improve sinkhole mapping using accessible satellite imagery, which could enhance management practices related to sinkholes and other geohazards in evaporite karst regions. The developed AI system is centered around the U-Net architecture. The model was initially trained on a high-resolution drone dataset (0.1 m GSD, phase I) covering 250 sinkhole instances, and subsequently fine-tuned on a larger dataset from a Pleiades Neo satellite image (0.3 m GSD, phase II) with 1038 instances. The training process involved an automated image-processing workflow and strategic layer freezing and unfreezing to adapt the model to different input scales and resolutions. We show the usefulness of initial-layer features learned on drone data for the coarser, more readily available satellite inputs. Validation revealed high detection accuracy for sinkholes, with phase I achieving a recall of 96.79% and an F1 score of 97.08%, and phase II reaching a recall of 92.06% and an F1 score of 91.23%. These results confirm the model’s accuracy and its capability to maintain high performance across varying resolutions. Our findings highlight the potential of using RGB visual bands for sinkhole detection across different karst environments. This approach provides a scalable, cost-effective solution for continuous mapping, monitoring, and risk mitigation related to sinkhole hazards. The developed system is not limited to sinkholes, however, and can naturally be extended to other geohazards. Moreover, since it currently uses U-Net as a backbone, the system can be extended with super-resolution techniques, leveraging U-Net-based latent diffusion models to address the smaller-scale, ambiguous geo-structures often found in geoscientific data.
(This article belongs to the Special Issue Artificial Intelligence for Natural Hazards (AI4NH))
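The two-phase schedule described above (train on drone data, then fine-tune on satellite data with strategic layer freezing) can be sketched as a trainability map over named U-Net stages. The stage names and the choice of which stages to freeze are illustrative assumptions, not the authors' configuration:

```python
# Hypothetical U-Net stage names, encoder (shallow to deep) then decoder.
LAYERS = ["enc1", "enc2", "enc3", "bottleneck", "dec3", "dec2", "dec1"]

def trainable_flags(phase):
    """Phase 1 (drone, 0.1 m GSD): train all stages from scratch.
    Phase 2 (satellite, 0.3 m GSD): freeze the early encoder stages so
    low-level features learned on drone imagery are reused, and fine-tune
    the deeper stages on the coarser input."""
    if phase == 1:
        return {name: True for name in LAYERS}
    frozen = {"enc1", "enc2"}          # assumed frozen stages for phase 2
    return {name: name not in frozen for name in LAYERS}

phase2 = trainable_flags(2)
```

In a framework like PyTorch, each flag would map to setting `requires_grad` on the parameters of that stage.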
20 pages, 8890 KiB  
Article
High-Precision Monitoring Method for Bridge Deformation Measurement and Error Analysis Based on Terrestrial Laser Scanning
by Yin Zhou, Jinyu Zhu, Lidu Zhao, Guotao Hu, Jingzhou Xin, Hong Zhang and Jun Yang
Remote Sens. 2024, 16(13), 2263; https://doi.org/10.3390/rs16132263 - 21 Jun 2024
Abstract
In bridge structure monitoring and evaluation, deformation data serve as a crucial basis for assessing structural conditions. Unlike discrete monitoring points, spatially continuous deformation modes provide a comprehensive picture of deformation and its latent information. Terrestrial laser scanning (TLS) is a three-dimensional deformation monitoring technique that has gained wide attention in recent years, demonstrating its potential for capturing structural deformation modes. In this study, a TLS-based bridge deformation mode monitoring method is proposed, and a deformation mode calculation method combining sliding windows and surface fitting, called the SWSF method for short, is developed. On the basis of the general characteristics of bridge structures, a deformation error model is established for the SWSF method, with a detailed quantitative analysis of each error component. The analysis shows that the deformation monitoring error of the SWSF method consists of four parts, related to the choice of fitting function, the density of the point clouds, the noise of the point clouds, and the registration accuracy of the point clouds. The error caused by point cloud noise is the main component; for a given noise level, the calculation error of the SWSF method can be significantly reduced by increasing the number of points in the sliding window. Deformation testing experiments conducted at different measurement distances proved that the proposed SWSF method can achieve a deformation monitoring accuracy of up to 0.1 mm. Finally, the proposed deformation mode monitoring method based on TLS and SWSF was tested on a railway bridge with a span of 65 m. The test results showed that, in comparison with the commonly used total station method, the proposed method requires no preset reflective markers, improves the deformation monitoring accuracy from millimeter level to submillimeter level, and transforms discrete measurement point data into spatially continuous deformation modes. Overall, this study introduces a new method for accurate deformation monitoring of bridges, with significant potential for application in health monitoring and damage diagnosis of bridge structures.
Figures:
Figure 1: Key steps of the bridge deformation mode monitoring method.
Figure 2: Geometric relationships in the deformation process. (a) Shape functions and (b) deformation function.
Figure 3: Relationship between the local surface function and the local deformation function. (a) Forms of bridge deformation, (b) local surface fitting, and (c) local deformation fitting.
Figure 4: High-order polynomial fitting surface for a dense point cloud. (a) Sliding window range, (b) point cloud 1, and (c) point cloud 2.
Figure 5: High-order polynomial fitting surface for a sparse point cloud. (a) Sliding window range, (b) point cloud 1, and (c) point cloud 2.
Figure 6: First-order polynomial fitting surface results. (a) Sliding window range, (b) point cloud 1, and (c) point cloud 2.
Figure 7: Error E_s.
Figure 8: Relationship between the standard deviation of point cloud noise and range.
Figure 9: Relationship between Sd(E_N) and the number of points n. (a) σ = 1 mm and (b) σ = 2 mm.
Figure 10: Regularity of E_N. (a) E_N versus the number of points n and noise σ; (b) relationship between n and σ when Sd(E_N) = 0.05 mm; (c) relationship between n and σ when Sd(E_N) = 0.5 mm.
Figure 11: Deformation accuracy verification experiment.
Figure 12: Deformation monitoring results under different conditions: (a) Leica P50 at a range of 100 m, (b) Leica RTC360 at 100 m, (c) Leica P50 at 200 m, and (d) mean square error of the deformation monitoring error.
Figure 13: Site plan of the rail bridge.
Figure 14: Test scheme. (a) Instrument and (b) loading scheme for trains.
Figure 15: Point cloud model of the test bridge.
Figure 16: Results for the test bridge.
19 pages, 3064 KiB  
Article
Voxel- and Bird’s-Eye-View-Based Semantic Scene Completion for LiDAR Point Clouds
by Li Liang, Naveed Akhtar, Jordan Vice and Ajmal Mian
Remote Sens. 2024, 16(13), 2266; https://doi.org/10.3390/rs16132266 - 21 Jun 2024
Abstract
Semantic scene completion is a crucial outdoor scene understanding task with direct implications for technologies like autonomous driving and robotics. It compensates for unavoidable occlusions and partial measurements in LiDAR scans, which may otherwise cause catastrophic failures. Due to the inherent complexity of this task, existing methods generally rely on complex and computationally demanding scene completion models, which limits their practicality in downstream applications. Addressing this, we propose a novel integrated network that combines the strengths of 3D and 2D semantic scene completion techniques for efficient LiDAR point cloud scene completion. Our network leverages a newly devised lightweight multi-scale convolutional block (MSB) to efficiently aggregate multi-scale features, improving the identification of small and distant objects. It further utilizes a layout-aware semantic block (LSB), developed to grasp the overall layout of the scene and precisely guide the reconstruction and recognition of features. Moreover, we develop a feature fusion module (FFM) for effective interaction between the data derived from the two disparate streams in our network, ensuring a robust and cohesive scene completion process. Extensive experiments on the popular SemanticKITTI dataset demonstrate that our method achieves highly competitive performance, with an mIoU of 35.7 and an IoU of 51.4. Notably, the proposed method achieves an mIoU improvement of 2.6% over previous methods.
(This article belongs to the Section Engineering Remote Sensing)
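The bird's-eye-view stream named in the title implies collapsing the height axis of a voxelized scene into a 2D map. A minimal occupancy version of that projection (grid layout and values are hypothetical; the paper's actual BEV features are richer):

```python
def bev_projection(voxels):
    """Collapse a 3D occupancy grid voxels[z][y][x] (0/1) into a 2D
    bird's-eye-view map by taking the max over the height (z) axis."""
    depth = len(voxels)
    h, w = len(voxels[0]), len(voxels[0][0])
    return [[max(voxels[z][y][x] for z in range(depth))
             for x in range(w)] for y in range(h)]

# Two height slices of a tiny 2x2 scene
voxels = [
    [[1, 0],
     [0, 0]],
    [[0, 0],
     [0, 1]],
]
bev = bev_projection(voxels)  # occupied wherever any height slice is
```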
25 pages, 17006 KiB  
Article
A Modified Look-Up Table Based Algorithm with a Self-Posed Scheme for Fine-Mode Aerosol Microphysical Properties Inversion by Multi-Wavelength Lidar
by Zeyu Zhou, Yingying Ma, Zhenping Yin, Qiaoyun Hu, Igor Veselovskii, Detlef Müller and Wei Gong
Remote Sens. 2024, 16(13), 2265; https://doi.org/10.3390/rs16132265 - 21 Jun 2024
Abstract
Aerosol microphysical properties, including the aerosol particle size distribution, complex refractive index, and concentration, are key parameters for evaluating the impact of aerosols on climate, meteorology, and human health. High Spectral Resolution Lidar (HSRL) is an efficient tool for probing the vertical optical properties of aerosol particles, including the aerosol backscatter coefficient (β) and extinction coefficient (α), at multiple wavelengths. To swiftly process vast data volumes, address the ill-posedness of the retrieval problem, and suit simpler lidar systems, this study proposes an algorithm (the modified algorithm) for retrieving microphysical property profiles from HSRL optical data targeting fine-mode aerosols, building upon a previous algorithm (the basic algorithm). The modified algorithm is based on a look-up table (LUT) approach combined with the k-nearest neighbor (k-NN) and random forest (RF) algorithms; it optimizes the decision tree generation strategy and incorporates a self-posed scheme. In numerical simulation tests for different lidar configurations, the modified algorithm reduced retrieval errors by 41%, 30%, and 32% compared to the basic algorithm for 3β + 2α, 3β + 1α, and 2β + 1α, respectively, with a remarkable improvement in stability. In two observation scenes from a field campaign, the median relative errors of the effective radius for 3β + 2α were 6% and −3%, and the median absolute errors of the single-scattering albedo were 0.012 and 0.005. This method represents a further step in the use of the LUT approach, with the potential to provide effective and efficient aerosol microphysical retrievals for simpler lidar systems, advancing our understanding of aerosols’ climatic, meteorological, and health impacts.
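The first stage of the LUT matching — reducing the table to the k entries nearest the measured optical set, before the RF-style pruning — can be sketched as a plain nearest-neighbor search. The Euclidean metric over the 3β + 2α vector and all LUT values here are illustrative assumptions:

```python
import math

def knn_reduce(lut, measured, k):
    """Return the k LUT entries whose optical-parameter vectors are closest
    (Euclidean distance) to the measured 3β+2α set.

    lut: list of (optical_vector, microphysical_property) pairs.
    """
    return sorted(lut, key=lambda entry: math.dist(entry[0], measured))[:k]

# Hypothetical LUT: (β355, β532, β1064, α355, α532) -> effective radius (µm)
lut = [
    ((1.0, 0.8, 0.5, 2.0, 1.5), 0.15),
    ((1.1, 0.9, 0.6, 2.1, 1.6), 0.18),
    ((3.0, 2.5, 2.0, 5.0, 4.0), 0.60),
]
measured = (1.05, 0.85, 0.55, 2.05, 1.55)
nearest = knn_reduce(lut, measured, k=2)
radii = [props for _, props in nearest]   # reduced solution space
```

In the paper's pipeline, this reduced solution space is then pruned per optical parameter by an ensemble of decision trees, and the surviving candidates are averaged into the final solution.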
Show Figures

Figure 1

Figure 1
<p>Process of the LUT element matching algorithm based on RF. (<b>a</b>) Process of obtaining the final solution from the LUT. The blue cubes represent the elements of the LUT, yellow cubes represent the reduced solution space obtained by the k-NN algorithm, red cubes represent the possible solutions after processing by the RF algorithm, and the green circle represents the final solution after averaging the possible solutions. The three cubes belong to the same data set. The circle indicates that it generally does not correspond to any LUT element. (<b>b</b>) Workflow of the RF algorithm. Using the “bagging” strategy to extract several permutations from the full permutation to generate decision trees. Each tree prunes optical parameters according to its permutation. The orange circles represent the elements retained during each pruning, light blue circles represent the excluded parts, and arrows indicate different directions in different dimensions. The red circle is the output of a single decision tree, i.e., a possible solution. After averaging all possible solutions, the final solution is obtained, where the yellow part corresponds to the reduced solution space in (<b>a</b>), and the green part corresponds to the final solution. (<b>c</b>) Pruning process of a single decision tree. For each pruning, the optical parameter distances are first sorted. In the first step, for example, <math display="inline"><semantics> <mrow> <msup> <mrow> <mi>G</mi> </mrow> <mrow> <mn>1</mn> </mrow> </msup> </mrow> </semantics></math> is selected, which means sorting based on the distance of <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>B</mi> </mrow> <mrow> <mn>355</mn> </mrow> </msub> </mrow> </semantics></math> and retaining the top <math display="inline"><semantics> <mrow> <mi>ω</mi> </mrow> </semantics></math> portion with the smallest distances. 
In the second step, <math display="inline"><semantics> <mrow> <msup> <mrow> <mi>G</mi> </mrow> <mrow> <mn>3</mn> </mrow> </msup> </mrow> </semantics></math> is selected, and the remaining part is sorted and pruned based on the distance of <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>B</mi> </mrow> <mrow> <mn>1064</mn> </mrow> </msub> </mrow> </semantics></math>. This process continues until the last pruning, where the remaining part is output as a possible solution.</p>
Figure 2
<p>Process diagram of the modified algorithm. It includes two inversion iterations, where the solid lines depict the process of the first inversion and the dashed lines represent the process of the second inversion. The parts highlighted in orange indicate the additional aspects introduced by the modified algorithm compared to the basic algorithm.</p>
Figure 3
<p>Example of the decision tree pruning process. (<b>a</b>) Distances between all elements in the reduced solution space and the input optical parameter set on the 11 optical parameters. The horizontal axis represents different optical parameters, and the vertical axis represents the magnitude of the distance. The shaded area in the graph indicates the distribution of the data. Optical parameters corresponding to 1–11 are explained on the right side. (<b>b</b>) Operation’s mapping on the sixth optical parameter when pruning the second optical parameter in (<b>a</b>). The red-shaded area represents the data retained after pruning. (<b>c</b>) Operation’s mapping on the second optical parameter when pruning the sixth optical parameter in (<b>a</b>).</p>
Figure 4
<p>Flowchart of generating the reduced solution space and the constraint window. The blue portion represents the generation process of the reduced solution space and the green portion represents the generation process of the constraint window.</p>
Figure 5
<p>Aircraft trajectory maps in California on (<b>a</b>) 30 January and (<b>b</b>) 31 January. The blue trace represents the track of the B-200 and the green trace represents that of the P-3B.</p>
Figure 6
<p>Data processing and comparison process between the two aircraft. The blue annotations indicate important parameters and results during the process. HSRL optical data undergoes screening for depolarization ratio (<math display="inline"><semantics> <mrow> <msub> <mrow> <mi>δ</mi> </mrow> <mrow> <mn>532</mn> </mrow> </msub> </mrow> </semantics></math>) and Ångström exponent (<math display="inline"><semantics> <mrow> <msub> <mrow> <mi>γ</mi> </mrow> <mrow> <mi>α</mi> </mrow> </msub> <mo>(</mo> <mn>355</mn> <mo>−</mo> <mn>532</mn> <mo>)</mo> </mrow> </semantics></math>), followed by inversion to obtain CRI and APSD, and then the calculation of <math display="inline"><semantics> <mrow> <mi>S</mi> <mi>S</mi> <mi>A</mi> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>r</mi> </mrow> <mrow> <mi mathvariant="normal">e</mi> </mrow> </msub> </mrow> </semantics></math>, and <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>V</mi> </mrow> <mrow> <mi mathvariant="normal">t</mi> </mrow> </msub> </mrow> </semantics></math> products. P-3B data were screened based on <math display="inline"><semantics> <mrow> <mi mathvariant="sans-serif">Δ</mi> <msub> <mrow> <mi>V</mi> </mrow> <mrow> <mi mathvariant="normal">t</mi> </mrow> </msub> </mrow> </semantics></math>, and APSD, environmental scattering coefficient, and dry absorption coefficient were obtained from measurements by UHSAS, nephelometer, and PSAP, respectively. Finally, <math display="inline"><semantics> <mrow> <mi>S</mi> <mi>S</mi> <mi>A</mi> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>r</mi> </mrow> <mrow> <mi mathvariant="normal">e</mi> </mrow> </msub> </mrow> </semantics></math>, and <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>V</mi> </mrow> <mrow> <mi mathvariant="normal">t</mi> </mrow> </msub> </mrow> </semantics></math> are computed. 
The conditions for mutual comparison of the products obtained from both aircraft are within the spirals of P-3B, where validation of the aerosol vertical profile information can be performed.</p>
Figure 7
<p>Average retrieval errors and computation time for the microphysical parameters under the 3<span class="html-italic">β</span> + 2<span class="html-italic">α</span> and 3<span class="html-italic">β</span> + 1<span class="html-italic">α</span> configurations. (<b>a</b>–<b>f</b>) The retrieval errors and consumed time for 3<span class="html-italic">β</span> + 2<span class="html-italic">α</span>. (<b>g</b>–<b>l</b>) The retrieval errors and consumed time for 3<span class="html-italic">β</span> + 1<span class="html-italic">α</span>. The blue bars represent the results of the basic algorithm, while the red bars represent the results of the modified algorithm. The test data are divided into two categories: grid points and non-grid points.</p>
Figure 8
<p>Average retrieval errors and computation time for the microphysical parameters under the 2<span class="html-italic">β</span> + 1<span class="html-italic">α</span> and 3<span class="html-italic">β</span> configurations. (<b>a</b>–<b>f</b>) The retrieval errors and consumed time for 2<span class="html-italic">β</span> + 1<span class="html-italic">α</span>. (<b>g</b>–<b>l</b>) The retrieval errors and consumed time for 3<span class="html-italic">β</span>. The results are marked similarly to those in <a href="#remotesensing-16-02265-f007" class="html-fig">Figure 7</a>.</p>
Figure 9
<p>Stability testing results for the retrieval algorithms under the 3<span class="html-italic">β</span> + 2<span class="html-italic">α</span> and 3<span class="html-italic">β</span> + 1<span class="html-italic">α</span> configurations. (<b>a</b>–<b>e</b>) Box plots for retrieval errors of aerosol microphysical properties under 3<span class="html-italic">β</span> + 2<span class="html-italic">α</span>. (<b>f</b>–<b>j</b>) Box plots for retrieval errors of aerosol microphysical properties under 3<span class="html-italic">β</span> + 1<span class="html-italic">α</span>. The box plots are generated according to the IQR strategy. The blue box plots represent the basic algorithm and the red plots represent the modified algorithm. The wavy-shaded areas on the <span class="html-italic">y</span>-axis indicate truncation and jumps for visualization purposes.</p>
Figure 10
<p>Stability testing results for the retrieval algorithms under the 2<span class="html-italic">β</span> + 1<span class="html-italic">α</span> and 3<span class="html-italic">β</span> configurations. (<b>a</b>–<b>e</b>) Box plots for retrieval errors of aerosol microphysical properties under 2<span class="html-italic">β</span> + 1<span class="html-italic">α.</span> (<b>f</b>–<b>j</b>) Box plots for retrieval errors of aerosol microphysical properties under 3<span class="html-italic">β.</span> The meaning of the labels is consistent with <a href="#remotesensing-16-02265-f009" class="html-fig">Figure 9</a>.</p>
Figure 11
<p>Function of inversion error versus fixed error when artificially distorting individual input optical properties. (<b>a</b>–<b>d</b>) Inversion errors of 3<span class="html-italic">β</span> + 2<span class="html-italic">α</span> (<b>a</b>), 3<span class="html-italic">β</span> + 1<span class="html-italic">α</span> (<b>b</b>), 2<span class="html-italic">β</span> + 1<span class="html-italic">α</span> (<b>c</b>) and 3<span class="html-italic">β</span> (<b>d</b>) configurations regarding <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>m</mi> </mrow> <mrow> <mi mathvariant="normal">r</mi> </mrow> </msub> </mrow> </semantics></math>. (<b>e</b>–<b>h</b>) Same as (<b>a</b>–<b>d</b>), but showing inversion errors regarding <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>m</mi> </mrow> <mrow> <mi mathvariant="normal">i</mi> </mrow> </msub> </mrow> </semantics></math>. (<b>i</b>–<b>l</b>) Same as (<b>a</b>–<b>d</b>), but showing inversion errors regarding <math display="inline"><semantics> <mrow> <mi>S</mi> <mi>S</mi> <mi>A</mi> </mrow> </semantics></math>. (<b>m</b>–<b>p</b>) Same as (<b>a</b>–<b>d</b>), but showing inversion errors regarding <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>r</mi> </mrow> <mrow> <mi mathvariant="normal">e</mi> </mrow> </msub> </mrow> </semantics></math>. (<b>q</b>–<b>t</b>) Same as (<b>a</b>–<b>d</b>), but showing inversion errors regarding <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>V</mi> </mrow> <mrow> <mi mathvariant="normal">t</mi> </mrow> </msub> </mrow> </semantics></math>. The horizontal axis represents the value of the fixed error, while the vertical axis represents the inversion error, with the zero−error line highlighted by a dashed line. 
For different optical parameters, lines with different colors and markers represent <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>β</mi> </mrow> <mrow> <mn>355</mn> </mrow> </msub> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>β</mi> </mrow> <mrow> <mn>532</mn> </mrow> </msub> </mrow> </semantics></math>, and <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>β</mi> </mrow> <mrow> <mn>1064</mn> </mrow> </msub> </mrow> </semantics></math> with blue hexagons, orange circles, and yellow stars, respectively, while <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>α</mi> </mrow> <mrow> <mn>355</mn> </mrow> </msub> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>α</mi> </mrow> <mrow> <mn>532</mn> </mrow> </msub> </mrow> </semantics></math> are represented by purple diamonds and green squares, respectively.</p>
Figure 12
<p>Inversion errors after applying random Gaussian noise disturbance to the input data at error levels of 10% and 20%. (<b>a</b>–<b>d</b>) Inversion errors of 3<span class="html-italic">β</span> + 2<span class="html-italic">α</span> (<b>a</b>), 3<span class="html-italic">β</span> + 1<span class="html-italic">α</span> (<b>b</b>), 2<span class="html-italic">β</span> + 1<span class="html-italic">α</span> (<b>c</b>) and 3<span class="html-italic">β</span> (<b>d</b>) configurations regarding <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>m</mi> </mrow> <mrow> <mi mathvariant="normal">r</mi> </mrow> </msub> </mrow> </semantics></math>. (<b>e</b>–<b>h</b>) Same as (<b>a</b>–<b>d</b>), but showing inversion errors regarding <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>m</mi> </mrow> <mrow> <mi mathvariant="normal">i</mi> </mrow> </msub> </mrow> </semantics></math>. (<b>i</b>–<b>l</b>) Same as (<b>a</b>–<b>d</b>), but showing inversion errors regarding <math display="inline"><semantics> <mrow> <mi>S</mi> <mi>S</mi> <mi>A</mi> </mrow> </semantics></math>. (<b>m</b>–<b>p</b>) Same as (<b>a</b>–<b>d</b>), but showing inversion errors regarding <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>r</mi> </mrow> <mrow> <mi mathvariant="normal">e</mi> </mrow> </msub> </mrow> </semantics></math>. (<b>q</b>–<b>t</b>) Same as (<b>a</b>–<b>d</b>), but showing inversion errors regarding <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>V</mi> </mrow> <mrow> <mi mathvariant="normal">t</mi> </mrow> </msub> </mrow> </semantics></math>. The error levels of 10% and 20% are represented by blue and orange images, respectively. The results are presented in the form of violin plots, which are an enhanced version of box plots that provide more detailed information about the distribution of the data. 
In each violin plot, the vertical gray bars correspond to the ends of the box plot whiskers, representing the maximum and minimum values of the statistical distribution. The shaded area corresponds to the interquartile range of 25% and 75% of the box plot. Horizontally, the shaded area represents the probability density function of the data distribution, showing the frequency of data distribution in each interval. The white points indicate the position of zero, and the horizontal green bars represent the mean values.</p>
Figure 13
<p>Original optical data from the HSRL collected during the DISCOVER-AQ field campaign in California on 30 and 31 January 2013. The horizontal axis represents UTC time, and the vertical axis represents altitude above sea level. The data for the two days are shown in the left and right columns, respectively. (<b>a</b>,<b>b</b>) Profiles of <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>β</mi> </mrow> <mrow> <mn>355</mn> </mrow> </msub> </mrow> </semantics></math> on the two days. (<b>c</b>,<b>d</b>) Profiles of <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>β</mi> </mrow> <mrow> <mn>532</mn> </mrow> </msub> </mrow> </semantics></math> on the two days. (<b>e</b>,<b>f</b>) Profiles of <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>β</mi> </mrow> <mrow> <mn>1064</mn> </mrow> </msub> </mrow> </semantics></math> on the two days. (<b>g</b>,<b>h</b>) Profiles of <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>α</mi> </mrow> <mrow> <mn>355</mn> </mrow> </msub> </mrow> </semantics></math> on the two days. (<b>i</b>,<b>j</b>) Profiles of <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>α</mi> </mrow> <mrow> <mn>532</mn> </mrow> </msub> </mrow> </semantics></math> on the two days. (<b>k</b>,<b>l</b>) Profiles of <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>γ</mi> </mrow> <mrow> <mi>α</mi> </mrow> </msub> <mo> </mo> <mo>(</mo> <mn>355</mn> <mo>−</mo> <mn>532</mn> <mo>)</mo> </mrow> </semantics></math> on the two days.</p>
Figure 14
<p>Comparisons of retrieved microphysical parameter profiles from the HSRL on 30 January 2013, with P−3B in-situ measurements at six validation sites. (<b>a</b>–<b>g</b>) represent the results for <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>r</mi> </mrow> <mrow> <mi mathvariant="normal">e</mi> </mrow> </msub> </mrow> </semantics></math>, while (<b>h</b>–<b>n</b>) represent the results for <math display="inline"><semantics> <mrow> <mi>S</mi> <mi>S</mi> <mi>A</mi> </mrow> </semantics></math>. (<b>a</b>–<b>f</b>) and (<b>h</b>–<b>m</b>) show the profile information for <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>r</mi> </mrow> <mrow> <mi mathvariant="normal">e</mi> </mrow> </msub> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>S</mi> <mi>S</mi> <mi>A</mi> </mrow> </semantics></math>, respectively, at the six sites. The results retrieved using the 3<span class="html-italic">β</span> + 2<span class="html-italic">α</span>, 3<span class="html-italic">β</span> + 1<span class="html-italic">α</span>, 2<span class="html-italic">β</span> + 1<span class="html-italic">α</span>, and 3<span class="html-italic">β</span> configurations are depicted with blue, orange, yellow, and purple lines and markers, respectively, while in-situ measurement data are represented by black lines and markers. The <span class="html-italic">x</span>-axis represents the values of the microphysical parameters, and the <span class="html-italic">y</span>-axis represents altitude. (<b>g</b>,<b>n</b>) show box plots of the retrieval errors for all data points at the six validation sites on that day, where the color scheme matches that of <a href="#remotesensing-16-02265-f009" class="html-fig">Figure 9</a>.</p>
Figure 15
<p>Comparisons of retrieved microphysical parameter profiles from the HSRL on 31 January 2013, with P-3B in-situ measurements at six validation sites. (<b>a</b>–<b>g</b>) represent the results for <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>r</mi> </mrow> <mrow> <mi mathvariant="normal">e</mi> </mrow> </msub> </mrow> </semantics></math>, while (<b>h</b>–<b>n</b>) represent the results for <math display="inline"><semantics> <mrow> <mi>S</mi> <mi>S</mi> <mi>A</mi> </mrow> </semantics></math>. (<b>a</b>–<b>f</b>) and (<b>h</b>–<b>m</b>) show the profile information for <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>r</mi> </mrow> <mrow> <mi mathvariant="normal">e</mi> </mrow> </msub> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>S</mi> <mi>S</mi> <mi>A</mi> </mrow> </semantics></math>, respectively, at the six sites. The annotations in the figure correspond to those in <a href="#remotesensing-16-02265-f014" class="html-fig">Figure 14</a>.</p>
20 pages, 12475 KiB  
Article
Assessing Vertical Accuracy and Spatial Coverage of ICESat-2 and GEDI Spaceborne Lidar for Creating Global Terrain Models
by Maarten Pronk, Marieke Eleveld and Hugo Ledoux
Remote Sens. 2024, 16(13), 2259; https://doi.org/10.3390/rs16132259 (registering DOI) - 21 Jun 2024
Abstract
Digital Elevation Models (DEMs) are a necessity for modelling many large-scale environmental processes. In this study, we investigate the potential of data from two spaceborne lidar altimetry missions, ICESat-2 and GEDI—with respect to their vertical accuracies and planimetric data collection patterns—as sources for [...] Read more.
Digital Elevation Models (DEMs) are a necessity for modelling many large-scale environmental processes. In this study, we investigate the potential of data from two spaceborne lidar altimetry missions, ICESat-2 and GEDI—with respect to their vertical accuracies and planimetric data collection patterns—as sources for rasterisation towards creating global DEMs. We validate the terrain measurements of both missions against airborne lidar datasets over three areas in the Netherlands, Switzerland, and New Zealand and differentiate them using land-cover classes. For our experiments, we use five years of ICESat-2 ATL03 data and four years of GEDI L2A data for a total of 252 million measurements. The datasets are filtered using parameter flags provided by the higher-level products ICESat-2 ATL08 and GEDI L3A. For all areas and land-cover classes combined, ICESat-2 achieves a bias of −0.11 m, an MAE of 0.43 m, and an RMSE of 0.93 m. From our experiments, we find that GEDI is less accurate, with a bias of 0.09 m, an MAE of 0.98 m, and an RMSE of 2.96 m. Measurements in open land-cover classes, such as “Cropland” and “Grassland”, result in the best accuracy for both missions. We also find that the slope of the terrain has a major influence on vertical accuracy, more so for GEDI than ICESat-2 because of its larger horizontal geolocation error. In contrast, we find little effect of either beam power or background solar radiation, nor do we find noticeable seasonal effects on accuracy. Furthermore, we investigate the spatial coverage of ICESat-2 and GEDI by deriving a DEM at different horizontal resolutions and latitudes. GEDI has higher spatial coverage than ICESat-2 at lower latitudes due to its beam pattern and lower inclination angle, and a derived DEM can achieve a resolution of 500 m. ICESat-2 only reaches a DEM resolution of 700 m at the equator, but it increases to almost 200 m at higher latitudes. When combined, a 500 m resolution lidar-based DEM can be achieved globally. 
Our results indicate that both ICESat-2 and GEDI enable accurate terrain measurements anywhere in the world. Especially in data-poor areas—such as the tropics—this has potential for new applications and insights. Full article
(This article belongs to the Section Remote Sensing and Geo-Spatial Science)
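The vertical-accuracy statistics reported in this abstract (bias, MAE, and RMSE of spaceborne minus airborne terrain heights) reduce to a few lines of arithmetic. The sketch below is a generic illustration; the function and argument names are hypothetical, not from the paper.

```python
import numpy as np

def vertical_accuracy(lidar_z, reference_z):
    """Bias, MAE, and RMSE of spaceborne-lidar terrain heights
    against an airborne-lidar reference."""
    err = np.asarray(lidar_z, dtype=float) - np.asarray(reference_z, dtype=float)
    return {
        "bias": err.mean(),                  # signed mean error
        "mae": np.abs(err).mean(),           # mean absolute error
        "rmse": np.sqrt((err ** 2).mean()),  # root-mean-square error
    }
```

Note that bias cancels between positive and negative errors, which is why the paper reports all three statistics: a near-zero bias (e.g. −0.11 m) can coexist with a much larger RMSE.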
19 pages, 8018 KiB  
Article
Characteristics of Yellow Sea Fog under the Influence of Eastern China Aerosol Plumes
by Jiakun Liang and Jennifer D. Small Griswold
Remote Sens. 2024, 16(13), 2262; https://doi.org/10.3390/rs16132262 (registering DOI) - 21 Jun 2024
Abstract
Sea fog is a societally relevant phenomenon that occurs under the influence of specific oceanic and atmospheric conditions including aerosol conditions. The Yellow Sea region in China regularly experiences sea fog events, of varying intensity, that impact coastal regions and maritime activities. The [...] Read more.
Sea fog is a societally relevant phenomenon that occurs under the influence of specific oceanic and atmospheric conditions, including aerosol conditions. The Yellow Sea region of China regularly experiences sea fog events of varying intensity that impact coastal regions and maritime activities. The occurrence and structure of fog are affected by the concentration of aerosols in the air where the fog forms. Along with industrial development, air pollution has become a serious environmental problem in Northeastern China. These higher pollution levels are confirmed by various satellite remote sensing instruments, including the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard the Aqua satellite, which observes aerosol and cloud properties. These observations show a clear influence of aerosol loading over the Yellow Sea region, which can impact regional sea fog. In this study, high-resolution MODIS Aqua L2 data sets are used to investigate the relationships between cloud properties and aerosol features. Using a bi-variate comparison method, we find that, for most cases, larger values of COT (cloud optical thickness) are related to both a smaller DER (droplet effective radius) and a higher CTH (cloud top height). However, in cases where the fog is thinner, with many zero values in CTH, a larger COT is related to both a smaller DER and a smaller CTH. For fog cases where the aerosol type is dominated by smoke (e.g., confirmed fire activity in the East China Plain), the semi-direct effect is indicated and may play a role in determining fog structure, such that a smaller DER corresponds with thinner fog and smaller COT values. Full article
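A bi-variate comparison of the kind used here can be approximated by binning one cloud property (e.g., COT) and summarizing another (DER or CTH) within each bin, then inspecting how the summary moves across bins. The sketch below is a minimal stand-in for the study's actual method; the bin edges and names are assumptions.

```python
import numpy as np

def binned_medians(x, y, bin_edges):
    """Median of y within each bin of x, e.g. the median DER (or CTH)
    per COT bin; empty bins yield NaN."""
    idx = np.digitize(x, bin_edges)
    return [float(np.median(y[idx == i])) if np.any(idx == i) else float("nan")
            for i in range(1, len(bin_edges))]
```

A monotonically decreasing sequence of per-bin DER medians with increasing COT would correspond to the "larger COT, smaller DER" pattern reported above.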
Figure 1
<p>MODIS Aqua L1B Granule Images highlighting different fog case scenarios. (<b>a</b>) Fog case on 2 May 2020, red box: “incomplete” fog area, the upper portion of the Yellow Sea is not included in the MODIS granule. (<b>b</b>) Fog case on 31 July 2020, cyan box: fog area covered by high cloud. (<b>c</b>) Fog case on 28 March 2012, yellow box: pollution (aerosol) band visible on and offshore.</p>
Figure 2
<p>An example of CTH modification for the fog case was on 13 May 2018. (<b>a</b>) The original CTH of 5 km resolution from MODIS Aqua L2 cloud data product, (<b>b</b>) the modified CTH.</p>
Figure 3
<p>MODIS Aqua L1B Granule Image of a fog case on 13 May 2018 (<b>a</b>), and CTH of (<b>b</b>) mean temperature inversion height 633 m, (<b>c</b>) 700 m, (<b>d</b>) 800 m. (<b>e</b>) DER at 1 km resolution from MODIS Aqua L2 cloud data product, (<b>f</b>) result of the CTH for the selected fog area after applying the DER mask and land-sea mask.</p>
Figure 4
<p>Terrestrial aerosol types surrounding the Yellow Sea region from the MODIS Aqua L2 aerosol data product. (<b>a</b>) Fog case on 23 May 2006, main aerosol type: sulfate and dust. (<b>b</b>) Fog case on 8 June 2007, main aerosol type: heavy absorbing smoke and sulfate. (<b>c</b>) Fog case on 2 May 2008, main aerosol type: sulfate. (<b>d</b>) Fog case on 3 May 2009, main aerosol type: sulfate and dust. (<b>e</b>) Fog case on 4 May 2009, main aerosol type: sulfate and dust. (<b>f</b>) Fog case on 17 May 2011, main aerosol type: sulfate. (<b>g</b>) Fog case on 1 June 2011, main aerosol type: heavy absorbing smoke, dust, and sulfate. (<b>h</b>) Fog case on 28 March 2012, main aerosol type: sulfate. (<b>i</b>) Fog case on 8 April 2014, main aerosol type: sulfate. (<b>j</b>) Fog case on 9 April 2014, main aerosol type: sulfate and dust. (<b>k</b>) Fog case on 10 April 2016, main aerosol type: sulfate and dust. (<b>l</b>) Fog case on 13 April 2016, main aerosol type: sulfate and dust. (<b>m</b>) Fog case on 14 April 2016, main aerosol type: sulfate and dust. (<b>n</b>) Fog case on 13 May 2018, main aerosol type: sulfate and dust. (<b>o</b>) Fog case on 6 June 2018, main aerosol type: heavy absorbing smoke and sulfate.</p>
Figure 5
<p>Fog cases with fire occurrences around the Shandong Peninsula on 8 June 2007 (the first column, (<b>a</b>,<b>d</b>,<b>g</b>,<b>j</b>,<b>m</b>)), 1 June 2011 (the second column (<b>b</b>,<b>e</b>,<b>h</b>,<b>k</b>,<b>n</b>)), and 6 June 2018 (the third column (<b>c</b>,<b>f</b>,<b>i</b>,<b>l</b>,<b>o</b>)). (<b>a</b>–<b>c</b>) Satellite RGB visible image from MODIS L2B Granule Image, red box: pollution band. (<b>d</b>–<b>f</b>) Thermal indicators of fire from NASA World View. (<b>g</b>–<b>i</b>) Vertical structures of air temperature (blue line) and dew point temperature (red line) from Sounding files at Qingdao Station. (<b>j</b>–<b>l</b>) Temperature advection calculated from the NECP/NCAR reanalysis data. The black line indicates the geopotential height at 1000 mb, the black arrows indicate the wind direction at the speed of 10 m/s unit, and the red (blue) areas indicate the warm (cold) temperature advection. (<b>m</b>–<b>o</b>) AOD from MODIS Aqua L2 aerosol data product.</p>
Figure 6
<p>Bi-variate comparison for 15 fog cases. Diagonal Pattern (<b>a</b>,<b>d</b>–<b>j</b>,<b>l</b>–<b>n</b>) refers to distributions with larger COT values corresponding to smaller DER values and larger CTH values. Left-Right Pattern (<b>c</b>,<b>k</b>) refers to distributions with larger COT values corresponding to larger DER values and smaller CTH values. Inverse-Diagonal Pattern (<b>b</b>,<b>o</b>) refers to distributions with larger COT values corresponding to both larger DER values and larger CTH values.</p>
Figure 7
<p>Aerosol, wind conditions, and cloud properties for the sea fog case on 28 March 2012, from MODIS Aqua L2 cloud data. (<b>a</b>) AOD form MODIS Aqua L2 aerosol data product. (<b>b</b>) Surface wind from NCEP/NCAR reanalysis dataset. (<b>c</b>) DER. (<b>d</b>) CTH. (<b>e</b>) COT.</p>
Figure 8
<p>CTH from the MODIS Aqua L2 cloud data product. (<b>a</b>) Fog case on 2 May 2008. (<b>b</b>) Fog case on 10 April 2016.</p>
Figure 9
<p>Cloud properties and aerosol for the sea fog case on 8 June 2007, from the MODIS Aqua L2 cloud data. (<b>a</b>) DER. (<b>b</b>) CTH. (<b>c</b>) COT.</p>
Figure 10
<p>The sum bi-variate comparison of the 15 fog cases.</p>
19 pages, 5022 KiB  
Article
Predicting and Understanding the Pacific Decadal Oscillation Using Machine Learning
by Zhixiong Yao, Dongfeng Xu, Jun Wang, Jian Ren, Zhenlong Yu, Chenghao Yang, Mingquan Xu, Huiqun Wang and Xiaoxiao Tan
Remote Sens. 2024, 16(13), 2261; https://doi.org/10.3390/rs16132261 (registering DOI) - 21 Jun 2024
Abstract
The Pacific Decadal Oscillation (PDO), the dominant pattern of sea surface temperature anomalies in the North Pacific basin, is an important low-frequency climate phenomenon. Leveraging data spanning from 1871 to 2010, we employed machine learning models to predict the PDO based on variations [...] Read more.
The Pacific Decadal Oscillation (PDO), the dominant pattern of sea surface temperature anomalies in the North Pacific basin, is an important low-frequency climate phenomenon. Leveraging data spanning 1871 to 2010, we employed machine learning models to predict the PDO based on variations in several climatic indices: Niño3.4, the North Pacific index (NPI), sea surface height and thermocline depth over the Kuroshio–Oyashio Extension (KOE) region (SSH_KOE and Ther_KOE), the Arctic Oscillation (AO), and the Atlantic Multi-decadal Oscillation (AMO). A comparative analysis of the temporal and spatial performance of six machine learning models revealed that the Gated Recurrent Unit (GRU) model has superior predictive capability compared to its counterparts. To better understand the inner workings of the machine learning models, SHapley Additive exPlanations (SHAP) was adopted to reveal the drivers behind the models' predictions and the dynamics underlying the modeled PDO. Our findings indicated that the Niño3.4, NPI, and SSH_KOE were the three most pivotal features in predicting the PDO. Furthermore, our analysis also revealed that the Niño3.4, AMO, and Ther_KOE indices were positively associated with the PDO, whereas the NPI, SSH_KOE, and AO indices exhibited negative correlations. Full article
(This article belongs to the Special Issue Remote Sensing and Numerical Simulation for Tidal Dynamics)
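Recurrent models such as the GRU used in this study consume fixed-length sequences of the predictor indices. One common way to build such samples from monthly index series is a sliding window, sketched below as a generic illustration; the window length of 12 months is an assumption for the example, not the paper's configuration.

```python
import numpy as np

def make_windows(features, target, lag=12):
    """Slice monthly predictor series into overlapping sequences of
    length `lag` (shape: samples x lag x n_features), with the target
    aligned to the month following each window."""
    n = len(features) - lag
    X = np.stack([features[i:i + lag] for i in range(n)])
    y = np.asarray(target)[lag:]
    return X, y
```

The resulting `X` can be fed to any sequence model (GRU, LSTM, 1-D CNN), while flattening each window would suit the non-sequential models (ANN, SVR, XGBoost) compared in the paper.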
Figure 1
<p>Time series of monthly values of the PDO, Niño3.4, AMO, AO, NPI, SSH_KOE, and Ther_KOE indices from 1871 to 2010. All indices were smoothed by calculating the 6-month running mean and normalized by their standard deviation.</p>
Figure 2
<p>Flow diagram of machine learning for the model prediction. (<b>a</b>) Data Pre-processing: Preparing the raw data for analysis by cleaning, transforming, and selecting relevant features; (<b>b</b>) Data Splitting: Splitting the dataset into training, and testing sets to ensure unbiased evaluation of the model; (<b>c</b>) Modelling Process: Applying the machine learning model to the data to build a predictive model; (<b>d</b>) Model Explanation: Providing insights into how and why the model makes its predictions using the SHAP analysis.</p>
Figure 3
<p>(<b>a</b>) Correlation matrix among the indices, and (<b>b</b>) the <math display="inline"><semantics> <mrow> <mi>V</mi> <mi>I</mi> <mi>F</mi> </mrow> </semantics></math> values of dependent variables.</p>
Figure 4
<p>The observed and predicted PDO indices in six ML models: (<b>a</b>) ANN, (<b>b</b>) SVR, (<b>c</b>) XGBoost, (<b>d</b>) CNN, (<b>e</b>) LSTM, and (<b>f</b>) GRU from January 1982 to December 2010. The error is represented as the difference between predicted values and observed values. Pink indicates positive errors, while purple represents negative errors.</p>
Figure 5
<p>Taylor diagram for the PDO index amplitude, the centered RMSE, and coefficient correlation between the model and observations. The REF point represents zero RMSE compared to the observations. The model standard deviations were normalized to the scale of the observation data. Different legend shapes represent the six different ML models.</p>
Figure 6
<p>The (<b>left</b>) correlation coefficient and (<b>right</b>) RMSE between the observed and predicted SSTAs in six ML models from January 1982 to December 2010.</p>
Figure 7
<p>Performing sequential forward selection analysis for the GRU model in terms of the correlation coefficient and RMSE as each predictor is added. Histograms and lines represent the correlation coefficient and RMSE, respectively. The error bars and pink shading indicate the standard deviation from the mean of 10 ensemble runs.</p>
Figure 8
<p>SHAP force plot results for PDO prediction of samples for 1976/1977 and 1998/1999 regime shifts: (<b>a</b>) Winter, 1975; (<b>b</b>) Winter, 1976; (<b>c</b>) Winter, 1997; and (<b>d</b>) Winter, 1998.</p>
Figure 9
<p>(<b>a</b>) SHAP feature importance plot and (<b>b</b>) SHAP summary plot for each dependent variable.</p>
Figure 10
<p>SHAP dependence plot for each dependent variable: (<b>a</b>) Niño3.4 (°C), (<b>b</b>) AO (hPa), (<b>c</b>) AMO (°C), (<b>d</b>) NPI (hPa), (<b>e</b>) SSH_KOE (m), and (<b>f</b>) Ther_KOE (m). The black circles represent the SHAP values associated with different features. Red lines represent the polynomial regression of scatter points. The shaded area around each line indicates the range of a 95% confidence interval. Dashed lines represent the SHAP values equal to zero. The histograms on the top and right of each subplot indicate the distribution of feature and SHAP values.</p>
Figure 11
<p>SHAP interaction plot for the GRU model with respect to (<b>a</b>) the Niño3.4 and SSH_KOE indices, (<b>b</b>) the Niño3.4 index and the NPI, and (<b>c</b>) the SSH_KOE index and the NPI. Each point represents an example of the testing set.</p>
Full article ">
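The SHAP plots cataloged in the captions above all derive from one quantity: the Shapley value of each predictor for a given prediction. As a minimal, library-free illustration (not the paper's pipeline, which explains a trained GRU), the sketch below computes exact Shapley values by enumerating feature coalitions for a toy linear "PDO predictor"; the three features are hypothetical stand-ins for climate indices such as Niño3.4, the NPI, and SSH_KOE.

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley values for a single sample by enumerating all
    feature coalitions. Features absent from a coalition are replaced
    by their baseline value. Only feasible for a handful of features."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for s in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_i = [x[j] if (j in s or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in s else baseline[j] for j in range(n)]
                phi[i] += weight * (predict(with_i) - predict(without_i))
    return phi

# A toy linear "PDO predictor"; weights and inputs are invented.
predict = lambda z: 0.6 * z[0] - 0.3 * z[1] + 0.1 * z[2]
phi = shapley_values(predict, x=[1.0, 2.0, 3.0], baseline=[0.0, 0.0, 0.0])
```

For a linear model with a zero baseline, each Shapley value reduces to weight times input, and the values sum to the difference between the prediction and the baseline prediction (the "efficiency" property the force plots visualize).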
17 pages, 6882 KiB  
Article
A New Retrieval Algorithm of Fractional Snow over the Tibetan Plateau Derived from AVH09C1
by Hang Yin, Liyan Xu and Yihang Li
Remote Sens. 2024, 16(13), 2260; https://doi.org/10.3390/rs16132260 (registering DOI) - 21 Jun 2024
Abstract
Snow cover products are primarily derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) and Advanced Very-High-Resolution Radiometer (AVHRR) datasets. MODIS achieves both snow/non-snow discrimination and fractional snow cover retrieval, while early AVHRR-based snow cover products only focused on snow/non-snow discrimination. The AVHRR Climate Data Record (AVHRR-CDR) provides a nearly 40-year global dataset that has the potential to fill the gap in long-term fractional snow cover monitoring. Our study selects the Qinghai–Tibet Plateau as the experimental area, utilizing AVHRR-CDR surface reflectance data (AVH09C1) and calibrating with the MODIS snow product MOD10A1. Snow cover percentage retrieval from the AVHRR dataset is performed using the surface reflectance at 0.64 μm (SR1) and at 0.86 μm (SR2), along with a simulated Normalized Difference Snow Index (NDSI) model. To assess the effects of land-cover type and topography on snow inversion, we tested the accuracy of the algorithm both without and with these influences (the vanilla and the improved algorithm, respectively). The accuracy of the AVHRR snow cover percentage product is evaluated using MOD10A1, ground snow-depth measurements, and ERA5. The results indicate that the logistic model based on NDSI has the best fitting effect, with R-squared and RMSE values of 0.83 and 0.10, respectively. Accuracy improved after the effects of land-cover type and topography were taken into account. The model is validated against MOD10A1 snow-covered areas, showing snow cover area differences of less than 4% across six temporal phases. The improved algorithm shows better consistency with MOD10A1 than the vanilla algorithm. Moreover, the RMSE is higher when the elevation is below 2000 m or above 6000 m, and lower when the slope is between 16° and 20°.
Using ground snow-depth measurements as ground truth, the multi-year recall rates are mostly above 0.7, with an average recall rate of 0.81. The results also show a high degree of consistency with ERA5. The validation results demonstrate that the AVHRR snow cover percentage remote sensing product proposed in this study exhibits high accuracy in the Tibetan Plateau region, and that land-cover type and topographic factors are important to the algorithm. Our study lays the foundation for a global snow cover percentage product based on AVHRR-CDR and for generating a long-term AVHRR–MODIS fractional snow cover dataset. Full article
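The retrieval chain in the abstract maps the two AVHRR reflective bands to a simulated NDSI and then to fractional snow cover through a logistic curve. The sketch below illustrates that idea only: the band-ratio proxy and the logistic coefficients `a` and `b` are placeholder assumptions, since the paper fits its model against MOD10A1 (reported R-squared 0.83, RMSE 0.10) rather than using these values.

```python
import numpy as np

def pseudo_ndsi(sr1, sr2):
    """Simulated NDSI proxy from AVHRR surface reflectances:
    SR1 = 0.64 um (red), SR2 = 0.86 um (NIR). AVHRR lacks the SWIR
    band of the classical NDSI, so a two-band ratio stands in for it."""
    sr1, sr2 = np.asarray(sr1, float), np.asarray(sr2, float)
    return (sr1 - sr2) / (sr1 + sr2 + 1e-12)

def fsc_logistic(ndsi, a=10.0, b=-1.0):
    """Logistic mapping from the NDSI proxy to fractional snow cover
    in (0, 1). The coefficients a, b are illustrative placeholders,
    not the values fitted in the paper."""
    return 1.0 / (1.0 + np.exp(-(a * np.asarray(ndsi, float) + b)))
```

The logistic form guarantees a bounded, monotonic fraction, which is why it outperformed the linear fits of the individual bands in the study.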
Figure 1
<p>Study area and test areas for regression.</p>
Figure 2
<p>Flowchart of AVHRR-FSC generation and accuracy analysis in this study.</p>
Figure 3
<p>Linear fitting of SR1, SR2 and NDSI<sub>AVHRR</sub> with MOD10A1 for training data. The blue line represents the fit line, and the dashed line represents the perfect prediction line. (<b>a</b>) Linear fitting of SR1 with MOD10A1; (<b>b</b>) linear fitting of SR2 with MOD10A1; (<b>c</b>) linear fitting of NDSI<sub>AVHRR</sub> with MOD10A1.</p>
Figure 4
<p>Scatter density plots of the logistic fitting results after considering elevation, slope and BT4, comparing predicted values with MOD10A1. The four subplots correspond to the uncategorized, grass, forest and bareland vegetation cover types. The dashed line represents the perfect prediction line.</p>
Figure 5
<p>Comparison of AVHRR_FSC (the vanilla algorithm) and MODIS_FSC mapping; AVHRR_FSC and MODIS_FSC stand for total snow cover area, and the RMSE is calculated per pixel. 1, 2, and 3 correspond to the three test areas.</p>
Figure 6
<p>Comparison of AVHRR_FSC* (the improved algorithm) and MODIS_FSC mapping; AVHRR_FSC* and MODIS_FSC stand for total snow cover area, and the RMSE is calculated per pixel. 1, 2, and 3 correspond to the three test areas.</p>
Figure 7
<p>Trends in the RMSE of the AVHRR FSC inversion results with elevation and slope variation, using MOD10A1 as the ground truth. RMSE and AVHRR_FSC refer to the vanilla algorithm; RMSE* and AVHRR_FSC* refer to the improved algorithm. (<b>a</b>) Trends in RMSE of the AVHRR FSC inversion results with elevation; (<b>b</b>) trends in RMSE of the AVHRR FSC inversion results with slope.</p>
Figure 8
<p>Trends in the RMSE of the AVHRR FSC inversion results for different vegetation types, using MOD10A1 as the ground truth. RMSE and AVHRR_FSC refer to the vanilla algorithm; RMSE* and AVHRR_FSC* refer to the improved algorithm.</p>
Figure 9
<p>Snow cover pixel count of AVHRR-FSC* compared to ERA5. AVHRR-FSC* is the result of the improved algorithm. (<b>a</b>) Comparison of daily snow cover pixel counts; (<b>b</b>) consistency between the two datasets.</p>
17 pages, 2420 KiB  
Article
Estimation of the Wind Field with a Single High-Frequency Radar
by Abïgaëlle Dussol and Cédric Chavanne
Remote Sens. 2024, 16(13), 2258; https://doi.org/10.3390/rs16132258 (registering DOI) - 21 Jun 2024
Abstract
Over several decades, high-frequency (HF) radars have been employed to remotely measure various ocean surface parameters, including surface currents, waves, and winds. Wind direction and speed are usually estimated from both first-order and second-order Bragg-resonant scatter from two or more HF radars monitoring the same area of the ocean surface, which limits the observational domain to the common area where second-order scatter is available from at least two radars. Here, we propose to estimate wind direction and speed from the first-order scatter of a single HF radar, yielding the same spatial coverage as for surface radial currents. Wind direction is estimated using the ratio of the positive and negative first-order Bragg peak intensities, with a new, simple algorithm to remove the left/right directional ambiguity from a single HF radar. Wind speed is estimated from wind direction and de-tided surface radial currents using an artificial neural network trained with in situ wind speed observations. Radar-derived wind estimates are compared with in situ observations in the Lower Saint-Lawrence Estuary (Quebec, Canada). The correlation coefficients between radar-estimated and in situ wind directions range from 0.84 to 0.95 for Wellen Radars (WERAs) and from 0.79 to 0.97 for Coastal Ocean Dynamics Applications Radars (CODARs), while the root mean square differences range from 8° to 12° for WERAs and from 10° to 19° for CODARs. Correlation coefficients between the radar-estimated and the in situ wind speeds range from 0.89 to 0.93 for WERAs and from 0.81 to 0.93 for CODARs, while the root mean square differences range from 1.3 m.s−1 to 2.3 m.s−1 for WERAs and from 1.6 m.s−1 to 3.9 m.s−1 for CODARs. Full article
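Both quantities in the abstract rest on the first-order Bragg mechanism: ocean waves of half the radar wavelength produce two Doppler peaks whose frequency follows from the deep-water dispersion relation, and whose intensity ratio depends on the wind direction relative to the radar beam. The sketch below computes the Bragg frequency and a peak ratio under a cos^(2s)(θ/2) directional spreading model; that spreading model is a common textbook choice, not necessarily the exact relation fitted in the paper.

```python
import math

G = 9.81      # gravitational acceleration, m/s^2
C = 2.998e8   # speed of light, m/s

def bragg_frequency(radar_freq_hz):
    """First-order Bragg frequency in deep water: waves of half the
    radar wavelength are resonant, so k_B = 4*pi*f_radar/c and
    f_B = sqrt(g * k_B) / (2*pi)."""
    k_bragg = 4.0 * math.pi * radar_freq_hz / C
    return math.sqrt(G * k_bragg) / (2.0 * math.pi)

def bragg_peak_ratio_db(theta_deg, s=2.0):
    """Ratio (dB) of receding to approaching Bragg peak intensity for a
    wind at angle theta from the radar look direction, assuming a
    cos^(2s)(theta/2) directional spreading of the Bragg waves."""
    t = math.radians(theta_deg) / 2.0
    approaching = math.cos(t) ** (2 * s)
    receding = math.sin(t) ** (2 * s)
    return 10.0 * math.log10((receding + 1e-12) / (approaching + 1e-12))

# The study's 16.15 MHz WERA resolves Bragg peaks near +/-0.41 Hz:
print(round(bragg_frequency(16.15e6), 3))
```

A wind blowing at a right angle to the beam gives equal peaks (0 dB), which is why a single look direction leaves a left/right ambiguity that the paper resolves with its histogram-of-solutions algorithm.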
Graphical abstract
Figure 1
<p>An example of an HF radar sea echo Doppler spectrum from the WERA W2 (16.15 MHz) showing the first-order Bragg peaks. <math display="inline"><semantics> <msub> <mi>f</mi> <mrow> <mi>B</mi> <mi>r</mi> <mi>a</mi> <mi>g</mi> <mi>g</mi> </mrow> </msub> </semantics></math> is the Bragg frequency, and <math display="inline"><semantics> <mrow> <mo>Δ</mo> <mi>f</mi> </mrow> </semantics></math> is the offset of the first-order peak relative to the Bragg frequency. The SNRs of the first-order peaks are indicated.</p>
Figure 2
<p>The distributions of surface wave energy relative to the wind direction angle are illustrated for scenarios with the wind blowing towards (<b>left</b>), at a right angle to (<b>middle</b>), and away from (<b>right</b>) the radar look direction. The backscatter spectra below illustrate the relative heights of the approaching (<math display="inline"><semantics> <mrow> <mo>+</mo> <msub> <mi mathvariant="normal">f</mi> <mi>B</mi> </msub> </mrow> </semantics></math>) and receding (<math display="inline"><semantics> <mrow> <mo>−</mo> <msub> <mi mathvariant="normal">f</mi> <mi>B</mi> </msub> </mrow> </semantics></math>) Bragg peaks for each corresponding case. This figure is adapted from Fernandez et al. (1997) [<a href="#B3-remotesensing-16-02258" class="html-bibr">3</a>].</p>
Figure 3
<p>Schematic of the removal of the left/right wind directional ambiguity using a single HF radar. The gray area is used to obtain the histogram of all possible wind directions.</p>
Figure 4
<p>Histograms of (<b>a</b>) the left solutions, (<b>b</b>) the right solutions and (<b>c</b>) both solutions (left and right). The modal direction is indicated by the vertical line in (<b>c</b>).</p>
Figure 5
<p>Training time and SI (relative to buoy PMZA-Riki observations) for different ANN architectures.</p>
Figure 6
<p>ANN structure used for wind field estimations with a single HF radar.</p>
Figure 7
<p>(<b>a</b>) Map of the Gulf and (<b>b</b>) the lower Estuary of Saint Lawrence. The black-outlined rectangle in (<b>a</b>) delimits the study area in the lower Saint Lawrence Estuary. The instrument locations are indicated, and the wind field predicted by a numerical model for 1 January 2017 at 23h00 is shown by the blue arrows. The black lines indicate isobaths.</p>
Figure 8
<p>Correlation between radar-estimated and in situ wind direction (blue lines) and number of data points (red lines) as a function of SNR threshold values, for W2 WERA (solid lines) and C1 CODAR (dashed lines).</p>
Figure 9
<p>Scatterplots of (<b>a</b>,<b>c</b>) wind direction and (<b>b</b>,<b>d</b>) wind speed estimated from (<b>a</b>,<b>b</b>) W1 WERA and (<b>c</b>,<b>d</b>) C2 CODAR versus in situ measurements in summer 2013.</p>
Figure 10
<p>Time series of (<b>a</b>) wind direction and (<b>b</b>) wind speed in winter 2016–2017 from in situ observations (red), the numerical model HRDPS (blue) and W2 WERA (black).</p>
Figure 11
<p>Spatial distribution of (<b>a</b>) modulus and (<b>b</b>) phase of complex correlation coefficients, <span class="html-italic">R</span>, between wind velocities estimated from W2 WERA and simulated by HRDPS for the winter 2016–2017.</p>
Figure 12
<p>Winds estimated from W1 (blue arrows), W2 (red arrows), C1 (green arrows), C2 (black arrows), measured at the Bic station (magenta arrow), and simulated by the HRDPS model (gray arrows) at 11h00, 27 January 2017.</p>
Figure 13
<p>Wind speed projected onto the radar direction, shown in black for the meteorological station and in red for the ANN, versus the residual radial current from W2 in winter 2016–2017.</p>
Figure 14
<p>Median absolute difference between in situ and radar wind directions versus wind speed in winter 2017 (black line) and summer 2013 (red line). The dashed line indicates a threshold value of 5 degrees.</p>
26 pages, 9310 KiB  
Article
Discrimination of Degraded Pastures in the Brazilian Cerrado Using the PlanetScope SuperDove Satellite Constellation
by Angela Gabrielly Pires Silva, Lênio Soares Galvão, Laerte Guimarães Ferreira Júnior, Nathália Monteiro Teles, Vinícius Vieira Mesquita and Isadora Haddad
Remote Sens. 2024, 16(13), 2256; https://doi.org/10.3390/rs16132256 (registering DOI) - 21 Jun 2024
Abstract
Pasture degradation poses significant economic, social, and environmental impacts in the Brazilian savanna ecosystem. Despite these impacts, effectively detecting varying intensities of agronomic and biological degradation through remote sensing remains challenging. This study explores the potential of the eight-band PlanetScope SuperDove satellite constellation to discriminate between five classes of pasture degradation: non-degraded pasture (NDP); pastures with low- (LID) and moderate-intensity degradation (MID); severe agronomic degradation (SAD); and severe biological degradation (SBD). Using a set of 259 cloud-free images acquired in 2022 across five sites located in central Brazil, the study aims to: (i) identify the most suitable period for discriminating between various degradation classes; (ii) evaluate the Random Forest (RF) classification performance of different SuperDove attributes; and (iii) compare metrics of accuracy derived from two predicted scenarios of pasture degradation: a more challenging one involving five classes (NDP, LID, MID, SAD, and SBD), and another considering only non-degraded and severely degraded pastures (NDP, SAD, and SBD). The study assessed individual and combined sets of SuperDove attributes, including band reflectance, vegetation indices, endmember fractions from spectral mixture analysis (SMA), and image texture variables from Gray-level Co-occurrence Matrix (GLCM). The results highlighted the effectiveness of the transition from the rainy to the dry season and the period towards the beginning of a new seasonal rainy cycle in October for discriminating pasture degradation. In comparison to the dry season, more favorable discrimination scenarios were observed during the rainy season. In the dry season, increased amounts of non-photosynthetic vegetation (NPV) complicate the differentiation between NDP and SBD, which is characterized by high soil exposure. 
Pastures exhibiting severe biological degradation showed greater sensitivity to water stress, manifesting earlier reflectance changes in the visible and near-infrared bands of SuperDove compared to other classes. Reflectance-based classification yielded higher overall accuracy (OA) than the approaches using endmember fractions, vegetation indices, or texture metrics. Classifications using combined attributes achieved an OA of 0.69 and 0.88 for the five-class and three-class scenarios, respectively. In the five-class scenario, the highest F1-scores were observed for NDP (0.61) and the classes of agronomic (0.71) and biological (0.88) degradation, indicating the challenges in separating low and moderate stages of pasture degradation. An initial comparison of RF classification results for the five categories of degraded pastures, utilizing reflectance data from the MultiSpectral Instrument (MSI)/Sentinel-2 (400–2500 nm) and SuperDove (400–900 nm), demonstrated an enhanced OA (0.79 versus 0.66) with Sentinel-2 data. This enhancement is likely attributable to the inclusion of shortwave infrared (SWIR) spectral bands in the data analysis. Our findings highlight the potential of satellite constellation data, acquired at high spatial resolution, for the remote identification of pasture degradation. Full article
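Among the attributes fed to the Random Forest are vegetation indices computed from SuperDove band reflectances. A short sketch of two of them follows; the NDVI formula is standard, and the EVI coefficients shown are the common MODIS-heritage constants, assumed here rather than taken from the paper.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red + 1e-12)

def evi(nir, red, blue):
    """Enhanced Vegetation Index with the standard MODIS-heritage
    coefficients (G=2.5, C1=6, C2=7.5, L=1); assumed, not confirmed
    as the exact constants used in the study."""
    nir, red, blue = (np.asarray(a, float) for a in (nir, red, blue))
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)

# Degraded pastures expose more soil, which depresses NIR and raises
# red reflectance, lowering both indices (reflectances are invented):
print(round(float(ndvi(0.45, 0.08)), 3))  # dense, non-degraded canopy
print(round(float(ndvi(0.25, 0.15)), 3))  # sparse, degraded cover
```

Each index compresses several bands into one attribute, which is why the study compares index-based classifications against raw band reflectance and endmember fractions.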
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)
Figure 1
<p>Summary of the methodology used in the current work to discriminate pasture degradation with SuperDove satellite constellation data.</p>
Figure 2
<p>Location of the five sites (15 × 15 km each) selected in the southeastern region of the Brazilian state of Goiás in a climatically homogeneous region. The insets show photographs representative of non-degraded pasture (NDP) and of pastures with low-intensity degradation (LID), moderate-intensity degradation (MID), severe agronomic degradation (SAD), and severe biological degradation (SBD). The sites are numbered according to the municipality where they are located: 1. Bela Vista de Goiás; 2. Caldas Novas; 3. Piracanjuba; 4. Pontalina; and 5. Trindade. Long-term monthly precipitation (average values between 2001 and 2021) and the dry season period are also indicated for reference.</p>
Figure 3
<p>Frequency of cloud-free images captured by the SuperDove satellite constellation in 2022 for each of the five selected sites targeted for analysis. The dry season period is indicated for reference.</p>
Figure 4
<p>False color composites generated from SuperDove imagery, illustrating visual distinctions between plots of biologically degraded (SBD) and non-degraded (NDP) pastures across various seasonal stages. Notable time points include the rainy season (DOY 62 and 324 in (<b>a</b>,<b>e</b>)), the transition from the rainy to the dry season (DOY 153 in (<b>b</b>)), the middle of the dry season (DOY 227 in (<b>c</b>)), and the transition from the dry to the rainy season (DOY 273 in (<b>d</b>)). SuperDove bands 8 (NIR), 7 (red-edge) and 6 (red) are shown in red, green and blue colors, respectively.</p>
Figure 5
<p>Seasonal variations in mean reflectance for both (<b>a</b>) red and (<b>b</b>) near-infrared (NIR) bands (SuperDove bands 6 and 8) across different pasture degradation classes. The symbols within the profiles denote data acquisition in 2022 through the satellite constellation. Class abbreviations are defined in the text.</p>
Figure 6
<p>Seasonal variation in the Mahalanobis distance for discriminating areas of Severe Agronomic Degradation (SAD) from those with Low- (LID) and Moderate-intensity (MID) degradation using the eight-band reflectance data from SuperDove.</p>
Figure 7
<p>Endmember reflectance spectra derived from Sequential Maximum Angle Convex Cone (SMACC) for SuperDove data acquired on 2 June (DOY 153) over areas exhibiting varying degrees of pasture degradation across the five studied sites in central Brazil.</p>
Figure 8
<p>Scatterplots illustrating the relationships between (<b>a</b>) NDVI and GRND and (<b>b</b>) EVI and REND for three field-sampled classes of pasture degradation: Non-degraded pasture (NDP) and pastures with severe agronomic (SAD) and biological (SBD) degradation.</p>
Figure 9
<p>False color composites (SuperDove bands 8, 7, and 6 in RGB) illustrating examples of the five classes of pasture degradation (NDP, LID, MID, SAD, and SBD) are presented on the left side of the figure. In the middle panel, color composites of green vegetation (GV1), GV2, and soil (S) fraction images in RGB are displayed. Lastly, NDVI images are presented on the right side.</p>
Figure 10
<p>Variations in Gray Level Co-occurrence Matrix (GLCM) texture metrics, specifically (<b>a</b>) texture mean and (<b>b</b>) texture variance, calculated from the Near-Infrared (NIR) band 8 of SuperDove for the five classes of pasture degradation.</p>
Figure 11
<p>Variations in Precision, Recall, F1-score, and Overall Accuracy (OA) resulting from the Random Forest (RF) classification of five classes of pasture degradation (NDP, LID, MID, SAD, and SBD). The classifier utilized individual attributes, including the reflectance of the eight SuperDove bands, five vegetation indices (EVI, GRND, MPRI, NDVI, and REND), and four-endmember fractions from Spectral Mixture Analysis (SMA) (GV1, GV2, soil, and shade). Results from GLCM texture metrics were excluded for enhanced graphical representation. The reported results refer to the validation dataset.</p>
Figure 12
<p>Percentage of importance assigned to each variable in the Random Forest (RF) classification of five classes of pasture degradation (NDP, LID, MID, SAD, and SBD).</p>
Figure 13
<p>(<b>a</b>) Ground truth map and (<b>b</b>) Random Forest classification of degraded and non-degraded pastures. Classification uncertainties are depicted in (<b>c</b>). The abbreviations are defined in the text.</p>
Figure 14
<p>Variations in F1-score and Overall Accuracy (OA) resulting from the Random Forest (RF) classification of five classes of pasture degradation (NDP, LID, MID, SAD, and SBD) using the combined sets of attributes. Results are presented for two dates representing the transition from the rainy to the dry season (DOY 153 in June; blue color results) and the middle of the dry season (DOY 227 in August; red color results). The reported results refer to the validation dataset.</p>
Figure 15
<p>Average reflectance spectra from OLI/Landsat-8 data acquired over field-surveyed plots representing non-degraded pastures and pastures experiencing severe agronomical or biological degradation. The results are presented for various dates during the year 2021, specifically during (<b>a</b>) the transition from the rainy to the dry season, (<b>b</b>) the middle of the dry season, and (<b>c</b>) after the occurrence of the first rainfall events in the new seasonal cycle in October.</p>
Figure 16
<p>Variations in F1-score and Overall Accuracy (OA) resulting from the Random Forest (RF) classification of five classes of pasture degradation (NDP, LID, MID, SAD, and SBD) using reflectance data of 10 bands (10-m and 20-m spatial resolution) from the Multispectral Instrument (MSI)/Sentinel-2 (400–2500 nm) and eight bands from the SuperDove (400–900 nm). Images from both instruments were acquired on approximately coincident dates (2 and 4 June 2022). The reported results in blue (SuperDove) and red (MSI/Sentinel-2) colors refer to the validation dataset.</p>
30 pages, 20731 KiB  
Article
Automatic Classification of Submerged Macrophytes at Lake Constance Using Laser Bathymetry Point Clouds
by Nike Wagner, Gunnar Franke, Klaus Schmieder and Gottfried Mandlburger
Remote Sens. 2024, 16(13), 2257; https://doi.org/10.3390/rs16132257 (registering DOI) - 21 Jun 2024
Abstract
Submerged aquatic vegetation, also referred to as submerged macrophytes, provides important habitats and serves as a significant ecological indicator for assessing the condition of water bodies and for gaining insights into the impacts of climate change. In this study, we introduce a novel approach for the classification of submerged vegetation captured with bathymetric LiDAR (Light Detection And Ranging) as a basis for monitoring their state and change, and we validated the results against established monitoring techniques. Employing full-waveform airborne laser scanning, which is routinely used for topographic mapping and forestry applications on dry land, we extended its application to the detection of underwater vegetation in Lake Constance. The primary focus of this research lies in the automatic classification of bathymetric 3D LiDAR point clouds using a decision-based approach, distinguishing the three vegetation classes, (i) Low Vegetation, (ii) High Vegetation, and (iii) Vegetation Canopy, based on their height and other properties like local point density. The results reveal detailed 3D representations of submerged vegetation, enabling the identification of vegetation structures and the inference of vegetation types with reference to pre-existing knowledge. While the results within the training areas demonstrate high precision and alignment with the comparison data, the findings in independent test areas exhibit certain deficiencies that are likely addressable through corrective measures in the future. Full article
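The classification step described above assigns each bathymetric LiDAR point to one of the classes Low Vegetation, High Vegetation, or Vegetation Canopy from its height and properties like local point density. The toy decision rules below only illustrate the shape of such a decision-based scheme; the threshold values and the density criterion are invented for illustration, not the published rules.

```python
import numpy as np

def classify_vegetation(height_above_bottom, local_density,
                        low_max=0.5, canopy_density=20.0):
    """Toy decision rules in the spirit of the paper's approach:
    points are split by height above the lake bottom (m), and tall
    vegetation with a dense local neighbourhood (points/m^3) is
    labelled canopy. All thresholds here are illustrative."""
    h = np.asarray(height_above_bottom, float)
    d = np.asarray(local_density, float)
    labels = np.where(h <= low_max, "Low Vegetation", "High Vegetation")
    labels = np.where((h > low_max) & (d >= canopy_density),
                      "Vegetation Canopy", labels)
    return labels

# Three hypothetical points: short plant, tall sparse plant, dense canopy top.
print(classify_vegetation([0.2, 1.4, 2.1], [5.0, 8.0, 35.0]))
```

Because the rules are per-point and vectorized, they scale directly to full point clouds; the paper's pipeline adds further attributes on top of this basic height/density logic.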