
Sensors, Volume 15, Issue 3 (March 2015) – 121 articles, Pages 4605-7083

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
Article
A Hybrid PCA-CART-MARS-Based Prognostic Approach of the Remaining Useful Life for Aircraft Engines
by Fernando Sánchez Lasheras, Paulino José García Nieto, Francisco Javier De Cos Juez, Ricardo Mayo Bayón and Victor Manuel González Suárez
Sensors 2015, 15(3), 7062-7083; https://doi.org/10.3390/s150307062 - 23 Mar 2015
Cited by 36 | Viewed by 8328
Abstract
Prognostics is an engineering discipline that predicts the future health of a system. In this research work, a data-driven approach for prognostics is proposed. Indeed, the present paper describes a data-driven hybrid model for the successful prediction of the remaining useful life of aircraft engines. The approach combines the multivariate adaptive regression splines (MARS) technique with the principal component analysis (PCA), dendrograms and classification and regression trees (CARTs). Elements extracted from sensor signals are used to train this hybrid model, representing different levels of health for aircraft engines. In this way, this hybrid algorithm is used to predict the trends of these elements. Based on this fitting, one can determine the future health state of a system and estimate its remaining useful life (RUL) with accuracy. To evaluate the proposed approach, a test was carried out using aircraft engine signals collected from physical sensors (temperature, pressure, speed, fuel flow, etc.). Simulation results show that the PCA-CART-MARS-based approach can forecast faults long before they occur and can predict the RUL. The proposed hybrid model presents as its main advantage the fact that it does not require information about the previous operation states of the input variables of the engine. The performance of this model was compared with those obtained by other benchmark models (multivariate linear regression and artificial neural networks) also applied in recent years for the modeling of remaining useful life. Therefore, the PCA-CART-MARS-based approach is very promising in the field of prognostics of the RUL for aircraft engines. Full article
(This article belongs to the Section Physical Sensors)
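As a rough sketch of the kind of hybrid pipeline this abstract describes (dimensionality reduction followed by a regression tree on the retained components), the following Python fragment may help. It is illustrative only: the data are synthetic, the MARS stage is omitted, and none of the names correspond to the authors' implementation.

```python
# Illustrative PCA + regression-tree stage of a hybrid RUL pipeline.
# Synthetic data; the MARS stage and all names are placeholders, not the
# authors' implementation.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 21))                 # 21 simulated sensor channels
rul = 300.0 - 40.0 * X[:, 6] - 25.0 * X[:, 11] + rng.normal(0, 5, 500)

pca = PCA(n_components=4).fit(X)               # dimensional reduction
Z = pca.transform(X)

cart = DecisionTreeRegressor(max_depth=4).fit(Z, rul)   # CART stage
print("toy RUL estimate:", round(cart.predict(pca.transform(X[:1]))[0], 1))
```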
Graphical abstract
Figure 1: Simplified diagram of the engine simulated [13] (LPC: low-pressure compressor; HPC: high-pressure compressor; LPT: low-pressure turbine; HPT: high-pressure turbine; N1: turbine axis; N2: turbine shaft).
Figure 2: Flowchart of the proposed algorithm.
Figure 3: Dendrogram of the remaining useful life variable and the other variables after the dimensional reduction.
Figure 4: Regression tree of the remaining useful life variable using as input variables Sensor.Measurement7, Sensor.Measurement12, Sensor.Measurement20 and Sensor.Measurement21.
Figure 5: Steadiness of the hybrid model compared with the two benchmark techniques.
Figure 6: Risk level of the hybrid model compared with the two benchmark techniques.
Figure 7: Remaining useful life of one of the validation subsets versus the remaining useful life calculated by the hybrid model.
Article
A Robust Trust Establishment Scheme for Wireless Sensor Networks
by Farruh Ishmanov, Sung Won Kim and Seung Yeob Nam
Sensors 2015, 15(3), 7040-7061; https://doi.org/10.3390/s150307040 - 23 Mar 2015
Cited by 41 | Viewed by 5555
Abstract
Security techniques like cryptography and authentication can fail to protect a network once a node is compromised. Hence, trust establishment continuously monitors and evaluates node behavior to detect malicious and compromised nodes. However, just like other security schemes, trust establishment is also vulnerable to attack. Moreover, malicious nodes might misbehave intelligently to trick trust establishment schemes. Unfortunately, attack-resistance and robustness issues with trust establishment schemes have not received much attention from the research community. Considering the vulnerability of trust establishment to different attacks and the unique features of sensor nodes in wireless sensor networks, we propose a lightweight and robust trust establishment scheme. The proposed trust scheme is lightweight thanks to a simple trust estimation method. The comprehensiveness and flexibility of the proposed trust estimation scheme make it robust against different types of attack and misbehavior. Performance evaluation under different types of misbehavior and on-off attacks shows that the detection rate of the proposed trust mechanism is higher and more stable compared to other trust mechanisms. Full article
(This article belongs to the Section Sensor Networks)
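The core idea, estimating trust from the misbehavior frequency recorded over recent time windows, can be sketched in a few lines. The window length, window count and alarm threshold below are assumptions for illustration, not the authors' exact scheme.

```python
# Illustrative time-window trust estimate: trust is derived from the
# misbehavior frequency observed in the most recent windows. Parameters
# are assumed, not taken from the paper.
from collections import deque

class WindowedTrust:
    def __init__(self, window_len=10, num_windows=5):
        self.windows = deque(maxlen=num_windows)  # each entry: [misbehaviors, total]
        self.window_len = window_len
        self.current = [0, 0]

    def observe(self, misbehaved: bool):
        self.current[0] += int(misbehaved)
        self.current[1] += 1
        if self.current[1] == self.window_len:    # close the current window
            self.windows.append(self.current)
            self.current = [0, 0]

    def trust(self) -> float:
        bad = sum(w[0] for w in self.windows)
        total = sum(w[1] for w in self.windows)
        return 1.0 if total == 0 else 1.0 - bad / total

node = WindowedTrust()
for i in range(50):
    node.observe(misbehaved=(i % 7 == 0))
print(f"trust = {node.trust():.2f}")   # flag the node if trust falls below a threshold
```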
Figure 1: Example of the time-window mechanism.
Figure 2: Application of the time-window mechanism in managing and recording observations.
Figure 3: Misbehavior frequency estimation using a time-window mechanism.
Figure 4: Generated number of misbehaviors.
Figure 5: Misbehavior detection (p ≥ 0.6).
Figure 6: Misbehavior detection (p ≥ 0.5).
Figure 7: False-positive alarm rate.
Figure 8: False-negative alarm rate.
Figure 9: On-off attack detection (probability of an on period is 0.6).
Figure 10: On-off attack detection (probability of an on period is 0.4).
Figure 11: On-off attack detection (probability of an on period is 0.2).
Figure 12: Total incidents of good and bad behavior in three types of on-off attack.
Figure 13: On-off attack detection rate.
Article
Improving the Precision and Speed of Euler Angles Computation from Low-Cost Rotation Sensor Data
by Aleš Janota, Vojtech Šimák, Dušan Nemec and Jozef Hrbček
Sensors 2015, 15(3), 7016-7039; https://doi.org/10.3390/s150307016 - 23 Mar 2015
Cited by 50 | Viewed by 14779
Abstract
This article compares three different algorithms used to compute Euler angles from data obtained by an angular rate sensor (e.g., a MEMS gyroscope): algorithms based on a rotational matrix, on transforming angular velocity to time derivatives of the Euler angles, and on a unit quaternion expressing rotation. The algorithms are compared by their computational efficiency and the accuracy of the Euler angle estimation. If the attitude of the object is computed only from gyroscope data, the quaternion-based algorithm seems most suitable (having similar accuracy to the matrix-based algorithm, but taking approx. 30% fewer clock cycles on an 8-bit microcomputer). Integration of the Euler angles' time derivatives has a singularity and is therefore not accurate over the full range of the object's attitude. Since the error in every real gyroscope system tends to increase with time due to its offset and thermal drift, we also propose some compensation measures based on additional sensors (a magnetic compass and accelerometer). The vector data of these secondary sensors have to be transformed into the inertial frame of reference. While transforming a vector by the matrix is slightly faster than doing the same by quaternion, the compensated sensor system utilizing a matrix-based algorithm can be approximately 10% faster than the system utilizing quaternions (depending on implementation and hardware). Full article
(This article belongs to the Special Issue Modeling, Testing and Reliability Issues in MEMS Engineering)
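For readers unfamiliar with the quaternion-based approach the abstract singles out, a minimal first-order integration of body rates into an attitude quaternion looks roughly like this (illustrative only; a real system would add the drift compensation the paper discusses):

```python
# Minimal sketch of quaternion-based attitude integration from gyro rates.
# First-order update with renormalization; illustrative, not the paper's code.
import numpy as np

def quat_mult(a, b):
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2])

def integrate(q, omega, dt):
    """q: unit quaternion; omega: body angular rate [rad/s]."""
    q = q + 0.5 * dt * quat_mult(q, np.concatenate(([0.0], omega)))
    return q / np.linalg.norm(q)          # renormalize to suppress drift

q = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(1000):                     # 1 s of a 90 deg/s roll at 1 kHz
    q = integrate(q, np.array([np.pi / 2, 0.0, 0.0]), 1e-3)
roll = np.degrees(np.arctan2(2 * (q[0]*q[1] + q[2]*q[3]),
                             1 - 2 * (q[1]**2 + q[2]**2)))
print(f"roll = {roll:.1f} deg")           # expect about 90 deg
```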
Figure 1: Orientation of the coordinate system axes.
Figure 2: Euler angles for the 3-2-1 convention.
Figure 3: Schematics expressing the principle of real-time gyroscope data processing.
Figure 4: Half turn of the simulated precession movement.
Figure 5: Euler angles during one turn of the simulated movement.
Figure 6: Precise version of the algorithm based on a rotation matrix.
Figure 7: Errors of the matrix-based algorithms during the simulated movement.
Figure 8: The algorithm based on integration of the Euler angle rates.
Figure 9: Relation between error and sampling frequency in the algorithm based on the integration of the Euler angle rates.
Figure 10: Relation between error and the initial pitch angle β0 at f_sample = 1000 Hz for the Euler angle rates-based algorithm.
Figure 11: Principle of the quaternion-based algorithm.
Figure 12: Errors of the quaternion-based algorithms during the simulated movement.
Figure 13: Accelerometer and magnetic compass readings at non-zero pitch β and yaw γ. The acceleration a_acc is measured by the on-board accelerometer as the sum of the gravity acceleration g and the object's acceleration a. The Earth's magnetic field induction B has inclination θ and declination δ, and its horizontal component points to magnetic north.
Figure 14: Roll and pitch calculation from the measured gravity acceleration. The object pitches up and rolls right (axis x' points forward). The vector g' defines the vertical direction.
Figure 15: Data fusion of the gyroscope, accelerometer and magnetic sensor.
Figure 16: Estimation of the roll angle with a gyroscope offset of 0.1% of full range (500°/s).
Figure 17: Relative error of the roll angle estimation with respect to the gyroscope offset.
Article
Degradation Prediction Model Based on a Neural Network with Dynamic Windows
by Xinghui Zhang, Lei Xiao and Jianshe Kang
Sensors 2015, 15(3), 6996-7015; https://doi.org/10.3390/s150306996 - 23 Mar 2015
Cited by 21 | Viewed by 7356
Abstract
Tracking the degradation of mechanical components is critical for effective maintenance decision making. Remaining useful life (RUL) estimation is a widely used form of degradation prediction. RUL prediction methods for cases where enough run-to-failure condition monitoring data are available have been fully researched, but for some high-reliability components it is very difficult to collect run-to-failure condition monitoring data, i.e., from normal operation to failure; only a certain number of condition indicators over a certain period can be used to estimate the RUL. In addition, some existing prediction methods suffer from poor extrapolability, which blocks RUL estimation: the predicted value converges to a constant or fluctuates within a certain range. Moreover, fluctuating condition features also degrade the prediction. To solve these dilemmas, this paper proposes a RUL prediction model based on a neural network with dynamic windows. The model consists of three steps: window size determination by increasing rate, change point detection, and rolling prediction. The proposed method has two dominant strengths: it does not need to assume that the degradation trajectory follows a certain distribution, and it can adapt to variation of the degradation indicators, which greatly benefits RUL prediction. Finally, the performance of the proposed RUL prediction model is validated with real field data and simulation data. Full article
(This article belongs to the Section Physical Sensors)
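The rolling-prediction step the abstract refers to can be sketched as follows: refit a model on the most recent window, forecast one step ahead, append the forecast, and repeat until a failure threshold is crossed. A linear trend stands in for the paper's neural network, and all parameters are illustrative.

```python
# Toy rolling prediction with a training window. A linear trend replaces the
# paper's neural network; window sizing and change-point logic are omitted.
import numpy as np

def rolling_rul(history, window, threshold, max_steps=10_000):
    data = list(history)
    t = np.arange(window)
    for step in range(1, max_steps + 1):
        recent = np.array(data[-window:])
        slope, intercept = np.polyfit(t, recent, 1)   # refit on the window
        nxt = slope * window + intercept              # one-step-ahead forecast
        data.append(nxt)                              # roll the prediction forward
        if nxt >= threshold:
            return step                               # predicted RUL in steps
    return None

rng = np.random.default_rng(1)
hist = 0.02 * np.arange(200) + rng.normal(scale=0.05, size=200)
print("predicted RUL:", rolling_rul(hist, window=50, threshold=6.0), "steps")
```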
Figure 1: Illustration of the degradation process and RUL prediction.
Figure 2: RUL estimation using rolling prediction.
Figure 3: Simulated degradation indicators of two components. (a) Component 1; (b) Component 2.
Figure 4: Results of rolling prediction. (a) Results fluctuate in a range; (b) results keep the same value.
Figure 5: Illustration of the training window.
Figure 6: Comparison of prediction results using different window sizes (start at 10 h).
Figure 7: Three typical results from small windows: descending, increasing and stable.
Figure 8: Comparison of prediction results using different window sizes (start at 25 h).
Figure 9: Schematic diagram of window adjustment when a change point exists near prediction time t.
Figure 10: Whole process of RUL prediction based on the proposed method.
Figure 11: Trend prediction results for the simulated data of Component 1. (a–d) Prediction results at start times 500, 600, 700 and 800 h.
Figure 12: Trend prediction results for the simulated data of Component 2. (a–d) Prediction results at start times 500, 600, 700 and 800 h.
Figure 13: Prediction results of Component 1 at different time points.
Figure 14: Prediction results of Component 2 at different time points.
Figure 15: Three-dimensional graph of the gearbox test-rig.
Figure 16: Sideband index developed in [31].
Figure 17: Trend prediction results for the gearbox degradation data. (a–d) Prediction results at start times 83.3, 366.7, 433.3 and 516.7 h.
Figure 18: Comparison of real and predicted RUL for the gearbox degradation data.
Figure 19: Explaining the great variance of the predicted RUL values.
Figure 20: Normalized degradation indicator of the left shaft of the helicopter generator.
Figure 21: Trend prediction results for the degradation data. (a–d) Prediction results at start times 17.3, 26.7, 40 and 45.3 h.
Review
The Application of Biomedical Engineering Techniques to the Diagnosis and Management of Tropical Diseases: A Review
by Fatimah Ibrahim, Tzer Hwai Gilbert Thio, Tarig Faisal and Michael Neuman
Sensors 2015, 15(3), 6947-6995; https://doi.org/10.3390/s150306947 - 23 Mar 2015
Cited by 29 | Viewed by 13161
Abstract
This paper reviews a number of biomedical engineering approaches that aid in the detection and treatment of tropical diseases such as dengue, malaria, cholera, schistosomiasis, lymphatic filariasis, Ebola, leprosy, leishmaniasis, and American trypanosomiasis (Chagas). Many different non-invasive approaches are discussed, such as ultrasound, echocardiography and electrocardiography, bioelectrical impedance, optical detection, simplified and rapid serological tests such as lab-on-chip and micro-/nano-fluidic platforms, and medical support systems such as artificial intelligence clinical support systems. The paper also reviews novel clinical diagnosis and management systems using artificial intelligence and bioelectrical impedance techniques for dengue clinical applications. Full article
(This article belongs to the Section Biosensors)
Figure 1: Non-invasive haemoglobin modelling of dengue patients using bioelectric impedance analysis and artificial neural networks.
Figure 2: (a) Visualization of the self-organizing maps for the bioelectric impedance analysis parameters; (b) symptoms and signs data. Reproduced with permission [47].
Figure 3: Graphic interface screen for the prediction of the day of defervescence of fever in dengue patients.
Figure 4: An automatic dengue risk diagnostic system using artificial neural network and bioelectric impedance analysis techniques.
Figure 5: Dengue patient diagnostic system based on an Adaptive Neuro-Fuzzy Inference System.
Figure 6: The magnetic bead-based microfluidic chip, measuring 53 mm × 37 mm. Reproduced with permission [72].
Figure 7: Schematic of the lateral flow strip to diagnose malaria: (top) layout of the strip; (middle) flushing agent is added to help flush parasitized blood along the strip; (bottom) visible lines indicate the presence of antigens in the parasitized blood. Reproduced with permission [89].
Figure 8: Schematic diagram of the process for detection of malaria-infected erythrocytes on a cell microarray chip. (a) Erythrocytes stained with a nuclei-specific fluorescent dye (SYTO 59) for the staining of malaria nuclei are dispersed on a cell microarray chip using a pipette, which leads to the formation of a monolayer of erythrocytes in the microchambers; (b) malaria-infected erythrocytes are detected using a microarray scanner with a confocal fluorescence laser by monitoring fluorescence-positive erythrocytes; (c) the target malaria-infected erythrocytes are analyzed quantitatively at the single-cell level. Reproduced with permission [97] (open access).
Figure 9: Ultrasound B-scans of the abdomen showing changes caused by schistosomiasis. White arrow: central periportal fibrosis; red arrows: fibrosis on the periphery of the liver in a patient diagnosed with advanced hepatosplenic schistosomiasis. Reproduced with permission [112].
Figure 10: MRI: Gamma-Gandy bodies (siderotic nodules) indicated by the arrows labelled "i" in the spleen of a patient diagnosed with hepatosplenic schistosomiasis; the arrow labelled "ii" shows the portal vein. Reproduced with permission [112].
Article
Bayesian Deconvolution for Angular Super-Resolution in Forward-Looking Scanning Radar
by Yuebo Zha, Yulin Huang, Zhichao Sun, Yue Wang and Jianyu Yang
Sensors 2015, 15(3), 6924-6946; https://doi.org/10.3390/s150306924 - 23 Mar 2015
Cited by 78 | Viewed by 7303
Abstract
Scanning radar is of notable importance for ground surveillance, terrain mapping and disaster rescue. However, the angular resolution of a scanning radar image is poor compared to the achievable range resolution. This paper presents a deconvolution algorithm for angular super-resolution in scanning radar based on Bayesian theory, which states that the angular super-resolution can be realized by solving the corresponding deconvolution problem with the maximum a posteriori (MAP) criterion. The algorithm considers that the noise is composed of two mutually independent parts, i.e., a Gaussian signal-independent component and a Poisson signal-dependent component. In addition, the Laplace distribution is used to represent the prior information about the targets under the assumption that the radar image of interest can be represented by the dominant scatters in the scene. Experimental results demonstrate that the proposed deconvolution algorithm has higher precision for angular super-resolution compared with the conventional algorithms, such as the Tikhonov regularization algorithm, the Wiener filter and the Richardson–Lucy algorithm. Full article
(This article belongs to the Section Remote Sensors)
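For context, a compact 1-D Richardson–Lucy deconvolution, one of the baselines cited above, can be written as below. This is not the authors' MAP estimator (which adds a mixed Gaussian and Poisson noise model with a Laplace prior); the antenna pattern and scene are synthetic.

```python
# Sketch of 1-D Richardson-Lucy deconvolution, a classical baseline for
# angular super-resolution. Antenna pattern (PSF) and scene are synthetic.
import numpy as np

def richardson_lucy(blurred, psf, iters=75):
    x = np.full_like(blurred, blurred.mean())     # flat initial estimate
    psf_m = psf[::-1]                             # mirrored PSF
    for _ in range(iters):
        conv = np.convolve(x, psf, mode="same") + 1e-12
        x *= np.convolve(blurred / conv, psf_m, mode="same")
    return x

beam = np.exp(-0.5 * np.linspace(-3, 3, 31) ** 2)   # antenna pattern (PSF)
beam /= beam.sum()
scene = np.zeros(200)
scene[[60, 90, 95, 150]] = [1.0, 0.8, 0.9, 0.6]     # dominant scatterers
echo = np.convolve(scene, beam, mode="same")        # azimuth convolution
echo += np.random.default_rng(2).normal(scale=1e-3, size=echo.size)
restored = richardson_lucy(np.clip(echo, 1e-9, None), beam)
print("indices of the four largest peaks:", np.argsort(restored)[-4:])
```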
Figure 1: Diagram of forward-looking scanning radar used for angular super-resolution: (a) geometry model for scanning radar imaging; (b) noise; (c) data acquisition for scanning radar; (d) echo data after range compression and range cell migration; (e) deconvolution method; (f) angular super-resolution result.
Figure 2: The geometry relationship of scanning radar.
Figure 3: Location of targets in the simulated scene.
Figure 4: The L-curve at different noise levels; the corner corresponds to the point with the maximum curvature.
Figure 5: (a) The echo data with 30 dB Gaussian noise added; (b) angular super-resolution result of the Tikhonov regularization; (c) angular super-resolution result of the Wiener filter; (d) angular super-resolution result of the R-L algorithm with 75 iterations; (e) angular super-resolution result of the proposed method with 75 iterations and λ = 0.1406.
Figure 6: (a) The echo with 20 dB Gaussian noise added; (b) angular super-resolution result of the Tikhonov regularization; (c) angular super-resolution result of the Wiener filter; (d) angular super-resolution result of the Richardson–Lucy (R-L) algorithm with 110 iterations; (e) angular super-resolution result of the proposed method with 110 iterations and λ = 0.4504.
Figure 7: (a) The echo with 15 dB Gaussian noise added; (b) angular super-resolution result of the Tikhonov regularization; (c) angular super-resolution result of the Wiener filter; (d) angular super-resolution result of the R-L algorithm with 150 iterations; (e) angular super-resolution result of the proposed method with 150 iterations and λ = 0.8296.
Figure 8: Relative error performance comparison under various noise levels.
Figure 9: The tested scene.
Article
Radar Imaging of Non-Uniformly Rotating Targets via a Novel Approach for Multi-Component AM-FM Signal Parameter Estimation
by Yong Wang
Sensors 2015, 15(3), 6905-6923; https://doi.org/10.3390/s150306905 - 23 Mar 2015
Cited by 7 | Viewed by 5269
Abstract
A novel radar imaging approach for non-uniformly rotating targets is proposed in this study. It is assumed that the maneuverability of the non-cooperative target is severe, and the received signal in a range cell can be modeled as multi-component amplitude-modulated and frequency-modulated (AM-FM) signals after motion compensation. Then, the modified version of Chirplet decomposition (MCD) based on the integrated high order ambiguity function (IHAF) is presented for the parameter estimation of AM-FM signals, and the corresponding high quality instantaneous ISAR images can be obtained from the estimated parameters. Compared with the MCD algorithm based on the generalized cubic phase function (GCPF) in the authors’ previous paper, the novel algorithm presented in this paper is more accurate and efficient, and the results with simulated and real data demonstrate the superiority of the proposed method. Full article
(This article belongs to the Section Remote Sensors)
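The high-order ambiguity function underlying the method can be illustrated in its simplest case: a lag product turns a linear FM signal into a pure tone whose FFT peak encodes the chirp rate. The parameters below are made up, and the paper's integrated HAF additionally combines many lag pairs.

```python
# Toy high-order ambiguity function (HAF): the lag product of an LFM signal
# is a pure tone whose frequency encodes the chirp rate. Synthetic parameters.
import numpy as np

N, m = 1024, 64                              # samples, HAF lag
b = 1.0e-4                                   # true chirp rate (rad/sample^2)
t = np.arange(N)
s = np.exp(1j * (0.2 * t + b * t ** 2))      # noiseless LFM test signal

haf = s[2 * m:] * np.conj(s[:-2 * m])        # x(k+2m) x*(k): tone at 4*b*m rad/sample
spec = np.abs(np.fft.fft(haf, 8 * N))        # zero-padded spectrum
f_peak = np.fft.fftfreq(8 * N)[np.argmax(spec)]   # peak in cycles/sample
b_hat = np.pi * f_peak / (2 * m)             # invert f = 2*b*m/pi
print(f"true b = {b:.2e}, estimated b = {b_hat:.2e}")
```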
Figure 1: Radar imaging geometry of a target with non-uniform rotation.
Figure 2: Comparison between the Chirplet atom and the modified version of the Chirplet atom. (a) Time series for the Chirplet atom; (b) WVD for the Chirplet atom; (c) time series for the modified version of the Chirplet atom; (d) WVD for the modified version of the Chirplet atom.
Figure 3: Results of the numerical example. (a) Simulated signal; (b) HAF for the signal; (c) IHAF with lags m1 ∈ [1:10] and m2 ∈ [11:20]; (d) IHAF with lags m1 ∈ [1:20] and m2 ∈ [21:40].
Figure 4: Flowchart of the radar imaging algorithm in this paper.
Figure 5: Simulated target model.
Figure 6: Radar image based on the RD algorithm.
Figure 7: Time-frequency representations for the received signal in a range bin. (a) WVD for the original signal; (b) WVD for the two LFM signal components; (c) WVD for the two Chirplet components; (d) WVD for the two modified-Chirplet components.
Figure 8: Radar images based on the LFM signal model. (a) Radar image at time t = 0.17 s; (b) radar image at time t = 0.22 s.
Figure 9: Radar images based on the Chirplet decomposition algorithm. (a) Radar image at time t = 0.17 s; (b) radar image at time t = 0.22 s.
Figure 10: Radar images based on the modified Chirplet decomposition algorithm proposed in [31]. (a) Radar image at time t = 0.17 s; (b) radar image at time t = 0.22 s.
Figure 11: Radar images based on the modified Chirplet decomposition algorithm proposed in this paper. (a) Radar image at time t = 0.17 s; (b) radar image at time t = 0.22 s.
Figure 12: Optical picture of the plane.
Figure 13: Radar image based on the RD algorithm.
Figure 14: Time-frequency representations for the received signal in a range bin. (a) WVD for the original signal; (b) WVD for the LFM signal component; (c) WVD for the Chirplet component; (d) WVD for the modified-Chirplet component.
Figure 15: Radar images based on the LFM signal model. (a) Radar image at time t = 1.01 s; (b) radar image at time t = 1.23 s.
Figure 16: Radar images based on the Chirplet decomposition algorithm. (a) Radar image at time t = 1.01 s; (b) radar image at time t = 1.23 s.
Figure 17: Radar images based on the modified Chirplet decomposition algorithm proposed in [31]. (a) Radar image at time t = 1.01 s; (b) radar image at time t = 1.23 s.
Figure 18: Radar images based on the modified Chirplet decomposition algorithm proposed in this paper. (a) Radar image at time t = 1.01 s; (b) radar image at time t = 1.23 s.
Article
A Novel Abandoned Object Detection System Based on Three-Dimensional Image Information
by Yiliang Zeng, Jinhui Lan, Bin Ran, Jing Gao and Jinlin Zou
Sensors 2015, 15(3), 6885-6904; https://doi.org/10.3390/s150306885 - 23 Mar 2015
Cited by 16 | Viewed by 7592
Abstract
A new abandoned object detection system for road traffic surveillance based on three-dimensional image information is proposed in this paper to help prevent traffic accidents. A novel Binocular Information Reconstruction and Recognition (BIRR) algorithm is presented to implement this idea. As initial detection, suspected abandoned objects are detected by the proposed static foreground region segmentation algorithm based on surveillance video from a monocular camera. After detection of suspected abandoned objects, three-dimensional (3D) information of the suspected abandoned object is reconstructed by the proposed theory of 3D object information reconstruction using images from a binocular camera. To determine whether the detected object is hazardous to normal road traffic, the road plane equation and the height of the suspected abandoned object are calculated from the three-dimensional information. Experimental results show that the system achieves fast detection of abandoned objects and can be used for road traffic monitoring and public area surveillance. Full article
(This article belongs to the Section Physical Sensors)
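The dual-background intuition behind the static foreground segmentation (see Figures 6, 10 and 11 below) can be sketched as follows: a fast-adapting and a slow-adapting background are maintained, and a pixel that disagrees only with the long-term model is a static-object candidate. Learning rates and the threshold are assumptions.

```python
# Sketch of a dual-background (short-term / long-term) static-object detector.
# Learning rates and threshold are assumptions, not the paper's values.
import numpy as np

class DualBackground:
    def __init__(self, frame, fast=0.10, slow=0.01, thresh=25.0):
        self.short = frame.astype(float)     # fast-adapting background
        self.long = frame.astype(float)      # slow-adapting background
        self.fast, self.slow, self.thresh = fast, slow, thresh

    def update(self, frame):
        f = frame.astype(float)
        self.short += self.fast * (f - self.short)
        self.long += self.slow * (f - self.long)
        still_new = np.abs(f - self.long) > self.thresh    # differs long-term
        moving = np.abs(f - self.short) > self.thresh      # differs short-term
        return still_new & ~moving           # static-foreground candidate mask

rng = np.random.default_rng(3)
bg = DualBackground(rng.integers(0, 50, (48, 64)))
for k in range(120):
    frame = rng.integers(0, 50, (48, 64)).astype(float)
    if k > 20:
        frame[20:30, 30:40] = 200.0          # an object is abandoned
    mask = bg.update(frame)
print("static pixels flagged:", int(mask.sum()))
```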
Figure 1: General abandoned objects in road traffic. (a) Harmless abandoned object; (b) hazardous abandoned object.
Figure 2: Multi-camera road monitoring systems.
Figure 3: Sketch map of BIRR.
Figure 4: Block diagram of the abandoned object recognition system.
Figure 5: Binary motion region extraction. (a) Motion region in three sequential frames; (b) binary difference image; (c) binary motion region extraction result.
Figure 6: Detection method based on double background models. (a) Current image; (b) short-term background image; (c) long-term background image; (d) short-term foreground image; (e) long-term foreground image; (f) abandoned object.
Figure 7: Static foreground region segmentation.
Figure 8: Different situations in simulations and real experiments. (a,b) Simulation situations; (c–e) real road experiment situations.
Figure 9: Processes of discarding abandoned objects. (a,b) Simulation examples; (c–e) real road experiments.
Figure 10: Dual-background model updating. (a) Frame with abandoned objects; (b) short-term background; (c) long-term background.
Figure 11: Suspected abandoned object segmentation. (a) Dual-background segmentation image; (b) dual-foreground segmentation image; (c) static foreground region image based on the proposed dual-background difference algorithm.
Figure 12: 3D object reconstruction result for boxes. (a) Left scene; (b) right scene; (c) point cloud; (d) reconstruction result of the boxes.
Figure 13: 3D object reconstruction result for a stone. (a) Left scene; (b) right scene; (c) point cloud; (d) reconstruction result of the stone.
Figure 14: Abandoned object detection for different scenarios. (a) Detection result in the simulation situation; (b,c) detection results in real road experiment situations.
Figure 15: Results of the abandoned object detection system.
Full article ">
1008 KiB  
Article
A CMOS Pressure Sensor Tag Chip for Passive Wireless Applications
by Fangming Deng, Yigang He, Bing Li, Lei Zuo, Xiang Wu and Zhihui Fu
Sensors 2015, 15(3), 6872-6884; https://doi.org/10.3390/s150306872 - 23 Mar 2015
Cited by 19 | Viewed by 7259
Abstract
This paper presents a novel monolithic pressure sensor tag for passive wireless applications. The proposed pressure sensor tag is based on an ultra-high frequency RFID system. The pressure sensor element is implemented in a 0.18 µm CMOS process, and the membrane gap is formed by sacrificial layer release, resulting in a sensitivity of 1.2 fF/kPa over the range from 0 to 600 kPa. A three-stage rectifier adopts a chain of auxiliary floating rectifier cells to boost the gate voltage of the switching transistors, resulting in a power conversion efficiency of 53% at the low input power of −20 dBm. The capacitive sensor interface, using a phase-locked loop architecture, employs fully-digital blocks, which results in a 7.4-bit resolution and 0.8 µW power dissipation at a 0.8 V supply voltage. The proposed passive wireless pressure sensor tag has a total power dissipation of 3.2 µW. Full article
(This article belongs to the Section Physical Sensors)
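A back-of-the-envelope reading of the reported numbers: with a linear 1.2 fF/kPa response over 0 to 600 kPa and a 7.4-bit interface, the resolvable pressure step follows directly. The zero-pressure capacitance below is an assumed placeholder.

```python
# Rough arithmetic from the reported characteristics: linear capacitance
# model and the pressure step a 7.4-bit readout can resolve. C0 is assumed.
SENS_FF_PER_KPA = 1.2          # reported sensitivity
P_MAX_KPA = 600.0              # reported range
C0_FF = 500.0                  # assumed zero-pressure capacitance (illustrative)

def capacitance_ff(p_kpa: float) -> float:
    return C0_FF + SENS_FF_PER_KPA * p_kpa

levels = 2 ** 7.4              # about 169 distinguishable codes
print(f"C(300 kPa) = {capacitance_ff(300):.0f} fF")
print(f"pressure resolution = {P_MAX_KPA / levels:.1f} kPa/LSB")
```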
Figure 1: Block diagram of the proposed passive wireless pressure sensor tag.
Figure 2: ASK backscattering modulation.
Figure 3: Fabrication process of the sensor: (a) after completion of the CMOS process; (b) etching of the sacrificial layers; (c) sealing of the etch holes.
Figure 4: SEM picture of the sensor cells.
Figure 5: Proposed rectifier: (a) entire architecture; (b) circuit schematic of the second stage.
Figure 6: Proposed fully-digital capacitive sensor interface: (a) architecture; (b) corresponding waveforms for a constant sensor value.
Figure 7: Current-starved ring oscillator.
Figure 8: Sensor capacitance vs. pressure for three sample chips.
Figure 9: Hysteresis performance of the proposed sensor tag.
Figure 10: Photo of the wireless sensor tag.
Figure 11: Performance comparison between the conventional architecture and this work: (a) PCE comparison; (b) output voltage comparison.
Figure 12: Duty cycle of the backscatter signal vs. sensor value variation.
Article
Global Coverage Measurement Planning Strategies for Mobile Robots Equipped with a Remote Gas Sensor
by Muhammad Asif Arain, Marco Trincavelli, Marcello Cirillo, Erik Schaffernicht and Achim J. Lilienthal
Sensors 2015, 15(3), 6845-6871; https://doi.org/10.3390/s150306845 - 20 Mar 2015
Cited by 14 | Viewed by 8550
Abstract
The problem of gas detection is relevant to many real-world applications, such as leak detection in industrial settings and landfill monitoring. In this paper, we address the problem of gas detection in large areas with a mobile robotic platform equipped with a remote gas sensor. We propose an algorithm that leverages a novel method based on convex relaxation for quickly solving sensor placement problems, and for generating an efficient exploration plan for the robot. To demonstrate the applicability of our method to real-world environments, we performed a large number of experimental trials, both on randomly generated maps and on the map of a real environment. Our approach proves to be highly efficient in terms of computational requirements and to provide nearly-optimal solutions. Full article
(This article belongs to the Section Physical Sensors)
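The convex-relaxation step can be sketched as a reweighted l1 surrogate for the number of selected sensing configurations, subject to every cell being covered; compare the log penalty with decaying ε in the last two figures below. The coverage matrix here is random, and the paper's exact formulation may differ.

```python
# Sketch of sensor placement via convex relaxation: minimize a reweighted l1
# surrogate of the number of chosen configurations subject to full coverage.
# Random coverage matrix; illustrative only, not the authors' formulation.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
n_cells, n_configs = 40, 25
A = (rng.random((n_cells, n_configs)) < 0.25).astype(float)  # A[i,j]=1: config j sees cell i
A[np.arange(n_cells), rng.integers(0, n_configs, n_cells)] = 1.0  # ensure feasibility

x = np.ones(n_configs)
eps = 1.0
for _ in range(8):                          # reweighted l1 iterations
    w = 1.0 / (x + eps)                     # weights from the log penalty
    res = linprog(c=w, A_ub=-A, b_ub=-np.ones(n_cells), bounds=(0, 1))
    x = res.x
    eps *= 0.5                              # decay eps toward the l0 count
chosen = np.flatnonzero(x > 0.5)
covered = bool((A[:, chosen].sum(axis=1) >= 1).all())
print("selected configurations:", chosen, "| full coverage:", covered)
```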
Figure 1: (a) The Remote Methane Leak Detector (RMLD) is a TDLAS sensor which can report the integral concentration of methane along its laser beam (parts per million × meter); (b) Gasbot, a robotic platform for gas detection. Gasbot is a research platform based on a Husky A200. It is specially equipped with a methane-sensitive remote sensor (RMLD) mounted in conjunction with a laser scanner on a pan-tilt unit, a laser scanner for self-localization and mapping, an anemometer and a thermal camera.
Figure 2: (a) A candidate sensing configuration c allows the robot to scan a circular sector of central angle φ and radius r; (b) υ(c) is the visibility function which defines which cells are observable from candidate sensing configuration c.
Figure 3: The graph captures the allowed movements of the robot on a grid map when Θ = {0, π/2, π, 3π/2} and only forward movements are allowed. Small circles indicate the poses where the robot can stop (the internal arrow indicates the orientation of the robot) and the directed edges its allowed movements. Note that in the figure the poses do not correspond to the centers of the cells; this is only for clarity.
Figure 4: (a) A simple test map, where obstructed cells are represented in black and traversable ones in white. Candidate sensing configurations are defined over poses in the cells such that Θ = {0, π/2, π, 3π/2} and have identical φ and r (r = 2 cells, φ = π/2). In this example setup, the movement from one cell to the next requires 1 s, a rotation of π/2 requires 0.5 s, and a sensing action takes 4 s. (b) The optimal solution when traveling and sensing costs are considered at the same time; the curved arrows represent the minimum distance from one sensing configuration to the next on the underlying graph. In this case, the total exploration time is 52.5 s (16.5 s for traveling and 36 s for sensing). (c,d) Traveling time is minimized first (see Section 5.1): (c) the minimum-cost closed path from which all cells can be observed is calculated; (d) then the minimum set of sensing configurations is selected, yielding an overall exploration time of 66.5 s. (e,f) Sensing time is minimized first (see Section 5.2): (e) the minimum-cost set of sensing configurations from which all cells are observable is selected; (f) they are then connected with the shortest closed path. This approach yields an overall exploration time of 55 s.
Figure 5: Comparison of all approaches, optimal and disjoint (Opt, D_T, D_S), on three sets of maps. Each set contains maps of varying size, from 3 × 3 to 5 × 5. The solid bars indicate the average values over 3 maps for the 3 × 3 size and 10 maps for the rest, and the error bars show the minimum and maximum values observed during the trial. (a) Solution quality; (b) computation time taken by all three approaches, on a logarithmic scale.
Figure 6: Comparison of the two disjoint approaches up to maps of size 9 × 9. Notation is as in the previous figure, i.e., bars indicate the average values over 3 maps for the 3 × 3 size and 10 maps for the rest, and error bars indicate the minimum and maximum values observed during the trial. (a) Solution quality of the two disjoint approaches (D_T, D_S); (b) average computation time, on a logarithmic scale, for both approaches.
Figure 7: The concave loss function f_log,ε(x) approximates the ℓ0 sparsity count f0(x) better than the traditional convex ℓ1 relaxation f1(x) [48].
Figure 8: (a) Smaller values of ε define steeper penalty functions; (b) the exponential decay of ε during the iterative procedure.
Article
Open Hardware: A Role to Play in Wireless Sensor Networks?
by Roy Fisher, Lehlogonolo Ledwaba, Gerhard Hancke and Carel Kruger
Sensors 2015, 15(3), 6818-6844; https://doi.org/10.3390/s150306818 - 20 Mar 2015
Cited by 75 | Viewed by 9925
Abstract
The concept of the Internet of Things is rapidly becoming a reality, with many applications being deployed within industrial and consumer sectors. At the ‘thing’ level—devices and inter-device network communication—the core technical building blocks are generally the same as those found in wireless sensor network implementations. For the Internet of Things to continue growing, we need more plentiful resources for building intelligent devices and sensor networks. Unfortunately, current commercial devices, e.g., sensor nodes and network gateways, tend to be expensive and proprietary, which presents a barrier to entry and arguably slows down further development. There are, however, an increasing number of open embedded platforms available and also a wide selection of off-the-shelf components that can quickly and easily be built into device and network gateway solutions. The question is whether these solutions measure up to built-for-purpose devices. In the paper, we provide a comparison of existing built-for-purpose devices against open source devices. For comparison, we have also designed and rapidly prototyped a sensor node based on off-the-shelf components. We show that these devices compare favorably to built-for-purpose devices in terms of performance, power and cost. Using open platforms and off-the-shelf components would allow more developers to build intelligent devices and sensor networks, which could result in a better overall development ecosystem, lower barriers to entry and rapid growth in the number of IoT applications. Full article
(This article belongs to the Special Issue Wireless Sensor Networks and the Internet of Things)
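Duty cycling dominates the power comparison between platforms (see the current-consumption figures below); the underlying arithmetic is a weighted average of active and sleep currents. The current values in this example are placeholders, not the paper's measurements.

```python
# Worked example of the duty-cycle analysis behind the current-consumption
# figures: average current of a node that is active a fraction of the time.
# The current values are placeholders, not measurements from the paper.
def avg_current_ma(duty: float, active_ma: float, sleep_ma: float) -> float:
    return duty * active_ma + (1.0 - duty) * sleep_ma

for duty in (0.001, 0.01, 0.1):
    i = avg_current_ma(duty, active_ma=20.0, sleep_ma=0.005)
    days = 2000.0 / i / 24.0               # rough lifetime on a 2000 mAh cell
    print(f"duty {duty:>6.2%}: {i:6.3f} mA (about {days:.0f} days)")
```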
Figure 1: Internet of Things generic network implementation.
Figure 2: The rapidly prototyped Council for Scientific and Industrial Research (CSIR) Internet of Things node.
Figure 3: The CSIR Internet of Things node functional diagram.
Figure 4: Wireless sensor nodes' current consumption against duty cycle.
Figure 5: Non-traditional nodes' current consumption against duty cycle.
Figure 6: Comparison of processing speed against cost for WSN nodes (commercial off-the-shelf (COTS) and open platforms indicated in green and red, respectively).
Figure 7: Throughput results when applying both confidentiality and integrity for the command-line application.
Article
Broadband and High Sensitive Time-of-Flight Diffraction Ultrasonic Transducers Based on PMNT/Epoxy 1–3 Piezoelectric Composite
by Dongxu Liu, Qingwen Yue, Ji Deng, Di Lin, Xiaobing Li, Wenning Di, Xi'an Wang, Xiangyong Zhao and Haosu Luo
Sensors 2015, 15(3), 6807-6817; https://doi.org/10.3390/s150306807 - 19 Mar 2015
Cited by 26 | Viewed by 8750
Abstract
5–6 MHz PMNT/epoxy 1–3 composites were prepared by a modified dice-and-fill method. They exhibit excellent properties for ultrasonic transducer applications, such as an ultrahigh thickness electromechanical coupling coefficient kt (85.7%), a large piezoelectric coefficient d33 (1209 pC/N), and relatively low acoustic impedance Z (1.82 × 10⁷ kg/(m²·s)). In addition, two types of Time-of-Flight Diffraction (TOFD) ultrasonic transducers have been designed, fabricated, and characterized, with different matching layer schemes whose acoustic impedances are 4.8 and 5.7 × 10⁶ kg/(m²·s), respectively. In the detection of a backwall of 12.7 mm polystyrene, the former exhibits higher detectivity (relative pulse-echo sensitivity of −21.93 dB and −6 dB relative bandwidth of 102.7%), while the latter exhibits broader bandwidth (relative pulse-echo sensitivity of −24.08 dB and −6 dB relative bandwidth of 117.3%). These TOFD ultrasonic transducers based on PMNT/epoxy 1–3 composite exhibit considerably improved performance over a commercial PZT/epoxy 1–3 composite TOFD ultrasonic transducer. Full article
(This article belongs to the Special Issue Acoustic Waveguide Sensors)
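For context, a thickness coupling coefficient such as the kt quoted above is conventionally extracted from the resonance (fr) and anti-resonance (fa) frequencies of an impedance spectrum like Figure 2, using the standard IEEE formula. The frequencies in this sketch are illustrative, not the paper's measurements.

```python
# Standard IEEE extraction of the thickness coupling coefficient k_t from
# resonance (f_r) and anti-resonance (f_a) frequencies:
#   k_t^2 = (pi/2)(f_r/f_a) * tan[(pi/2)(f_a - f_r)/f_a]
# The frequencies below are illustrative values, not the paper's data.
import math

def kt(fr_hz: float, fa_hz: float) -> float:
    return math.sqrt((math.pi / 2) * (fr_hz / fa_hz)
                     * math.tan((math.pi / 2) * (fa_hz - fr_hz) / fa_hz))

# a wider split between f_r and f_a corresponds to stronger coupling
print(f"k_t = {kt(4.4e6, 5.6e6):.1%}")
```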
Figure 1: (a) Photograph of the prepared PMNT/epoxy 1–3 composites; (b) enlarged image of a randomly selected area on the composite.
Figure 2: The impedance and phase angle spectra of the prepared PMNT/epoxy 1–3 composite.
Figure 3: The simulated waveforms and frequency spectra of the designed TOFD ultrasonic transducers with (a) scheme I; (b) scheme II.
Figure 4: (a) Schematic diagram of the designed TOFD ultrasonic transducer; (b) photograph of the fabricated TOFD ultrasonic transducer.
Figure 5: Comparison of the waveforms and frequency spectra of (a) the PMNT/epoxy 1–3 composite TOFD ultrasonic transducer with scheme I; (b) the PMNT/epoxy 1–3 composite TOFD ultrasonic transducer with scheme II; and (c) the PZT/epoxy 1–3 composite TOFD ultrasonic transducer.
Article
Development of a Microfluidic-Based Optical Sensing Device for Label-Free Detection of Circulating Tumor Cells (CTCs) Through Their Lactic Acid Metabolism
by Tzu-Keng Chiu, Kin-Fong Lei, Chia-Hsun Hsieh, Hung-Bo Hsiao, Hung-Ming Wang and Min-Hsien Wu
Sensors 2015, 15(3), 6789-6806; https://doi.org/10.3390/s150306789 - 19 Mar 2015
Cited by 31 | Viewed by 8391
Abstract
This study reports a microfluidic-based optical sensing device for label-free detection of circulating tumor cells (CTCs), a rare cell species in blood circulation. Based on the metabolic features of cancer cells, live CTCs can be quantified indirectly through their lactic acid production. Compared with conventional schemes for CTC detection, this label-free approach could prevent the biological bias due to the heterogeneity of the surface antigens on cancer cells. In this study, a microfluidic device was proposed to generate uniform water-in-oil cell-encapsulating micro-droplets, followed by fluorescence-based optical detection of the lactic acid produced within the micro-droplets. Experiments were carried out to test its feasibility for quantifying cancer cells. Results showed that the detection signals were proportional to the number of cancer cells within the micro-droplets, whereas such signals were insensitive to the existence and number of leukocytes within them. To further demonstrate its feasibility for cancer cell detection, cancer cells at a known cell number in a cell suspension were detected by the method. Results revealed no significant difference between the detected number and the real number of cancer cells. As a whole, the proposed method opens up a new route to detect live CTCs in a label-free manner. Full article
(This article belongs to the Special Issue On-Chip Sensors)
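The quantification logic, a detection signal that scales linearly with the number of encapsulated cancer cells and is read back through a calibration fit, can be sketched with entirely synthetic numbers:

```python
# Illustrative calibration: if droplet fluorescence scales linearly with the
# number of encapsulated cancer cells (cf. Figure 4c), the cell count can be
# read back from a linear fit. All numbers are synthetic.
import numpy as np

cells = np.array([1, 2, 4, 8, 16])                  # cells per droplet
signal = 12.0 * cells + 30.0 + np.random.default_rng(5).normal(0, 3, 5)

slope, intercept = np.polyfit(cells, signal, 1)     # calibration curve
unknown = 130.0                                      # a measured droplet signal
print(f"estimated cells/droplet = {(unknown - intercept) / slope:.1f}")
```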
Show Figures

Figure 1

Figure 1
<p>(<b>a</b>) Schematic illustration of the microfluidic device (Top-view layout); (<b>b</b>) The photographs of the continuous micro-droplet generation process ((I)-(IV)); (<b>c</b>) Schematic illustration of the assembly of the microfluidic device ((I)-(III): microfabricated PDMS layers, (IV): micro-capillary tube, (V): glass layer); (<b>d</b>) schematic illustration; and (<b>e</b>) photograph of overall experimental setup.</p>
Full article ">Figure 1 Cont.
<p>(<b>a</b>) Schematic illustration of the microfluidic device (Top-view layout); (<b>b</b>) The photographs of the continuous micro-droplet generation process ((I)-(IV)); (<b>c</b>) Schematic illustration of the assembly of the microfluidic device ((I)-(III): microfabricated PDMS layers, (IV): micro-capillary tube, (V): glass layer); (<b>d</b>) schematic illustration; and (<b>e</b>) photograph of overall experimental setup.</p>
Full article ">Figure 2
<p>(<b>a</b>) The quantitative relationship between the flow rates of oil and cell suspension, and the resultant size (diameter) of micro-droplets; (<b>b</b>) Microscopic images of micro-droplets generated under three different operating conditions (oil flow rate: all 750 µL·h<sup>−1</sup>; cell suspension flow rate: (I) 60, (II) 100, and (III) 140 µL·h<sup>−1</sup>); (<b>c</b>) The size distribution of the micro-droplets (Oil flow rate: 750 µL·h<sup>−1</sup>, Cell suspension flow rate: 110 µL·h<sup>−1</sup>; Inset image: microscopic images of micro-droplet); (<b>d</b>) Microscopic observation of cell viability after micro-droplet-based microencapsulation process, and after 3 h static incubation using the Live/Dead<sup>®</sup> fluorescent dye. Green and red dots represent live and dead cells, respectively. The left and right images show the leukocytes, and OEC-M1 cells, respectively.</p>
Full article ">Figure 2 Cont.
<p>(<b>a</b>) The quantitative relationship between the flow rates of oil and cell suspension, and the resultant size (diameter) of micro-droplets; (<b>b</b>) Microscopic images of micro-droplets generated under three different operating conditions (oil flow rate: all 750 µL·h<sup>−1</sup>; cell suspension flow rate: (I) 60, (II) 100, and (III) 140 µL·h<sup>−1</sup>); (<b>c</b>) The size distribution of the micro-droplets (Oil flow rate: 750 µL·h<sup>−1</sup>, Cell suspension flow rate: 110 µL·h<sup>−1</sup>; Inset image: microscopic images of micro-droplet); (<b>d</b>) Microscopic observation of cell viability after micro-droplet-based microencapsulation process, and after 3 h static incubation using the Live/Dead<sup>®</sup> fluorescent dye. Green and red dots represent live and dead cells, respectively. The left and right images show the leukocytes, and OEC-M1 cells, respectively.</p>
Full article ">Figure 3
<p>The observation of fluorescent intensity with time. The fluorescence-based lactate reagent was mixed with lactate solution. The fluorescence detection was carried out periodically for up to 22 h.</p>
Full article ">Figure 4
<p>The detection signals of micro-droplets containing different levels (1, 2, 4, 8 and 16 cells/micro-droplet) of (<b>a</b>) OEC-M1 cells; (<b>b</b>) leukocytes; and (<b>c</b>) The quantitative relationship between the detection signals and the cell (OEC-M1 cells and leukocytes) number in each micro-droplet. The fluorescence detection was carried out after 3 h static incubation. The results are displayed as the mean ± the standard deviation of three separate experiments. Significant differences are expressed as ★ (<span class="html-italic">p</span> &lt; 0.05).</p>
Full article ">Figure 5
<p>The experimental quantification of the number of OEC-M1 cells through the proposed method and its comparison with the calculated number. The results are displayed as the mean ± the standard deviation of 3 separate experiments.</p>
Full article ">
2658 KiB  
Article
Human Detection Based on the Generation of a Background Image by Using a Far-Infrared Light Camera
by Eun Som Jeon, Jong-Suk Choi, Ji Hoon Lee, Kwang Yong Shin, Yeong Gon Kim, Toan Thanh Le and Kang Ryoung Park
Sensors 2015, 15(3), 6763-6788; https://doi.org/10.3390/s150306763 - 19 Mar 2015
Cited by 43 | Viewed by 12054
Abstract
The need for computer vision-based human detection has increased in fields such as security, intelligent surveillance and monitoring systems. However, performance enhancement of human detection based on visible light cameras is limited because of factors such as nonuniform illumination, shadows and low external [...] Read more.
The need for computer vision-based human detection has increased in fields such as security, intelligent surveillance and monitoring systems. However, performance enhancement of human detection based on visible light cameras is limited because of factors such as nonuniform illumination, shadows and low external light in the evening and at night. Consequently, human detection based on thermal (far-infrared light) cameras has been considered as an alternative. Its performance, however, is affected by the low image resolution, low contrast and large noise of thermal images, as well as by the high temperature of backgrounds during the day. To solve these problems, we propose a new method for detecting human areas in thermal camera images. Compared to previous works, the proposed research is novel in the following four aspects. First, one background image is generated by median and average filtering, and additional filtering procedures based on maximum gray level, size filtering and region erasing are applied to remove the human areas from the background image. Second, candidate human regions in the input image are located by combining the pixel and edge difference images between the input and background images; the thresholds for the difference images are adaptively determined based on the brightness of the generated background image, and noise components are removed by component labeling, a morphological operation and size filtering. Third, detected areas that may contain more than two human regions are merged or separated based on the horizontal and vertical histograms of the detected area; this procedure also operates adaptively according to the brightness of the generated background image. Fourth, a further procedure for separating and removing candidate human regions is performed based on the size and the height-to-width ratio of the candidate regions, considering the camera viewing direction and perspective projection. Experimental results with two types of databases confirm that the proposed method outperforms other methods. Full article
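The background-generation and difference-thresholding steps can be pictured with the minimal NumPy sketch below. It blends per-pixel median and mean filtering and applies a statistics-based adaptive threshold; the blending weight, the threshold rule and the toy data are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def generate_background(frames, alpha=0.5):
    """Blend the per-pixel median and mean of a stack of thermal frames into
    one background image (alpha is an illustrative blending weight)."""
    stack = np.stack(frames).astype(np.float32)
    return alpha * np.median(stack, axis=0) + (1.0 - alpha) * stack.mean(axis=0)

def candidate_human_mask(frame, background, k=2.0):
    """Binarize the pixel difference image with a threshold adapted to the
    difference statistics (the paper adapts it to background brightness)."""
    diff = np.abs(frame.astype(np.float32) - background)
    return diff > (diff.mean() + k * diff.std())

# Toy usage with random 8-bit "thermal" frames.
rng = np.random.default_rng(0)
frames = [rng.integers(0, 256, (120, 160), dtype=np.uint8) for _ in range(30)]
mask = candidate_human_mask(frames[0], generate_background(frames))
print(mask.shape, int(mask.sum()))
```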
(This article belongs to the Special Issue Frontiers in Infrared Photodetection)
Show Figures

Figure 1
<p>Overall procedure of the proposed method.</p>
Full article ">Figure 2
<p>Flow chart of generating a background image.</p>
Full article ">Figure 3
<p>Examples of obtaining the background image from an open database: (<b>a</b>) preliminary background image obtained by temporal averaging; (<b>b</b>) extracted human areas by the binarization, labeling, size filtering and a morphological operation of <a href="#sensors-15-06763-f002" class="html-fig">Figure 2</a>; and (<b>c</b>) the generated final background image.</p>
Full article ">Figure 4
<p>The first example for obtaining a background image from our database: (<b>a</b>) preliminary background image obtained by temporal averaging; (<b>b</b>) extracted human areas; and (<b>c</b>) the generated final background image.</p>
Full article ">Figure 5
<p>The second example for obtaining a background image from our database: (<b>a</b>) preliminary background image obtained by temporal averaging; (<b>b</b>) extracted human areas; and (<b>c</b>) the generated final background image.</p>
Full article ">Figure 6
<p>Example of the fusion of two difference images: (<b>a</b>) input image; (<b>b</b>) background image; (<b>c</b>) pixel difference image; (<b>d</b>) edge difference image; and (<b>e</b>) fusion of the pixel and edge difference images.</p>
Full article ">Figure 7
<p>Division of the candidate region within an input image based on the horizontal histogram: (<b>a</b>) input image; (<b>b</b>) detected candidate region and its horizontal histogram; and (<b>c</b>) the division result of the candidate region.</p>
Full article ">Figure 8
<p>Division of the candidate region within an input image based on the vertical histogram: (<b>a</b>) input image; (<b>b</b>) detected candidate region and its vertical histogram; and (<b>c</b>) the division result of the candidate region.</p>
Full article ">Figure 9
<p>Example of different sizes of human areas resulting from camera viewing direction and perspective projection: (<b>a</b>) input image, including three detected areas of humans; and (<b>b</b>) information of the width, height and size of the three detected human areas, respectively.</p>
Full article ">Figure 10
<p>Result of obtaining the final region of the human area: (<b>a</b>) result after the process based on the separation of histogram information; (<b>b</b>) result after the process based on camera viewing direction and perspective projection; and (<b>c</b>) result of the final detected region of the human area.</p>
Full article ">Figure 11
<p>Comparisons of generated background images with the OTCBVS benchmark dataset. The left-upper [<a href="#B28-sensors-15-06763" class="html-bibr">28</a>], right-upper [<a href="#B26-sensors-15-06763" class="html-bibr">26</a>], left-lower [<a href="#B24-sensors-15-06763" class="html-bibr">24</a>,<a href="#B25-sensors-15-06763" class="html-bibr">25</a>,<a href="#B33-sensors-15-06763" class="html-bibr">33</a>], and right-lower figures are generated by previous methods and the proposed one, respectively.</p>
Full article ">Figure 12
<p>Comparisons of generated background images with our database (second database). The left, middle and right figures of (<b>a</b>,<b>b</b>) are by simple temporal averaging operation [<a href="#B28-sensors-15-06763" class="html-bibr">28</a>], averaging the frames in two difference sequences [<a href="#B26-sensors-15-06763" class="html-bibr">26</a>] and the proposed method, respectively: (<b>a</b>) with Sequence 4 of [<a href="#B28-sensors-15-06763" class="html-bibr">28</a>] (left figure) and the proposed method (right figure) and with Sequences 4 and 1 of [<a href="#B26-sensors-15-06763" class="html-bibr">26</a>] (middle figure); (<b>b</b>) with Sequence 5 of [<a href="#B28-sensors-15-06763" class="html-bibr">28</a>] (left figure) and the proposed method (right figure) and with Sequences 5 and 2 of [<a href="#B26-sensors-15-06763" class="html-bibr">26</a>] (middle figure).</p>
Full article ">Figure 13
<p>Detection results with the OTCBVS benchmark dataset (<b>a</b>–<b>f</b>) and our database (<b>g</b>–<b>j</b>). Results of images in: (<b>a</b>) Sequence 1; (<b>b</b>) Sequence 3; (<b>c</b>) Sequence 4; (<b>d</b>) Sequence 5; (<b>e</b>) Sequence 6; (<b>f</b>) Sequence 7; (<b>g</b>) Sequence 2; (<b>h</b>) Sequence 3; (<b>i</b>) Sequence 4; and (<b>j</b>) Sequence 6.</p>
Full article ">Figure 14
<p>Overlapping area of ground truth and detected boxes.</p>
Full article ">Figure 15
<p>Detection error cases with the OTCBVS benchmark dataset: (<b>a</b>) original image; (<b>b</b>) result of the proposed method.</p>
Full article ">Figure 16
<p>Detection error cases with our database: (<b>a</b>) original image; (<b>b</b>) result of the proposed method.</p>
Full article ">
2138 KiB  
Article
Location Detection and Tracking of Moving Targets by a 2D IR-UWB Radar System
by Van-Han Nguyen and Jae-Young Pyun
Sensors 2015, 15(3), 6740-6762; https://doi.org/10.3390/s150306740 - 19 Mar 2015
Cited by 84 | Viewed by 13183
Abstract
In indoor environments, the Global Positioning System (GPS) and long-range tracking radar systems are not optimal because of signal propagation limitations. In recent years, the use of ultra-wide band (UWB) technology has become a possible solution for object detection, [...] Read more.
In indoor environments, the Global Positioning System (GPS) and long-range tracking radar systems are not optimal because of signal propagation limitations. In recent years, the use of ultra-wide band (UWB) technology has become a possible solution for object detection, localization and tracking in indoor environments because of its high range resolution, compact size and low cost. This paper presents improved target detection and tracking techniques for moving objects with impulse-radio UWB (IR-UWB) radar in a short-range indoor area. This is achieved through signal-processing steps such as clutter reduction, target detection, target localization and tracking. In this paper, we introduce a new combination of our proposed signal-processing procedures. In the clutter-reduction step, a filtering method based on a Kalman filter (KF) is proposed. In the target detection step, a modification of the conventional CLEAN algorithm, which estimates the impulse response from the observation region, is applied for the advanced elimination of false alarms. The output is then fed into the target localization and tracking step, in which the target location and trajectory are determined and tracked using an unscented KF in two-dimensional coordinates. In each step, the proposed methods are compared to conventional methods to demonstrate the differences in performance. The experiments are carried out using actual IR-UWB radar under different scenarios. The results verify that the proposed methods can improve the probability and efficiency of target detection and tracking. Full article
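A minimal sketch of Kalman-filter-based clutter reduction is given below: the slowly varying clutter in each range bin is tracked with a scalar KF across successive scans and subtracted from the measurements. The random-walk state model and the noise variances are illustrative assumptions; the paper's full chain also includes the modified CLEAN detector and the unscented-KF tracker, which are not reproduced here.

```python
import numpy as np

def kf_clutter_reduction(radargram, q=1e-4, r=1e-1):
    """Track the slowly varying clutter in each range bin with a scalar
    Kalman filter across scans and subtract it from the measurements.
    radargram: array of shape (n_scans, n_range_bins); q and r are
    illustrative process and measurement noise variances."""
    n_scans, n_bins = radargram.shape
    cleaned = np.empty_like(radargram, dtype=np.float64)
    x = radargram[0].astype(np.float64)   # initial clutter estimate
    p = np.ones(n_bins)                   # initial error variance
    for t in range(n_scans):
        p = p + q                         # predict (random-walk clutter model)
        k = p / (p + r)                   # Kalman gain
        x = x + k * (radargram[t] - x)    # update with the new scan
        p = (1.0 - k) * p
        cleaned[t] = radargram[t] - x     # clutter-reduced scan
    return cleaned
```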
(This article belongs to the Special Issue Sensors for Indoor Mapping and Navigation)
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Signal processing for localizing and tracking a moving object.</p>
Full article ">Figure 2
<p>CLEAN detection algorithm.</p>
Full article ">Figure 3
<p>Compensation for a weak signal (two targets are moving in this example): (<b>a</b>) the signal observed before compensation and (<b>b</b>) the signal observed after compensation.</p>
Full article ">Figure 4
<p>Jumping-window method for eliminating false alarms: (<b>a</b>) one-dimensional (1D) window and (<b>b</b>) two-dimensional (2D) window.</p>
Full article ">Figure 5
<p>Locations of radars and the directions of target movement in the experiments.</p>
Full article ">Figure 6
<p>Radar scan: (<b>a</b>) before clutter reduction; (<b>b</b>) after application of clutter reduction, (<b>c</b>) after application of SVD clutter reduction; and (<b>d</b>) after application of KF-based clutter reduction.</p>
Full article ">Figure 7
<p>Radargrams: (<b>a</b>) before clutter reduction; (<b>b</b>) after application of EA clutter reduction; (<b>c</b>) after application of SVD clutter reduction; and (<b>d</b>) after application of KF-based clutter reduction.</p>
Full article ">Figure 8
<p>Radargrams: (<b>a</b>) before detection; (<b>b</b>) after detection with the conventional CLEAN algorithm; and (<b>c</b>) after detection with the modified CLEAN algorithm.</p>
Full article ">Figure 9
<p>Tracking of a target in two-dimensional coordinates.</p>
Full article ">
2047 KiB  
Article
Multi-Layer Sparse Representation for Weighted LBP-Patches Based Facial Expression Recognition
by Qi Jia, Xinkai Gao, He Guo, Zhongxuan Luo and Yi Wang
Sensors 2015, 15(3), 6719-6739; https://doi.org/10.3390/s150306719 - 19 Mar 2015
Cited by 23 | Viewed by 7255
Abstract
In this paper, a novel facial expression recognition method based on sparse representation is proposed. Most contemporary facial expression recognition systems suffer from a limited ability to handle image nuisances such as low resolution and noise. Especially for low-intensity expressions, most of the [...] Read more.
In this paper, a novel facial expression recognition method based on sparse representation is proposed. Most contemporary facial expression recognition systems suffer from a limited ability to handle image nuisances such as low resolution and noise. Especially for low-intensity expressions, most of the existing training methods have quite low recognition rates. Motivated by sparse representation, the problem can be solved by finding the sparse coefficients of the test image over the whole training set. Deriving an effective facial representation from original face images is a vital step for successful facial expression recognition. We evaluate a facial representation based on weighted local binary patterns, and the Fisher separation criterion is used to calculate the weights of the patches. A multi-layer sparse representation framework is proposed for multi-intensity facial expression recognition, especially for the low-intensity and noisy expressions encountered in practice, which is a critical problem seldom addressed in existing works. To this end, several experiments based on low-resolution and multi-intensity expressions are carried out. Promising results on publicly available databases demonstrate the potential of the proposed approach. Full article
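The core classification idea, coding a test feature over the whole training set and assigning the class whose atoms reconstruct it best, can be sketched as follows. This uses scikit-learn's Lasso as a generic l1 solver; the sparsity weight, the dictionary and the toy data are illustrative, and the paper's weighted-LBP feature extraction and multi-layer framework are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(D, labels, y, alpha=0.01):
    """Sparse-representation classification: code the test feature y over the
    whole training dictionary D (columns = training samples) with an l1
    penalty, then assign the class whose atoms reconstruct y best."""
    lasso = Lasso(alpha=alpha, max_iter=10000)
    lasso.fit(D, y)                          # solves y ~ D x with sparse x
    x = lasso.coef_
    residuals = {}
    for c in np.unique(labels):
        xc = np.where(labels == c, x, 0.0)   # keep only class-c coefficients
        residuals[c] = np.linalg.norm(y - D @ xc)
    return min(residuals, key=residuals.get)

# Toy usage: 30 training features (e.g., weighted LBP histograms), 6 classes.
rng = np.random.default_rng(1)
D = rng.normal(size=(64, 30))
labels = np.repeat(np.arange(6), 5)
y = D[:, 7] + 0.05 * rng.normal(size=64)     # noisy copy of a class-1 sample
print(src_classify(D, labels, y))            # most likely prints 1
```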
(This article belongs to the Section Physical Sensors)
Show Figures

Figure 1
<p>The basic LBP operator.</p>
Full article ">Figure 2
<p>LBP feature extracting with weighted patches. (<b>a</b>) Histogram of LBP features; (<b>b</b>) Weighted patches.</p>
Full article ">Figure 3
<p>Classification result for different testing expressions (<b>a</b>) Angry; (<b>b</b>) Disgust; (<b>c</b>) Fear; (<b>d</b>) Happy; (<b>e</b>) Sad; (<b>f</b>) Surprise.</p>
Full article ">Figure 4
<p>Anger and Disgust in different intensity (<b>a</b>) Anger in high intensity; (<b>b</b>) Disgust in high intensity; (<b>c</b>) Anger in low intensity; (<b>d</b>) Disgust in low intensity.</p>
Full article ">Figure 5
<p>Multi-layer sparse representation model.</p>
Full article ">Figure 6
<p>Recognition result of “Fear” with SR and MLSR. (<b>a</b>) “Fear” in low intensity; (<b>b</b>) Recognition result with SR; (<b>c</b>) Recognition result with MLSR.</p>
Full article ">Figure 7
<p>Comparisons on image resolutions between SR and SVM.</p>
Full article ">Figure 8
<p>Multi-intensity example of six expressions: Angry, Disgust, Fear, Happy, Sad, Surprise. (<b>a</b>) High intensity; (<b>b</b>) Moderate intensity; (<b>c</b>) Low intensity.</p>
Full article ">Figure 9
<p>The comparison between MLSR and SR about multi-intensity recognition. (<b>a</b>) High intensity; (<b>b</b>) Moderate intensity; (<b>c</b>) Low intensity.</p>
Full article ">Figure 10
<p>Facial expression with noise. (<b>a</b>) Anger; (<b>b</b>) Disgust; (<b>c</b>) Sadness.</p>
Full article ">Figure 11
<p>The comparison between SR and SVM against noise. (<b>a</b>) Angry; (<b>b</b>) Disgust; (<b>c</b>) Sad.</p>
Full article ">Figure 12
<p>The same expression from different datasets. (<b>a</b>) The CK+ dataset; (<b>b</b>) The JAFFE dataset.</p>
Full article ">
4424 KiB  
Article
Soil Water Content Assessment: Critical Issues Concerning the Operational Application of the Triangle Method
by Antonino Maltese, Fulvio Capodici, Giuseppe Ciraolo and Goffredo La Loggia
Sensors 2015, 15(3), 6699-6718; https://doi.org/10.3390/s150306699 - 19 Mar 2015
Cited by 26 | Viewed by 6192
Abstract
Knowledge of soil water content plays a key role in water management efforts to improve irrigation efficiency. Among the indirect estimation methods of soil water content via Earth Observation data is the triangle method, used to analyze optical and thermal features because these [...] Read more.
Knowledge of soil water content plays a key role in water management efforts to improve irrigation efficiency. Among the indirect estimation methods of soil water content via Earth Observation data is the triangle method, used to analyze optical and thermal features because these are primarily controlled by water content within the near-surface evaporation layer and root zone in bare and vegetated soils. Although the soil-vegetation-atmosphere transfer theory describes the ongoing processes, theoretical models show limits for operational use. When simplified empirical formulations are applied, the meteorological forcing can be replaced with alternative variables when the above-canopy temperature is unknown, to mitigate the effects of calibration inaccuracies or to account for the temporal admittance of the soil. However, if the method is applied over a limited area, the dry and wet edges cannot be properly characterized; thus, a multi-temporal analysis can be exploited to include the outer extremes of soil water content. Such a diachronic empirical approach requires assuming that the other meteorological forcing variables controlling the thermal features remain constant. Airborne images were acquired over a Sicilian vineyard during most of an entire irrigation period (fruit-set to ripening stages, vintage 2008), during which in situ soil water content was measured to set up the triangle method. Within this framework, we tested the triangle method employing alternative thermal forcings. The results were inaccurate when the air temperature at the time of airborne acquisition was employed. Sonic and aerodynamic air temperatures confirmed and partially explained the limits of simultaneous meteorological forcing, and the use of proxy variables improved the model accuracy. The analysis indicates that high spatial resolution does not necessarily imply higher accuracy. Full article
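A minimal sketch of the triangle method's normalization step is shown below: each pixel's surface-air temperature difference is scaled between linear dry and wet edges fitted to the NDVI-ΔT scatter. The edge coefficients and the data are illustrative assumptions, not values from the study.

```python
import numpy as np

def moisture_index(ndvi, dT, dry_coef, wet_coef):
    """Normalize each pixel's surface-air temperature difference between the
    dry and wet edges of the NDVI-dT scatter. dry_coef and wet_coef are
    (slope, intercept) pairs of linear edges assumed to have been fitted
    to the scatter beforehand; 0 = dry edge, 1 = wet edge."""
    dT_dry = dry_coef[0] * ndvi + dry_coef[1]
    dT_wet = wet_coef[0] * ndvi + wet_coef[1]
    return np.clip((dT_dry - dT) / (dT_dry - dT_wet), 0.0, 1.0)

# Toy usage with illustrative edge coefficients.
ndvi = np.array([0.2, 0.5, 0.8])
dT = np.array([12.0, 8.0, 3.0])
print(moisture_index(ndvi, dT, dry_coef=(-10.0, 18.0), wet_coef=(-2.0, 2.0)))
```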
(This article belongs to the Section Remote Sensors)
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Scatterplot of NDVI <span class="html-italic">vs.</span> Δ<span class="html-italic">T</span> with superimposed dry and wet edges. Blue dots characterize maximum evaporation and transpiration, and minima are represented by red dots. The generic NDVI-Δ<span class="html-italic">T</span> pair is indicated by a K subscript; V subscript represents the triangle vertex.</p>
Full article ">Figure 2
<p>Flow diagram of the best fitting parameters procedure; dashed connectors indicate a fine-tuning procedure.</p>
Full article ">Figure 3
<p>Flow diagram for the optimal spatial resolution determination.</p>
Full article ">Figure 4
<p>The study area included: vine cultivars (reported on the right side legend); the flux tower position (red dot), <span class="html-italic">in situ</span> measurements plots (highlighted in yellow); airborne image footprint (dashed black rectangle); canopy management (vertical trellis system) (lower-left box).</p>
Full article ">Figure 5
<p>Variability with <span class="html-italic">P<sub>min</sub></span> of some statistical parameters used to characterize <span class="html-italic">in situ</span> <span class="html-italic">vs.</span> remote sensing θ: (<b>a</b>) determination coefficient <span class="html-italic">r<sup>2</sup></span>, top left panel; (<b>b</b>) slope <span class="html-italic">m</span>, upper central panel; (<b>c</b>) intercept <span class="html-italic">q</span>, upper right panel; (<b>d</b>) Student test <span class="html-italic">T-test</span>, lower left panel; (<b>e</b>) Fisher test <span class="html-italic">F-test</span>, lower central panel; (<b>f</b>) mean absolute error MAE, lower right panel; An interpolation line is reported in black.</p>
Full article ">Figure 6
<p>Scatterplot of NDVI <span class="html-italic">vs.</span> Δ<span class="html-italic">T</span>. Over imposed dry (<span class="html-italic">P<sub>max</sub></span> = 97) and wet edges (<span class="html-italic">P<sub>min</sub></span> = 7). Pixels from different images are represented with colours ranging from red to blue, to indicate increasing average θ.</p>
Full article ">Figure 7
<p>Variability of some statistical parameters used to characterize <span class="html-italic">in situ</span> <span class="html-italic">vs.</span> remote sensing θ at increasing θ<span class="html-italic"><sub>Rs</sub></span> aggregation scale: (<b>a</b>) <span class="html-italic">r<sup>2</sup></span>, upper left; (<b>b</b>) <span class="html-italic">m</span>, upper right panel; (<b>c</b>) intercept <span class="html-italic">q</span>, lower left panel; (<b>d</b>) MAE, lower right panel.</p>
Full article ">Figure 8
<p>Spatial distribution of θ for the DOY exhibiting the higher spatial variability (DOY 204, left panel) and current percentage of available water to the plant roots (upper-right panel); θ statistics (2nd, 10th, 90th, 98th percentiles and average spatial values) for the whole time series (lower-right panel).</p>
Full article ">
2273 KiB  
Article
Development of a Capacitive Ice Sensor to Measure Ice Growth in Real Time
by Xiang Zhi, Hyo Chang Cho, Bo Wang, Cheol Hee Ahn, Hyeong Soon Moon and Jeung Sang Go
Sensors 2015, 15(3), 6688-6698; https://doi.org/10.3390/s150306688 - 19 Mar 2015
Cited by 22 | Viewed by 9145
Abstract
This paper presents the development of a capacitive sensor to measure the growth of ice on a fuel pipe surface in real time. The ice sensor consists of pairs of electrodes to detect the change in capacitance and a thermocouple temperature sensor to [...] Read more.
This paper presents the development of a capacitive sensor to measure the growth of ice on a fuel pipe surface in real time. The ice sensor consists of pairs of electrodes to detect the change in capacitance and a thermocouple temperature sensor to monitor the ice formation process. In addition, an environmental chamber was specially designed to control the humidity and temperature to simulate ice formation conditions. Owing to the humidity, a water film first forms on the ice sensor, which results in an increase in capacitance. Ice nucleation then occurs, followed by the rapid formation of frost ice, which decreases the capacitance suddenly; the capacitance finally saturates. The developed ice sensor thus tracks the ice growth and provides information about the icing temperature in real time. Full article
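The described capacitance signature (rise from the water film, sudden drop at nucleation, then saturation) suggests a simple detector such as the sketch below, which flags the first sharp relative drop in a capacitance time series. The 5% drop threshold and the toy trace are illustrative assumptions, not values from the paper.

```python
import numpy as np

def detect_nucleation(capacitance, drop_threshold=0.05):
    """Return the index of the first sample where the capacitance falls
    sharply between consecutive readings (the nucleation signature), or
    None if no such drop occurs; drop_threshold is a relative drop (5%)."""
    c = np.asarray(capacitance, dtype=float)
    rel_change = np.diff(c) / c[:-1]
    drops = np.where(rel_change < -drop_threshold)[0]
    return int(drops[0] + 1) if drops.size else None

# Toy trace: water film grows (rise), nucleation (sudden drop), saturation.
trace = [10.0, 10.5, 11.2, 12.0, 12.6, 11.4, 11.3, 11.3]
print(detect_nucleation(trace))   # prints 5
```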
(This article belongs to the Section Physical Sensors)
Show Figures

Figure 1
<p>Schematic view of the capacitive ice sensor installed in a pipe to measure the ice growth from the inner surface of the fuel pipe.</p>
Full article ">Figure 2
<p>Fabricated ice sensor array on the silicon wafer. (<b>a</b>) Capacitive ice sensor (front); (<b>b</b>) Thermocouple (back)</p>
Full article ">Figure 3
<p>Measurement of capacitance for an increasing water height by using the fabricated ice sensor.</p>
Full article ">Figure 4
<p>Temperature measured by the fabricated thermocouple.</p>
Full article ">Figure 5
<p>Cooling chamber to control temperature and humidity.</p>
Full article ">Figure 6
<p>Cooling performance of the fabricated cooling chamber.</p>
Full article ">Figure 7
<p>Environmental control performance of humidity in the cooling chamber.</p>
Full article ">Figure 8
<p>Picture of the plastic packaged ice sensor</p>
Full article ">Figure 9
<p>Real time measurement of ice formation cycle.</p>
Full article ">Figure 10
<p>Real time measurement of ice growth for the different relative humidity: (<b>a</b>) at a relative humidity of 30%; (<b>b</b>) at a relative humidity of 35%; (<b>c</b>) at a relative humidity of 44%; (<b>d</b>) at a relative humidity of 52%; (<b>e</b>) at a relative humidity of 60%.</p>
Full article ">
1504 KiB  
Article
Beamforming and Power Control in Sensor Arrays Using Reinforcement Learning
by Náthalee C. Almeida, Marcelo A.C. Fernandes and Adrião D.D. Neto
Sensors 2015, 15(3), 6668-6687; https://doi.org/10.3390/s150306668 - 19 Mar 2015
Cited by 5 | Viewed by 6247
Abstract
The use of beamforming and power control, combined or separately, has advantages and disadvantages, depending on the application. The combined use of beamforming and power control has been shown to be highly effective in applications involving the suppression of interference signals from different [...] Read more.
The use of beamforming and power control, combined or separately, has advantages and disadvantages depending on the application. The combined use of beamforming and power control has been shown to be highly effective in applications involving the suppression of interference signals from different sources. However, it is necessary to identify efficient methodologies for the combined operation of these two techniques. The most appropriate technique may be selected by means of an intelligent agent capable of making the best choice between beamforming and power control. The present paper proposes an algorithm using reinforcement learning (RL) to determine the optimal combination of beamforming and power control in sensor arrays. The RL algorithm used was Q-learning with an ε-greedy policy, and training was performed offline. The simulations showed that RL was effective for implementing a switching policy between the different techniques, taking advantage of the positive characteristics of each technique in terms of signal reception. Full article
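A minimal sketch of such a Q-learning scheme is shown below: the agent observes a discretized SINR state and chooses between beamforming and power control with an ε-greedy policy, updating Q offline. The state discretization, the reward and the toy environment dynamics are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 10, 2          # discretized SINR levels; 0 = beamforming, 1 = power control
Q = np.zeros((n_states, n_actions))
lr, gamma, eps = 0.1, 0.9, 0.1       # learning rate, discount, exploration (illustrative)

def choose_action(state):
    """epsilon-greedy selection between the two techniques."""
    if rng.random() < eps:
        return int(rng.integers(n_actions))
    return int(np.argmax(Q[state]))

def q_update(s, a, reward, s_next):
    """Standard Q-learning update rule."""
    Q[s, a] += lr * (reward + gamma * Q[s_next].max() - Q[s, a])

# Toy offline training loop with a made-up environment in which action 0
# tends to raise the SINR state (purely illustrative dynamics).
s = 0
for _ in range(1000):
    a = choose_action(s)
    s_next = min(s + 1, n_states - 1) if a == 0 else s
    q_update(s, a, 1.0 if s_next > s else 0.0, s_next)
    s = 0 if s_next == n_states - 1 else s_next
print(np.argmax(Q, axis=1))          # learned technique per SINR state
```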
(This article belongs to the Section Physical Sensors)
Show Figures

Figure 1
<p>Functional diagram of an adaptive array.</p>
Full article ">Figure 2
<p>Scheme of interaction between the agent and environment.</p>
Full article ">Figure 3
<p>(<b>a</b>) System response. Agent started with SINR = −0.8 dB; (<b>b</b>) System response. Agent started with SINR = 0 dB; (<b>c</b>) System response. Agent started with SINR = 5 dB.</p>
Full article ">Figure 4
<p>The switching sequence among the two techniques.</p>
Full article ">Figure 5
<p>(<b>a</b>) System response. Agent started with SINR = −0.8 dB; (<b>b</b>) System response. Agent started with SINR = 0 dB; (<b>c</b>) System response. Agent started with SINR = 5 dB.</p>
Full article ">Figure 6
<p>The switching sequence among the two techniques.</p>
Full article ">
3719 KiB  
Article
Water Area Extraction Using RADARSAT SAR Imagery Combined with Landsat Imagery and Terrain Information
by Seunghwan Hong, Hyoseon Jang, Namhoon Kim and Hong-Gyoo Sohn
Sensors 2015, 15(3), 6652-6667; https://doi.org/10.3390/s150306652 - 19 Mar 2015
Cited by 71 | Viewed by 7295
Abstract
This paper presents an effective water extraction method using SAR imagery in preparation for flood mapping in unpredictable flood situations. The proposed method is based on thresholding of the SAR amplitude, terrain information, and object-based classification techniques for noise removal. Since the [...] Read more.
This paper presents an effective water extraction method using SAR imagery in preparation for flood mapping in unpredictable flood situations. The proposed method is based on thresholding of the SAR amplitude, terrain information, and object-based classification techniques for noise removal. Since the water areas in SAR images have the lowest amplitude values, amplitude thresholding can effectively extract water bodies. However, areas occluded by steep relief cannot be distinguished from water by their reflective properties in SAR imagery, and they were therefore eliminated with terrain information. Even after amplitude thresholding and the use of terrain information, noise that interferes with the interpretation of water maps still remained, so object-based classification using an object-size criterion was applied for noise removal, with the criterion determined by a histogram-based technique. When only SAR amplitude information was used, the overall accuracy was 83.67%. However, using SAR amplitude, terrain information and the noise-removal technique, the overall classification accuracy over the study area turned out to be 96.42%. In particular, the user accuracy was improved by 46.00%. Full article
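The extraction chain can be pictured with the following sketch: threshold low amplitude as water, mask terrain-occluded pixels, then drop connected objects below a size criterion. It assumes SciPy's ndimage module is available and takes the thresholds as inputs; the paper derives the size criterion from a histogram-based technique, which is not reproduced here.

```python
import numpy as np
from scipy import ndimage

def extract_water(amplitude, occlusion_mask, amp_threshold, min_size):
    """Threshold low SAR amplitude as water, drop terrain-occluded pixels,
    then remove connected objects smaller than min_size pixels."""
    water = (amplitude < amp_threshold) & ~occlusion_mask
    labeled, n = ndimage.label(water)
    sizes = ndimage.sum(water, labeled, index=np.arange(1, n + 1))
    keep_labels = np.where(sizes >= min_size)[0] + 1
    return np.isin(labeled, keep_labels)

# Toy usage on random data.
rng = np.random.default_rng(2)
amp = rng.random((100, 100))
occluded = np.zeros((100, 100), dtype=bool)
print(int(extract_water(amp, occluded, amp_threshold=0.1, min_size=3).sum()))
```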
(This article belongs to the Section Remote Sensors)
Show Figures

Figure 1
<p>The schematic diagram of water area extraction.</p>
Full article ">Figure 2
<p>RADARSAT-1 SAR images acquired over the study site area: (<b>a</b>) Not flooded; (<b>b</b>) Flooded.</p>
Full article ">Figure 3
<p>Terrain information: (<b>a</b>) DEM; (<b>b</b>) DSM.</p>
Full article ">Figure 4
<p>(<b>a</b>) Landsat TM imagery; (<b>b</b>) land cover map created using Landsat imagery (in general, class 1 represented forested land, class 2 urban land, class 3 agricultural land, rangeland and wetland, and class 4 barren land and tideland); (<b>c</b>) orthorectified aerial image over the study area.</p>
Full article ">Figure 5
<p>GCPs used for geometric correction and result of geometric and radiometric topographic correction: (<b>a</b>) GCPs (yellow: Control point, red: Check point, blue: reference water map); (<b>b</b>) orthorectified aerial image and SAR imagery before terrain effect correction (red: an actual ridge, yellow: a ridge represented in SAR imagery, orange: inconsistency of ridge location due to terrain distortion); (<b>c</b>) orthorectified aerial image and SAR imagery after terrain effect correction (red: an actual ridge corresponds with a ridge represented in SAR imagery).</p>
Full article ">Figure 6
<p>The water classification results of: (<b>a</b>) Case #1; (<b>b</b>) Case #2; (<b>c</b>) Case #3.</p>
Full article ">Figure 7
<p>(<b>a</b>) Example of labelled objects (Gray: water, White: non-water); (<b>b</b>) Determination of threshold to remove the misclassified objects.</p>
Full article ">Figure 8
<p>The water classification results of: (<b>a</b>) Case #4; (<b>b</b>) Case #5; (<b>c</b>) Case #6.</p>
Full article ">Figure 9
<p>Flood map (red: flooded areas, white: permanent water areas): (<b>a</b>) Case #1; (<b>b</b>) Case #2; (<b>c</b>) Case #3; (<b>d</b>) Case #4; (<b>e</b>) Case #5; (<b>f</b>) Case #6.</p>
Full article ">Figure 10
<p>Map of flood area (red: flooded areas, white: permanent water areas).</p>
Full article ">
21391 KiB  
Article
Wavelength-Adaptive Dehazing Using Histogram Merging-Based Classification for UAV Images
by Inhye Yoon, Seokhwa Jeong, Jaeheon Jeong, Doochun Seo and Joonki Paik
Sensors 2015, 15(3), 6633-6651; https://doi.org/10.3390/s150306633 - 19 Mar 2015
Cited by 29 | Viewed by 7351
Abstract
Since incoming light to an unmanned aerial vehicle (UAV) platform can be scattered by haze and dust in the atmosphere, the acquired image loses the original color and brightness of the subject. Enhancement of hazy images is an important task in improving the [...] Read more.
Since incoming light to an unmanned aerial vehicle (UAV) platform can be scattered by haze and dust in the atmosphere, the acquired image loses the original color and brightness of the subject. Enhancement of hazy images is therefore an important task in improving the visibility of various UAV images. This paper presents a spatially-adaptive dehazing algorithm that merges color histograms with consideration of the wavelength-dependent atmospheric turbidity. Based on the wavelength-adaptive hazy image acquisition model, the proposed dehazing algorithm consists of three steps: (i) image segmentation based on geometric classes; (ii) generation of the context-adaptive transmission map; and (iii) intensity transformation for enhancing a hazy UAV image. The major contribution of the research is a novel hazy UAV image degradation model that considers the wavelength of the light sources. In addition, the proposed transmission map provides a theoretical basis for differentiating visually important regions from others based on the turbidity and the merged classification results. Full article
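Once a transmission map and the atmospheric light have been estimated, a dehazed image follows from inverting the standard haze formation model, as in the generic sketch below. This shows only the final intensity transformation; the paper's segmentation and wavelength-adaptive transmission estimation are not reproduced, and the floor value t0 is an illustrative assumption.

```python
import numpy as np

def dehaze(I, t, A, t0=0.1):
    """Invert the hazy image formation model I = J*t + A*(1 - t) per pixel,
    given a transmission map t (H x W) and atmospheric light A; t is
    floored at t0 to avoid amplifying noise in dense haze."""
    t = np.maximum(t, t0)[..., np.newaxis]
    J = (I.astype(np.float32) - A) / t + A
    return np.clip(J, 0.0, 255.0).astype(np.uint8)

# Toy usage: a gray hazy image with uniform transmission 0.6.
I = np.full((4, 4, 3), 180, dtype=np.uint8)
print(dehaze(I, np.full((4, 4), 0.6), A=220.0)[0, 0])
```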
(This article belongs to the Special Issue UAV Sensors for Environmental Monitoring)
Show Figures


<p>Test images acquired from the same indoor scene with a different amount of steam generated by a humidifier: (<b>a</b>) the haze-free image without steam; and (<b>b</b>–<b>d</b>) the hazy images with different amounts of steam.</p>
Full article ">
<p>The distribution of the RGB color of the red rectangular regions in <a href="#f1-sensors-15-06633" class="html-fig">Figure 1</a>.</p>
Full article ">
<p>The RGB color histograms of the white regions in <a href="#f1-sensors-15-06633" class="html-fig">Figure 1: (<b>a</b>)</a> red, green and blue (from left to right) color histograms of the white regions of the haze-free image shown in <a href="#f1-sensors-15-06633" class="html-fig">Figure 1a; and (<b>b</b>–<b>d</b>)</a> red, green and blue color histograms of the hazy images shown in <a href="#f1-sensors-15-06633" class="html-fig">Figure 1b–d</a>.</p>
Full article ">
<p>The proposed wavelength-adaptive UAV image formation model in a hazy atmosphere acquired by a UAV platform. <span class="html-italic">f</span><sub>sun</sub>(λ) and <span class="html-italic">f</span><sub>sky</sub>(λ) respectively represent the sun and sky light. The sky light <span class="html-italic">f</span><sub>sky</sub>(λ) represents the light component that is scattered in the atmosphere.</p>
Full article ">
<p>The proposed single UAV image-based dehazing algorithm for enhancing hazy UAV images.</p>
Full article ">
<p>Illustration of the proposed segmentation algorithm: (<b>a</b>) five superpixels of an input image and (<b>b</b>–<b>d</b>) regions containing <span class="html-italic">s</span><sub>1</sub> for three hypotheses, such as <span class="html-italic">h<sub>j</sub></span><sub>1</sub>, <span class="html-italic">j</span> = 1, 2, 3.</p>
Full article ">
<p>(<b>a</b>) A hazy aerial image; (<b>b</b>) the corresponding histogram classified into the dark, middle and bright ranges; (<b>c</b>) the result of histogram merging-based segmentation; and (<b>d</b>) the result of labeling.</p>
Full article ">
<p>Results of dehazing: (<b>a</b>) the transmission map using the proposed method; (<b>b</b>) the labeled sky image using the proposed segmentation method; and (<b>c</b>) the dehazed image using the proposed modified transmission map and atmospheric light.</p>
Full article ">
<p>Performance evaluation of color restoration using a simulated hazy image: (<b>a</b>) original haze-free image; (<b>b</b>) the simulated hazy image; and (<b>c</b>) the dehazed image using the proposed method.</p>
Full article ">
909 KiB  
Article
Sensing in the Collaborative Internet of Things
by João B. Borges Neto, Thiago H. Silva, Renato Martins Assunção, Raquel A. F. Mini and Antonio A. F. Loureiro
Sensors 2015, 15(3), 6607-6632; https://doi.org/10.3390/s150306607 - 19 Mar 2015
Cited by 26 | Viewed by 8476
Abstract
We are entering a new era of computing technology, the era of the Internet of Things (IoT). An important element of this popularization is the widespread use of off-the-shelf sensors. Most of these sensors will be deployed by different owners, generally ordinary users, creating [...] Read more.
We are entering a new era of computing technology, the era of the Internet of Things (IoT). An important element of this popularization is the widespread use of off-the-shelf sensors. Most of these sensors will be deployed by different owners, generally ordinary users, creating what we call the Collaborative IoT. The Collaborative IoT considerably increases the amount and availability of data collected for different purposes, creating interesting new opportunities but also several challenges. For example, it is very challenging to search for and select a desired sensor or group of sensors when the description of the provided sensed data is missing or imprecise. Given that, in this work we characterize the properties of sensed data in the Internet of Things, focusing on data contributed by several sources, including sensors from ordinary users. We conclude that, in order to safely use data available in the IoT, a filtering process is needed to increase the data reliability. In this direction, we propose a new, simple and powerful approach that helps to select reliable sensors. We tested our method on different types of sensed data, and the results reveal its effectiveness in the correct selection of sensor data. Full article
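The filtering idea can be sketched as a correlation test against a trusted reference stream, keeping only sensors whose Pearson coefficient clears a cut-off (0.8 is one of the values discussed in the article). The function below is a minimal sketch of that selection step; the toy streams are made up for illustration.

```python
import numpy as np

def reliable_sensors(streams, reference, r_min=0.8):
    """Keep only the sensor streams whose Pearson correlation with a trusted
    reference series reaches r_min; returns {sensor_name: r}."""
    selected = {}
    for name, series in streams.items():
        r = np.corrcoef(series, reference)[0, 1]
        if r >= r_min:
            selected[name] = round(float(r), 3)
    return selected

# Toy usage: one well-correlated and one unrelated "temperature" stream.
rng = np.random.default_rng(3)
reference = np.sin(np.linspace(0, 6, 200)) * 10 + 20
streams = {
    "sensor_a": reference + rng.normal(0, 0.5, 200),   # tracks the reference
    "sensor_b": rng.normal(20, 5, 200),                # mislabeled/unreliable
}
print(reliable_sensors(streams, reference))
```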
(This article belongs to the Special Issue Wireless Sensor Networks and the Internet of Things)
Show Figures


<p>Comparison between five possible reference services and the average merged sample, for the two datasets.</p>
Full article ">
<p>Reference data samples for different types of data temperature, pressure, humidity and wind speed (Dataset 2).</p>
Full article ">
<p>Different sensor readings for the “temperature” tag, but with different meanings (Dataset 1).</p>
Full article ">
<p>Scatter plot between the reference data (<span class="html-italic">x</span>-axis) and (a) the correlated Sensor 1; and (b) the uncorrelated Sensor 2 (Dataset 1).</p>
Full article ">
<p>(a) Scatter plot between a temperature and a carbon monoxide sensor; and (b) the analysis of its correlation coefficient when the number of samples increases (Dataset 2).</p>
Full article ">
<p>Time series (<b>a1</b>) and scatter plot (<b>a2</b>) analysis for a <span class="html-italic">r</span> = 0.777395 related sensor, and time series (<b>b1</b>) and scatter plot (<b>b2</b>) analysis for a <span class="html-italic">r</span> = 0.864393 sensor (Dataset 1).</p>
Full article ">
<p>Analysis of the number of sensors found in Dataset 1. (<b>a</b>) Number of sensors and their data streams found for different sensor search ranges (Dataset 1); (<b>b</b>) Heat map for the number of data streams found when varying the range of the desired location area and the correlation coefficient limit (Dataset 1).</p>
Full article ">
<p>Analysis of the time-series and pairwise-distances between 14 reliable sensors with <span class="html-italic">r</span> ≥ 0.8 to the reference temperature data (Dataset 1). (<b>a</b>) Time-series analysis of those 14 reliable sensors; (<b>b</b>) Pairwise-distance between the time series of the reliable sensors.</p>
Full article ">
<p>Pairwise-distances for temperature sensors in a range of 100 km from the coordinates of London, UK, in Dataset 1, considering only sensors with correlation <span class="html-italic">r</span> ≥ 0.8.</p>
Full article ">
4074 KiB  
Article
Tracking Systems for Virtual Rehabilitation: Objective Performance vs. Subjective Experience. A Practical Scenario
by Roberto Lloréns, Enrique Noé, Valery Naranjo, Adrián Borrego, Jorge Latorre and Mariano Alcañiz
Sensors 2015, 15(3), 6586-6606; https://doi.org/10.3390/s150306586 - 19 Mar 2015
Cited by 20 | Viewed by 8463
Abstract
Motion tracking systems are commonly used in virtual reality-based interventions to detect movements in the real world and transfer them to the virtual environment. There are different tracking solutions based on different physical principles, which mainly define their performance parameters. However, special requirements [...] Read more.
Motion tracking systems are commonly used in virtual reality-based interventions to detect movements in the real world and transfer them to the virtual environment. There are different tracking solutions based on different physical principles, which mainly define their performance parameters. However, special requirements have to be considered for rehabilitation purposes. This paper studies and compares the accuracy and jitter of three tracking solutions (optical, electromagnetic, and skeleton tracking) in a practical scenario and analyzes the subjective perceptions of 19 healthy subjects, 22 stroke survivors, and 14 physical therapists. The optical tracking system provided the best accuracy (1.074 ± 0.417 cm), while the electromagnetic device provided the most inaccurate results (11.027 ± 2.364 cm). However, the electromagnetic solution provided the best jitter values (0.324 ± 0.093 cm), in contrast to the skeleton tracking, which had the worst results (1.522 ± 0.858 cm). Healthy individuals and professionals preferred the skeleton tracking solution over the optical and electromagnetic solutions (in that order). Individuals with stroke chose the optical solution over the other options. Our results show that subjective perceptions and preferences are far from constant among different populations, suggesting that these considerations, together with the performance parameters, should also be taken into account when designing a rehabilitation system. Full article
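As a sketch of how such performance figures can be computed from repeated static measurements over a known grid, the function below takes accuracy as the mean Euclidean error of the averaged estimates and jitter as the spread of repeated estimates around their per-point mean. These are plausible readings of the two metrics, not necessarily the paper's exact definitions, and the toy data are made up.

```python
import numpy as np

def accuracy_and_jitter(estimates, ground_truth):
    """estimates: (n_points, n_repeats, 3) repeated static position estimates
    over a grid of known points; ground_truth: (n_points, 3).
    Accuracy = mean Euclidean error of the averaged estimates;
    jitter = spread of the repeats around their per-point mean."""
    mean_pos = estimates.mean(axis=1)
    accuracy = np.linalg.norm(mean_pos - ground_truth, axis=1).mean()
    jitter = np.linalg.norm(estimates - mean_pos[:, None, :], axis=2).std()
    return accuracy, jitter

# Toy usage: a 6 x 6 grid with 50 repeated, slightly noisy estimates per point.
rng = np.random.default_rng(4)
truth = rng.uniform(0.0, 1.5, (36, 3))
est = truth[:, None, :] + rng.normal(0, 0.01, (36, 50, 3))
print(accuracy_and_jitter(est, truth))
```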
(This article belongs to the Collection Sensors for Globalized Healthy Living and Wellbeing)
Show Figures

Figure 1
<p>Setting of the tracking systems. Three different tracking systems were tested in the study. (<b>a</b>) The optical tracking solution used two cameras (I) and a passive reflective marker (II); (<b>b</b>) The electromagnetic tracking solution used a source (III) and a sensor (IV), wire connected to a hub (V); (<b>c</b>) The skeleton tracking solution used a depth sensor (VI).</p>
Full article ">Figure 2
<p>Participant interacting with the virtual rehabilitation system. The participant’s movements are tracked by two infrared cameras (II), which estimate the position of reflective markers attached to their ankles (III). The position of the markers are then transferred to the virtual environment, shown in a TV screen (I).</p>
Full article ">Figure 3
<p>Measurement grid. A 6 × 6 grid with 25 cm × 25 cm squares covering an area of 1.5 m<sup>2</sup> was used to measure the estimated position of the right ankle joint.</p>
Full article ">Figure 4
<p>Subjective responses of all groups to the first four items of questionnaires <b>A</b> and <b>B</b>. <b>Blue</b>: NaturalPoint<sup>®</sup> OptiTrack<sup>TM</sup>; <b>Orange</b>: Polhemus<sup>TM</sup> G4<sup>TM</sup>; <b>Grey</b>: Microsoft<sup>®</sup> Kinect<sup>TM</sup>. Only significant differences are stated. * <span class="html-italic">p</span> &lt; 0.05, ** <span class="html-italic">p</span> &lt; 0.001.</p>
Full article ">Figure 5
<p>Subjective responses of healthy subjects and individuals with stroke to item five of questionnaire A. <b>Blue</b>: NaturalPoint<sup>®</sup> OptiTrack<sup>TM</sup>; <b>Orange</b>: Polhemus<sup>TM</sup> G4<sup>TM</sup>; <b>Grey</b>: Microsoft<sup>®</sup> Kinect<sup>TM</sup>. Only significant differences are stated. * <span class="html-italic">p</span> &lt; 0.05, ** <span class="html-italic">p</span> &lt; 0.001.</p>
Full article ">Figure 6
<p>Subjective responses of therapists to items five to nine of questionnaire B. <b>Blue</b>: NaturalPoint<sup>®</sup> OptiTrack<sup>TM</sup>; <b>Orange</b>: Polhemus<sup>TM</sup> G4<sup>TM</sup>; <b>Grey</b>: Microsoft<sup>®</sup> Kinect<sup>TM</sup>. Only significant differences are stated. * <span class="html-italic">p</span> &lt; 0.05, ** <span class="html-italic">p</span> &lt; 0.001.</p>
Full article ">Figure 7
<p>Subjective responses of all groups regarding their order of preference. <b>Blue</b>: NaturalPoint<sup>®</sup> OptiTrack<sup>TM</sup>; <b>Orange</b>: Polhemus<sup>TM</sup> G4<sup>TM</sup>; <b>Grey</b>: Microsoft<sup>®</sup> Kinect<sup>TM</sup>. Only significant differences are stated. * <span class="html-italic">p</span> &lt; 0.05, ** <span class="html-italic">p</span> &lt; 0.001.</p>
Full article ">
8337 KiB  
Article
New Calibration Method Using Low Cost MEM IMUs to Verify the Performance of UAV-Borne MMS Payloads
by Kai-Wei Chiang, Meng-Lun Tsai, El-Sheimy Naser, Ayman Habib and Chien-Hsun Chu
Sensors 2015, 15(3), 6560-6585; https://doi.org/10.3390/s150306560 - 19 Mar 2015
Cited by 33 | Viewed by 8371
Abstract
Spatial information plays a critical role in remote sensing and mapping applications such as environment surveying and disaster monitoring. An Unmanned Aerial Vehicle (UAV)-borne mobile mapping system (MMS) can accomplish rapid spatial information acquisition under limited sky conditions with better mobility and flexibility [...] Read more.
Spatial information plays a critical role in remote sensing and mapping applications such as environment surveying and disaster monitoring. An Unmanned Aerial Vehicle (UAV)-borne mobile mapping system (MMS) can accomplish rapid spatial information acquisition under limited sky conditions with better mobility and flexibility than other means. This study proposes a long-endurance Direct Geo-referencing (DG)-based fixed-wing UAV photogrammetric platform and two DG modules, each using a different commercial Micro-Electro-Mechanical Systems (MEMS) tactical-grade Inertial Measurement Unit (IMU). Furthermore, this study develops a novel kinematic calibration method that includes lever arms, boresight angles and camera shutter delay to improve positioning accuracy. The new calibration method is then compared with the traditional calibration approach. The results show that the accuracy of DG can be significantly improved by flying at a lower altitude using the new, higher-specification hardware. The proposed method improves the accuracy of DG by about 20%. The preliminary results show that the two-dimensional (2D) horizontal DG positioning accuracy is around 5.8 m at a flight height of 300 m using the newly designed tactical-grade integrated Positioning and Orientation System (POS). The positioning accuracy in three dimensions (3D) is less than 8 m. Full article
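The calibration terms the study estimates (lever arm, boresight angles, shutter delay) enter the usual direct geo-referencing projection; a generic sketch of that equation is given below. The frame conventions and names are assumptions consistent with standard DG formulations, not code from the paper.

```python
import numpy as np

def direct_georeference(r_ins_m, R_b_to_m, R_c_to_b, lever_arm_b, ray_c, scale):
    """Map an image ray to mapping-frame ground coordinates with the standard
    direct geo-referencing equation
        r_P^m = r_INS^m + R_b^m (a^b + s * R_c^b * r^c),
    where a^b is the lever arm and R_c^b the boresight rotation."""
    return r_ins_m + R_b_to_m @ (lever_arm_b + scale * (R_c_to_b @ ray_c))

# Toy usage with identity attitude and a 1 m lever arm along the body x-axis.
r_ins = np.array([100.0, 200.0, 300.0])
print(direct_georeference(r_ins, np.eye(3), np.eye(3),
                          np.array([1.0, 0.0, 0.0]),
                          np.array([0.0, 0.0, -1.0]), scale=50.0))
```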
(This article belongs to the Special Issue UAV Sensors for Environmental Monitoring)
Show Figures

Figure 1
<p>The DG module configuration.</p>
Full article ">Figure 2
<p>The LC integration scheme.</p>
Full article ">Figure 3
<p>The concept of airborne DG.</p>
Full article ">Figure 4
<p>Concept of boresight angle calibration.</p>
Full article ">Figure 5
<p>Concept of lever arm calibration.</p>
Full article ">Figure 6
<p>The proposed UAV platform.</p>
Full article ">Figure 7
<p>The configuration of DG module.</p>
Full article ">Figure 8
<p>The GPS receiver of DG module.</p>
Full article ">Figure 9
<p>The IMUs for DG module.</p>
Full article ">Figure 10
<p>Canon EOS 5D Mark II &amp; EF 20 mm f/2.8 USM.</p>
Full article ">Figure 11
<p>The camera control field.</p>
Full article ">Figure 12
<p>Relation between two situations.</p>
Full article ">Figure 13
<p>The distribution of GCPs in two control fields.</p>
Full article ">Figure 14
<p>The process of INS/GPS POS assisted AT, system calibration and DG.</p>
Full article ">Figure 15
<p>The TC integration scheme.</p>
Full article ">Figure 16
<p>POS software.</p>
Full article ">Figure 17
<p>Data processing procedure.</p>
Full article ">Figure 18
<p>The scopes of the two tests.</p>
Full article ">Figure 19
<p>The trajectories of the first test flight. (<b>a</b>) UAV-MMQG-600; (<b>b</b>) UAV-MMQG-300.</p>
Full article ">Figure 20
<p>The trajectories of the second test flight. (<b>a</b>) UAV-ADIS-600; (<b>b</b>) UAV-ADIS-300.</p>
Full article ">Figure 21
<p>The calibration operation of the program.</p>
Full article ">Figure 22
<p>The lever-arm of each epoch.</p>
Full article ">Figure 23
<p>The DG program.</p>
Full article ">Figure 24
<p>DG error based on MMQG with 600 m.</p>
Full article ">Figure 25
<p>DG error based on MMQG with 300 m.</p>
Full article ">Figure 26
<p>DG error based on ADIS 16488 with 600 m.</p>
Full article ">Figure 27
<p>DG error based on ADIS 16488 with 300 m.</p>
Full article ">Figure 28
<p>The positional errors of traditional photogrammetry based on ADIS 16488 with 300 m.</p>
Full article ">
1586 KiB  
Article
Single- and Two-Phase Flow Characterization Using Optical Fiber Bragg Gratings
by Virgínia H.V. Baroncini, Cicero Martelli, Marco José Da Silva and Rigoberto E.M. Morales
Sensors 2015, 15(3), 6549-6559; https://doi.org/10.3390/s150306549 - 17 Mar 2015
Cited by 20 | Viewed by 5938
Abstract
Single- and two-phase flow characterization using optical fiber Bragg gratings (FBGs) is presented. The sensor unit consists of an optical fiber Bragg grating positioned transversely to the flow and fixed to the pipe walls. The hydrodynamic pressure applied by the liquid or air/liquid [...] Read more.
Single- and two-phase flow characterization using optical fiber Bragg gratings (FBGs) is presented. The sensor unit consists of an optical fiber Bragg grating positioned transversely to the flow and fixed to the pipe walls. The hydrodynamic pressure applied by the liquid or air/liquid flow to the optical fiber induces a deformation that can be detected by the FBG. Given that the applied pressure is directly related to the mass flow, a relationship can be established between the grating resonance wavelength shift and the mass flow when the flow velocity is well known. In two-phase flows of air and liquid, the force applied to the fiber changes significantly because of the very different densities of the two substances. As a consequence, the optical fiber deformation and the corresponding grating wavelength shift as a function of the flow will be very different for an air bubble and a liquid slug, allowing their detection as they flow through the pipe. A quasi-distributed sensing tool with 18 sensors evenly spread along the pipe is developed and characterized, making it possible to characterize the flow, as well as to track the bubbles over a large section of the test bed. Results show good agreement with standard measurement methods and open up plenty of opportunities for both laboratory measurement tools and field applications. Full article
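The paired-sensor velocity measurement (two FBGs a fixed distance d apart, d = 5 cm in the paper) reduces to estimating the time lag between their signals; a minimal cross-correlation sketch follows. The sampling rate and the synthetic pulse are illustrative, and the paper's signal conditioning is not reproduced.

```python
import numpy as np

def bubble_velocity(sig_a, sig_b, fs, spacing_m):
    """Estimate slug/bubble velocity from the time lag between two FBG
    sensors a known distance apart, via the peak of the cross-correlation;
    fs is the sampling rate in Hz."""
    a = sig_a - np.mean(sig_a)
    b = sig_b - np.mean(sig_b)
    xcorr = np.correlate(b, a, mode="full")
    lag = np.argmax(xcorr) - (len(a) - 1)          # samples by which b trails a
    dt = lag / fs
    return spacing_m / dt if dt > 0 else float("nan")

# Toy usage: the second sensor sees the same transient 25 samples later.
fs = 1000.0
t = np.arange(2000)
pulse = np.exp(-0.5 * ((t - 600) / 20.0) ** 2)
print(bubble_velocity(pulse, np.roll(pulse, 25), fs, spacing_m=0.05))  # ~2 m/s
```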
(This article belongs to the Section Physical Sensors)
Show Figures

Figure 1
<p>Schematic drawing of the forces that appear in the optical fiber inserted across a flow: (<b>a</b>) side view; (<b>b</b>) force diagram; (<b>c</b>) diagram showing the changes in dimension of the strained optical fiber; and (<b>d</b>) top view of the acting forces. FBG, fiber Bragg grating.</p>
Full article ">Figure 2
<p>(<b>a</b>) Schematic drawing of the optical fiber strain sensors installed in a pipe; (<b>b</b>) Measuring setup; (<b>c</b>) Experimental rig for simulating different flow regimes.</p>
Full article ">Figure 3
<p>(<b>a</b>) Behavior of the FBG strain for single-phase flow and increasing velocities; the Reynolds number increases, leading to increasing high-frequency oscillations induced on the optical fiber sensor; (<b>b</b>) Typical optical fiber Bragg grating wavelength shift and standard deviation against single-phase flow liquid velocity.</p>
Full article ">Figure 4
<p>(<b>a</b>) Measurement of the flow-induced force to the optical fiber Bragg grating against the flow force determined by the flow properties; (<b>b</b>) Measurement of the mass flow for all 18 sensors, indicating the good agreement between the FBG and standard mass flow meter. In both plots, the solid line is at 45°, indicating the position where all values should converge, and the dashed lines indicate a ±10% error.</p>
Full article ">Figure 5
<p>(<b>a</b>) Schematic representation of the slug flow with detailed information about the air bubble and liquid slug length and shape; (<b>b</b>) Example of a time series detected by the FBG for a typical slug flow for water and air velocities of 2 m/s; (<b>c</b>) Detail of the sensor assembly showing a pair of sensors and the fixed distance <span class="html-italic">d</span> = 5 cm between them; (<b>d</b>) Treated signal for two FBGs as in (c), indicating the time lag Δt between the measured signals that is used to measure the bubble velocity.</p>
Full article ">Figure 6
<p>Bubble velocity measurement, using 18 fiber Bragg gratings sensors.</p>
Full article ">
3901 KiB  
Article
Game Design to Measure Reflexes and Attention Based on Biofeedback Multi-Sensor Interaction
by Inigo De Loyola Ortiz-Vigon Uriarte, Begonya Garcia-Zapirain and Yolanda Garcia-Chimeno
Sensors 2015, 15(3), 6520-6548; https://doi.org/10.3390/s150306520 - 17 Mar 2015
Cited by 14 | Viewed by 8465
Abstract
This paper presents a multi-sensor system for implementing biofeedback as a human-computer interaction technique in a game involving driving cars in risky situations. The sensors used are: Eye Tracker, Kinect, pulsometer, respirometer, electromyography (EMG) and galvanic skin resistance (GSR). An algorithm has been [...] Read more.
This paper presents a multi-sensor system for implementing biofeedback as a human-computer interaction technique in a game involving driving cars in risky situations. The sensors used are: Eye Tracker, Kinect, pulsometer, respirometer, electromyography (EMG) and galvanic skin resistance (GSR). An algorithm has been designed that gives rise to an interaction logic with the game according to the set of physiological constants obtained from the sensors. The results reflect a score of 72.333 on the System Usability Scale (SUS), a significant difference of p = 0.026 in GSR values between the start and end of the game, and a correlation of r = 0.659 (p = 0.008) between the breathing level and the energy-and-joy factor while playing with the Kinect. All the sensors used had an impact on the end results, so none of them should be disregarded in future lines of research, although it would be interesting to obtain breathing values separately from the cardio values. Full article
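The reported statistics can be reproduced in form (not in data) with SciPy, as in the sketch below: a paired comparison of GSR values at the start and end of the game, and a Pearson correlation between breathing level and a questionnaire factor. All numbers are made up for illustration, and the paired t-test is an assumed choice of test, since the abstract does not name one.

```python
import numpy as np
from scipy import stats

# Illustrative GSR readings (arbitrary units) at game start and end for the
# same players; the values are invented, only the analysis mirrors the paper.
gsr_start = np.array([4.1, 3.8, 5.0, 4.6, 3.9, 4.4, 5.2, 4.0])
gsr_end = np.array([4.9, 4.2, 5.8, 5.1, 4.5, 4.9, 6.0, 4.3])
t_stat, p_value = stats.ttest_rel(gsr_start, gsr_end)   # paired comparison
print(f"paired t-test p = {p_value:.3f}")

# Correlation between breathing level and a questionnaire factor score.
breathing = np.array([12, 15, 11, 18, 14, 16, 13, 17])
energy_joy = np.array([3.1, 3.9, 2.8, 4.6, 3.5, 4.1, 3.2, 4.4])
r, p = stats.pearsonr(breathing, energy_joy)
print(f"r = {r:.3f}, p = {p:.3f}")
```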
(This article belongs to the Special Issue Sensors for Entertainment)
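The abstract does not spell out the interaction algorithm, so the following is only a hedged sketch of a biofeedback rule of the kind it describes: readings are normalized against a resting baseline and mapped to a game reaction. All sensor names, baseline values and thresholds here are illustrative assumptions, not the authors' design.

```python
# Hedged sketch of a biofeedback interaction rule: physiological readings
# are compared against a resting baseline and mapped to a game reaction.
# Baselines and thresholds are illustrative assumptions, not paper values.
from dataclasses import dataclass

@dataclass
class Baseline:
    heart_rate_bpm: float = 70.0
    breathing_rate_rpm: float = 14.0
    gsr_kohm: float = 200.0      # galvanic skin resistance

def arousal_index(hr, br, gsr, base):
    """Crude arousal score: positive when the player is more stressed
    than at rest (higher heart/breathing rate, lower skin resistance)."""
    return ((hr / base.heart_rate_bpm - 1.0)
            + (br / base.breathing_rate_rpm - 1.0)
            + (1.0 - gsr / base.gsr_kohm)) / 3.0

def game_reaction(score):
    if score > 0.25:
        return "ease_off"      # slow the car, reduce risk events
    if score < -0.10:
        return "raise_stakes"  # player is calm: add risky situations
    return "hold"

print(game_reaction(arousal_index(hr=95, br=18, gsr=150, base=Baseline())))
```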
Show Figures

Figure 1

Figure 1
<p>Game interface. (<b>a</b>) Eye Tracker mode; (<b>b</b>) Kinect mode.</p>
Full article ">Figure 2
<p>General Data Packet (Pulsometer and respirometer).</p>
Full article ">Figure 3
<p>High-level design.</p>
Full article ">Figure 4
<p>Low-level design.</p>
Full article ">Figure 5
<p>Block 1: Sensor selection and connection.</p>
Full article ">Figure 6
<p>Block 2: Game.</p>
Full article ">Figure 7
<p>Block 3: Processing and saving data.</p>
Full article ">Figure 8
<p>Sub-block for obtained data processing.</p>
Full article ">Figure 9
<p>Block 4: Data display.</p>
Full article ">Figure 10
<p>Example of a user playing. (<b>a</b>) Eye Tracker mode; (<b>b</b>) Kinect mode.</p>
Full article ">Figure 11
<p>Sensors used in the game.</p>
Full article ">Figure 12
<p>Correlation graph between factor 3 (PSS) and peaks in the breathing sensor.</p>
Full article ">
1421 KiB  
Article
Sparse Component Analysis Using Time-Frequency Representations for Operational Modal Analysis
by Shaoqian Qin, Jie Guo and Changan Zhu
Sensors 2015, 15(3), 6497-6519; https://doi.org/10.3390/s150306497 - 17 Mar 2015
Cited by 31 | Viewed by 6772
Abstract
Sparse component analysis (SCA) has been widely used for blind source separation (BSS) for many years. Recently, SCA has been applied to operational modal analysis (OMA), which is also known as output-only modal identification. This paper considers the sparsity of sources’ time-frequency (TF) representation [...] Read more.
Sparse component analysis (SCA) has been widely used for blind source separation (BSS) for many years. Recently, SCA has been applied to operational modal analysis (OMA), which is also known as output-only modal identification. This paper considers the sparsity of sources’ time-frequency (TF) representation and proposes a new TF-domain SCA under the OMA framework. First, the measurements from the sensors are transformed to the TF domain to get a sparse representation. Then, single-source points (SSPs) are detected to better reveal the hyperlines which correspond to the columns of the mixing matrix. The K-hyperline clustering algorithm is used to identify the direction vectors of the hyperlines and then the mixing matrix is calculated. Finally, the basis pursuit de-noising technique is used to recover the modal responses, from which the modal parameters are computed. The proposed method is valid even if the number of active modes exceeds the number of sensors. Numerical simulation and experimental verification demonstrate the good performance of the proposed method. Full article
(This article belongs to the Section Physical Sensors)
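As a rough illustration of the single-source-point detection step, the sketch below keeps the TF points at which the real and imaginary parts of the mixture vector are aligned, a criterion commonly used for SSP detection; the STFT settings and angular tolerance are assumptions rather than the paper's exact algorithm, and the returned unit directions would then feed the K-hyperline clustering step.

```python
# Sketch of the single-source-point (SSP) idea: at an SSP the real and
# imaginary parts of the mixture TF vector point in the same direction.
# Tolerance and STFT settings are illustrative assumptions.
import numpy as np
from scipy.signal import stft

def detect_ssps(x, fs, tol_deg=1.0):
    """x: (n_sensors, n_samples) mixtures. Returns unit direction vectors
    at TF points dominated by a single mode."""
    _, _, X = stft(x, fs=fs, nperseg=256)        # X: (n_sensors, f, t)
    V = X.reshape(X.shape[0], -1)                # flatten the TF grid
    re, im = V.real, V.imag
    cos = np.abs((re * im).sum(0)) / (
        np.linalg.norm(re, axis=0) * np.linalg.norm(im, axis=0) + 1e-12)
    ssp = cos > np.cos(np.deg2rad(tol_deg))      # aligned -> single source
    D = re[:, ssp]
    D /= np.linalg.norm(D, axis=0) + 1e-12       # unit hyperline directions
    return D                                     # cluster these (K-hyperline)
```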
Show Figures


<p>Scatter diagram of two mixtures with three sources in (<b>a</b>) time domain; (<b>b</b>) frequency domain by DCT; and (<b>c</b>) TF domain by STFT.</p>
Full article ">
<p>Scatter diagram of an example of speech utterances with five sources and two mixtures: (<b>a</b>) all STFT coefficients; (<b>b</b>) the detected SSPs.</p>
Full article ">
<p>Three-dof linear model.</p>
Full article ">
<p>Free-vibration system responses and their power spectral density in the case of well-separated modes, <span class="html-italic">α</span> = 0.08.</p>
Full article ">
<p>Scatter diagram of the identified SSPs using all three sensors in the case of well-separated modes, <span class="html-italic">α</span> = 0.08.</p>
Full article ">
<p>Recovered modal responses using all three sensors in the case of well-separated modes, <span class="html-italic">α</span> = 0.08.</p>
Full article ">
<p>Scatter diagram of the identified SSPs using the first two sensors in the case of well-separated modes, <span class="html-italic">α</span> = 0.08.</p>
Full article ">
<p>Recovered modal responses using the first two sensors in the case of well-separated modes, <span class="html-italic">α</span> = 0.08.</p>
Full article ">
<p>Free-vibration system responses and their power spectral density in the case of closely-spaced modes.</p>
Full article ">
1961 KiB  
Article
Cooperative Environment Scans Based on a Multi-Robot System
by Ji-Wook Kwon
Sensors 2015, 15(3), 6483-6496; https://doi.org/10.3390/s150306483 - 17 Mar 2015
Cited by 5 | Viewed by 6172
Abstract
This paper proposes a cooperative environment scan system (CESS) using multiple robots, where each robot has low-cost range finders and low processing power. To organize and maintain the CESS, a base robot monitors the positions of the child robots, controls them, and builds [...] Read more.
This paper proposes a cooperative environment scan system (CESS) using multiple robots, where each robot has low-cost range finders and low processing power. To organize and maintain the CESS, a base robot monitors the positions of the child robots, controls them, and builds a map of the unknown environment, while the child robots with low-performance range finders provide obstacle information. Even though each child robot provides approximate and limited information about the obstacles, CESS can replace a single high-cost laser range finder (LRF), because a large amount of information is acquired and accumulated by the group of child robots. Moreover, the proposed CESS extends the measurement boundaries and detects obstacles hidden behind others. Simulation results are included to show the performance of the proposed system and to compare it with numerical models of commercial 2D and 3D laser scanners. Full article
(This article belongs to the Section Physical Sensors)
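A minimal sketch of the accumulation step described above, assuming the base robot knows each child robot's global pose: individual (bearing, range) returns are projected into world coordinates and marked in one shared occupancy grid. The grid size, resolution and function names are illustrative assumptions.

```python
# Minimal sketch: the base robot folds each child robot's low-cost range
# readings into one shared obstacle map. Grid parameters are assumed.
import math
import numpy as np

RES = 0.05                                   # 5 cm cells (assumed)
grid = np.zeros((400, 400), dtype=np.uint8)  # 20 m x 20 m occupancy grid

def mark_hit(pose_xy_theta, bearing_rad, range_m):
    """Project one (bearing, range) return from a child robot at the given
    global pose into a cell of the shared map and mark it occupied."""
    x, y, theta = pose_xy_theta
    gx = x + range_m * math.cos(theta + bearing_rad)
    gy = y + range_m * math.sin(theta + bearing_rad)
    i = int(gy / RES) + grid.shape[0] // 2
    j = int(gx / RES) + grid.shape[1] // 2
    if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]:
        grid[i, j] = 1

# Readings from several child robots accumulate into the same grid,
# which is how the swarm substitutes for a single high-cost LRF.
mark_hit((1.0, 0.5, math.pi / 2), bearing_rad=0.1, range_m=2.0)
```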
Show Figures

Figure 1

Figure 1
<p>The CESS architecture.</p>
Full article ">Figure 2
<p>The vertical angles of the range finders on the child robots.</p>
Full article ">Figure 3
<p>Two CESS control strategies with respect to the positioning systems. (<b>a</b>) CESS using the vector-field-based control law with the vision-based positioning system; (<b>b</b>) CESS using the behavior-based control algorithm with the UWB-based positioning system.</p>
Full article ">Figure 4
<p>The motions of the child robots following the desired circular path.</p>
Full article ">Figure 5
<p>The combination of the basic behaviors.</p>
Full article ">Figure 6
<p>The combination of the basic behaviors.</p>
Full article ">Figure 7
<p>The performance of CESS in the 2D plane. (<b>a</b>) The obstacle map; (<b>b</b>) The obstacles detected by the LRF; (<b>c</b>) The obstacles detected by CESS based on the vision-based positioning system; (<b>d</b>) The obstacles detected by CESS based on the UWB-based positioning system.</p>
Full article ">Figure 8
<p>The performance of CESS in a 3D space. (<b>a</b>) The obstacle map; (<b>b</b>) The obstacles detected by the LRF; (<b>c</b>) The obstacles detected by CESS based on the vision-based positioning system; (<b>d</b>) The obstacles detected by CESS based on the UWB-based positioning system.</p>
Full article ">
1345 KiB  
Article
A Solid-State Thin-Film Ag/AgCl Reference Electrode Coated with Graphene Oxide and Its Use in a pH Sensor
by Tae Yong Kim, Sung A Hong and Sung Yang
Sensors 2015, 15(3), 6469-6482; https://doi.org/10.3390/s150306469 - 17 Mar 2015
Cited by 61 | Viewed by 15061
Abstract
In this study, we describe a novel solid-state thin-film Ag/AgCl reference electrode (SSRE) that was coated with a protective layer of graphene oxide (GO). This layer was prepared by drop casting a solution of GO on the Ag/AgCl thin film. The potential differences [...] Read more.
In this study, we describe a novel solid-state thin-film Ag/AgCl reference electrode (SSRE) that was coated with a protective layer of graphene oxide (GO). This layer was prepared by drop casting a solution of GO on the Ag/AgCl thin film. The potential differences exhibited by the SSRE were less than 2 mV over 26 days. The cyclic voltammograms of the SSRE were very similar to those of a commercial reference electrode, while the diffusion coefficient of Fe(CN)<sub>6</sub><sup>3−</sup> as calculated from the cathodic peaks of the SSRE was 6.48 × 10<sup>−6</sup> cm<sup>2</sup>/s. The SSRE was used in conjunction with a laboratory-made working electrode to determine its suitability for practical use. The average pH sensitivity of this combined sensor was 58.5 mV/pH in the acid-to-base direction; the correlation coefficient was greater than 0.99. In addition, an integrated pH sensor that included the SSRE was packaged in a secure digital (SD) card and tested. The average sensitivity of the chip was 56.8 mV/pH, with a correlation coefficient greater than 0.99. A pH sensing test was also performed using a laboratory-made potentiometer, which showed a sensitivity of 55.4 mV/pH, again with a correlation coefficient greater than 0.99. Full article
(This article belongs to the Section Chemical Sensors)
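A sensitivity such as 58.5 mV/pH with r &gt; 0.99 is conventionally obtained from a straight-line fit of measured electrode potential against buffer pH; the ideal Nernstian slope at 25 °C is about 59.2 mV/pH, so the reported 55–59 mV/pH values indicate near-ideal behavior. The sketch below uses made-up data points, not the paper's measurements.

```python
# Sketch of how a mV/pH sensitivity and correlation coefficient are
# obtained: a straight-line fit of electrode potential against buffer pH.
# The data points below are made up for illustration.
from scipy.stats import linregress

ph = [2.38, 4.0, 7.0, 10.0, 11.61]        # buffer pH values (illustrative)
e_mv = [445.0, 350.5, 175.0, 0.5, -95.0]  # measured potentials (illustrative)

fit = linregress(ph, e_mv)
print(f"sensitivity = {abs(fit.slope):.1f} mV/pH, r = {abs(fit.rvalue):.4f}")
# The ideal Nernstian slope at 25 degC is ~59.2 mV/pH, so slopes near
# 55-59 mV/pH indicate close-to-ideal electrode behavior.
```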
Show Figures

Figure 1

Figure 1
<p>Schematic of the SSRE fabrication process. The sensing part of the electrode is 2 mm in diameter. (<b>a</b>) Deposition of Cr and Ag by a sputter process; (<b>b</b>) SSRE coated with graphene oxide; (<b>c</b>) Cross-sectional view of the electrode.</p>
Full article ">Figure 2
<p>The SSRE coated with GO and the pH working electrode were built on a square-shaped glass substrate. (<b>a</b>) Schematic showing the individual parts of the sensor; (<b>b</b>) Photograph of the actual sensor fabricated on a glass substrate; (<b>c</b>) Photograph of the sensor packaged in an SD card; (<b>d</b>) Laboratory-made potentiometer that was used in the integrated pH sensor.</p>
Full article ">Figure 3
<p>Changes in the surface morphology of the electrode as observed using SEM: (<b>a</b>–<b>c</b>) non-heat-treated electrodes and (<b>d</b>–<b>f</b>) electrodes heat-treated at 320 °C. (The magnification of the images in (<b>a</b>–<b>g</b>) is × 5 K, that for images in the insets of (<b>a</b>–<b>f</b>) and (<b>h</b>) is × 50 K, and that for the image in (<b>i</b>) is × 2 K. Images of the Ag and Ag/AgCl thin films after the following steps are shown: (<b>a</b>) before the heat treatment, (<b>d</b>) after the heat treatment, (<b>b</b>,<b>e</b>) after chlorination with 50 mM FeCl<sub>3</sub>, and (<b>c</b>,<b>f</b>) after overnight storage in a saturated AgCl solution. (<b>g</b>) GO layer on a ready-to-use electrode. (<b>h</b>) and (<b>i</b>) show cross-sectional and top views of the pristine GO layer, respectively.</p>
Full article ">Figure 4
<p>AFM images of the SSRE showing its surface morphology. Images (<b>a</b>) and (<b>b</b>) show the morphology before and after the electrode was coated with GO, respectively.</p>
Full article ">Figure 5
<p>EDS analysis of the electrode surface. (<b>a</b>) GO layer formed on a silicon wafer (shown for comparison); (<b>b</b>) GO layer on a thin film of Ag/AgCl.</p>
Full article ">Figure 6
<p>Effect of pH on the SSRE and the long-term stability of the SSRE. (<b>a</b>) Stability of the SSRE at pH levels ranging from 2.38 to 11.61 in the acid-to-base direction and <span class="html-italic">vice versa</span>. The potentials were measured using the ORE as the reference electrode; (<b>b</b>) Potentials and response times of the SSRE were measured in a 3 M KCl solution at intervals of 2 or 3 days over 26 days.</p>
Full article ">Figure 7
<p>Comparison of the CV curves of the ORE and SSRE for scan rates of 25, 50, 100, 150 and 200 mV/s. (<b>a</b>) and (<b>b</b>) are the CV curves for the ORE (plotted for comparison), and (<b>c</b>) and (<b>d</b>) are the curves for the SSRE. The solid circles (●) and boxes (■) in (<b>b</b>) and (<b>d</b>) stand for the cathodic and anodic peak currents, respectively. (The measurements were made three times. The error bars for the values were not shown as the difference in the currents was smaller than 0.5 μA.)</p>
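A diffusion coefficient like the one quoted in the abstract is conventionally extracted from the cathodic peak currents via the Randles–Sevcik equation, i<sub>p</sub> = 2.69 × 10<sup>5</sup> n<sup>3/2</sup>ACD<sup>1/2</sup>ν<sup>1/2</sup> (i<sub>p</sub> in A, A in cm<sup>2</sup>, C in mol/cm<sup>3</sup>, D in cm<sup>2</sup>/s, ν in V/s). The electrode area and concentration below are assumptions (a 2 mm diameter disk and a 5 mM solution), not the paper's parameters.

```python
# Sketch: back out the diffusion coefficient D from cathodic peak currents
# via the Randles-Sevcik equation i_p = 2.69e5 * n**1.5 * A * C * sqrt(D*v).
# Electrode area and concentration are assumed, not the paper's values.
import numpy as np

def randles_sevcik_D(scan_rates_V_s, peak_currents_A,
                     n=1, area_cm2=0.0314, conc_mol_cm3=5e-6):
    """Fit i_p against sqrt(v) and return D in cm^2/s."""
    slope = np.polyfit(np.sqrt(scan_rates_V_s), peak_currents_A, 1)[0]
    return (slope / (2.69e5 * n**1.5 * area_cm2 * conc_mol_cm3)) ** 2
```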
Full article ">Figure 8
<p>The combined pH sensor and the electrochemical workstation were used for measuring the potentials. The potentials measured with the pH sensor are represented by the solid (―) and dotted (---) lines. The numbers in the graph indicate the pH. (<b>a</b>) and (<b>b</b>) show the proportional relation between the pH and the potentials in the acid-to-base direction and <span class="html-italic">vice versa</span>, respectively (the measurements were made three times, and the arrows indicate the direction for pH sensing).</p>
Full article ">Figure 9
<p>The integrated pH sensors were evaluated in various solutions of different pH values. (<b>a</b>) The integrated pH sensors with an electrochemical workstation, and (<b>b</b>) the integrated pH sensors packaged in an SD card with the laboratory-made potentiometer were used for potential measurements. Each inset indicates the experimental setup used for the test. (The error bars were not shown as the difference in the potentials was smaller than 10 mV for both (<b>a</b>) and (<b>b</b>).)</p>
Full article ">
754 KiB  
Review
MEMS Sensor Technologies for Human Centred Applications in Healthcare, Physical Activities, Safety and Environmental Sensing: A Review on Research Activities in Italy
by Gastone Ciuti, Leonardo Ricotti, Arianna Menciassi and Paolo Dario
Sensors 2015, 15(3), 6441-6468; https://doi.org/10.3390/s150306441 - 17 Mar 2015
Cited by 124 | Viewed by 20047
Abstract
Over the past few decades, the increased level of public awareness concerning healthcare, physical activities, safety and environmental sensing has created an emerging need for smart sensor technologies and monitoring devices able to sense, classify, and provide feedback on users’ health status and [...] Read more.
Over the past few decades, the increased level of public awareness concerning healthcare, physical activities, safety and environmental sensing has created an emerging need for smart sensor technologies and monitoring devices able to sense, classify, and provide feedback on users’ health status and physical activities, as well as to evaluate environmental and safety conditions in a pervasive, accurate and reliable fashion. Monitoring and precisely quantifying users’ physical activity with inertial measurement unit-based devices, for instance, has also proven to be important in the health management of patients affected by chronic diseases, e.g., Parkinson’s disease, many of which are becoming highly prevalent in Italy and in the Western world. This review paper will focus on MEMS sensor technologies developed in Italy in the last three years, describing research achievements for healthcare and physical activity, safety and environmental sensing, in addition to smart systems integration. Innovative and smart integrated solutions for sensing devices, pursued and implemented in Italian research centres, will be highlighted, together with specific applications of such technologies. Finally, the paper will depict the future perspective of sensor technologies and corresponding exploitation opportunities, again with a specific focus on Italy. Full article
(This article belongs to the Special Issue State-of-the-Art Sensors Technology in Italy 2014)
Show Figures

Figure 1

Figure 1
<p>Examples of MEMS for medical applications. (<b>a</b>) Wearable monitoring inertial device for measuring sexual performance. Adapted with permission from [<a href="#B16-sensors-15-06441" class="html-bibr">16</a>] (copyright: Elsevier); (<b>b</b>) Center of pressure displacement maps obtained by means of tri-axial accelerometers mounted on healthy and Parkinsonian subjects. AP = antero-posterior plane, ML = medio-lateral plane. Adapted with permission from [<a href="#B24-sensors-15-06441" class="html-bibr">24</a>] (copyright: Creative Commons); (<b>c</b>) Endoscopic capsules provided with inertial sensors for vibratory motor control. Adapted with permission from [<a href="#B34-sensors-15-06441" class="html-bibr">34</a>] (copyright: Elsevier); (<b>d</b>) MEMS integrated in toys for monitoring preterm infants at risk of neurodevelopmental disorders. Adapted with permission from [<a href="#B38-sensors-15-06441" class="html-bibr">38</a>] (copyright: Creative Commons).</p>
Full article ">Figure 2
<p>Examples of MEMS for assistance and rehabilitation. (<b>a</b>) Wearable inertial sensors for continuous monitoring of turning during spontaneous daily activity. Adapted with permission from [<a href="#B66-sensors-15-06441" class="html-bibr">66</a>] (copyright: MDPI—<span class="html-italic">Sensors</span> journal); (<b>b</b>) Wearable multi-sensor system (composed of a number of small modules that embed high-precision MEMS accelerometers and wireless communications) for human motion monitoring in rehabilitation. Adapted with permission from [<a href="#B72-sensors-15-06441" class="html-bibr">72</a>] (copyright: MDPI—<span class="html-italic">Sensors</span> journal); (<b>c</b>) Silicon MEMS-based piezoresistive sensing array (<span class="html-italic">i.e.</span>, four MEMS-based piezoresistive sensors) for tactile sensing. (Courtesy of Calogero Maria Oddo).</p>
Full article ">Figure 3
<p>Examples of MEMS for sport and leisure applications. (<b>a</b>) IMU mounted on the trunk for estimating squat exercise dynamics. Adapted with permission from [<a href="#B91-sensors-15-06441" class="html-bibr">91</a>] (copyright: Elsevier); (<b>b</b>) MEMS pressure sensors used to assess balance abilities and non-cyclic rapidity of soccer players. Adapted with permission from [<a href="#B100-sensors-15-06441" class="html-bibr">100</a>] (copyright: Creative Commons); (<b>c</b>) Climbing dynamics quantified by means of kinematic data associated with vertical plantar reaction forces, measured through MEMS capacitive sensors. Adapted with permission from [<a href="#B104-sensors-15-06441" class="html-bibr">104</a>] (copyright: John Wiley &amp; Sons).</p>
Full article ">Figure 4
<p>Number of research papers published in the period 2011–2014 on MEMS sensors. The analysis was conducted for all the EU member states and for other countries with relatively high income and technological development level. Source: Scopus, searching for the term “MEMS sensor” in title, abstract and keywords for journal papers and conference proceedings.</p>
Full article ">