Robust classification of neonatal apnoea-related desaturations. Monasterio Violeta,Burgess Fred,Clifford Gari D Physiological measurement Respiratory signals monitored in neonatal intensive care units are usually ignored due to the high prevalence of noise and false alarms (FA). Apnoeic events are therefore generally indicated by a pulse oximeter alarm reacting to the subsequent desaturation. However, the high FA rate in the photoplethysmogram may desensitize staff, reducing their reaction speed. The main reason for the high FA rates of critical care monitors is their unimodal analysis behaviour. In this work, we propose a multimodal analysis framework to reduce the FA rate in neonatal apnoea monitoring. Information about oxygen saturation, heart rate, respiratory rate and signal quality was extracted from electrocardiogram, impedance pneumogram and photoplethysmographic signals, for a total of 20 features in the 5 min interval before a desaturation event. 1616 desaturation events from 27 neonatal admissions were annotated by two independent reviewers as true (physiologically relevant) or false (noise-related). Patients were divided into two independent groups for training and validation, and a support vector machine was trained to classify the events as true or false. The best classification performance was achieved with a combination of 13 features, yielding sensitivity, specificity and accuracy of 100% in the training set, and a sensitivity of 86%, a specificity of 91% and an accuracy of 90% in the validation set. 10.1088/0967-3334/33/9/1503
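The sensitivity, specificity and accuracy figures reported in the entry above follow directly from the confusion counts between annotated and predicted event labels. A minimal sketch in plain Python (not the paper's code; labels 1 = true physiological event, 0 = noise-related):

```python
def confusion_counts(y_true, y_pred):
    """Count TP/TN/FP/FN, treating 1 as a true (physiological) event."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

def sens_spec_acc(y_true, y_pred):
    """Sensitivity = recall on true events; specificity = recall on false ones."""
    tp, tn, fp, fn = confusion_counts(y_true, y_pred)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / len(y_true)
    return sensitivity, specificity, accuracy
```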
Reliable emotion recognition system based on dynamic adaptive fusion of forehead biopotentials and physiological signals. Khezri Mahdi,Firoozabadi Mohammad,Sharafat Ahmad Reza Computer methods and programs in biomedicine In this study, we proposed a new adaptive method for fusing multiple emotional modalities to improve the performance of the emotion recognition system. Three-channel forehead biosignals along with peripheral physiological measurements (blood volume pressure, skin conductance, and interbeat intervals) were utilized as emotional modalities. Six basic emotions, i.e., anger, sadness, fear, disgust, happiness, and surprise, were elicited by displaying preselected video clips for each of the 25 participants in the experiment; the physiological signals were collected simultaneously. In our multimodal emotion recognition system, the recorded signals were used to form several classification units that identified the emotions independently. The results were then fused using the adaptive weighted linear model to produce the final result. Each classification unit is assigned a weight that is determined dynamically by considering the performance of the units during the testing phase and the results of the training phase. This dynamic weighting scheme enables the emotion recognition system to adapt itself to each new user. The results showed that the suggested method outperformed conventional fusion of the features and classification units using the majority voting method. In addition, a considerable improvement was shown compared to systems that use static weighting schemes for fusing classification units. Using support vector machine (SVM) and k-nearest neighbors (KNN) classifiers, overall classification accuracies of 84.7% and 80%, respectively, were obtained in identifying the emotions.
In addition, applying the forehead or physiological signals in the proposed scheme indicates that designing a reliable emotion recognition system is feasible without the need for additional emotional modalities. 10.1016/j.cmpb.2015.07.006
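The adaptive weighted linear fusion described in the entry above amounts to a normalized weighted sum of per-unit class scores, with each unit weighted by its observed reliability. An illustrative NumPy sketch, not the authors' implementation (the weight values here are assumptions):

```python
import numpy as np

def fuse(unit_scores, weights):
    """Weighted linear fusion of classification units.
    unit_scores: (n_units, n_classes) per-unit class scores.
    weights: per-unit reliability weights (e.g., from validation performance)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                       # normalize so the weights sum to 1
    fused = w @ np.asarray(unit_scores)   # weighted combination per class
    return int(np.argmax(fused)), fused
```

Giving a more reliable unit a larger weight lets it override units that disagree, which is the mechanism that adapts the system to each new user.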
Reducing false arrhythmia alarm rates using robust heart rate estimation and cost-sensitive support vector machines. Zhang Qiang,Chen Xianxiang,Fang Zhen,Zhan Qingyuan,Yang Ting,Xia Shanhong Physiological measurement To lessen the rate of false critical arrhythmia alarms, we used robust heart rate estimation and cost-sensitive support vector machines. The PhysioNet MIMIC II database and the 2015 PhysioNet/CinC Challenge public database were used as the training dataset; the 2015 Challenge hidden dataset was used for testing. Each record had an alarm labeled with asystole, extreme bradycardia, extreme tachycardia, ventricular tachycardia or ventricular flutter/fibrillation. Before each alarm onset, 300 s of multimodal data were provided, including electrocardiogram, arterial blood pressure and/or photoplethysmogram. A signal quality modified Kalman filter achieved robust heart rate estimation. Based on this, we extracted heart rate variability features and statistical ECG features. Next, we applied a genetic algorithm (GA) to select the optimal feature combination. Finally, considering the high cost of classifying a true arrhythmia as false, we selected cost-sensitive support vector machines (CSSVMs) to classify alarms. Evaluation on the test dataset showed the overall true positive rate was 95%, and the true negative rate was 85%. 10.1088/1361-6579/38/2/259
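The cost-sensitive idea motivating the CSSVM above (suppressing a true arrhythmia is far costlier than tolerating a false alarm) can be illustrated with a generic minimum-expected-cost decision rule. This is a sketch of the principle, not the paper's CSSVM, and the cost values are assumptions:

```python
def min_cost_label(posterior, cost):
    """Pick the label with minimum expected cost.
    posterior[c]: estimated P(class = c).
    cost[pred][actual]: cost of predicting `pred` when the truth is `actual`."""
    n = len(posterior)
    expected = [sum(cost[pred][actual] * posterior[actual] for actual in range(n))
                for pred in range(n)]
    return min(range(n), key=lambda pred: expected[pred])
```

With class 0 = true arrhythmia and class 1 = false alarm, an assumed cost matrix [[0, 1], [10, 0]] keeps the alarm sounding even when the classifier is 80% confident it is false, because the expected cost of suppression is higher.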
Simultaneous Recognition and Assessment of Post-Stroke Hemiparetic Gait by Fusing Kinematic, Kinetic, and Electrophysiological Data. Cui Chengkun,Bian Gui-Bin,Hou Zeng-Guang,Zhao Jun,Su Guodong,Zhou Hao,Peng Liang,Wang Weiqun IEEE transactions on neural systems and rehabilitation engineering : a publication of the IEEE Engineering in Medicine and Biology Society Gait analysis for patients with lower limb motor dysfunction is a useful tool in assisting clinicians with diagnosis, assessment, and rehabilitation strategy making. Implementing accurate automatic gait analysis for hemiparetic patients after stroke is a great challenge in clinical practice. This study aims to develop a new automatic gait analysis system for qualitatively recognizing and quantitatively assessing the gait abnormality of post-stroke hemiparetic patients. Twenty-one post-stroke patients and twenty-one healthy volunteers participated in the walking trials. Three of the most representative types of gait data, i.e., marker trajectory (MT), ground reaction force (GRF), and electromyogram, were simultaneously acquired from these subjects while walking. A multimodal fusion architecture is established by using these different modal data to qualitatively distinguish the hemiparetic gait from normal gait by different pattern recognition techniques and to quantitatively estimate the patient's lower limb motor function by a novel probability-based gait score. Seven decision fusion algorithms have been tested in this architecture, and extensive data analysis experiments have been conducted. The results indicate that the recognition performance and estimation performance of the system become better when more modal gait data are fused. For the recognition performance, the random forest classifier based on the GRF data achieved an accuracy of 92.26%, outperforming the other single-modal schemes.
When combining two modal data, the accuracy can be enhanced to 95.83% by using the support vector machine (SVM) fusion algorithm to fuse the MT and GRF data. When integrating all the three modal data, the accuracy can be further improved to 98.21% by using the SVM fusion algorithm. For the estimation performance, the absolute values of the correlation coefficients between the estimation results of the above three schemes and the Wisconsin gait scale scores for the post-stroke patients are 0.63, 0.75, and 0.84, respectively, which means the clinical relevance becomes more obvious when using more modalities. These promising results demonstrate that the proposed method has considerable potential to promote the future design of automatic gait analysis systems for clinical practice. 10.1109/TNSRE.2018.2811415
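The correlation coefficients quoted above (0.63, 0.75, 0.84 against the Wisconsin gait scale) are Pearson correlations between estimated and clinical scores. A minimal sketch of that computation in plain Python, with illustrative data only:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```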
Muscle Activation and Inertial Motion Data for Noninvasive Classification of Activities of Daily Living. Totty Michael S,Wade Eric IEEE transactions on bio-medical engineering OBJECTIVE:Remote monitoring of physical activity using body-worn sensors provides an objective alternative to current functional assessment tools. The purpose of this study was to assess the feasibility of classifying categories of activities of daily living from the functional arm activity behavioral observation system (FAABOS) using muscle activation and motion data. METHODS:Ten nondisabled, healthy adults were fitted with a Myo armband on the upper forearm. This multimodal commercial sensor device features surface electromyography (sEMG) sensors, an accelerometer, and a rate gyroscope. Participants performed 17 different activities of daily living, which belonged to one of four functional groups according to the FAABOS. Signal magnitude area (SMA) and mean values were extracted from the acceleration and angular rate of change data; root mean square (RMS) was computed for the sEMG data. A nearest neighbors machine learning algorithm was then applied to predict the FAABOS task category using these raw data as inputs. RESULTS:Mean acceleration, SMA of acceleration, mean angular rate of change, and RMS of sEMG were significantly different across the four FAABOS categories in all cases. A classifier using mean acceleration, mean angular rate of change, and sEMG data was able to predict task category with 89.2% accuracy. CONCLUSION:The results demonstrate the feasibility of using a combination of sEMG and motion data to noninvasively classify types of activities of daily living. SIGNIFICANCE:This approach may be useful for quantifying daily activity performance in ambient settings as a more ecologically valid measure of function in healthy and disease-affected individuals. 10.1109/TBME.2017.2738440
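The two window features named in the entry above are simple to state: signal magnitude area (SMA) over a triaxial acceleration window, and root mean square (RMS) over an sEMG window. A plain-Python sketch of one common formulation (the exact windowing details are the paper's, not reproduced here):

```python
import math

def sma(ax, ay, az):
    """Signal magnitude area of a triaxial acceleration window:
    mean of the summed absolute values across the three axes."""
    n = len(ax)
    return sum(abs(x) + abs(y) + abs(z) for x, y, z in zip(ax, ay, az)) / n

def rms(signal):
    """Root mean square of a window, e.g. of raw sEMG samples."""
    return math.sqrt(sum(s * s for s in signal) / len(signal))
```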
The initiation of cannabis use in adolescence is predicted by sex-specific psychosocial and neurobiological features. Spechler Philip A,Allgaier Nicholas,Chaarani Bader,Whelan Robert,Watts Richard,Orr Catherine,Albaugh Matthew D,D'Alberto Nicholas,Higgins Stephen T,Hudson Kelsey E,Mackey Scott,Potter Alexandra,Banaschewski Tobias,Bokde Arun L W,Bromberg Uli,Büchel Christian,Cattrell Anna,Conrod Patricia J,Desrivières Sylvane,Flor Herta,Frouin Vincent,Gallinat Jürgen,Gowland Penny,Heinz Andreas,Ittermann Bernd,Martinot Jean-Luc,Paillère Martinot Marie-Laure,Nees Frauke,Papadopoulos Orfanos Dimitri,Paus Tomáš,Poustka Luise,Smolka Michael N,Walter Henrik,Schumann Gunter,Althoff Robert R,Garavan Hugh, The European journal of neuroscience Cannabis use initiated during adolescence might precipitate negative consequences in adulthood. Thus, predicting adolescent cannabis use prior to any exposure will inform the aetiology of substance abuse by disentangling predictors from consequences of use. In this prediction study, data were drawn from the IMAGEN sample, a longitudinal study of adolescence. All selected participants (n = 1,581) were cannabis-naïve at age 14. Those reporting any cannabis use (out of six ordinal use levels) by age 16 were included in the outcome group (N = 365, males n = 207). Cannabis-naïve participants at age 14 and 16 were included in the comparison group (N = 1,216, males n = 538). Psychosocial, brain and genetic features were measured at age 14 prior to any exposure. Cross-validated regularized logistic regressions for each use level by sex were used to perform feature selection and obtain prediction error statistics on independent observations. Predictors were probed for sex- and drug-specificity using post-hoc logistic regressions. Models reliably predicted use as indicated by satisfactory prediction error statistics, and contained psychosocial features common to both sexes. 
However, males and females exhibited distinct brain predictors that failed to predict use in the opposite sex or predict binge drinking in independent samples of same-sex participants. Collapsed across sex, genetic variation on catecholamine and opioid receptors marginally predicted use. Using machine learning techniques applied to a large multimodal dataset, we identified a risk profile containing psychosocial and sex-specific brain prognostic markers, which were likely to precede and influence cannabis initiation. 10.1111/ejn.13989
Imputing Missing Data In Large-Scale Multivariate Biomedical Wearable Recordings Using Bidirectional Recurrent Neural Networks With Temporal Activation Regularization. Feng Tiantian,Narayanan Shrikanth Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference Miniaturized and wearable sensor-based measurements offer unprecedented opportunities to study and assess human behavior in natural settings, with wide-ranging applications including healthcare, wellness tracking and entertainment. However, wearable sensors are vulnerable to data loss due to body movement, sensor displacement, software malfunctions, etc. This generally hinders advanced data analytics, including clustering, data summarization, and pattern recognition, and requires robust solutions for handling missing data to obtain accurate and unbiased analysis. Conventional data imputation strategies to address the challenges of missing data, including statistical fill-in, matrix factorization and traditional machine learning approaches, are inadequate in capturing temporal variations in multivariate time series. In this paper, we investigate data imputation using bidirectional recurrent neural networks with temporal activation regularization, which can directly learn and fill in the missing data. We evaluate the method on a large-scale multimodal wearable recording dataset of bio-behavioral signals we recently collected from over 100 hospital staff for a period of 10 weeks. Experimental results on these multimodal time series show the superiority of the proposed RNN-based method in terms of imputation accuracy. 10.1109/EMBC.2019.8856966
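For contrast with the RNN approach above, one of the "statistical fill-in" baselines the paper argues is inadequate can be written in a few lines: forward-fill followed by backward-fill, which propagates the last observed value and so ignores the temporal dynamics a recurrent model can learn. A plain-Python sketch (not from the paper):

```python
def fill_missing(series, missing=None):
    """Forward-fill each gap with the last observed value,
    then backward-fill any leading gap from the first observation."""
    out = list(series)
    last = None
    for i, v in enumerate(out):            # forward pass
        if v is not missing:
            last = v
        elif last is not None:
            out[i] = last
    last = None
    for i in range(len(out) - 1, -1, -1):  # backward pass for leading gaps
        if out[i] is not missing:
            last = out[i]
        elif last is not None:
            out[i] = last
    return out
```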
Feature Extraction and Selection for Pain Recognition Using Peripheral Physiological Signals. Campbell Evan,Phinyomark Angkoon,Scheme Erik Frontiers in neuroscience In pattern recognition, the selection of appropriate features is paramount to both the performance and the robustness of the system. Over-reliance on machine learning-based feature selection methods can, therefore, be problematic; especially when conducted using small snapshots of data. The results of these methods, if adopted without proper interpretation, can lead to sub-optimal system design or worse, the abandonment of otherwise viable and important features. In this work, a deep exploration of pain-based emotion classification was conducted to better understand differences in the results of the related literature. In total, 155 different time domain and frequency domain features were explored, derived from electromyogram (EMG), skin conductance levels (SCL), and electrocardiogram (ECG) readings taken from 85 subjects in response to heat-induced pain. To address the inconsistency in the optimal feature sets found in related works, an exhaustive and interpretable feature selection protocol was followed to obtain a generalizable feature set. Associations between features were then visualized using a topologically informed chart, called Mapper, of this physiological feature space, including synthesis and comparison of results from previous literature. This topological feature chart was able to identify key sources of information that led to the formation of five main functional feature groups: signal amplitude and power, frequency information, nonlinear complexity, unique, and connecting. These functional groupings were used to extract further insight into observable autonomic responses to pain through a complementary statistical interaction analysis. From this chart, it was observed that EMG and SCL derived features could functionally replace those obtained from ECG.
These insights motivate future work on novel sensing modalities, feature design, deep learning approaches, and dimensionality reduction techniques. 10.3389/fnins.2019.00437
A body sensor network with electromyogram and inertial sensors: multimodal interpretation of muscular activities. Ghasemzadeh Hassan,Jafari Roozbeh,Prabhakaran Balakrishnan IEEE transactions on information technology in biomedicine : a publication of the IEEE Engineering in Medicine and Biology Society The evaluation of the postural control system (PCS) has applications in rehabilitation, sports medicine, gait analysis, fall detection, and diagnosis of many diseases associated with a reduction in balance ability. Standing involves significant muscle use to maintain balance, making standing balance a good indicator of the health of the PCS. Inertial sensor systems have been used to quantify standing balance by assessing displacement of the center of mass, resulting in several standardized measures. Electromyogram (EMG) sensors directly measure the muscle control signals. Despite strong evidence of the potential of muscle activity for balance evaluation, comparatively little work has been done on extracting features from EMG data that express balance abnormalities. In this paper, we present machine learning and statistical techniques to extract parameters from EMG sensors placed on the tibialis anterior and gastrocnemius muscles, which show a strong correlation to the standard parameters extracted from accelerometer data. This novel interpretation of the neuromuscular system provides a unique method of assessing human balance based on EMG signals. In order to verify the effectiveness of the introduced features in measuring postural sway, we conduct several classification tests that operate on the EMG features and predict significance of different balance measures. 10.1109/TITB.2009.2035050
Status epilepticus prevention, ambulatory monitoring, early seizure detection and prediction in at-risk patients. Amengual-Gual Marta,Ulate-Campos Adriana,Loddenkemper Tobias Seizure PURPOSE:Status epilepticus is a life-threatening medical emergency that often occurs apparently at random and affects the quality of life of patients with epilepsy and their families. The purpose of this review is to summarize information on ambulatory seizure detection, seizure prediction, and status epilepticus prevention. METHOD:Narrative review. RESULTS:Seizure detection devices are currently under investigation with regards to utility and feasibility in the detection of isolated seizures, mainly in adult patients with generalized tonic-clonic seizures, in long-term epilepsy monitoring units, and occasionally in the outpatient setting. Detection modalities include accelerometry, electrocardiogram, electrodermal activity, electroencephalogram, mattress sensors, surface electromyography, video detection systems, gyroscope, peripheral temperature, photoplethysmography, and respiratory sensors, among others. Initial detection results are promising, and improve even further when several modalities are combined. Some portable devices have already been approved by the U.S. FDA to detect specific seizure types. Improved seizure prediction may be attainable in the future given that epileptic seizure occurrence follows complex patient-specific non-random patterns. The combination of multimodal monitoring devices, big data sets, and machine learning may enhance patient-specific detection and predictive algorithms. The integration of these technological advances and novel approaches into closed-loop warning and treatment systems in the ambulatory setting may help detect seizures sooner, and tentatively prevent status epilepticus in the future. CONCLUSIONS:Ambulatory monitoring systems are being developed to improve seizure detection and the quality of life of patients with epilepsy and their families.
10.1016/j.seizure.2018.09.013
Instant Stress: Detection of Perceived Mental Stress Through Smartphone Photoplethysmography and Thermal Imaging. Cho Youngjun,Julier Simon J,Bianchi-Berthouze Nadia JMIR mental health BACKGROUND:A smartphone is a promising tool for daily cardiovascular measurement and mental stress monitoring. A smartphone camera-based photoplethysmography (PPG) and a low-cost thermal camera can be used to create cheap, convenient, and mobile monitoring systems. However, to ensure reliable monitoring results, a person must remain still for several minutes while a measurement is being taken. This is cumbersome and makes its use in real-life situations impractical. OBJECTIVE:We proposed a system that combines PPG and thermography with the aim of improving cardiovascular signal quality and detecting stress responses quickly. METHODS:Using a smartphone camera with a low-cost thermal camera added on, we built a novel system that continuously and reliably measures 2 different types of cardiovascular events: (1) blood volume pulse and (2) vasoconstriction/dilation-induced temperature changes of the nose tip. Seventeen participants, engaged in stress-inducing mental workload tasks, measured their physiological responses to stressors over a short time period (20 seconds) immediately after each task. Participants reported their perceived stress levels on a 10-cm visual analog scale. For the instant stress inference task, we built novel low-level feature sets representing cardiovascular variability. We then used the automatic feature learning capability of artificial neural networks to improve the mapping between the extracted features and the self-reported ratings. We compared our proposed method with existing hand-engineered features-based machine learning methods. RESULTS:First, we found that the measured PPG signals presented high quality cardiac cyclic information (mean pSQI: 0.755; SD 0.068).
We also found that the measured thermal changes of the nose tip presented high-quality breathing cyclic information and filtering helped extract vasoconstriction/dilation-induced patterns with fewer respiratory effects (mean pSQI: from 0.714 to 0.157). Second, we found low correlations between the self-reported stress scores and the existing metrics of the cardiovascular signals (ie, heart rate variability and thermal directionality) from short measurements, suggesting they were not very dependent upon one another. Third, we tested the performance of the instant perceived stress inference method. The proposed method achieved significantly higher accuracies than existing precrafted features-based methods. In addition, the 17-fold leave-one-subject-out cross-validation results showed that combining both modalities produced higher accuracy than using PPG or thermal imaging only (PPG+Thermal: 78.33%; PPG: 68.53%; Thermal: 58.82%). The multimodal results are comparable to the state-of-the-art stress recognition methods that require long-term measurements. Finally, we explored effects of different data labeling strategies on the sensitivity of our inference methods. Our results showed the need for separation of and normalization between individual data. CONCLUSIONS:The results demonstrate the feasibility of using smartphone-based imaging for instant stress detection. Given that this approach does not need long-term measurements requiring attention and reduced mobility, we believe it is more suitable for mobile mental health care solutions in the wild. 10.2196/10140
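The 17-fold leave-one-subject-out cross-validation used above generalizes to any per-subject dataset: each fold holds out every sample from one subject, so the model is always tested on an unseen person. A minimal split generator in plain Python (illustrative, not the authors' code):

```python
def loso_splits(subject_ids):
    """Yield (held_out_subject, train_indices, test_indices) per fold,
    holding out all samples of one subject at a time."""
    subjects = sorted(set(subject_ids))
    for held_out in subjects:
        test = [i for i, s in enumerate(subject_ids) if s == held_out]
        train = [i for i, s in enumerate(subject_ids) if s != held_out]
        yield held_out, train, test
```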
Learning vector representation of medical objects via EMR-driven nonnegative restricted Boltzmann machines (eNRBM). Tran Truyen,Nguyen Tu Dinh,Phung Dinh,Venkatesh Svetha Journal of biomedical informatics The electronic medical record (EMR) offers promise for novel analytics. However, manual feature engineering from EMRs is labor intensive because EMR data are complex - temporal, mixed-type and multimodal, packed in irregular episodes. We present a computational framework to harness EMRs with minimal human supervision via a restricted Boltzmann machine (RBM). The framework derives a new representation of medical objects by embedding them in a low-dimensional vector space. This new representation facilitates algebraic and statistical manipulations such as projection onto a 2D plane (thereby offering intuitive visualization), object grouping (hence enabling automated phenotyping), and risk stratification. To enhance model interpretability, we introduced two constraints on the model parameters: (a) nonnegative coefficients, and (b) structural smoothness. These result in a novel model called eNRBM (EMR-driven nonnegative RBM). We demonstrate the capability of the eNRBM on a cohort of 7578 mental health patients under suicide risk assessment. The derived representation not only shows clinically meaningful feature grouping but also facilitates short-term risk stratification. The F-scores, 0.21 for moderate-risk and 0.36 for high-risk, are significantly higher than those obtained by clinicians and competitive with the results obtained by support vector machines. 10.1016/j.jbi.2015.01.012
A Deep Learning Architecture for Temporal Sleep Stage Classification Using Multivariate and Multimodal Time Series. Chambon Stanislas,Galtier Mathieu N,Arnal Pierrick J,Wainrib Gilles,Gramfort Alexandre IEEE transactions on neural systems and rehabilitation engineering : a publication of the IEEE Engineering in Medicine and Biology Society Sleep stage classification constitutes an important preliminary exam in the diagnosis of sleep disorders. It is traditionally performed by a sleep expert who assigns a sleep stage to each 30 s of signal based on the visual inspection of signals such as electroencephalograms (EEGs), electrooculograms (EOGs), electrocardiograms, and electromyograms (EMGs). We introduce here the first deep learning approach for sleep stage classification that learns end-to-end without computing spectrograms or extracting handcrafted features, that exploits all multivariate and multimodal polysomnography (PSG) signals (EEG, EMG, and EOG), and that can exploit the temporal context of each 30-s window of data. For each modality, the first layer learns linear spatial filters that exploit the array of sensors to increase the signal-to-noise ratio, and the last layer feeds the learnt representation to a softmax classifier. Our model is compared to alternative automatic approaches based on convolutional networks or decision trees. Results obtained on 61 publicly available PSG records with up to 20 EEG channels demonstrate that our network architecture yields state-of-the-art performance. Our study reveals a number of insights on the spatiotemporal distribution of the signal of interest: a good tradeoff for optimal classification performance measured with balanced accuracy is to use 6 EEG with 2 EOG (left and right) and 3 EMG chin channels. Also, exploiting 1 min of data before and after each data segment offers the strongest improvement when a limited number of channels are available.
Like sleep experts, our system exploits the multivariate and multimodal nature of PSG signals in order to deliver state-of-the-art classification performance with a small computational cost. 10.1109/TNSRE.2018.2813138
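Balanced accuracy, the metric used above to measure the channel tradeoff, is the mean per-class recall; unlike plain accuracy it does not reward a classifier that over-predicts the most common sleep stage. A plain-Python sketch:

```python
def balanced_accuracy(y_true, y_pred):
    """Mean per-class recall; robust to imbalanced class counts."""
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        idx = [i for i, t in enumerate(y_true) if t == c]
        recalls.append(sum(1 for i in idx if y_pred[i] == c) / len(idx))
    return sum(recalls) / len(recalls)
```

Predicting the majority class everywhere on a 3:1 imbalanced set scores 0.75 plain accuracy but only 0.5 balanced accuracy.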
A hierarchical multimodal system for motion analysis in patients with epilepsy. Ahmedt-Aristizabal David,Fookes Clinton,Denman Simon,Nguyen Kien,Fernando Tharindu,Sridharan Sridha,Dionisio Sasha Epilepsy & behavior : E&B During seizures, a myriad of clinical manifestations may occur. The analysis of these signs, known as seizure semiology, gives clues to the underlying cerebral networks involved. When patients with drug-resistant epilepsy are monitored to assess their suitability for epilepsy surgery, semiology is a vital component to the presurgical evaluation. Specific patterns of facial movements, head motions, limb posturing and articulations, and hand and finger automatisms may be useful in distinguishing between mesial temporal lobe epilepsy (MTLE) and extratemporal lobe epilepsy (ETLE). However, this analysis is time-consuming and dependent on clinical experience and training. Given this limitation, an automated analysis of semiological patterns, i.e., detection, quantification, and recognition of body movement patterns, has the potential to help increase the diagnostic precision of localization. While a few single modal quantitative approaches are available to assess seizure semiology, the automated quantification of patients' behavior across multiple modalities has seen limited advances in the literature. This is largely due to multiple complicated variables commonly encountered in the clinical setting, such as analyzing subtle physical movements when the patient is covered or room lighting is inadequate. Semiology encompasses the stepwise/temporal progression of signs that is reflective of the integration of connected neuronal networks. Thus, single signs in isolation are far less informative. Taking this into account, here, we describe a novel modular, hierarchical, multimodal system that aims to detect and quantify semiologic signs recorded in 2D monitoring videos. 
Our approach can jointly learn semiologic features from facial, body, and hand motions based on computer vision and deep learning architectures. A dataset collected from an Australian quaternary referral epilepsy unit analyzing 161 seizures arising from the temporal (n = 90) and extratemporal (n = 71) brain regions has been used in our system to quantitatively classify these types of epilepsy according to the semiology detected. A leave-one-subject-out (LOSO) cross-validation of semiological patterns from the face, body, and hands reached classification accuracies ranging between 12% and 83.4%, 41.2% and 80.1%, and 32.8% and 69.3%, respectively. The proposed hierarchical multimodal system is a potential stepping-stone towards developing a fully automated semiology analysis system to support the assessment of epilepsy. 10.1016/j.yebeh.2018.07.028
Distinguish self- and hetero-perceived stress through behavioral imaging and physiological features. Spodenkiewicz Michel,Aigrain Jonathan,Bourvis Nadège,Dubuisson Séverine,Chetouani Mohamed,Cohen David Progress in neuro-psychopharmacology & biological psychiatry Stress reactivity is a complex phenomenon associated with multiple and multimodal expressions. Response to stressors has an obvious survival function and may be seen as an internal regulation to adapt to threat or danger. The intensity of this internal response can be assessed as the self-perception of the stress response. In species with social organization, this response also serves a communicative function, so-called hetero-perception. Our study presents a multimodal stress detection assessment: a new methodology combining behavioral imaging and physiological monitoring for analyzing stress from these two perspectives. The system is based on automatic extraction of 39 behavioral (2D+3D video recording) and 62 physiological (Nexus-10 recording) features during a socially evaluated mental arithmetic test. The analysis with machine learning techniques for automatic classification using a Support Vector Machine (SVM) shows that self-perception and hetero-perception of social stress are close but different phenomena: self-perception was significantly correlated with hetero-perception but significantly differed from it. Also, assessing stress with an SVM through multimodality gave excellent classification results (F1 score values: 0.9±0.012 for hetero-perception and 0.87±0.021 for self-perception). In the best selected feature subsets, we found some common behavioral and physiological features that allow classification of both self- and hetero-perceived stress. However, we also found that the contributing features for automatic classification had opposite distributions: self-perception classification was mainly based on physiological features and hetero-perception was mainly based on behavioral features.
10.1016/j.pnpbp.2017.11.023
A Multimodal Framework Based on Integration of Cortical and Muscular Activities for Decoding Human Intentions About Lower Limb Motions. Cui Chengkun,Bian Gui-Bin,Hou Zeng-Guang,Zhao Jun,Zhou Hao IEEE transactions on biomedical circuits and systems In this study, a multimodal fusion framework based on three different modal biosignals is developed to recognize human intentions related to lower limb multi-joint motions which commonly appear in daily life. Electroencephalogram (EEG), electromyogram (EMG) and mechanomyogram (MMG) signals were simultaneously recorded from twelve subjects while performing nine lower limb multi-joint motions. These multimodal data are used as the inputs of the fusion framework for identification of different motion intentions. Twelve fusion techniques are evaluated in this framework and a large number of comparative experiments are carried out. The results show that a support vector machine-based three-modal fusion scheme can achieve average accuracies of 98.61%, 97.78% and 96.85%, respectively, under three different data division forms. Furthermore, the relevant statistical tests reveal that this fusion scheme brings significant accuracy improvement in comparison with the cases of two-modal fusion or only a single modality. These promising results indicate the potential of the multimodal fusion framework for facilitating the future development of human-robot interaction for lower limb rehabilitation. 10.1109/TBCAS.2017.2699189
Multimodal predictor of neurodevelopmental outcome in newborns with hypoxic-ischaemic encephalopathy. Temko Andriy,Doyle Orla,Murray Deirdre,Lightbody Gordon,Boylan Geraldine,Marnane William Computers in biology and medicine Automated multimodal prediction of outcome in newborns with hypoxic-ischaemic encephalopathy is investigated in this work. Routine clinical measures and 1h EEG and ECG recordings 24h after birth were obtained from 38 newborns with different grades of HIE. Each newborn was reassessed at 24 months to establish their neurodevelopmental outcome. A set of multimodal features is extracted from the clinical, heart rate and EEG measures and is fed into a support vector machine classifier. The performance is reported with the statistically most unbiased leave-one-patient-out performance assessment routine. A subset of informative features, whose rankings are consistent across all patients, is identified. The best performance is obtained using a subset of 9 EEG, 2 heart rate and 1 clinical feature, leading to an area under the ROC curve of 87% and accuracy of 84%, which compares favourably to the EEG-based clinical outcome prediction previously reported on the same data. The work presents a promising step towards the use of multimodal data in building an objective decision support tool for clinical prediction of neurodevelopmental outcome in newborns with hypoxic-ischaemic encephalopathy. 10.1016/j.compbiomed.2015.05.017
Sensor Data Acquisition and Multimodal Sensor Fusion for Human Activity Recognition Using Deep Learning. Chung Seungeun,Lim Jiyoun,Noh Kyoung Ju,Kim Gague,Jeong Hyuntae Sensors (Basel, Switzerland) In this paper, we perform a systematic study about the on-body sensor positioning and data acquisition details for Human Activity Recognition (HAR) systems. We build a testbed that consists of eight body-worn Inertial Measurement Units (IMU) sensors and an Android mobile device for activity data collection. We develop a Long Short-Term Memory (LSTM) network framework to support training of a deep learning model on human activity data, which is acquired in both real-world and controlled environments. From the experiment results, we identify that activity data with sampling rate as low as 10 Hz from four sensors at both sides of wrists, right ankle, and waist is sufficient in recognizing Activities of Daily Living (ADLs) including eating and driving activity. We adopt a two-level ensemble model to combine class-probabilities of multiple sensor modalities, and demonstrate that a classifier-level sensor fusion technique can improve the classification performance. By analyzing the accuracy of each sensor on different types of activity, we elaborate custom weights for multimodal sensor fusion that reflect the characteristic of individual activities. 10.3390/s19071716
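Classifier-level fusion of the sort described, combining per-sensor class probabilities with custom weights, can be sketched as follows; the sensor names, probabilities, and weights are illustrative values, not numbers from the paper:

```python
import numpy as np

# Per-sensor class-probability outputs for one activity window (hypothetical):
# rows = sensors (left wrist, right wrist, right ankle, waist), cols = activities.
probs = np.array([
    [0.70, 0.20, 0.10],   # left wrist
    [0.55, 0.30, 0.15],   # right wrist
    [0.20, 0.60, 0.20],   # right ankle
    [0.60, 0.25, 0.15],   # waist
])

# Custom per-sensor weights reflecting how reliable each sensor is for the
# activity type (illustrative numbers; the paper derives them from accuracy).
weights = np.array([0.35, 0.30, 0.15, 0.20])

# Classifier-level fusion: weighted average of the class probabilities,
# then argmax over the fused distribution.
fused = weights @ probs
fused /= fused.sum()
predicted = int(np.argmax(fused))
```

Setting all weights equal recovers plain probability averaging; the per-activity custom weights the abstract mentions amount to choosing a different `weights` vector per candidate activity.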
The Multimodal Assessment of Adult Attachment Security: Developing the Biometric Attachment Test. Parra Federico,Miljkovitch Raphaële,Persiaux Gwenaelle,Morales Michelle,Scherer Stefan Journal of medical Internet research BACKGROUND:Attachment theory has been proven essential for mental health, including psychopathology, development, and interpersonal relationships. Validated psychometric instruments to measure attachment abound but suffer from shortcomings common to traditional psychometrics. Recent developments in multimodal fusion and machine learning pave the way for new automated and objective psychometric instruments for adult attachment that combine psychophysiological, linguistic, and behavioral analyses in the assessment of the construct. OBJECTIVE:The aim of this study was to present a new exposure-based, automatic, and objective adult-attachment assessment, the Biometric Attachment Test (BAT), which exposes participants to a short standardized set of visual and music stimuli, while their immediate reactions and verbal responses, captured by several computer sense modalities, are automatically analyzed for scoring and classification. We also aimed to empirically validate two of its assumptions: its capacity to measure attachment security and the viability of using themes as placeholders for rotating stimuli. METHODS:A total of 59 French participants from the general population were assessed using the Adult Attachment Questionnaire (AAQ), the Adult Attachment Projective Picture System (AAP), and the Attachment Multiple Model Interview (AMMI) as ground truth for attachment security. They were then exposed to three different BAT stimuli sets, while their faces, voices, heart rate (HR), and electrodermal activity (EDA) were recorded. 
Psychophysiological features, such as skin-conductance response (SCR) and Bayevsky stress index; behavioral features, such as gaze and facial expressions; as well as linguistic and paralinguistic features, were automatically extracted. An exploratory analysis was conducted using correlation matrices to uncover the features that are most associated with attachment security. A confirmatory analysis was conducted by creating a single composite effects index and by testing it for correlations with attachment security. The stability of the theory-consistent features across three different stimuli sets was explored using repeated measures analyses of variance (ANOVAs). RESULTS:In total, 46 theory-consistent correlations were found during the exploration (out of 65 total significant correlations). For example, attachment security as measured by the AAP was correlated with positive facial expressions (r=.36, P=.01). AMMI's security with the father was inversely correlated with the low frequency (LF) of HRV (r=-.87, P=.03). Attachment security to partners as measured by the AAQ was inversely correlated with anger facial expression (r=-.43, P=.001). The confirmatory analysis showed that the composite effects index was significantly correlated to security in the AAP (r=.26, P=.05) and the AAQ (r=.30, P=.04) but not in the AMMI. Repeated measures ANOVAs conducted individually on each of the theory-consistent features revealed that only 7 of the 46 (15%) features had significantly different values among responses to three different stimuli sets. CONCLUSIONS:We were able to validate two of the instrument's core assumptions: its capacity to measure attachment security and the viability of using themes as placeholders for rotating stimuli. Future validation of its other dimensions, as well as the ongoing development of its scoring and classification algorithms, is discussed. 10.2196/jmir.6898
Gaussian process dynamical models for multimodal affect recognition. Garcia Hernan F,Alvarez Mauricio A,Orozco Alvaro A Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference Affective computing systems have great potential in applications for biofeedback systems and cognitive behavioral therapies. Here, by analyzing the physiological behavior of a given subject, we can infer the affective state of an emotional process. Since emotions can be modeled as dynamic manifestations of these signals, a continuous analysis in the valence/arousal space brings more information about the affective state related to an emotional process. In this paper we propose a method for dynamic affect recognition from multimodal physiological signals. Our model is based on learning a latent space using Gaussian process latent variable models (GP-LVM), which map high-dimensional data (multimodal physiological signals) to a low-dimensional latent space. We incorporate dynamics into the model by learning the latent representation with associated dynamics. Finally, a support vector classifier is implemented to evaluate the relevance of the latent space features in the affective recognition process. The results show that the proposed method can efficiently model a physiological time series and recognize an affective process with high accuracy. 10.1109/EMBC.2016.7590834
Towards an automated multimodal clinical decision support system at the post anesthesia care unit. Olsen Rasmus Munch,Aasvang Eske Kvanner,Meyhoff Christian Sahlholt,Dissing Sorensen Helge Bjarup Computers in biology and medicine BACKGROUND:The aim of this study was to develop a predictive algorithm detecting early signs of deterioration (ESODs) in the post anesthesia care unit (PACU), thus being able to intervene earlier in the future to avoid serious adverse events. The algorithm must utilize continuously collected cardiopulmonary vital signs and may serve as an alternative to current practice, in which an alarm is activated by single parameters. METHODS:The study was a single center, prospective cohort study including 178 patients admitted to the PACU after major surgical procedures. Peripheral blood oxygenation, arterial blood pressure, perfusion index, heart rate and respiratory rate were monitored continuously. Potential ESODs were automatically detected and scored by two independent experts with regards to the severity of the observation. Based on features extracted from the obtained measurements, a random forest classifier was trained, classifying each event being either an ESOD or not an ESOD. The algorithm was evaluated and compared to the automated single modality alarm system at the PACU. RESULTS:The algorithm detected ESODs with an accuracy of 92.2% (99% CI: 89.6%-94.8%), sensitivity of 90.6% (99% CI: 85.7%-95.5%), specificity of 93.0% (99% CI: 89.9%-96.2%) and area under the receiver operating characteristic curve of 96.9% (99% CI: 95.3%-98.5%). The number of false alarms decreased by 85% (99% CI: 77%-93%) and the number of missed ESODs decreased by 73% (99% CI: 61%-85%) as compared to the currently used alarm system in the hospital. The algorithm was able to detect an ESOD on average 26.4 (99% CI: 1.1-51.7) minutes before the current single parameter system used in the PACU. 
CONCLUSION:In conclusion, the proposed biomedical classification algorithm, when compared to the currently used single parameter alarm system of the hospital, showed significantly increased performance in both detecting ESODs quickly and classifying them correctly. The clinical effect of the predictive system must be evaluated in future trials. 10.1016/j.compbiomed.2018.07.018
Multimodal Teaching Analytics: Automated Extraction of Orchestration Graphs from Wearable Sensor Data. Prieto Luis P,Sharma Kshitij,Kidzinski Łukasz,Rodríguez-Triana María Jesús,Dillenbourg Pierre Journal of computer assisted learning The pedagogical modelling of everyday classroom practice is an interesting kind of evidence, both for educational research and teachers' own professional development. This paper explores the usage of wearable sensors and machine learning techniques to automatically extract orchestration graphs (teaching activities and their social plane over time), on a dataset of 12 classroom sessions enacted by two different teachers in different classroom settings. The dataset included mobile eye-tracking as well as audiovisual and accelerometry data from sensors worn by the teacher. We evaluated both time-independent and time-aware models, achieving median F1 scores of about 0.7-0.8 on leave-one-session-out k-fold cross-validation. Although these results show the feasibility of this approach, they also highlight the need for larger datasets, recorded in a wider variety of classroom settings, to provide automated tagging of classroom practice that can be used in everyday practice across multiple teachers. 10.1111/jcal.12232
Data-Driven Multimodal Sleep Apnea Events Detection : Synchrosqueezing Transform Processing and Riemannian Geometry Classification Approaches. Rutkowski Tomasz M Journal of medical systems A novel multimodal and bio-inspired approach to biomedical signal processing and classification is presented in the paper. This approach allows for an automatic semantic labeling (interpretation) of sleep apnea events based on the proposed data-driven biomedical signal processing and classification. The presented signal processing and classification methods have been already successfully applied to real-time unimodal brainwaves (EEG only) decoding in brain-computer interfaces developed by the author. In the current project, very encouraging results are obtained using multimodal biomedical (brainwaves and peripheral physiological) signals in a unified processing approach allowing for the automatic semantic data description. The results thus support a hypothesis of the data-driven and bio-inspired signal processing approach validity for medical data semantic interpretation based on the sleep apnea events machine-learning-related classification. 10.1007/s10916-016-0520-7
Graph-based representation of behavior in detection and prediction of daily living activities. Augustyniak Piotr,Ślusarczyk Grażyna Computers in biology and medicine Various surveillance systems capture signs of human activities of daily living (ADLs) and store multimodal information as timeline behavioral records. In this paper, we present a novel approach to the analysis of a behavioral record used in a surveillance system designed for use in elderly smart homes. The description of a subject's activity is first decomposed into elementary poses - easily detectable by dedicated intelligent sensors - and represented by the share coefficients. Then, the activity is represented in the form of an attributed graph, where nodes correspond to elementary poses. As share coefficients of poses are expressed as attributes assigned to graph nodes, their change corresponding to a subject's action is represented by flow in graph edges. The behavioral record is thus a time series of graphs, whose small size facilitates storage and management of long-term monitoring results. At the system learning stage, the contribution of elementary poses is accumulated, discretized and probability-ordered, leading to a finite list representing the possible transitions between states. Such a list is independently built for each room in the supervised residence, and employed for assessment of the current action in the context of the subject's habits and a room purpose. The proposed format of a behavioral record, applied to an adaptive surveillance system, is particularly advantageous for representing new activities not known at the setup stage, for providing a quantitative measure of transitions between poses and for expressing the difference between a predicted and actual action in a numerical way. 10.1016/j.compbiomed.2017.11.007
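The graph-record idea above can be sketched compactly: each snapshot maps elementary poses to share coefficients, and an action shows up as the flow, i.e. the change of shares between consecutive snapshots. The pose names, function name, and numbers below are illustrative, not taken from the paper:

```python
def pose_flow(before, after):
    """Signed change in pose share coefficients between two graph snapshots."""
    poses = set(before) | set(after)
    return {p: after.get(p, 0.0) - before.get(p, 0.0) for p in poses}

# Two consecutive snapshots: the subject moves from sitting to standing.
t0 = {"sitting": 0.8, "standing": 0.1, "lying": 0.1}
t1 = {"sitting": 0.2, "standing": 0.7, "lying": 0.1}

flow = pose_flow(t0, t1)
# The dominant positive flow identifies the action (a "stand up" here);
# comparing it against a learned list of habitual transitions would flag
# unexpected actions, as the system described above does per room.
action = max(flow, key=flow.get)
```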
Detection of Craving for Gaming in Adolescents with Internet Gaming Disorder Using Multimodal Biosignals. Kim Hodam,Ha Jihyeon,Chang Won-Du,Park Wanjoo,Kim Laehyun,Im Chang-Hwan Sensors (Basel, Switzerland) The increase in the number of adolescents with internet gaming disorder (IGD), a type of behavioral addiction, is becoming an issue of public concern. Teaching adolescents to suppress their craving for gaming in daily life situations is one of the core strategies for treating IGD. Recent studies have demonstrated that computer-aided treatment methods, such as neurofeedback therapy, are effective in relieving the symptoms of a variety of addictions. When a computer-aided treatment strategy is applied to the treatment of IGD, detecting whether an individual is currently experiencing a craving for gaming is important. We aroused a craving for gaming in 57 adolescents with mild to severe IGD using numerous short video clips showing gameplay videos of three addictive games. At the same time, a variety of biosignals were recorded including photoplethysmogram, galvanic skin response, and electrooculogram measurements. After observing the changes in these biosignals during the craving state, we classified each individual participant's craving/non-craving states using a support vector machine. When video clips edited to arouse a craving for gaming were played, significant decreases in the standard deviation of the heart rate, the number of eye blinks, and saccadic eye movements were observed, along with a significant increase in the mean respiratory rate. Based on these results, we were able to classify whether an individual participant felt a craving for gaming with an average accuracy of 87.04%. This is the first study that has attempted to detect a craving for gaming in an individual with IGD using multimodal biosignal measurements. 
Moreover, this is the first that showed that an electrooculogram could provide useful biosignal markers for detecting a craving for gaming. 10.3390/s18010102
A Multimodal Wearable System for Continuous and Real-Time Breathing Pattern Monitoring During Daily Activity. Qi Wen,Aliverti Andrea IEEE journal of biomedical and health informatics OBJECTIVE:This study aims to understand breathing patterns during daily activities by developing a wearable respiratory and activity monitoring (WRAM) system. METHODS:A novel multimodal fusion architecture is proposed to calculate the respiratory and exercise parameters and simultaneously identify human actions. A hybrid hierarchical classification (HHC) algorithm combining deep learning and threshold-based methods is presented to distinguish 15 complex activities for accuracy enhancement and fast computation. A series of signal processing algorithms are utilized and integrated to calculate breathing and motion indices. The designed wireless communication structure achieves the interactions among chest bands, mobile devices, and the data processing center. RESULTS:The advantage of the proposed HHC method is evaluated by comparing the average accuracy (97.22%) and predictive time (0.0094 s) with machine learning and deep learning approaches. The nine breathing patterns during 15 activities were analyzed by investigating the data from 12 subjects. With 12 hours of naturalistic data collected from one participant, the WRAM system reports the breathing and exercise performance within the identified motions. The demonstration shows the ability of the WRAM system to monitor multiple users' breathing and exercise status in real time. CONCLUSION:The present system demonstrates the usefulness of the framework of breathing pattern monitoring during daily activities, which may be potentially used in healthcare. SIGNIFICANCE:The proposed multimodal based WRAM system offers new insights into the breathing function of exercise in action and presents a novel approach for precision medicine and health state monitoring. 10.1109/JBHI.2019.2963048
Multimodal Ambulatory Sleep Detection Using LSTM Recurrent Neural Networks. Sano Akane,Chen Weixuan,Lopez-Martinez Daniel,Taylor Sara,Picard Rosalind W IEEE journal of biomedical and health informatics Unobtrusive and accurate ambulatory methods are needed to monitor long-term sleep patterns for improving health. Previously developed ambulatory sleep detection methods rely either in whole or in part on self-reported diary data as ground truth, which is a problem, since people often do not fill them out accurately. This paper presents an algorithm that uses multimodal data from smartphones and wearable technologies to detect sleep/wake state and sleep onset/offset using a type of recurrent neural network with long-short-term memory (LSTM) cells for synthesizing temporal information. We collected 5580 days of multimodal data from 186 participants and compared the new method for sleep/wake classification and sleep onset/offset detection to, first, nontemporal machine learning methods and, second, a state-of-the-art actigraphy software. The new LSTM method achieved a sleep/wake classification accuracy of 96.5%, and sleep onset/offset detection F scores of 0.86 and 0.84, respectively, with mean absolute errors of 5.0 and 5.5 min, respectively, when compared with sleep/wake state and sleep onset/offset assessed using actigraphy and sleep diaries. The LSTM results were statistically superior to those from nontemporal machine learning algorithms and the actigraphy software. We show good generalization of the new algorithm by comparing participant-dependent and participant-independent models, and we show how to make the model nearly real-time with slightly reduced performance. 10.1109/JBHI.2018.2867619
Multicenter clinical assessment of improved wearable multimodal convulsive seizure detectors. Onorati Francesco,Regalia Giulia,Caborni Chiara,Migliorini Matteo,Bender Daniel,Poh Ming-Zher,Frazier Cherise,Kovitch Thropp Eliana,Mynatt Elizabeth D,Bidwell Jonathan,Mai Roberto,LaFrance W Curt,Blum Andrew S,Friedman Daniel,Loddenkemper Tobias,Mohammadpour-Touserkani Fatemeh,Reinsberger Claus,Tognetti Simone,Picard Rosalind W Epilepsia OBJECTIVE:New devices are needed for monitoring seizures, especially those associated with sudden unexpected death in epilepsy (SUDEP). They must be unobtrusive and automated, and provide false alarm rates (FARs) bearable in everyday life. This study quantifies the performance of new multimodal wrist-worn convulsive seizure detectors. METHODS:Hand-annotated video-electroencephalographic seizure events were collected from 69 patients at six clinical sites. Three different wristbands were used to record electrodermal activity (EDA) and accelerometer (ACM) signals, obtaining 5,928 h of data, including 55 convulsive epileptic seizures (six focal tonic-clonic seizures and 49 focal to bilateral tonic-clonic seizures) from 22 patients. Recordings were analyzed offline to train and test two new machine learning classifiers and a published classifier based on EDA and ACM. Moreover, wristband data were analyzed to estimate seizure-motion duration and autonomic responses. RESULTS:The two novel classifiers consistently outperformed the previous detector. The most efficient (Classifier III) yielded sensitivity of 94.55%, and an FAR of 0.2 events/day. No nocturnal seizures were missed. Most patients had <1 false alarm every 4 days, with an FAR below their seizure frequency. When increasing the sensitivity to 100% (no missed seizures), the FAR is up to 13 times lower than with the previous detector. Furthermore, all detections occurred before the seizure ended, providing reasonable latency (median = 29.3 s, range = 14.8-151 s). 
Automatically estimated seizure durations were correlated with true durations, enabling reliable annotations. Finally, EDA measurements confirmed the presence of postictal autonomic dysfunction, exhibiting a significant rise in 73% of the convulsive seizures. SIGNIFICANCE:The proposed multimodal wrist-worn convulsive seizure detectors provide seizure counts that are more accurate than previous automated detectors and typical patient self-reports, while maintaining a tolerable FAR for ambulatory monitoring. Furthermore, the multimodal system provides an objective description of motor behavior and autonomic dysfunction, aimed at enriching seizure characterization, with potential utility for SUDEP warning. 10.1111/epi.13899
Comparison of Feature Learning Methods for Human Activity Recognition Using Wearable Sensors. Li Frédéric,Shirahama Kimiaki,Nisar Muhammad Adeel,Köping Lukas,Grzegorzek Marcin Sensors (Basel, Switzerland) Getting a good feature representation of data is paramount for Human Activity Recognition (HAR) using wearable sensors. An increasing number of feature learning approaches-in particular deep-learning based-have been proposed to extract an effective feature representation by analyzing large amounts of data. However, getting an objective interpretation of their performances faces two problems: the lack of a baseline evaluation setup, which makes a strict comparison between them impossible, and the insufficiency of implementation details, which can hinder their use. In this paper, we attempt to address both issues: we firstly propose an evaluation framework allowing a rigorous comparison of features extracted by different methods, and use it to carry out extensive experiments with state-of-the-art feature learning approaches. We then provide all the codes and implementation details to make both the reproduction of the results reported in this paper and the re-use of our framework easier for other researchers. Our studies carried out on the OPPORTUNITY and UniMiB-SHAR datasets highlight the effectiveness of hybrid deep-learning architectures involving convolutional and Long-Short-Term-Memory (LSTM) to obtain features characterising both short- and long-term time dependencies in the data. 10.3390/s18020679
Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition. Ordóñez Francisco Javier,Roggen Daniel Sensors (Basel, Switzerland) Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing this temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average; outperforming some of the previous reported results by up to 9%. Our results show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise key architectural hyperparameters' influence on performance to provide insights about their optimisation. 10.3390/s16010115
UP-Fall Detection Dataset: A Multimodal Approach. Martínez-Villaseñor Lourdes,Ponce Hiram,Brieva Jorge,Moya-Albor Ernesto,Núñez-Martínez José,Peñafort-Asturiano Carlos Sensors (Basel, Switzerland) Falls, especially in elderly persons, are an important health problem worldwide. Reliable fall detection systems can mitigate negative consequences of falls. Among the important challenges and issues reported in literature is the difficulty of fair comparison between fall detection systems and machine learning techniques for detection. In this paper, we present UP-Fall Detection Dataset. The dataset comprises raw and feature sets retrieved from 17 healthy young individuals without any impairment that performed 11 activities and falls, with three attempts each. The dataset also summarizes more than 850 GB of information from wearable sensors, ambient sensors and vision devices. Two experimental use cases were shown. The aim of our dataset is to help human activity recognition and machine learning research communities to fairly compare their fall detection solutions. It also provides many experimental possibilities for the signal recognition, vision, and machine learning community. 10.3390/s19091988
Movement error rate for evaluation of machine learning methods for sEMG-based hand movement classification. Gijsberts Arjan,Atzori Manfredo,Castellini Claudio,Muller Henning,Caputo Barbara IEEE transactions on neural systems and rehabilitation engineering : a publication of the IEEE Engineering in Medicine and Biology Society There has been increasing interest in applying learning algorithms to improve the dexterity of myoelectric prostheses. In this work, we present a large-scale benchmark evaluation on the second iteration of the publicly released NinaPro database, which contains surface electromyography data for 6 DOF force activations as well as for 40 discrete hand movements. The evaluation involves a modern kernel method and compares performance of three feature representations and three kernel functions. Both the force regression and movement classification problems can be learned successfully when using a nonlinear kernel function, while the exp-χ² kernel outperforms the more popular radial basis function kernel in all cases. Furthermore, combining surface electromyography and accelerometry in a multimodal classifier results in significant increases in accuracy as compared to when either modality is used individually. Since window-based classification accuracy should not be considered in isolation to estimate prosthetic controllability, we also provide results in terms of classification mistakes and prediction delay. To this extent, we propose the movement error rate as an alternative to the standard window-based accuracy. This error rate is insensitive to prediction delays and it allows us therefore to quantify mistakes and delays as independent performance characteristics. This type of analysis confirms that the inclusion of accelerometry is superior, as it results in fewer mistakes while at the same time reducing prediction delay. 10.1109/TNSRE.2014.2303394
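The abstract does not spell out how the movement error rate is computed; one plausible instantiation, sketched below on hypothetical data, majority-votes the window predictions within each contiguous movement, so a short prediction delay lowers window-based accuracy without necessarily counting as a movement error:

```python
from collections import Counter

def window_accuracy(true, pred):
    """Standard window-based accuracy: fraction of matching windows."""
    return sum(t == p for t, p in zip(true, pred)) / len(true)

def movement_error_rate(true, pred):
    """Fraction of contiguous movements whose majority-voted label is wrong
    (an assumed reading of the metric, not the paper's exact definition)."""
    movements, start = [], 0
    for i in range(1, len(true) + 1):
        if i == len(true) or true[i] != true[start]:
            movements.append((true[start], pred[start:i]))
            start = i
    wrong = sum(Counter(p).most_common(1)[0][0] != t for t, p in movements)
    return wrong / len(movements)

# Two movements; the predictions lag the ground truth by one window,
# which penalizes window accuracy but yields zero movement errors.
true = ["rest", "rest", "rest", "grasp", "grasp", "grasp"]
pred = ["rest", "rest", "rest", "rest",  "grasp", "grasp"]
```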
Reducing false alarms in the ICU by quantifying self-similarity of multimodal biosignals. Antink Christoph Hoog,Leonhardt Steffen,Walter Marian Physiological measurement False arrhythmia alarms pose a major threat to the quality of care in today's ICU. Thus, the PhysioNet/Computing in Cardiology Challenge 2015 aimed at reducing false alarms by exploiting multimodal cardiac signals recorded by a patient monitor. False alarms for asystole, extreme bradycardia, extreme tachycardia, ventricular flutter/fibrillation as well as ventricular tachycardia were to be reduced using two electrocardiogram channels, up to two cardiac signals of mechanical origin as well as a respiratory signal. In this paper, an approach combining multimodal rhythmicity estimation and machine learning is presented. Using standard short-time autocorrelation and robust beat-to-beat interval estimation, the signal's self-similarity is analyzed. In particular, beat intervals as well as quality measures are derived which are further quantified using basic mathematical operations (min, mean, max, etc). Moreover, methods from the realm of image processing, 2D Fourier transformation combined with principal component analysis, are employed for dimensionality reduction. Several machine learning approaches are evaluated including linear discriminant analysis and random forest. Using an alarm-independent reduction strategy, an overall false alarm reduction with a score of 65.52 in terms of the real-time scoring system of the challenge is achieved on a hidden dataset. Employing an alarm-specific strategy, an overall real-time score of 78.20 at a true positive rate of 95% and a true negative rate of 78% is achieved. While the results for some categories still need improvement, false alarms for extreme tachycardia are suppressed with 100% sensitivity and specificity. 10.1088/0967-3334/37/8/1233
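The short-time autocorrelation step, estimating a beat-to-beat interval and a crude quality index from a signal's self-similarity, can be sketched on a synthetic pseudo-periodic signal; the sampling rate, beat frequency, and lag limits below are invented for the example:

```python
import numpy as np

fs = 250                          # sampling rate in Hz (assumed)
t = np.arange(0, 8, 1 / fs)
# Synthetic pseudo-periodic "cardiac" signal: 1.2 Hz beat (72 bpm) plus noise.
sig = (np.sin(2 * np.pi * 1.2 * t)
       + 0.1 * np.random.default_rng(1).normal(size=t.size))

# Short-time autocorrelation: the lag of the strongest peak after lag 0
# estimates the beat-to-beat interval, and its height (relative to lag 0)
# serves as a crude signal-quality index.
x = sig - sig.mean()
ac = np.correlate(x, x, mode="full")[x.size - 1:]
ac /= ac[0]

min_lag = int(0.3 * fs)           # ignore physiologically implausible lags
peak = min_lag + int(np.argmax(ac[min_lag:int(2 * fs)]))
beat_interval_s = peak / fs       # expected near 1 / 1.2 s for this signal
quality = ac[peak]                # near 1 for a clean, regular rhythm
```

Repeating this on each available channel and feeding the interval and quality values (plus summary statistics such as min, mean, max) into a classifier mirrors the multimodal rhythmicity-plus-machine-learning pipeline the paper describes.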
Classifying major depression patients and healthy controls using EEG, eye tracking and galvanic skin response data. Ding Xinfang,Yue Xinxin,Zheng Rui,Bi Cheng,Li Dai,Yao Guizhong Journal of affective disorders OBJECTIVE:Major depression disorder (MDD) is one of the most prevalent mental disorders worldwide. Diagnosing depression in the early stage is crucial to treatment process. However, due to depression's comorbid nature and the subjectivity in diagnosis, an early diagnosis could be challenging. Recently, machine learning approaches have been used to process Electroencephalography (EEG) and neuroimaging data to facilitate the diagnosis. In the present study, we used a multimodal machine learning approach involving EEG, eye tracking and galvanic skin response data as input to classify depression patients and healthy controls. METHODS:One hundred and forty-four MDD depression patients and 204 matched healthy controls were recruited. They were required to watch a series of affective and neutral stimuli while EEG, eye tracking information and galvanic skin response were recorded via a set of low-cost, portable devices. Three machine learning algorithms including Random Forests, Logistic Regression and Support Vector Machine (SVM) were trained to build dichotomous classification model. RESULTS:The results showed that the highest classification f1 score was obtained by Logistic Regression algorithms, with accuracy = 79.63%, precision = 76.67%, recall = 85.19% and f1 score = 80.70%. LIMITATIONS:No hospitalized patients were available; only outpatients were included in the present study. The sample consisted mostly of young adults, and no elderly patients were included. CONCLUSIONS:The machine learning approach can be a useful tool for classifying MDD patients and healthy controls and may help for diagnostic processes. 10.1016/j.jad.2019.03.058
Multimodal Learning and Intelligent Prediction of Symptom Development in Individual Parkinson's Patients. Przybyszewski Andrzej W,Kon Mark,Szlufik Stanislaw,Szymanski Artur,Habela Piotr,Koziorowski Dariusz M Sensors (Basel, Switzerland) We still do not know how the brain and its computations are affected by nerve cell deaths and their compensatory learning processes, as these develop in neurodegenerative diseases (ND). ND symptoms are usually observed only at a point when the disease has already affected large parts of the brain. We can register symptoms of ND such as motor and/or mental disorders (dementias) and even provide symptomatic relief, though the structural effects of these are in most cases not yet understood. It is very important to obtain an early diagnosis, which can provide several years in which we can monitor and partly compensate for the disease's symptoms with the help of various therapies. In the case of Parkinson's disease (PD), in addition to classical neurological tests, measurements of eye movements are diagnostic. We have performed measurements of latency, amplitude, and duration in reflexive saccades (RS) of PD patients. We have compared the results of our measurement-based diagnoses with standard neurological ones. The purpose of our work was to classify how condition attributes predict the neurologist's diagnosis. For n = 10 patients, the patient age and parameters based on RS gave a global accuracy in predicting neurological symptoms in individual patients of about 80%. Further, by adding three attributes partly related to patient 'well-being' scores, our prediction accuracies increased to 90%. Our predictive algorithms use rough set theory, which we have compared with other classifiers such as Naïve Bayes, Decision Trees/Tables, and Random Forests (implemented in KNIME/WEKA). We have demonstrated that RS are powerful biomarkers for the assessment of symptom progression in PD. 10.3390/s16091498
Multimodal wrist-worn devices for seizure detection and advancing research: Focus on the Empatica wristbands. Regalia Giulia,Onorati Francesco,Lai Matteo,Caborni Chiara,Picard Rosalind W Epilepsy research Wearable automated seizure detection devices offer a high potential to improve seizure management, through continuous ambulatory monitoring, accurate seizure counts, and real-time alerts for prompt intervention. More importantly, these devices can be a life-saving help for people with a higher risk of sudden unexpected death in epilepsy (SUDEP), especially in the case of generalized tonic-clonic seizures (GTCS). The Embrace and E4 wristbands (Empatica) are the first commercially available multimodal wristbands designed to sense the physiological hallmarks of ongoing GTCS: both the E4 and Embrace devices are equipped with motion (accelerometer, ACC) and electrodermal activity (EDA) sensors, while only Embrace embeds a machine learning-based detection algorithm, and both devices received medical clearance (E4 from EU CE, Embrace from EU CE and US FDA). The aim of this contribution is to provide updated evidence of the effectiveness of GTCS detection and monitoring relying on the combination of ACC and EDA sensors. A machine learning algorithm able to recognize ACC and EDA signatures of GTCS-like events has been developed on E4 data, labeled using gold-standard video-EEG examined by epileptologists in clinical centers, and has undergone continuous improvement. While keeping an elevated sensitivity to GTCS (92-100%), algorithm improvements and growing data availability led to a lower false alarm rate (FAR), from the initial ~2 down to 0.2-1 false alarms per day, as shown by retrospective and prospective analyses in inpatient settings.
Algorithm adjustments to better discriminate real-life physical activities from GTCS have brought the initial FAR of ~6 in outpatient real-life settings down to values comparable to best-case clinical settings (FAR < 0.5), with comparable sensitivity. Moreover, using multimodal sensing, it has been possible not only to detect GTCS but also to quantify seizure-induced autonomic dysfunction, based on automatic features of abnormal motion and EDA. The latter biosignal correlates with the duration of post-ictal generalized EEG suppression, a biomarker observed in 100% of monitored SUDEP cases. 10.1016/j.eplepsyres.2019.02.007
Toward Dynamically Adaptive Simulation: Multimodal Classification of User Expertise Using Wearable Devices. Ross Kyle,Sarkar Pritam,Rodenburg Dirk,Ruberto Aaron,Hungler Paul,Szulewski Adam,Howes Daniel,Etemad Ali Sensors (Basel, Switzerland) Simulation-based training has been proven to be a highly effective pedagogical strategy. However, misalignment between the participant's level of expertise and the difficulty of the simulation has been shown to have a significant negative impact on learning outcomes. To ensure that learning outcomes are achieved, we propose a novel framework for adaptive simulation with the goal of identifying the level of expertise of the learner and dynamically modulating the simulation complexity to match the learner's capability. To facilitate the development of this framework, we investigate the classification of expertise using biological signals monitored through wearable sensors. Trauma simulations were developed in which electrocardiogram (ECG) and galvanic skin response (GSR) signals of both novice and expert trauma responders were collected. These signals were then utilized to classify the responders' expertise, following feature extraction and selection, using a number of machine learning methods. The results show the feasibility of utilizing these bio-signals for multimodal expertise classification to be used in adaptive simulation applications. 10.3390/s19194270
Prediction of Relative Physical Activity Intensity Using Multimodal Sensing of Physiological Data. Chowdhury Alok Kumar,Tjondronegoro Dian,Chandran Vinod,Zhang Jinglan,Trost Stewart G Sensors (Basel, Switzerland) This study examined the feasibility of a non-laboratory approach that uses machine learning on multimodal sensor data to predict relative physical activity (PA) intensity. A total of 22 participants completed up to 7 PA sessions, where each session comprised 5 trials (sitting and standing, comfortable walk, brisk walk, jogging, running). Participants wore a wrist-strapped sensor that recorded heart rate (HR), electrodermal activity (EDA) and skin temperature (Temp). After each trial, participants provided ratings of perceived exertion (RPE). Three classifiers, including random forest (RF), neural network (NN) and support vector machine (SVM), were applied independently to each single-modality feature set to predict relative PA intensity as low (RPE ≤ 11), moderate (RPE 12-14), or high (RPE ≥ 15). Then, both feature fusion and decision fusion of all combinations of sensor modalities were carried out to investigate the best combination. Among the single-modality feature sets, HR provided the best performance. The combination of modalities using feature fusion provided a small improvement in performance. Decision fusion did not improve performance over HR features alone. A machine learning approach using features from HR provided acceptable predictions of relative PA intensity. Adding features from other sensing modalities did not significantly improve performance. 10.3390/s19204509
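The contrast between the two fusion strategies compared above can be illustrated with a toy sketch: feature-level fusion concatenates per-modality features before a single classifier, while decision-level fusion votes over per-modality classifiers. The 1-nearest-neighbour classifier, the modality names, and the numeric values are all invented for illustration, not taken from the study.

```python
from collections import Counter

def nn_classify(train, label_of, query):
    """1-nearest-neighbour on plain feature lists (squared Euclidean)."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(train, key=lambda row: d2(row, query))
    return label_of[tuple(best)]

def feature_fusion(modalities_train, labels, modalities_query):
    # Concatenate the HR / EDA / Temp features into one vector per sample,
    # then classify once on the fused vectors.
    fused_train = [sum(rows, []) for rows in zip(*modalities_train)]
    label_of = {tuple(r): y for r, y in zip(fused_train, labels)}
    return nn_classify(fused_train, label_of, sum(modalities_query, []))

def decision_fusion(modalities_train, labels, modalities_query):
    # Classify per modality, then combine the labels by majority vote.
    votes = []
    for train, query in zip(modalities_train, modalities_query):
        label_of = {tuple(r): y for r, y in zip(train, labels)}
        votes.append(nn_classify(train, label_of, query))
    return Counter(votes).most_common(1)[0][0]

# Toy data: heart rate separates the classes cleanly, while the EDA
# and temperature features are weaker.
hr_train = [[60.0], [150.0]]
eda_train = [[0.1], [0.9]]
temp_train = [[33.0], [34.0]]
labels = ["low", "high"]
modalities_train = [hr_train, eda_train, temp_train]
query = [[140.0], [0.2], [33.9]]
```

In this toy case both strategies agree, but with a noisy modality the majority vote can be swayed by a single weak channel, which is one intuition for why decision fusion failed to beat HR features alone in the study.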
Multimodal Neuroimaging: Basic Concepts and Classification of Neuropsychiatric Diseases. Tulay Emine Elif,Metin Barış,Tarhan Nevzat,Arıkan Mehmet Kemal Clinical EEG and neuroscience Neuroimaging techniques are widely used in neuroscience to visualize neural activity, to improve our understanding of brain mechanisms, and to identify biomarkers, especially for psychiatric diseases; however, each neuroimaging technique has several limitations. These limitations led to the development of multimodal neuroimaging (MN), which combines data obtained from multiple neuroimaging techniques, such as electroencephalography and functional magnetic resonance imaging, and yields more detailed information about brain dynamics. There are several types of MN, including visual inspection, data integration, and data fusion. This literature review aimed to provide a brief summary and basic information about MN techniques (data fusion approaches in particular) and classification approaches. Data fusion approaches are generally categorized as asymmetric and symmetric. The present review focused exclusively on studies based on symmetric data fusion methods (data-driven methods), such as independent component analysis and principal component analysis. Machine learning techniques have recently been introduced for use in identifying diseases and biomarkers of disease. The machine learning technique most widely used by neuroscientists is classification, especially support vector machine classification. Several studies differentiated patients with psychiatric diseases from healthy controls using combined datasets. The common conclusion among these studies is that the prediction of diseases improves when combining data via MN techniques; however, there remain a few challenges associated with MN, such as sample size. Perhaps in the future N-way fusion can be used to combine multiple neuroimaging techniques or nonimaging predictors (eg, cognitive ability) to overcome the limitations of MN.
10.1177/1550059418782093
Biomechanics-machine learning system for surgical gesture analysis and development of technologies for minimal access surgery. Cavallo Filippo,Sinigaglia Stefano,Megali Giuseppe,Pietrabissa Andrea,Dario Paolo,Mosca Franco,Cuschieri Alfred Surgical innovation BACKGROUND:The uptake of minimal access surgery (MAS) has, by virtue of its clinical benefits, become widespread across the surgical specialties. However, despite its advantages in reducing traumatic insult to the patient, it imposes significant ergonomic restrictions on the operating surgeons, who require training for its safe execution. Recent progress in manipulator technologies (robotic or mechanical) has certainly reduced the level of difficulty; however, a complete gesture analysis of surgical performance still requires detailed information. This article reports on the development and evaluation of such a system, capable of full biomechanical analysis and machine learning. METHODS:The system for gesture analysis comprises 5 principal modules, which permit synchronous acquisition of multimodal surgical gesture signals from different sources and settings. The acquired signals are used to perform a biomechanical analysis investigating kinematics, dynamics, and muscle parameters of surgical gestures, and to build a machine learning model for segmentation and recognition of the principal phases of a surgical gesture. RESULTS:The biomechanical system is able to estimate the level of expertise of subjects and the ergonomics of using different instruments. The machine learning approach is able to ascertain the level of expertise of subjects and has the potential for automatic recognition of surgical gestures for surgeon-robot interactions.
CONCLUSIONS:Preliminary tests have confirmed the efficacy of the system for surgical gesture analysis, providing an objective evaluation of progress during training of surgeons in their acquisition of proficiency in the MAS approach and highlighting useful information for the design and evaluation of master-slave manipulator systems. 10.1177/1553350613510612
Machine Learning in Rehabilitation Assessment for Thermal and Heart Rate Data Processing. Prochazka Ales,Charvatova Hana,Vaseghi Saeed,Vysata Oldrich IEEE transactions on neural systems and rehabilitation engineering : a publication of the IEEE Engineering in Medicine and Biology Society Multimodal signal analysis based on sophisticated noninvasive sensors, efficient communication systems, and machine learning has a rapidly increasing range of different applications. The present paper is devoted to pattern recognition and the analysis of physiological data acquired by heart rate and thermal camera sensors during rehabilitation. A total of 56 experimental data sets, each 40 min long, of the heart rate and breathing temperature recorded on an exercise bike have been processed to determine the fitness level and possible medical disorders. The proposed general methodology combines machine learning methods for the detection of the changing temperature ranges of the thermal camera and adaptive image processing methods to evaluate the frequency of breathing. To determine the individual temperature values, a neural network model with a sigmoidal transfer function in the first layer and a probabilistic transfer function in the second layer is applied. Appropriate statistical methods are then used to find the correspondence between the exercise activity and selected physiological functions. The evaluated mean delay of 21 s of the heart rate drop related to the change of the activity level corresponds to results obtained in real cycling conditions. Further results include the mean delays of the change in the breathing temperature (167 s) and the breathing frequency (49 s). 10.1109/TNSRE.2018.2831444
Predicting rehospitalization within 2 years of initial patient admission for a major depressive episode: a multimodal machine learning approach. Cearns Micah,Opel Nils,Clark Scott,Kaehler Claas,Thalamuthu Anbupalam,Heindel Walter,Winter Theresa,Teismann Henning,Minnerup Heike,Dannlowski Udo,Berger Klaus,Baune Bernhard T Translational psychiatry Machine learning methods show promise for translating univariate biomarker findings into clinically useful multivariate decision support systems. At present, work in major depressive disorder has predominantly focused on neuroimaging and clinical predictor modalities, with genetic, blood-biomarker, and cardiovascular modalities lacking. In addition, the prediction of rehospitalization after an initial inpatient major depressive episode is yet to be explored, despite its clinical importance. To address this gap in the literature, we used baseline clinical, structural imaging, blood-biomarker, genetic (polygenic risk scores), bioelectrical impedance and electrocardiography predictors to predict rehospitalization within 2 years of an initial inpatient episode of major depression. Three hundred and eighty patients from the ongoing 12-year BiDirect study were included in the analysis (rehospitalized: yes = 102, no = 278). Inclusion criteria were age ≥35 and <66 years, a current or recent hospitalisation for a major depressive episode, and complete structural imaging and genetic data. Optimal performance was achieved with a multimodal panel containing structural imaging, blood-biomarker, clinical, medication type, and sleep quality predictors, attaining a test AUC of 67.74 (p = 9.99). This multimodal solution outperformed models based on clinical variables alone, combined biomarkers, and individual data modality prognostication for rehospitalization prediction. This finding points to the potential of predictive models that combine multimodal clinical and biomarker data in the development of clinical decision support systems.
10.1038/s41398-019-0615-2
Exploiting Machine Learning Algorithms and Methods for the Prediction of Agitated Delirium After Cardiac Surgery: Models Development and Validation Study. Mufti Hani Nabeel,Hirsch Gregory Marshal,Abidi Samina Raza,Abidi Syed Sibte Raza JMIR medical informatics BACKGROUND:Delirium is a temporary mental disorder that occasionally affects patients undergoing surgery, especially cardiac surgery. It is strongly associated with major adverse events, which in turn lead to increased cost and poor outcomes (eg, need for nursing home care due to cognitive impairment, stroke, and death). The ability to identify in advance patients at risk of delirium will guide the timely initiation of multimodal preventive interventions, which will aid in reducing the burden and negative consequences associated with delirium. Several studies have focused on the prediction of delirium. However, the number of studies in cardiac surgical patients that have used machine learning methods is very limited. OBJECTIVE:This study aimed to explore the application of several machine learning predictive models that can pre-emptively predict delirium in patients undergoing cardiac surgery and to compare their performance. METHODS:We investigated a number of machine learning methods to develop models that can predict delirium after cardiac surgery. A clinical dataset comprising over 5000 actual patients who underwent cardiac surgery in a single center was used to develop the models using logistic regression, artificial neural networks (ANN), support vector machines (SVM), Bayesian belief networks (BBN), naïve Bayes, random forest, and decision trees. RESULTS:Only 507 out of 5584 patients (9.1%) developed delirium. We addressed the underlying class imbalance in the training dataset using random undersampling. The final prediction performance was validated on a separate test dataset.
Owing to the target class imbalance, several measures were used to evaluate the algorithms' performance for the delirium class on the test dataset. Of the selected algorithms, the SVM had the best F1 score, kappa, and positive predictive value for positive cases (40.2%, 29.3%, and 29.7%, respectively, with P=.01, .03, and .02, respectively). The ANN had the best area under the receiver operating characteristic curve (78.2%; P=.03). The BBN had the best area under the precision-recall curve for detecting positive cases (30.4%; P=.03). CONCLUSIONS:Although delirium is inherently complex, preventive measures to mitigate its negative effects can be applied proactively if patients at risk are prospectively identified. Our results highlight 2 important points: (1) addressing class imbalance in the training dataset will augment a machine learning model's performance in identifying patients likely to develop postoperative delirium, and (2) as the prediction of postoperative delirium is difficult, because it is multifactorial and has complex pathophysiology, applying machine learning methods (complex or simple) may improve the prediction by revealing hidden patterns, which will lead to cost reduction through the prevention of complications and will optimize patients' outcomes. 10.2196/14993
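The random-undersampling step described in the abstract above can be sketched as follows: the majority class of the training split is subsampled down to the size of the minority class, while the test split is left untouched so that reported metrics reflect the true imbalance. A minimal stdlib version, with the 507-of-5584 cohort mirrored as toy data (the feature values are placeholders, not patient data):

```python
import random

def random_undersample(rows, labels, seed=0):
    """Balance a binary training set by discarding majority-class rows."""
    rng = random.Random(seed)
    pos = [i for i, y in enumerate(labels) if y == 1]
    neg = [i for i, y in enumerate(labels) if y == 0]
    minority, majority = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    # Keep every minority row plus an equally sized random majority sample.
    kept = minority + rng.sample(majority, len(minority))
    rng.shuffle(kept)
    return [rows[i] for i in kept], [labels[i] for i in kept]

# Toy cohort mirroring the proportions above: 507 positives, 5077 negatives.
X = [[float(i)] for i in range(5584)]
y = [1] * 507 + [0] * (5584 - 507)
X_bal, y_bal = random_undersample(X, y)
```

Discarding data is the cost of this approach; alternatives such as class weighting or oversampling keep all rows, which is one reason the choice deserves the emphasis the authors give it.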
Applications and limitations of machine learning in radiation oncology. The British journal of radiology Machine learning approaches to problem-solving are growing rapidly within healthcare, and radiation oncology is no exception. With the burgeoning interest in machine learning comes the significant risk of misaligned expectations as to what it can and cannot accomplish. This paper evaluates the role of machine learning and the problems it solves within the context of current clinical challenges in radiation oncology. The role of learning algorithms within the workflow for external beam radiation therapy is surveyed, considering simulation imaging, multimodal fusion, image segmentation, treatment planning, quality assurance, and treatment delivery and adaptation. For each aspect, the clinical challenges faced, the learning algorithms proposed, and the successes and limitations of various approaches are analyzed. It is observed that machine learning has largely thrived on reproducibly mimicking conventional human-driven solutions with greater efficiency and consistency. On the other hand, since algorithms are generally trained using expert opinion as ground truth, machine learning is of limited utility where problems or ground truths are not well defined, or where suitable measures of correctness are not available. As a result, machines may excel at replicating, automating and standardizing human behaviour on manual chores, while the conceptual clinical challenges relating to definition, evaluation, and judgement remain in the realm of human intelligence and insight. 10.1259/bjr.20190001
Machine Learning Techniques in Clinical Vision Sciences. Caixinha Miguel,Nunes Sandrina Current eye research This review presents and discusses the contribution of machine learning techniques to diagnosis and disease monitoring in the context of clinical vision science. Many ocular diseases leading to blindness can be halted or delayed when detected and treated at their earliest stages. With the recent developments in diagnostic devices, imaging and genomics, new sources of data for early disease detection and patient management are now available. Machine learning techniques emerged in the biomedical sciences as clinical decision-support techniques to improve the sensitivity and specificity of disease detection and monitoring, adding objectivity to the clinical decision-making process. This manuscript presents a review of multimodal ocular disease diagnosis and monitoring based on machine learning approaches. In the first section, the technical issues related to the different machine learning approaches are presented. Machine learning techniques are used to automatically recognize complex patterns in a given dataset. These techniques allow the creation of homogeneous groups (unsupervised learning), or of a classifier predicting group membership for new cases (supervised learning), when a group label is available for each case. To ensure a good performance of the machine learning techniques in a given dataset, all possible sources of bias should be removed or minimized. To that end, the representativeness of the input dataset for the true population should be confirmed, the noise should be removed, the missing data should be treated, and the data dimensionality (i.e., the number of parameters/features and the number of cases in the dataset) should be adjusted. The application of machine learning techniques to ocular disease diagnosis and monitoring is presented and discussed in the second section of this manuscript.
To show the clinical benefits of machine learning in clinical vision sciences, several examples will be presented in glaucoma, age-related macular degeneration, and diabetic retinopathy, these ocular pathologies being the major causes of irreversible visual impairment. 10.1080/02713683.2016.1175019
Making Individual Prognoses in Psychiatry Using Neuroimaging and Machine Learning. Janssen Ronald J,Mourão-Miranda Janaina,Schnack Hugo G Biological psychiatry. Cognitive neuroscience and neuroimaging Psychiatric prognosis is a difficult problem. Making a prognosis requires looking far into the future, as opposed to making a diagnosis, which is concerned with the current state. During the follow-up period, many factors will influence the course of the disease. Combined with the usually scarcer longitudinal data and the variability in the definition of outcomes/transition, this makes prognostic predictions a challenging endeavor. Employing neuroimaging data in this endeavor introduces the additional hurdle of high dimensionality. Machine learning techniques are especially suited to tackle this challenging problem. This review starts with a brief introduction to machine learning in the context of its application to clinical neuroimaging data. We highlight a few issues that are especially relevant for prediction of outcome and transition using neuroimaging. We then review the literature that discusses the application of machine learning for this purpose. Critical examination of the studies and their results with respect to the relevant issues revealed the following: 1) there is growing evidence for the prognostic capability of machine learning-based models using neuroimaging; and 2) reported accuracies may be too optimistic owing to small sample sizes and the lack of independent test samples. Finally, we discuss options to improve the reliability of (prognostic) prediction models. These include new methodologies and multimodal modeling. Paramount, however, is our conclusion that future work will need to provide properly (cross-)validated accuracy estimates of models trained on sufficiently large datasets. 
Nevertheless, with the technological advances enabling acquisition of large databases of patients and healthy subjects, machine learning represents a powerful tool in the search for psychiatric biomarkers. 10.1016/j.bpsc.2018.04.004
Machine-Learning-Based Detection of Craving for Gaming Using Multimodal Physiological Signals: Validation of Test-Retest Reliability for Practical Use. Kim Hodam,Kim Laehyun,Im Chang-Hwan Sensors (Basel, Switzerland) Internet gaming disorder in adolescents and young adults has become an increasing public concern because of its high prevalence rate and potential risk of alteration of brain functions and organization. Cue exposure therapy is designed for reducing or maintaining craving, a core factor of relapse of addiction, and is extensively employed in addiction treatment. In a previous study, we proposed a machine-learning-based method to detect craving for gaming using multimodal physiological signals including photoplethysmogram, galvanic skin response, and electrooculogram. Our previous study demonstrated that a craving for gaming could be detected with fairly high accuracy; however, as the feature vectors for the machine-learning-based detection of a user's craving were selected based on that user's physiological data recorded on the same day, the effectiveness of reusing the machine learning model constructed during the previous experiments, without any further calibration sessions, was still questionable. This "high test-retest reliability" characteristic is important for the practical use of the craving detection system because the system needs to be applied repeatedly throughout the treatment process as a tool to monitor the efficacy of the treatment. We presented short video clips of three addictive games to nine participants, during which various physiological signals were recorded. This experiment was repeated with different video clips on three different days. Initially, we investigated the test-retest reliability of the 14 features used in the craving detection system by computing the intraclass correlation coefficient.
Then, we classified whether each participant experienced a craving for gaming in the third experiment using various classifiers (support vector machine, k-nearest neighbors (kNN), centroid displacement-based kNN, linear discriminant analysis, and random forest) trained with the physiological signals recorded during the first or second experiment. Consequently, the craving/non-craving states in the third experiment were classified with an accuracy comparable to that achieved using the data of the same day, thus demonstrating a high test-retest reliability and the practicality of our craving detection method. In addition, the classification performance was further enhanced by using the datasets of both the first and second experiments to train the classifiers, suggesting that an individually customized game craving detection system with high accuracy can be implemented by accumulating datasets recorded on different days under different experimental conditions. 10.3390/s19163475
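The intraclass correlation coefficient used above to quantify test-retest reliability can be computed from a one-way ANOVA decomposition. The sketch below implements the simplest variant, ICC(1,1), with the stdlib only; the study may well have used a different ICC form, so treat the choice of variant as an assumption.

```python
def icc_1_1(table):
    """ICC(1,1), one-way random effects: table[i][j] is the feature value
    for subject i in recording session j."""
    n, k = len(table), len(table[0])
    grand = sum(sum(row) for row in table) / (n * k)
    row_means = [sum(row) / k for row in table]
    # Between-subject and within-subject mean squares from one-way ANOVA.
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msw = sum((v - row_means[i]) ** 2
              for i, row in enumerate(table) for v in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Perfectly reproducible measurements across three days give ICC = 1,
# while a feature whose between-day variation swamps the between-subject
# variation gives a low (here negative) ICC.
stable = [[1.0, 1.0, 1.0], [2.0, 2.0, 2.0], [3.0, 3.0, 3.0]]
noisy = [[1.0, 3.0, 2.0], [2.0, 1.0, 3.0], [3.0, 2.0, 1.0]]
```

Features with a high ICC across days are the ones worth keeping in a detector that must work without same-day recalibration, which is exactly the practical concern the abstract raises.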