Qualitative Analysis of Techniques for Device-Free Human Activity Recognition

Tuhina Raj*, Tehmina Nisar, Mehak Abbas, Rashmi Priyadarshini, Shaheen Naz, Usha Tiwari

Electrical, Electronics and Communication Engineering, Sharda University, Greater Noida 201310, U.P., India

Corresponding Author Email: tuhina10raj@gmail.com

Page: 639-653 | DOI: https://doi.org/10.18280/ria.370313

Received: 1 May 2023 | Revised: 18 May 2023 | Accepted: 28 May 2023 | Available online: 30 June 2023

© 2023 IIETA. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).

Abstract: 

Continuous human monitoring has become increasingly important in various applications, including health, security, intelligent systems, and leisure activities. Human Activity Recognition (HAR) through the use of wearables, tagged objects, and device-free localization (DFL) has gained major attention from researchers. DFL approaches have been particularly recommended due to their non-intrusive nature and their applicability in diverse fields. The use of Artificial Intelligence (AI) has transformed the extraction of deeply concealed information for precise detection and interpretation. However, challenges such as data collection, intra-class variability, and real-time recognition in dynamic, rapidly changing scenarios still persist. This paper provides a review of the various techniques for HAR and their applications in different fields. A comprehensive analysis of methodologies and data from papers published from 2000 to 2023 has been conducted. The paper also discusses research problems and future opportunities in this field.

Keywords: 

Human Activity Recognition, Artificial Intelligence, device-free localization (DFL) approaches, data collection

1. Introduction

Human Activity Recognition (HAR) has evolved significantly over the past few decades and has found importance in applications such as healthcare, recreational activities, security, and various smart technologies. This breadth of applications has created the need for multifaceted activity recognition systems. Figure 1 depicts the categorization of HAR.

Figure 1. Categorization of HAR

Analyzing actions from still images or video snippets is the primary objective of Human Activity Recognition. Systems for understanding human activity therefore attempt to precisely group input data into its fundamental activity category [1]. According to the complexity of the behaviour involved, Human Activity Recognition can be categorized into three basic types: (i) Encounter, (ii) Activity, (iii) Interaction.

Device-free localization (DFL) systems use numerous static links to track and identify objects (people) by detecting variations in the received signal strength (RSS) over a period [2]. These systems do not require the monitored targets to carry or wear any device (such as a cell phone, an RFID tag, or a low-power transmitter or receiver) because the radio channel acts as the sole data source. This technology has potential uses in surveillance, security and rescue operations, industrial safety systems, assisted living, and senior care facilities. Compared with other technologies, a DFL system has several advantages, such as the capacity to function under congested conditions and to see through walls [3].

Several RSS-based DFL methods postulate that a person affects radio signal propagation in the vicinity of where he or she is located [4]. As a result, a person's proximity to a link alters its RSS values compared with when that person is farther away.

To quantify the change when a human is close to a link, these algorithms must first learn the reference characteristics of the RSS on each link while no one is nearby [5, 6]. However, in crowded residential settings, diverse everyday activities move objects of varied sizes, shapes, and materials around the home. These changes alter the fundamental RSS properties on many links, and the DFL system must adjust as a result [7].

Device-free localization (DFL), a radio frequency (RF)-based device-free indoor localization, has drawn a lot of research attention in recent years because of its ease of use, low cost, and compatibility with devices already equipped with an RF interface.

Due to the increased need for human-centric applications in healthcare and supported living over the past few decades, human activity monitoring has attracted significant research attention. In this regard, human monitoring activities can be incorporated into innovative building systems to enhance building management and overall quality of life, particularly for the elderly who are experiencing health decline due to age.

However, many older individuals live out their later years in poor health, frequently suffering from crippling sickness and disability as a result of the decline in physical or mental abilities brought on by age-related disorders. The ageing of the population is closely linked to the rise in disability rates worldwide [8].

According to the study [9], most older individuals want to live independently in their own homes and communities, and this can be supported without sacrificing crucial factors like safety and energy usage. This section introduces the potential of RF signals, which are widely used for wireless communications, as sensing instruments for DFL systems in human activity monitoring. DFL is founded on the idea of radio irregularity: the presence of humans in the wireless communication channel can cause interference and alter the channel's properties.

This paper analyses Human Activity Recognition (HAR) and its uses in security, smart technology, healthcare, and leisure activities. HAR classifies encounter-, activity-, and interaction-level human behaviour from images or video clips. Device-free localization systems can be used for surveillance, security, rescue efforts, and assisted living; they track and identify objects based on changes in the received signal strength. The study highlights the significance of precise data collection, feature extraction, activity classification, and performance assessment. AI and machine learning approaches have reshaped HAR, making it possible to extract concealed information for accurate detection and interpretation of human activity. Figure 2 depicts the stages of HAR.

Figure 2. Stages of HAR

The technique of automatically recognizing and categorizing human actions using information gathered from sensors is known as Human Activity Recognition (HAR). HAR goes through four stages:

1. Data Collection: The first step is gathering information from sensors that the user is wearing. The sensors can be magnetometers, accelerometers, gyroscopes, or a combination of these. The information gathered must be accurate and of high quality to allow for the identification of actions. The data can also be collected through Radio Frequency-based techniques or vision.

2. Feature Extraction: At this step, characteristics that represent the underlying actions are extracted from the gathered data through processing. These may be frequency-domain features such as Fast Fourier Transform (FFT) coefficients, or statistical features such as mean, standard deviation, and variance, typically computed after denoising and data filtering. The extracted features should be precise and informative for distinguishing the various activities.

3. Activity Classification: The activity is classified at this stage using the extracted characteristics. The classification may be carried out with machine learning methods such as SVM, K-means, PCA, and AdaBoost, or with deep neural networks such as CNN, DNN, LSTM, and RNN. The complexity of the classification problem determines the choice of algorithm.

4. Evaluation of Performance: The HAR system's performance is evaluated in the final stage using measures including accuracy, precision, recall, and F1-score. To ensure the system's efficacy and generalization, a wide range of activities and users should be evaluated. The performance evaluation step helps locate the strengths and weaknesses of the system so that the required adjustments can be made.

Human Activity Recognition (HAR) is the practice of using Artificial Intelligence (AI) to recognize and label human actions from raw activity data extracted from a variety of sources (so-called gadgets). Wearable sensors [10, 11] and commercial off-the-shelf equipment [12, 13] are two examples of such gadgets. Although sensors, video cameras, radio frequency identification, and Wi-Fi are not new, their use in HAR is still in its early stages.

The rapid development of technologies like AI, which permits the use of these devices in a variety of application fields, is the driver of HAR's advancement [14]. We may therefore conclude that HAR devices and AI methodologies have a mutually beneficial relationship. Prior to these AI developments, models were based on a single image or a small number of visuals, but now there are far more options.

Advances in AI [15-17] and machine learning methodologies [18, 19] have revolutionized the ability to extract deeply concealed information for precise detection and interpretation. Determining how these new concepts are affecting HAR devices is crucial.

Table 1. Sensors and their role in HAR

Sensor Type | Description | Example Use Cases
Accelerometer | Measures acceleration and orientation | Detecting walking, running, or cycling
Gyroscope | Measures rotation and orientation | Detecting changes in direction or rotation
Magnetometer | Measures magnetic field strength and direction | Determining orientation and direction
GPS | Determines location using satellite signals | Tracking outdoor activities, such as hiking or running
Barometer | Measures air pressure | Detecting changes in altitude, such as stair climbing
Heart Rate Monitor | Measures heart rate | Monitoring cardiovascular activity during exercise
EMG | Measures electrical activity in muscles | Detecting muscle activation patterns during exercise
Camera | Captures visual data | Recognizing gestures or facial expressions

This motivates an evaluation that focuses on HAR and AI devices as they advance together. The primary goal of this study is to explore the HAR framework in greater depth so as to encompass devices and application domains within the specialized AI framework. Figure 3 surveys the sensors that HAR systems can draw on: cameras, gyroscopes, magnetometers, GPS, barometers, heart rate monitors, and EMG sensors are all essential to HAR, and knowing these fundamentals helps improve HAR systems. Which devices can be employed in which applications, and which AI traits should be considered when generating such a framework, are among the issues that need to be investigated. Table 1 describes various sensors and their roles in HAR.

The table provides an overview of the sensor types utilised in Human Activity Recognition (HAR), such as accelerometers, gyroscopes, magnetometers, GPS, barometers, heart rate monitors, EMG sensors, and cameras. The development of precise and thorough activity identification systems depends on these sensors, which are critical for recording many aspects of human activity. Accelerometers measure acceleration and track changes in movement and direction. Gyroscopes measure rotation and orientation and can detect changes in direction or rotation. Magnetometers measure the magnetic field's strength and direction, allowing orientation to be determined relative to the Earth's magnetic field. Barometers monitor air pressure and detect changes in elevation, while GPS offers precise latitude, longitude, and altitude data.

2. Data Collection

2.1 Survey on sensor-based techniques

Surveys that concentrate on sensor-based methods for activity recognition are presented in this section. Two types of data can be fed into a HAR system as input: video sequences or images of human activity, and time-series data of a person's movements during numerous tasks captured by sensors such as accelerometers and gyroscopes present in electronic devices [20]. These sensors may be attached to everyday items or worn as a wearable device [21], and they can be classified as environmental, wearable, or smartphone-based [22]. The survey's first classification focuses on sensor-based methods. Various methods utilizing wearable sensors, such as accelerometers, motion sensors, proximity sensors, and magnetometers, as well as dense sensing, are described.

The most widely used wearable sensor for action detection is the low-cost, highly reliable accelerometer, which has high classification accuracy rates of 92.25% [23], 96% [24], and 99.4% [25, 26]. Three-dimensional accelerations can be observed as:

$\vec{A}=\frac{d \vec{v}}{d t}=(\vec{g}+\vec{l}),\left(\begin{array}{c}A_x \\ A_y \\ A_z\end{array}\right)=\left(\begin{array}{c}g_x+l_x \\ g_y+l_y \\ g_z+l_z\end{array}\right)$

where $\vec{A}$ is the acceleration, $\vec{g}$ is the acceleration due to gravity, and $\vec{l}$ is the applied linear acceleration, all measured in m/s².

The various ways sensors can be employed for HAR can be classified as shown in Figure 3.

In the second classification, the authors divide the literature into knowledge-driven and data-driven techniques. For data-driven approaches, they go through generative and discriminative modelling strategies. Knowledge-driven methods are further broken down into logic-based, ontology-based, and mining-based techniques. Wang et al. [27] performed a second survey focused on the various deep learning methods for sensor-based HAR. This study organises the literature according to the deep model, sensor modality, and application domain. Based on the deep model, the relevant research is divided into three categories: discriminative deep, generative deep, and hybrid deep architectures. According to the application area, the relevant work is divided into categories such as everyday activities, sleep, exercise, and health.

Figure 3. Encounters for HAR for sensors

Ahad et al. [28] detailed the research done on wearable sensor-based activity detection. This review provides a thorough analysis of many HAR system design difficulties, including choosing sensors and features, data collection and protocol, recognition performance, processing techniques, and energy usage. It divides the previous work into semi-supervised offline systems, supervised offline systems, and supervised online systems. Another review provided an in-depth analysis and separated existing studies into two main categories: local interaction activity, involving the movement of extremities (such as using objects), and global body motion activity, involving the movement or displacement of the entire body.

Cheok et al. [29] gave a thorough analysis of sensor-based and vision-based methods for gesture recognition. The study compares various strategies in the literature and categorises the material according to several HAR steps, including data gathering, pre-processing, segmentation, feature selection, and classification.

Since many systems employ a mobile phone (built-in sensors) for activity identification, several studies also concentrate on mobile phone-based HAR solutions. Shoaib et al. [30] give one such survey that describes the research utilising mobile devices.

2.2 Survey on radio frequency-based techniques

This section focuses on radio frequency-based techniques that can detect human activity. One survey presented an overview of device-free radio-based localization and device-free radio-based activity recognition (DFAR). For DFL, the researchers discuss topics including precise presence detection, geographical coverage, adaptable machine learning, radio tomography, and statistical modelling. Published DFAR research falls into three subcategories: statistical modelling-based DFAR, machine learning-based DFAR, and adaptive threshold-based DFAR [31].

Wang and Zhou [32] published a survey describing how Internet of Things (IoT) solutions based on RFID technology are employed in healthcare. The paper explains the different applications of RFID devices, including body-centric tags, such as wearable and implanted tags, and ambient passive sensors, such as volatile chemical sensors and temperature sensors. It also presents RFID applications for tracking, gesture detection, and remote monitoring in the study of human behaviour. The article addresses the potential uses of RFID technology but does not go into much detail regarding the underlying research [33].

This study suggests that current work can be classified into four main categories (RFID-based, Wi-Fi-based, ZigBee radio-based, and radar-based), compared on parameters such as activity types, deployment costs, coverage, and accuracy [34]. By presenting a brief summary of the key WiFi-related technologies from the literature, the study lays the foundation for a WiFi-based activity identification system.

Table 2 compares various Radio-Frequency based techniques for the localization purpose:

Table 2. Different RF-based technologies

RF-Based Technology | Localization Principle | Coverage | Accuracy
WiFi | Proximity, TOA, TDOA, RSSI fingerprinting | Building level (outdoor/indoor) | 1 m-5 m
RFID | Proximity, TOA, RSSI theoretical propagation model | Indoor | 1 m-2 m
Bluetooth | RSSI fingerprinting & RSSI theoretical propagation model | Indoor | 2 m-5 m
Zigbee | RSSI fingerprinting & RSSI theoretical propagation model | Indoor | 3 m-5 m
FM | RSSI fingerprinting | Indoor | 2 m-4 m
RADAR | TOA, TDOA, AOA, RSSI | Outdoor/Indoor | Within 30 m

In this research, the two primary areas of activity recognition research are coarse-grained and fine-grained activities [35]. The work focuses on device-free sensing methods, classifying them based on signal characteristics, measurement method, and the descriptor employed.

2.3 Surveys on vision-based techniques

The surveys that concentrate on using vision to recognise activities are presented in this section. Cameras have been extensively employed for a variety of purposes, including structure reconstruction, modeling, action detection, position estimation, and human body tracking [36-40]. To determine the separation (or "depth") of points in the scene from the camera, depth cameras employ sensing technologies; the image pipeline, optics, and sensors are all included in depth cameras [41]. Noise in depth pictures is significantly influenced by the technique used to obtain depth information. Stereo and structured light systems calculate depth using point correspondences from various viewpoints, and interpolation between these points results in inaccurate depth measurements [42-44]. Table 3 lists various vision-based sensors with their respective F1 scores and example roles.

Table 3. Various sensors (vision based) and their respective F1 score and examples

Sensor Type (Vision-Based) | F1 Score | Example
RGB Camera | 0.70-0.95 | Distinguishing postures such as standing, walking, and running
Depth Camera | 0.65-0.90 | Capturing motions like standing, walking, and running with greater precision in low-light conditions
Inertial Measurement Unit (IMU) + Camera | 0.75-0.95 | Recognizing activities such as jumping, squatting, and lifting weights
RGB-D Camera | 0.75-0.95 | Recognizing activities such as carrying objects, crawling, and cycling
Wearable Sensors (IMU, EMG) | 0.60-0.85 | Recognizing activities such as walking, running, and jumping with the added benefit of being wearable

The most basic and traditional form of activity recognition involves installing security cameras in the area and watching for people's movements. With the rising demand for advanced video surveillance systems and the transition to digital video surveillance infrastructure, automated video surveillance has become a significant challenge in terms of data analysis and management: effectively analysing and managing the vast amounts of video data is a substantial difficulty in the field [45]. We have analysed and evaluated the capabilities, efficiency, and restrictions of these cameras.

One survey reviewed the literature on vision-based activity detection techniques and classified it into two major groups: unimodal and multimodal methods. The unimodal approaches, which use single-modality data, are further broken down into stochastic, rule-based, space-time-based, and shape-based methods. The multimodal techniques employ data from various sources and are further classified into social networking, behavioural, and affective methods [46]. Another line of research separates the work into two major groups: solutions based on deep neural networks and solutions founded on representation. Representation-based solutions are further classified into holistic, local, and aggregation techniques, while deep neural network-based solutions comprise multiple-stream networks, temporal coherency networks, generative models, and spatiotemporal networks [47].

This review analyses sensor-based, radio frequency-based, and vision-based techniques for activity recognition, focusing on wearable sensors, data gathering, segmentation, feature selection, and classification. It discusses knowledge-driven versus data-driven techniques, deep learning methods, radio frequency-based techniques, and vision-based techniques using cameras and depth cameras. The analysis helps in understanding the various approaches and technologies available for activity recognition.

3. Feature Extraction

Image processing technology utilizes binary, colour, and grayscale features for identification, classification, diagnosis, clustering, recognition, and detection purposes. Feature extraction techniques are used to obtain as much information from a picture as feasible, but choosing characteristics and extracting them effectively remain rather difficult [48]. Feature extraction methods can be grouped according to the traits they target: geometric, statistical, texture, and colour traits. Primary feature types are subdivided into subtypes; colour characteristics, for example, are categorized into three: colour moments, colour histogram, and average RGB [49].

3.1 FFT

The Fourier transform is a method that is frequently used to extract features from time-dependent data, particularly speech data.

The FFT algorithm can be used to extract features from time-domain inputs that are useful for classifying various HAR. The general procedures for applying FFT in the HAR process are as follows:

  • Acquire information: Acquire time-domain signals that represent various behaviours, such as standing, sitting, running, and so forth.
  • Data segmentation: Divide the signal data into smaller, more manageable time intervals that are typically 1 to 5 seconds long.
  • Apply pre-processing techniques: Filter, normalise, and extract features from the signal data using pre-processing techniques to get the data ready for analysis.
  • Apply FFT: To determine the signal's frequency spectrum, apply the FFT technique to each time frame.
  • Extract characteristics: Examine the features of the frequency spectrum, such as the dominant frequency or power in particular bands, to categorise physical activity.
  • Train a classifier: To categorise different behaviours, train a classifier, such as a decision tree or neural network, using the features that were obtained from the signal data.
  • Validate the classifier: Test the classifier on fresh, unseen data to gauge how well it can distinguish between various behaviours.

By detecting the harmonics in signals, the FFT approach makes it easier to distinguish between distinct HAR process activities and improves signal analysis.
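To make this pipeline concrete, the sketch below extracts two frequency-domain features from one window of accelerometer samples. This is a minimal illustration under assumed settings (a 50 Hz sampling rate and a 128-sample window), not a prescribed implementation:

```python
import numpy as np

def fft_features(window, fs=50.0):
    """Return [dominant frequency, spectral energy] for one signal window."""
    spectrum = np.abs(np.fft.rfft(window - np.mean(window)))  # remove DC offset first
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    dominant = freqs[np.argmax(spectrum[1:]) + 1]  # skip the zero-frequency bin
    energy = np.sum(spectrum ** 2) / len(window)   # total spectral energy
    return np.array([dominant, energy])

# Synthetic 2.56 s window: a 2 Hz oscillation (roughly walking cadence) plus noise
t = np.arange(0, 2.56, 1 / 50.0)
window = np.sin(2 * np.pi * 2.0 * t) + 0.1 * np.random.randn(len(t))
print(fft_features(window))  # dominant frequency comes out near 2 Hz
```

Features like these, computed per window, form the rows of the matrix handed to the classifier in the later stages.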

The characteristics of image processing technology are used for identification, classification, diagnosis, grouping, recognition, and detection. The efficacy of recognition is increased by methods like feature extraction, denoising, and statistical data processing. Data filtering eliminates noise and unwanted signals while utilising a variety of filters to provide the best results.

3.2 Denoising

The traditional approaches to sensor signal denoising mostly use low-pass filtering, based on the idea that noise occurs at high frequencies while the signal energy is concentrated at low frequencies; examples include the Fourier transform (FT) [50], the moving average filter [51], and the Wiener filter [52]. Recently, innovative algorithms have been proposed, and time-frequency analysis is a crucial component of these denoising techniques.

For instance, empirical mode decomposition (EMD) has been used to build a dyadic filter for spectral noise, and its behaviour on fractional Gaussian noise has been tested numerically [53]. The Hilbert-Huang transform approach, which combines EMD with Hilbert spectral analysis, has been used to determine the functional degree of balance control in patients by removing the tremor contribution from postural signals extracted from accelerometers [54].

Wavelet analysis is used to denoise the initial vibration data for rolling bearing defect classification [55]. The reduction of noise employs several statistical theory-based techniques. For overflow channel failure investigation, the high order spectrum is used to get rid of Gaussian noise [56].

For discrete unpredictable structures based on echo state networks, a support vector machine (SVM) has been used as an adaptive noise canceller to remove noise and disturbance and produce an effective control effect [57]. Two popular statistical-theory-based denoising approaches are independent component analysis (ICA) and principal component analysis (PCA) [58, 59]. Researchers have also examined additional techniques, including singular value decomposition (SVD) [58], sparse decomposition [60], and various adaptive filters [61].

3.3 Statistical data

The effectiveness of statistical data processing has been validated by experiments involving different datasets.

The initial phase in the face recognition process, known as face detection, involves searching for and separating the face region from the background of image and video frames [62]. Several algorithms and techniques for face detection and identification have been introduced over the last few years, each with unique benefits and drawbacks [63].

Activity recognition problems have been widely solved using statistical learning techniques [64, 65]. Naive Bayes and K-Nearest Neighbour classifiers were used by Chavarriaga et al. [65] to identify seven movements, including walking, running, and leaping. Because no discriminative traits could be identified automatically, they had to rely on hand-crafted characteristics to differentiate correctly between the various activities [66].

Although commonly used in human activity identification, feature extraction techniques including symbolic representation [67], statistics of raw data [68], and transform coding [69] are empirical and require expert knowledge in order to build features [70].

3.4 Data filtering

Filtering is an essential phase in the analysis of sensor data. Mathematical techniques are used to extract relevant information from raw sensor data while removing noise and other undesired signals. The goal of filtering is to produce a signal that faithfully depicts the underlying physical phenomenon that the sensor is measuring.

In the processing of sensor data, a wide range of filters, including low-pass, high-pass, band-pass, and notch filters, can be applied. The passband of each type of filter determines how it affects the frequency components of the incoming signal [71].

Figure 4 illustrates filtering of the data at a distant point. Low-pass filters are used to remove the signal's high-frequency noise while letting low-frequency components pass through. Conversely, high-pass filters eliminate low-frequency noise while allowing high-frequency signals to pass [73].

Band-pass filters allow a certain frequency range to pass while blocking all others. Band-stop filters, commonly referred to as notch filters, block a certain frequency range while allowing all others to pass [74].

The application and the properties of the signal being monitored determine which filter to use, and in some circumstances a combination of several filters may be needed to achieve the desired outcome. It is also important to keep in mind that filtering can introduce a small amount of signal delay, which must be taken into account in real-time applications [75].

Figure 4. Filtering the data at a distant point [72]
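As an illustration of the low-pass case, the sketch below applies a zero-phase Butterworth filter with SciPy. The 50 Hz sampling rate, 3 Hz cutoff, and filter order are assumptions chosen for demonstration; filtfilt avoids the phase delay discussed above, at the cost of being unsuitable for strictly real-time use:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def lowpass(signal, cutoff_hz, fs_hz, order=4):
    """Zero-phase low-pass filter: keeps slow body motion, removes high-frequency noise."""
    b, a = butter(order, cutoff_hz / (fs_hz / 2.0))  # cutoff normalised to Nyquist
    return filtfilt(b, a, signal)                    # forward-backward pass, no phase delay

fs = 50.0                                   # assumed sampling rate
t = np.arange(0, 5, 1 / fs)
clean = np.sin(2 * np.pi * 1.0 * t)         # 1 Hz "activity" component
noisy = clean + 0.3 * np.random.randn(len(t))
filtered = lowpass(noisy, cutoff_hz=3.0, fs_hz=fs)
```

A causal filter (e.g. scipy.signal.lfilter) would be used instead when the delay budget of a real-time application must be modelled explicitly.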

4. Activity Classification

In recent years, focus has shifted towards the use of Artificial Intelligence for HAR.

Deep learning has become a potent method for Human Activity Recognition (HAR). In HAR, sensors are implemented to gather information about how people move and how they behave, and machine learning algorithms are then used to analyse that information and identify certain behaviours [76].

Convolutional neural networks (CNNs) and recurrent neural networks (RNNs) have proven very successful for HAR. While RNNs are better suited to time-series data, CNNs are particularly well suited to HAR applications that use image or video data.

More and more technologies and techniques have been used to create sensor-based HAR during the past ten years [76-78]. It has been extensively utilised in a variety of fields, including medicine [79-81], sporting competition [82], smart homes [83, 84], and many more.

4.1 Deep neural network

4.1.1 Convolutional neural network (CNN)

Each neuron in a CNN is connected to a local receptive region in the layer above it [85]. This connection functions as a filter, followed by a non-linear activation function:

$a_{i j}=f\left(\sum_{m=1}^H \sum_{n=1}^K w_{m, n} \cdot x_{i+m, j+n}+b\right)$

where $x_{i+m, j+n}$ is the activation of the upper-layer neuron connected to neuron $(i, j)$, $a_{ij}$ is the resulting activation, $f$ is a non-linear function, $w_{m, n}$ is the $(H \times K)$ weight matrix of the convolution kernel, and $b$ is the bias. Deep convolutional layers characterise data in a progressively more abstract manner, so a CNN with many convolution layers can develop hierarchical representations of the data.

CNNs have been offered as a solution to the problem of extracting more useful features from the original signal [86]. They consist of two parts: a hierarchical feature extractor and a fully connected layer. CNNs can construct hierarchical data representations and classifiers and have been extensively used in ubiquitous computing and human activity identification [87-90].
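As a concrete illustration of this two-part structure, the minimal PyTorch sketch below stacks two 1-D convolution blocks (the feature extractor) and one fully connected layer (the classifier). The window length of 128 samples, the three input axes, and the six activity classes are illustrative assumptions, not values taken from the reviewed studies:

```python
import torch
import torch.nn as nn

class HARCNN(nn.Module):
    """Minimal 1-D CNN for windowed inertial data of shape (batch, 3 axes, 128 samples)."""
    def __init__(self, n_classes=6):
        super().__init__()
        self.features = nn.Sequential(            # hierarchical feature extractor
            nn.Conv1d(3, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.classifier = nn.Linear(64 * 32, n_classes)  # window: 128 -> 64 -> 32 after pooling

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = HARCNN()
logits = model(torch.randn(8, 3, 128))  # 8 windows in, (8, 6) class scores out
```

Deeper variants simply repeat the convolution-pooling block, letting each stage represent the signal at a coarser time scale.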

4.1.2 Long short-term memory

Recently, model optimisation strategies that enable the use of deep CNNs in resource-constrained applications have been proposed. A complementary family of models is the long short-term memory (LSTM) network, whose cell stores information over time and regulates it through three gates: the input, forget, and output gates.

The flow of new information is controlled by the input gate, the amount of information that is discarded from the cell is directed by the forget gate, and the flow of information out of the cell is controlled by the output gate.
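For reference, the standard LSTM cell update can be written as follows, where $x_t$ is the input at time $t$, $h_{t-1}$ the previous hidden state, $c_t$ the cell state, $\sigma$ the logistic sigmoid, and $\odot$ element-wise multiplication:

$\begin{aligned} i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) \\ f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) \\ o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) \\ \tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c) \\ c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t \\ h_t &= o_t \odot \tanh(c_t) \end{aligned}$

Here $i_t$, $f_t$, and $o_t$ are the input, forget, and output gate activations, and the $W$, $U$, and $b$ terms are the learned parameters of each gate.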

Recurrent deep learning techniques have become increasingly popular for HAR in ubiquitous and wearable computing, with models based on LSTM units being the most successful [91]. This has led to improved recognition performance on harder tasks [92-94].

Nevertheless, Long Short-Term Memory (LSTM) networks can struggle to capture very long-term dependencies in sequences when time lags are large, and they carry a substantial computational cost. They also require large amounts of training data to learn complex patterns effectively.

5. Machine Learning

Human Activity Recognition, the process of recognising and categorising patterns of motion made by people, is an established use of machine learning. Several sectors, including healthcare, sports, and security, can benefit from this capability [95].

To detect human activity via machine learning-based techniques, a computer model must be trained to identify patterns in data produced by various sensors [96], namely accelerometers, gyroscopes, and magnetometers. These sensors are frequently included in worn devices commonly used to identify human activity, such as fitness bands and smartwatches [97, 98].

The machine learning model is trained on an extensive set of sensor data that has been identified, with each label indicating a particular human activity, such as cycling, walking, or running [99]. This dataset is used by the model to identify patterns and links between sensor data and actions that are pertinent. Using the sensor data collected by the wearable device, the model may be used to anticipate the activity being done by a user in real-time once it has been trained [100].

There are various machine learning algorithms used in Human Activity Recognition, including decision trees, random forests, support vector machines (SVM), K-Means, PLA and AdaBoost. Each algorithm has its strengths and weaknesses and is used depending on the specific requirements of the application [101, 102].

This paper reviews various machine learning algorithms for Human Activity Recognition.

5.1 Support vector machine (SVM)

The support vector machine is a widely used supervised machine learning approach in HAR [103]. Here, HAR involves recognising the various actions carried out by a person based on information gathered from sensors like accelerometers, gyroscopes, and magnetometers [104, 105].

SVMs are of great use in HAR because they handle high-dimensional data efficiently, can model nonlinear relationships between features and labels, and generalise well to new data. An SVM separates the classes by establishing a hyperplane in a high-dimensional space that divides the data as cleanly as possible.

The initial step in using SVMs in HAR is to collect data or information from user-worn sensors. Thereafter, pertinent characteristics including the mean, standard deviation, and frequency domain features are extracted from the pre-processed data [106, 107]. An SVM classifier that has been trained on a labelled dataset of human activities is then given the features as input [108, 109].

During testing, the SVM classifier predicts the user's behaviour based on the features obtained from the sensor data. The quality of the features, together with the quantity and calibre of the labelled data used for training, determines how accurate the SVM classifier is.
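A minimal scikit-learn sketch of this pipeline is shown below. The feature matrix, labels, and RBF-kernel hyperparameters are placeholders for illustration, standing in for the windowed features described above:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data: one row of window features per segment, one activity label per row
X = np.random.randn(200, 12)        # e.g. means, standard deviations, FFT energies
y = np.random.randint(0, 3, 200)    # three hypothetical activities

# Feature scaling matters for SVMs; the pipeline applies it before the RBF classifier
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X[:150], y[:150])           # train on labelled windows
print(clf.score(X[150:], y[150:]))  # accuracy on held-out windows
```

With real data, C and gamma would be tuned by cross-validation rather than fixed as here.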

5.2 K-means

K-means is an unsupervised machine learning technique that may be applied to the detection of human activities. In contrast to supervised learning, where the algorithm is trained on labelled data, unsupervised algorithms like k-means work on unlabelled data and seek to identify trends and patterns within it [110].

K-means may be used to sort sensor-generated data into groups that reflect various activities in the identification of human activity. The algorithm finds groups of data points with similar features and labels each group with a name that describes an action.

K-means may be used [111, 112], for instance, to cluster sensor data from wearable devices into clusters that correspond to different activities, such as running, strolling, and resting. The algorithm finds the underlying patterns and structure within the sensor data without requiring any prior knowledge of the activities or any labelled data.

One limitation of k-means in the identification of human activities is the presumption that the data can be represented by a fixed number of clusters. In real-life situations, where human actions can be varied and continuous, this assumption may not hold. Using k-means is also difficult when the appropriate number of clusters is unknown, since this number must be specified before the algorithm is applied.

The k-means algorithm involves initialization, assignment, update, and iteration until convergence. After k centroids are initialised, each data point is assigned to the cluster with the closest centroid, using Euclidean distance or another distance metric. New centroids are then calculated as the mean of all data points assigned to each cluster. The assignment and update steps repeat until convergence, at which point each data point belongs to a specific cluster and the centroids represent the centre points of the clusters.
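A minimal scikit-learn sketch follows; the feature matrix and the choice of k = 3 clusters are illustrative assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans

# Placeholder: window features computed from unlabelled wearable data
X = np.random.randn(300, 8)

# k must be chosen up front, which is the limitation noted above
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

labels = kmeans.labels_              # cluster index assigned to each window
centres = kmeans.cluster_centers_    # one centroid per putative activity
```

In practice the clusters would then be inspected, or matched against a few labelled examples, to decide which activity each one represents.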

5.3 Perceptron Learning Algorithm (PLA)

The Perceptron Learning Algorithm (PLA), a kind of supervised learning algorithm, may be applied to the detection of human activity. Based on an array of weights applied to each attribute or input variable, the algorithm is built to categorise incoming data into one of two groups [113].

The PLA may be used to categorise sensor data produced by wearable devices into different human activities, such as cycling, walking, and running. The method is trained on a labelled dataset in which each data point represents a distinct action, and appropriate weights for the features are determined during training [114, 115]. During classification, the PLA computes a linear combination of the input variables and the associated weights to produce a single output value. If the output exceeds a certain threshold, the method assigns the input to one category; otherwise, it assigns it to the other [116].

One limitation of the PLA is that it only functions well for data that can be separated into two groups by a linear boundary. In human activity identification this assumption might not hold, since the data can be quite complex and non-linear.

Perceptron Learning Algorithm may not converge if data isn't linearly separable, leading to infinite loops and ineffectiveness in complex nonlinear classification problems.
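The update rule can be sketched in a few lines of NumPy. This is an illustrative implementation on a synthetic, linearly separable toy problem; the epoch cap guards against the non-convergence noted above:

```python
import numpy as np

def pla_train(X, y, epochs=100):
    """Perceptron learning with labels y in {-1, +1}; the epoch cap matters because
    the loop never terminates on data that is not linearly separable."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append a bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        errors = 0
        for xi, yi in zip(Xb, y):
            if yi * np.dot(w, xi) <= 0:        # point is misclassified
                w += yi * xi                   # perceptron update rule
                errors += 1
        if errors == 0:                        # a separating boundary was found
            break
    return w

# Toy binary task, e.g. "active" (+1) vs "resting" (-1) feature windows
X = np.vstack([np.random.randn(50, 2) + 2, np.random.randn(50, 2) - 2])
y = np.array([1] * 50 + [-1] * 50)
w = pla_train(X, y)
```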

5.4 Adaptive Boosting (AdaBoost)

AdaBoost (Adaptive Boosting), a popular ensemble learning method in machine learning, may be applied to Human Activity Recognition (HAR) problems [117]. AdaBoost's core concept is the creation of a strong classifier by combining several weak classifiers.

Here are the basic steps for using AdaBoost for HAR as shown in Figure 5:

Figure 5. Basic steps for using AdaBoost for HAR

Any classifier that can predict an activity label from a collection of characteristics can serve as a weak classifier in the context of HAR [118, 119]; examples include decision trees, SVMs, and neural networks. AdaBoost works by iteratively training weak classifiers on the training data, with each new classifier concentrating on the examples that the previous classifiers categorised incorrectly. Giving greater weight to cases that are challenging to categorise correctly increases the overall accuracy of the model [119].

AdaBoost's performance may be affected by noisy data and outliers, leading to overfitting. Its slow training phase and dependency on weak classifiers make it a challenging choice.
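A minimal scikit-learn sketch of this scheme, using decision stumps as the weak classifiers, is shown below; the data and hyperparameters are placeholders (and the parameter is named base_estimator rather than estimator in scikit-learn versions before 1.2):

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

X = np.random.randn(200, 10)        # placeholder window features
y = np.random.randint(0, 2, 200)    # placeholder binary activity labels

# Depth-1 trees (stumps) as weak learners; each boosting round re-weights
# the training examples the previous stumps got wrong
ada = AdaBoostClassifier(
    estimator=DecisionTreeClassifier(max_depth=1),
    n_estimators=50,
    learning_rate=1.0,
)
ada.fit(X, y)
print(ada.score(X, y))
```

Capping n_estimators and cleaning outliers beforehand are the usual mitigations for the noise sensitivity noted above.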

5.5 Evaluation of performance

The technique of recognising and categorising human movements or activities using sensor data gathered from wearable technology or ambient sensors is known as Human Activity Recognition (HAR). To establish the accuracy and dependability of HAR models in practical applications, it is crucial to assess their performance.

The effectiveness of HAR models is measured using several metrics; this paper focuses on precision and the F1-score.

It is vital to remember that the choice of evaluation metric depends on the specific application and the data's attributes. For instance, recall may be more crucial than precision in a healthcare application, where false negative predictions could have catastrophic repercussions.

Conversely, in a security application false positives might be very costly, and precision could be of greater significance than recall.

Precision and the F1 score are discussed throughout this paper.

5.6 Precision

The degree of measurement consistency is referred to as precision. In other words, if a certain value were measured repeatedly, a perfect sensor would produce the same result each time [120]. Real sensors, however, produce values that are dispersed in some way around the actual correct value. Consider, for instance, applying a constant pressure of 150 mm Hg to a sensor: the sensor's output readings will still vary [121, 122]. Several subtle difficulties with accuracy arise when the sensor's mean value is not close enough to the real value (for example, within the 1-σ range of the normal distribution curve) [123].

5.7 F1-score

A common metric for classification in Human Activity Recognition (HAR) is the F1 score [124]. It estimates the extent to which the model can distinguish between the various groups of human activity.

To compute the F1 score, you must first determine the precision and recall for each class. Precision is the proportion of instances predicted as a given class that actually belong to that class [124, 125]. Recall quantifies the proportion of instances of a specific class that were properly recognised out of all occurrences of that class in the dataset.

Once you have determined the precision and recall for each class, you can use the following formula to compute the F1 score for that class [126]:

$\text{F1 score} = \frac{2 \times \text{precision} \times \text{recall}}{\text{precision} + \text{recall}}$

By averaging the F1 scores across all classes, you can then get the overall F1 score for the HAR model [126, 127]. You will then be given one value that sums up how accurate the model is in detecting human activity.
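As a worked illustration, scikit-learn computes the per-class scores and their macro average in one call each; the label vectors below are placeholders:

```python
from sklearn.metrics import f1_score, precision_score

y_true = [0, 0, 1, 1, 2, 2, 2, 1]   # placeholder ground-truth activity labels
y_pred = [0, 1, 1, 1, 2, 2, 0, 1]   # placeholder model predictions

# "macro" averages the per-class scores, matching the description above
print(precision_score(y_true, y_pred, average="macro"))
print(f1_score(y_true, y_pred, average="macro"))
```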

It is crucial to remember that the F1 score is only one parameter that may be used to assess a HAR model's performance [127]. Accuracy, precision, recall, and the confusion matrix are further popular measures. It is therefore crucial to select the right metric(s) based on the application's particular needs and the features of the dataset being used [122, 124]. Table 4 lists various sensors and machine learning algorithms with their F1 score and precision ranges.

Table 4. Various sensors and their ML algorithms with F1 score and precision

Sensor Type | Machine Learning Algorithm | F1 Score | Precision
Accelerometer | SVM | 0.75-0.90 | 0.70-0.90
Accelerometer | RF | 0.70-0.85 | 0.65-0.85
Accelerometer | k-NN | 0.60-0.80 | 0.55-0.80
Accelerometer | ANN | 0.75-0.90 | 0.70-0.90
Accelerometer | HMM | 0.65-0.80 | 0.60-0.80
Gyroscope | SVM | 0.75-0.90 | 0.70-0.90
Gyroscope | RF | 0.70-0.85 | 0.65-0.85
Gyroscope | k-NN | 0.60-0.80 | 0.55-0.80
Gyroscope | ANN | 0.75-0.90 | 0.70-0.90
Gyroscope | HMM | 0.65-0.80 | 0.60-0.80
Magnetometer | SVM | 0.70-0.85 | 0.65-0.85
Magnetometer | RF | 0.65-0.80 | 0.60-0.80
Magnetometer | k-NN | 0.55-0.75 | 0.50-0.75
Magnetometer | ANN | 0.70-0.85 | 0.65-0.85
Magnetometer | HMM | 0.60-0.75 | 0.55-0.75
GPS | SVM | 0.80-0.95 | 0.75-0.95
GPS | RF | 0.75-0.90 | 0.70-0.90
GPS | k-NN | 0.65-0.85 | 0.60-0.85
GPS | ANN | 0.80-0.95 | 0.75-0.95
GPS | HMM | 0.70-0.85 | 0.65-0.85

6. Future Research Scope

Human Activity Recognition (HAR) is an interdisciplinary research topic spanning computer science, signal processing, machine learning, and sensor technology. The following are some of the research areas that HAR is actively investigating:

  • Sensor Selection and Placement: The choice and position of sensors is one of the crucial elements affecting HAR systems' performance. To increase the precision and endurance of HAR systems, researchers are investigating multiple ways to optimise sensor location and selection.
  • Extraction and Selection of Relevant characteristics: For accurate classification, HAR systems need to extract and choose the pertinent characteristics from the sensor input. To increase the reliability of HAR systems, researchers are investigating various extraction and selection of features techniques, incorporating deep learning-based strategies.
  • Classification Algorithms: Another important variable that has an enormous effect on HAR system performance is the choice of classification methods. To boost the accuracy and resilience of HAR systems, researchers are investigating a variety of categorization techniques, including conventional machine learning approaches, deep learning-based approaches, and ensemble methods.
  • Transfer Learning: To enhance the performance of HAR systems, transfer learning is a method that uses pre-trained models on associated tasks. To increase the accuracy and efficacy of HAR systems, researchers are investigating a variety of transfer learning approaches.
  • Data Augmentation: To enhance the effectiveness of HAR systems, data augmentation techniques provide new training data by altering current data. To expand the number and range of training datasets, researchers are investigating a variety of data augmentation strategies.
  • HAR in Real-World Environments: HAR systems have to operate properly in noisy, challenging real-world scenarios. To enhance the functionality of HAR systems in real-world situations, researchers are exploring a number of strategies, such as domain adaptation and robust feature extraction.

Sensor placement, feature extraction, classification methods, transfer learning, data augmentation, and HAR in real-life situations are just a few of the subfields that make up the broad study area of HAR. The accuracy, robustness, and efficiency of HAR systems must be improved via continual research in these fields.

7. Research Area Issues

The purpose of the study area known as "Human Activity Recognition" (HAR) is to detect, identify, and categorise human activities using sensor data and machine learning techniques. The following are some HAR research questions:

• Data collection: Collecting large amounts of labelled data is one of the greatest challenges in HAR. Building robust and precise models requires gathering data that is representative of many populations and environments.

Case Study: Wearable Sensors for Activity Monitoring

Bonomi et al. [82] used wearable sensors for Human Activity Recognition data collection, recording motion data during physical activities like walking, running, and cycling. Machine learning techniques were used to accurately classify different activities.

• Feature selection and extraction method: Identifying and extracting traits that are essential for activity recognition is an additional problem in HAR. To make the algorithms more effective and to make the data less dimensional, the choice of features is crucial.

Case study: Wavelet Transform for Feature Selection

Lara et al. [68] used Wavelet Transform for Human Activity Recognition, transforming accelerometer and gyroscope data to identify relevant features, achieving high accuracy in differentiating between activities.

• Algorithm decision-making and optimisation: For activity recognition, a variety of machine learning methods, including deep learning, support vector machines, and decision trees, can be utilised. For a task to be completed accurately, the right algorithm must be selected, and its parameters must be adjusted.

Case study: Random Forest for Algorithm Optimization.

Chen et al. used the Random Forest algorithm for Human Activity Recognition, comparing various machine learning algorithms and finding higher accuracy in recognizing activities like walking, jogging, and cycling.

• Transfer learning: Transfer learning is the method of using the knowledge acquired from one activity or dataset to do better on a different task or dataset. In situations when there is a shortage of labelled data, transfer learning can help activity identification models perform better.

Case study: Transfer Learning with Convolutional Neural Networks (CNNs)

In a case study by He et al. [84], transfer learning with Convolutional Neural Networks (CNNs) was used for Human Activity Recognition. The researchers employed a pre-trained CNN model, namely Inception-v3, which was originally trained on a large-scale image dataset. They fine-tuned the model using a smaller labelled dataset of sensor data collected from wearable devices. The transfer learning approach allowed the model to leverage the knowledge gained from the large image dataset to improve the accuracy of activity recognition from sensor data.
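The recipe can be sketched as follows. This is a minimal illustration rather than the authors' code: it assumes a ResNet-18 backbone from torchvision as a stand-in for Inception-v3, a hypothetical six-activity label set, and sensor windows already rendered as image-like tensors:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

# Backbone pre-trained on ImageNet (stand-in for the Inception-v3 of the case study)
model = resnet18(weights="IMAGENET1K_V1")
for p in model.parameters():
    p.requires_grad = False                       # freeze the transferred features

# Replace the head for a hypothetical 6-activity label set; only it is fine-tuned
model.fc = nn.Linear(model.fc.in_features, 6)
optimiser = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

# Training then proceeds as usual on the small labelled dataset, e.g. on
# spectrogram-like 3x224x224 tensors derived from inertial sensor windows.
```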

• Real-time activity identification: It is crucial for numerous applications, including tracking one's fitness and monitoring one's health. Real-time recognition of activities models are difficult to create because they need effective algorithms and low-latency processing.

Case study: Real-time Activity Identification Using Recurrent Neural Networks (RNNs).

Ordóñez and Roggen proposed a real-time activity identification method using Recurrent Neural Networks (RNNs) for Human Activity Recognition. Using sensor data from smartphones and smartwatches, they achieved high accuracy and performance suitable for real-time identification applications.

• Confidentiality and security: HAR systems frequently capture personal data, such as information about a user's whereabouts and activities. To avoid illicit access and misuse, it is essential to ensure the confidentiality and safety of this data.

Case study: Privacy-Preserving Human Activity Recognition Using Homomorphic Encryption.

Bonomi et al. [82] proposed a privacy-preserving framework for Human Activity Recognition (HAR) using homomorphic encryption to address privacy concerns in the collection and analysis of sensitive user data. This method allows activity recognition models to be trained and utilized while maintaining individual data privacy.

• Generalisation and scalability: HAR models should be scalable to handle massive data sets with intricate models, as well as generalise to new contexts and populations. An important area for research is creating models that can manage massive volumes of data and are resilient to changes in the data distribution.

Case study: Scalable and Generalizable Human Activity Recognition Using Deep Learning.

Hammerla et al. [93] presented a scalable and generalizable method for Human Activity Recognition (HAR) using deep learning algorithms. They proposed a deep learning architecture using convolutional and recurrent neural networks to learn hierarchical and temporal features from raw sensor data, enabling generalization to unseen data.

Relationship between research issues: The research questions in Human Activity Recognition are interconnected, and solving one of them may affect the others. For instance, better feature selection and extraction techniques yield lower-dimensional data and more effective algorithms, which in turn help to create more efficient real-time activity identification models. Data collection is crucial because a variety of labelled data improves the resilience and accuracy of the models. Algorithm decision-making and optimisation influence the selection of suitable algorithms for precise activity recognition. Transfer learning addresses the shortage of labelled data, while privacy and security measures protect sensitive data. Finally, generalisation and scalability enable models to manage enormous datasets and adapt to different situations and populations.

Human Activity Recognition (HAR) is a state-of-the-art approach to understanding human behaviours from sensor data. It primarily focuses on the encounter, activity, and interaction categories of behaviour. Deep neural networks and machine learning techniques are utilised to improve accuracy and efficiency when RF-based technologies are used to record and analyse human activity. HAR applications require real-time performance, and devices with limited resources require efficient algorithms and designs. Long-term activity monitoring, context-aware HAR, transfer learning, edge computing, and improved sensor integration are some future study areas. However, issues such as data variability, differences between individuals, privacy concerns, and the interpretability of deep learning models call for multidisciplinary cooperation and creative solutions. Healthcare, intelligent environments, and human-computer interaction are just a few of the areas where HAR shows potential.

Potential solutions and approaches: The paper offers a thorough evaluation of several Human Activity Recognition (HAR) approaches that have been documented in the literature. It classifies the reviewed content according to the numerous HAR processes, such as data collection, pre-processing, segmentation, feature selection, and classification. The study gives insights on the many methods and tactics used in the field of HAR by analysing and categorising these strategies.

8. Ethical Considerations

1. Confidentiality and Data Security: Personal activity data confidentiality is crucial for privacy protection. HAR systems should implement encryption, access controls, and user authentication to prevent unauthorized access, breaches, and misuse. Regular security audits and updates are essential.

2. Ethical Challenges: Ethical challenges in HAR involve responsible handling of personal activity data, addressing risks of re-identification, secondary use without consent, and biases. Prioritizing transparency, accountability, and fairness is crucial.

3. User Consent: In HAR, informed user consent is crucial for clear understanding of data collection, usage, and sharing. Transparent consent processes provide information, users can withdraw consent, and accessible mechanisms enable informed decision-making.

4. Potential Solutions: Establish robust privacy policies, consent frameworks, and privacy-by-design principles in HAR systems to address ethical concerns, mitigate risks, and utilize anonymization techniques for data de-identification.

9. Conclusions

In this research, we have explored Human Activity Recognition (HAR), a state-of-the-art approach that can be used to comprehend and categorise human activities based on sensor data. HAR may be broken down into three categories: encounter, activity, and interaction. The multiple Radio Frequency-based technologies are also addressed in this paper. The later sections of the study covered various deep learning and machine learning methods for activity recognition; the most recent deep neural network and machine learning approaches were presented, which improve HAR's overall capability. The importance of performance parameters for real-time applications has also been discussed. To encourage more research in this crucial and advantageous area, we additionally outlined the future research scope and the challenges that exist in this research domain.

References

[1] Pirsiavash, H., Ramanan, D. (2014). Parsing videos of actions with segmental grammars. In 2014 IEEE Conference on Computer Vision and Pattern Recognition, pp. 612-619. https://doi.org/10.1109/CVPR.2014.85

[2] Patwari, N., Wilson, J. (2010). RF sensor networks for device-free localization: Measurements, models, and algorithms. In Proceedings of the IEEE, 98(11): 1961-1973. https://doi.org/10.1109/JPROC.2010.2052010

[3] Wilson, J., Patwari, N. (2010). See-through walls: Motion tracking using variance-based radio tomography networks. In IEEE Transactions on Mobile Computing, 10(5): 612-621. https://doi.org/10.1109/TMC.2010.175

[4] Kanso, M.A., Rabbat, M.G. (2009). Compressed RF tomography for wireless sensor networks: Centralized and decentralized approaches. In Distributed Computing in Sensor Systems: 5th IEEE International Conference, Springer Berlin Heidelberg, pp. 173-186. https://doi.org/10.1007/978-3-642-02085-8_13

[5] Martin, R.K., Anderson, C., Thomas, R.W., King, A.S. (2011). Modelling and analysis of radio tomography. In 2011 4th IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), IEEE, pp. 377-380. https://doi.org/10.1109/CAMSAP.2011.6136030

[6] Kaltiokallio, O., Bocca, M. (2011). Real-time intrusion detection and tracking in indoor environment through distributed RSSI processing. In 2011 IEEE 17th International Conference on Embedded and Real-Time Computing Systems and Applications, IEEE, 1: 61-70. https://doi.org/10.1109/RTCSA.2011.38

[7] Chen, X., Edelstein, A., Li, Y.P., Coates, M., Rabbat, M., Men, A. (2011). Sequential Monte Carlo for simultaneous passive device-free tracking and sensor localization using received signal strength measurements. In Proceedings of the 10th ACM/IEEE International Conference on Information Processing in Sensor Networks, IEEE, pp. 342-353.

[8] World Health Organization. (2018). Fact sheet: Disability and health.

[9] Farber, N., Shinkle, D., Lynott, J., Fox-Grage, W., Harrell, R. (2011). Aging in place: A state survey of livability policies and practices. National Conference of State Legislatures, 1-84.

[10] Pham, C., Nguyen-Thai, S., Tran-Quang, H., Tran, S., Vu, H., Tran, T.H., Le, T.L. (2020). SensCapsNet: Deep neural network for non-obtrusive sensing based human activity recognition. IEEE Access, 8: 86934-86946. https://doi.org/10.1109/ACCESS.2020.2991731

[11] Du, Y., Lim, Y., Tan, Y. (2019). A novel human activity recognition and prediction in smart home based on interaction. Sensors, 19(20): 4474. https://doi.org/10.3390/s19204474

[12] Ding, H., Shangguan, L.F., Yang, Z., Han, J.S., Zhou, Z.M., Yang, P.L., Xi, W., Zhao, J.Z. (2015). Femo: A platform for free-weight exercise monitoring with RFIDs. In Proceedings of the 13th ACM conference on embedded networked sensor systems, pp. 141-154. https://doi.org/10.1145/2809695.2809708

[13] Li, X.Y., Zhang, Y.Y., Marsic, I., Sarcevic, A., Burd, R.S. (2016). Deep learning for RFID-based activity recognition. In Proceedings of the 14th ACM Conference on Embedded Network Sensor Systems CD-ROM, pp. 164-175. https://doi.org/10.1145/2994551.2994569

[14] Suthar, B., Gadhia, B. (2021). Human activity recognition using deep learning: A survey. In Data Science and Intelligent Applications: Proceedings of ICDSIA, Springer Singapore, 2020: 217-223. https://doi.org/10.1007/978-981-15-4474-3_25

[15] Skandha, S.S., Gupta, S.K., Saba, L., Koppula, V.K., Johri, A.M., Khanna, N.N., Mavrogeni, S., Laird, J.R., Pareek, G., Miner, M., Sfikakis, P.P., Protogerou, A., Misra, D.P., Agarwal, V., Sharma, A.M., Viswanathan, V., Rathore, V.S., Turk, M., Kolluri, R., Viskovic, K., Cuadrado-Godia, E., Kitas, G.D., Nicolaides, A., Suri, J.S. (2020). 3-D optimized classification and characterization artificial intelligence paradigm for cardiovascular/stroke risk stratification using carotid ultrasound-based delineated plaque: Atheromatic™ 2.0. Computers in Biology and Medicine, 125: 103958. https://doi.org/10.1016/j.compbiomed.2020.103958

[16] Rausch, K., Scalia, G.M., Sato, K., Edwards, N., Lam, A.K.Y., Platts, D.G., Chan, J. (2021). Left atrial strain imaging differentiates cardiac amyloidosis and hypertensive heart disease. The International Journal of Cardiovascular Imaging, 37: 81-90. https://doi.org/10.1007/s10554-020-01948-9

[17] Su, H.N., Jung, C. (2018). Perceptual enhancement of low light images based on two-step noise suppression. IEEE Access, 6: 7005-7018. https://doi.org/10.1109/ACCESS.2018.2790433

[18] Jamthikar, A.D., Gupta, D., Mantella, L.E., Saba, L., Laird, J.R., Johri, A.M., Suri, J.S. (2021). Multiclass machine learning vs. conventional calculators for stroke/CVD risk assessment using carotid plaque predictors with coronary angiography scores as gold standard: A 500 participants study. The International Journal of Cardiovascular Imaging, 37: 1171-1187. https://doi.org/10.1007/s10554-020-02099-7

[19] Vishwakarma, D.K., Singh, K. (2016). Human activity recognition based on spatial distribution of gradients at sublevels of average energy silhouette images. IEEE Transactions on Cognitive and Developmental Systems, 9(4): 316-327. https://doi.org/10.1109/TCDS.2016.2577044

[20] Bhattacharya, D., Sharma, D., Kim, W., Ijaz, M.F., Singh, P.K. (2022). Ensem-HAR: An ensemble deep learning model for smartphone sensor-based human activity recognition for measurement of elderly health monitoring. Biosensors, 12(6): 393. https://doi.org/10.3390/bios12060393

[21] Kaltiokallio, O., Bocca, M., Patwari, N. (2012). Follow @grandma: Long-term device-free localization for residential monitoring. In 37th Annual IEEE Conference on Local Computer Networks-Workshops, IEEE, pp. 991-998. https://doi.org/10.1109/LCNW.2012.6424092

[22] Antar, A.D., Ahmed, M., Ahad, M.A.R. (2019). Challenges in sensor-based human activity recognition and a comparative analysis of benchmark datasets: A review. In 2019 Joint 8th International Conference on Informatics, Electronics & Vision (ICIEV) and 2019 3rd International Conference on Imaging, Vision & Pattern Recognition (icIVPR), IEEE, pp. 134-139. https://doi.org/10.1109/ICIEV.2019.8858508

[23] He, Z.Y., Jin, L.W. (2008). Activity recognition from acceleration data using AR model representation and SVM. In 2008 International Conference on Machine Learning and Cybernetics, IEEE, 4: 2245-2250. https://doi.org/10.1109/ICMLC.2008.4620779

[24] Anguita, D., Ghio, A., Oneto, L., Parra, X., Reyes-Ortiz, J.L. (2013). A public domain dataset for human activity recognition using smartphones. In ESANN, 3: 3.

[25] Ugulino, W., Cardador, D., Vega, K., Velloso, E., Milidiú, R., Fuks, H. (2012). Wearable computing: Accelerometers’ data classification of body postures and movements. In Advances in Artificial Intelligence-SBIA 2012: 21st Brazilian Symposium on Artificial Intelligence, Proceedings, Springer Berlin Heidelberg, pp. 52-61. https://doi.org/10.1007/978-3-642-34459-6_6

[26] Janidarmian, M., Roshan Fekr, A., Radecka, K., Zilic, Z. (2017). A comprehensive analysis on wearable acceleration sensors in human activity recognition. Sensors, 17(3): 529. https://doi.org/10.3390/s17030529

[27] Wang, J.D., Chen, Y.Q., Hao, S.J., Peng, X.H., Hu, L.S. (2019). Deep learning for sensor-based activity recognition: A survey. Pattern Recognition Letters, 119: 3-11. https://doi.org/10.1016/j.patrec.2018.02.010

[28] Ahad, M.A.R., Antar, A.D., Ahmed, M. (2021). Deep learning for sensor-based activity recognition: Recent trends. IoT Sensor-Based Activity Recognition: Human Activity Recognition, 149-173. https://doi.org/10.1007/978-3-030-51379-5_9

[29] Cheok, M.J., Omar, Z., Jaward, M.H. (2019). A review of hand gesture and sign language recognition techniques. International Journal of Machine Learning and Cybernetics, 10: 131-153. https://doi.org/10.1007/s13042-017-0705-5

[30] Shoaib, M., Bosch, S., Incel, O.D., Scholten, H., Havinga, P.J. (2015). A survey of online activity recognition using mobile phones. Sensors, 15(1): 2059-2085. https://doi.org/10.3390/s150102059

[31] Amendola, S., Lodato, R., Manzari, S., Occhiuzzi, C., Marrocco, G. (2014). RFID technology for IoT-based personal healthcare in smart spaces. IEEE Internet of Things Journal, 1(2): 144-152. https://doi.org/10.1109/JIOT.2014.2313981

[32] Wang, S.Q., Zhou, G. (2015). A review on radio based activity recognition. Digital Communications and Networks, 1(1): 20-29. https://doi.org/10.1016/j.dcan.2015.02.006

[33] Ma, J.Y., Wang, H., Zhang, D.Q., Wang, Y.S., Wang, Y.X. (2016). A survey on wi-fi based contactless activity recognition. In 2016 Intl IEEE Conferences on Ubiquitous Intelligence & Computing, Advanced and Trusted Computing, Scalable Computing and Communications, Cloud and Big Data Computing, Internet of People, and Smart World Congress (UIC/ATC/ScalCom/CBDCom/IoP/SmartWorld), IEEE, pp. 1086-1091. https://doi.org/10.1109/UIC-ATC-ScalCom-CBDCom-IoP-SmartWorld.2016.0170

[34] Ouerghi, S., Ragot, N., Boutteau, R., Savatier, X. (2020). Comparative study of a commercial tracking camera and ORB-SLAM2 for person localization. In 15th International Conference on Computer Vision Theory and Applications, pp. 357-364. https://dx.doi.org/10.5220/0008980703570364

[35] Halmetschlager-Funek, G., Suchi, M., Kampel, M., Vincze, M. (2018). An empirical evaluation of ten depth cameras: Bias, precision, lateral noise, different lighting conditions and materials, and multiple sensor setups in indoor environments. IEEE Robotics & Automation Magazine, 26(1): 67-77. https://doi.org/10.1109/MRA.2018.2852795

[36] Mallick, T., Das, P.P., Majumdar, A.K. (2014). Characterizations of noise in kinect depth images: A review. IEEE Sensors Journal, 14(6): 1731-1740. https://doi.org/10.1109/JSEN.2014.2309987

[37] Khoshelham, K., Elberink, S.O. (2012). Accuracy and resolution of kinect depth data for indoor mapping applications. Sensors, 12(2): 1437-1454. https://doi.org/10.3390/s120201437

[38] Choo, B., Landau, M., DeVore, M., Beling, P.A. (2014). Statistical analysis-based error models for the microsoft kinect™ depth sensor. Sensors, 14(9): 17430-17450. https://doi.org/10.3390/s140917430

[39] Kazmi, W., Foix, S., Alenyà, G., Andersen, H.J. (2014). Indoor and outdoor depth imaging of leaves with time-of-flight and stereo vision sensors: Analysis and comparison. ISPRS Journal of Photogrammetry and Remote Sensing, 88: 128-146. https://doi.org/10.1016/j.isprsjprs.2013.11.012

[40] Haider, A., Hel-Or, H. (2022). What can we learn from depth camera sensor noise? Sensors, 22(14): 5448. https://doi.org/10.3390/s22145448

[41] Geng, J. (2011). Structured-light 3D surface imaging: A tutorial. Advances in Optics and Photonics, 3(2): 128-160. https://doi.org/10.1364/AOP.3.000128

[42] Sá, A.M., de Medeiros Filho, E.S., Carvalho, P.C., Velho, L. (2002). Coded structured light for 3D-photography: An overview. RITA, 4(2): 109-124.

[43] Freedman, B., Shpunt, A., Machline, M., Arieli, Y. (2013). Depth mapping using projected patterns. https://www.researchgate.net/publication/215458988_Depth_Mapping_Using_Projected_Patterns.

[44] Shu, C.F., Hampapur, A., Lu, M., Brown, L., Connell, J., Senior, A., Tian, Y. (2005). IBM smart surveillance system (S3): An open and extensible framework for event based surveillance. In IEEE Conference on Advanced Video and Signal Based Surveillance, 2005: 318-323. https://doi.org/10.1109/AVSS.2005.1577288

[45] Herath, S., Harandi, M., Porikli, F. (2017). Going deeper into action recognition: A survey. Image and Vision Computing, 60: 4-21. https://doi.org/10.1016/j.imavis.2017.01.010

[46] Gupta, D., de Albuquerque, V.H.C. (2020). Special issue on “intelligent biomedical data analysis and processing”. Neural Computing and Applications, 32: 603-605. https://doi.org/10.1007/s00521-019-04513-1 

[47] Karimi-Ghartemani, M., Ziarani, A.K. (2004). A nonlinear time-frequency analysis method. IEEE Transactions on Signal Processing, 52(6): 1585-1595. https://doi.org/10.1109/TSP.2004.827155

[48] Hansen, P.R., Large, J., Lunde, A. (2008). Moving average-based estimators of integrated variance. Econometric Reviews, 27(1-3): 79-111. https://doi.org/10.1080/07474930701853640

[49] Zhou, W., Lu, B., Habetler, T.G., Harley, R.G. (2009). Incipient bearing fault detection via motor stator current noise cancellation using wiener filter. IEEE Transactions on Industry Applications, 45(4): 1309-1317. https://doi.org/10.1109/TIA.2009.2023566

[50] Flandrin, P., Rilling, G., Goncalves, P. (2004). Empirical mode decomposition as a filter bank. IEEE Signal Processing Letters, 11(2): 112-114. https://doi.org/10.1109/LSP.2003.821662

[51] Mellone, S., Palmerini, L., Cappello, A., Chiari, L. (2011). Hilbert-huang-based tremor removal to assess postural properties from accelerometers. IEEE Transactions on Biomedical Engineering, 58(6): 1752-1761. https://doi.org/10.1109/TBME.2011.2116017

[52] Abbasion, S., Rafsanjani, A., Farshidianfar, A., Irani, N. (2007). Rolling element bearings multi-fault classification based on the wavelet denoising and support vector machine. Mechanical Systems and Signal Processing, 21(7): 2933-2945. https://doi.org/10.1016/j.ymssp.2007.02.003

[53] Martis, R.J., Acharya, U.R., Prasad, H., Chua, C.K., Lim, C.M., Suri, J.S. (2013). Application of higher order statistics for atrial arrhythmia classification. Biomedical Signal Processing and Control, 8(6): 888-900. https://doi.org/10.1016/j.bspc.2013.08.008

[54] Li, G.Q., Niu, P.F., Zhang, W.P., Zhang, Y. (2012). Control of discrete chaotic systems based on echo state network modeling with an adaptive noise canceler. Knowledge-Based Systems, 35: 35-40. https://doi.org/10.1016/j.knosys.2012.04.019

[55] Liao, L.D., He, Q.H., Hu, Z.L., Zhang, D.Q. (2012). Independent component analysis of excavator noise in strong interference surrounding. Journal of Central South University (Science and Technology), 43(9): 3426-3430.

[56] Moret, F., Poloschek, C.M., Lagreze, W.A., Bach, M. (2011). Visualization of fundus vessel pulsation using principal component analysis. Investigative Ophthalmology & Visual Science, 52(8): 5457-5464. https://doi.org/10.1167/iovs.10-6806

[57] Fan, L.W., Zhang, F., Fan, H., Zhang, C.M. (2019). Brief review of image denoising techniques. Visual Computing for Industry, Biomedicine, and Art, 2(1): 1-12. https://doi.org/10.1186/s42492-019-0016-7

[58] Demirli, R., Saniie, J. (2012). Model-based estimation pursuit for sparse decomposition of ultrasonic echoes. IET Signal Processing, 6(4): 313-325. https://doi.org/10.1049/iet-spr.2011.0093

[59] Eweda, E., Bershad, N.J. (2012). Stochastic analysis of a stable normalized least mean fourth algorithm for adaptive noise canceling with a white Gaussian reference. IEEE Transactions on Signal Processing, 60(12): 6235-6244. https://doi.org/10.1109/TSP.2012.2215607

[60] Wang, G.L., Li, T.L., Zhang, G.Q., Gui, X.G., Xu, D.G. (2013). Position estimation error reduction using recursive-least-square adaptive filter for model-based sensorless interior permanent-magnet synchronous motor drives. IEEE Transactions on Industrial Electronics, 61(9): 5115-5125. https://doi.org/10.1109/TIE.2013.2264791

[61] Olguín, D.O., Pentland, A.S. (2006). Human activity recognition: Accuracy across common locations for wearable sensors. In Proceedings of 2006 10th IEEE International Symposium on Wearable Computers, Montreux, Switzerland, Citeseer, pp. 11-14.

[62] Bao, L., Intille, S.S. (2004). Activity recognition from user-annotated acceleration data. In Pervasive Computing: Second International Conference, Proceedings, Springer Berlin Heidelberg, pp. 1-17. https://doi.org/10.1007/978-3-540-24646-6_1

[63] Wang, Z.L., Wu, D.H., Gravina, R., Fortino, G., Jiang, Y.M., Tang, K. (2017). Kernel fusion based extreme learning machine for cross-location activity recognition. Information Fusion, 37: 1-9. https://doi.org/10.1016/j.inffus.2017.01.004

[64] Liu, W.Y., Yang, J., Wang, L., Wu, C.S., Zhang, R.J. (2015). Movement behavior recognition based on statistical mobility sensing. Adhoc & Sensor Wireless Networks, 25(3): 325-342.

[65] Chavarriaga, R., Sagha, H., Calatroni, A., Digumarti, S.T., Tröster, G., Millán, J.D.R., Roggen, D. (2013). The opportunity challenge: A benchmark database for on-body sensor-based activity recognition. Pattern Recognition Letters, 34(15): 2033-2042. https://doi.org/10.1016/j.patrec.2012.12.014

[66] Ronao, C.A., Cho, S.B. (2016). Human activity recognition with smartphone sensors using deep learning neural networks. Expert Systems with Applications, 59: 235-244. https://doi.org/10.1016/j.eswa.2016.04.032

[67] Zeng, M., Nguyen, L.T., Yu, B., Mengshoel, O.J., Zhu, J., Wu, P., Zhang, J. (2014). Convolutional neural networks for human activity recognition using mobile sensors. In 6th International Conference on Mobile Computing, Applications and Services, IEEE, pp. 197-205. https://doi.org/10.4108/icst.mobicase.2014.257786

[68] Lara, O.D., Labrador, M.A. (2012). A survey on human activity recognition using wearable sensors. IEEE Communications Surveys & Tutorials, 15(3): 1192-1209. https://doi.org/10.1109/SURV.2012.110112.00192

[69] Chen, Z.H., Zhang, L., Cao, Z.G., Guo, J. (2018). Distilling the knowledge from handcrafted features for human activity recognition. IEEE Transactions on Industrial Informatics, 14(10): 4334-4342. https://doi.org/10.1109/TII.2018.2789925

[70] Salivahanan, S. (2011). Digital signal processing. Tata McGraw-Hill Education.

[71] Bashir, A.K., Lim, S.J., Hussain, C.S., Park, M.S. (2011). Energy efficient in-network RFID data filtering scheme in wireless sensor networks. Sensors, 11(7): 7004-7021. https://doi.org/10.3390/s110707004

[72] Karki, J. (2000). Active low-pass filter design. Texas Instruments Application Report.

[73] Christiano, L.J., Fitzgerald, T.J. (2003). The band-pass filter. International Economic Review, 44(2): 435-465. https://doi.org/10.1111/1468-2354.t01-1-00076

[74] Oppenheim, A.V. (1978). Applications of digital signal processing. Prentice-Hall, Englewood Cliffs.

[75] Goodfellow, I., Bengio, Y., Courville, A. (2016). Deep Learning. MIT Press.

[76] Xu, C., He, J., Zhang, X.T., Yao, C., Tseng, P.H. (2018). Geometrical kinematic modeling on human motion using method of multi-sensor fusion. Information Fusion, 41: 243-254. https://doi.org/10.1016/j.inffus.2017.09.014

[77] Avci, A., Bosch, S., Marin-Perianu, M., Marin-Perianu, R., Havinga, P. (2010). Activity recognition using inertial sensing for healthcare, wellbeing and sports applications: A survey. In 23th International Conference on Architecture of Computing Systems, VDE, 2010: 1-10.

[78] Lee, H., Kwon, H. (2017). Going deeper with contextual CNN for hyperspectral image classification. IEEE Transactions on Image Processing, 26(10): 4843-4855. https://doi.org/10.1109/TIP.2017.2725580

[79] Fortino, G., Giannantonio, R., Gravina, R., Kuryloski, P., Jafari, R. (2012). Enabling effective programming and flexible management of efficient body sensor network applications. IEEE Transactions on Human-Machine Systems, 43(1): 115-133. https://doi.org/10.1109/TSMCC.2012.2215852

[80] Xu, C., Chai, D., He, J., Zhang, X.T., Duan, S.H. (2019). InnoHAR: A deep neural network for complex human activity recognition. IEEE Access, 7: 9893-9902. https://doi.org/10.1109/ACCESS.2018.2890675

[81] Elbasiony, R.M., Gomaa, W. (2020). A survey on human activity recognition based on temporal signals of portable inertial sensors. The International Conference on Advanced Machine Learning Technologies and Applications (AMLTA2019), p. 4. https://doi.org/10.1007/978-3-030-14118-9_72 

[82] Bonomi, A.G., Margarito, J., Helaoui, R., Bianchi, A.M., Sartor, F. (2015). User-independent recognition of sports activities from a single wrist-worn accelerometer: A template-matching-based approach. IEEE Transactions on Biomedical Engineering, 63(4): 788-796. https://doi.org/10.1109/TBME.2015.2471094

[83] Roy, P.C., Giroux, S., Bouchard, B., Bouzouane, A., Phua, C., Tolstikov, A., Biswas, J. (2011). A possibilistic approach for activity recognition in smart homes for cognitive assistance to Alzheimer’s patients. Activity Recognition in Pervasive Intelligent Environments, 4: 33-58. https://doi.org/10.2991/978-94-91216-05-3_2

[84] He, K., Zhang, X.Y., Ren, S.Q., Sun, J. (2016). Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, 770-778. https://doi.org/10.1109/CVPR.2016.90

[85] Figo, D., Diniz, P.C., Ferreira, D.R., Cardoso, J.M. (2010). Preprocessing techniques for context recognition from accelerometer data. Personal and Ubiquitous Computing, 14: 645-662. https://doi.org/10.1007/s00779-010-0293-9

[86] Tang, Y., Zhang, L., Wu, H., He, J., Song, A. (2022). Dual-branch interactive networks on multichannel time series for human activity recognition. IEEE Journal of Biomedical and Health Informatics, 26(10): 5223-5234. https://doi.org/10.1109/JBHI.2022.3193148

[87] Ronao, C.A., Cho, S.B. (2015). Deep convolutional neural networks for human activity recognition with smartphone sensors. In Neural Information Processing: 22nd International Conference, ICONIP 2015, pp. 46-53. 

[88] Yang, J.B., Nguyen, M.N., San, P.P., Li, X.L., Krishnaswamy, S. (2015). Deep convolutional neural networks on multichannel time series for human activity recognition. In Proceedings of the 24th International Conference on Artificial Intelligence (IJCAI), 15: 3995-4001. https://doi.org/10.5555/2832747.2832806

[89] Rad, N.M., Bizzego, A., Kia, S.M., Jurman, G., Venuti, P., Furlanello, C. (2015). Convolutional neural network for stereotypical motor movement detection in autism. arXiv Preprint arXiv: 1511.01865. https://doi.org/10.48550/arXiv.1511.01865

[90] Bulling, A., Blanke, U., Schiele, B. (2014). A tutorial on human activity recognition using body-worn inertial sensors. ACM Computing Surveys (CSUR), 46(3): 1-33. https://doi.org/10.1145/2499621

[91] Hochreiter, S., Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9(8): 1735-1780. https://doi.org/10.1162/neco.1997.9.8.1735

[92] Ordóñez, F.J., Roggen, D. (2016). Deep convolutional and lstm recurrent neural networks for multimodal wearable activity recognition. Sensors, 16(1): 115. https://doi.org/10.3390/s16010115

[93] Hammerla, N., Fisher, J., Andras, P., Rochester, L., Walker, R., Plötz, T. (2015). PD disease state assessment in naturalistic environments using deep learning. In Proceedings of the AAAI Conference on Artificial Intelligence, 29(1). https://doi.org/10.1609/aaai.v29i1.9484

[94] Wang, H., Lei, Z., Zhang, X., Zhou, B., Peng, J. (2016). Machine learning basics. Deep Learning, 98-164.

[95] Ramasamy Ramamurthy, S., Roy, N. (2018). Recent trends in machine learning for human activity recognition-a survey. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 8(4): e1254. https://doi.org/10.1002/widm.1254

[96] Ashraf, H., Brüls, O., Schwartz, C., Boutaayamou, M. (2023). Comparison of machine learning algorithms for human activity recognition. In Proceedings of the 16th International Joint Conference on Biomedical Engineering Systems and Technologies, SciTePress, Lisbon, Portugal. https://doi.org/10.5220/0011631500003414

[97] Li, F., Shirahama, K., Nisar, M.A., Köping, L., Grzegorzek, M. (2018). Comparison of feature learning methods for human activity recognition using wearable sensors. Sensors, 18(2): 679. https://doi.org/10.3390/s18020679

[98] Ahmad, R., Wazirali, R., Abu-Ain, T. (2022). Machine learning for wireless sensor networks security: An overview of challenges and issues. Sensors, 22(13): 4730. https://doi.org/10.3390/s22134730

[99] Go, T., Moe, T., Hirotsugu, G., Yuuichi, N. (2016). Machine learning applied to sensor data analysis. Yokogawa Technical Report English Edition, 59(1): 27-30.

[100] Alsheikh, M.A., Lin, S., Niyato, D., Tan, H.P. (2014). Machine learning in wireless sensor networks: Algorithms, strategies, and applications. IEEE Communications Surveys & Tutorials, 16(4): 1996-2018. https://doi.org/10.1109/COMST.2014.2320099

[101] Tong, L., Lin, Q., Qin, C., Peng, L. (2021). A comparison of wearable sensor configuration methods for human activity recognition using CNN. In 2021 IEEE International Conference on Progress in Informatics and Computing (PIC), pp. 288-292. IEEE. https://doi.org/10.1109/PIC53636.2021.9687056

[102] Athavale, V.A., Gupta, S.C., Kumar, D., Savita, S. (2021). Human action recognition using CNN-SVM model. Advances in Science and Technology, 105: 282-290. https://doi.org/10.4028/www.scientific.net/AST.105.282

[103] Shuvo, M.M.H., Ahmed, N., Nouduri, K., Palaniappan, K. (2020). A hybrid approach for human activity recognition with support vector machine and 1D convolutional neural network. In 2020 IEEE Applied Imagery Pattern Recognition Workshop (AIPR), IEEE, pp. 1-5. https://doi.org/10.1109/AIPR50011.2020.9425332

[104] Menhour, I., Abidine, M.B., Fergani, B. (2018). A new framework using PCA, LDA and KNN-SVM to activity recognition based SmartPhone’s sensors. In 2018 6th International Conference on Multimedia Computing and Systems (ICMCS), IEEE, pp. 1-5. https://doi.org/10.1109/ICMCS.2018.8525987

[105] Farquhar, J.D.R., Hardoon, D.R., Meng, H.Y., Shawe-Taylor, J., Szedmak, S.R. (2005). Two view learning: SVM-2K, theory and practice. Advances in Neural Information Processing Systems, 18.

[106] Song, M.J., Civco, D. (2004). Road extraction using SVM and image segmentation. Photogrammetric Engineering & Remote Sensing, 70(12): 1365-1371. https://doi.org/10.14358/PERS.70.12.1365

[107] Keerthi, S.S., Shevade, S.K., Bhattacharyya, C., Murthy, K.R.K. (2001). Improvements to Platt's SMO algorithm for SVM classifier design. Neural Computation, 13(3): 637-649. https://doi.org/10.1162/089976601300014493

[108] Keerthi, S.S., Shevade, S.K., Bhattacharyya, C., Murthy, K.R.K. (2000). A fast iterative nearest point algorithm for support vector machine classifier design. IEEE Transactions on Neural Networks, 11(1): 124-136. https://doi.org/10.1109/72.822516

[109] Har-Peled, S., Mazumdar, S. (2004). On coresets for k-means and k-median clustering. In Proceedings of the Thirty-sixth Annual ACM Symposium on Theory of Computing, 291-300. https://doi.org/10.1145/1007352.1007400

[110] Elshourbagy, M., Hemayed, E., Fayek, M. (2016). Enhanced bag of words using multilevel k-means for human activity recognition. Egyptian Informatics Journal, 17(2): 227-237. https://doi.org/10.1016/j.eij.2015.11.002

[111] Chetty, G., White, M., Akther, F. (2015). Smart phone based data mining for human activity recognition. Procedia Computer Science, 46: 1181-1187. https://doi.org/10.1016/j.procs.2015.01.031

[112] Siddiqui, N., Pryor, L., Dave, R. (2021). User authentication schemes using machine learning methods-A review. In Proceedings of International Conference on Communication and Computational Technologies (ICCCT), Springer Singapore, 2021: 703-723. https://doi.org/10.1007/978-981-16-3246-4_54

[113] Prabowo, O.M., Mutijarsa, K., Supangkat, S.H. (2016). Missing data handling using machine learning for human activity recognition on mobile device. In 2016 International Conference on ICT for Smart Society (ICISS), IEEE, pp. 59-62. https://doi.org/10.1109/ICTSS.2016.7792849

[114] Zebin, T., Scully, P.J., Ozanyan, K.B. (2016). Human activity recognition with inertial sensors using a deep learning approach. In 2016 IEEE Sensors, IEEE, pp. 1-3. https://doi.org/10.1109/ICSENS.2016.7808590

[115] Ketu, S., Mishra, P.K. (2020). Performance analysis of machine learning algorithms for IoT-based human activity recognition. In Advances in Electrical and Computer Technologies: Select Proceedings of ICAECT, Springer Singapore, 2019: 579-591. https://doi.org/10.1007/978-981-15-5558-9

[116] Ding, J.Y., Wang, Y., Si, H.Y., Gao, S., Xing, J.W. (2022). Multimodal fusion-AdaBoost based activity recognition for smart home on WiFi platform. IEEE Sensors Journal, 22(5): 4661-4674. https://doi.org/10.1109/JSEN.2022.3146137

[117] Zerrouki, N., Harrou, F., Sun, Y., Houacine, A. (2017). Adaboost-based algorithm for human action recognition. In 2017 IEEE 15th International Conference on Industrial Informatics (INDIN), IEEE, pp. 189-193. https://doi.org/10.1109/INDIN.2017.8104769

[118] Zubair, M., Song, K., Yoon, C. (2016). Human activity recognition using wearable accelerometer sensors. In 2016 IEEE International Conference on Consumer Electronics-Asia (ICCE-Asia), IEEE, pp. 1-5. https://doi.org/10.1109/ICCE-Asia.2016.7804737

[119] Mekruksavanich, S., Jitpattanakul, A. (2021). Recognition of real-life activities with smartphone sensors using deep learning approaches. In 2021 IEEE 12th International Conference on Software Engineering and Service Science (ICSESS), IEEE, pp. 243-246. https://doi.org/10.1109/ICSESS52187.2021.9522231

[120] Ambati, L.S., El-Gayar, O. (2021). Human activity recognition: A comparison of machine learning approaches. Journal of the Midwest Association for Information Systems (JMWAIS), 2021(1): 4. https://doi.org/10.17705/3jmwa.000065

[121] Tufek, N., Yalcin, M., Altintas, M., Kalaoglu, F., Li, Y., Bahadir, S.K. (2019). Human action recognition using deep learning methods on limited sensory data. IEEE Sensors Journal, 20(6): 3101-3112. https://doi.org/10.1109/JSEN.2019.2956901

[122] Bock, M., Hölzemann, A., Moeller, M., Van Laerhoven, K. (2021). Improving deep learning for HAR with shallow LSTMs. In Proceedings of the 2021 ACM International Symposium on Wearable Computers, pp. 7-12. https://doi.org/10.1145/3460421.3480419

[123] Shakya, S.R., Zhang, C.Y., Zhou, Z.X. (2018). Comparative study of machine learning and deep learning architecture for human activity recognition using accelerometer data. International Journal of Machine Learning and Computing, 8(6): 577-582. https://doi.org/10.18178/ijmlc.2018.8.6.748

[124] Rashid, N., Demirel, B.U., Al Faruque, M.A. (2022). AHAR: Adaptive CNN for energy-efficient human activity recognition in low-power edge devices. IEEE Internet of Things Journal, 9(15): 13041-13051. https://doi.org/10.1109/JIOT.2022.3140465

[125] Wan, S.H., Qi, L.Y., Xu, X.L., Tong, C., Gu, Z.H. (2020). Deep learning models for real-time human activity recognition with smartphones. Mobile Networks and Applications, 25: 743-755. https://doi.org/10.1007/s11036-019-01445-x

[126] O'Halloran, J., Curry, E. (2019). A comparison of deep learning models in human activity recognition and behavioural prediction on the mhealth dataset. In Irish Conference on Artificial Intelligence and Cognitive Science, pp. 212-223. https://ceur-ws.org/Vol-2563/aics_21.pdf.

[127] Mekruksavanich, S., Jitpattanakul, A. (2022). CNN-based deep learning network for human activity recognition during physical exercise from accelerometer and photoplethysmographic sensors. In Computer Networks, Big Data and IoT., Lecture Notes on Data Engineering and Communications Technologies (LNDECT), 117: 531-542. https://doi.org/10.1007/978-981-19-0898-9_42