

      Internet of Things-driven Human Activity Recognition of Elderly and Disabled People Using Arithmetic Optimization Algorithm with LSTM Autoencoder

      Published
      research-article

            Abstract

In recent times, mobile communications and Internet of Things (IoT) techniques have advanced to the point of gathering environmental and human data for many applications and intelligent services. Remotely monitoring disabled and older people living in smart homes, however, remains very difficult. Human activity recognition (HAR) is an active research area for classifying human movement, with applications in many regions like rehabilitation, healthcare systems, medical diagnosis, surveillance in smart homes, and elderly care. HAR data are gathered from wearable devices that contain many kinds of sensors or with the aid of mobile sensors. Lately, deep learning (DL) algorithms have shown remarkable performance in classifying human activity on HAR data. This paper presents a new Arithmetic Optimization Algorithm with LSTM Autoencoder (AOA-LSTMAE) technique for HAR in the IoT environment. The major intention of the presented AOA-LSTMAE technique is to recognize several types of human activities in the IoT environment. To accomplish this, the AOA-LSTMAE technique derives the P-ResNet model for feature extraction. In addition, it utilizes the LSTMAE classification model for the recognition of different activities. To improve the recognition efficacy of the LSTMAE model, AOA is used as a hyperparameter optimization system. The AOA-LSTMAE technique is validated on benchmark activity recognition data. The simulation results of the AOA-LSTMAE technique and compared methods show the improvement of the proposed model over other recent algorithms, with an accuracy of 99.12%.

            Main article text

            INTRODUCTION

One of the most popular study fields is Human Activity Recognition (HAR) ( Hussain et al., 2022). Owing to the availability of low-cost, low-power accelerometers and sensors, and to developments in computer vision, artificial intelligence, and Internet of Things (IoT) applications, human-centered models have been built to categorize, recognize, and detect human behavior. Scholars have proposed several approaches to this topic ( Yadav et al., 2022). HAR has become a crucial tool for monitoring a person's activity, and it is accomplished by utilizing ML techniques ( Brishtel et al., 2023). HAR automatically analyzes and detects human activities from data gathered by different wearable devices and smartphone sensors, such as accelerometers, gyroscopes, location, time, and various environmental sensors ( Thapa et al., 2023). Combined with other technologies like the IoT, it is utilized in diverse application areas like industry, healthcare, and sports ( Park et al., 2019).

The detection of human activity is applied in different fields like elderly care, health care, and preventive medicine ( Mazzia et al., 2022). Furthermore, with the dramatic increase in devices with built-in sensors like smartphones, the cost of sensing gadgets has decreased significantly; consequently, studies on mobile activity detection have been conducted actively ( Shen et al., 2023). In conventional activity detection methods, authors have often applied ML approaches like naive Bayes, support vector machines, decision trees, and random forests to feature vectors derived from signals in a time window using Fourier transformation or statistical values ( Moutinho et al., 2023). Recurrent neural networks (RNNs) have a directed closed cycle and are appropriate for managing time-series data, like video and audio signals and natural language. Currently, hierarchical multi-layered convolutional neural networks (CNNs) have reached visible outcomes in fields like image processing, drawing attention to the technique named deep learning (DL). In this direction, as the RNN has deep layers along the temporal direction, it can also be regarded as a DL approach. Compared with conventional activity identification approaches, whose inputs are feature vectors, in DL the original data are input directly ( Islam et al., 2022). This allows the computation of feature vectors to be skipped during recognition and training, so a speedup can be anticipated, particularly in detection. Meanwhile, the recognition outcome can be expected to be accurate thanks to DL ( Khodabandelou et al., 2023).

This paper presents a new Arithmetic Optimization Algorithm with LSTM Autoencoder (AOA-LSTMAE) technique for HAR in the IoT environment. The major intention of the presented AOA-LSTMAE technique is to recognize several types of human activities in the IoT environment. To accomplish this, the AOA-LSTMAE technique derives the P-ResNet model for feature extraction. In addition, it utilizes the LSTMAE classification model for the recognition of different activities. To improve the recognition efficacy of the LSTMAE model, AOA is used as a hyperparameter optimization system. The AOA-LSTMAE algorithm is validated on benchmark activity recognition data.

            LITERATURE REVIEW

Zhang et al. (2023) introduced a new architecture containing three parts: feature selection based on an oppositional and chaos PSO method, deep decision fusion based on D-S evidence theory, and an entropy multi-input 1D-CNN leveraging frequency-domain and time-domain signals. The presented structure was assessed on the WISDM and UCI HAR data. Slim et al. (2021) presented a new approach for enhancing the DL structure through a GA and adding novel statistical features. To acquire the optimum values of the DL variables, the GA was leveraged as an enhancement approach. Also, novel statistical attributes were added to the attributes automatically extracted by the CNN method. The authors in Khan et al. (2022) established a hybrid method merging LSTM and CNN for activity detection, where the CNN was utilized for extracting spatial features and the LSTM network was leveraged for learning temporal data. A wide ablation study was executed over various conventional DL and ML methods to attain an optimum solution for HAR. In addition, a novel challenging dataset was generated utilizing the Kinect V2 sensor.

Pesenti et al. (2023) modeled a DL-related method using inertial sensors to offer industrial exoskeletons with adaptive payload compensation and HAR. Inertial measurement units can be embedded in, or easily worn with, any industrial exoskeleton. The authors used LSTM networks to perform HAR and categorize the weight of lifted objects. The method was trained and tested on 12 young, healthy volunteers. Basak et al. (2022) devised DSwarm-Net, a structure that makes use of DL and a swarm-intelligence-based meta-heuristic and utilizes 3D skeleton data for action classification in HAR. Malik et al. (2023) developed a potential multi-view interaction-level action detection mechanism utilizing 2D skeleton data with higher precision, while minimizing the computational complexity, based on a DL structure. Utilizing the OpenPose approach, the presented system extracted 2D skeleton data from the dataset; the extracted 2D skeleton features were then fed as input to a CNN-LSTM structure for detecting actions. In Nafea et al. (2021), the authors introduced a novel approach utilizing a CNN with varying kernel dimensions and a Bi-LSTM to capture attributes at various resolutions. The effective extraction of temporal and spatial features in sensor data and the selection of an optimum representation utilizing the BiLSTM and CNN are the novelty of this study. Muaaz et al. (2022) presented Wi-Sense, a HAR system that utilizes a CNN to detect human actions from the fingerprint derived from Wi-Fi channel state information.

            THE PROPOSED MODEL

            In this paper, we have presented a novel AOA-LSTMAE model for automated recognition and classification of human activities to aid elderly and disabled people. The AOA-LSTMAE technique’s major intention is to recognize several types of human activities in the IoT environment. To accomplish this, the AOA-LSTMAE technique comprises a P-ResNet feature extractor, LSTMAE classification, and AOA-based hyperparameter tuning. Figure 1 exemplifies the overall procedure of the AOA-LSTMAE algorithm.

            Figure 1:

            Overall procedure of the AOA-LSTMAE algorithm. Abbreviation: AOA-LSTMAE, Arithmetic Optimization Algorithm with LSTM Autoencoder.

            Feature extraction

To produce feature vectors, the P-ResNet model is used. P-ResNet is based on a development of ResNet and offers a technique for data classification ( Xu et al., 2022). The network structure of P-ResNet includes six parts: five convolutional stages and a final FC layer. ReLU is exploited as the activation function, together with the convolution function and BN, to complete the output of the convolutional layers. Moreover, to prevent overfitting and decrease the computation and number of parameters in the network, a combination of average pooling and max pooling is applied. The input image is resized to 224×224×3, and the first convolutional stage of the P-ResNet network uses a 7×7 convolutional layer, whose receptive field is big enough for extracting image features in these databases. In the original design ( Xu et al., 2022), more subtle features had to be extracted to accurately categorize maize seeds, and the network depth had to be developed while minimizing the model size. Thus, convolutional stages 2-5 were enhanced to better suit the classification. The design utilizes 24 3×3 convolutional layers for learning, with additional nonlinear activation functions to make the decision functions more accurate while efficiently reducing the number of parameters. Additionally, because the main area occupies a smaller region of the images during online inspection in the seed processing industry, and the proportion of useful data attained is weak, a pooling layer was added to incorporate spatial data before the convolutional kernels of the remaining model downsample, avoiding useless and redundant data.
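The residual ("skip") connection that ResNet-style networks such as P-ResNet build on can be illustrated with a toy block in plain Python. This is element-wise only: the `transform` argument stands in for the block's convolution + BN layers, which are not reproduced here.

```python
def relu(v):
    # ReLU activation applied element-wise to a list of floats
    return [max(0.0, a) for a in v]

def residual_block(x, transform):
    """y = ReLU(F(x) + x): the block learns a residual F(x) on top of an
    identity shortcut, which is what lets deep ResNet-style stacks train."""
    fx = transform(x)                          # stand-in for conv + BN layers
    return relu([f + xi for f, xi in zip(fx, x)])

out = residual_block([1.0, -2.0, 3.0], lambda v: [0.5 * a for a in v])
# out == [1.5, 0.0, 4.5]: the negative sum is clipped by ReLU
```

Because the shortcut passes `x` through unchanged, gradients can flow around the transform, which is the property that allows such stacks to go deep without degrading.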

            Activity recognition using the LSTMAE model

For the identification of several kinds of human activities, the LSTMAE model is utilized. The LSTM network is a revised version of the RNN, which remembers long-term dependencies in an effective manner ( Faraz et al., 2020). The RNN encounters gradient vanishing problems, while these problems are solved in the LSTM network. The keystone of the LSTM is a cell (or memory unit). A cell incorporates one tanh and three sigmoid layers that produce the three gates controlling the data flowing into and out of the cell. The output and input gates control the output and input data of the cell, correspondingly. The forget gate, which has a sigmoid function, resets the memory unit. Given the input x_z, the data flow in an LSTM cell is expressed using the following equations:

(1) f_z = σ(W_f · [h_{z−1}, x_z] + b_f)

(2) i_z = σ(W_i · [h_{z−1}, x_z] + b_i)

(3) C̃_z = tanh(W_C · [h_{z−1}, x_z] + b_C)

(4) C_z = f_z * C_{z−1} + i_z * C̃_z

(5) o_z = σ(W_o · [h_{z−1}, x_z] + b_o)

(6) h_z = o_z * tanh(C_z),

where σ denotes the sigmoid function, tanh the hyperbolic tangent function, and * a point-wise multiplication operator. C̃_z signifies the new candidate cell state. o_z, f_z, and i_z indicate the output, forget, and input gates at time z, correspondingly. C_z denotes the cell state vector, and h_z characterizes the hidden state at time z.
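A single cell step following Eqs (1)-(6) can be sketched in plain Python. For brevity every quantity is a scalar (real layers apply weight matrices to the concatenated [h_{z−1}, x_z]), and the weight values below are arbitrary illustrative choices:

```python
import math

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM cell step, Eqs (1)-(6), with one scalar weight per gate."""
    f = sigmoid(W["f"] * (h_prev + x) + b["f"])          # Eq. (1): forget gate
    i = sigmoid(W["i"] * (h_prev + x) + b["i"])          # Eq. (2): input gate
    c_tilde = math.tanh(W["c"] * (h_prev + x) + b["c"])  # Eq. (3): candidate state
    c = f * c_prev + i * c_tilde                         # Eq. (4): new cell state
    o = sigmoid(W["o"] * (h_prev + x) + b["o"])          # Eq. (5): output gate
    h = o * math.tanh(c)                                 # Eq. (6): hidden state
    return h, c

W = {g: 0.5 for g in "fico"}   # illustrative weights for gates f, i, c, o
b = {g: 0.0 for g in "fico"}   # zero biases
h, c = lstm_step(x=1.0, h_prev=0.0, c_prev=0.0, W=W, b=b)
```

Because the forget gate multiplies the previous cell state `c_prev` in Eq. (4), the cell can carry information across many steps, which is what mitigates the vanishing-gradient problem mentioned above.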

An AE is an ANN that consists of two parts: an encoder h = f(x) and a decoder that generates a reconstruction x̂ = g(h). The model can be constrained to capture the significant, applicable properties of the input; specifically, an AE learns suitable aspects of the data.

The AE is used to encode and compress the data. An AE is an unsupervised ANN that builds a reduced encoded representation of the data and then learns to reconstruct the data back from it. Our AE encompasses LSTM layers in the encoder and decoder units, and we use dropout after each LSTM layer as a regularization technique to avoid overfitting. First, the AE is trained. Then, the encoder part is exploited as the feature generator. Lastly, the LSTM-based predictor is trained. Figure 2 represents the infrastructure of the LSTMAE model.

            Figure 2:

            Framework of LSTMAE. Abbreviation: LSTMAE, LSTM Autoencoder.

Let w define a timestep window in the time-series data; we apply x_z, x_{z+1}, …, x_{z+w} to forecast the value at the next timestep. X denotes the AE-LSTM network input and can be written as follows:

(7) X = {x_z, x_{z+1}, …, x_{z+w}}

x_{z+w+1} is exploited as the target during the training stage.
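A minimal sketch of this windowing (here each window holds w consecutive steps and the immediately following value is the target; the exact off-by-one convention is an implementation choice):

```python
def make_windows(series, w):
    """Split a time series into (window, target) pairs: each window holds
    w consecutive values and the target is the value that follows it."""
    X, y = [], []
    for z in range(len(series) - w):
        X.append(series[z:z + w])   # x_z ... x_{z+w-1}
        y.append(series[z + w])     # the next value, used as the target
    return X, y

X, y = make_windows([1, 2, 3, 4, 5], w=3)
# X == [[1, 2, 3], [2, 3, 4]] and y == [4, 5]
```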

            AOA-based hyperparameter tuning

Finally, the AOA was used for the optimal hyperparameter tuning of the LSTMAE approach. As a population-oriented system, the improvement procedure starts by randomly creating a set of candidate solutions ( Deepa and Chokkalingam, 2022). The optimizer rule set incrementally enhances the created group of solutions, as measured by a chosen objective function, so the technique gains a probability of reaching the global optimum for the given issue. Population-based optimizer approaches contain two important phases: exploitation and exploration. The exploration step searches broadly, while the exploitation step refines the acquired result. The subsequent subsections describe the intensification (exploitation) and diversification (exploration) of the AOA. Multiplication, addition, division, and subtraction are the main arithmetic operators, as explained in Algorithm 1.

            Inspiration

Together with geometry, algebra, and analysis, arithmetic is one of the most important segments of modern mathematics and an essential element of number theory. The AOA is a mathematical optimizer that determines, from any group of candidate solutions, an optimum element subject to specific conditions.

            Initialized step

Equation (8) displays the group of candidate solutions (Y) in the AOA. During every iteration, the best solution acquired so far is regarded as the optimum candidate result.

            Algorithm 1

            AOA pseudo-code

Input: Initialize the AOA parameters and the maximal iteration count
Output: Acquire the optimum solution
While (C_Iteration < M_Iteration) do
    Estimate the fitness function (FF)
    Attain the optimum solution found so far
    Upgrade the value of MOA using Eq. (9)
    Upgrade the value of MOA using Eq. (11)
    For (j = 1 to solutions) do
        For (k = 1 to positions) do
            Create the random values R1, R2, and R3 between zero and one
            If R1 > MOA Then
                Diversification stage
                If R2 > 0.5 Then
                    The division math operator is executed
                    Upgrade the (j, k)th solution position
                Else
                    The multiplication math operator is executed
                    Upgrade the (j, k)th solution position
                End If
            Else
                Intensification stage
                If R3 > 0.5 Then
                    The subtraction math operator is executed
                    Upgrade the (j, k)th solution position
                Else
                    The addition math operator is executed
                    Upgrade the (j, k)th solution position
                End If
            End If
        End For
    End For
    C_Iteration = C_Iteration + 1
End While

(8) Y = [ y_{1,1}    ⋯  y_{1,i}    ⋯  y_{1,m−1}    y_{1,m}
          y_{2,1}    ⋯  y_{2,i}    ⋯  y_{2,m−1}    y_{2,m}
          ⋮               ⋮               ⋮            ⋮
          y_{M−1,1}  ⋯  y_{M−1,i}  ⋯  y_{M−1,m−1}  y_{M−1,m}
          y_{M,1}    ⋯  y_{M,i}    ⋯  y_{M,m−1}    y_{M,m} ]

The searching phase is chosen before the AOA starts working. Equation (9) computes the math optimizer accelerated (MOA) function.

(9) MOA(C_Iteration) = Minimum + C_Iteration × ((Maximum − Minimum) / M_Iteration)

MOA(C_Iteration) is the function value at the current iteration C_Iteration; Maximum and Minimum are the bounds of the accelerated function, and M_Iteration is the maximal iteration count.
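Eq. (9) is a straight line from Minimum to Maximum over the run; a sketch (the default bounds of 0.2 and 1.0 are values commonly used in AOA implementations, not stated in this paper):

```python
def moa(c_iteration, m_iteration, minimum=0.2, maximum=1.0):
    """Math Optimizer Accelerated value, Eq. (9): rises linearly with the
    iteration count, shifting the search from exploration to exploitation."""
    return minimum + c_iteration * ((maximum - minimum) / m_iteration)

start, end = moa(0, 100), moa(100, 100)   # 0.2 at the start, 1.0 at the end
```

Early iterations thus mostly fail the R1 > MOA test's complement and explore; late iterations mostly exploit.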

            Diversification stage

This is established as the diversification, or exploration, behavior of the AOA. A highly distributed value is attained utilizing the multiplication/division arithmetic operators. Through multiplication and division, the AOA exploration operators search widely for an optimum solution. The operator behavior follows the simplest rule. Equation (10) demonstrates the position update of the exploration step.

(10) y_{j,k}(C_Iteration + 1) = { B(y_j) ÷ (MOA + δ) × ((U_j − L_j) × α + L_j),   R_2 < 0.5
                                  B(y_j) × MOA × ((U_j − L_j) × α + L_j),         otherwise

y_{j,k}(C_Iteration + 1) denotes the kth position of the jth solution at the next iteration, and B(y_j) is the best solution obtained so far. δ is a small integer, and α is a control parameter. U_j and L_j are the upper and lower bounds of the jth position.

(11) MOA(C_Iteration) = 1 − (C_Iteration^{1/β} / M_Iteration^{1/β})

The function value at iteration C_Iteration is again written MOA(C_Iteration); additionally, β is a sensitivity parameter.

            Intensification stage

Denser outcomes can be attained utilizing the addition/subtraction arithmetic operators. Owing to their lower dispersion, subtraction and addition move toward the target directly. After a few iterations, the search therefore settles near the optimum solution identified during the exploration phase. Equation (12) describes the exploitation step.

(12) y_{j,k}(C_Iteration + 1) = { B(y_j) − MOA × ((U_j − L_j) × α + L_j),   R_3 < 0.5
                                  B(y_j) + MOA × ((U_j − L_j) × α + L_j),   otherwise.

The exploitation search operators help avoid becoming trapped in a local search region. The random values R1, R2, and R3 are created in the interval between zero and one. The optimum solution can be attained by supporting the exploitation search step.
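The two update rules can be sketched for a single dimension as follows; `moa` gates the phase choice per Eq. (9), `coef` is the decreasing coefficient of Eq. (11) (called MOP in the wider AOA literature), and the α = 0.5 default is an illustrative assumption:

```python
import random

def aoa_position_update(best, low, up, moa, coef, alpha=0.5, delta=1e-6):
    """One AOA update of a single position, Eqs (10) and (12).
    `best` is the best solution found so far; `low`/`up` are the bounds."""
    r1, r2, r3 = random.random(), random.random(), random.random()
    span = (up - low) * alpha + low          # the ((U - L) * alpha + L) term
    if r1 > moa:                             # diversification (exploration)
        new = best / (coef + delta) * span if r2 > 0.5 else best * coef * span
    else:                                    # intensification (exploitation)
        new = best - coef * span if r3 > 0.5 else best + coef * span
    return min(max(new, low), up)            # clip back into the bounds

random.seed(1)
samples = [aoa_position_update(0.5, 0.0, 1.0, moa=0.5, coef=0.3) for _ in range(200)]
```

The division/multiplication branches scatter candidates far from `best` (exploration), while the subtraction/addition branches take small symmetric steps around it (exploitation), matching Algorithm 1.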

Fitness choice is a key aspect of the AOA system. An encoded result is employed to evaluate the goodness of candidate results. Here, a precision-based value is the condition applied to define the fitness function.

            (13) Fitness=max (P)

            (14) P=TPTP+FP,

            where TP signifies the true-positive value and FP refers to the false-positive value.
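Eqs (13) and (14) amount to maximizing precision. As a one-line sketch (the example counts are illustrative, not taken from the paper's results):

```python
def fitness(tp, fp):
    """Eq. (14): precision P = TP / (TP + FP). The AOA keeps the
    hyperparameter set whose trained model maximizes this value, Eq. (13)."""
    return tp / (tp + fp)

p = fitness(tp=73, fp=1)   # 73 correct positives, 1 false alarm
```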

            RESULTS AND DISCUSSION

            The proposed model is simulated using the Python tool. The experimental outcomes of the AOA-LSTMAE methodology are tested on the UR fall detection dataset. It comprises 314 instances with two classes, as depicted in Table 1. Figure 3 shows the sample images.

            Table 1:

            Details of database.

Class                   | No. of samples
Fall event              | 74
Nonfall event           | 240
Total number of samples | 314
            Figure 3:

            Sample images.

The suggested technique is simulated employing the Python 3.6.5 tool on a PC with an i5-8600K CPU, GeForce 1050 Ti 4GB GPU, 16GB RAM, 250GB SSD, and 1TB HDD. The parameter settings are as follows: learning rate 0.01, activation ReLU, epoch count 50, dropout 0.5, and batch size 5.

            In Figure 4, a brief activity recognition result of the AOA-LSTMAE technique is presented in the form of a confusion matrix. The results notified that the AOA-LSTMAE technique recognized the fall and nonfall events effectually.

            Figure 4:

            Confusion matrices of the AOA-LSTMAE system (a-f) Epochs 500-3000. Abbreviation: AOA-LSTMAE, Arithmetic Optimization Algorithm with LSTM Autoencoder.

In Table 2 and Figure 5, the overall activity detection outcome of the AOA-LSTMAE method is reported under distinct epochs. With 500 epochs, the AOA-LSTMAE technique attains an average accuracy of 99.12%, precision of 99.12%, recall of 99.12%, specificity of 99.12%, and F-score of 99.12%. Simultaneously, with 1500 epochs, the AOA-LSTMAE approach acquires an average accuracy of 98.91%, precision of 98.46%, recall of 98.91%, specificity of 98.91%, and F-score of 98.68%. Concurrently, with 2000 epochs, the AOA-LSTMAE method attains an average accuracy of 98.23%, precision of 98.23%, recall of 98.23%, specificity of 98.23%, and F-score of 98.23%. Finally, with 3000 epochs, the AOA-LSTMAE algorithm reaches an average accuracy of 97.35%, precision of 97.35%, recall of 97.35%, specificity of 97.35%, and F-score of 97.35%.

            Figure 5:

            Average outcome of the AOA-LSTMAE approach under distinct epochs. Abbreviation: AOA-LSTMAE, Arithmetic Optimization Algorithm with LSTM Autoencoder.

            Table 2:

            Activity detection outcome of the AOA-LSTMAE approach under distinct epochs.

Class          | Accuracy | Precision | Recall | Specificity | F-score
Epoch 500
 Fall event    | 98.65 | 98.65 | 98.65 | 99.58 | 98.65
 Nonfall event | 99.58 | 99.58 | 99.58 | 98.65 | 99.58
 Average       | 99.12 | 99.12 | 99.12 | 99.12 | 99.12
Epoch 1000
 Fall event    | 97.30 | 94.74 | 97.30 | 98.33 | 96.00
 Nonfall event | 98.33 | 99.16 | 98.33 | 97.30 | 98.74
 Average       | 97.82 | 96.95 | 97.82 | 97.82 | 97.37
Epoch 1500
 Fall event    | 98.65 | 97.33 | 98.65 | 99.17 | 97.99
 Nonfall event | 99.17 | 99.58 | 99.17 | 98.65 | 99.37
 Average       | 98.91 | 98.46 | 98.91 | 98.91 | 98.68
Epoch 2000
 Fall event    | 97.30 | 97.30 | 97.30 | 99.17 | 97.30
 Nonfall event | 99.17 | 99.17 | 99.17 | 97.30 | 99.17
 Average       | 98.23 | 98.23 | 98.23 | 98.23 | 98.23
Epoch 2500
 Fall event    | 97.30 | 98.63 | 97.30 | 99.58 | 97.96
 Nonfall event | 99.58 | 99.17 | 99.58 | 97.30 | 99.38
 Average       | 98.44 | 98.90 | 98.44 | 98.44 | 98.67
Epoch 3000
 Fall event    | 95.95 | 95.95 | 95.95 | 98.75 | 95.95
 Nonfall event | 98.75 | 98.75 | 98.75 | 95.95 | 98.75
 Average       | 97.35 | 97.35 | 97.35 | 97.35 | 97.35

            Abbreviation: AOA-LSTMAE, Arithmetic Optimization Algorithm with LSTM Autoencoder.
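The per-class values in Table 2 follow the standard confusion-matrix definitions, which can be computed directly from the four counts (the example counts below are illustrative, roughly matching the 74 fall / 240 nonfall split of Table 1, not the paper's exact matrices):

```python
def scores(tp, fp, tn, fn):
    """Accuracy, precision, recall (sensitivity), specificity, and F-score
    from the four confusion-matrix counts of a binary classifier."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    specificity = tn / (tn + fp)
    f_score = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, specificity, f_score

acc, prec, rec, spec, f1 = scores(tp=73, fp=1, tn=239, fn=1)
```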

Figure 6 portrays the accuracy of the AOA-LSTMAE method during training and validation at epoch 500. The result specifies that the AOA-LSTMAE algorithm gains higher accuracy values over successive epochs. Furthermore, the validation accuracy lying above the training accuracy depicts that the AOA-LSTMAE method learns productively at epoch 500.

            Figure 6:

            Accuracy curve of the AOA-LSTMAE approach on epoch 500. Abbreviation: AOA-LSTMAE, Arithmetic Optimization Algorithm with LSTM Autoencoder.

            The loss analysis of the AOA-LSTMAE method in training and validation is given on epoch 500 in Figure 7. The result highlighted that the AOA-LSTMAE method attained closer training and validation loss values. The AOA-LSTMAE algorithm learns productively on epoch 500.

            Figure 7:

            Loss curve of the AOA-LSTMAE approach on epoch 500. Abbreviation: AOA-LSTMAE, Arithmetic Optimization Algorithm with LSTM Autoencoder.

            The detailed precision-recall (PR) curve of the AOA-LSTMAE approach is given on epoch 500 in Figure 8. The figure specified that the AOA-LSTMAE approach leads to higher values of PR. Furthermore, the AOA-LSTMAE method can reach greater PR values on all classes.

            Figure 8:

            The PR curve of the AOA-LSTMAE approach on epoch 500. Abbreviations: AOA-LSTMAE, Arithmetic Optimization Algorithm with LSTM Autoencoder; PR, precision-recall.

            In Figure 9, a ROC study of the AOA-LSTMAE method is shown on epoch 500. The figure described that the AOA-LSTMAE method has improved ROC values. Also, the AOA-LSTMAE approach can extend enhanced ROC values in every class.

            Figure 9:

            The ROC curve of the AOA-LSTMAE approach on epoch 500. Abbreviation: AOA-LSTMAE, Arithmetic Optimization Algorithm with LSTM Autoencoder.

In Table 3 and Figure 10, a brief comparative accuracy result of the AOA-LSTMAE technique and compared methods is given ( Vaiyapuri et al., 2021). The results represent that the ResNet-50 and ResNet-101 approaches accomplish poor performance with accuracies of 95.40% and 96.20%, respectively. Then, the VGG-16, VGG-19, and IMEFD-ODCNN models result in closer accuracies of 97.60%, 98.00%, and 98.57%, respectively. However, the AOA-LSTMAE technique exhibits improved results with an accuracy of 99.12%.

            Figure 10:

Accuracy analysis of the AOA-LSTMAE system with existing algorithms. Abbreviation: AOA-LSTMAE, Arithmetic Optimization Algorithm with LSTM Autoencoder.

            Table 3:

Accuracy outcome of the AOA-LSTMAE algorithm with existing approaches.

Methods     | Accuracy (%)
VGG-16      | 97.60
VGG-19      | 98.00
ResNet-50   | 95.40
ResNet-101  | 96.20
IMEFD-ODCNN | 98.57
AOA-LSTMAE  | 99.12

            Abbreviation: AOA-LSTMAE, Arithmetic Optimization Algorithm with LSTM Autoencoder.

The computation time analysis of the AOA-LSTMAE algorithm against other existing methods was performed in terms of training time (TRT) and testing time (TST), as shown in Table 4 and Figure 11. The simulation values indicate that the AOA-LSTMAE approach reaches effectual outcomes in terms of TRT and TST. Based on TRT, the AOA-LSTMAE algorithm gains the least TRT of 9.01 s, whereas the existing models attain increased TRT values. Next, based on TST, the AOA-LSTMAE technique gains the least TST of 8.30 s, whereas the existing models attain increased TST values.

            Figure 11:

            TRT and TST analyses of the AOA-LSTMAE system with existing algorithms. Abbreviations: AOA-LSTMAE, Arithmetic Optimization Algorithm with LSTM Autoencoder; TRT, training time; TST, testing time.

            Table 4:

            TRT and TST outcomes of the AOA-LSTMAE system with other algorithms.

Methods     | Training time (s) | Testing time (s)
VGG-16      | 39.21 | 18.48
VGG-19      | 46.31 | 22.87
ResNet-50   | 23.68 | 14.65
ResNet-101  | 25.76 | 15.43
IMEFD-ODCNN | 16.90 | 11.29
AOA-LSTMAE  |  9.01 |  8.30

            Abbreviations: AOA-LSTMAE, Arithmetic Optimization Algorithm with LSTM Autoencoder; TRT, training time; TST, testing time.

            Therefore, the results assured that the AOA-LSTMAE technique improves recognition results over other models.

            CONCLUSION

In this paper, we have developed a novel AOA-LSTMAE model for the automated recognition and classification of human activities to aid elderly and disabled people. The major intention of the presented AOA-LSTMAE technique is to recognize several types of human activities in the IoT environment. To accomplish this, the AOA-LSTMAE technique comprises a P-ResNet feature extractor, LSTMAE classification, and AOA-based hyperparameter tuning. To improve the recognition efficacy of the LSTMAE model, the AOA is used as a hyperparameter optimization system. The AOA-LSTMAE system was validated on benchmark activity recognition data, and the simulation results showed the improvement of the proposed model over other recent algorithms. In the future, a ten-fold cross-validation approach can be applied to investigate the performance of the proposed model.

            CONFLICTS OF INTEREST

            The authors declare no conflicts of interest in association with the present study.

            DATA AVAILABILITY STATEMENT

            Data sharing does not apply to this article as no datasets were generated during the current study.

            REFERENCES

            1. Basak H, Kundu R, Singh PK, Ijaz MF, Woźniak M, Sarkar R. 2022. A union of deep learning and swarm-based optimization for 3D human action recognition. Sci. Rep. Vol. 12(1):5494

            2. Brishtel I, Krauss S, Chamseddine M, Rambach JR, Stricker D. 2023. Driving activity recognition using UWB radar and deep neural networks. Sensors. Vol. 23(2):818

            3. Deepa N, Chokkalingam SP. 2022. Optimization of VGG16 utilizing the arithmetic optimization algorithm for early detection of Alzheimer’s disease. Biomed. Signal Process. Control. Vol. 74:103455

            4. Faraz M, Khaloozadeh H, Abbasi M. 2020. Stock market prediction-by-prediction based on autoencoder long short-term memory networks2020 28th Iranian Conference on Electrical Engineering (ICEE); 26-28 May 2020; Iran. IEEE. p. 1–5

            5. Hussain A, Hussain T, Ullah W, Baik SW. 2022. Vision transformer and deep sequence learning for human activity recognition in surveillance videos. Comput. Intell. Neurosci. Vol. 2022:

            6. Islam MM, Nooruddin S, Karray F, Muhammad G. 2022. Human activity recognition using tools of convolutional neural networks: a state of the art review, data sets, challenges, and future prospects. Comput. Biol. Med. Vol. 149:106060

            7. Khan IU, Afzal S, Lee JW. 2022. Human activity recognition via hybrid deep learning based model. Sensors. Vol. 22(1):323

            8. Khodabandelou G, Moon H, Amirat Y, Mohammed S. 2023. A fuzzy convolutional attention-based GRU network for human activity recognition. Eng. Appl. Artif. Intell. Vol. 118:105702

            9. Malik N.U.R, Abu-Bakar S.A.R, Sheikh UU, Channa A, Popescu N. 2023. Cascading pose features with CNN-LSTM for multiview human action recognition. Signals. Vol. 4(1):40–55

            10. Mazzia V, Angarano S, Salvetti F, Angelini F, Chiaberge M. 2022. Action transformer: a self-attention model for short-time pose-based human action recognition. Pattern Recognit. Vol. 124:108487

            11. Moutinho D, Rocha LF, Costa CM, Teixeira LF, Veiga G. 2023. Deep learning-based human action recognition to leverage context awareness in collaborative assembly. Robot. Comput. Integr. Manuf. Vol. 80:102449

            12. Muaaz M, Chelli A, Gerdes MW, Pätzold M. 2022. Wi-Sense: a passive human activity recognition system using Wi-Fi and convolutional neural network and its integration in health information systems. Ann. Telecommun. Vol. 77(3-4):163–175

            13. Nafea O, Abdul W, Muhammad G, Alsulaiman M. 2021. Sensor-based human activity recognition with spatio-temporal deep learning. Sensors. Vol. 21(6):2141

            14. Park JH, Salim MM, Jo JH, Sicato J.C.S, Rathore S, Park JH. 2019. CIoT-Net: a scalable cognitive IoT based smart city network architecture. Hum.-centric Comput. Inf. Sci. Vol. 9(1):1–20

            15. Pesenti M, Invernizzi G, Mazzella J, Bocciolone M, Pedrocchi A, Gandolla M. 2023. IMU-based human activity recognition and payload classification for low-back exoskeletons. Sci. Rep. Vol. 13(1):1184

            16. Shen Q, Feng H, Song R, Song D, Xu H. 2023. Federated meta-learning with attention for diversity-aware human activity recognition. Sensors. Vol. 23(3):1083

            17. Slim SO, Elfattah MM, Atia A, Mostafa M.S.M. 2021. IoT system based on parameter optimization of deep learning using genetic algorithm. Int. J. Intell. Eng. Syst. Vol. 14(2):220–235

            18. Thapa K, Seo Y, Yang SH, Kim K. 2023. Semi-supervised adversarial auto-encoder to expedite human activity recognition. Sensors. Vol. 23(2):683

            19. UR Fall Detection Dataset. http://fenix.ur.edu.pl/~mkepski/ds/uf.html

            20. Vaiyapuri T, Lydia EL, Sikkandar MY, Díaz VG, Pustokhina IV, Pustokhin DA. 2021. Internet of things and deep learning enabled elderly fall detection model for smart homecare. IEEE Access. Vol. 9:113879–113888

            21. Xu P, Tan Q, Zhang Y, Zha X, Yang S, Yang R. 2022. Research on maize seed classification and recognition based on machine vision and deep learning. Agriculture. Vol. 12(2):232

            22. Yadav SK, Luthra A, Tiwari K, Pandey HM, Akbar SA. 2022. ARFDNet: an efficient activity recognition & fall detection system using latent feature pooling. Knowl. Based Syst. Vol. 239:107948

            23. Zhang Y, Yao X, Fei Q, Chen Z. 2023. Smartphone sensors-based human activity recognition using feature selection and deep decision fusion. IET Cyber-Phys. Syst.: Theory Appl. Vol. 8:76–90

            Author and article information

            Journal
            jdr
            Journal of Disability Research
            King Salman Centre for Disability Research (Riyadh, Saudi Arabia )
            27 October 2023
Volume: 2, Issue: 3, Pages: 136-146
            Affiliations
            [1 ] Department of Information Science, College of Arts, King Saud University, Riyadh, Saudi Arabia ( https://ror.org/02f81g417)
            [2 ] King Salman Center for Disability Research, Riyadh, Saudi Arabia;
            [3 ] King Salman Center for Disability Research, Riyadh, Saudi Arabia ( https://ror.org/01ht2b307)
            [4 ] Department of Information Technology, College of Computers and Information Technology, Taif University, Taif 21944, Saudi Arabia;
            [5 ] Department of Computer Science, College of Science and Arts at Mahayil, King Khalid University, Abha, Saudi Arabia ( https://ror.org/052kwzs30)
            [6 ] Department of Computer Science, College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Al-Kharj 16273, Saudi Arabia ( https://ror.org/04jt46d36)
            [7 ] Department of Computer and Self Development, Preparatory Year Deanship, Prince Sattam Bin Abdulaziz University, AlKharj, Saudi Arabia ( https://ror.org/04jt46d36)
            Author notes
            Author information
            https://orcid.org/0000-0003-3837-6313
            Article
            10.57197/JDR-2023-0038
            Copyright © 2023 The Authors.

            This is an open access article distributed under the terms of the Creative Commons Attribution License (CC BY) 4.0, which permits unrestricted use, distribution and reproduction in any medium, provided the original author and source are credited.

            History
            : 11 May 2023
            : 21 September 2023
            : 28 September 2023
            Page count
            Figures: 11, Tables: 4, References: 23, Pages: 11
            Funding
            Funded by: funder-id http://dx.doi.org/10.13039/501100019345, King Salman Center for Disability Research;
            Categories

            Computer science
human activity recognition, autoencoder, arithmetic optimization algorithm, elderly and disabled people, Internet of Things
