

      Automated Gesture-Recognition Solutions using Optimal Deep Belief Network for Visually Challenged People


            Abstract

Gestures are a vital part of our communication. They are a form of nonverbal communication that has stimulated great interest in human–computer interaction, since gestures permit users to express themselves intuitively and naturally in various contexts. Hand gestures in particular play a vital role in assistive technologies for visually impaired people (VIP), but an optimal user-interaction design is of great significance. Existing studies on assisting VIP mostly concentrate on solving a single task (such as reading text or identifying obstacles), forcing the user to switch applications to perform other actions. Therefore, this research presents an interactive gesture technique using sand piper optimization with the deep belief network (IGSPO-DBN) technique. The IGSPO-DBN technique enables people to control devices and exploit different assistance models through different gestures. The IGSPO-DBN technique detects gestures and classifies them into several kinds using the DBN model. To boost the overall gesture-recognition rate, the IGSPO-DBN technique exploits the SPO algorithm as a hyperparameter optimizer. The simulation outcome of the IGSPO-DBN approach was tested on a gesture-recognition dataset, and the outcomes showed the improvement of the IGSPO-DBN algorithm over other systems.


            INTRODUCTION

Today, the prominence of personalized and adaptive human–computer interfaces, in contrast to frameworks devised for an "average" user, is broadly observed in an enormous number of applications ( Alashhab et al., 2022). Machine learning (ML) methods for the automated analysis of body movements and facial expressions are currently used in numerous human–computer interaction (HCI) systems. People with disabilities face numerous problems in the community. Technology is evolving daily, but little progress has been made in improving the standard of living of blind people ( Muneeb et al., 2023). Many people all over the world are deaf or mute, and interaction between a visually challenged person and a deaf-mute person remains a difficult task ( Pandey, 2023). Sign language helps in interacting with blind and mute persons, and gesture detection is a broadly used technology to assist mute and, particularly, blind persons ( Ryumin et al., 2023). This study is relevant to two significant domains: ML and computer vision (CV). CV can be described as a domain that integrates techniques to acquire, process, and understand images ( Fronteddu et al., 2022). It is employed in different domains, namely image reconstruction, HCI, physics, healthcare, etc. ML is a subfield of computer science that evolved from the study of pattern recognition and computational learning in artificial intelligence (AI).

Hand gestures are a facet of body language conveyed through the center of the palm, the shape formed by the hand, and the position of the fingers ( Faria Oliveira et al., 2022). Hand gestures are of two types: dynamic and static. A dynamic gesture comprises a sequence of hand movements, such as waving, whereas a static gesture is an unchanging hand shape ( Gorobets et al., 2022). Hand movements in gestures differ; for example, a handshake differs from one individual to another and varies with place and time. The major difference between gesture and posture is that posture emphasizes hand shape, whereas gesture concentrates on hand movement ( de Oliveira et al., 2022). The key approaches to hand gesture study can be categorized into camera vision-based sensor methods and wearable glove-based sensor methods. Hand gestures present an inspiring domain of study, as they enable transmission and offer a natural means of communication that is utilized across various applications ( Moysiadis et al., 2022). Earlier, hand gesture detection was attained with wearable sensors attached directly to the hands with gloves. Such sensors detect physical responses according to hand movements or finger bending, and the gathered data are processed by computers connected to the glove with wires ( Parra-Dominguez et al., 2022). This scheme of glove-based sensors can be made portable by attaching the sensors to microcontrollers.

Mukhiddinov et al. (2023) developed a facial emotion detection technique for masked facial images using feature analysis of the upper facial features and low-light image enhancement with a convolutional neural network (CNN). First, the lower part of the input facial image is covered with a synthetic mask. Then, the authors implement a feature extraction method based on facial landmark recognition, through which the structures and the coordinates of the landmarks are recognized. Alashhab et al. (2022) devised a scheme for mobile devices managed by hand gestures to let the user control the device and leverage numerous assistance tools through simple dynamic and static hand gestures. The scheme depends on a multihead neural network that detects and classifies the gestures and, depending on the gesture identified, executes a secondary stage that performs the respective action. Zhou et al. (2023) applied an improved diffractive deep neural network (D2NN) design to this domain, in which a wavelet-like pattern reduces the number of network-layer parameters by modulating the phase of the incident light.

Abdulhussein and Raheem (2020) presented gesture detection of static American Sign Language (ASL) letters by means of deep learning (DL). The method has two parts: first, static ASL binary images are resized with bicubic interpolation; in addition, good detection outcomes are obtained in detecting the hand boundary by means of the Robert edge-detection scheme. Moysiadis et al. (2022) developed a twofold system to (i) enable a real-time human–robot interaction framework and test it in diverse situations and (ii) provide a real-time skeleton-based detection scheme for five hand gestures via ML and a depth camera. Six ML classifiers were tested, while ROS software was applied to "translate" the gestures into five commands that are performed by the robot.

Lu et al. (2023) proposed a gesture-language-recognition (GLR) feedback scheme combining ML technology and strain-sensor arrays. These strain-sensor arrays, joined with 3D-printed gloves, extract both temporal and spatial data regarding finger movements. Incorporating multidimensional manipulation, AI-based GLR, and visual feedback, the smart model can precisely identify complicated gestures and offer real-time feedback to users. Mujahid et al. (2021) devised a lightweight method based on DarkNet-53 and YOLOv3-CNN for gesture detection without additional enhancement preprocessing and image filtering. The presented technique was assessed on labeled hand-gesture data in YOLO and Pascal VOC formats.

This research presents an interactive gesture technique using sand piper optimization with the deep belief network (IGSPO-DBN) technique. The IGSPO-DBN technique enables people to control devices and exploit different assistance models through different gestures. The IGSPO-DBN technique detects gestures and classifies them into several kinds using the DBN model. To boost the overall gesture-recognition rate, the IGSPO-DBN technique exploits the SPO algorithm as a hyperparameter optimizer. The simulation outcome of the IGSPO-DBN approach was tested on a gesture-recognition dataset.

            THE PROPOSED MODEL

In this research work, we concentrate on the development of automated gesture recognition using the IGSPO-DBN technique. Figure 1 exemplifies the overall flow of the IGSPO-DBN algorithm. The IGSPO-DBN technique enables people to control devices and exploit different assistance models through different gestures. The IGSPO-DBN technique detects gestures and classifies them into several kinds in three phases: data preprocessing, DBN-based gesture recognition, and SPO-based hyperparameter optimization.

            Figure 1:

            Overall flow of the IGSPO-DBN approach. Abbreviation: IGSPO-DBN, interactive gesture technique using sand piper optimization with deep belief network.

            Data preprocessing

For preprocessing the input data, three stages are followed, as illustrated in the sketch after this list.

• Missing values in the sensor data are imputed using linear interpolation.

• Noise is removed with median filtering and a third-order low-pass Butterworth filter with a 20 Hz cutoff frequency.

• All sensor data are normalized using the channel mean and standard deviation (z-score normalization), so that the inputs for model training and feature extraction are cleaned and normalized.
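The following Python sketch combines these three stages in the order described. The 100 Hz sampling rate (the USC-HAD rate), the median-filter kernel size, and all function names are our assumptions rather than values stated in the paper.

```python
# Minimal sketch of the three preprocessing stages; parameter choices
# beyond the 3rd-order/20 Hz Butterworth filter are assumptions.
import numpy as np
import pandas as pd
from scipy.signal import butter, filtfilt, medfilt

def preprocess(raw: pd.DataFrame, fs: float = 100.0) -> np.ndarray:
    # 1) Impute missing sensor readings by linear interpolation.
    filled = raw.interpolate(method="linear", limit_direction="both")
    x = filled.to_numpy()

    # 2) Denoise: median filter per channel, then a third-order
    #    low-pass Butterworth filter with a 20 Hz cutoff.
    x = np.apply_along_axis(medfilt, 0, x, 3)   # kernel size 3 (assumed)
    b, a = butter(N=3, Wn=20.0, btype="low", fs=fs)
    x = filtfilt(b, a, x, axis=0)

    # 3) z-score normalization with per-channel mean and standard deviation.
    return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)
```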

            Gesture recognition using the DBN model

For effectual identification of gestures, the DBN model is utilized. The DBN is a probabilistic generative model built by stacking restricted Boltzmann machines (RBMs) ( Justin et al., 2023). The RBM is one of the effective methods for extracting and representing data in ML approaches. The RBM is a form of the classical Boltzmann machine in which all connections within the same layer are removed, while the connections between the visible and hidden layers are retained. The RBM is an energy-based model and is utilized as a generative model for different kinds of data, including speech, images, and text.

(1) $\mathrm{Energy}(v,h) = -\sum_{i=1}^{m}\sum_{j=1}^{n} W_{ij} v_i h_j - \sum_{i=1}^{m} b_i v_i - \sum_{j=1}^{n} c_j h_j,$

where W_ij signifies the element of W that relates the ith visible unit v_i to the jth hidden unit h_j, and b and c denote the visible and hidden bias parameters, respectively. The Boltzmann distribution is then given as follows:

(2) $P(v,h) = \frac{\exp(-\mathrm{Energy}(v,h))}{\sum_{v}\sum_{h}\exp(-\mathrm{Energy}(v,h))} = \frac{\prod_{i,j} e^{W_{ij} v_i h_j}\,\prod_{i} e^{b_i v_i}\,\prod_{j} e^{c_j h_j}}{\sum_{v}\sum_{h}\exp(-\mathrm{Energy}(v,h))}.$

Since only v is observed, the hidden variable h is marginalized out:

(3) $P(v) = \frac{\sum_{h} e^{-\mathrm{Energy}(v,h)}}{\sum_{v}\sum_{h}\exp(-\mathrm{Energy}(v,h))},$

where P(v) denotes the probability the model assigns to the visible vector v. Because there are no intra-layer connections between nodes, the conditional probabilities factorize as follows:

(4) $P(v \mid h) = \prod_{i} p(v_i \mid h), \quad P(h \mid v) = \prod_{j} p(h_j \mid v).$

For binary data, Eq. (4) specializes to:

(5) $P(v_i = 1 \mid h) = \sigma\left(\sum_{j} W_{ij} h_j + b_i\right),$

(6) $P(h_j = 1 \mid v) = \sigma\left(\sum_{i} W_{ij} v_i + c_j\right),$

where σ(·) denotes the logistic function, σ(x) = (1 + exp(−x))^{−1}. Stacking RBMs is effective at uncovering complex non-linearities layer by layer. A fast greedy learning procedure for the DBN yields the joint distribution over the observed vector x and the hidden states h^k as follows:

(7) $P(x, h^1, \ldots, h^l) = \left(\prod_{k=0}^{l-2} P(h^k \mid h^{k+1})\right) P(h^{l-1}, h^l),$

where x = h^0; P(h^k | h^{k+1}) represents the conditional distribution of the visible units given the hidden units of the RBM at level k of the DBN, and P(h^{l−1}, h^l) signifies the joint distribution of the top-level RBM. The representational efficiency of the energy-based formulation is improved by combining several layers into a DBN. In the proposed technique, two stacked RBMs are used to create the DBN without labeled data. Figure 2 demonstrates the architecture of the DBN; a minimal sketch of the model follows the figure.

            Figure 2:

            Architecture of DBN. Abbreviation: DBN, deep belief network.
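To make Eqs. (1)-(7) concrete, here is a minimal NumPy sketch of a binary RBM implementing the conditionals of Eqs. (5) and (6), trained with one-step contrastive divergence (a standard RBM trainer, not one the paper specifies), plus the greedy stacking of two RBMs into a DBN as described above. Layer sizes, learning rate, and epoch count are illustrative assumptions.

```python
# Hedged sketch: binary RBM (Eqs. 5-6) with CD-1 updates, stacked into a DBN.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))            # logistic function from the text

class RBM:
    def __init__(self, n_visible, n_hidden, lr=0.01):
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b = np.zeros(n_visible)            # visible biases (Eq. 5)
        self.c = np.zeros(n_hidden)             # hidden biases  (Eq. 6)
        self.lr = lr

    def hidden_probs(self, v):                  # P(h_j = 1 | v), Eq. (6)
        return sigmoid(v @ self.W + self.c)

    def visible_probs(self, h):                 # P(v_i = 1 | h), Eq. (5)
        return sigmoid(h @ self.W.T + self.b)

    def cd1_step(self, v0):
        # One Gibbs step v0 -> h0 -> v1 -> h1, then a gradient update.
        h0 = self.hidden_probs(v0)
        v1 = self.visible_probs((rng.random(h0.shape) < h0).astype(float))
        h1 = self.hidden_probs(v1)
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / len(v0)
        self.b += self.lr * (v0 - v1).mean(axis=0)
        self.c += self.lr * (h0 - h1).mean(axis=0)

# Greedy layer-wise pretraining with 2 stacked RBMs, as the paper states.
def pretrain_dbn(X, sizes=(256, 128), epochs=10):
    rbms, data = [], X
    for n_hidden in sizes:
        rbm = RBM(data.shape[1], n_hidden)
        for _ in range(epochs):
            rbm.cd1_step(data)
        rbms.append(rbm)
        data = rbm.hidden_probs(data)           # feed activations upward
    return rbms
```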

            SPO-based hyperparameter tuning

To boost the overall gesture-recognition rate, the IGSPO-DBN technique exploits the SPO algorithm as a hyperparameter optimizer. Sandpipers are seabirds that live in groups named colonies ( Sankar et al., 2023). They use their intelligence to locate and attack prey. The algorithm comprises two stages: the migration (exploration) phase and the attacking (exploitation) phase.

            Migration phase (exploration)

This is the seasonal movement of sandpipers from one place to another in search of food to gain energy.

• During the migration phase, the sandpipers travel in a group, each starting from a different location to prevent collisions.

• Within the group, every sandpiper moves toward the one with the best fitness value.

• Since this is a minimization problem, the fittest sandpiper is the one with the smallest fitness value.

• Each sandpiper updates its location based on the fittest sandpiper.

• During the migration phase, each sandpiper must satisfy three conditions, described next.

            Collision avoidance

The search agent (sandpiper) moves to a new collision-free position S_p, modeled mathematically as follows:

(8) $S_p = S_m \times S_{cp}(t),$

where S_cp specifies the current location of the sandpiper, t denotes the current iteration, and S_m symbolizes the movement of the sandpiper.

The sandpiper movement S_m is evaluated as shown below:

(9) $S_m = S_{cf} - \left(t \times \frac{S_{cf}}{\text{Maximum iterations}}\right),$

where S_cf specifies the sandpiper control frequency, which decreases linearly from 2 to 0, and t denotes the iteration, which runs from 0 to the maximum number of iterations.

Converging toward the best position of the sandpiper

The sandpiper moves from its current location S_cp toward the fittest sandpiper S_best in order to converge, computed as follows:

(10) $M_S = S_{BC} \times \left(S_{best}(t) - S_{cp}(t)\right),$

where S_BC is a random factor responsible for exploration, evaluated as follows:

(11) $S_{BC} = 0.5 \times \mathrm{rand},$

where rand denotes a random number within [0, 1].

            Updating the position to the best sandpiper

Lastly, the sandpiper updates its current position toward the fittest sandpiper, as shown below:

(12) $G_s = S_p + M_S,$

where G_s denotes the gap between the sandpiper's position and the fittest position. A sketch of one migration update follows.
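Below is a hedged sketch of one migration (exploration) update implementing Eqs. (8)-(12), vectorized over a population of candidate solutions; the population layout, per-dimension randomness in Eq. (11), and the random generator are our assumptions.

```python
# Hedged sketch of one SPO migration update, Eqs. (8)-(12).
# `pop` has shape (n_agents, n_dims); `best` is the fittest agent.
import numpy as np

rng = np.random.default_rng(1)

def migrate(pop, best, t, max_iter, s_cf=2.0):
    s_m = s_cf - t * (s_cf / max_iter)      # Eq. (9): decays from 2 to 0
    s_p = s_m * pop                         # Eq. (8): collision-free move
    s_bc = 0.5 * rng.random(pop.shape)      # Eq. (11): random factor
    m_s = s_bc * (best - pop)               # Eq. (10): pull toward the best
    return s_p + m_s                        # Eq. (12): gap position G_s
```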

Attacking phase (exploitation)

In the attacking stage, the sandpiper follows spiral behavior in the 3D plane, with spiral coordinates X = r × cos( j), Y = r × sin( j), and Z = r × j, where r = l × e^{jm} is the radius of the spiral, e is the base of the natural logarithm, j denotes a parameter within [0, 2π], and l and m refer to the constants of the spiral, both taken as 1.

The updated location of the sandpiper, S_p^new(t), is evaluated as follows:

(13) $S_p^{new}(t) = \left(G_s \times (X + Y + Z)\right) \times S_p(t).$

The SPO system employs a fitness function (FF) to obtain a better classifier solution; a positive value represents the quality of a candidate solution. In this case, the classifier error rate to be minimized is taken as the FF, as defined in Eq. (14); a sketch of the attack-phase update and this fitness function follows the equation. Table 1 gives the details of the dataset used.

(14) $\mathrm{fitness}(x_i) = \mathrm{ClassifierErrorRate}(x_i) = \frac{\text{number of misclassified samples}}{\text{total number of samples}} \times 100.$
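The following hedged sketch implements the attack-phase update of Eq. (13), using the spiral components defined above (with l = m = 1 as stated), together with the error-rate fitness of Eq. (14).

```python
# Hedged sketch of the SPO attack (exploitation) update, Eq. (13),
# and the error-rate fitness of Eq. (14). Drawing j per agent and
# dimension is our assumption.
import numpy as np

rng = np.random.default_rng(2)

def attack(g_s, pop, l=1.0, m=1.0):
    j = rng.uniform(0.0, 2.0 * np.pi, size=pop.shape)  # spiral parameter
    r = l * np.exp(j * m)                              # spiral radius
    x, y, z = r * np.cos(j), r * np.sin(j), r * j      # 3D spiral coordinates
    return (g_s * (x + y + z)) * pop                   # Eq. (13)

def fitness(y_true, y_pred):
    # Eq. (14): percentage of misclassified samples (to be minimized).
    return 100.0 * np.mean(np.asarray(y_true) != np.asarray(y_pred))
```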

            Table 1:

            Details of the database.

Class                 Label   No. of samples (USC HAD dataset)
Walking               C-1     8476
Walking upstairs      C-2     4709
Walking downstairs    C-3     4382
Sitting               C-4     5810
Standing              C-5     5240
Laying/sleeping       C-6     8331
Total                         36,948

            RESULTS AND DISCUSSION

The proposed model is simulated using Python 3.6.5 on a PC with an i5-8600K CPU, GeForce GTX 1050 Ti 4GB GPU, 16GB RAM, 250GB SSD, and 1TB HDD. The parameter settings are as follows: learning rate, 0.01; dropout, 0.5; batch size, 5; epoch count, 50; and activation, ReLU. In this section, the gesture-recognition outcome of the IGSPO-DBN approach is examined on the USC HAD dataset, which has 36,948 instances in six classes.
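For concreteness, the following hedged sketch shows how the migrate() and attack() updates from the earlier sketches could drive the SPO hyperparameter search; the tuned dimensions (learning rate and batch size), their bounds, the agent count, and the train_and_eval() placeholder are our assumptions, not details given in the paper.

```python
# Hedged end-to-end sketch of SPO-based hyperparameter tuning.
# train_and_eval(params) should train the DBN with the given params
# and return the Eq. (14) validation error rate (to be minimized).
import numpy as np

rng = np.random.default_rng(3)
bounds = np.array([[1e-4, 1e-1],    # learning rate (assumed range)
                   [4.0, 64.0]])    # batch size   (assumed range)

def spo_tune(train_and_eval, n_agents=10, max_iter=20):
    pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_agents, len(bounds)))
    scores = np.array([train_and_eval(p) for p in pop])   # Eq. (14) fitness
    best, best_score = pop[scores.argmin()].copy(), scores.min()
    for t in range(max_iter):
        g_s = migrate(pop, best, t, max_iter)             # Eqs. (8)-(12)
        pop = np.clip(attack(g_s, pop), bounds[:, 0], bounds[:, 1])  # Eq. (13)
        scores = np.array([train_and_eval(p) for p in pop])
        if scores.min() < best_score:                     # track global best
            best, best_score = pop[scores.argmin()].copy(), scores.min()
    return best, best_score
```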

Figure 3 demonstrates the classifier outcomes of the IGSPO-DBN approach on the test dataset. Figure 3a and b depicts the confusion matrices offered by the IGSPO-DBN approach on the 60:40 training/testing (TRP/TSP) split. The results reveal that the IGSPO-DBN model identifies and classifies all six class labels accurately. Also, Figure 3c and d represents the gesture-detection outcome of the IGSPO-DBN approach on the 60:40 TRP/TSP split. The outcomes indicate that the IGSPO-DBN system achieves an effective recognition rate across all classes.

            Figure 3:

Classifier outcome of the IGSPO-DBN approach on the 60:40 TRP/TSP split: (a, b) confusion matrices and (c, d) classification results. Abbreviation: IGSPO-DBN, interactive gesture technique using sand piper optimization with deep belief network.

In Table 2 and Figure 4, an extensive gesture-recognition outcome of the IGSPO-DBN system is portrayed. The outcomes imply that the IGSPO-DBN system performs better across all classes. For instance, on 60% of TRP, the IGSPO-DBN system acquires average accu_y, prec_n, reca_l, F_score, and AUC_score of 99.43, 98.26, 98.17, 98.21, and 98.91%, respectively. Similarly, on 40% of TSP, the IGSPO-DBN method gains average accu_y, prec_n, reca_l, F_score, and AUC_score of 99.37, 98.04, 98.08, 98.06, and 98.85%, respectively.

            Table 2:

Gesture-recognition outcome of the IGSPO-DBN approach on the 60:40 TRP/TSP split (all values in %).

Class                      Accu_y    Prec_n    Reca_l    F_score    AUC_score
Training phase (60%)
 Walking (C-1)              99.21     97.85     98.72     98.28      99.04
 Walking upstairs (C-2)     99.38     97.61     97.47     97.54      98.56
 Walking downstairs (C-3)   99.51     98.05     97.83     97.94      98.78
 Sitting (C-4)              99.59     98.69     98.74     98.72      99.25
 Standing (C-5)             99.48     98.65     97.68     98.17      98.73
 Laying/sleeping (C-6)      99.38     98.68     98.56     98.62      99.09
 Average                    99.43     98.26     98.17     98.21      98.91
Testing phase (40%)
 Walking (C-1)              99.15     98.20     98.09     98.14      98.78
 Walking upstairs (C-2)     99.30     97.32     97.27     97.30      98.44
 Walking downstairs (C-3)   99.49     97.52     98.24     97.87      98.95
 Sitting (C-4)              99.49     98.31     98.40     98.35      99.04
 Standing (C-5)             99.49     98.32     98.09     98.20      98.91
 Laying/sleeping (C-6)      99.32     98.55     98.40     98.48      98.99
 Average                    99.37     98.04     98.08     98.06      98.85

            Abbreviation: IGSPO-DBN, interactive gesture technique using sand piper optimization with deep belief network.

            Figure 4:

Average outcome of the IGSPO-DBN approach on the 60:40 TRP/TSP split. Abbreviation: IGSPO-DBN, interactive gesture technique using sand piper optimization with deep belief network.

Figure 5 depicts the accuracy of the IGSPO-DBN system during the training and validation procedure on the test database. The results indicate that the IGSPO-DBN system attains increasing accuracy values over successive epochs. Moreover, the closeness of the validation accuracy to the training accuracy shows that the IGSPO-DBN system learns capably on the test database.

            Figure 5:

            Accuracy curve of the IGSPO-DBN approach. Abbreviation: IGSPO-DBN, interactive gesture technique using sand piper optimization with deep belief network.

The loss of the IGSPO-DBN algorithm during training and validation on the test database is displayed in Figure 6. The outcome shows that the IGSPO-DBN system attains close training and validation loss values, indicating that the IGSPO-DBN algorithm learns effectively on the test database.

            Figure 6:

            Loss curve of the IGSPO-DBN approach. Abbreviation: IGSPO-DBN, interactive gesture technique using sand piper optimization with deep belief network.

The experimental gesture-detection outcomes of the IGSPO-DBN method are compared with those of other approaches in Table 3 and Figure 7 ( Tahir et al., 2023). In terms of accu_y, the IGSPO-DBN system attains a higher value of 99.43%, while the MWHODL-SHAR, CNN-RF, Residual network, Deep CNN, CAE, HARSI, and LSTM approaches attain lower values of 99.03, 97.84, 95.86, 94.06, 94.73, 95.76, and 96.74%, respectively. Moreover, with respect to prec_n, the IGSPO-DBN system demonstrates a superior value of 98.26%, while the MWHODL-SHAR, CNN-RF, Residual network, Deep CNN, CAE, HARSI, and LSTM systems show lower values of 97.56, 96.91, 95.03, 96.52, 98.00, 94.11, and 94.98%, respectively.

            Table 3:

            Comparative outcome of the IGSPO-DBN system with other approaches.

Methods             Accu_y    Prec_n    Reca_l    F_score
IGSPO-DBN            99.43     98.26     98.17     98.21
MWHODL-SHAR          99.03     97.56     97.52     97.54
CNN-RF               97.84     96.91     95.87     97.85
Residual network     95.86     95.03     96.61     94.86
Deep CNN             94.06     96.52     97.06     96.63
CAE model            94.73     98.00     96.33     96.36
HARSI model          95.76     94.11     95.08     96.45
LSTM model           96.74     94.98     96.58     94.46

All values in %.

            Abbreviation: IGSPO-DBN, interactive gesture technique using sand piper optimization with deep belief network.

            Figure 7:

            Comparative outcome of the IGSPO-DBN system with other approaches. Abbreviation: IGSPO-DBN, interactive gesture technique using sand piper optimization with deep belief network.

In terms of reca_l, the IGSPO-DBN approach attains an enhanced value of 98.17%, while the MWHODL-SHAR, CNN-RF, Residual network, Deep CNN, CAE, HARSI, and LSTM methods show lower values of 97.52, 95.87, 96.61, 97.06, 96.33, 95.08, and 96.58%, respectively. Eventually, with respect to F_score, the IGSPO-DBN methodology attains a higher value of 98.21%, while the MWHODL-SHAR, CNN-RF, Residual network, Deep CNN, CAE, HARSI, and LSTM approaches show lower values of 97.54, 97.85, 94.86, 96.63, 96.36, 96.45, and 94.46%, respectively.

            CONCLUSION

In this research work, we have focused on the development of automated gesture recognition using the IGSPO-DBN technique. The IGSPO-DBN technique enables people to control devices and exploit different assistance models through different gestures. The IGSPO-DBN technique detects gestures and classifies them into several kinds using the DBN model. To boost the overall gesture-recognition rate, the IGSPO-DBN technique exploits the SPO algorithm as a hyperparameter optimizer. The simulation outcome of the IGSPO-DBN system was tested on a gesture-recognition dataset, and the outcomes showed the improvement of the IGSPO-DBN algorithm over other systems. In the future, the proposed model can be applied to real-time datasets.

            AUTHOR CONTRIBUTIONS

            The authors contributed equally to all parts of this paper.

            CONFLICTS OF INTEREST

            The authors declare no conflicts of interest in association with the present study.

            REFERENCES

            1. Abdulhussein AA, Raheem FA. 2020. Hand gesture recognition of static letters American sign language (ASL) using deep learning. Eng. Technol. J. Vol. 38(6):926–937

            2. Alashhab S, Gallego AJ, Lozano MÁ. 2022. Efficient gesture recognition for the assistance of visually impaired people using multi-head neural networks. Eng. Appl. Artif. Intell. Vol. 114:105188

            3. Faria Oliveira OD, Carvalho Gonçalves M, de Bettio RW, Pimenta Freire A. 2022. A qualitative study on the needs of visually impaired users in Brazil for smart home interactive technologies. Behav. Inf. Technol. Vol. 42:1–29

            4. Fronteddu G, Porcu S, Floris A, Atzori L. 2022. A dynamic hand gesture recognition dataset for human–computer interfaces. Comput. Net. Vol. 205:108781

5. Gorobets V, Merkle C, Kunz A. 2022. Pointing, pairing and grouping gesture recognition in virtual reality. Computers Helping People with Special Needs: 18th International Conference, ICCHP-AAATE 2022, Lecco, Italy, July 11–15, 2022, Proceedings, Part I. Springer International Publishing, Cham. p. 313–320

            6. Justin S, Saleh W, Lashin MM, Albalawi HM. 2023. Design of metaheuristic optimization with deep-learning-assisted solar-operated on-board smart charging station for mass transport passenger vehicle. Sustainability. Vol. 15(10):7845

7. Lu WX, Fang P, Zhu ML, Zhu YR, Fan XJ, Zhu TC, et al. 2023. Artificial intelligence-enabled gesture-language-recognition feedback system using strain-sensor-arrays-based smart glove. Adv. Intell. Syst. 2200453

8. Moysiadis V, Katikaridis D, Benos L, Busato P, Anagnostis A, Kateris D, et al. 2022. An integrated real-time hand gesture recognition framework for human–robot interaction in agriculture. Appl. Sci. Vol. 12(16):8160

9. Mujahid A, Awan MJ, Yasin A, Mohammed MA, Damaševičius R, Maskeliūnas R, et al. 2021. Real-time hand gesture recognition based on deep learning YOLOv3 model. Appl. Sci. Vol. 11(9):4164

            10. Mukhiddinov M, Djuraev O, Akhmedov F, Mukhamadiyev A, Cho J. 2023. Masked face emotion recognition based on facial landmarks and deep learning approaches for visually impaired people. Sensors. Vol. 23(3):1080

11. Muneeb M, Rustam H, Jalal A. 2023. Automate appliances via gestures recognition for elderly living assistance. 2023 4th International Conference on Advancements in Computational Sciences (ICACS), IEEE, Lahore, Pakistan. p. 1–6

12. Pandey S. 2023. Automated gesture recognition and speech conversion tool for speech impaired. Proceedings of Third International Conference on Advances in Computer Engineering and Communication Systems: ICACECS 2022. Springer Nature Singapore, Singapore. p. 467–476

            13. de Oliveira GA, Oliveira ODF, de Abreu S, de Bettio RW, Freire AP. 2022. Opportunities and accessibility challenges for open-source general-purpose home automation mobile applications for visually disabled users. Multimed. Tools Appl. Vol. 81(8):10695–10722

            14. Parra-Dominguez GS, Sanchez-Yanez RE, Garcia-Capulin CH. 2022. Towards facial gesture recognition in photographs of patients with facial palsy. Healthcare. Vol. 10(4):659

            15. Ryumin D, Ivanko D, Ryumina E. 2023. Audio-visual speech and gesture recognition by sensors of mobile devices. Sensors. Vol. 23(4):284

16. Sankar S, Ramasubbareddy S, Dhanaraj RK, Balusamy B, Gupta P, Ibrahim W, et al. 2023. Cluster head selection for the internet of things using a sandpiper optimization algorithm (SOA). J. Sens. 2023

            17. Tahir BS, Ageed ZS, Hasan SS, Zeebaree SRM. 2023. Modified wild horse optimization with deep learning enabled symmetric human activity recognition model. Comput. Mater. Contin. Vol. 75(2):4009–4024

            18. Zhou Y, Shui S, Cai Y, Chen C, Chen Y, Abdi-Ghaleh R. 2023. An improved all-optical diffractive deep neural network with less parameters for gesture recognition. J. Vis. Commun. Image Represent. Vol. 90:103688

            Author and article information

            Journal
            jdr
            Journal of Disability Research
King Salman Centre for Disability Research (Riyadh, Saudi Arabia)
31 August 2023
Volume 2, Issue 2, Pages 129-136
            Affiliations
            [1 ] Department of Information Systems, College of Computer and Information Sciences, Princess Nourah Bint Abdul Rahman University, Riyadh 11671, Saudi Arabia ( https://ror.org/05b0cyh02)
            [2 ] Department of Mathematics, Faculty of Science, Cairo University, Giza 12613, Egypt ( https://ror.org/03q21mh05)
            [3 ] Department of Computer Science, College of Computer, Qassim University, Buraydah 51452, Saudi Arabia ( https://ror.org/01wsfe280)
            [4 ] Department of Computer and Self Development, Preparatory Year Deanship, Prince Sattam Bin Abdulaziz University, Al-Kharj, Saudi Arabia ( https://ror.org/04jt46d36)
            Author notes
            Author information
            https://orcid.org/0000-0002-5511-8909
            Article
            10.57197/JDR-2023-0028
            193dd00a-e98e-44e4-a124-0da711fa3df6
            Copyright © 2023 The Authors.

            This is an open access article distributed under the terms of the Creative Commons Attribution License (CC BY) 4.0, which permits unrestricted use, distribution and reproduction in any medium, provided the original author and source are credited.

            History
23 May 2023
09 August 2023
10 August 2023
            Page count
            Figures: 7, Tables: 3, References: 18, Pages: 8
            Funding
            Funded by: King Salman Center for Disability Research
            Award ID: KSRG-2023-190
The authors extend their appreciation to the King Salman Center for Disability Research for funding this work through Research Group no. KSRG-2023-190.
            Categories

            Computer science
gesture recognition, human–computer interface, deep learning, sand piper optimization, machine learning
