Towards a GUI Gesture Control Using the Leap Motion Controller

Jean-Marc Vannobel*, Marie-Hélène Bekaert, Jean Baumann

Univ. Lille, CNRS, Centrale Lille, UMR 9189 CRIStAL, F-59000 Lille, France

Corresponding Author Email: jean-marc.vannobel@univ-lille.fr

Pages: 7-13

DOI: https://doi.org/10.18280/mmc_c.831-402

Received: 14 September 2022 | Accepted: 10 November 2022

OPEN ACCESS

Abstract: 

The Leap Motion Controller is a small hand motion capture device that has been the subject of numerous studies since its launch in 2013. This sensor is widely used in the fields of disability, diagnosis and functional rehabilitation. This article presents two preliminary studies on the use of this device, first as a gesture mouse and then as a gesture controller. Although the Leap Motion Controller is simple to implement, its lack of precision in some gesture configurations and its lack of reliability (outliers) are barriers that could certainly be lifted in the context of multimodal interaction.

Keywords: 

assistive technology, gesture mouse, gesture communication, leap motion, convolutional neural networks

1. Introduction

Interaction between human and machine is nowadays commonly done through a graphical user interface (GUI) controlled by input (keyboard) and pointing (mouse) devices. Smartphones, on the other hand, have popularized another mode of interaction, still based on a GUI but now controlled directly with a stylus or the fingers through a touch screen [1]. Such tactile interaction makes it easy to perform continuous scrolling, zooming and even drawing in a much more natural way than with a mouse, even without the need for push buttons [2].

Contactless interfaces are yet another possible mode of interaction with the machine, allowing a device to be controlled by gaze [3], by voice [4] or by gesture [5, 6]. Such interaction is of particular interest for hygiene or safety reasons, or for communication in a natural language in the context of home automation or of applications for the elderly, people with disabilities and perhaps people unfamiliar with computers.

In this paper we focus exclusively on gesture interaction, and more specifically on the use of a hand motion capture device, the Leap Motion Controller (LMC). After first presenting the device's main characteristics, we give a quick overview of the LMC used as an assistive technology. We then present two feasibility studies on gestural human-machine interaction. These studies were carried out in an MS Windows environment during a research initiation internship.

Our first study examines the possibility of using an LMC instead of the mouse on a PC. Given the difficulties sometimes encountered by some people (very young, elderly or disabled) in using the mouse, such a more natural human-machine interaction could help them. A second important requirement is to ease the integration of the Leap Motion Controller in the MS Windows environment, so that the mouse replacement is completely transparent to the software used by the end user. The second study addresses the feasibility of using the LMC as a device to capture meaningful gestures for assistive purposes.

2. The Leap Motion Controller

2.1 Description

The LMC commercialized by Ultraleap [7] is a hand motion capture device usually found in virtual reality environments [8]. Its main interest lies in its ability to return a data set characterizing the hands (orientation, coordinates of the palm and joints, etc.). Figure 1 shows that it consists of two identical infrared cameras (λ = 850 nm) separated by 40 mm along the x-axis. These 0.15 MP (640x240) cameras provide stereoscopic views of the region of interest, an inverted pyramid of 140x120°, at a configurable frame rate of 20 to 200 FPS (Figure 2).

Figure 1. The Leap Motion hand capture device and graphic rendering of points of interest (from ultraleap.com)

Figure 2. 140° x-axis and 120° z-axis LMC measurement field (from ultraleap.com)

The LMC is a USB device compatible with Windows 7+, Mac OS X 10.7+ and Linux (Ubuntu 12). Development kits [9] make it possible to use the Leap Motion Controller from various popular programming languages, including C++ for version 4 of the SDK. Finally, the LMC is also compatible with the Unreal Engine and Unity 3D environments.

2.2 Robustness: weaknesses and limitations of the Leap Motion Controller

Just like the Microsoft Kinect, which is no longer commercialized, the Ultraleap LMC can detect hand movements [10]. A first limitation worth mentioning is that the Kinect allows detection across a whole room, while the LMC has a detection range of only a few tens of centimetres, but with a precision about 200 times higher.

The LMC seems at first sight well suited to sign language interaction [11, 12], but the study by Potter et al. [13] pointed out the weaknesses and limitations of the device for such a use, despite the sub-millimetre sensor resolution announced by the manufacturer. Potter et al. also insist on the imprecision and the lack of reliability of the measurements. To Ultraleap's credit, unlike data gloves, which obtain their information by instrumenting the user's hand at the cost of restricting its freedom of movement [14], the LMC reconstructs the hand data solely from the images captured by its cameras. An inaccuracy inherent to the image processing algorithms and to their computational assumptions therefore appears.

Like Potter et al. [13], we observed inaccuracies in the fingertip positions returned by the LMC. More embarrassingly, we found that the Ultraleap algorithms do not respect the real configuration of the index, middle and ring fingers: a movement dependence between these three fingers is imposed. Silva et al. [15] had already reported that the piano may not be the ideal instrument to simulate with a single LMC sensor because it requires particular hand precision. We therefore conducted four campaigns, with the help of a pianist, in which one of these fingers was bent independently of the others, to obtain our own observational data. Figure 3 illustrates one of these campaigns. Starting from an initial position where the right hand is held horizontally over the sensor with the fingers outstretched (altitude 0 being assigned to the barycentre of the palm), the fingers were flexed individually: first the index finger, then the middle finger and finally the ring finger, the others remaining extended. Figure 3 shows that parasitic finger flexions are indeed returned by the LMC, mainly for the middle finger and, to a lesser extent, the ring finger.

As a result, it was decided not to use hand configurations involving more than one finger at a time, the thumb excepted.
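To make the measurement protocol concrete, the following minimal sketch (in Python) computes the per-frame fingertip altitude relative to the palm barycentre that is plotted in Figure 3. The frame representation used here (plain dictionaries holding palm and fingertip positions) is an assumption made for the illustration; the actual acquisition relied on the Ultraleap SDK callbacks.

# Minimal sketch: per-frame fingertip "altitude" relative to the palm barycentre.
# The frame structure below is a simplifying assumption, not the Ultraleap SDK format.

def fingertip_altitudes(frames, fingers=("index", "middle", "ring")):
    """Return, for each finger, the absolute vertical distance (mm) between
    its tip and the palm barycentre, frame by frame."""
    curves = {name: [] for name in fingers}
    for frame in frames:
        palm_y = frame["palm"][1]           # y component of the palm position
        for name in fingers:
            tip_y = frame["tips"][name][1]  # y component of the fingertip
            curves[name].append(abs(tip_y - palm_y))
    return curves

# Example: two fake frames where only the index finger flexes.
frames = [
    {"palm": (0.0, 200.0, 0.0),
     "tips": {"index": (30.0, 200.0, -40.0),
              "middle": (10.0, 200.0, -45.0),
              "ring": (-10.0, 200.0, -42.0)}},
    {"palm": (0.0, 200.0, 0.0),
     "tips": {"index": (30.0, 160.0, -20.0),
              "middle": (10.0, 196.0, -44.0),
              "ring": (-10.0, 199.0, -41.0)}},
]
print(fingertip_altitudes(frames))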

Figure 3. Dynamic absolute value of the vertical position at the palm (mm) of the tips of the index finger (bold), middle finger (grey and bold) and ring finger (thin line)

3. The LMC as an Assistive Technology

The Leap Motion Controller can be easily integrated into virtual reality systems for hand tracking or to provide immersive interaction [16, 17]. Alimanova et al. describe a hand rehabilitation game to assist in developing muscle tone and increasing gesture precision (picking up and moving household goods, matching color blocks, throwing items in the garbage…) [18]. Taylor and Curran rely on the LMC's potential to enhance the motivation and involvement of young adults, accustomed to video games, in their hand rehabilitation after trauma [19]. This is confirmed in a study conducted by Tarakci et al. [20] involving 103 patients trained on two games interfaced with the LMC. This study presents LMC-based video games as an effective alternative treatment option for children and adolescents with physical disabilities. Similarly, Cortés-Pérez et al. [21] conclude that the LMC is useful and efficient as a haptic virtual reality device to improve various aspects of upper limb motor function in patients with central nervous system diseases.

Beyond video games, the Leap Motion Controller is also deployed to assess and analyze hand movement in various pathologies. A hand gesture recognition algorithm dedicated to tracking the seven gestures used for the residential rehabilitation of post-stroke patients is developed by Li et al. [22]. Chophuk et al. [23] use the LMC to measure finger joint angles in order to assess abnormal finger motion (trigger finger). Four postures are evaluated (flexion of the thumb IP joint, a neutral position of the finger PIP joint, flexion of the finger MP joint and thumb radial abduction).

Two studies by Butt et al. [24, 25] involve Parkinsonian patients (PwPD, patients with Parkinson's disease). In the first, the authors study the LMC's potential for the objective assessment of motor dysfunction in PwPD and show a moderate potential of the LMC to capture motor performance. The second study aims at an objective and automatic classification of Parkinson's disease with the LMC. The results reveal that this system does not return clinically meaningful data for measuring postural tremor in PwPD and is irrelevant for measuring forearm pronation or supination, but is statistically and clinically significant for finger tapping and hand opening or closing. Coton et al. [26] do not advocate using the LMC for finger movement analysis (for item 18 of the Motor Function Measurement) due to a lack of robustness and precision.

Colombini et al. [27] review 19 studies objectively assessing the contribution of LMC in four specific psychological domains (autism spectrum disorder, attention-deficit or hyperactivity disorder, dementia, mild cognitive impairment). The LMC is also successfully used in music therapy sessions [28]. The proposed system captures the user's gesture via the LMC and the generated signals are sent to a software tool that converts the movements into musical notes.

From manual gesture to speech: numerous studies using the LMC to recognize finger and hand movements and transcribe sign language into text or speech are listed by Galván-Ruiz et al. [29]. The signer's training is a key factor in the sign recognition process, and the order in which sign movements are performed affects accuracy [30] (American Sign Language). The LMC is also used for the recognition of other sign languages: Arabic Sign Language [31], Chinese Sign Language [32] and Indian Sign Language [33]. Beyond speech, Škraba et al. [34] plan to use the LMC to pilot a wheelchair.

4. Study 1: The Gesture Mouse

4.1 Introduction

As mentioned earlier, a gestural mouse is expected to provide generic mouse features such as screen hovering, pointing and menu navigation, just as a classic mouse does. The function chosen to illustrate our point is drawing in Microsoft Paint, but we could just as easily have produced a handwritten signature in Acrobat.

Our gesture mouse must map a movement in 2D space to the movement of the mouse cursor in a GUI and define a symbolic "click" gesture. We drew on the numerous works available in the literature to reproduce some of their elements [35-38].

4.2 Methodology

We retained the coordinates returned in the (xOy) plane (see Figure 2) to move the mouse cursor on the screen. Although tests were first carried out on the coordinates of the right index finger, we finally retained the values returned by the LMC for the right palm, normalized to the screen coordinate plane. This choice was made to avoid parasitic displacements of the mouse cursor, while the index finger, and more particularly the thumb/index pinch, was used to materialize the left mouse click. The z-axis was used to simulate the equivalent of the "in the air" movements of a physical mouse, allowing the signer's hand to be repositioned in space.
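The mapping just described can be summarized by a short sketch. The interaction volume bounds, the pinch distance threshold and the screen resolution below are illustrative assumptions, not the exact values of our implementation, and the hand data are represented by plain tuples rather than by the SDK structures.

# Minimal sketch: normalize the palm (x, y) position into screen coordinates
# and detect a thumb/index pinch used as a left click.
import math

SCREEN_W, SCREEN_H = 1920, 1080   # target screen resolution (assumption)
X_RANGE = (-150.0, 150.0)         # usable palm range along x, in mm (assumption)
Y_RANGE = (100.0, 400.0)          # usable palm range along y, in mm (assumption)
PINCH_THRESHOLD_MM = 25.0         # thumb/index distance triggering a click (assumption)

def palm_to_screen(palm_x, palm_y):
    """Map the palm position (LMC xOy plane) onto screen pixel coordinates."""
    nx = (palm_x - X_RANGE[0]) / (X_RANGE[1] - X_RANGE[0])
    ny = (palm_y - Y_RANGE[0]) / (Y_RANGE[1] - Y_RANGE[0])
    nx, ny = min(max(nx, 0.0), 1.0), min(max(ny, 0.0), 1.0)
    # Screen origin is top-left, so the vertical axis is inverted.
    return int(nx * (SCREEN_W - 1)), int((1.0 - ny) * (SCREEN_H - 1))

def is_pinching(thumb_tip, index_tip):
    """True when the thumb and index fingertips are close enough to 'click'."""
    return math.dist(thumb_tip, index_tip) < PINCH_THRESHOLD_MM

# Example usage with made-up coordinates (mm).
print(palm_to_screen(0.0, 250.0))                               # cursor near screen centre
print(is_pinching((10.0, 250.0, -30.0), (18.0, 255.0, -32.0)))  # -> True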

A final manipulation allowed us to implement a gestural multi-touch, intended in our study to resize the active window on the screen. Here we wanted to use both hands: once the mouse cursor is positioned on the resizing icon, the click is triggered by pinching the thumb and index finger of the secondary hand, and the resizing is then done by simply moving the cursor with the right hand.

The two click possibilities, right hand and left hand, could thus materialize the two buttons of a mouse and complete this "gesture mouse" interaction device.

4.3 Practical implementation

Our software development was based on an example program provided by Ultraleap, the Windows mouse control APIs and the INPUT_MOUSE data structure. The executable we generated operates in an event-driven fashion, using callback functions provided by Ultraleap, and acts as a mouse driver that allows the LMC to replace the mouse.
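For illustration, cursor movements and clicks can be injected into Windows through the SendInput API and its INPUT/MOUSEINPUT structures. The following Python/ctypes sketch mimics what our event-driven executable does in C; it is a simplified stand-in, not the driver generated from the Ultraleap example program.

# Minimal sketch: inject an absolute cursor move and a left click with SendInput.
import ctypes

INPUT_MOUSE = 0
MOUSEEVENTF_MOVE, MOUSEEVENTF_ABSOLUTE = 0x0001, 0x8000
MOUSEEVENTF_LEFTDOWN, MOUSEEVENTF_LEFTUP = 0x0002, 0x0004

class MOUSEINPUT(ctypes.Structure):
    _fields_ = [("dx", ctypes.c_long), ("dy", ctypes.c_long),
                ("mouseData", ctypes.c_ulong), ("dwFlags", ctypes.c_ulong),
                ("time", ctypes.c_ulong),
                ("dwExtraInfo", ctypes.POINTER(ctypes.c_ulong))]

class INPUT(ctypes.Structure):
    _fields_ = [("type", ctypes.c_ulong), ("mi", MOUSEINPUT)]

def send_mouse(flags, dx=0, dy=0):
    """Wrap a single mouse event into an INPUT structure and send it."""
    inp = INPUT(type=INPUT_MOUSE,
                mi=MOUSEINPUT(dx=dx, dy=dy, mouseData=0, dwFlags=flags,
                              time=0, dwExtraInfo=None))
    ctypes.windll.user32.SendInput(1, ctypes.byref(inp), ctypes.sizeof(INPUT))

def move_cursor(px, py, screen_w=1920, screen_h=1080):
    """Absolute move: SendInput expects coordinates normalized to 0..65535."""
    send_mouse(MOUSEEVENTF_MOVE | MOUSEEVENTF_ABSOLUTE,
               dx=int(px * 65535 / (screen_w - 1)),
               dy=int(py * 65535 / (screen_h - 1)))

def left_click():
    send_mouse(MOUSEEVENTF_LEFTDOWN)
    send_mouse(MOUSEEVENTF_LEFTUP)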

4.4 Results

The tests were performed in the Paint drawing software. It was necessary to filter the data returned by the LMC: low-pass filtering of the coordinate stream over a sliding window of about 15 measurements removed the measurement noise that manifested itself as spurious cursor tremors on the screen (Figure 4). It was sometimes also necessary to eliminate erroneous coordinates returned by the LMC, corresponding to a "ghost hand" detected in the background (Figure 4, middle).
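The smoothing used is a simple moving average over the last samples; the sketch below illustrates it together with a crude rejection of sudden "ghost hand" jumps. The window length corresponds to the one mentioned above, whereas the jump threshold and the exact rejection rule are illustrative assumptions.

# Minimal sketch: moving-average smoothing of the cursor coordinates with
# rejection of sudden "ghost hand" jumps.
from collections import deque

class CursorFilter:
    def __init__(self, window=15, max_jump_px=200):
        self.window = deque(maxlen=window)   # sliding window of accepted points
        self.max_jump_px = max_jump_px       # larger jumps are treated as outliers

    def update(self, x, y):
        """Feed a raw (x, y) sample; return the smoothed cursor position."""
        if self.window:
            last_x, last_y = self.window[-1]
            if abs(x - last_x) > self.max_jump_px or abs(y - last_y) > self.max_jump_px:
                # Likely a ghost hand: ignore the sample and keep the last estimate.
                x, y = last_x, last_y
        self.window.append((x, y))
        xs, ys = zip(*self.window)
        return sum(xs) / len(xs), sum(ys) / len(ys)

# Example: the outlier at (1500, 900) is discarded, the rest is averaged.
f = CursorFilter()
for point in [(100, 100), (102, 101), (1500, 900), (104, 103)]:
    print(f.update(*point))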

Figure 4. Drawing in Paint, from top to bottom: unfiltered coordinates, presence of a ghost hand, filtered coordinates

5. Study 2: Communicative Gesture

5.1 Introduction

Considering the LMC as a simple gesture mouse is, however, very reductive given the possibilities offered both by the device and by a sign language.

We therefore developed a second study with the aim of controlling an external device through gestural communication based on a sequence of basic gestures that become words or control commands. Such an activity does not require a model as advanced as those used in sign languages such as French Sign Language (LSF) [39]; the language used and the underlying notion of grammar can remain very primitive.

5.2 Pattern recognition methodology

The analysis task here is very different from the one encountered in the case of the gesture mouse, since it does not simply consist of moving a cursor on the screen or simulating a mouse click, but of executing meaningful gestures. In order to isolate the meaningful movements from the transition movements, a command is composed of a starting clap, a sequence of signs (trajectories to be recognized individually) separated by claps, and a final validation gesture. The main difficulty is to recognize the different signs of the sequence. We used two radically different methods for this purpose, which we detail below.
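The clap-delimited structure of a command can be illustrated by a small state machine. The sketch below assumes that upstream processing has already labelled each detected event as a clap, a sign trajectory or a validation gesture; these event labels are ours and are introduced only for the illustration.

# Minimal sketch: segment a stream of events into one command, where a command
# is: starting clap, then signs separated by claps, then a validation gesture.

def parse_command(events):
    """events: iterable of ("clap", None), ("sign", trajectory) or ("validate", None).
    Returns the list of sign trajectories making up the command, or None if the
    stream does not follow the expected structure."""
    it = iter(events)
    first = next(it, None)
    if first is None or first[0] != "clap":
        return None                      # a command must start with a clap
    signs = []
    for kind, payload in it:
        if kind == "sign":
            signs.append(payload)        # trajectory to be recognized individually
        elif kind == "clap":
            continue                     # separator between signs
        elif kind == "validate":
            return signs                 # final validation gesture closes the command
    return None                          # stream ended without validation

# Example: clap, line, clap, triangle, validate -> ["line_traj", "triangle_traj"]
stream = [("clap", None), ("sign", "line_traj"), ("clap", None),
          ("sign", "triangle_traj"), ("validate", None)]
print(parse_command(stream))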

5.3 Recognition by geometric criteria

The classification by geometric criteria that we implemented corresponds to a method described by Jorge et al. [40]. It consists in extracting criteria from the sign trajectory such as the perimeter and the area of its convex envelope (CE), as illustrated in Figure 5. Other criteria, such as the perimeter and the area of the maximum surface triangle (MST) inscribed in the convex envelope or the length of the trajectory, are used as proposed by Dobkin and Snyder [41]. Figure 6 illustrates the recognition by geometric criteria of three distinct signs: a line and a triangle to constitute a basic alphabet, and a deletion to express the wish to delete the last completed gesture. Implemented in the Unity engine for a virtual reality application, the recognition is fast enough to be used in real time.
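As an illustration of these criteria, the following sketch computes the perimeter and area of the convex envelope of a 2D trajectory, the area of the maximum surface triangle (here obtained by brute force over the hull vertices rather than with the Dobkin-Snyder algorithm) and the trajectory length. It is written in Python with SciPy for the illustration and is not the Unity code of our implementation; the classification thresholds that would follow are not shown.

# Minimal sketch: geometric features of a sign trajectory (2D point list).
import itertools
import numpy as np
from scipy.spatial import ConvexHull

def tri_area(a, b, c):
    """Area of the triangle (a, b, c) via the 2D cross product."""
    return 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))

def trajectory_features(points):
    """points: (N, 2) array of plane coordinates of one sign."""
    pts = np.asarray(points, dtype=float)
    hull = ConvexHull(pts)
    hull_pts = pts[hull.vertices]
    # For a 2D hull, SciPy reports the perimeter as `area` and the area as `volume`.
    perimeter_ce, area_ce = hull.area, hull.volume
    # Maximum-area triangle inscribed in the hull, brute force over hull vertices.
    area_mst = max(tri_area(a, b, c)
                   for a, b, c in itertools.combinations(hull_pts, 3))
    traj_length = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))
    return {"perimeter_CE": perimeter_ce, "area_CE": area_ce,
            "area_MST": area_mst, "length": traj_length}

# Example: a roughly triangular trajectory.
triangle = [(0, 0), (50, 5), (100, 0), (55, 80), (10, 5)]
print(trajectory_features(triangle))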

Figure 5. Convex envelope (CE) of a geometrical drawing

Figure 6. Gestural alphabet. From top to bottom: line, triangle, deletion

5.4 Recognition by convolutional neural network

Convolutional neural networks (CNNs) are neural networks inspired by the visual cortex of vertebrates and are based on a multilayer stack of single-layer perceptrons. A CNN differs from a multilayer perceptron (MLP) in that each neuron is connected to only a small number of neighbours in adjacent layers, which greatly reduces the complexity of the network and the learning time. CNNs are typically used for image recognition [42] but also for natural language processing [43], where the CNN inputs correspond to words instead of pixels [44]. We therefore modified our mouse cursor control driver to generate pictures of the trajectories drawn on the screen. The convolutional neural network implemented here was created using Google's TensorFlow [45] and Keras [46] machine learning frameworks, following the example provided in [47], and was trained on the three gestures already used with the geometric method. The dataset contained about 60 images for each of these 3 classes.
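A model in the spirit of the TensorFlow image classification tutorial [47] can be written in a few lines of Keras. The sketch below is representative of the kind of network we trained rather than its exact architecture; the directory layout (one sub-folder per gesture class), the image size, the batch size and the number of epochs are assumptions made for the illustration.

# Minimal sketch: small CNN classifying trajectory images into 3 gesture classes,
# in the spirit of the TensorFlow image classification tutorial.
import tensorflow as tf
from tensorflow.keras import layers

IMG_SIZE = (64, 64)      # size of the generated trajectory pictures (assumption)
NUM_CLASSES = 3          # line, triangle, deletion

# One sub-folder per class, e.g. gestures/line, gestures/triangle, gestures/deletion.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "gestures", image_size=IMG_SIZE, batch_size=16)

model = tf.keras.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_CLASSES),           # logits, one per gesture class
])

model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=["accuracy"])
model.fit(train_ds, epochs=15)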

5.5 Results, comparison of recognition methods

Pattern recognition based on geometric criteria is very fast (a few milliseconds) but has the disadvantage of not being very flexible. Indeed, the user has to draw geometric shapes close to what is expected, otherwise the shape will not be recognized or may be confused with another one. One could try to make the system more tolerant of the variability in sign realization, but this would increase the risk of confusion. It is therefore not easy to draw the signs correctly so that they are recognized. Thus, to keep the interface easy to use, the vocabulary must be limited and restricted to very different gestures. One could consider several modes allowing a user to choose the extent of the vocabulary they want to learn. Another limitation of this method is that adding a new element to the alphabet, and thus a new shape, requires finding new specific characterization criteria.

Pattern recognition by convolutional neural network corrects the two main drawbacks of the geometric method:

  • The system can adapt to the user by updating its dataset as the user uses it: if the user finds a more comfortable signing posture, the model will gradually follow.
  • It is easy to add a new word: it only requires creating a set of training examples large enough to train the model properly, which is however costly in terms of implementation time.

However, this method is much less reliable than the previous one, for reasons related to the description model and not to the user. Moreover, the number of training samples to produce increases, as expected, with each added gesture.

Finally, it would be interesting to compare the execution speed of the two systems. However, since the recognition by CNN is performed offline and the two systems are not implemented in the same language, it is difficult at this point to make a relevant comparison.

5.6 Perspectives for improvement

As shown in Figure 6, in the case of recognition by neural network a color gradient was applied to the trajectory displayed on the user's screen to describe the direction and progression of the gesture. By choice, this feature was ignored by the neural network, although this new dimension would have allowed the vocabulary of the interface to be extended: a line drawn from left to right could be differentiated from a line drawn from right to left. Similarly, for recognition by geometric criteria, the orientation of the gesture is ignored: a vertical line and a horizontal line are not differentiated.

6. Conclusion and Prospects

With the help of examples provided by Ultraleap, in particular the imagesample.c file, as well as some Windows APIs, we have shown that a general-purpose application such as Paint can be operated using a generic gesture interaction system that replaces the computer mouse. Totally independent of the final application and easy to install on a computer, this interface can represent an alternative to the mouse for people who have difficulty handling it, or simply for reasons of hygiene. However, we must remember that even if gesture interaction seems at first sight more natural than handling a mouse, an input device must be adapted to the graphical interface of the software being used. A keyboard is adapted to alphanumeric characters and a mouse allows precise navigation in menus, in the content of a text document or in a spreadsheet, but a mouse is much less suited to handwriting or drawing on the screen; a digital tablet and a stylus would be much more suitable for this.

One could then imagine going much further than simply replacing existing GUI devices with elementary gesture controls. The goal would be to design dedicated gestural interfaces built specifically for natural language use and therefore relying on meaningful gestures.

Our second study was designed with this idea in mind: we proposed the basics of a semantically meaningful gestural interaction using an alphabet reduced to a small number of elementary signs that are easy to produce. It then becomes possible to create sign sequences that could be used in assistive technologies or to control machines. Among the possible applications, we could for example consider driving a wheelchair, easing the professional integration of people with disabilities in an augmented or virtual reality environment, or enabling human-robot collaboration [48].

A final important aspect observed during our practical tests concerns the spatial limitations of gestural communication. These are due to the LMC camera viewing angles, and it is not easy to keep the hands within the sensor's 140x120° conical detection space. For better comfort of use and far fewer movement restrictions, one could prefer the IR 170 sensor [49] from the same manufacturer, which covers a 170x170° region. However, the IR 170 costs more than twice the price of the LMC and is supplied without a housing, which can compromise its use.

Finally, even though gesture communication appears more natural, its constrained nature quickly generates significant arm fatigue. A solution could be to rest the wrist on a transparent flat surface, as suggested in [26].

Acknowledgment

This work was carried out by Jean Baumann, a second-year engineering student at Polytech Lille, as part of a 10-week research internship at the CRIStAL Laboratory.

  References

[1] Martinet, A. (2011). Etude de l'influence de la séparation des degrés de liberté pour la manipulation 3-D à l'aide de surfaces tactiles multipoints. Doctoral dissertation, Université des Sciences et Technologie de Lille-Lille I.

[2] Deblonde, J.P. (2012). Exploitation de la dynamique du geste en IHM. Application aux fonctions de transfert pour le pointage et l'extraction d'évènements discrets. Doctoral dissertation, Université des Sciences et Technologie de Lille-Lille I.

[3] Ju, Q. (2019). Utilisation de l'eye-tracking pour l'interaction mobile dans un environnement réel augmenté. Doctoral dissertation, Université de Lyon.

[4] Hoy, M.B. (2018). Alexa, Siri, Cortana, and more: an introduction to voice assistants. Medical reference services quarterly, 37(1): 81-88. https://doi.org/10.1080/02763869.2018.1404391

[5] Małecki, K., Nowosielski, A., Kowalicki, M. (2020). Gesture-based user interface for vehicle on-board system: a questionnaire and research approach. Applied Sciences, 10(18): 6620. https://doi.org/10.3390/app10186620

[6] Belissen, V. (2020). From sign recognition to automatic sign language understanding: Addressing the non-conventionalized units. Doctoral dissertation, Université Paris-Saclay.

[7] ULTRALEAP Inc, ULTRALEAP - hand tracking module. https://www.ultraleap.com/, accessed on Jan. 2, 2023.

[8] Wozniak, P., Vauderwange, O., Mandal, A., Javahiraly, N., Curticapean, D. (2016). Possible applications of the LEAP motion controller for more interactive simulated experiments in augmented or virtual reality. In Optics Education and Outreach IV, 9946: 234-245. https://doi.org/10.1117/12.2237673

[9] https://www.ultraleap.com/developers/.

[10] Marin, G., Dominio, F., Zanuttigh, P. (2014). Hand gesture recognition with leap motion and kinect devices. In 2014 IEEE International Conference on Image Processing (ICIP), pp. 1565-1569.

[11] Borysova, A. (2017). Leap motion controller for sign language recognition: A review of the literature. Univ. Cape Town, Cape Town, South Africa, Tech. Rep. HANDGR.

[12] Galván-Ruiz, J., Travieso-González, C.M., Tejera-Fettmilch, A., Pinan-Roescher, A., Esteban-Hernández, L., Domínguez-Quintana, L. (2020). Perspective and evolution of gesture recognition for sign language: A review. Sensors, 20(12): 3571. https://doi.org/10.3390/s20123571

[13] Potter, L.E., Araullo, J., Carter, L. (2013). The leap motion controller: a view on sign language. In Proceedings of the 25th Australian Computer-Human Interaction Conference: Augmentation, Application, Innovation, Collaboration, 175-178.

[14] Gunawardane, P.D.S.H., Medagedara, N.T. (2017). Comparison of hand gesture inputs of leap motion controller & data glove in to a soft finger. In 2017 IEEE International Symposium on Robotics and Intelligent Sensors, (IRIS), 62-68. https://doi.org/10.1109/IRIS.2017.8250099

[15] Silva, E.S., de Abreu, J.A.O., de Almeida, J.H.P., Teichrieb, V., Ramalho, G.L. (2013). A preliminary evaluation of the leap motion sensor as controller of new digital musical instruments. Recife, Brasil, 59-70.

[16] Scheggi, S., Meli, L., Pacchierotti, C., Prattichizzo, D. (2015). Touch the virtual reality: using the leap motion controller for hand tracking and wearable tactile devices for immersive haptic rendering. In ACM SIGGRAPH 2015 Posters,  1-1. https://doi.org/10.1145/2787626.2792651

[17] Gusai, E., Bassano, C., Solari, F., Chessa, M. (2017). Interaction in an immersive collaborative virtual reality environment: a comparison between leap motion and HTC controllers. In International Conference on Image Analysis and Processing, 290-300. https://doi.org/10.1007/978-3-319-70742-6_27

[18] Alimanova, M., Borambayeva, S., Kozhamzharova, D., Kurmangaiyeva, N., Ospanova, D., Tyulepberdinova, G., Kassenkhan, A. (2017). Gamification of hand rehabilitation process using virtual reality tools: Using leap motion for hand rehabilitation. In 2017 First IEEE International Conference on Robotic Computing (IRC), pp. 336-339. https://doi.org/10.1109/IRC.2017.76

[19] Taylor, J., Curran, K. (2016). Using leap motion and gamification to facilitate and encourage rehabilitation for hand injuries: Leap motion for rehabilitation. In Handbook of Research on Holistic Perspectives in Gamification for Clinical Practice, pp. 183-192. https://doi.org/10.4018/978-1-4666-9522-1.ch009

[20] Tarakci, E., Arman, N., Tarakci, D., Kasapcopur, O. (2020). Leap motion controller-based training for upper extremity rehabilitation in children and adolescents with physical disabilities: A randomized controlled trial. Journal of Hand Therapy, 33(2): 220-228. https://doi.org/10.1016/j.jht.2019.03.012

[21] Cortés-Pérez, I., Zagalaz-Anula, N., Montoro-Cárdenas, D., Lomas-Vega, R., Obrero-Gaitán, E., Osuna-Pérez, M. C. (2021). Leap motion controller video game-based therapy for upper extremity motor recovery in patients with central nervous system diseases. A systematic review with meta-analysis. Sensors, 21(6): 2065. https://doi.org/10.3390/s21062065

[22] Li, W.J., Hsieh, C.Y., Lin, L.F., Chu, W.C. (2017). Hand gesture recognition for post-stroke rehabilitation using leap motion. In 2017 International Conference on Applied System Innovation (ICASI), pp. 386-388. https://doi.org/10.1109/ICASI.2017.7988433

[23] Chophuk, P., Chumpen, S., Tungjitkusolmun, S., Phasukkit, P. (2015). Hand postures for evaluating trigger finger using leap motion controller. In 2015 8th Biomedical Engineering International Conference (BMEiCON), pp. 1-4. https://doi.org/10.1109/BMEiCON.2015.7399560

[24] Butt, A.H., Rovini, E., Dolciotti, C., Bongioanni, P., De Petris, G., Cavallo, F. (2017). Leap motion evaluation for assessment of upper limb motor skills in Parkinson's disease. In 2017 international conference on rehabilitation robotics (ICORR), pp. 116-121. https://doi.org/10.1109/ICORR.2017.8009232

[25] Butt, A.H., Rovini, E., Dolciotti, C., De Petris, G., Bongioanni, P., Carboncini, M.C., Cavallo, F. (2018). Objective and automatic classification of Parkinson disease with Leap Motion controller. Biomedical engineering online, 17(1): 1-21. https://doi.org/10.1186/s12938-018-0600-7

[26] Coton, J., Veytizou, J., Thomann, G., Villeneuve, F. (2016). Etude de faisabilité de l'analyse de mouvement de doigts par le capteur LeapMotion. In Conférence Handicap 2016-9ème édition.

[27] Colombini, G., Duradoni, M., Carpi, F., Vagnoli, L., Guazzini, A. (2021). LEAP motion technology and psychology: A mini-review on hand movements sensing for neurodevelopmental and neurocognitive disorders. International Journal of Environmental Research and Public Health, 18(8): 4006.

[28] Baratè, A., Elia, A., Ludovico, L.A., Oriolo, E. (2018). The leap motion controller in clinical music therapy: a computer-based approach to intellectual and motor disabilities. In International Conference on Computer Supported Education, pp. 461-469.

[29] Galván-Ruiz, J., Travieso-González, C.M., Tejera-Fettmilch, A., Pinan-Roescher, A., Esteban-Hernández, L., & Domínguez-Quintana, L. (2020). Perspective and evolution of gesture recognition for sign language: A review. Sensors, 20(12): 3571. https://doi.org/10.3390/s20123571

[30] Quesada, L., López, G., Guerrero, L.A. (2015). Sign language recognition using leap motion. In International Conference on Ubiquitous Computing and Ambient Intelligence, pp. 277-288. https://doi.org/10.1007/978-3-319-26401-1_26

[31] Deriche, M., Aliyu, S.O., Mohandes, M. (2019). An intelligent arabic sign language recognition system using a pair of LMCs with GMM based classification. IEEE Sensors Journal, 19(18): 8067-8078. https://doi.org/10.1109/JSEN.2019.2917525

[32] Xue, Y., Gao, S., Sun, H., Qin, W. (2017). A Chinese sign language recognition system using leap motion. In 2017 International Conference on Virtual Reality and Visualization (ICVRV), pp. 180-185. https://doi.org/10.1109/ICVRV.2017.00044

[33] Naglot, D., Kulkarni, M. (2016). ANN based Indian Sign Language numerals recognition using the leap motion controller. In 2016 international conference on inventive computation technologies (ICICT), 2: 1-6. https://doi.org/10.1109/INVENTIVE.2016.7824830

[34] Škraba, A., Koložvari, A., Kofjač, D., Stojanović, R. (2015, June). Wheelchair maneuvering using leap motion controller and cloud based speech control: Prototype realization. In 2015 4th Mediterranean Conference on Embedded Computing (MECO), pp. 391-394. https://doi.org/10.1109/MECO.2015.7181952

[35] Bachmann, D., Weichert, F., Rinkenauer, G. (2014). Evaluation of the leap motion controller as a new contact-free pointing device. Sensors, 15(1): 214-233. https://doi.org/10.3390/s150100214

[36] Bessa Seixas, M.C.B., Cardoso, J.C.S., Galvão, T. (2015). The leap motion movement for 2D pointing tasks characterisation and comparison to other devices. In 5th International Conference on Pervasive and Embedded Computing and Communication Systems (PECCS), pp. 15-24.

[37] Aswathi, T., Athira, S., Sowrabiya, G., Shanila, M.K., Cyriac, L., Saidas, S.R. (2019). A paradigm of sixth sense: Finger cursor. In 2019 2nd International Conference on Intelligent Computing, Instrumentation and Control Technologies (ICICICT), 1: 1558-1562. https://doi.org/10.1109/ICICICT46008.2019.8993112

[38] Tung, J.Y., Lulic, T., Gonzalez, D.A., Tran, J., Dickerson, C.R., Roy, E.A. (2015). Evaluation of a portable markerless finger position capture device: accuracy of the Leap Motion controller in healthy adults. Physiological measurement, 36(5): 1025. https://doi.org/10.1088/0967-3334/36/5/1025

[39] Losson, O., Vannobel, J.M. (1998). Sign language formal description and synthesis. International Journal of Virtual Reality, 3(4): 27-34. https://doi.org/10.20870/IJVR.1998.3.4.2634

[40] Jorge, J.A., Fonseca, M.J. (1999). A simple approach to recognise geometric shapes interactively. In International Workshop on Graphics Recognition, Springer, Berlin, Heidelberg, 266-274. https://doi.org/10.1007/3-540-40953-X_23

[41] Dobkin, D.P., Snyder, L. (1979). On a general method for maximizing and minimizing among certain geometric problems. In 20th Annual Symposium on Foundations of Computer Science (sfcs 1979), pp. 9-17. https://doi.org/10.1109/SFCS.1979.28

[42] Zafar, I., Tzanidou, G., Burton, R., Patel, N., Araujo, L. (2018). Hands-on convolutional neural networks with TensorFlow: Solve computer vision problems with modeling in TensorFlow and Python. Packt Publishing Ltd.

[43] Kim, Y. (2014). Convolutional neural networks for sentence classification. Conference on Empirical Methods in Natural Language Processing (EMNLP 2014), pp. 1746–1751.

[44] Towards Data Science Inc. Taha Binhuraib - Machine learning engineer. https://towardsdatascience.com/nlp-with-cnns-a6aa743bdc1e, accessed on Jan. 2, 2023.

[45] TensorFlow. An end-to-end open-source platform, https://github.com/tensorflow/tensorflow, accessed on Jan. 2, 2023.

[46] © 2023 GitHub Inc. Keras – Deep Learning for humans, https://github.com/keras-team/keras, accessed on Jan. 2, 2023.

[47] TensorFlow. An end-to-end open-source platform, https://www.tensorflow.org/tutorials/images/classification?hl=fr, accessed on Jan. 2, 2023.

[48] Al, G.A., Estrela, P., Martinez-Hernandez, U. (2020). Towards an intuitive human-robot interaction based on hand gesture recognition and proximity sensors. In 2020 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI), pp. 330-335. https://doi.org/10.1109/MFI49285.2020.9235264

[49] ULTRALEAP Inc, ULTRALEAP - Camera Module, https://www.ultraleap.com/product/stereo-ir-170/, accessed on Jan. 2, 2023.