Prototype of Automatic Translation into the Sign Language of French-speaking Belgium: Evaluation by the Deaf Community

David B., Bouillon P.

Faculty of Translation and Interpreting, University of Geneva, Switzerland

Corresponding Author Email: bdavid1203@gmail.com

Pages: 162-167

DOI: https://doi.org/10.18280/mmc_c.790402

Received: 28 September 2018

Accepted: 31 October 2018

Open Access

Abstract: 

This article presents a prototype of automatic translation from French into the sign language of French-speaking Belgium (LSFB). Its main objective is to improve deaf people's access to public information by means of virtual signing, more specifically to the spoken announcements broadcast over loudspeakers in the stations and trains of the Belgian National Railway Company (SNCB). The application was developed with the Regulus Lite platform, made available by the University of Geneva. Manual and non-manual avatar animations are generated with the JASigning software. The prototype was evaluated through online questionnaires and interviews with members of the deaf community.
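The pipeline sketched above maps fixed-domain French announcements to sign sequences that an avatar then renders. As a purely illustrative sketch of such a phrasal, limited-domain approach, the rule format and LSFB glosses below are invented for illustration and do not reflect the actual Regulus Lite syntax or sign inventory:

```python
# Hypothetical sketch of phrasal speech-to-sign translation for
# fixed-domain railway announcements. The patterns and glosses are
# invented for illustration; the real Regulus Lite rule format and
# LSFB sign inventory differ.
import re

# Each rule maps a French announcement pattern to a sequence of sign
# glosses that the avatar would then animate (e.g. via SiGML/JASigning).
RULES = [
    (re.compile(r"le train vers (?P<dest>\w+) part voie (?P<track>\d+)"),
     ["TRAIN", "{dest}", "PARTIR", "VOIE", "{track}"]),
    (re.compile(r"le train vers (?P<dest>\w+) a (?P<min>\d+) minutes de retard"),
     ["TRAIN", "{dest}", "RETARD", "{min}", "MINUTE"]),
]

def translate(utterance: str) -> list[str]:
    """Return a gloss sequence for a recognized announcement, or raise."""
    text = utterance.lower().strip()
    for pattern, glosses in RULES:
        m = pattern.search(text)
        if m:
            # Fill slot glosses like "{dest}" with the matched values.
            return [g.format(**m.groupdict()) for g in glosses]
    raise ValueError("utterance outside the covered domain")

print(translate("Le train vers Namur part voie 3"))
```

Restricting coverage to a closed set of announcement templates is what makes high translation quality feasible in this kind of prototype: anything outside the domain is rejected rather than mistranslated.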

Keywords: 

LSFB, machine translation, JASigning, translation, evaluation

1. Introduction
2. Regulus Lite Platform
3. Methodology
4. Results
5. Conclusion
References

[1] Brooke J. (1996). SUS: A quick and dirty usability scale. In Jordan PW, Thomas B, McClelland IL, Weerdmeester B (ed.). Usability Evaluation in Industry, CRC Press, London, United Kingdom, pp. 189-194. 

[2] Crasborn OA. (2006). Nonmanual structures in sign language. Encyclopedia of Language and Linguistics 8: 668-672. http://hdl.handle.net/2066/42832 

[3] David B. (2017). Traduction automatique de la parole vers la langue des signes de Belgique francophone. Codage d'un avatar destiné aux transports en commun belges. Université Libre de Bruxelles, Brussels, Belgium. 

[4] Ebling S. (2013). Evaluating a Swiss German Sign Language avatar among the deaf community. Proceedings of the Third International Symposium on Sign Language Translation and Avatar Technology, Chicago, United States of America, October 2013. https://www.researchgate.net/publication/276757609

[5] Ebling S, Glauert J. (2013). Exploiting the full potential of JASigning to build an avatar signing train announcements. Proceedings of the Third International Symposium on Sign Language Translation and Avatar Technology, Chicago, United States of America, October 2013. https://www.researchgate.net/publication/276757820_Exploiting_the_Full_Potential_of_JASigning_to_Build_an_Avatar_Signing_Train_Announcements 

[6] Ebling S. (2016). Automatic translation from German to synthesized Swiss German Sign Language. University of Zurich, Zurich, Switzerland. http://www.zora.uzh.ch/id/eprint/127751/1/ebling-2016c.pdf

[7] Ekman P. (1992). Facial expression and emotion. American Psychologist 48(4): 384-392. https://www.paulekman.com/wp-content/uploads/2013/07/Facial-Expression-And-Emotion1.pdf

[8] Elliott R, Glauert J, Kennaway J. (2004). SiGML notation and SiGMLSigning system. University of East Anglia, Norwich, United Kingdom. http://www.visicast.cmp.uea.ac.uk/eSIGN/Images/Flyer_SiGMLSigning.pdf 

[9] Elliott R, Glauert J, Kennaway J, Marshall I, Safar E. (2008). Linguistic modelling and language-processing technologies for avatar-based sign language presentation. Universal Access in the Information Society 6(4): 375-391. https://doi.org/10.1007/s10209-007-0102-z

[10] Glauert J, Elliott R. (2011). Extending the SiGML Notation – a Progress Report. Second International Workshop on Sign Language Translation and Avatar Technology, Dundee, United Kingdom, October 2011. http://vhg.cmp.uea.ac.uk/demo/SLTAT2011Dundee/8.pdf 

[11] Hanke T, Langer G, Metzger C. (2001). Encoding nonmanual aspects of sign language. In Hanke T. (ed.), Interface definitions. ViSiCAST Deliverable D5-2. 

[12] Hanke T. (2004). HamNoSys – representing sign language data in language resources and language processing contexts. In Streiter O, Vettori C. (ed.), Sign Language Processing Satellite Workshop of the Fourth International Conference on Language Resources and Evaluation, Lisbon, Portugal, May 2004. http://www.lrec-conf.org/proceedings/lrec2004/ws/ws18.pdf 

[13] Hanke T. (2007). HamNoSys – Hamburg Notation System for sign languages. Institute of German Sign Language. 

[14] Jennings V, Elliott R, Kennaway R, Glauert J. (2010). Requirements for a signing avatar. Fourth Workshop on the Representation and Processing of Sign Languages: Corpora and Sign Language Technologies, University of East Anglia, Norwich, United Kingdom. http://www.academia.edu/2348900/Requirements_for_a_signing_avatar

[15] Kipp M, Heloir A, Nguyen Q. (2011). Sign language avatars: Animation and comprehensibility. In Vilhjálmsson HH, Kopp S, Marsella S, and Thórisson KR. (ed.), Intelligent Virtual Agents 6895: 113-126. https://doi.org/10.1007/978-3-642-23974-8_13

[16] Martin A. (2017). Interprètes en langue des signes : un manque criant en Belgique. Le Soir, February 2017. https://references.lesoir.be/article/interpretes-en-languedes-signes-un-manque-criant-en-belgique-/?TrackID=3#sc=socialmedia&me=socialmedia&cm=0

[17] Rayner M, Armando A, Bouillon P, Ebling S, Gerlach J, Halimi S, Strasly I, Tsourakis N. (2015). Helping domain experts build phrasal speech translation systems. In Quesada JF, Mateos FJM, Lopez-Soto T. (ed.), Future and Emergent Trends in Language Technology, Sevilla, Spain, November 2015, pp. 41-52. https://link.springer.com/book/10.1007/978-3-319-33500-1

[18] Rayner E, Baur C, Bouillon P, Chua C, and Tsourakis N. (2016). Helping non-expert users develop online spoken CALL courses. Workshop on Speech and Language Technology in Education (SLaTE), Leipzig, Germany. 

[19] Rayner E, Bouillon P, Ebling S, Strasly I, and Tsourakis N. (2016). A framework for rapid development of limited-domain speech-to-sign phrasal translators. 12th International Conference on Theoretical Issues in Sign Language Research (TISLR12), Melbourne, Australia. http://archive-ouverte.unige.ch/unige:79988

[20] Rayner E. (2016). Using the Regulus Lite Speech2Sign Platform. http://www.issco.unige.ch/en/research/projects/Speech2SignDoc/build/html/index.html

[21] Trainslate. Université de Genève, March 2017. http://speech2sign.unige.ch/en/applications/trainslate/

[22] Wells JC. (1997). SAMPA computer readable phonetic alphabet. In Gibbon D, Moore R, Winski R. (ed.), Handbook of Standards and Resources for Spoken Language Systems, Berlin, Germany – New York, United States of America. http://www.phon.ucl.ac.uk/home/sampa/