SMOG sign language recognition with Google Glass

CONTEXT

SMOG (NL: Spreken Met Ondersteuning van Gebaren, EN: Speaking with the support of gestures) is a form of supported communication. It allows children, young people and adults with a communication disability to clarify their needs and wishes and to better understand their environment. Unfortunately, most people do not understand or even know about SMOG. This thesis aims to close that gap by using technology to make the gestures understandable to a broader audience.

GOAL

The goal of this thesis is to recognize SMOG sign language gestures using a Google Glass. The word corresponding to each recognized gesture must be shown to the user, and the model must be able to operate in (near) real time.

METHODOLOGY

A machine learning model should process the Google Glass' camera feed to detect (a subset of) the 500 base SMOG gestures. Optionally, the user can mark the start and end of a gesture with the Glass' controls. The model must then classify the gesture and return the corresponding word to the user on the Glass' display.
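
As a minimal sketch of what such a classifier could look like (not prescribed by this proposal): a small recurrent model that classifies a short sequence of MediaPipe hand landmarks into gesture classes. The input representation (21 landmarks x 3 coordinates per frame), the GRU architecture, and the values of SEQ_LEN and NUM_CLASSES are all assumptions for illustration, not project requirements.

# Hedged sketch of a lightweight gesture classifier over hand-landmark
# sequences. All sizes below are placeholder assumptions.
import tensorflow as tf

SEQ_LEN = 30       # frames per gesture clip (assumption)
NUM_CLASSES = 50   # subset of the 500 base SMOG gestures (assumption)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQ_LEN, 21 * 3)),  # 21 landmarks x (x, y, z) per frame
    tf.keras.layers.GRU(64),                         # summarize the temporal sequence
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

A landmark-based model like this keeps the per-frame input tiny compared to raw video, which matters for the real-time requirement on the Glass' limited hardware.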

PROFILE/REQUIRED SKILLS

Google Glass (Enterprise Edition 2) applications are based on the Android Oreo 8.1 SDK. ML Kit and MediaPipe can be used for machine learning workloads. The ML model must be developed in TensorFlow. Android and/or TensorFlow experience is a plus. The thesis has both a theoretical and a practical aspect: research into optimal lightweight model architectures is required in addition to the development of an Android app.
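
Deploying a TensorFlow model inside an Android app typically means converting it to TensorFlow Lite. A hedged sketch of that conversion step follows; the model and file names are placeholders, and the quantization choice is an assumption made because the Glass has limited compute and storage.

# Hedged sketch: convert a trained Keras model (e.g. the classifier sketched
# above) to TensorFlow Lite for on-device inference. Paths are placeholders.
import tensorflow as tf

model = tf.keras.models.load_model("smog_classifier.keras")  # placeholder path

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # post-training quantization
tflite_model = converter.convert()

with open("smog_classifier.tflite", "wb") as f:
    f.write(tflite_model)

The resulting .tflite file can then be bundled as an asset in the Android app and run with the TensorFlow Lite interpreter on the camera feed.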