The paper tackles the problem of zero-shot sign language recognition (ZSSLR),
where the goal is to leverage models learned over the seen sign classes to recognize instances of unseen sign classes.
In this context, readily available textual sign descriptions and attributes collected from sign language dictionaries are
utilized as semantic class representations for knowledge transfer. For this novel problem setup, the following
dictionary entries illustrate the kind of textual sign descriptions used as class representations:
- Move both S hands in alternating forward circles, palms facing down, in front of each side of the body.
- Bring the fingertips of the right flattened O hand, palm facing in, to the lips with a repeated movement.
- With the right index finger extended up, move the right hand, palm facing back, in a small repeated circle in front of the right shoulder.
- Move the right L hand, palm facing forward, in a circle in front of the right shoulder.
- Beginning with the palms of both open hands together in front of the chest, fingers angled forward, bring the hands apart at the top while keeping the little fingers together.
- With the extended thumb of the right U hand against the right side of the forehead, palm facing forward, bend the fingers of the U hand up and down with a double movement.
- Beginning with the right C hand in front of the mouth, palm facing left, squeeze the fingers open and closed with a repeated movement, forming an S hand each time.
- While touching the wrist of the right open hand, palm facing left, with the extended left index finger, swing the right hand back and forth with a double movement.
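To make the zero-shot idea concrete, here is a minimal sketch (not the paper's actual model) of how textual class descriptions can serve as semantic representations for unseen classes. In the real ZSSLR setting the query would be a video embedding projected into the text space; here, purely for illustration, both the query and the classes are bag-of-words vectors over description text, and the class labels (`BICYCLE`, `EAT`) are hypothetical.

```python
# Sketch: zero-shot classification via textual class descriptions.
# Each unseen class is represented only by its dictionary description;
# a query is assigned to the class with the most similar description vector.
import math
import re
from collections import Counter

def bow(text):
    """Bag-of-words vector (token -> count) for a description."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(c * v[t] for t, c in u.items() if t in v)
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical unseen-class labels paired with descriptions from the examples above.
unseen_classes = {
    "BICYCLE": ("Move both S hands in alternating forward circles, "
                "palms facing down, in front of each side of the body."),
    "EAT": ("Bring the fingertips of the right flattened O hand, "
            "palm facing in, to the lips with a repeated movement."),
}

def classify(query_description, class_descriptions):
    """Return the class whose textual description best matches the query."""
    q = bow(query_description)
    return max(class_descriptions,
               key=lambda c: cosine(q, bow(class_descriptions[c])))
```

A learned model would replace the bag-of-words vectors with embeddings of both the video and the description, but the transfer mechanism is the same: unseen classes are recognized purely through their textual representations.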