Linguistic Architecture
Sign languages are complete natural languages: not gesture systems, not codes for spoken language. They have rich, independent grammars. The cards below outline what SignForge models today and what remains on the roadmap.
Grammar · Facial
Non-Manual Markers
Facial expressions in sign languages are full grammatical morphemes — not emotional decoration. Raised eyebrows mark yes/no questions; furrowed brows with a forward head tilt mark wh-questions (who, what, where, why). Head movements encode negation, affirmation, and topic boundaries. Mouth morphemes — mm (normal/calmly), th (carelessly), oo (small/thin), pah (suddenly/finally) — are independent grammatical morphemes with no English equivalent. Eye gaze signals role shift, verb agreement, and discourse structure. None of this is optional or stylistic.
Baker & Padden (1978) · Liddell (1980) · Sandler & Lillo-Martin (2006) · Wilbur (2000)
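If the roadmap adds face-mesh input, the grammatical channels above suggest a per-frame feature schema along the following lines. This is a hypothetical sketch: the type names and value inventories are illustrative, not part of SignForge.

```ts
// Hypothetical per-frame schema for the non-manual channels described above.
// Each value list samples a much larger inventory.
type BrowState = "raised" | "furrowed" | "neutral";           // yes/no vs. wh-questions
type HeadMovement = "shake" | "nod" | "forwardTilt" | "none"; // negation, affirmation, topic
type MouthMorpheme = "mm" | "th" | "oo" | "pah" | "none";

interface NonManualFrame {
  brows: BrowState;
  head: HeadMovement;
  mouth: MouthMorpheme;
  gazeTarget?: string; // locus the eyes index, if any (role shift, agreement)
}
```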
Morphology · Syntax
Spatial Grammar
Signers establish referents at locations in the signing space (loci) and encode pronouns, verb agreement, and spatial relationships by pointing to or moving between those loci. Agreement verbs arc through space to index subject and object simultaneously, expressing in a single movement what English needs a full clause for. Role shift (body lean plus gaze realignment) marks perspective-taking and reported speech. This simultaneous, spatial morphology has no typological parallel in spoken languages; a sketch of loci as data follows the citations below.
Padden (1988) · Klima & Bellugi (1979) · Emmorey (2002)
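As a thought experiment for the roadmap, referent loci and agreement verbs could be modeled roughly as below. This is a hypothetical sketch, not SignForge code; a real model would track loci in continuous 3D signing space rather than hand-placed coordinates.

```ts
// Hypothetical sketch: referents pinned to loci in signing space, and an
// agreement verb realized as a path from subject locus to object locus.
interface Locus { x: number; y: number; z: number } // position in signing space

const loci = new Map<string, Locus>([
  ["MARY", { x: -0.3, y: 0.0, z: 0.4 }], // established to the signer's left
  ["JOHN", { x: 0.3, y: 0.0, z: 0.4 }],  // established to the signer's right
]);

// GIVE indexes both arguments in one movement: it starts at the subject's
// locus and ends at the object's, encoding "Mary gives John" spatially.
function agreementPath(verb: string, subject: string, object: string) {
  return { verb, from: loci.get(subject), to: loci.get(object) };
}
```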
Phonology · Structure
Sign Components
Stokoe (1960) established that each sign decomposes into discrete sub-lexical primes analogous to phonemes, originally three: (1) handshape, (2) location relative to the body, and (3) movement. Battison (1978) added (4) palm orientation, and later analyses treat (5) non-manual features as a fifth parameter. Minimal pairs exist across all five dimensions: ASL MOTHER and FATHER differ only in location (chin vs. forehead). This discovery demonstrated that sign languages are not holistic gestures but structured phonological systems, overturning a century of assumptions about language modality. A sketch of this decomposition as a typed record follows the citations below.
Stokoe (1960) · Battison (1978) · Brentari (1998)
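To make the decomposition concrete, the five parameters map naturally onto a typed record. The sketch below is hypothetical: the value inventories are tiny samples, and the orientation values for the example signs are approximate.

```ts
// Hypothetical encoding of the five phonological parameters as a typed record.
type Handshape = "A" | "B" | "C" | "O" | "5";                   // dez
type Location = "forehead" | "chin" | "chest" | "neutralSpace"; // tab
type Movement = "tap" | "arc" | "circle" | "straight";          // sig
type Orientation = "palmIn" | "palmOut" | "palmUp" | "palmDown";

interface SignPhonology {
  handshape: Handshape;
  location: Location;
  movement: Movement;
  orientation: Orientation;
  nonManual?: string; // e.g. a mouth morpheme or brow position
}

// MOTHER and FATHER form a minimal pair: identical except for location.
const MOTHER: SignPhonology = { handshape: "5", location: "chin", movement: "tap", orientation: "palmIn" };
const FATHER: SignPhonology = { handshape: "5", location: "forehead", movement: "tap", orientation: "palmIn" };
```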
Current Scope
What SignForge Reads Now
The current model reads 21 hand landmarks per hand (42 normalized floats: x, y coordinates translated to a wrist origin and scaled by the maximum absolute value) and classifies static ASL fingerspelling A–Z. This captures handshape and orientation only. Movement trajectories, body-relative location, non-manual features, and temporal sequences are not yet modeled. Full sentence-level ASL interpretation requires multi-frame temporal architectures that incorporate face mesh, body pose, and bilateral hand landmarks simultaneously, an active area of research (Li et al., 2020; Desai et al., 2023). A sketch of the current normalization step follows below.
Architecture: MediaPipe Tasks Vision · ONNX Runtime Web · Roadmap in progress
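As an illustration of the normalization described above, here is a minimal sketch. It assumes MediaPipe-style hand landmarks (21 points with normalized x/y fields, landmark 0 at the wrist); the function name is illustrative, not SignForge's actual source.

```ts
// Minimal sketch of the 42-float feature extraction described above.
// Assumes 21 MediaPipe-style landmarks; landmark 0 is the wrist.
interface Landmark { x: number; y: number }

function toFeatureVector(landmarks: Landmark[]): Float32Array {
  if (landmarks.length !== 21) throw new Error("expected 21 hand landmarks");
  const wrist = landmarks[0];
  // Translate every point so the wrist becomes the origin.
  const rel = landmarks.flatMap((p) => [p.x - wrist.x, p.y - wrist.y]);
  // Scale by the maximum absolute coordinate so values land in [-1, 1].
  const maxAbs = Math.max(...rel.map((v) => Math.abs(v))) || 1;
  return Float32Array.from(rel, (v) => v / maxAbs);
}
```

The resulting 42-float vector is the shape of per-frame input a classifier session (e.g. one hosted in ONNX Runtime Web) would consume.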
References
- Stokoe, W. C. (1960). Sign language structure: An outline of the visual communication system of the American deaf. Studies in Linguistics, Occasional Papers No. 8. University of Buffalo.
- Battison, R. (1978). Lexical Borrowing in American Sign Language. Linstok Press.
- Baker, C., & Padden, C. (1978). Focusing on the nonmanual components of ASL. In P. Siple (Ed.), Understanding Language Through Sign Language Research (pp. 27–57). Academic Press.
- Klima, E. S., & Bellugi, U. (1979). The Signs of Language. Harvard University Press.
- Liddell, S. K. (1980). American Sign Language Syntax. Mouton.
- Padden, C. A. (1988). Interaction of Morphology and Syntax in American Sign Language. Garland.
- Brentari, D. (1998). A Prosodic Model of Sign Language Phonology. MIT Press.
- Emmorey, K. (2002). Language, Cognition, and the Brain: Insights from Sign Language Research. Lawrence Erlbaum Associates.
- Sandler, W., & Lillo-Martin, D. (2006). Sign Language and Linguistic Universals. Cambridge University Press.
- Wilbur, R. B. (2000). Phonological and prosodic layering of nonmanuals in American Sign Language. In K. Emmorey & H. Lane (Eds.), The Signs of Language Revisited (pp. 61–96). Lawrence Erlbaum Associates.
- Li, D., Rodriguez, C., Yu, X., & Li, H. (2020). Word-level deep sign language recognition from video: A new large-scale dataset and methods comparison. Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) (pp. 1459–1469).
- Desai, A., Berger, L., Minakov, F. O., … Bragg, D. (2023). ASL Citizen: A community-sourced dataset for advancing isolated sign language recognition. Advances in Neural Information Processing Systems (NeurIPS), 36.