June 7, 2021.
Well-intentioned hearing inventors have spent years trying to create devices that translate American Sign Language into written or spoken English for Deaf people. The devices, which look like gloves decked out with wires and sensors, are cumbersome. Because they are invented without Deaf input, they are also mostly ineffective. In the comedy sketch “Mouth Language Device,” CymaSpace imagines turning the tables: Deaf people invent a device for hearing folks that translates their speech into sign language.
This pioneering short video brings to bear all the technical video skills our team has been developing over the last two years at the CymaSpace studio. For the first time, a film shoot took place in Virtual Reality and incorporated sign language, with the ASL performer streaming live from Washington, D.C., to Portland, Oregon, where the Director guided them through the shots. An estimated 500 volunteer hours went into this project, including scriptwriting, casting, costumes, and filming. Our green-screen studio patches physical performers into seamless virtual worlds: the 3D backgrounds for “Mouth Language Device” were created in Unreal Engine, composited in Aximmetry, and edited in Premiere. Our software, camera, and lighting resources produce a virtual production unequaled by any other Deaf-led media studio.