I had the pleasure of attending an Emergency Management Interpreter Training session this past weekend. No, I’m not fluent in American Sign Language (ASL), but I was invited to an open community forum session with Deaf and Hard of Hearing leaders and interpreters from 12 different states as they discussed recent emergencies, how people were affected, and how we can all work together more effectively in the future. I have actually engaged with the Deaf and Hard of Hearing community on a number of occasions while soliciting feedback on some of our safety solutions, but for me this was a truly transformational event in terms of understanding the unique needs of that community.
Here are a couple of my key takeaways:
A lot of emergency management relies on verbal communication. My eyes were opened to some of the ways the communication needs of the Deaf and Hard of Hearing community are overlooked. Look at emergency notification systems as an example. If I’m sleeping, my cell phone wakes me with a buzzing sound when I get a text message, or the sirens wake me if there is an imminent tornado impact. In a large-scale incident, police patrols with loudspeakers can give instructions. I listen to the radio to get the latest news, or listen to the governor’s state of emergency address on the television. Once I’m aware of a pending event, I can go online and watch things unfold. For some of the Deaf community, the written word is a second language. Now imagine an EMT performing triage in a mass casualty event. A deaf person cannot respond to verbal instructions in the same way a hearing person can, and because of that may be improperly categorized – or at a minimum will have difficulty describing their condition [side note: some agencies now carry visual cards to help communication]. There is a HUGE need to ensure public safety agencies have information about the Deaf and Hard of Hearing community in their area so that they can appropriately modify both 9-1-1 and outbound emergency communications (this realization has driven much of our development on both Smart911 and SmartPrepare).
Hey… Why don’t you get what I’m saying? As a visual language, sign language is obviously very different from the spoken word, but I never considered the regional nuances and differences. For example, I was shown two completely different ways to sign “tornado” from two different regions of the country. I didn’t really understand how that was possible until I was given the example of “soda” vs. “pop” – two completely different words that would probably also stump someone from Europe hearing one or the other for the first time. Often you can discern a “word” from its context, but imagine if the message were just “A Tornado Warning has been issued for Worcester County,” or “I’ll have a pop” said to an English-speaking German. This has some big implications for communicating effectively. The second aspect of this visual language is one that can cause challenges between the hearing and the deaf in one-on-one situations. A key way to get a point across in ASL is to be very animated. This can easily come across as aggression or rudeness to a hearing individual. I have to confess to having thought a person “aggressively” signing was being confrontational. During a stressful emergency event, you can easily see how two individuals speaking different languages could come to loggerheads, let alone if one thinks the other is being offensive in some way.
So what do I do with this new perspective? Well, at a minimum, I’m definitely more aware of the hurdles to effective communication between the hearing and the Deaf and Hard of Hearing. Hopefully what I’ve learned will also translate into some new and useful features in our products!