On Saturday, January 13th, 2018, an employee at the Hawaii Emergency Management Agency initiated an internal test of the emergency missile warning system to practice sending an emergency alert to the public. It was supposed to be a drill and not actually sent to the public. According to the state, the employee had two options from a drop-down menu on a computer program: “Test missile alert” or “Missile alert.”
As most of us now know, he chose the real-life, public-facing missile alert. Not only did the false alert spread panic across the state – people were left not knowing it was a mistake for a full 38 minutes between the initial alert and the follow-up stating the missile warning was false.
This is clearly a case of human error – an impactful mistake made by one person. Fortunately, there are key lessons we can take away from this incident to help ensure something like it doesn’t happen again.
Minimize Risk of a False Alert Caused by Human Error
We’re human, and humans make mistakes. When you’re doing the same thing day after day, you are naturally more likely to have slipups. Now, put someone in a time-sensitive situation where he or she needs to act both quickly and accurately – this increases the probability of human error.
(Related article: Why UX Matters When Seconds Count)
Most notification systems have a confirmation process in place, and altering those steps regularly so the process doesn’t always look the same can help reduce error. When executing the same actions day after day, people often become numb to the possibility of making a mistake and will cut crucial corners. Changing the process and its language frequently forces people to slow down, read, and double-check their work at each step.
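The idea above can be sketched in a few lines. This is a hypothetical illustration, not Rave’s actual implementation: the confirmation wording and the required token are chosen at random on each send, so an operator cannot confirm from muscle memory and must actually read the prompt.

```python
import random

# Hypothetical sketch: rotate the confirmation wording and the exact
# token the operator must type, so a live send can never be confirmed
# on autopilot. The prompts and tokens below are illustrative only.
CONFIRMATIONS = [
    ("This alert will go to the PUBLIC. Type LIVE to proceed: ", "LIVE"),
    ("You are about to send a REAL alert. Type BROADCAST to proceed: ", "BROADCAST"),
    ("This is not a drill if you continue. Type SEND to proceed: ", "SEND"),
]

def confirm_live_send(read_input=input) -> bool:
    """Return True only if the operator types the exact token
    demanded by a randomly chosen prompt."""
    prompt, token = random.choice(CONFIRMATIONS)
    return read_input(prompt).strip() == token
```

An operator who types a generic “yes” is rejected; only the token named in that run’s prompt goes through, which is the point of varying the language.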
Designate Clear Roles and Implement Proper Training
Consider role-based training and think through your permission allocations. Don’t allow too many people to be responsible for the big decisions. If you limit how many people can distribute a mass crisis notification, you confine the opportunity for mistakes to a small, well-trained group. For example, distinguish the officials with access to send alerts about broken water mains from the select group of high-ranking officials responsible for releasing statewide or nationwide alerts when necessary. The fewer people authorized to push THE button, the fewer chances that someone mistakenly sends a false alert that ignites widespread panic across the state.
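A minimal sketch of this kind of permission allocation (role and alert-type names are assumptions for illustration): routine local notices and high-severity statewide alerts map to different sets of authorized roles, and anything not explicitly allowed is denied.

```python
# Hypothetical role-to-permission mapping; not any vendor's actual schema.
SEND_PERMISSIONS = {
    "local_advisory":  {"operator", "supervisor", "director"},  # e.g. water main break
    "statewide_alert": {"director"},                            # e.g. missile warning
}

def can_send(role: str, alert_type: str) -> bool:
    """Return True only if the role is explicitly authorized for this
    alert type; unknown alert types default to denied."""
    return role in SEND_PERMISSIONS.get(alert_type, set())
```

Defaulting unknown alert types to “denied” is the key design choice: a misconfigured or newly added alert type fails safe rather than becoming broadly sendable.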
False Alert Retractions and Response
Emergency notification systems – both the software and the processes involved in sending messages – need to strike a careful balance between speed and ease of use on one hand, and controls that minimize the likelihood of a false activation on the other. While a false alert can cause undue panic, it’s an even greater problem if a system is so cumbersome to use that a message is never sent at all.
In Hawaii’s case, the biggest problem wasn’t the initial message delivery, but how long it took to deliver the correction (38 minutes). Meanwhile, individuals immediately turned to social media, which highlights the importance of a multi-modal approach to disseminating the truth in these situations. Even if the agency had been unable to send an IPAWS retraction for some reason, rapid engagement across all the social media channels available to state agencies could have mitigated the terror.
Utilize a System with a Simple and Clear User Interface
The notification system in place should provide clear instructions to enable faster and easier retraction in rare situations like the false missile alert in Hawaii on January 13th. The ability to counteract errors is critical, and clear labeling of test modes reduces both errors and response time. According to the Hawaii Emergency Management Agency officials who responded to the incident, the “test mode” and “real mode” options look nearly identical in the interface of their mass notification system, which caused confusion. Systems must be unambiguously clear, leaving no room for hesitation or second-guessing.
Here at Rave, we spend a lot of time thinking about the text colors and fonts within the interface of our mass notification system. When something requires highlighting (like a ballistic missile alert), we make sure it stands out to users – bright colors, distinct fonts, clear warning signs. We also recommend requiring users to scroll down to the final “are you sure?” prompt, so the last confirmation takes a deliberate action. Incorporating additional steps in the message delivery process has effectively prevented false alert activations in the past. Alongside distinct colors and confirmation steps, it’s also important to find the right balance of speed and control so users can act both quickly and accurately.
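The separation of test and live sends described above can be made structural rather than cosmetic. The following is a hedged sketch (not Rave’s actual interface code): test and live are two explicit, distinct code paths, and the live path is gated behind one final affirmative confirmation instead of sitting next to the test option in a drop-down.

```python
from enum import Enum

# Hypothetical sketch: test vs. live as an explicit enum, so the two
# modes can never be confused by adjacent menu entries, and live sends
# always pass through a final confirmation callback.
class Mode(Enum):
    TEST = "TEST"
    LIVE = "LIVE"

def send_alert(message: str, mode: Mode, confirm_live) -> str:
    """Dispatch an alert. Test sends are tagged and never reach the
    public path; live sends require an explicit yes from confirm_live."""
    if mode is Mode.TEST:
        return f"[TEST ONLY - not delivered to public] {message}"
    if not confirm_live(f"Send LIVE to the public: {message!r}? (yes/no) "):
        return "Cancelled - no alert sent."
    return f"[LIVE] {message}"
```

Because the mode is a typed parameter rather than a menu selection, a caller has to name `Mode.LIVE` explicitly, and even then the send can still be cancelled at the last prompt.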
These are just a few of the many ways to help prevent false alert activations, in any location or organization. Policies, processes, and training should be reviewed and put into practice – not just for real emergencies, but for error retraction as well. If we have learned anything from the situation in Hawaii, it is that public safety officials must find the appropriate balance of design and user experience, and identify the right controls and safeguards to ensure effectiveness and minimize the risk of error, well ahead of an emergency.
Government Computer News shares more tips and professional insights from Todd Piett, Rave’s president and CEO, in the following article: Avoiding a repeat of Hawaii's 'wrong button' mistake