So far this year there have been 38 mass shootings in the United States. In August alone, more than 53 people were killed by firearms, renewing the national conversation about gun safety. Following the massacre at a Walmart in El Paso, Texas, which left 22 people dead and 26 more injured, as well as the shooting in Odessa, Texas, where seven people were killed, many people have been left with new anxieties in a country already on edge amid rising rates of gun violence. For state and local safety managers, finding ways to prevent and mitigate gun-related incidents, which are often unpredictable, must be a priority.
One strategy for preventing mass shootings, social media and technology monitoring, has been a subject of much debate. Proponents think social media keyword searches are useful in creating valuable leads for police investigations, while opponents argue that the data-harvesting technology violates privacy and may push young people to further conceal behavior from adults. Even so, data monitoring is picking up support on the federal level, and certain agencies want to go a step further in gathering smart-device data.
Now, could monitoring of Fitbit data really be part of a proposed plan to flag potential gun-violence threats?
A plan reportedly put forth by federal government officials aims to monitor “neurobehavioral” patterns. Although the plan could work technically, it is unlikely to come to fruition, as it raises serious privacy and ethical concerns for people across the United States. On August 22, the Washington Post reported that the White House had been briefed on a proposal that would develop strategies to monitor individuals with mental illness to identify early warning signs of violent behavior. The proposal has already received pushback from mental health professionals and privacy experts, who argue that the data does not support a strong enough link between mental illness and mass murder to justify creating a watchlist of citizens.
A Closer Look at the Fitbit Data Controversy
According to the Washington Post, the proposed plan aims to establish an agency called the Health Advanced Research Projects Agency, or HARPA, which would be a subsection of the Department of Health and Human Services. The director of the agency would reportedly be appointed by the president, and the agency would have its own separate operational budget, according to sources close to the plan. DARPA, the Defense Advanced Research Projects Agency, would act as a model for the plan, with HARPA collaborating with federal agencies, the private sector, and academia.
According to the proposal obtained by the Washington Post, a key initiative of HARPA would be to monitor “smart-device” data from Apple Watches, Fitbits, Amazon Echos, and Google Home devices. The document also describes plans to collect information from healthcare providers, including fMRI, tractography, and image-analysis data. Once collected, AI and other technology would scan the data for “red flag” indicators of neuropsychiatric disturbances. According to the proposal, “Advanced analytical tools based on artificial intelligence and machine learning are rapidly improving and must be applied to the data.”
For many, reports of the proposal raise concerns in an era when people are already worried about compromised privacy and increased surveillance. In Gizmodo’s report on the proposal, mental health researchers and experts pointed out that monitoring personal devices raises ethical concerns, and that running such data through any set of risk-factor parameters will likely produce thousands, if not millions, of false positives. Surveillance this extreme may also stir distrust in the public safety system and, as Gizmodo points out, discourage people struggling with mental illness from seeking professional help for fear of ending up on a government watchlist.
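The false-positive worry follows from basic base-rate arithmetic: when the behavior being screened for is vanishingly rare, even a highly accurate model flags mostly innocent people. A quick illustration (every figure here is hypothetical, chosen only to show the math, not drawn from the proposal):

```python
# Illustrative base-rate calculation -- all numbers are hypothetical.
population = 330_000_000     # rough U.S. population
prevalence = 1e-6            # assumed: ~1 in a million poses a genuine risk
sensitivity = 0.99           # assumed: screen catches 99% of true risks
specificity = 0.99           # assumed: screen clears 99% of non-risks

actual_risks = population * prevalence
non_risks = population - actual_risks

true_flags = actual_risks * sensitivity        # correctly flagged people
false_flags = non_risks * (1 - specificity)    # innocent people flagged

print(f"Correct flags:   {true_flags:,.0f}")    # ~327
print(f"False positives: {false_flags:,.0f}")   # ~3,300,000
```

Even with an (unrealistically generous) 99%-accurate screen, roughly 3.3 million people would be wrongly flagged for every few hundred genuine cases, which is exactly the "thousands, if not millions, of false positives" the experts describe.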
Researchers and mental health professionals also remain skeptical about the link between mental illness and “explosive acts of violence.” Mental illness can play a role in violent acts, but studies don’t suggest it is a reliable predictor: no more than a quarter of mass shooters have a formal diagnosis, according to the Washington Post. Studies also indicate that other factors are much stronger indicators of a mass-violence event, including involvement with extremist organizations, a strong sense of resentment, a desire for notoriety, narcissism, a history of domestic violence, and access to firearms.
In an interview with Gizmodo, George David Annas, deputy director of forensic psychiatry at SUNY Upstate Medical University, expressed extreme skepticism that Fitbit data could provide national-security value. “The proposed data collection goes beyond absurdity when they mention the desire to collect FitBit data. I am unaware of any study linking walking too much and committing mass murder,” he told the publication. “As for the other technologies, what are these people expecting? ‘Alexa, tell me the best way to kill a lot of people really quickly’? Really?”
The concept of social media monitoring has already raised controversy in communities across the United States. Back in 2016, Orange County, Florida implemented SnapTrends, monitoring software that collects data from students’ public social media posts, according to the Washington Post. The program scans for keywords that may indicate cyberbullying, suicide threats, or criminal activity. The technology, used in communities around Orlando, made some public safety officials, local officials, parents, and students wary. Social media monitoring technology is fairly new, and there are few mandates on how transparent data collection must be or how long such data may be stored.
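Much of the wariness comes down to how crude keyword matching is. A minimal sketch of how a scan like this might work (the watchlist terms and posts are invented for illustration; SnapTrends’ actual implementation is not public):

```python
# Hypothetical keyword scan, loosely modeled on the monitoring
# described above. SnapTrends' real logic is not publicly documented.
import re

WATCHLIST = {"gun", "kill", "hurt myself"}  # invented example terms

def flag_post(text: str) -> set[str]:
    """Return any watchlist terms appearing as whole words in a post."""
    lowered = text.lower()
    return {term for term in WATCHLIST
            if re.search(r"\b" + re.escape(term) + r"\b", lowered)}

# Context-free matching flags harmless slang as a threat:
print(flag_post("We're going to kill it at the game tonight"))  # {'kill'}
print(flag_post("Band practice moved to 6pm"))                  # set()
```

Because the match carries no context, ordinary slang ("kill it at the game") triggers the same flag as a genuine threat, which is one reason these systems generate so many false alarms and so much distrust.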
Psychologists worry that keeping tabs on students may push them to keep behavior further out of view. There have also been reports of such technologies unfairly targeting minority students, further perpetuating racial bias in school discipline and the school-to-prison pipeline.
How Can Technology Improve Safety (While Respecting Consumer Privacy)?
In recent years, a national debate has developed over the role of technology in public safety and the ethical privacy concerns it raises. From facial-recognition technology to social media monitoring, it’s important to recognize the forces driving the controversy, such as the prevalence of racial bias in these practices. For technology developers, finding tools that empower individuals to take safety into their own hands without compromising personal data should be a priority.
On the other hand, reliable data can be a powerful tool for emergency response, and there are ethical applications for data-driven solutions. A vulnerable-needs registry, for example, allows emergency managers to collect resident-provided data for analysis, planning, and emergency response. That data can prove critical during power outages, evacuations, transportation interruptions, or shelter-resource planning. A secure platform that values resident privacy, using volunteered data only within the context of an emergency, is essential.
An anonymous tip-texting system is another way to raise situational awareness for first-response teams without violating residents’ privacy rights. Students who spot suspicious activity on social media, for example, are able to report posts to administrators or local law enforcement. For those wary of coming forward for fear of retaliation, the tool provides a secure line of communication. It empowers emergency managers to solicit tips from their community on rumored weapons or crime, incidents of bullying, or mental health emergencies, and to respond with appropriate care. The technology also offers real-time reporting on logs, trends, and incident patterns over time, which allows administrators to track data without resorting to monitoring tools that violate residents’ privacy.
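The key design choice in such a system is storing the report without the reporter. A rough sketch of that idea (field names and categories invented for illustration; this is not any specific vendor's design):

```python
# Hypothetical anonymous tip log with simple trend reporting.
# Categories and field names are invented for illustration only.
from collections import Counter
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Tip:
    category: str      # e.g. "bullying", "weapon", "mental health"
    message: str
    received_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
    # Deliberately no sender field: anonymity by design.

class TipLog:
    def __init__(self):
        self._tips: list[Tip] = []

    def submit(self, category: str, message: str) -> None:
        """Record a tip with no identifying information."""
        self._tips.append(Tip(category, message))

    def trends(self) -> Counter:
        """Counts per category, for incident-pattern reporting."""
        return Counter(t.category for t in self._tips)

log = TipLog()
log.submit("bullying", "Threatening messages in a group chat")
log.submit("weapon", "Rumor of a knife in a locker")
log.submit("bullying", "Same group chat again")
print(log.trends())  # Counter({'bullying': 2, 'weapon': 1})
```

Because identity is never collected, administrators can still see aggregate patterns (a spike in bullying reports, say) while individual reporters stay protected.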