Big Tech is known to “move fast and break things.” But who is left in pieces? Residents in Baltimore County recently witnessed a new technology nearly shatter a child’s life.  

Last week, a squad of police officers pulled out their weapons and rushed towards a group of unarmed Black teenagers at a Baltimore County high school to search for a gun that did not exist. Why? Because the school district’s $2.4 million AI weapons detection system misidentified a bag of Doritos as a handgun. 

A 16-year-old student was placed under arrest before officers recovered the bag of chips from a nearby trash can and realized the AI’s mistake. That did little to console the traumatized child, who shared in a recent interview, “The first thing I was wondering was, am I about to die?” 

It can be tempting to view what happened in Baltimore County as an isolated glitch in an otherwise functional system. However, in our work as youth digital rights experts, we see growing evidence that AI surveillance in schools reflects what Princeton sociologist Ruha Benjamin describes as “the New Jim Code” — a human rights crisis in which digital technologies reproduce social and racial injustice. In public education, we are concerned that AI surveillance technologies are reinventing the school-to-prison pipeline. 

In light of these developments, school leaders should not ask how AI surveillance can keep students safe, but instead how they will keep students safe from AI surveillance. School officials can do this by following the lead of communities that have decommissioned harmful AI. 

For example, a school district in New York state decommissioned its AI weapons detection program after the system misidentified a 7-year-old’s lunch box as a bomb while failing to detect a knife that another child brought to school.  

Another weapons detection system came under intense public scrutiny this year after failing to detect a gunman’s rifle during an active school shooting at a suburban Nashville high school, where two students died and another was injured.  

In 2024, the Federal Trade Commission took action against an AI weapons detection company for making false claims that deceived schools about the AI’s capabilities. The FTC found that the AI routinely misidentified common school items, such as Chromebooks, binders, and water bottles, as weapons at rates of up to 50% — a multimillion-dollar technology with the accuracy of a coin flip. Researchers have emphasized that there is virtually no peer-reviewed evidence that weapons detection technology prevents violence on campus. 

Other school surveillance technologies have also come under public scrutiny. Earlier this year, a North Carolina school board voted on a bipartisan basis to reject a $3.2 million AI School Safety Pilot Program after public hearings led lawmakers to believe the technology was “not trustworthy.” 

Meanwhile, the United States Department of Justice found that a central Florida school district used a risk-assessment algorithm in a school-based predictive policing program in violation of the Americans with Disabilities Act. A separate federal court declared the program unconstitutional. 

These examples show that AI surveillance can systematically violate children’s rights and should have no place in public education. 

Baltimore County leaders should seize this opportunity to develop a comprehensive strategy to prevent the harmful use of AI in public schools through three key steps:

First, school officials must offer the community complete transparency on the weapons detection program by releasing all available data on the number of false alerts, police contacts, and student searches associated with the program. 

Second, elected leaders must hold public hearings and work with community members and domain experts to design a comprehensive AI policy for the school district that prioritizes student safety and civil rights. This policy should prohibit high-risk uses of AI, similar to New York’s ban on facial recognition in schools. Policies that provide redress to harmed individuals must also be considered — merely offering counseling is not enough. 

Third, the public workforce needs training in critical AI and digital literacy. School administrators and procurement officers must be trained on best practices in AI governance, including the 2024 federal guidance on AI and civil rights from the US Department of Education, the 2022 White House Blueprint for an AI Bill of Rights, and the AI Risk Management Framework from the National Institute of Standards and Technology. 

To be sure, schools must do everything within their power to keep kids safe. However, as many community leaders have made clear, surveillance is not safety. Our communities must protect children from Big Tech’s digital dystopia. 

Clarence Okoh is a senior attorney for civil rights and technology at TechTonic Justice. He is a co-founder of the NOTICE Coalition: No Tech Criminalization in Education. Marika Pfefferkorn is a co-founder of the Twin Cities Innovation Alliance, the Midwest Center for School Transformation, and the NOTICE Coalition.