"Across the country, police departments have adopted automated software platforms driven by artificial intelligence (AI) to compile and analyze data. These data fusion tools are poised to change the face of American policing; they promise to help departments forecast crimes, flag suspicious patterns of activity, identify threats, and resolve cases faster. However, many nascent data fusion systems have yet to prove their worth. Without robust safeguards, they risk generating inaccurate results, perpetuating bias, and undermining individual rights.
Police departments have ready access to crime-related data like arrest records and crime trends, commercially available information purchased from data brokers, and data collected through surveillance technologies such as social media monitoring software and video surveillance networks. Police officers analyze this and other data with the aim of responding to crime in real time, expeditiously solving cases, and even predicting where crimes are likely to occur. Data fusion software vendors make lofty claims that their technologies use AI to supercharge this process. One company describes its tool as "AI providing steroids or creating superhuman capabilities" for crime analysts.
The growing use of these tools raises serious concerns. Data fusion software allows users to extract volumes of information about people not suspected of criminal activity. It also relies on data from systems that are susceptible to bias and inaccuracy, including social media monitoring tools that cannot parse the complexities of online lingo, gunshot detection systems that wrongly flag innocuous sounds, and facial recognition software whose determinations are often flawed or inconsistent, particularly when applied to people of color."
https://www.brennancenter.org/our-work/research-reports/dangers-unregulated-ai-policing
#AI #AIPolicing #PredictivePolicing #Surveillance #PoliceState