- Police refuses to provide details, claims it follows guidelines
This week, a newspaper reported the capture of a crime suspect. The report stated that the Police had produced eight different images of the suspect through “AI technology” and that these had been used to identify him, as he had changed his appearance.
When The Sunday Morning inquired from Deputy Minister of Public Security Sunil Watagala about the use of Artificial Intelligence (AI) in law enforcement, he suggested speaking to the Police, saying he was unable to answer the question.
While admitting that the “latest technology,” including AI, was used in investigations, Police Spokesperson Assistant Superintendent of Police (ASP) F.U. Wootler declined to disclose how such tools were used or the guidelines governing their use.
“Overall, you can say that it is AI, but I am not going to divulge the way we did it because it might hinder future operations. As to guidelines, we do have guidelines for its use, but I cannot share them.”
Further, the Spokesperson asserted that the Police was “using all modern technology” and that “nobody could escape” law enforcement.
AI algorithms used in forensic processes
Most commonly, references to AI are about Large Language Models (LLMs) like OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude. Studies have suggested that AI algorithms will eventually be used in forensic processes such as fingerprint analysis, facial recognition, and ballistic comparison.
These tools show potential because they can rapidly process large data sets and identify subtle patterns that human analysts may miss.
Prior to the use of generative AI, the common visual aid used in identifying suspects was a hand-drawn pencil sketch, which would be shown to a witness for corroboration and then shared with the media.
However, generative AI images are increasingly being used for this purpose.
Corroboration a must
“They probably still do a pencil sketch and get generative AI to fill in the blanks,” said cybersecurity and AI policy expert Asela Waidyalankara.
While admitting that this might be a helpful tool for law enforcement, he highlighted the importance of these generated images being corroborated by witnesses.
“That is where due process and transparency come in,” he asserted.
Generative AI platforms use pre-trained data, which could lead to biases in the images they generate. Waidyalankara illustrated this through an example: “If you prompt ChatGPT to generate an analogue watch on which the time is 2.08, it cannot do that. It will always come as 10.10 because that is the time on all the watch images on the internet, which is how watchmakers show the watch face.”
The Sunday Morning generated this image and the result was as suggested.
“What we do not want is the misidentification of an innocent person,” Waidyalankara added.
Recently, a man from Brooklyn, New York was reportedly arrested after facial recognition technology used by the New York City Police Department misidentified him.
However, an argument made in favour of the use of generative AI is that pencil sketches could also lead to misidentifications.
AI literacy to the fore
This is when the issue of AI literacy enters the conversation. While those who see a pencil sketch would know that it is not 100% accurate because it is only a sketch, not all members of the public who see an AI-generated image of a crime suspect would know that the image could be inaccurate.
Waidyalankara highlighted the importance of law enforcement bodies maintaining transparency about the processes followed to generate images and indicating this clearly.
“Sometimes photorealistic images are generated so it is important to mark them as such. You have to be transparent about its use. If not, innocents will be rounded up because AI does not know the difference,” he said.
Noting that there were specialised platforms built for the use of AI in law enforcement, he pointed out, however, that these could be expensive, whereas ‘off-the-shelf’ software like ChatGPT may give more cause for concern.
Recently, California, home to major tech companies, passed a law requiring the Police to disclose how it uses generative AI.
“I would like the Police to have a formal procedure when it comes to these processes,” Waidyalankara said.
Research reveals that the accuracy of information, undermined by hallucinations, and the ‘black box’ effect, where the reasoning behind a conclusion reached by AI cannot be explained, are areas of concern, while privacy and data protection are other areas in which the use of AI in criminal procedures and forensic investigations comes into question.
Presumption of innocence
Committee for Protecting Rights of Prisoners (CPRP) Chairman Attorney-at-Law Senaka Perera emphasised the importance of leading with the principle of the presumption of innocence.
“Presumption of innocence should be maintained and not damaged; this should be considered during the use of AI.”
He felt that the use of AI tools in investigations was essential, as criminals too were using the latest technology to aid their activities.