Unveiling Cybercheck: Insights into Legal Battles Over AI’s Role in Criminal Prosecutions

Keerthi Kasturi | Law

When scientists and engineers said robots would steal our jobs, they were not wrong. Let me introduce you to Cybercheck, an artificial intelligence (AI) programme seemingly determined to take over lawyers’ jobs in the US. Law enforcement and prosecutors across the US have increasingly used this tool to assist in investigating and prosecuting serious crimes like murder. The potential benefits of using AI in the legal system are remarkable: increased efficiency, faster processing of large amounts of data, and the ability to identify patterns and connections that humans might miss. These point towards a more effective and streamlined legal system. However, the use of AI in the legal system also raises significant concerns, which we explore in this article.

What is Cybercheck?

Developed by Adam Mosher, Cybercheck was created to revolutionise criminal investigations. It uses machine learning algorithms to analyse large amounts of publicly available online information, such as social media profiles and email addresses, to pinpoint suspects’ locations and other details. The company claims the tool is over 90% accurate and can perform research tasks in hours that would take human investigators far longer, demonstrating the potential of AI to expedite legal processes.

Why are we talking about it?

Despite its widespread use, defence lawyers have raised significant concerns about the tool’s reliability and lack of transparency. They argue that its methodology is unclear and has not been independently verified. In a New York case, a judge ruled that the court would not admit Cybercheck evidence because the prosecutors failed to establish its reliability. Similarly, in Ohio, a judge blocked Cybercheck analysis when Mosher refused to disclose how the programme worked.

These issues raise serious questions about using such opaque technology as evidence and its potential to undermine defendants’ rights. This has sparked a necessary debate about the ethical implications of AI in the legal system, urging us to tread carefully in this era of technological advancement.

Legal Challenges and Allegations

Cybercheck aids law enforcement by searching parts of the web that search engines do not index. The tool compiles reports from its findings and presents them as actionable intelligence. NBC News reported that various states have paid Global Intelligence Inc., the Canadian company behind Cybercheck, between $11,000 and $35,000 for Cybercheck services.

In a recent motion related to a fatal robbery case in Akron, Ohio, defence lawyers representing two murder defendants demanded that Mosher disclose the proprietary code and algorithm. Their filings on 10th April also accused him of lying about his expertise and about the usage history of the underlying technology. These serious allegations underscore the legal challenges and ethical concerns surrounding the use of AI in the legal system, emphasising the need for a thorough examination of its implications. Mosher and Global Intelligence declined to comment on the ongoing court matters.

In another Ohio homicide case, Mosher refused to provide Cybercheck’s software to defence experts, citing its proprietary nature. This refusal led the presiding judge to block Cybercheck’s analysis. An Akron Police Department spokesperson did not respond to inquiries about the investigation, and the County Prosecutor’s Office refrained from commenting due to pending litigation. However, at an earlier hearing, the prosecution acknowledged Mosher’s strengths in software and open-source intelligence while admitting that he was still developing his legal expertise. The prosecutor, Stano, contended that any issues were likely due to misunderstandings rather than misconduct.

Cybercheck and the Akron Robbery Case

In the Akron robbery case, the police arrested two men, Coleman and Farrey Jr., in July 2021, nine months after the crime. The prosecution charged them with aggravated murder and robbery. Evidence against them included ballistics analysis and surveillance footage. In December 2022, Cybercheck’s report placed both men at the crime scene by analysing over 1 million gigabytes of web data to create cyber profiles for Coleman and Farrey. These profiles were connected to the crime scene via a network address from a Wi-Fi-enabled security camera.

However, Cybercheck’s report did not explain how it verified the connection of the suspects’ devices to the scene, and the defence experts could not find the social media accounts it cited. Testimony claimed that Cybercheck’s conclusions were 98.2% accurate, but no details supporting this methodology or figure were available. Additionally, Cybercheck had not undergone peer review. Global Intelligence later claimed that the University of Saskatchewan had reviewed the programme; the university confirmed it had a research contract with the company but said it had neither created nor reviewed the software, and the document was never peer-reviewed.

Where does that leave us?

Cybercheck’s methods raise many questions. Notably, the tool does not preserve the data it uses for its analysis, which complicates the defence’s ability to verify its findings. The creator maintains that Cybercheck processes and analyses data without indexing or collecting it, citing file sizes and governance considerations. Purely from a legal perspective, this raises a critical problem: if the defence cannot subject the evidence to verification or integrity checks, the defendant’s right to a fair trial is impeded.