In a news release published on 10 April 2018, the Serious Fraud Office (SFO) announced its intention to use artificial intelligence in all new investigations. The organisation said its new AI system, which automates document analysis, will scan millions of documents and be able to detect patterns, remove duplicates, group information by topic and eventually sift for relevance. The new ‘robot’ was trialled in the Rolls-Royce case to detect material that might attract legal professional privilege among around 30 million documents. This was the first criminal case in the UK to make use of AI, and it was hailed as a success by the SFO.
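The SFO has not published the technical design of its system, but two of the tasks described above – removing duplicates and a first-pass sift for material that might attract privilege – can be illustrated in a minimal sketch. Everything here (the keyword list, the sample documents, the function names) is a hypothetical simplification, not the SFO's actual method:

```python
import hashlib

# Hypothetical first-pass terms; a real system would use far more
# sophisticated models than simple keyword matching.
PRIVILEGE_KEYWORDS = {"legal advice", "counsel", "privileged", "solicitor"}

def dedupe(documents):
    """Drop exact duplicates by hashing the normalised text of each document."""
    seen, unique = set(), []
    for doc in documents:
        digest = hashlib.sha256(doc.strip().lower().encode()).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique

def flag_privilege(documents):
    """Flag documents containing terms that may attract legal professional privilege."""
    return [doc for doc in documents
            if any(k in doc.lower() for k in PRIVILEGE_KEYWORDS)]

docs = [
    "Email re: engine contract pricing.",
    "Email re: engine contract pricing.",           # exact duplicate
    "Note of legal advice from external counsel.",
]
unique = dedupe(docs)        # duplicate removed
flagged = flag_privilege(unique)
```

Even this toy version shows why human review remains essential: a keyword sift will both over-flag innocuous documents and miss privileged material phrased differently.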
Up until this point, the job of sifting through complex documents was carried out by independent barristers. The SFO highlighted various benefits of its new and improved document review capability, including quicker investigations; reduced costs; a lower error rate (compared to humans doing the work); and the ability of officers to spend more time on investigation and prosecution work. For the SFO, which must analyse vast amounts of data during its investigations, the advantages of using AI to automate these document-heavy tasks are clear.
However, as a barrister who specialises in defending individuals accused of serious, high-value fraud, I’m naturally concerned about the risks of the SFO’s new, so-called ‘Robo-Lawyer’. The outcome of an SFO investigation can have a significant impact on those involved, most obviously through financial penalties – Rolls-Royce paid £671m under the deferred prosecution agreement (DPA) made with the SFO, which was approximately equivalent to its forecasted profits for 2016. Damage to the reputation of companies and individuals can also be significant, as well as to future employment opportunities for employees who are dismissed and prosecuted as a result of the investigation. With so much at risk, it is important that I remain vigilant when it comes to the processes used by the SFO and other agencies to investigate those suspected of committing fraud.
As with all current and future uses of AI in the legal justice system, the critical concern is how accurate these technologies are. It is widely recommended by professionals from both the legal and tech sectors that these systems be used to complement the work of humans, rather than replace it. In theory, this means errors are much less likely to occur, as the expertise of both the robot and the human has been applied in the process. Human input, including insight, creativity and background knowledge, is essential to ensure the outcome – i.e. the decision whether to prosecute – is the right one.
In its news release, the SFO explained:
“By automating document analysis, AI technology allows the SFO to investigate more quickly, reduce costs and achieve a lower error rate than through the work of human lawyers, alone.”
This at least suggests the SFO intends for its AI platform to facilitate, rather than take over, the document analysis work of its case teams. However, clear and precise details are needed on how this will work in practice. To avoid serious errors, risk assessments, together with increased and continued transparency around the relationship between automated tasks and human input, are essential to ensure the process is monitored and standards are maintained.
Human rights and the rule of law
An emphasis on AI technology to assist with criminal investigations raises concerns about the potential to weaken the fundamental principles that underpin our justice system, namely the rule of law and an individual’s right to a fair trial. Data privacy is another key concern. It is essential that the SFO, and other enforcement agencies, do not overstep their powers when using AI to investigate those suspected of serious fraud. Indeed, systematic bias has already been observed in AI systems used to predict crime, a consequence of the pattern-detecting nature of their algorithms.
Disclosure of evidence
Another pressing question I have is whether the defence will have access to the AI-generated data relied on by the prosecution. Under the current law on disclosure of evidence in criminal prosecutions, which does not take account of algorithms being used to review evidence, this seems unlikely. To avoid unfairness and a lack of transparency, those acting for the defence should have access to the AI platforms used by the investigating body in order to review and question the process, and to carry out an independent analysis. Disclosure of evidence in criminal cases is currently under scrutiny in an ongoing Justice Committee review into “extensive issues” with the current system. I await the outcome, and whether the issue of AI will be sufficiently addressed.
The use of AI by criminal investigation and prosecution bodies is inevitable. Indeed, these state-of-the-art systems, which can produce results 2,000 times faster than a human, have many benefits for enforcement bodies under time and financial pressure to review masses of data and trace stolen assets. To reduce errors and protect the fundamental principles of our justice system, robust mechanisms are required to monitor the use of these platforms. It is important that, as a member of the criminal defence community, I remain up to date with these changes and aware of the risk of injustice.
Mark Kelly, Fraud Defence Barrister in London, UK
My professional experience, approachability and considerable expertise mean that you will be in a very safe pair of hands when it comes to your defence, and my track record is second to none. I consistently obtain favourable results for clients in a variety of fraud, financial and regulatory cases.
Serving Manchester, Birmingham, Leeds, London, Bristol and the rest of the UK. If you have any queries about issues raised in this blog, or if you want to discuss your case, please do not hesitate to get in touch with me directly on 020 8108 7186 or fill out a contact form and we will get back to you as soon as possible.