The Law That Shapes Us: "Law, Artificial Intelligence and Human Rights: Connecting the Dots"


Launching our new project, The Law That Shapes Us, John Croker, researcher at the Bonavero Institute of Human Rights, explores Law, AI and Human Rights through two high-profile case studies.

It’s an iconic scene in the British comedy oeuvre: a bank worker with thickset glasses sits behind a large computer monitor that looms over her. Another woman enters the room on crutches, carrying visible injuries. She sits in a chair opposite the desk and begins explaining how she was knocked to the ground and her handbag, with all her credit cards, was taken. She needs to pay her gas bill and buy a birthday gift for her grandson. The bank worker, who appears emotionally detached from the situation, says ‘well, we’d better get you some new ones then’. After typing some text into the computer, the bank worker looks up at the injured woman and says ‘computer says no’, before coughing on her as her face wilts.

The ‘computer says no’ skit (of which there are many variations) is both funny and shocking because of how harsh, abrupt and cold the response from the person behind the computer (and from the computer itself) is, whether the request is a mundane day-to-day one or, as in the scene above, a circumstance that warrants a compassionate response.

Regardless of the merits of the person seeking assistance, the computer always says no.

Technology permeates every part of our lives, and many people have embraced it as part of how they navigate each day. There are apps that can ‘assist’ with pretty much any task imaginable, and people who will now turn to an app to undertake those tasks. Technology can alleviate suffering and bring joy, both to us individually and to our collective wellbeing. Artificial Intelligence (AI), for example, has been used successfully to make our transport systems work more effectively, reducing the time we spend getting to the places we need to go. It has been used to improve the response times of emergency services to calls for help. It has helped medical practitioners diagnose certain types of skin cancer more effectively, and AI-assisted eye checks can even identify the risk of heart disease, non-invasively, in under a minute.

Technology can help reduce our everyday burdens, and can make our lives easier, healthier and more fulfilling.

[The Post Office Scandal and the Robodebt System] show that where technology and AI go wrong, the resulting injustice can have unprecedented impacts across societies.
John Croker

So how does this gel with the vision of technology in the Little Britain skit? Commentators often foresee either a stress-free future for us all, facilitated by technology, or a dystopian future in which we become subservient to the technology that increasingly controls our lives.

Both futures are possible, but the reality is more likely to take aspects of the good and the bad.

Technology has been at the heart of two injustices that courts have labelled significant miscarriages of justice. The first will now be familiar to many people in the UK: it is colloquially known as the ‘Post Office’ or ‘Horizon’ scandal. The second is from Australia, where the Commonwealth Government sought to use AI to identify overpayments in the welfare system through what is colloquially known as the ‘Robodebt System’. The first example resulted in the most widespread miscarriage of justice in the history of the UK legal system.[1] The second was labelled “a shameful chapter” in government administration in Australia, led to the government unlawfully asserting debts amounting to $1.763 billion against 433,000 Australians,[2] and is now the subject of a Royal Commission seeking to identify how public policy failures could have been made on such a significant scale.[3]

Both examples show that where technology and AI go wrong, the resulting injustice can have unprecedented impacts across societies.

The Horizon/Post Office Scandal

The post office is at the centre of many communities across the UK, not only for post but also for banking, commerce and access to other services. Post office branches are managed by small business owners known as sub-postmasters, who are at the heart of how these businesses work and central to the trust that communities place in the Post Office as an institution.

In 1999 the UK Post Office procured a computer system called ‘Horizon’. The system was ‘cutting edge’ and brought the Post Office into the computer age.[4] It was designed to account for all transactions and track all stock in Post Office branches, and was used to oversee the work of sub-postmasters.

Soon after the Horizon system was rolled out across Post Office branches in the UK, many sub-postmasters noticed shortfalls in their accounts. Occasionally these shortfalls were for modest amounts, or were even calculated in favour of the sub-postmaster, but on numerous occasions they ran into the thousands of pounds.[5] The Post Office’s response was often to prosecute these sub-postmasters to the fullest extent, including pushing for prison sentences in a number of cases.[6]

The Post Office relied entirely on the Horizon system to identify these shortfalls, and used the data created by the system to prosecute or pursue civil remedies against the sub-postmasters. The Post Office did not seek corroborating evidence that any of the shortfalls were the product of fraud or theft, instead relying wholly on the system as sufficient evidence to pursue the sub-postmasters for ‘repayment’ through legal means.[7] This went on for many years before a group of sub-postmasters, through public campaigns and legal proceedings, clearly demonstrated that the Horizon system contained a large number of software coding errors, bugs and defects, and that the system itself had incorrectly ‘identified’ accounting shortfalls. The Horizon software errors resulted in many tragic cases of hardworking sub-postmasters across the country being wrongly prosecuted, and in some cases imprisoned, for crimes they had not committed.[8]
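To make concrete how a software defect can manufacture an apparent theft, consider a deliberately simplified sketch. Among the defects identified in the litigation were bugs in which transactions could be recorded more than once; the snippet below is a purely hypothetical illustration of that failure mode (the names and figures are invented, and this is not Horizon's code), showing how a ledger that double-counts a single transfer 'expects' cash that never existed.

```python
# Hypothetical sketch of a transaction-duplication bug (not Horizon's code):
# the branch's physical cash is correct, but the electronic ledger books a
# single transfer twice, so the system 'expects' more cash than exists.

from dataclasses import dataclass, field

@dataclass
class BranchLedger:
    expected_cash: float = 0.0
    events: list = field(default_factory=list)

    def record_cash_in(self, amount: float, times_received: int = 1) -> None:
        # BUG: when a timed-out message is resent, the duplicate is booked
        # again instead of being de-duplicated by a unique transaction ID.
        for _ in range(times_received):
            self.expected_cash += amount
            self.events.append(("cash_in", amount))

ledger = BranchLedger()
ledger.record_cash_in(8_000.00, times_received=2)  # one real transfer, sent twice

actual_cash = 8_000.00                        # what is physically in the till
shortfall = ledger.expected_cash - actual_cash
print(f"System-reported shortfall: £{shortfall:,.2f}")  # £8,000.00 'missing'
```

From the sub-postmaster's side of the counter, the till balances; only the system's internal ledger says otherwise, which is precisely the position the sub-postmasters below describe.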

It was difficult for the sub-postmasters to identify how the system had gone wrong. One sub-postmaster who operated a Post Office in Wales, Mr Alan Bates, had a shortfall of just over one thousand pounds emerge on the Horizon system soon after it was installed at his branch.[9] Mr Bates sought assistance from the Post Office’s auditors, and neither he nor they could account for the shortfall identified by the system.[10] Mr Bates explained in a subsequent court hearing, "due to the way that Horizon worked in practice, it was impossible for me, as sub-postmaster, to accurately track and understand transactions that had taken place and, therefore, determine whether an actual loss had occurred and satisfy myself that it had arisen due to my negligence, carelessness or error."[11]

The Horizon software errors resulted in many tragic cases of hardworking sub-postmasters across the country being wrongly prosecuted, and in some cases imprisoned, for crimes they had not committed.
John Croker

Another sub-postmaster, Mrs Stubbs, similarly found significant and growing shortfalls identified by the Horizon system in the lead-up to the Post Office cancelling her licence to operate a Post Office: "I investigated matters as far as I possibly could. However, I was unable to ascertain the possible cause(s) of the shortfalls without assistance from the Post Office. I was able to review the data printable from the Horizon system, but I could not interrogate it without access to the data that the Post Office held or had access to. When you are looking for a reason for an error, you need to have all the information … There is simply nothing you can do if you do not have sight of the other end of the transactions - particularly if the person who does is requiring you to find and prove the cause of the problem. The transaction logs gave me no more information than was showing on the Horizon system so I was unable to investigate further."[12]

Both Mr Bates and Mrs Stubbs tried to understand how the ‘debts’ the system had identified came about, but it was impossible for them to work out how the system operated or how it had calculated those amounts. The system was inscrutable, and they were unable to ‘prove’ their innocence.

The Robodebt System – Injustice at Scale

In the ‘Robodebt’ case, the Australian Government procured a ‘data-matching’ system that cross-checked welfare data with tax office data to identify where welfare recipients had been ‘overpaid’.[13] The government then unlawfully (it had no legal power to assert debts in this manner) sought repayment of whatever the algorithm had identified each person as owing, including through private debt collection agencies.[14]
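The central defect, as established in the court proceedings and examined by the Royal Commission, was 'income averaging': annual tax office income was smeared evenly across the year's fortnights and compared against the income the recipient had actually reported each fortnight. The sketch below is a hypothetical illustration of that logic, not the government's actual system; the names, figures and the simplified claw-back rule are all invented for this example.

```python
# Hypothetical illustration of Robodebt-style 'income averaging'
# (invented names, figures and claw-back rule; not the actual system).

FORTNIGHTS_PER_YEAR = 26

def raise_debt(annual_ato_income: float,
               reported_fortnightly: list[float],
               benefit_paid: list[float],
               clawback_rate: float = 0.5) -> float:
    """Flag an 'overpayment' wherever averaged income exceeds reported
    income -- even though benefits are assessed on actual fortnightly
    earnings, not a yearly average."""
    average = annual_ato_income / FORTNIGHTS_PER_YEAR
    debt = 0.0
    for reported, paid in zip(reported_fortnightly, benefit_paid):
        if average > reported:
            # Treat the gap as under-reporting and claw back benefit,
            # capped at what was actually paid that fortnight.
            debt += min(paid, (average - reported) * clawback_rate)
    return debt

# A casual worker earns $13,000 in one short, intensive contract, correctly
# reports $0 for the fortnights she was unemployed, and is still treated as
# having earned $500 in *every* fortnight -- producing a phantom debt.
print(raise_debt(13_000.00, [0.0] * 26, [560.00] * 26))  # 6500.0
```

The point is not the arithmetic but the inference: a yearly average says nothing about what was earned in any particular fortnight, yet the scheme treated the discrepancy as proof of a debt and put the onus on the recipient to disprove it.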

The Federal Court of Australia noted that deploying this technology against welfare recipients “who are marginalised or vulnerable and ill-equipped to properly understand or to challenge the basis of the asserted debts so as to protect their own legal rights” was a particularly egregious aspect of the system, one that amounted to an injustice.[15] Many of the individuals identified by the ‘Robodebt’ algorithm spoke in court proceedings of the ‘shame and hurt’ they felt at being wrongly branded “welfare cheats”, and heart-wrenching experiences were shared, including that of a mother who attributed her son’s suicide to his having been targeted by government authorities following the use of this system.[16]

The Australian Government has now established a Royal Commission into the Robodebt scheme, to examine how the system was conceived, designed and implemented, and why the clear risks of using an algorithm in this way were not appropriately addressed before it was deployed against welfare recipients.[17]

Conclusion

Both examples show how significant injustices can result where technology is used without appropriate design, transparency and regulation. And while the ‘computer says no’ skit is amusing in the various guises deployed across the Little Britain series, the risk of injustice resulting from the use (or misuse) of technologies such as AI is anything but comical, and demands urgent research and regulation to avoid these outcomes.

A recent House of Lords committee report into the use of AI in the legal system has cautioned against the unregulated development of AI and its deployment in government and private systems of the kind described above. One significant challenge is the transparency and accountability of the technology. The experience of Mr Bates and Mrs Stubbs, who were unable to ‘peek under the hood’ of the Horizon system to understand how it had calculated the shortfalls held against them, reflects one of the key risks identified in the House of Lords report.[18] Indeed, as one expert witness told the Committee, “the humans have to understand how [the technology] is working in order to be able to spot the times when they need to not trust the technology”.[19]

... while the ‘computer says no’ skit is amusing in the various guises deployed across the Little Britain series, the risk of injustice resulting from the use (or misuse) of technologies such as AI is anything but comical, and demands urgent research and regulation to avoid these outcomes.
John Croker

Such difficulties add a further burden to processes that are already complex and costly for individuals to challenge, such as contesting adverse decisions made by government departments or seeking review of adverse credit scores issued by banks. There is a significant risk that the Horizon and Robodebt examples will be repeated unless AI is designed and used in a manner consistent with human rights law. This includes building algorithms on ‘human rights by design’ principles, so that accountability, transparency and rights of appeal are features of how these tools work, as sketched below.
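What might one strand of 'human rights by design' look like in practice? The sketch below is a minimal, hypothetical illustration (all field names and the review route are invented) of decision transparency: every automated decision is emitted together with the exact inputs, the rule version that produced it, and a route of appeal, so that an affected person is never left, as Mr Bates and Mrs Stubbs were, without sight of 'the other end of the transaction'.

```python
# Minimal, hypothetical sketch of 'transparency by design': each automated
# decision carries its inputs, rule version and appeal route, so it can be
# inspected and contested. Field names and values are invented.

import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    subject_id: str
    decision: str          # e.g. "debt_raised" or "no_action"
    amount: float
    inputs: dict           # the exact data the rule saw
    rule_version: str      # which version of the logic ran
    made_at: str           # timestamp of the decision
    appeal_route: str      # how a human can contest the outcome

def assess(subject_id: str, reported: float, matched: float) -> DecisionRecord:
    discrepancy = matched - reported
    return DecisionRecord(
        subject_id=subject_id,
        decision="debt_raised" if discrepancy > 0 else "no_action",
        amount=max(discrepancy, 0.0),
        inputs={"reported": reported, "matched": matched},
        rule_version="assess-v1.0",
        made_at=datetime.now(timezone.utc).isoformat(),
        appeal_route="internal review, then independent tribunal",
    )

record = assess("subject-001", reported=0.0, matched=500.0)
print(json.dumps(asdict(record), indent=2))  # auditable, contestable output
```

Nothing about such a record makes the underlying decision correct, but it makes the decision contestable, which is the safeguard both scandals lacked.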

As Baroness Hamwee recently asked: "What would it be like to be convicted and imprisoned on the basis of AI which you don’t understand and which you can’t challenge? Without proper safeguards, advanced technologies may affect human rights, undermine the fairness of trials, worsen inequalities and weaken the rule of law. The tools available must be fit for purpose, and not be used unchecked."[20]

Both the Post Office Scandal and Robodebt give us some insight into what is at stake here. It is imperative that we are able to harness emerging technologies whilst also ensuring such tools are used transparently, accountably, and in a manner that protects and promotes the human rights of all people who are impacted by them.

Follow the conversation via #TheLawThatShapesUs and find John on LinkedIn.

 

References

[1] BBC News

[2] Prygodicz v Commonwealth of Australia (No 2) [2021] FCA 634, 4, 5.

[3] Royal Commission into the Robodebt Scheme

[4] Bates v Post Office (No 3) [2019] EWHC 606 (QB), 6.

[5] Ibid, 10.

[6] Ibid, 16.

[7] Ibid, 18.

[8] Ibid, 7.

[9] Ibid, 107.

[10] Ibid, 107.

[11] Ibid, 107.

[12] Ibid, 170.

[13] Prygodicz v Commonwealth of Australia (No 2) [2021] FCA 634, 2.

[14] Ibid, 4.

[15] Ibid, 7.

[16] Ibid, 23.

[17] Royal Commission into the Robodebt Scheme, Letters Patent

[18] House of Lords, Justice and Home Affairs Committee, 1st Report of Session 2021-22, Technology Rules? The advent of new technologies in the justice system, 30 March 2022, 50.

[19] Professor Carole McCartney, Ibid.

[20] Baroness Hamwee, Chair of Justice and Home Affairs Committee, House of Lords
