== The Suspicious System: a conversation on the rise of automated bureaucracies ==


''What do the rise of govtech, ethnic profiling and cloud computation have in common? At [https://titipi.org The Institute for Technology in the Public Interest], we have been trying to make sense of "digital transformation" and the way it changes the ''what'', ''who'', and ''how'' of public institutions. We are concerned about their wide adoption of Big Tech solutionism and their increasing reliance on automated bureaucracies. Can governments still be, and be held, responsible for their policies once they have handed skills and expertise over to consultancies and tech companies? So when an international team of investigative journalists reported on yet another harmful application of “[https://www.lighthousereports.com/investigation/suspicion-machines/ algorithmic fraud prediction]”, we were anything but surprised. The machine learning algorithms used by the municipality of Rotterdam were developed by Accenture, a global consultancy company, and left to make their biased decisions about whom to investigate for welfare fraud for many years.''<br>
''In the weeks that followed, we started to string together a long list of scandalous algorithmic experiments in The Netherlands, wondering whether or not the case was related to a shift to cloud services, how the promise of fair algorithms blocks urgent conversations about systemic racism and oppression, and why it is important to stay with actual case studies and lived experience.''
<br>



Q: When the story came out, we felt right away that we needed to draw a connection between the case in Rotterdam, the Toeslagenaffaire (“Allowances affair”), and reports from whistle-blowers about ethnic profiling within the police in The Hague. Why?

A: Looking at these different cases over time, you start to see how violent and ugly profiling and databases can be, and how, once designed into replicable algorithms, they function as oppressive operations. Formally meant to administer policy, they chart people, govern communities and enforce rules and regulations based on stereotypes. The application of algorithmic fraud detection in Rotterdam is not a stand-alone case: the Toeslagenaffaire, ethnic profiling by police forces and another case concerning donations to Muslim organisations shed a harsh light on a broader social and economic reality. In each case, the underlying racist or ethnic profiling would not have been possible without the design and help of public institutions.

Only a few years ago, the Dutch national government used a machine learning system to spot “irregularities” in the allocation of childcare benefits. It led to more than 20,000 households being wrongly accused of fraud and subsequently cut off from support. Of course this had devastating consequences for many already precarized lives. It is important to remember that the first batch of people in the Toeslagenaffaire databases were mostly migrant parents or families with “non-western” names; what most had in common with other eventual victims was their class disposition. There are many tragic stories. In one chilling case a claimant committed suicide after she was charged with fraud. There are many broken families, divorces, and people who ended up on the street. For years, desperate families were crying for help, and no one believed them. The Toeslagenaffaire finally led to a major political crisis when a group of journalists that had been following these parallel stories joined forces and investigated. In the initial phases, none of the accountable institutions and ministries admitted anything, maintaining their racist and classist lies instead, until the evidence of distortion and wilful neglect grew to undeniable proportions and a growing number of whistle-blowers joined ranks. At present, the investigations, compensations and retributions are lagging behind and, we can probably assume, are wilfully delayed. Another 'welfare-fraud-prediction' case, almost identical to the one in Rotterdam, concerned the creation of so-called “black lists” used in the banking and finance sector to filter out Muslim clients, both in the case of income tax “irregularities” and of so-called “donation fraud”, meaning charity gifts to and from Muslims and Muslim organisations. The maths were fairly simple: criminal conduct was assumed to be detectable by collecting data about a combination of three basic features: Muslim charity/gifting (finance), double nationality (race) and welfare (class). This led to the tragic case of the wrongful allegation against theology professor Sofjan Siregar. The long process to clear his name (he died of a stroke before he was acquitted, which his friends ascribed to the enormous stress, shame and anxiety the case had caused him) exposed that between 2013 and 2020 ALL Muslim gifts and charity to ALL possible mosques were marked as suspect by data analysts. Groups like Meldpunt Islamofobie have argued for years that this needs to be analysed as an intersection of de-radicalisation policy, counter-terrorism legality and Islamophobia in banking, amounting to institutional racism. Their meticulous investigation of the so-called "bunq casus", together with additional surveys among Muslim communities showing similar examples, has resulted in hundreds of stories about Muslim clients not being able to open business bank accounts. For many civil society organisations it is nearly impossible to function without one.
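To make concrete how little "maths" such a system needs, here is a purely hypothetical sketch of the kind of rule-based flagging described above: a handful of stereotyped proxy attributes combined into a "suspicion" flag. The field names, the data structure and the combination rule are our own illustrative assumptions, not a reconstruction of any actual system, none of which has been published.

<syntaxhighlight lang="python">
# Hypothetical illustration only: a crude rule-based "suspicion" flag of the kind
# described above. All field names and the combination rule are assumptions made
# for the sake of argument.
from dataclasses import dataclass


@dataclass
class Record:
    gave_to_mosque: bool     # proxy for religion ("finance")
    dual_nationality: bool   # proxy for race/ethnicity
    receives_welfare: bool   # proxy for class


def flag_as_suspect(record: Record) -> bool:
    # The "maths" amount to little more than an AND over stereotyped proxies:
    # no evidence of actual wrongdoing enters the calculation at all.
    return record.gave_to_mosque and record.dual_nationality and record.receives_welfare


# A person matching all three proxies is marked for investigation,
# regardless of whether any fraud ever occurred.
print(flag_as_suspect(Record(True, True, True)))    # True  -> "suspect"
print(flag_as_suspect(Record(False, True, True)))   # False
</syntaxhighlight>

The point of the sketch is not fidelity to any specific implementation, but to show how quickly stereotyped proxies turn into an automated accusation once they are written down as a rule and run over an entire population.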

Ethnic profiling is of course a well-known remit of policing. The police in The Hague, and specifically Bureau Hoefkade, are notorious for their racist profiling and violent subjugation of black and brown youth. A high-profile officer of Moroccan descent, Fatima Aboulouafa, tried to raise awareness internally about colleagues speaking and acting in explicitly racist ways, such as in WhatsApp group messages where police officers consistently referred to Moroccans as “kutmarokkanen” (cunt Moroccans) and to themselves as “Marokkanenverdelgers” (eradicators of Moroccans/weedkillers). After repeated attempts to change the culture, and after her requests for better policy were ignored and refused, she shared her experiences of racism online throughout 2019. Aboulouafa was initially valued as a whistle-blower and her testimony led to many similar stories coming out. Everybody knew this was happening, but the sheer level of corroboration, and the fact that such a high-ranking officer took the lead, was different. She was eventually blamed for creating unrest and for casting her colleagues in a negative light. She not only lost her job but had to appear in court. In an astonishing twist, the judge concluded in 2022 that the police force was not to be held accountable and that the whistle-blower was to lose her redundancy compensation.

Q: It is really worrying to see the systemic issues emerge when we look at these cases together. It is also striking that when Muslims are profiled there is apparently no reason to doubt the state, but when families, childcare, and single mothers are at the centre, doubting the policies and their execution becomes a possibility. Anyway, to come back to the phenomenon of automated administration that we have been looking at: what do you think happens when digital transformation meets institutional racism?

A: The automation of administration reduces citizenship rights by increasing executive control over the enforcement of government policies. At scale, such systems can be used to enforce policies across a population in a decontextualized manner. In other words, these systems increase the power of executives in policy making, while reducing that of the individuals who are subject to these policies. Administrations use implicit or explicit racist discourse to justify the implementation of these techniques, which simply erode democratic rights. These systems will eventually be rolled out beyond racialized populations, but the reckless experimentation with people’s lives finds little pushback because it concerns people who are already considered “not quite worthy of our rights”. So digital transformation or e-governance shifts the way governments interact with citizens. The shift is typically framed as a technical matter which affects only internal operations, but as the horrifying cases that we threaded together clearly show, the automation of internal operations is social in fundamental ways and has crucial external consequences. Failures in algorithmic systems are often treated as either yet another example of technological birth pains, or as a side effect of opaque, complex and biased technologies in the hands of underskilled civil servants. But it is important to remember that the implementation of digital transformation is technologically, economically and politically charged.

Q: The temporal ramifications are different, right?

A: Yes. The computational, digitized version of discriminatory thinking and practice is more efficient and can be transported over time; this is important. The techniques are replicable from one context to the next, from one ministry to the next, and from the present to the future. Those people who have been on a blacklist, or who were targeted by fraud detection algorithms, will be traceable forever. It also matters that these algorithms are developed by commercial companies. It means that the mode of production of digital transformation is no longer part of the public policy sphere; it has been delegated to, and become part of, the sphere of profit accumulation.

A: The implications are also harmful because of the criminalization of the welfare system. Racialised and precarized communities have many reasons to distrust national and local governments. But the use of e-governance ‒such as in the Rotterdam fraud detection case‒ demonstrates how quickly, and how silently, the inequalities and violence inflicted on communities can be deepened through the administration of welfare.

A: It is the collision of a complex set of conditions: partly because these technologies can work at a much bigger scale, but also partly because of the false promise that the consultancy groups hired by the municipality to develop the algorithms and technological infrastructures will be able to deliver decisions based on the "right" analysis. Instead, they could have trusted and trained civil servants to make these decisions, or found ways for citizens to self-certify their need for welfare support. And sometimes, as in the case of the global consultancy company McKinsey making decisions in France, it is justified as better than a decision made by a burnt-out worker.

A: Some of our friends who experienced or witnessed deliberate bias and discrimination in their workplace wonder if AI could maybe be less biased than their co-workers. One of them works at a Dutch ministry. If you read the stories coming out of the Dutch ministry of foreign affairs, which was recently investigated and shown to harbour astonishing levels of racism, bullying and misuse of power, you can see where this comes from. In this context, you can imagine that people would say: I'd rather have a robot making decisions. But two wrongs do not make a right, and one type of wrong has very particular implications.

Q: Maybe we can say that digital transformation is about reducing what can be considered "risky relationality"?

A: Indeed. Instead of building the capacity to address people in a way that accommodates the social conflicts they may be entangled in, such as racism and poverty, e-governance systems promise to remove the "biased case worker" as a risk factor and turn to treating people as data points in a scalable population. These same systems invite a view of policy enforcement that is "agile", as in software production, with digitized policies that can be experimented with, updated and changed at any moment. This produces great insecurity for anyone depending on social support, like people counting on childcare benefits or welfare. However natural it seems for governments to continually increase digitisation, it is a political choice to implement processes that systemically externalize risk. And these risks obviously have different ramifications depending on who you are.

A: The issue is that the consultancy firms and the city administrators trust AI to make the "right" decision, and for them the right decision is also a racist decision. In their eyes they can trust the AI to be more racist than they can trust their colleagues to be. It also protects particular groups from being targeted, because part of the right-wing agenda is that they don't want to accuse lots of middle-class white mothers of fraud, because that affects their voter base. They want to uphold the conservative values of the white middle-class family.

Q: Right. And if there are still some reasonable humans in the loop, these people might not have the time, training, authority or capacity to question the decision of the algorithm?

A: Yes. So instead of addressing the causes of oppression and systemic racism, situations of friction and encounter are being replaced by technocratic processes such as “automated fraud prediction”. If we can just make them fair or neutral, so the stories go, these processes can then claim to act as arbiters of social conflicts and disparities. In the process, digital bureaucracies systematically reduce relationality and remove or even suppress possibilities for confronting inequalities collectively.

A: Therefore we should not forget to situate the story in Rotterdam, home of the right-wing Islamophobic politician Pim Fortuyn.

A: Yes. Of course we know that similar techniques are implemented by many municipalities all over Europe. But it is unsurprising that Rotterdam is at the forefront of implementing automated bureaucracies in this particular area of government responsibility: the distribution of welfare benefits to those who need them. Leefbaar Rotterdam, the right-wing populist party that Fortuyn led, has been the city's largest political party for more than 20 years. They have made the “crackdown on welfare fraud” a recurring point in their election programme. The implementation of so-called innovative technologies such as algorithmic fraud prediction shifts attention away from social benefits as a basic form of solidarity, a political agreement to guarantee some level of social security for everyone. Instead, welfare is turned into a fake opportunity for optimizing government spending and a platform for righteous right-wing demonstrations of fraudmania. In the process, anyone who dares to ask for government support is turned into a potential fraudster.

A: The city of Rotterdam was, sadly, one of only a few cities that allowed journalists to investigate the actual technology at work. Openness to public scrutiny should of course be a basic principle for any policy implementation, and where algorithmic techniques are used, the algorithm, the training data, and the software should be made available.

Q: Can we draw a connection between core operations of public institutions shifting to cloud services, the rise of consultancy companies specialised in technology for governments, or "govtech", and the increase in algorithmic policy administration?

A: As part of a liberal anti-bureaucracy discourse, public institutions are often cast as inefficient, backward and “bureaucratic”, in contrast with the suave, agile modes of start-ups and tech consultancies. In the process, governments have started to adopt Big Tech framings: considering their populations through the prism of scale, collapsing contexts instead of exploring approaches that serve different publics, and favouring linear technocratic solutions to complex problems. This process accelerated in the last few years when immunity certificates or COVID passport apps were introduced, and lockdowns forced an increased dependency on digital platforms. It was felt that digital bureaucracy had to be deployed on a massive scale to enforce policies, from managing risky exemptions for so-called front-line workers, to extending border control into everyday environments. It paved the way for handing over core operations to global consultancy firms, like Accenture in the case of Rotterdam, and Big Tech companies such as Microsoft and Google. The business of govtech is booming, but these companies only offer solutions to problems that they themselves claim to know best how to solve.

Q: But this process of outsourcing government policy to consultancy companies has been going on for a while?

A: Yes, but the current connection between consultancy companies and cloud services is relatively new. Companies such as Deloitte, McKinsey and Accenture are successfully closing the loop because they manage and sometimes even sell computational infrastructure, develop and deliver software and algorithms, write policy documents and provide auditing services. So they set the conditions and also deliver the tools for their execution. It is a win-win situation for them.

Q: Some people would say we need to focus on ethics and fairness, asking “how can these technologies be made more fair?”

A: A focus on “algorithmic bias” reduces the devastating implications of those algorithms to a design issue, a problem that can only be solved by the same companies that produced the technology in the first place. Part of the issue is that techniques like machine learning require costly production environments. And this in turn requires local and national governments to commit to an expansionist model of computational infrastructures, of clouds, mobile phones, and sensor networks, driven by Big Tech.

Digitizing a whole society means there is already a silent agreement that these kinds of investments can and should be made, and that the fact that these techniques intensify structural discrimination and inequalities is apparently OK. This is the structural racism of so-called smart governance: the price being paid is of course not equally distributed.

Q: And another classic that we’ve been asked many times, “So what do you think we should do now?”

A: Before we answer this question we first need to take a step back. In this conversation, we have tried to link different cases to show how they map onto a long history of thinking and scholarship that categorizes people, and onto existing logics and power relations. Historicizing creates space to find answers. The second part is that we need to think about who our partners are: the activists, engineers, historians or political economists who can do the critical analysis of these cases with us, and with whom we can start to think about complex answers to these complex problems. We need to build coalitions to bring all those different strings together. We learn a lot from grassroots and community organisers at Meldpunt Islamofobie and CtrlAltDel, and from their incredible labour on algorithmic bias and ethnic profiling, which goes back much longer than most current, big research projects on AI ethics. Their work is based on the trust they have earned in the actual communities that are impacted by these big cases, and we should include them when we try to think about what to do. The third part is trying to distinguish between past, present and future. What needs to be done now is different from what has to be done now for the future. Often the long-term answers are delegated to others. When we say that we need to overhaul the whole system, that we need a radical change in the way public policy includes for-profit actors, that it needs to be for the public and by the public, this means we need to start today. Because in a few years there will be another scandal, and then in ten years yet another. And then we will say: if only we had started ten years ago to undo the system that produces these cases.

A: A very concrete element is that we need to hold accountable those parties, ministers and groups that are responsible. In the Dutch context, this is exactly what has been avoided by commissioning reports. It is a joke that there have been no consequences, no punishment for those who have done damage. It is not enough to document; we need to have those responsible put on trial, to strip them of their power, to remove their credentials. This is what we can do about it. Because the abolition of a racist system is not a mythical moment where everything collapses at once. You need to start with the bricks, and it starts now.


The Institute for Technology in the Public Interest, April 2023