AI: engine of inequality?

A number of books published recently in the United States – including Weapons of Math Destruction by Cathy O'Neil and Automating Inequality by Virginia Eubanks – highlight the risks that the widespread use of Big Data and Artificial Intelligence (AI) poses for some sections of the population. In an interview earlier this year, Virginia Eubanks recounted the anecdote which prompted her to write the book: a short conversation with a mother living on public welfare assistance who had been issued with an Electronic Benefits Transfer (EBT) card, which replaces food stamps, the original method of distributing this kind of social assistance. "You think these EBT cards are a good thing because holders feel less stigmatized when they pay for their shopping at the checkout. Rather than pay with food stamps, they pay with a card just like anyone else. But this woman told me that although the cards are a good thing, they also enable tracking of all her purchases."

This mother's fears turned out to be well-founded. In 2014, the Governor of the State of Maine, Paul LePage, commissioned an analysis of all transactions carried out with these cards, which revealed that 3,650 payments had been made in stores selling cigarettes and alcohol or outside the state. These transactions represented a mere 0.03% of the 1.1 million purchases made during the period, but they prompted the Governor to table a bill requiring EBT beneficiaries to keep all their receipts for potential checks. The proposed legislation was rejected, but Eubanks believes that the Governor's move has had the effect of stigmatizing beneficiaries of government aid.

In her book, she gives a number of examples illustrating just how efficient algorithms can be at detecting fraud in health insurance and social welfare programs. In Indiana, the Temporary Assistance for Needy Families (TANF) program provides cash assistance and support services to families with children under 18 so as to help them achieve economic self-sufficiency. Nowadays just 8% of the working-class poor with children are in receipt of welfare benefits, down from 40% before the TANF system was set up. Fraudulent claimants have been squeezed out of the system, but so have many honest people whose files were incomplete. "The designers of these systems have tended to assume that the only place that discrimination enters the system is in front-line case worker decision-making (…) so it makes sense to have a systems-engineering approach which (…) obtains data on case-workers' decisions and can then maybe shape their decision-making towards equity", explains Eubanks, warning nevertheless: "However, data scientists, engineers and top-level administrative brass in social services have all kinds of biases too, and those biases get built into these systems in ways which are much more invisible and, I think, much more dangerous, because they scale so quickly, and these systems are so fast."

Their designers have made a poor assessment of the complexity of the way society is made up and their algorithms create secondary effects which run counter to what they initially intended.
Laurent Alexandre

Laurent Alexandre, CEO of DNAVision and author of La guerre des intelligences (The War of Intelligences), underlines the dangers of bias in the algorithms used to make this kind of decision. He explains: "Algorithms present several different levels of risk. There are overtly malicious algorithms that have been designed specifically to harm a given set of people. Then there are also algorithms which I'll call hazardous. Their designers have made a poor assessment of the complexity of the way society is made up and their algorithms create secondary effects which run counter to what they initially intended. Lastly, we have the problem of Deep Learning. Here the algorithm is no longer programmed 'by hand' but learns from the data that is fed to it. This has a 'black box' effect and so of course raises the issue of how to monitor a system based on Deep Learning, given that the dataset used to train the neural network can itself be skewed in one direction or another."

AI HELPING THE POLICE WITH CRIME PREVENTION

Police crime prevention algorithms in the spotlight

If there is one domain where algorithms and their potential bias could have very direct consequences for minority groups and social cohesion, it is the field of security. In the United States and subsequently in Europe, predictive approaches have increasingly been adopted, often with convincing results. With a system such as PredPol, police patrols are planned by a predictive algorithm based on past offenses. The providers of these policing tools claim a significant reduction in crime, but there is a frequently-observed bias: the algorithm tends to concentrate police patrols on areas with the highest historic levels of criminality. This concentration of police presence has the predictable result that more incidents are recorded in these sensitive areas, which feeds back into the algorithm and reinforces the loop. This bias was criticized by the Human Rights Data Analysis Group, a San Francisco-based non-profit organisation, at a time when Chicago was planning to extend its use of predictive algorithms in a bid to curb the number of homicides in the city.

Laurent Alexandre stresses: "When you consider that the prison population of Afro-American origin is over-represented in the United States, any automated Deep Learning system which uses this data will encourage an increase in monitoring of this section of the population. The system assumes a higher-than-average probability that a member of that community will commit a misdemeanor", arguing: "it's a self-perpetuating system, so if you want to control for and correct the undesirable consequences of this type of algorithm then you have to do so by hand."
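
This 'self-perpetuating system' can be made concrete with a toy simulation. The sketch below is purely illustrative, with invented figures and a deliberately crude allocation rule; it is not PredPol's actual model. Three districts have identical underlying offence rates, but the district that happens to start with the most recorded incidents attracts the patrols, therefore records more offences, and therefore keeps attracting the patrols.

# A toy model of the feedback loop described above -- invented numbers,
# not PredPol's actual algorithm.

TRUE_RATE = {"A": 10, "B": 10, "C": 10}   # identical underlying weekly offence rates
recorded  = {"A": 12, "B": 10, "C": 8}    # district A happens to start with more records

DETECTION_PATROLLED   = 0.9   # share of offences recorded where the patrols are
DETECTION_UNPATROLLED = 0.3   # share recorded elsewhere

for week in range(10):
    # Send the patrols to the district with the most recorded incidents so far,
    # a crude stand-in for a "hotspot" prediction based on past data.
    hotspot = max(recorded, key=recorded.get)
    for district, offences in TRUE_RATE.items():
        detection = DETECTION_PATROLLED if district == hotspot else DETECTION_UNPATROLLED
        recorded[district] += offences * detection

print(recorded)
# District A is the "hotspot" every single week: it started with slightly more
# recorded crime, so it gets the patrols, so it records more crime, and so on --
# even though all three districts have exactly the same true offence rate.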

Expert view

Jean-Philippe Desbiolles

Vice President

IBM Watson France

We have to constantly watch out that the way these systems learn is consistent, that the system doesn't go off the rails, and that the learning loop is supervised by skilled people. 

Jean-Philippe Desbiolles, Vice President of IBM Watson France, also talks about the need for human beings to supervise algorithms. "I'm a fervent advocate of supervised systems; I'm staunchly opposed to using unsupervised systems. In practice, when we roll out our solutions, in parallel we create a cognitive competence centre with people who supervise how the systems learn", he reveals, warning: "We have to constantly watch out that the way these systems learn is consistent, that the system doesn't go off the rails, and that the learning loop is supervised by skilled people." This French pioneer of Artificial Intelligence does not share the 'black box' view of algorithms that Laurent Alexandre describes above. "Since I came back to France, people have been talking non-stop about AI as a 'black box'; but there's nothing more 'black box' than human beings themselves!" he argues, explaining: "Paradoxically, I don't see an AI system as a 'black box', because it has been put together in such a way that when it's about to make a recommendation it will go into its body of data and seek out all the evidence and facts on which to base its recommendation."

In fact, faced with the biases thrown up by its predictive policing system, the city of Oakland, California, has decided to stop using PredPol for the moment. The increase in police pressure on certain communities was seen as stigmatizing, and the people affected believed that the predictive system amounted to nothing other than racial profiling. The developer of the PredPol software claimed that its algorithm was neutral with regard to the racial background of the residents of those districts, but it nonetheless led to twice as many Afro-Americans being arrested as white citizens.

And the bias inherent in such algorithms can sometimes be rather subtle. Researchers have uncovered disparities in the performance of face recognition software, which is increasingly being used in the United States and China, demonstrating that the algorithms are not entirely free of racial bias. A number of studies have shown that recognition performance fluctuates according to racial origin. One example is the algorithm developed by face recognition specialist Cognitec, based in the German city of Dresden, which is used by the police in a number of US states. The recognition performance of Cognitec's software is 5-10% lower for Afro-American faces than for Caucasian subjects, which means there is a risk that monitoring of the black community will be stepped up.
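
The disparities reported in these studies are typically measured by evaluating the same system separately on each demographic group and comparing error rates. The sketch below is a generic illustration of such a per-group audit, not Cognitec's or any vendor's evaluation protocol; the input data and group labels are invented.

from collections import defaultdict

def per_group_error_rates(results):
    """results: iterable of (predicted_match, true_match, group) tuples.
    Returns the false non-match and false match rates for each group."""
    counts = defaultdict(lambda: {"fn": 0, "pos": 0, "fp": 0, "neg": 0})
    for predicted, actual, group in results:
        c = counts[group]
        if actual:
            c["pos"] += 1
            c["fn"] += 0 if predicted else 1
        else:
            c["neg"] += 1
            c["fp"] += 1 if predicted else 0
    return {
        group: {
            "false_non_match_rate": c["fn"] / c["pos"] if c["pos"] else None,
            "false_match_rate": c["fp"] / c["neg"] if c["neg"] else None,
        }
        for group, c in counts.items()
    }

# Invented example data: (did the system declare a match?, was it really a match?, group)
results = [
    (True, True, "group_1"), (False, True, "group_1"), (False, False, "group_1"),
    (True, True, "group_2"), (True, True, "group_2"), (True, False, "group_2"),
]
print(per_group_error_rates(results))
# A persistent gap between groups in either rate is precisely the kind of
# bias the face recognition studies describe.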

AUDITING ALGORITHMS 

Can ethics be instilled into algorithms?

Given these biases, many people, including Chelsea Manning, have called on data scientists to sign up to a code of ethics. Virginia Eubanks even talks about getting all algorithm developers to take a sort of 'Hippocratic oath', but she also wants to see greater transparency in the decisions made by algorithms. However, Laurent Alexandre counters that "the Hippocratic oath is already difficult enough to apply in the field of medicine; applying it to data is likely to be very hard indeed! You know your doctor, but no-one actually knows the data scientists who have developed the algorithms that analyze your data on a global scale. Making data scientists from a number of countries, together with their many subcontractors, sign a Hippocratic oath when there are sometimes thousands of data scientists working on an expert system… well, introducing that sort of monitoring would be no small matter", he argues.

The French government seems to believe that, while it may not be feasible to impose a code of ethics on all data scientists worldwide, it would be a good idea to make algorithm-based decisions subject to an audit process. Mounir Mahjoubi, France's Secretary of State for Digital Affairs, raised this subject at the Big Data Paris 2018 event in March.

BEING ABLE TO AUDIT ALGORITHMS WILL BE CRUCIAL GOING FORWARD

This need for transparency came to the fore in France in 2016 when the Droits des lycéens ('High School Students' Rights') organisation obtained publication of the APB algorithm, which is used to place secondary school pupils who have obtained their 'baccalauréat' (secondary education diploma) in higher education establishments, taking their aims and wishes into account. The publication of the program used by the French Ministry of Education highlighted the need for transparency. French law does not permit an individual decision by the State to be taken by an algorithm alone; an algorithm may only be used as an aid to decision-making. Moreover, those affected must be informed and are entitled to obtain the basic information which led the algorithm to make a particular decision.

"Being able to audit algorithms will be crucial going forward", stresses Jean-Gabriel Ganascia, a teacher and computer scientist doing research into AI at the Pierre and Marie Curie campus of Sorbonne University in Paris, who chairs the Ethics Committee at the French National Centre for Scientific Research (CNRS). He reveals: "When the APB issue was being discussed we talked a lot about being able to audit algorithms. But APB wasn't an AI algorithm, it was simply a software program designed to help students get on to the higher education courses they wanted to take, taking into account any special criteria set by the educational establishment concerned, combined with a lottery system for institutions with no specific criteria." Ganascia points out that "when the criteria are clear, there's no problem auditing the system: a candidate doesn't get a place because s/he doesn't meet a given criterion. If we use machine learning, the algorithm works on the basis of experience data to train itself and create rules. The problem is that those rules can't be verified."

Lack of transparency when an algorithm predicts the failure of manufacturing equipment or triggers orders on the stock exchange may raise few issues, but in certain areas the law requires that algorithms be traceable, as for example in the banking sector, where a bank must be able to explain exactly why a customer was refused a loan.
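
The contrast Ganascia draws can be sketched in a few lines of code. The criteria below are invented for illustration and bear no relation to the real APB rules or to any bank's credit policy: with explicit rules, a refusal can be traced back to the precise criterion that was not met, whereas a scoring function whose weights were fitted to past data only returns a number.

# Hypothetical selection criteria, purely for illustration.

def rule_based_decision(candidate):
    """Explicit criteria: any refusal can be traced back to a named rule."""
    if candidate["average_grade"] < 12:
        return False, "average grade below the 12/20 threshold"
    if candidate["home_region"] != "Île-de-France":
        return False, "outside the institution's catchment area"
    return True, "all criteria met"

def learned_decision(candidate):
    """Stand-in for a model trained on past decisions: the weights were fitted,
    not written down by anyone, so the only output is a score."""
    weights = {"average_grade": 0.9, "motivation_score": 0.4}  # pretend these were learned
    score = -12.0 + sum(w * candidate[k] for k, w in weights.items())
    return score > 0, f"score = {score:.2f}"

candidate = {"average_grade": 11.5, "motivation_score": 3.0, "home_region": "Bretagne"}
print(rule_based_decision(candidate))   # (False, 'average grade below the 12/20 threshold')
print(learned_decision(candidate))      # (False, 'score = -0.45') -- much harder to justify to the candidate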

HOW TO ASSESS DECISIONS TAKEN BY A NEURAL NETWORK?

Explaining decisions made by a neural network: a challenge for researchers

Auditing a particular decision would mean taking the whole mass of data and understanding the status of each neuron and the millions of links activated between the neurons at the moment when the decision was made. It's just unthinkable!
Laurent Alexandre

This need for algorithm transparency also raises questions in the medical sector, where Deep Learning is increasingly being used in diagnostics. In a report entitled 'Médecins et patients dans le monde des Data, des algorithmes et de l'Intelligence Artificielle' ('Doctors and Patients in the World of Data, Algorithms and Artificial Intelligence'), the Conseil National de l'Ordre des Médecins, the French medical profession's professional, administrative and legal body, points the finger at the 'black box' nature of deep learning algorithms and argues that it is currently impossible to analyse the reasoning behind their outcomes. The report underlines the work being done by the US Defense Advanced Research Projects Agency (DARPA) with its Explainable Artificial Intelligence (XAI) program, and by the French Institute for Research in Computer Science and Automation (INRIA) under the TransAlgo initiative, which is designed to assess the accountability and transparency of algorithmic systems. "At this moment in time we cannot give reasons for the decisions of a predictive algorithm based on formal neural networks", reveals Jean-Gabriel Ganascia, pointing out: "With certain algorithms, we can perfectly well explain how each criterion has an impact on the result and we can go back to the information which fed into the decision. A number of research groups are focusing their work on this topic with a view to gaining a better understanding of the information components that contribute to the decision-making process."
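
For the class of models Ganascia refers to, each criterion's impact can indeed be read off directly. The sketch below uses an invented linear risk score (hypothetical weights and patient features, not taken from any real diagnostic system): each feature's contribution is simply its weight multiplied by its value, so the factors behind a given prediction can be listed and ranked, which is exactly what is lost with a deep neural network.

# Per-criterion contributions for a linear scoring model.
# Weights and patient features are invented, for illustration only.

WEIGHTS = {"age": 0.02, "smoker": 1.1, "family_history": 0.8, "biomarker_level": 0.05}
BIAS = -4.0

def explain(patient):
    """Return the overall score and each criterion's contribution to it."""
    contributions = {name: WEIGHTS[name] * patient[name] for name in WEIGHTS}
    score = BIAS + sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda item: -item[1])
    return score, ranked

patient = {"age": 62, "smoker": 1, "family_history": 0, "biomarker_level": 30}
score, ranked = explain(patient)
print(f"score = {score:.2f}")
for criterion, contribution in ranked:
    print(f"  {criterion:>16}: {contribution:+.2f}")
# Every criterion's impact on the result is explicit here; recovering an
# equivalent breakdown for a deep network is what XAI-style research aims at.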

So will we have to give up using these algorithms because we cannot formally explain their decisions? Laurent Alexandre does not think so. He points out: "Forbidding the use of non-auditable algorithms would have some very damaging consequences. In the medical sector you cannot audit the algorithms used, so would it be acceptable to accept an increase in child mortality due to leukemia or in adult mortality due to prostate cancer? We can cite plenty of such examples. The reason why we're using Deep Learning is precisely because the problem is too complex for the human brain to model, and so the 'black box' effect will always be with us. We're talking here about setting up a network of 800 million neurons arranged in 20 layers so as to process terabytes of data. Auditing a particular decision would mean taking the whole mass of data and understanding the status of each neuron and the millions of links activated between the neurons at the moment when the decision was made. It's just unthinkable!" concludes the DNAVision CEO. So ought we to put the brakes on the use of AI on ethical grounds, or should we continue to draw the benefits from it, for better or for worse? The question remains open.

By Alain Clapaud
Independent journalist specialising in the new technologies