Social injustices in the era of the fourth industrial revolution: the use of artificial intelligence decision-making processes as an act of social engineering
Virginia Eubanks studied the impact of social services in the United States employing artificial intelligence decision-making processes (AI decision-making processes) to make determinations, such as whether an individual is entitled to welfare benefits. She found that the use of this technology limits the opportunities of poor individuals and disempowers them. The operation of these systems deepens inequality and automates discrimination. She concluded that when AI decision-making processes draw on information from multiple databases, their use gives rise to a process of social sorting. Eubanks’ argument is echoed by Linnet Taylor, who suggests that data injustices occur on a collective level and that it is therefore necessary to look beyond the individual.
The present paper argues that the societal impact of AI decision-making processes on equality, diversity and social justice can be understood through a two-step process. The first area of inquiry is how AI decision-making processes construct the individual to whom the decision is addressed and how this construction affects the individual’s access to resources. This inquiry builds on Taylor’s observation that data justice is intimately linked with how an individual is represented. What emerges from the discussion is that it is too limiting to confine the investigation of how AI decision-making processes bear on equality and diversity to the construction of categories and the demarcation of human difference. The design and operation of AI decision-making processes are intimately connected to structuring the social. Marlies van Eck argues that the use of automated decision-making processes to assess the entitlement of individuals to receive a payment from the state changes the relationship citizens have with the administrative body. Moreover, Abeba Birhane maintains that AI decision-making processes “restructure the very fabric of the social world.” The second stage of analysis assesses the nature of the social relationships the operation of AI decision-making processes creates. Such social relationships intersect and give rise to societal structures. In addition to evaluating what societal structures the operation of AI decision-making processes generates, the paper examines the mechanisms through which their operation brings about social injustices.
If concerns for the advancement of equality and social justice are to be addressed, then it is important to evaluate what types of values come to underpin the social relationships and structures arising out of the cumulative employment of AI decision-making processes. Of course, the context in which the decision-maker uses the AI decision-making system matters. There is a difference between using AI processes to detect patterns of unequal treatment in the past and using AI processes to render predictions about the future. The adverse impacts associated with using AI processes to make predictions are not confined to contexts where the AI decision-making system generates the decision; they extend to cases where the decision-maker uses the AI decision-making process as an aid for reaching the decision. The investigation corroborates Eubanks’ finding that the cumulative use of artificial intelligence decision-making systems in different domains has the potential to produce unjust social arrangements. This leads to individuals across generations experiencing greater obstacles to realising their potential and leading fulfilling lives. The use of technical solutions to give effect to fairness, such as requiring the AI decision-making process to reach an appropriate trade-off between the accuracy of prediction and fairness, is inadequate. It is desirable to preserve human decision-making in domains where a holistic evaluation of the candidate is crucial for ensuring that socially just outcomes can be achieved. The article draws on an interdisciplinary methodology spanning data science, legal theory, feminism, queer legal theory and critical disability studies in order to carry out the present inquiry.