Artificial equality? AI and social (im)mobility
Artificial intelligence (AI) has taken a firm spot in recent popular discourse; take the unprecedented media frenzy that accompanied the release of ChatGPT, for example. As AI has continued to develop, so have criticisms of its dangers. We must begin to consider what a revolutionised technology industry will mean for societal inequalities, and whether what seems like a time of exciting innovation is really disguising an industry that is further deepening societal divides.
Technological innovation feels futuristic in its very nature, but AI feels particularly disconnected from the world that we know today. When you think of AI, you think of something that stretches far beyond human capacity, something that is immaterial, intangible, and incomprehensible. This image of autonomous intelligence is the shiny result of Silicon Valley’s relentless PR campaigns. They don’t want us to associate AI with its programmers, its data analysts, its engineers, its marketers. In actual fact, AI is deeply entwined with the world we know today. AI technologies are trained on, and so perpetuate, knowledge (data) from humans who have their own beliefs, opinions, and biases. By erasing these humans from the picture, Silicon Valley hopes to erase any public demands for corporate accountability.
The reality is that the rapid expansion of AI technologies has resulted in a largely unchecked concentration of power in the hands of those involved in designing and developing these systems. Iman Sheikhansari refers to this phenomenon as the “dictatorship of the algorithm” and explores the idea that controlling the input of powerful AI systems results in AI exhibiting pre-existing biases. The dictatorship of the algorithm allows AI to replicate biased models that work to reinforce existing power structures within society.
When examining the societal dangers of AI, it makes sense to start with representation within the tech industry more broadly, to understand the creators of these systems. While diversity is a goal that companies often commit themselves to, there have been calls for the AI industry to take this a step further. Diversity within the field of AI matters because AI exhibits the same biases as the information it processes; an industry dominated by cis, white, wealthy men means that the implicit biases of those very individuals are reflected in the output of AI systems. Currently, the technology industry is racially unrepresentative, especially in senior roles, and there have been disturbing examples of AI technologies favouring mortgage applications from white people. A workforce that reflects the power inequalities in today's society will embed those inequalities in the technology it develops.
Ruha Benjamin further explores the threat of AI providing a means for perpetuating institutionalised imbalances. Unaccountable machines executing the biases of their human programmers allow responsibility to be removed from the individual and transferred onto an unprosecutable machine with no human thoughts or capacity for emotion. This is aptly illustrated by the case of Amazon’s AI recruitment tool, which showed bias against women. The system was supposed to vet CVs but was trained on applications submitted to the company over a 10-year period; these reflected the male dominance of the technology industry, so the tool favoured male candidates. Furthermore, there is evidence that voice recognition tools perform worse for women and non-white people, an issue with significant real-world impacts as the use of speech recognition in various aspects of life—from immigration tests to hiring processes—continues to grow. Benjamin neatly dubs this phenomenon ‘the new Jim Code’ because such technologies inherit racial biases and so systematically work to maintain hierarchies of melanin that are already entrenched in our society. We need to begin to recognise the dangers surrounding this and open up the conversation to ensure that AI works to help society develop rather than deepen current divides.
Sexism and racism are, unsurprisingly, not the only -isms that can be levelled against AI development. The tech industry also has a glaring class problem: in the UK tech sector, 19% of workers are from a working-class background, compared to 33.3% of the national population.
When assessing the barriers preventing people from low socio-economic backgrounds from progressing into the tech industry, access to equipment is an immediate concern. Students from less privileged backgrounds have less access to high-quality equipment, a gap that is perpetuated in schools: researchers have found that 54% of UK state-funded primary schools lack the equipment needed to teach science effectively. The more technology develops, the greater the risk of a digital divide between those with and without access to high-speed internet, digital devices, and digital skills. Digital skills will also become necessary in most future job roles, and individuals will be expected to possess these new technical competencies. If current disparities between private and state schools' access to technology persist, accelerating technological advancement will only deepen the digital divide, concentrating power in the hands of the wealthy and privileged.
There are also concerns about widespread misconceptions among young people from low socio-economic backgrounds regarding the skills required to work in the technology sector. Despite the sector offering a variety of roles at differing skill levels, knowledge of advanced programming is often viewed as a prerequisite for working in tech, creating a barrier for students from low socio-economic backgrounds. If these young people are not exposed to technological innovation, are not taught about the opportunities on offer, and don't see people like them as part of this innovation, the class gap in the tech sector will only continue to grow. By focusing on diversifying the technology industry, we can ensure that AI develops alongside us rather than cementing pre-existing divides.
These cycles of socio-economic discrimination are further perpetuated by the fact, already noted, that AI systems are built out of systems of inequality. It’s no surprise that people with lower levels of education and lower incomes engage less with AI technologies than more privileged people do. These systems are designed to be efficient and to help society progress; however, emerging evidence continues to show that AI is disproportionately used by the privileged, to favour the privileged. The underrepresentation of certain groups within the tech industry makes it all but inevitable that AI will maintain, if not exacerbate, structural inequality: born of the same inequalities that pervade other industries, AI technology reproduces and deepens them.
There are concerns that AI tools will be used by institutions to technologise violent and discriminatory practices—an idea exemplified by the US police force, which has adopted the use of facial recognition tools to aggressively surveil people of colour, particularly black men. This is a clear example of how a helpful, time-saving, everyday tool such as facial recognition can easily be weaponised to target and harm disadvantaged or vulnerable people.
As long as there continue to be barriers for underrepresented groups to break into the technology industry, the dictatorship of the algorithm will continue to benefit those in the most privileged positions whilst harming marginalised groups. Underrepresentation within the field means that those who are writing the code and involved in the development of AI don’t represent the population that AI is supposed to assist. Furthermore, the technology industry has been built on an exploitative class gap, meaning it’s no surprise that one of the biggest tech giants, Amazon, has been involved in outsourcing low-skilled technological tasks to the Global South for below minimum wage. We need to acknowledge the fact that systems of power have been built by the privileged to serve those who are most powerful within society, and AI is not isolated from this.
Going forward, large companies and people in power need to do more to diversify the tech industry and, in turn, bring to the table the conversation about the threats technology poses to equality. Only in this way can we begin to develop mitigation strategies to protect those who will be most impacted by the consequences of AI. While it may be too soon to make judgements about AI and the long-term dangers it poses for civilisation as a whole, it is not too soon to begin shaping a fairer society that promotes equal opportunities. We must work towards changing the narrative surrounding AI to acknowledge the consequences of a developing industry that risks exacerbating inequality among the most disadvantaged groups in society.