Artificial intelligence concerns

Ethical concerns with the development of artificial intelligence (AI) technology.

As we debate the ways in which AI could be used in the classroom, we also need to be aware of the ethical questions raised by AI’s growth and expansion, particularly its effects on the environment and on the rights of workers and the producers of online content.

The development and operation of AI technologies, though often seen as existing in a purely virtual space, have wide-ranging social and ecological implications.

Environmental impact and sustainability

The computing power required to develop and run generative AI (GenAI) is driving a boom in the construction of new data centres, which consume large amounts of energy as well as natural resources such as water.

For example, Goldman Sachs estimates that a ChatGPT query needs nearly 10 times as much electricity to process as a Google search, and that data centre power demand will grow 160% by 2030. Google and Microsoft both recorded significant spikes in water use (20% and 34% respectively) in a single year as they prepared their GenAI products.

Three of the largest tech companies, Microsoft, Google and Meta, have reported soaring greenhouse gas emissions since 2020, with the expansion of data centres to meet demand for AI the driving force behind this rise.

As well as increasing harmful carbon emissions, the production of computer hardware for data centres involves resource-intensive mining for metals and other minerals such as silicon. Mining for these materials contributes to water pollution and has other harmful effects on the environment.

Data centre construction also has local impacts, as new facilities compete with existing civilian infrastructure for energy and other resources, including water. Local resistance to new data centres has grown as a result, with opposition movements emerging around the world, including in Chile, Ireland and the Netherlands.

Exploitation of workers

Exploitation of low-paid workers is a central feature of the development of current AI technologies.

The training and ongoing operation of GenAI models require an extensive process of data labelling: roughly 80% of the time spent on training AI consists of annotating datasets. Much of this work focuses on ensuring that AI-generated content does not reproduce “toxic” and “harmful” material.
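
To make the nature of this labour concrete, the sketch below shows what a single text-annotation task might look like in code. It is a minimal illustration only: the category names, the `AnnotationTask` structure and the `label_task` function are assumptions invented for this example, not a description of any company’s actual annotation tooling.

```python
# A minimal, purely illustrative sketch of a text-labelling task,
# using hypothetical category names and data structures.
from dataclasses import dataclass
from typing import Optional

CATEGORIES = ["safe", "hate_speech", "violence", "sexual_content"]

@dataclass
class AnnotationTask:
    text: str                    # passage drawn from the training corpus
    label: Optional[str] = None  # filled in by a human annotator

def label_task(task: AnnotationTask) -> AnnotationTask:
    """Show a passage to an annotator and record their judgement."""
    print(f"Passage: {task.text!r}")
    print("Categories:", ", ".join(CATEGORIES))
    choice = input("Label> ").strip()  # a worker must read every passage in full
    if choice not in CATEGORIES:
        raise ValueError(f"Unknown label: {choice}")
    task.label = choice
    return task

# At scale, queues like this hold millions of passages,
# each one read and classified by hand.
queue = [AnnotationTask("example passage 1"), AnnotationTask("example passage 2")]
labelled = [label_task(t) for t in queue]
```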

This work is being outsourced to low-paid and precariously employed workers in the Global South.

Exploitative practices and appalling conditions in this growing area of employment are now well documented. A TIME investigation published in January 2023 found that OpenAI had outsourced this work to Kenyan workers paid less than $2 per hour. The workers were also required to label “harmful” content, including textual descriptions of sexual abuse, hate speech and violence. They have since filed a petition to the Kenyan government calling for an investigation into what they describe as exploitative conditions for contractors reviewing the content that powers artificial intelligence programs.

Similar stories of exploitation, including the use of underage labour, extremely low pay, long hours and a lack of support for those witnessing harrowing content, have emerged across the Global South.

Copyright infringement and commercial exploitation of creative output

Current GenAI products are trained on large volumes of data obtained by “scraping” the internet (using automated software to extract data from websites). This includes using copyrighted text and images without paying or acknowledging their creators. Furthermore, GenAI-generated content may closely resemble or even reproduce copyrighted material.
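
As a rough illustration of what “scraping” involves, the minimal Python sketch below fetches a single page and strips out its text. It is an assumption for illustration only: the URL is a placeholder, and real training pipelines run crawler infrastructure across millions of pages.

```python
# Minimal illustration of web scraping: automatically fetching a page
# and extracting its text. The URL below is a placeholder.
import requests
from bs4 import BeautifulSoup

def scrape_text(url: str) -> str:
    """Download a web page and return its visible text content."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    # Strip the markup; what remains is the page's text, regardless of
    # who wrote it or whether it is under copyright.
    return soup.get_text(separator=" ", strip=True)

# A crawler repeats this across huge numbers of pages to build a training corpus.
print(scrape_text("https://example.com")[:200])
```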

There are now numerous legal challenges from individuals and organisations, including the New York Times, whose data and content have been used to train GenAI models.

Depending on the outcomes of these legal challenges and the costs to tech companies of any potential settlements, the commercial viability of GenAI in its current form may be affected.

Some media companies and publishers have entered into partnerships granting GenAI developers access to their copyright-protected content. However, workers have not always been consulted or informed about these partnerships, and have raised concerns about the terms of the deals, their lack of transparency, and the impact they may have on their work.

Tech companies such as Meta, OpenAI and Google are lobbying governments to change copyright laws so that they can continue to use copyrighted content in training their GenAI models. Their claim that it is impossible to train GenAI models without using copyrighted content raises ethical questions about profiting from creative work without paying its producers, as well as concerns about the broader impact GenAI may have on creative industries.
