Outright AI-Generated Art Misses the Mark

AI-generated artwork upsets a lot of people. It has been condemned as a form of “cheating” and as a significant threat to intellectual property. Many fear that this form of automation, which encroaches on the creative sector, will render humans useless, non-thinking wastes of space. This response is curious, however: so many of our basic, day-to-day social functions are already automated and AI-mediated, so what is it about this specific area of AI development that sets alarm bells ringing, and creatives raging?

Before delving further into this subject, it is important to clarify some terms. There are, within this discourse, two areas of automated art to consider. The first is AI-mediated art: artwork that has in some way been assisted by computer automation or AI programming, but remains predominantly reliant on artists’ hands and their physical, creative abilities. The second is artwork that is 100% AI-generated. This is wholly machine-made, with no handiwork involved in its process: the artwork is produced from text or code that AI programs transfigure into visual images. This marks a departure from photography and digital art as we have known them to date.

The last two years have witnessed the growth of programs such as DALL-E 2, which generates viable-looking art in a variety of styles from user-supplied text prompts. Deep Dream Generator (DDG) is another such program, which relies on a neural network, trained on millions of images, to create artworks. One can use these programs to produce either AI-mediated artwork or 100% AI-generated artwork. For the former, artists can use them to map out or plan their artworks realistically, testing different strokes, compositions and imagery without touching a paintbrush. DALL-E 2 and DDG grant artists the ability to achieve their end goal with fewer painstaking mistakes and do-overs, reducing the needless waste of physical materials and labour. Digital libraries, and Instagram and Google algorithms (which form subconscious visual reference points that artists subsequently incorporate into their works), also render artworks AI-mediated. Thus, whether through standardised social media platforms or the more sophisticated AI programs cited above, most contemporary artwork is now AI-mediated.

This function of AI is largely positive, rather than parasitic or displacing. Here, AI does not compete with the artist, because the artist is still the one deciding where the paint goes. Indeed, this form of AI-related art might even broaden our creative palette: it allows us to experiment with new creative ideas and, if they fail, erase them at the press of a button. Through practices like this we can escape the repetitive actions that we know ‘work’. AI will also likely create new jobs within the art sector, such as that of the data curator, who programs AI to perform creative functions that incorporate real-life artwork.

Man Hugging Volcano

This piece was generated by Ellie Lachs using AI

100% AI-generated artwork, however, takes us into different waters. This form of artwork, though prompted by human-written text, ultimately defers to the generating program for the end result, and this has had controversial effects. In 2022 the Colorado State Fair’s annual art competition awarded Jason M. Allen the blue ribbon for his AI-generated artwork, Théâtre D’opéra Spatial. Allen created the work using the AI program MidJourney, which turns lines of text into hyper-realistic graphics. The work was uniquely intricate and developed for a 100% AI-generated piece, and the judges’ accolades caused a stir. Fellow competitors denounced Allen’s use of MidJourney as a form of cheating; one claimed that the work was breeding the “death of artistry”.

This might also evoke memories of “The Next Rembrandt”, a Rembrandt ‘original’ created in 2016 by a deep-learning AI trained on 346 of Rembrandt’s works to produce a unique 3D-printed image in Rembrandt’s style. Allen’s work is the next iteration of this experiment, and the level of backlash towards Théâtre D’opéra Spatial might then relate to its being the first of its kind to receive such a degree of praise, a degree not even imagined by its 2016 prototype. New technologies always elicit a discomfort of the unknown, and a feeling of threat as to what they might mean long term. We saw this in 1896: the first cinema screening is rumoured to have ended with people running out of the theatre for fear that the train on the screen would crash into them. Similarly, the first film “talkies” elicited mass “chortling” and disbelief from producers and audiences alike. More specific to Allen’s artwork, though, the criticisms pertain to the level, or in this case the lack, of human hands in its creation: there is something about its being 100% AI-generated, rather than 80% or 50%, that does not sit right. What kind of sorcery exists within that last 20% of human involvement that elicits so many raised hands to protect it?

Online campaigns such as #NoToAIArt provide some explanation for this. They have outlined concerns over 100% AI-generated art’s legality and intellectual property, pointing out that MidJourney creates art from databases of copyrighted online images. Author and illustrator Harry Woodgate adds that these kinds of programs “rely entirely on the pirated intellectual property of countless working artists, photographers, illustrators and other rights holders.” AI-generated art only works because of the labour of hands that remain unacknowledged in its development and marketing. Journalist Sarah Shaffi goes further, highlighting the more morally and politically insidious dangers of art-generating AI programs, such as their ability to ‘create images that are potentially illegal’. Kaloyan Chernev, founder of Deep Dream Generator, admitted that in the first launch of Text 2 Dream, individuals attempted to “generate images of nude children, despite the fact that no such images were present in the training dataset”.

Such problems quickly seep into the mainstream, where everyday examples of AI-generated artwork come to the fore. Lensa AI, a photo-editing app, is one example: it repackages the open-source image generator Stable Diffusion within an easy-to-use, sleek-looking interface. MIT Technology Review’s Melissa Heikkilä, however, noticed a problem. While all of her friends were able to create varied and exciting avatars of themselves, the technology identified very little about her other than her Asian heritage: her avatars were undressed, and peddled a clearly eroticised product of bias. It became obvious that Lensa AI had been trained on data containing the biases of the real world, such as the fetishisation of South-East Asian women. This case also shows how important it is for us to be able to critique technology, within art and beyond; if AI dominates that space, then art and literature will be too imbricated in the technological net to criticise it creatively.

Granted, many of these charges rely on critics having basic levels of AI literacy, and yet it is not just the AI-literate who are jumping onto this critical bandwagon; there seems to be as much, if not more, deep-seated unease about the notion of 100% AI-generated art amongst technological laymen. In fact, those of us with less knowledge of AI generally feel more fear towards it, and this lower understanding of AI datasets and capabilities results in blinkered perceptions of the creative and beneficial ways in which AI-generated art can be used. Beyond 100% AI-generated artwork’s esoteric, legal and political facets lie its more philosophical implications, and these naturally set the layman’s anxieties whirring.

Rococo scuba diver

This piece was generated by Ellie Lachs using AI

Research shows that this predominantly relates to what people consider the end goal of AI. From Chen Qiufan and Kai-Fu Lee’s book AI 2041 (2021) to a series of YouGov polls, a significant amount of AI-related literature orients around the idea that an AI-dominated world will free humans from menial, banal and easily automated work tasks, allowing them to return to creative, “fun” and typically “unproductive” pursuits: a radical idea that would flip the negative bias we hold towards the arts, and replace work with the freedom to just “be”. Marcus du Sautoy adds to this narrative, claiming that AI will ‘kick us out of behaving like machines, that mechanistic way of thinking, and allow us to be more creative humans again’. Public opinion backs this philosophy: YouGov notes that while 45% of adults believe robots will develop higher levels of intelligence than humans in the future, they also believe that “AI could help save us time and energy”, and that companies should “consider automating repetitive and tedious tasks using AI”, freeing humans to focus on originality and creativity. What automation will add to people’s lives, seemingly, is an unprecedented amount of freedom. Yet if AI marks this area of life as ‘completed’ too, perhaps we really will become mush, or the sausage-fingered surrealist parodies pictured in Everything Everywhere All At Once (2022).

While these arguments are idealistic and reductive, they do provide a firm indication of what individuals (from the UK, within this dataset) want from AI, and they do not want a bot to do their painting for them. Engaging with this idealistic frame of thinking more critically poses a prescient question:

What will the world look like if AI does take charge of our ‘menial’ tasks, in order to grant us the freedom to jump, paint and create?

The grim version of this reality is more likely one in which big-tech companies dominate. Notoriously difficult to hold accountable, they would remain invested in keeping us laymen uneducated about AI, to prevent us from carrying out informed scrutiny of their actions, and their profits. We would need a new swathe of laws that are both tech-interrogative and binding. The jobs replaced by AI will probably be low-skilled ones (we have already seen this in the rise of self-checkouts), and this will create an even greater class divide. All of this would benefit the global West above all else: AI programs and automation technology are, at a basic level, powered by cheap labour in the global South, in factories in India and South-East Asia, and this too would perpetuate global inequalities.

I am pointing out this austere potential future not to spread doom and gloom, but to indicate why 100% AI-generated artwork causes such a stir: it is the ultimate line, a litmus test for the ways in which AI might fail us rather than help us. When we pose questions about 100% AI-generated tasks, we are really pushing the button on what we want AI to do for us long term, and once we identify this, the discrepancies between our reality and our idealisations of AI become clear.

Only from here can we begin to focus on who or what is driving that gap between our reality and the ideal, hold AI companies accountable at their core, and call for a ceasefire between the “few” (tech bros) and the “many” (technological laymen). Ultimately, the unexplainable, uneasy “gut feeling” that 100% AI-generated art gives us, whether we are steeped in the complex critical discourse or watching from afar, correlates to the bigger-picture questions surrounding the systemic function we want AI to hold in our future. We can only hope that this will be one in which AI covers our accounting and our admin, but leaves the joy of painting alone.

Ellie Lachs

(she/her)

BA English @ UCL

MA History of Art @ The Courtauld
