By now, your feed is probably full of beautiful portraits of your friends and family in varying styles. In an age when Instagram and Snapchat filters can mimic plastic surgery and transport you almost anywhere, it's no surprise that technology has learned to render anyone in the style of a famed painter.
But just because it can, does that mean it should?
The relationship between art and artificial intelligence has long been complicated. AI can generate convenient building blocks for artists to then manipulate and transform into their own style. It can also generate original music in the style of famous artists by learning from patterns in rhythm, instrumentation, vocals and more. As a teaching tool and a way to break out of a creative rut, AI has been praised, and it keeps improving.
However, ethical concerns are now being raised in the art space as AI begins to take pages out of artists' playbooks, some of whom are still alive.
Lensa AI, the app responsible for generating all those fancy portraits, is the latest target in the ongoing debate about this technology in art. All you have to do is upload a minimum of 10 photos of yourself and the AI does the rest, processing the images through the Stable Diffusion neural network to build a composite of your face and render it in a wide variety of art styles that can then be used for almost anything you can think of.
Stable Diffusion is just one of many image synthesis models that learn to generate images from examples scraped from the internet. Trained on countless images of Pablo Picasso's work, for instance, the model picks up everything from his color palette to his shapes and can eventually produce rough imitations. Even a prompt such as "Pablo Picasso's 'The Old Guitarist' with a cat" can spawn a rough idea of what that composition would look like, which can then be fine-tuned through the model.

Likewise, training on images of different people teaches the model the features of a human face. Add in other identifiers such as race, skin tone, makeup, hair color, expression and more, and you can use the model to create an endless array of people, recognizable or not. A prime example is the website This Person Does Not Exist, which shows a randomly generated person each time you refresh. No two are the same, and the shockingly realistic portraits show just what AI is capable of.
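To make the prompt-to-image step concrete, here is a minimal sketch using an open-source Stable Diffusion checkpoint through the Hugging Face diffusers library. The model name and settings are illustrative assumptions, and this is not Lensa's own pipeline.

```python
# Minimal text-to-image sketch with an open-source Stable Diffusion checkpoint.
# Assumes the "diffusers", "transformers" and "torch" packages are installed
# and a GPU is available; this is an illustration, not Lensa's pipeline.
import torch
from diffusers import StableDiffusionPipeline

# Load a publicly released Stable Diffusion model from the Hugging Face Hub.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# The kind of prompt described above: a famous composition, remixed.
prompt = "Pablo Picasso's 'The Old Guitarist' with a cat"

# The model gradually denoises random noise into an image matching the prompt.
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("old_guitarist_with_cat.png")
```

Apps like Lensa go a step further, tuning a model of this kind on the photos a user uploads so that the generated faces resemble that specific person.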
Lensa AI makes it easy and affordable to generate AI portraits for a fraction of the cost of commissioning an artist, leading some professionals to accuse the app of taking food out of artists' mouths. The app's $7.99 price tag covers 50 generated avatars. Prisma Labs, the company that owns Lensa, has attempted to address those concerns.
"Seeing plenty of thoughts online about the future of digital art in connection with AI generations, we decided to share some information on how AI generates images and why it will not replace digital artists. 🧵🧵🧵" — Prisma Labs, December 6, 2022
The company assured users that the app learns from the prompts but does not produce copies of any specific artwork. It also says user data is automatically deleted from its servers, so it does not pose a security threat.
However, the app is garnering a lot of criticism for what it generates, even if it is "unintentional." Since the model learns from prompts, images scraped from the internet and more, wires can get crossed. Many women have reported that even when they upload modest photos, several of the images that come back depict them nude.
"I'm testing the Lensa AI avatars people are doing on instagram. So far, most of the mens are coming back great. Mine (woman) is coming back literally naked or NSFW images. I've tried 3 sets of images, including ones that don't show past neck. Has any other women tried this?" — Sarah White, December 5, 2022
"LMFAO GUYS WTF I did that lensa AI thing and it made me naked in every picture......????" — Kaladin's #1 slut, December 4, 2022
WIRED writer Olivia Snow discussed the sexualization and whitewashing of feminine-presenting subjects, noting that darker skin was lightened and that subjects of certain races were generated with sexualized expressions. More alarming was what happened when she uploaded photos of herself as a child, despite the app's terms of service prohibiting the use of children's photos. Several of the resulting images were overtly sexual, either placing her in compromising positions or adding rough shapes that read as breasts. When fed a mix of childhood and adolescent photos, the app produced fully nude images with a distinctly adult body and a childlike face.
People have already been using everything from deepfake technology to AI programs and apps like Lensa to create nude images of everyone from classmates to celebrities. While no one is physically exploited, since the images are fake, the people depicted never consented to having such explicit images generated. As Snow notes in relation to child sexual abuse material and the moderation meant to catch it online, "AI art generators evade content moderation entirely."
Unfortunately, not much can be done to keep AI from learning from art as long as images are uploaded to the web. Until a compromise is reached, artists are encouraging people enjoying their fancy new avatars to consider supporting smaller artists when possible, whether that means subscribing to a Patreon or commissioning art.
Photo courtesy of Shutterstock