Why AI isn't going to make art
Relevant texts:
- Why AI Isn't Going To Make Art, Ted Chiang
- I Quit Teaching Because of ChatGPT, Victoria Livingstone
“We are all products of what has come before us, but it’s by living our lives in interaction with others that we bring meaning into the world. That is something that an auto-complete algorithm [AI] can never do, and don’t let anyone tell you otherwise.”
Notes: Chiang comments on photography: "the artistry lies in the many choices that a photographer makes." I think this is a fundamental principle of what makes art. An artist, in the process of making art, makes many decisions, many of which are nuanced and would be difficult to fully describe in a text prompt.
Art is iterative. Chiang argues that using computer programs like Photoshop to make things can still be considered art because of the nature of the process: it's done over time, making very specific changes with fine-grained controls. Using generative AI to create images with little effort isn't the same iterative process.
I think that LLMs and generative AI models could be powerful when used as thought partners—a thing/being/partner that could help you think differently, challenge your own assumptions, and introduce new perspectives—rather than as thinking machines. OpenAI's ChatGPT and other LLMs are built and advertised to be used as thinking machines, systems that perform intellectual tasks for their users.
In academia, using AI to write essays, complete assignments, and do your work for you impairs cognitive fitness. It defeats the purpose of those assignments, which are designed to build important critical thinking skills.