Why the Ethics of AI Art Matter

The intersection of artificial intelligence and art is not just a technical marvel but also a philosophical and ethical frontier. As AI-generated art becomes more prevalent, the questions surrounding its ethical implications grow in complexity and importance. AI art represents a significant evolution in the creative process, merging human ingenuity with advanced computational capabilities. This post delves into the multifaceted ethical issues related to AI art, providing a rational, balanced perspective that honors freedom of speech and encourages principled ethics.

AI’s ability to generate artworks that rival or even surpass human creations in terms of detail, creativity, and emotional impact is both awe-inspiring and unsettling. It challenges our traditional notions of art and creativity, prompting us to rethink what it means to be an artist and what constitutes a work of art. This technological advancement also forces us to confront ethical dilemmas that were previously unimagined, compelling artists, developers, and consumers to navigate a new moral landscape.

The ethics of AI art matter primarily because of the profound impact AI can have on society. Art is a powerful medium for expression and communication, influencing public opinion, culture, and individual identities. When AI-generated art becomes part of this influential domain, it brings with it the potential to shape perceptions and realities in ways that must be carefully considered. The ethical use of AI in art is essential to ensure that this power is wielded responsibly and that the creations produced do not cause harm or perpetuate injustice.

Moreover, the integration of AI into the creative process raises fundamental questions about human agency and autonomy. When machines are capable of producing art, what role do human artists play? How do we value the work of human creators in an era where algorithms can generate similar or superior outputs? These questions touch on deeper philosophical issues about the nature of creativity, the essence of human contribution, and the intrinsic value of artistic expression. Addressing these concerns ethically ensures that the role of human artists is not diminished and that their unique contributions are recognized and celebrated.

The Ethics of AI Art

The Artist Dimension

At the core of AI-generated art lies the tension between innovation and intellectual property. Copyright infringement is a significant concern, as AI models often train on vast datasets that include copyrighted works without explicit permission. This can lead to unintentional plagiarism, where AI-generated art closely resembles existing works, raising questions about authenticity and originality. The authenticity of AI art is further complicated by the challenge of attribution. Who should be credited for the artwork—the creator of the AI, the user who generated the image, or both?

Recent legal developments highlight the complexity of these issues. In August 2023, a U.S. federal court ruled that AI-generated artwork cannot be copyrighted due to the lack of human authorship. The case, involving an AI-created image titled “A Recent Entrance to Paradise,” underscored the legal stance that human creativity is essential for copyright protection. This decision aligns with earlier determinations by the U.S. Copyright Office, which has repeatedly held that copyright protection applies only to works created by humans.

The situation in Europe mirrors the U.S. perspective. According to the Court of Justice of the European Union (CJEU), a work must originate from a human creator to qualify for copyright. The court’s decision in the Cofemel case established that a work is only considered original if it reflects the creator’s personality through independent creative decisions. This principle is also enshrined in Germany’s Copyright Act, which states that “the author is the creator of the work.”

However, there are nuances in both jurisdictions that might allow for some AI-generated works to enjoy copyright protection. For instance, if an AI makes only minor alterations to a pre-existing copyrighted work, or if a specific prompt largely dictates the result, the human user might still be able to claim copyright. This is particularly likely in scenarios where there is significant human involvement in refining or adjusting the AI-generated output.

Artist compensation is another critical issue. Traditional artists rely on their unique skills and creativity to earn a living, but AI art can disrupt this economic model. AI can produce high-quality art quickly and at a lower cost, potentially reducing demand for human artists and affecting their livelihood. This economic displacement poses significant ethical challenges, as it may undermine the financial stability of artists who depend on their craft.

Fair use, a doctrine that allows limited use of copyrighted material without permission, becomes a gray area when applied to AI-generated content. Determining ownership rights and intellectual property in AI art is legally ambiguous and still largely unregulated. Fair use typically involves considerations like the purpose and character of the use, the nature of the copyrighted work, the amount used, and the effect on the market for the original work. Applying these factors to AI-generated content is complex and context-dependent.

The use of AI in art also raises questions about creative control. When AI-generated art is produced, the vision and originality of the human artist may be diluted. This concern is particularly relevant in contexts where AI models generate art based on broad and often ambiguous prompts. Ensuring that artists retain control over their creative processes and that their unique styles and intentions are preserved is essential for maintaining the integrity of the artistic process.

Moreover, the challenge of authorship and ownership of AI-generated art complicates this issue further. Current legal frameworks are not well-equipped to address the nuances of AI-assisted creation. While some AI art platforms claim ownership of the generated images, others grant users certain rights, leading to a lack of standardization in the industry. This ambiguity can undermine the autonomy of artists and complicate their ability to protect and monetize their work.

Deep Dream Generator (DDG) urges users to respect the original creators’ rights and provide proper attribution where possible. This respect for intellectual property helps maintain the integrity of the creative process and ensures that human artists receive recognition for their contributions. Personal discretion is crucial in navigating these complex ethical landscapes, ensuring that AI art fosters creativity without undermining the contributions of human artists.

The intersection of AI-generated art and intellectual property law is a dynamic and evolving field. Legal precedents in the U.S. and Europe emphasize the necessity of human authorship for copyright protection, yet there are potential avenues for recognizing human contributions in AI-assisted creations. Balancing innovation with respect for traditional artists’ rights is essential to fostering a fair and vibrant creative ecosystem. As AI continues to advance, ongoing dialogue and legal refinement will be necessary to address these intricate issues.

The Human Dimension

The ethical landscape of AI art extends beyond strictly artistic matters to broader human concerns such as consent, privacy, and the risk of economic displacement. These issues are complex and multifaceted, reflecting the profound impact of AI technology on society and individual rights.

AI models often use images of individuals without their explicit consent, raising serious privacy concerns. This unauthorized use of data occurs when AI systems scrape images from the internet and incorporate them into training datasets. The practice has sparked intense ethical debate, particularly around platforms like Stable Diffusion, which have been accused of using billions of copyrighted images without the artists’ consent (source: https://sites.duke.edu/airtist5/2023/02/02/the-ethics-in-ai-art/).

To address these concerns, some initiatives now allow individuals and companies to opt out of having their images included in AI training datasets. For example, according to MIT Technology Review, since December 2022, around 5,000 people and several large online art platforms have requested the removal of over 80 million images from the LAION dataset used by Stable Diffusion. The concept of a “consent layer” for AI is gaining traction, emphasizing the need for users to have control over whether their work is used in training AI models. This approach aligns with broader data protection practices, such as the General Data Protection Regulation (GDPR) in the EU, which aims to give individuals more control over their digital data.
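
To make the idea of a “consent layer” concrete, here is a minimal sketch of how a training pipeline might honor an opt-out registry before any image is used. It is an illustrative assumption, not the actual tooling used by LAION, Spawning, or Stable Diffusion; the registry format, file names, and hashing scheme are invented for the example.

```python
# Minimal sketch of a "consent layer" for a training pipeline. The opt-out
# registry is assumed to be a JSON list of SHA-256 fingerprints of images
# whose owners asked to be excluded from training.
import hashlib
import json
from pathlib import Path

def image_fingerprint(path: Path) -> str:
    """Fingerprint an image file by hashing its bytes."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def load_opt_out_registry(registry_path: Path) -> set[str]:
    """Load the set of fingerprints whose owners opted out of training."""
    return set(json.loads(registry_path.read_text()))

def filter_training_images(image_dir: Path, registry_path: Path) -> list[Path]:
    """Return only the images that are NOT in the opt-out registry."""
    opted_out = load_opt_out_registry(registry_path)
    return [img for img in sorted(image_dir.glob("*.jpg"))
            if image_fingerprint(img) not in opted_out]

if __name__ == "__main__":
    cleared = filter_training_images(Path("images"), Path("opt_out.json"))
    print(f"{len(cleared)} images cleared for training")
```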

Economic displacement is another pressing issue. AI-generated art can reduce the demand for human artists, potentially leading to job loss and decreased income for those in creative professions. This impact is already being felt: Botto, an AI that produces 350 new images a week, has generated nearly $1 million in revenue from NFT sales, money that might otherwise have gone to human artists. Similarly, AI-generated illustrations for children’s books and entries in digital art competitions are displacing work that traditionally required human creativity and skill, according to Beautiful Bizarre Magazine.

However, some argue that AI can also enhance human creativity rather than replace it. By automating certain aspects of the creative process, AI tools can free artists to focus on more complex and nuanced aspects of their work. This potential for enhancement rather than replacement suggests a more balanced view of AI’s impact on the creative economy, finds the University of Texas at Austin Center for Media Engagement.

The ethical concerns surrounding AI-generated art in the human dimension are significant and multifaceted. Ensuring explicit consent and robust privacy protections, balancing economic impacts, and maintaining creative control are crucial for navigating this evolving landscape. By addressing these issues thoughtfully and ethically, we can harness the potential of AI to enhance human creativity while respecting the rights and dignity of all individuals involved.

The Truth Dimension

In an era where information is easily manipulated, AI-generated art can significantly contribute to the spread of misinformation and propaganda. This dimension encompasses deepfakes, transparency, accountability, and the ethical use of training data.

Deepfakes, which use AI to create highly realistic but fake images and videos, are a prime example of the risks posed by AI-generated content. These falsified media can be used to spread misinformation, create propaganda, and undermine trust in legitimate information. Deepfakes often exploit generative adversarial networks (GANs) to produce convincing yet fake visual and audio content, with the intent to deceive. The proliferation of deepfakes has severe implications for public discourse, particularly during times of political tension. For instance, deepfake videos have been used to create fake news reports, manipulate political events, and even fabricate speeches by public figures.
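
For readers unfamiliar with the mechanism, the sketch below shows the adversarial training loop at the heart of a GAN in miniature: a generator learns to produce samples that a discriminator cannot distinguish from “real” data. It uses a toy network on synthetic low-dimensional data, not faces or video, and is a conceptual illustration rather than any deepfake system.

```python
# Toy GAN training loop (PyTorch). Real deepfake pipelines use far larger
# networks and image/video data; this only illustrates the adversarial idea.
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 16, 2, 64
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2001):
    real = torch.randn(batch, data_dim) * 0.5 + 2.0   # stand-in for real images
    fake = G(torch.randn(batch, latent_dim))          # generated samples

    # 1) Train the discriminator to separate real from generated samples.
    d_loss = bce(D(real), torch.ones(batch, 1)) + \
             bce(D(fake.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to make the discriminator label fakes as real.
    g_loss = bce(D(fake), torch.ones(batch, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

    if step % 500 == 0:
        print(f"step {step}: D loss {d_loss.item():.3f}, G loss {g_loss.item():.3f}")
```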

The integrity of public discourse is foundational to democratic societies. Deepfakes can threaten this integrity by fabricating scandals, falsifying public statements, and manipulating electoral processes. For example, a deepfake video of Ukrainian President Volodymyr Zelenskyy appeared to show him urging his troops to surrender, which could have had severe implications if believed to be true. Such incidents highlight the critical need for effective detection and countermeasures to combat the spread of false information.

Transparency is crucial in combating misinformation. Users should be aware of how AI models create content, including the data used for training and the decision-making processes of the algorithms. Transparency involves making the workings of AI systems understandable to non-experts and ensuring that the provenance of AI-generated content can be verified. Initiatives like Intel’s “FakeCatcher” and MIT’s “Detect Fakes” project are examples of efforts to develop tools that can detect deepfakes and verify the authenticity of digital content. By embedding digital watermarks and using blockchain technology to track the creation and modification of digital media, the authenticity and origin of content can be better ensured.
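
As a simplified illustration of the provenance idea, the sketch below records a content hash and basic metadata for each generated image in a local ledger and verifies files against it later. Real systems (robust digital watermarks, C2PA-style manifests, blockchain registries) are far more sophisticated; the ledger format and field names here are assumptions made for the example.

```python
# Minimal provenance ledger for AI-generated images: record a content hash
# plus metadata at creation time, verify the file against the ledger later.
import hashlib
import json
import time
from pathlib import Path

def record_provenance(image_path: Path, generator: str, ledger: Path) -> dict:
    """Append a provenance entry for one generated image to a JSON ledger."""
    entry = {
        "sha256": hashlib.sha256(image_path.read_bytes()).hexdigest(),
        "file": image_path.name,
        "generator": generator,
        "created_at": time.time(),
    }
    entries = json.loads(ledger.read_text()) if ledger.exists() else []
    entries.append(entry)
    ledger.write_text(json.dumps(entries, indent=2))
    return entry

def verify_provenance(image_path: Path, ledger: Path) -> bool:
    """Return True if the file's current hash matches a recorded entry."""
    digest = hashlib.sha256(image_path.read_bytes()).hexdigest()
    return any(e["sha256"] == digest for e in json.loads(ledger.read_text()))
```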

Accountability is necessary to ensure that creators and users of AI art are responsible for the ethical implications of their work. This includes not only the developers of AI technologies but also those who deploy these tools in various applications. Legal frameworks currently lag behind technological advancements, often failing to address the nuances of deepfake creation and dissemination. For instance, while some jurisdictions have started to enact laws targeting deepfake pornography and election interference, comprehensive regulations that cover all forms of AI-generated content are still lacking.

Ethical guidelines and regulatory measures are essential to hold creators accountable. This could involve legal requirements for disclosure when content is AI-generated and stringent penalties for malicious use of AI to spread misinformation. As AI technologies continue to evolve, ongoing collaboration between technologists, policymakers, and ethicists is necessary to develop effective and fair regulations.

The ethical use of training data is a cornerstone of responsible AI art. This involves ensuring that datasets used to train AI models do not infringe on privacy or intellectual property rights and are not biased or harmful. For example, the controversial use of the LAION-5B dataset by Stable Diffusion, which included billions of copyrighted images without explicit consent, highlights the ethical challenges in this area. Ensuring that data is collected and used ethically can help mitigate these risks.

Efforts to provide more control over training data are emerging. For instance, platforms like ArtStation and Shutterstock have allowed users to opt out of having their images included in training datasets. Such measures respect the rights of creators and help build trust in AI technologies.

Addressing the Truth Dimension of AI art involves combating misinformation through effective detection and transparency, ensuring accountability for AI-generated content, and using training data ethically. By adhering to these principles, AI art can be a force for good, promoting truth and integrity in the digital age.

The Minority Dimension

Bias and stereotyping are significant ethical issues in AI-generated art. AI models can inadvertently perpetuate harmful stereotypes or biases present in their training data, which often reflect the prejudices and social inequalities embedded in the sources they learn from. For example, a study by the University of Washington found that Stable Diffusion, a popular AI image generator, frequently produced images that overrepresented light-skinned men and sexualized women of certain ethnicities. When asked to generate images of “a person from Oceania,” the model primarily depicted light-skinned individuals from Australia and New Zealand, failing to represent the Indigenous populations of Papua New Guinea accurately.

Tools developed to visualize and analyze these biases reveal that AI models like DALL-E 2 and Stable Diffusion tend to depict professions and attributes in stereotypically gendered and racial ways. For instance, adding adjectives such as “compassionate” or “emotional” to a prompt often results in images of women, while terms like “stubborn” or “intellectual” generate images of men. Such biases can reinforce harmful stereotypes and reduce the diversity of representation in AI-generated content.

Efforts to mitigate these biases include the development of new tools and methodologies. Researchers at the University of California, Santa Cruz, have created an automatic tool based on the Implicit Association Test to evaluate and measure biases in AI models. This tool helps identify how closely concepts such as professions are associated with gender or race, allowing developers to address and reduce these biases during the training and fine-tuning phases.
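
A stripped-down version of such an audit can be written in a few lines: generate a batch of images for each prompt, classify a perceived attribute in every image, and compare the resulting distributions. The generator and classifier below are hypothetical placeholders (filled with dummy implementations so the script runs), not the UC Santa Cruz tool or any specific model API.

```python
# Sketch of a prompt-level bias audit for a text-to-image model. Both helper
# functions are placeholders: swap in a real generator and a real perceived-
# attribute classifier to run an actual audit.
import random
from collections import Counter

def generate_images(prompt: str, n: int) -> list[str]:
    """Placeholder: call a text-to-image model and return image file paths."""
    return [f"{abs(hash(prompt))}_{i}.png" for i in range(n)]

def classify_attribute(image_path: str) -> str:
    """Placeholder: return a perceived-attribute label for one image."""
    return random.choice(["woman", "man"])   # dummy labels for the sketch

def audit_prompt(prompt: str, n: int = 100) -> dict[str, float]:
    """Share of each attribute label among n generations for one prompt."""
    labels = [classify_attribute(p) for p in generate_images(prompt, n)]
    return {label: count / n for label, count in Counter(labels).items()}

if __name__ == "__main__":
    for prompt in ("a portrait of a compassionate CEO",
                   "a portrait of a stubborn CEO"):
        print(prompt, "->", audit_prompt(prompt))
```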

“Cultural appropriation” is another concern in AI-generated art, where AI might use elements from one culture without proper context or respect, leading to misrepresentation or exploitation. AI models trained on diverse datasets might generate images that mix cultural symbols and artifacts in ways that are insensitive or inaccurate. This issue is particularly prevalent in AI-generated fashion designs, where traditional clothing elements are often used without understanding their cultural significance.

For instance, AI-generated images of Native Americans often depict them wearing traditional headdresses, a stereotype that does not reflect the diverse and contemporary realities of Native American communities. This over-simplification can perpetuate outdated and reductive views of cultures, failing to capture their richness and diversity.

Efforts to address cultural appropriation in AI art include refining training datasets to ensure they accurately represent cultural contexts and consulting with cultural experts to validate the appropriateness of AI-generated content. Moreover, platforms and developers can implement guidelines to prevent the use of culturally significant elements in ways that are disrespectful or exploitative.

Of course, “political correctness,” intended to prevent offensive or harmful content, can also introduce its own form of undesirable bias. For instance, Google’s Gemini AI model has faced criticism for its “woke” bias, where it avoided generating imagery that could be deemed controversial. In one notable example, Gemini refused to generate an image of the 1989 Tiananmen Square protests, leading to accusations of censorship. Additionally, Gemini produced historically inaccurate and racially diverse images of well-known figures, such as the US founding fathers, which sparked further debate about the model’s approach to political correctness.

Protecting children from harmful content, such as AI-generated porn or illegal images, is paramount. AI technologies can be misused to create explicit content, sometimes involving minors, which is both illegal and deeply wrong. Ensuring that AI art platforms have strict policies and robust detection mechanisms is crucial to prevent the generation and dissemination of such content.

Deep Dream Generator (DDG) strictly prohibits any form of sexual content, ensuring a safe and respectful platform for all users. This policy is part of a broader effort to protect vulnerable individuals, particularly children, from being exploited or exposed to harmful material. AI developers and platforms need to implement and enforce comprehensive content moderation strategies to detect and remove inappropriate content swiftly.

The minority dimension of AI-generated art highlights the need for continuous efforts to mitigate bias and protect vulnerable populations. By addressing these ethical concerns, we can ensure that AI art promotes diversity, respects cultural integrity, and maintains a safe environment for all users. These measures, coupled with ongoing research and collaboration among technologists, ethicists, and cultural experts, are essential to fostering an inclusive and responsible AI art ecosystem.

The Environmental Dimension

The environmental impact of AI art is an often-overlooked ethical issue. Training and running AI models require significant computational power, leading to substantial energy consumption and carbon emissions. This section explores the environmental footprint of AI art and the measures that can be taken to mitigate these impacts, ensuring that the benefits of AI art do not come at the expense of the planet.

The computational requirements for training AI models are immense. According to the Columbia University Climate School, training OpenAI’s GPT-3, which is not even the largest model of its kind, consumed 1,287 megawatt-hours of electricity and produced roughly 502 metric tons of carbon dioxide, comparable to the emissions from approximately 100 average cars driven for a year. The energy consumption does not stop at training; inference, the process of using the model to generate outputs, also consumes substantial energy. A single request to ChatGPT, for example, can consume up to 100 times more energy than a typical Google search.
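
A quick back-of-the-envelope check ties these figures together. The per-car number used below (about 4.6 metric tons of CO2 per passenger car per year, a commonly cited EPA estimate) is an assumption added for the calculation, not a figure from the sources above.

```python
# Sanity-check of the GPT-3 training figures quoted above.
energy_mwh = 1287      # reported training energy, megawatt-hours
emissions_t = 502      # reported emissions, metric tons of CO2
car_t_per_year = 4.6   # assumed annual emissions of one average passenger car

grid_intensity = emissions_t / energy_mwh * 1000   # t/MWh -> g CO2 per kWh
car_years = emissions_t / car_t_per_year

print(f"Implied grid intensity: {grid_intensity:.0f} g CO2 per kWh")
print(f"Equivalent to roughly {car_years:.0f} average cars driven for a year")
```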

In addition to energy consumption, the disposal of electronic waste (e-waste) generated by AI technology poses serious environmental challenges. AI hardware often contains hazardous chemicals like lead, mercury, and cadmium, which can contaminate soil and water supplies if not properly managed, according to Earth.org, an environmental news, data and research website. Proper e-waste management and recycling practices are essential to mitigate these risks and ensure the secure processing of AI-related electronic waste.

AI’s applications, such as autonomous vehicles and drones, can also have indirect environmental impacts. These technologies can disrupt natural ecosystems and contribute to environmental degradation through increased consumption and waste. For example, the overuse of pesticides and fertilizers in AI-driven agricultural practices can harm biodiversity and contaminate water supplies.

Several initiatives aim to reduce the environmental footprint of AI. One approach involves developing energy-efficient algorithms and hardware. For example, researchers are exploring the use of green federated learning to make distributed AI systems more sustainable by minimizing energy expenditure during both training and inference phases.

Moreover, running AI models in data centers powered by renewable energy can significantly reduce their carbon footprint. Companies like Google and Microsoft have committed to running their data centers on 100% renewable energy by 2030 and 2025, respectively. This shift towards renewable energy can make a substantial difference, as demonstrated by the training of the AI model BLOOM on a French supercomputer powered mainly by nuclear energy, which resulted in significantly lower carbon emissions compared to models trained on fossil fuel-based grids.
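
To illustrate how much the grid matters, the rough sketch below converts a GPT-3-scale training run (the 1,287 MWh quoted earlier) into emissions under different grid carbon intensities. The intensity values are ballpark assumptions for illustration, not measured figures for any specific provider or facility.

```python
# Rough comparison of training emissions under different grid intensities.
# Intensities are illustrative ballpark values in grams of CO2 per kWh.
GRID_INTENSITY_G_PER_KWH = {
    "coal-heavy grid": 800,
    "average mixed grid": 390,
    "mostly nuclear (e.g. France)": 60,
    "wind/solar-dominated grid": 30,
}

def training_emissions_tons(energy_mwh: float, grams_per_kwh: float) -> float:
    """Convert training energy (MWh) and grid intensity to metric tons of CO2."""
    return energy_mwh * 1000 * grams_per_kwh / 1_000_000

for grid, intensity in GRID_INTENSITY_G_PER_KWH.items():
    tons = training_emissions_tons(1287, intensity)   # GPT-3-scale training run
    print(f"{grid:>30}: ~{tons:,.0f} t CO2")
```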

Transparency is crucial for addressing the environmental impact of AI. Accurate measurement and standardization of AI’s carbon footprint are necessary to make informed decisions about its use. Tools like Microsoft’s Emissions Impact Dashboard and trackers developed by researchers at Stanford, Facebook, and McGill University help measure and compare the energy use and carbon emissions of AI models.

By developing and using energy-efficient algorithms, promoting the use of renewable energy, and ensuring transparent measurement of carbon footprints, the AI art community can mitigate these impacts. Sustainable practices in AI development are essential to balance the benefits of AI art with the need to protect our planet. As AI technologies continue to advance, fostering a culture of environmental responsibility will be crucial for ensuring a sustainable future where technology and nature can coexist harmoniously.

Embracing Ethical Responsibility in AI Art

In this fluid and evolving field, AI artists must rely on their moral compass to act responsibly. Ethical guidelines can provide a framework, but ultimately, personal discretion and sound moral sense are vital in navigating the ethical challenges of AI-generated art. By embracing these principles, AI art can continue to flourish as a creative and innovative force, enriching the artistic landscape while respecting the rights and dignity of all individuals.

The field of AI ethics is still in its nascent stages, and firm norms have not yet been established. This ambiguity presents both challenges and opportunities for AI practitioners and users. As technology advances rapidly, the ethical landscape is continuously reshaped by new capabilities and unforeseen consequences. It is crucial for individuals and organizations involved in AI art to remain vigilant and proactive in their ethical considerations.

Deep Dream Generator (DDG) is committed to promoting ethical AI art. DDG emphasizes the importance of transparency, accountability, and respect for intellectual property and cultural integrity. By adhering to these principles, DDG aims to foster a community where creativity and innovation thrive within an ethical framework.

In the absence of comprehensive regulations, personal discretion plays a crucial role in ethical AI art creation. For instance, while legal frameworks may not yet fully address issues like the ownership of AI-generated art or the nuances of fair use in AI training data, ethical guidelines suggest respecting the rights and contributions of original creators. This respect includes proper attribution, seeking consent, and ensuring privacy, which are essential to maintaining the integrity of the creative process.

Ethical guidelines serve as a vital tool in navigating the complex landscape of AI art. Organizations such as UNESCO and the Montreal AI Ethics Institute provide valuable resources and frameworks for understanding and addressing ethical issues in AI. These guidelines emphasize principles like fairness, accountability, and transparency, which are essential for responsible AI development and use.

Promoting sustainable practices is another critical aspect of ethical AI art. The environmental impact of AI technologies, including the energy consumption and carbon emissions associated with training and running AI models, is a significant concern. Developing energy-efficient algorithms, using renewable energy sources, and implementing effective e-waste management strategies are essential steps towards reducing the environmental footprint of AI art.

Ultimately, the responsibility for ethical AI art lies with individual creators and users. By making informed decisions, being mindful of the potential impacts of their work, and adhering to ethical principles, AI artists can contribute to a more responsible and inclusive AI art community. This approach not only benefits the broader society but also enriches the creative process, ensuring that AI art remains a powerful and positive force.

The ethical challenges of AI-generated art require a thoughtful and proactive approach. By relying on a strong moral compass, embracing ethical guidelines, and fostering a culture of responsibility, AI artists and platforms like DDG can navigate the complexities of this evolving field. Through these efforts, AI art can continue to inspire and innovate while upholding the values of respect, integrity, and sustainability.

8 Comments

  1. AlatarOfValinor

    An excellent article. It clearly exposes the many broad and complex aspects of this revolutionary new technology. Personally, I fully support it and believe it has high potential to be beneficial. It will be a slow and gradual evolution for society and individuals to learn to deal with it, with successful steps and also mistakes, but we are on a good and amazing path.
    Congratulations to the author(s) and the entire DDG team.
    Alatar of Valinor.

  2. Wow, this is an extremely long article. Thanks.
    I should take the time to read the whole article another time…

    I also had some thoughts about AI, Art and Copyrights in the past months or so.

    What does “art” mean?
    In German we use the word “Kunst”, and not every painting is labeled as art here.
    We also label things as kitsch or use other categorisations.
    Most of the images here on DDG are kitsch in my opinion, and that is OK because it seems to be the taste of many DDG users and it fits easily within DDG’s guidelines. Most users do not seem to care much about the images that look more artistic, or maybe the AI that chooses images for the trending section does not like those so much.

    Recently I saw a documentary by a community of artists criticizing AI.
    They claimed that their “art” was stolen by AI and showed a short interview with an “artist” who made Spiderman drawings! To me, making Spiderman drawings seems like nothing more than making copies and stealing property. The “art” he made is nothing more than craft, and he is nothing more than a thief of somebody else’s ideas and property.
    So the question is whether we truly want all that copyright stuff or whether we want a community of free knowledge and free creation. Copyright is a mechanism for rich people who have enough money to pay lawyers to protect their so-called property. Poor people do not have that right because they can’t afford to pay for “protection”.

    What I have also heard is that all the images used for training get blurred with noise (at least in diffusion models). So what is left of the original art?

    For hundreds of years, artists have learned from and copied each other.
    Why should it be criminalized when we use a similar process for AI?

  3. oops, another comment:

    In many cases we have a new category of “art” with AI-generated images:

    Some differences between AI images and drawings/paintings/photographs are:
    – it is a collaboration, sometimes a dialog, between the “artist” and one or more AI models.
    – an AI image never comes alone. Sometimes the prompt could be art in itself, or is the true art. The image is just a reflection or result of the prompt. A prompt could be a poem or a story with some degree of abstraction.
    It makes a difference whether we view the same image here on DDG (where we may have a look at the prompt) or whether we distribute that picture on another platform or print it on paper.
    – AI gives us new possibilities of expression and collage. We are free to mix different genres and techniques, such as an illusion of painting mixed with photography, 3D, animation, and of course the text, which can be part of the prompt.

  4. AI and AI art software are tools. Tools that enable the creative process to soar to new heights and new works to be created. AI and associated software are not to be feared. Use the new tools and discover what is created.

  5. Adil YAHYA

    The article is too long and I have not finished it yet, but I find your approach good enough to make people love your work, DDG (and thank you for the short website name, .art/.ai):
    Some points:
    Prompts inspired by others should require permission and a published user profile: give writing methods and ways of thinking much more value.
    The ethics of AI are simply the ethics of its developers: when developer thinking is tied to money strategies, or to techniques that control users by forbidding or denying them things as a way of controlling their mentality (I say no to that), everything goes wrong. You can’t combine two different worlds, sharing and winning, because every share brings negative acts, judging by life experience.
    I love your idea of a social platform: keep the development process open to all ideas, even the foolish ones that come to mind, and don’t turn the social act into a money strategy. Don’t make the same mistake Facebook and others did: Facebook says on its front page that it is free, but when it comes to business inside you get lost and are forced to pay for new professional features. Yes, it is fine to pay for real value, but where is the ethics in that? They just lie, like every startup, and it will end up worse.

  6. I believe in AI tools for art, not AI art. Art made by an AI from existing art isn’t art. Art made by an AI from something like a mathematical formula is a form of art, especially illustrating patterns. Maybe AI tools can help artists with better lines and things like that – that’s where I believe it should be used, if at all.

  7. Artificial intelligence (AI) indeed has a significant impact on many spheres of life, including art. I believe that AI should be seen not as a threat but as a tool that can expand the creative possibilities for people, particularly artists. Historically, we have always adapted to technological changes and used them to improve our work and lives. For example, when computers started to replace manual methods of work, they did not eliminate jobs but changed their nature.

    In the case of art, AI can serve as an assistant that helps artists experiment with new ideas and techniques. AI has the ability to create complex and beautiful works of art, but it lacks the emotional depth and uniqueness that humans bring to their work. True art goes beyond technical mastery and includes personal mood, emotional response, and cultural context. These are aspects that AI cannot fully reproduce yet.

    However, it is important that we do not forget the value of human contribution to art. Artists can breathe life and emotions into their work, which remains inaccessible to AI for now. This underscores the importance of using our unique human capabilities in areas where machines are powerless. Thus, I believe that AI does not replace artists but provides them with new tools for expression. Artists who can integrate AI into their work can create something truly new and unique, continuing to develop the art of the 21st century.

  8. This was a well-written and thorough article! I enjoy doing both traditional art and AI art. I find AI art useful for experimenting with different creative ideas, and the results are often amazing. But in the current state of AI, AI artists should not try to sell their art, because they could be sued for copyright violations. This is especially true when their “text prompts” reference living artists in order to use their “styles” as part of the desired results.