AI Hallucinations & Bias

AI is not intelligent; it is an algorithm. It can make mistakes and produce stereotypical, inaccurate and inappropriate information.

Reflection Points

  • Do teachers understand the limitations and errors of AI responses?
  • What are the dangers of AI hallucinations and bias?
  • How are these dangers mitigated or avoided?

Hallucinations and Bias

As teachers are likely to integrate AI tools into their professional lives, it is crucial that they understand the concepts of hallucinations and bias.

Hallucinations in AI refer to instances where the technology generates outputs that are inaccurate, misleading, or completely fabricated. This happens because generative AI programmes predict plausible-sounding output from patterns in their training data rather than verifying facts, leading to results that may not reflect reality. For art and design teachers, this means that when students use AI tools to generate images or textual content, they might receive responses that are not representative of actual art styles, historical facts, or artistic techniques.

Bias in AI stems from the data used to train these systems. If the training data contains partial or skewed representations of art and culture, the AI may produce outputs that reflect those biases. This can manifest in various ways, such as a lack of diversity in the art images and information generated, or the reinforcement of stereotypes. For teachers, this raises concerns about the representation and inclusivity of the artworks and concepts that students may encounter while using AI tools.

Image captions: the prompt for one Midjourney image was 'Create a picture of an art teacher on Saturday afternoon'; the prompt for a second was 'Create a picture of a primary teacher on Saturday afternoon'. Misleading clichés on so many levels.

Responses

To address the issue of hallucinations, teachers should encourage students always to evaluate AI-generated outputs critically. This involves fostering a mindset of scepticism and inquiry, prompting students to question the accuracy and reliability of the information provided.

Teachers can guide students in establishing a practice of generating and modifying a range of prompts and using different AI programmes to gather a variety of responses, as in the sketch below. This requires students to use their judgement to compare and contrast AI-generated responses. By comparing AI-generated content with established sources or real-world examples, students will enhance their analytical skills and develop a deeper understanding of artistic concepts.
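For teachers or students who are comfortable with a little scripting, the comparison habit described above can even be automated. The Python sketch below is purely illustrative: query_model(), the tool names and the prompt variants are hypothetical stand-ins for whichever AI services a class actually uses; the point is simply the workflow of sending the same prompt variations to more than one system and reviewing the answers side by side.

```python
# Illustrative sketch only: query_model() is a hypothetical stand-in for
# whichever AI tool (and its real API client) a class actually uses.

PROMPT_VARIANTS = [
    "Describe the key features of Impressionist painting.",
    "Describe the key features of Impressionist painting, citing specific works.",
    "Summarise Impressionism for a secondary school art student.",
]

MODELS = ["tool_a", "tool_b"]  # hypothetical names for two different AI programmes


def query_model(model_name: str, prompt: str) -> str:
    """Hypothetical placeholder: replace the body with a real call to the chosen tool."""
    return f"[{model_name} response to: {prompt!r}]"


def gather_responses() -> dict:
    """Send every prompt variant to every tool so answers can be compared side by side."""
    responses = {}
    for prompt in PROMPT_VARIANTS:
        for model in MODELS:
            responses[(model, prompt)] = query_model(model, prompt)
    return responses


if __name__ == "__main__":
    # Group the output by tool and prompt so differences are easy to spot.
    for (model, prompt), answer in gather_responses().items():
        print(f"--- {model} | {prompt}\n{answer}\n")
```

In a classroom the same comparison can, of course, be done by hand in each tool's own interface; a script like this simply formalises the habit of varying prompts and cross-checking responses.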

In terms of reducing bias, educators should select AI tools that prioritise inclusivity and diversity in their training data.

Art and design teachers should be vigilant and seek to write prompts which actively promote culturally and socially diverse responses. They should recognise the use of AI as an opportunity to facilitate discussions around bias, helping students understand its impact on art and society.

Finally, ongoing professional development for teachers is essential. Engaging in training sessions focused on AI literacy can equip educators with the knowledge to navigate these challenges effectively. Collaborating with colleagues to share experiences and strategies regarding the use of AI in the classroom will further enhance teaching practices.

By fostering a critical and inclusive approach to AI, teachers can empower themselves and their students to harness these tools responsibly and creatively while being aware of their limitations.