Artificial Intelligence (AI) is about creating computer systems that can mimic certain parts of human intelligence — learning, reasoning, problem-solving. AI is everywhere: your phone’s face unlock, Netflix recommendations, grammar checkers, even your favourite music app’s “suggested playlist.”
In simple terms, AI focuses on developing computer systems that simulate human intelligence and problem-solving capabilities. These systems are designed to reason, learn, and act autonomously. AI is embedded in a range of technologies we encounter in day-to-day life, so while Generative Artificial Intelligence (GenAI) gets much of the attention, the broader field of AI has long been a well-established area of research, development, and real-world application.
AI has advanced quickly in recent years and has become a significant force in human affairs. Let’s start by unpacking some key terms (Tzirides et al., 2023):
Generative AI: Learns patterns from existing data (training) and then creates new text, images, and sounds that have similar statistical properties to the training data.
Machine intelligence: Symbol manipulations by computers that are larger and more complicated than is feasible for human minds.
Artificial intelligence: The ability of computers and machines to perform tasks that typically require human intelligence, such as learning, problem-solving, decision-making, and understanding language.
The following video briefly introduces the last two terms – machine intelligence and artificial intelligence. We will explore GenAI more closely next.
Key Types of AI
| Type of AI | Examples | What it does |
| --- | --- | --- |
| Text Analysis | ChatGPT, Grammarly | Reads and generates text |
| Text Reading (OCR) | Google Lens, Adobe Scan | Turns images into text |
| Computer Vision | Face ID, self-driving cars | Detects and understands visuals |
| Listening (Speech Recognition) | Alexa, Siri, Zoom captions | Turns speech into text and vice versa |
💡 Quick quiz: Test your knowledge by identifying whether the following apps/tools use AI. Click the image to access the quiz:
This quiz was created with the help of AI (EdCafe)
Where did it struggle? Why do you think that happened?
🔍 Understanding Image Recognition
Quick, Draw! is a game created by Google where you doodle something (like a cat or a house), and the computer tries to guess what it is. Behind the scenes, it uses machine learning and a type of system called a neural network.
A neural network works a bit like the human brain:
Your doodle is turned into data (pixels).
The neural network looks for patterns—shapes, lines, and features—that match examples it has already learned from millions of other doodles.
Based on those patterns, it makes a prediction: “This looks like a cat!” or “That’s probably a house.”
The more data it trains on, the better it gets at recognising drawings—even when they are messy or drawn differently by different people.
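The pattern-matching idea above can be sketched as a toy in Python. This is an illustrative simplification, not Quick, Draw!'s actual model: the "doodle" pixel values and the learned weights below are made up for the example.

```python
import math

# Toy sketch of image recognition: a doodle is flattened into a list of
# pixel values, and one hand-wired "neuron" per label scores how well
# those pixels match patterns learned during training.
def predict(pixels, weights_per_label):
    scores = {label: sum(w * p for w, p in zip(weights, pixels))
              for label, weights in weights_per_label.items()}
    # Softmax turns the raw scores into a probability for each label.
    total = sum(math.exp(s) for s in scores.values())
    return {label: math.exp(s) / total for label, s in scores.items()}

# Hypothetical 4-pixel "doodle" and made-up learned weights.
doodle = [0.9, 0.1, 0.8, 0.2]
weights = {"cat": [1.0, 0.0, 1.0, 0.0], "house": [0.0, 1.0, 0.0, 1.0]}

probs = predict(doodle, weights)
best = max(probs, key=probs.get)  # the doodle matches the "cat" pattern
```

A real neural network has many layers of such neurons and learns its weights from millions of labelled doodles, but the core idea is the same: pixels in, pattern scores out, prediction from the highest probability.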
Quick, Draw! therefore uses image recognition, where an AI model tries to identify your doodles. Millions of doodles are collected and used as training data. The AI then learns to recognise patterns from this dataset.
It uses machine learning to identify and then predict what you are drawing. There are three parts to machine learning:
Dataset – A collection of curated data
Examples: images, measurements, text, video
Training Data – Labelled examples used to teach the model
Neural Networks – Systems inspired by the human brain that learn patterns to classify and predict
Quick, Draw! uses supervised learning, a type of machine learning where the model is trained on labelled data. The algorithm learns to map inputs to outputs based on example input-output pairs.
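A minimal sketch of supervised learning, using a 1-nearest-neighbour classifier: the model maps an input to an output by finding the closest labelled example. The features and labels below are invented for illustration (imagine each pair of numbers describes a doodle, e.g. roundness and number of straight lines); this is not the Quick, Draw! dataset or model.

```python
# Training data: labelled example input-output pairs (features, label).
training_data = [
    ([0.9, 1.0], "cat"),
    ([0.8, 2.0], "cat"),
    ([0.2, 8.0], "house"),
    ([0.1, 9.0], "house"),
]

def distance(a, b):
    # Euclidean distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict(features):
    # Output the label of the nearest labelled training example.
    nearest = min(training_data, key=lambda ex: distance(ex[0], features))
    return nearest[1]

predict([0.85, 1.5])  # closest to the "cat" examples
```

The "learning" here is trivially just storing the labelled pairs, but it shows the supervised-learning contract: labelled inputs in, a predicted output label for new inputs.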
⚖️ Bias in AI
Some doodles (cats, trees, rainbows) look similar worldwide.
BUT others (ambulances, hospitals, houses) vary across cultures. Example: Quick, Draw! struggles if an ambulance isn’t drawn with a red cross.
Bias occurs when outcomes are systematically unfair because of the training data. To support fairness in AI systems, we must be careful about what data we use and make sure we have enough data that is representative.
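The ambulance example above can be sketched as a toy in Python. All the data here is invented; the point is only that a rule learned from unrepresentative examples ends up rejecting valid inputs that do not share the training data's dominant style.

```python
# Hypothetical training set: every ambulance example happens to be drawn
# with a red cross, so the "model" treats that style as essential.
training_ambulances = [
    {"has_red_cross": True, "has_siren": True},
    {"has_red_cross": True, "has_siren": False},
    {"has_red_cross": True, "has_siren": True},
]

# "Learning": keep only the features present in every training example.
required = {feature for feature in training_ambulances[0]
            if all(ex[feature] for ex in training_ambulances)}

def looks_like_ambulance(drawing):
    # Accept a drawing only if it has every "required" feature.
    return all(drawing.get(feature, False) for feature in required)

looks_like_ambulance({"has_red_cross": True, "has_siren": False})  # accepted
looks_like_ambulance({"has_red_cross": False, "has_siren": True})  # rejected, though valid
```

A more representative training set, with ambulances drawn in many styles, would stop the red cross from becoming a "required" feature in the first place.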
📝 Reflection Questions
What did you notice about AI’s ability to interpret your drawings?
How does data shape what AI “knows”?
What responsibilities do developers have to reduce bias?
How might you design AI systems that are fair and ethical?
3. Generative AI (GenAI) – The New Wave
GenAI doesn’t just analyse data — it creates new content like text, images, music, or video based on patterns it has learned.
To learn more about how GenAI fits into AI, watch this video:
Think:
ChatGPT writing essays.
Midjourney making art.
Elicit helping you research.
Many AI tools can create images, either as standalone apps or embedded in other tools. These include OpenAI’s DALL·E, Microsoft’s Bing Image Creator, Google’s Imagen, and Midjourney. The following blog post highlights some of the best AI image generators: The 7 best AI image generators in 2024
Watch this video that takes you through the process of creating an AI image using prompting.
“A cartoon image of a student holding a tablet in a library.”
Reflection Question: Representation and Image Generators
AI image generators like DALL·E, Midjourney, and Stable Diffusion can create stunning visuals — but when it comes to representing people, they can cause serious problems.
This is an image used by NZ National Party in one of their campaigns:
Later, it was determined that this was an AI-generated image.
Do you see any issues with using AI to generate pictures? Is there anything we need to consider, especially when depicting people of different races or in spaces where there are gender stereotypes?
AI has Built-in Bias
AI image models are trained on huge collections of images from the internet.
These collections are often dominated by Western imagery and perspectives.
As a result, prompts like “Māori leader” or “Indigenous ceremony” may produce stereotyped, inaccurate, or exoticised images.
Example: The model might show headdresses that are not part of Māori culture, or mix cultural elements from different Indigenous groups, erasing their uniqueness.
4. Large Language Models (LLMs) – How ChatGPT Works
LLMs like ChatGPT, Gemini, Claude, and LLaMA are trained on massive amounts of text. They:
Predict the next word in a sentence (kind of like super-charged autocomplete).
Can summarise, translate, or create content.
Don’t actually understand — they work with patterns, not meaning.
In other words, an LLM doesn’t actually understand the words; it guesses the most likely next word based on patterns it has seen before. The model doesn’t “know” the answer in a human sense. Instead, it calculates a probability distribution over possible next words, based on patterns it has learned from its training data.
Input: The phrase “The boy went to the …” is given to the LLM. This is the context, the previous words the model uses to predict what comes next.
Prediction: The LLM considers each possible continuation (e.g., playground, school, park, cafe, hospital). Each has a different probability.
Output: The model samples one of these options. It often picks the highest-probability word (playground), but randomness (controlled by settings like “temperature”) can lead it to pick others, making responses more varied.
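The sampling step can be sketched in Python. The word probabilities below are invented for illustration; real models work over tokens and vocabularies of tens of thousands of entries, but the temperature mechanism is the same.

```python
import random

# Toy probability distribution over possible next words for
# "The boy went to the ..." (made-up numbers).
next_word_probs = {
    "playground": 0.40, "school": 0.25, "park": 0.20,
    "cafe": 0.10, "hospital": 0.05,
}

def sample_next_word(probs, temperature=1.0, rng=random):
    # Temperature reshapes the distribution: values below 1 sharpen it
    # (more predictable output), values above 1 flatten it (more varied).
    weights = {w: p ** (1.0 / temperature) for w, p in probs.items()}
    total = sum(weights.values())
    r = rng.random() * total
    for word, weight in weights.items():
        r -= weight
        if r <= 0:
            return word
    return word  # fallback for floating-point rounding

sample_next_word(next_word_probs, temperature=0.05)  # almost always "playground"
sample_next_word(next_word_probs, temperature=2.0)   # much more varied
```

At very low temperature the model behaves almost deterministically, always emitting the single most likely word; at high temperature even unlikely words like "hospital" get picked sometimes.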
An analogy: LLMs are like an octopus. Emily Bender’s “octopus experiment” shows how LLMs work. The octopus in the story doesn’t understand the conversation it is listening to; it just matches new messages to patterns it has seen before and produces a reply that fits. LLMs do the same: they know the form of words (what usually comes next) but not the meaning. They sound convincing but don’t actually understand the ideas behind the text.
Like the octopus, an LLM guesses its way through a conversation from patterns alone, with no real-world knowledge.
Image created using Magicphotos using the prompt “A smaller octopus in the water between the two islands”
5. Risks and Limitations
Using AI without thinking can be risky:
Hallucinations: AI might make up “facts” or references.
Bias: Can reinforce stereotypes (e.g., certain jobs shown mostly as white men).
Shallow knowledge: Good writing, weak depth.
Cultural gaps: Often US/Western-focused.
Environmental impact: High energy use.
Digital divide: Premium tools leave some learners behind.
⚠️ Always fact-check AI output before using it in your work.
6. Key Takeaways
AI has been around for decades, but GenAI is changing how we create content.
LLMs are powerful but not perfect — they don’t “think” like humans.
Using AI wisely means knowing both its capabilities and its limits.
Your ideas, cultural context, and critical thinking are what make your work truly yours.