Talking to Students About Generative AI: Seven Practical Guardrails for Schools and Families
by Michael Keany
Mar 3
The Conversation
Dónal Mulligan
Lecturer, School of Communications, Dublin City University
Generative AI has moved from novelty to daily infrastructure with breathtaking speed. Students are encountering tools like ChatGPT and other chatbots through homework support, entertainment, and social interaction — often without adult oversight. In his recent article for The Conversation, Dónal Mulligan offers seven practical strategies for parents and educators navigating this new terrain.
The central message is clear:
AI is not just another app. It is a behavioral technology that shapes attention, learning, confidence, and relationships.
For school leaders and educators, the implications extend beyond academic integrity into student safety, cognitive development, and emotional well-being.
1. Start With Curiosity, Not Crackdowns
Mulligan advises against beginning the conversation with prohibition. Telling students “don’t use AI” can push usage underground.
Instead, adults should invite demonstration:
“Show me how you use it.”
“What do you like about it?”
“What wouldn’t you use it for?”
This normalizes discussion without normalizing unrestricted use.
In schools, this translates to open classroom conversations about AI, acknowledging its appeal while reinforcing its limitations.
2. Respect Age Limits as Safety Signals
Many AI platforms set minimum age requirements — often 13+, sometimes 18+. These are not arbitrary.
They signal concerns about:
Content exposure
Emotional risk
Developmental appropriateness
Treating these limits casually undermines their purpose.
School leaders should ensure AI policies reflect age appropriateness and keep parents informed.
3. Teach Fact-Checking as a Habit
AI systems can “hallucinate” — producing confident but inaccurate responses.
Students (and adults) may mistake fluency for truth.
Mulligan stresses the importance of reinforcing verification:
✔ Check claims against trusted sources
✔ Confirm health, legal, or academic information
✔ Question plausibility
Critical thinking must accompany AI use.
This aligns directly with media literacy and research skills already embedded in curricula.
4. Set Clear Emotional Boundaries
One of the most sobering insights concerns emotional over-reliance.
Chatbots are designed to:
Keep conversations flowing
Offer reassurance
Encourage continued engagement
For vulnerable young users, this dynamic can foster secrecy, dependency, or unsafe exploration of emotionally charged topics.
Mulligan emphasizes:
No chatbot is a counselor, therapist, or trusted confidant.
Schools must reinforce that emotionally intense topics — self-harm, sexual content, mental health crises — require human support.
5. Protect Personal Data
Students often paste personal details into chatbots without recognizing privacy risks.
Clear guidance should include:
No full names, addresses, or school identifiers
No uploading private documents
No sharing others’ personal data
If it wouldn’t go on a public noticeboard, it shouldn’t go into a chatbot.
Digital citizenship lessons must now explicitly address AI data privacy.
6. Prevent Cognitive Off-Loading
Perhaps the most pressing educational risk is cognitive off-loading: letting AI do the thinking a task was designed to require of the learner.
Research increasingly links heavy reliance on AI with reduced critical thinking and lower cognitive effort.
Mulligan offers a simple framing:
“AI can help you learn, but it can also help you avoid learning.”
Permissible uses might include:
✔ Requesting explanations in simpler language
✔ Seeking feedback on a draft
Not permissible:
✘ Writing the essay
✘ Solving homework questions outright
✘ Producing work the student cannot explain
School policy must reflect this distinction.
7. Make AI Use Visible
Secrecy amplifies risk.
Mulligan encourages shared, transparent use:
AI used in common spaces
Agreed time limits
Communication among parents and educators
Schools should foster collaborative dialogue rather than isolated enforcement.
Leadership Takeaway
Generative AI is reshaping learning more rapidly than regulations and curricula can adapt.
Schools must move from reactive discipline to proactive literacy:
✔ Model critical thinking
✔ Establish boundaries
✔ Teach ethical use
✔ Strengthen human connection
Ultimately, the goal is not to ban AI but to ensure it supports learning rather than undermining it.
Final Thought
Being AI-aware is not about panic.
It is about adults building enough knowledge and confidence to guide young people toward safe, age-appropriate, and genuinely educational use.
The technology will evolve.
Our responsibility to guide students through it will not.
Original Article
------------------------------
Prepared with the assistance of AI software
OpenAI. (2026). ChatGPT (5.2) [Large language model]. https://chat.openai.com