Talking to Students About Generative AI: Seven Practical Guardrails for Schools and Families

The Conversation

Dónal Mulligan, Lecturer, School of Communications, Dublin City University


Generative AI has moved from novelty to daily infrastructure with breathtaking speed. Students are encountering tools like ChatGPT and other chatbots through homework support, entertainment, and social interaction — often without adult oversight. In his recent article for The Conversation, Dónal Mulligan offers seven practical strategies for parents and educators navigating this new terrain.

The central message is clear:

AI is not just another app. It is a behavioral technology that shapes attention, learning, confidence, and relationships.

For school leaders and educators, the implications extend beyond academic integrity into student safety, cognitive development, and emotional well-being.


1. Start With Curiosity, Not Crackdowns

Mulligan advises against beginning the conversation with prohibition. Telling students “don’t use AI” can push usage underground.

Instead, adults should invite demonstration:

  • “Show me how you use it.”

  • “What do you like about it?”

  • “What wouldn’t you use it for?”

This normalizes discussion without normalizing unrestricted use.

In schools, this translates to open classroom conversations about AI, acknowledging its appeal while reinforcing its limitations.


2. Respect Age Limits as Safety Signals

Many AI platforms set minimum age requirements — often 13+, sometimes 18+. These are not arbitrary.

They signal concerns about:

  • Content exposure

  • Emotional risk

  • Developmental appropriateness

Treating these limits casually undermines their purpose.

School leaders should ensure AI policies reflect age appropriateness and parental awareness.


3. Teach Fact-Checking as a Habit

AI systems can “hallucinate” — producing confident but inaccurate responses.

Students (and adults) may mistake fluency for truth.

Mulligan stresses the importance of reinforcing verification:

✔ Check claims against trusted sources
✔ Confirm health, legal, or academic information
✔ Question plausibility

Critical thinking must accompany AI use.

This aligns directly with media literacy and research skills already embedded in curricula.


4. Set Clear Emotional Boundaries

One of the most sobering insights concerns emotional over-reliance.

Chatbots are designed to:

  • Keep conversations flowing

  • Offer reassurance

  • Encourage continued engagement

For vulnerable young users, this dynamic can foster secrecy, dependency, or unsafe exploration of emotionally charged topics.

Mulligan emphasizes:

No chatbot is a counselor, therapist, or trusted confidant.

Schools must reinforce that emotionally intense topics — self-harm, sexual content, mental health crises — require human support.


5. Protect Personal Data

Students often paste personal details into chatbots without recognizing privacy risks.

Clear guidance should include:

  • No full names, addresses, or school identifiers

  • No uploading private documents

  • No sharing others’ personal data

If it wouldn’t go on a public noticeboard, it shouldn’t go into a chatbot. 

Digital citizenship lessons must now explicitly address AI data privacy.


6. Prevent Cognitive Off-Loading

Perhaps the most pressing educational risk is cognitive off-loading: when AI performs the thinking step for the learner.

Research increasingly links heavy reliance on AI with reduced critical thinking and lower cognitive effort.

Mulligan offers a simple framing:

“AI can help you learn, but it can also help you avoid learning.”

Permissible uses might include:

✔ Requesting explanations in simpler language
✔ Seeking feedback on a draft

Not permissible:

✘ Writing the essay
✘ Solving homework questions outright
✘ Producing work the student cannot explain

School policy must reflect this distinction.


7. Make AI Use Visible

Secrecy amplifies risk.

Mulligan encourages shared, transparent use:

  • AI used in common spaces

  • Agreed time limits

  • Communication among parents and educators

Schools should foster collaborative dialogue rather than isolated enforcement.


Leadership Takeaway

Generative AI is reshaping learning more rapidly than regulations and curricula can adapt.

Schools must move from reactive discipline to proactive literacy:

✔ Model critical thinking
✔ Establish boundaries
✔ Teach ethical use
✔ Strengthen human connection

Ultimately, the goal is not to ban AI, but to ensure it supports learning rather than undermines it.


Final Thought

Being AI-aware is not about panic.

It is about adults building enough knowledge and confidence to guide young people toward safe, age-appropriate, and genuinely educational use.

The technology will evolve.

Our responsibility to guide students through it will not.


------------------------------

Prepared with the assistance of AI software

OpenAI. (2026). ChatGPT (5.2) [Large language model]. https://chat.openai.com
