Brookings Study says "The risks of AI in schools outweigh the benefits"
by Michael Keany
on Friday
Summary for Educators
Article: “The risks of AI in schools outweigh the benefits, report says”
Author: Cory Turner
Publication: NPR (Weekend Edition Sunday)
Date: January 14, 2026
In this NPR education report, Cory Turner summarizes a major new study from the Brookings Institution’s Center for Universal Education, which argues that the risks of using generative AI in K–12 education currently outweigh the benefits. The study is expansive, drawing from student, parent, educator, and expert focus groups in 50 countries, complemented by a review of hundreds of research articles. While the study acknowledges specific academic benefits of AI—particularly in literacy—it raises profound concerns about cognitive, emotional, developmental, and equity-related risks that schools are not yet prepared to manage.
Brookings frames its analysis as a “premortem,” given that generative AI is too new for long-term outcomes to be measured. The authors aim to identify risks and remedies before irreversible harm occurs.
Documented Benefits: Literacy & Teacher Workload
On the positive side, teachers in the study noted that AI can support reading and writing development, especially for multilingual learners or students who need differentiated text levels. AI can adjust text complexity, help students work through writer’s block, and assist with revisions related to syntax, cohesion, and mechanics. These uses align with scaffolding rather than replacement, suggesting that AI is most beneficial when it supplements, rather than substitutes for, human teaching.
The study also highlights notable benefits for teachers, including task automation—creating quizzes, translating materials, drafting emails, and building rubrics. One U.S. study cited in the report found that AI can save teachers an average of six hours per week, the equivalent of about six weeks over a school year, freeing capacity for instruction and planning.
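A rough check on that equivalence, assuming an approximately 36-week school year and a 40-hour work week (both are illustrative assumptions; neither figure appears in the article):

6 \,\text{hours/week} \times 36 \,\text{weeks} = 216 \,\text{hours}, \qquad \frac{216 \,\text{hours}}{40 \,\text{hours/week}} \approx 5.4 \,\text{weeks}

Under those assumptions the savings come to roughly five and a half full work weeks, consistent with the approximately six weeks the study cites.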
Major Cognitive Risks: Offloading & Atrophy
However, the study’s most urgent concerns involve cognitive development. Brookings argues that generative AI accelerates “cognitive offloading,” where students increasingly rely on tools to perform thinking tasks. While offloading isn’t new—calculators reduced computation load, and keyboards reduced handwriting—AI goes further by generating arguments, judgments, and interpretations.
Rebecca Winthrop, a senior fellow at Brookings, warns that when AI tells children the answers, students no longer learn to parse truth from fiction, evaluate arguments, build perspectives, or engage deeply with content. Students interviewed acknowledged this impact, telling researchers that using AI is “easy” and requires “no brain,” mirroring survey data showing declines in critical thinking and content knowledge among heavy adolescent AI users.
Social-Emotional & Relational Risks
The report also raises alarms about social and emotional development. Chatbots’ sycophantic design—agreeing with users to maximize engagement—can undermine authentic peer interaction. If children primarily build social-emotional skills by interacting with agreeable AI, real-life conflict resolution becomes harder. Brookings cites survey data showing that 1 in 5 high schoolers has had, or knows someone who has had, a romantic relationship with an AI, and 42% report using AI for companionship, highlighting risks around attachment, mental health, and adolescent identity formation.
Equity Concerns
Brookings identifies a dual equity dynamic: AI can extend learning to marginalized groups (e.g., Afghan girls receiving WhatsApp lessons) and support students with disabilities (e.g., dyslexia). Yet it can also widen divides, because the more accurate, reliable AI models sit behind paywalls. This marks the first time in ed-tech history that schools may need to pay for accuracy—a serious disadvantage for underfunded districts.
Recommendations
The report urges policymakers and schools to:
Shift school culture away from “transactional task completion”
Build holistic AI literacy for teachers & students
Co-design AI tools with educators
Regulate K–12 AI use for cognitive, emotional & privacy protection
Ensure equitable access to high-quality models
For Educators
The core message to educators is not anti-AI but pro-responsible adoption: AI’s risks are already visible and “fixable,” but only if schools act intentionally rather than reactively.
Original Article
Citation: Turner, C. (2026, January 14). The risks of AI in schools outweigh the benefits, report says. NPR. https://www.npr.org/2026/01/14/1224399165/ai-schools-risks-benefits...
------------------------------
Prepared with the assistance of AI software
OpenAI. (2025). ChatGPT (4) [Large language model]. https://chat.openai.com