41% of Schools Report AI Cyber Incidents

By John P. Mello Jr.

Some 41% of schools in the United States and the United Kingdom have experienced AI-related cyber incidents, ranging from phishing campaigns to harmful student-generated content, according to a recently released study from an identity and access management firm.
That 41% breaks down into 11% of schools that said the incident caused a disruption and 30% that said it was contained quickly, according to a survey of 1,460 education administrators in the U.S. and U.K. conducted by TrendCandy for Keeper Security.
Most institutions (82%) said they feel at least “somewhat prepared” to handle AI-related cybersecurity threats, though that number falls to 32% for those who feel “very prepared,” the survey noted. This confidence, tempered by caution, shows that schools are aware of risks, but sizable gaps remain in overall preparedness, and uncertainty persists about the effectiveness of existing safeguards.
“Our research found that while almost every education leader is concerned about AI-related threats, only one in four feels confident in identifying them,” said Keeper Security Cybersecurity Evangelist Anne Cutler.
“The challenge is not a lack of awareness, but the difficulty of knowing when AI crosses the line from helpful to harmful,” she told TechNewsWorld. “The same tools that help a student brainstorm an essay can also be misused to create a convincing phishing message or even a deepfake of a classmate. Without visibility, schools struggle to separate legitimate use from activity that introduces risk.”
AI Cyber Incidents Underreported?
The finding that 41% of schools have already experienced an AI-related cyber incident is strikingly high, though perhaps not unexpected given the rapid and largely uncontrolled proliferation of AI tools in educational settings, observed David Bader, director of the Institute for Data Science at the New Jersey Institute of Technology (NJIT), in Newark, N.J.
“This number is concerning because it suggests that nearly half of our educational institutions are dealing with security challenges before they’ve had the opportunity to establish proper safeguards,” he told TechNewsWorld.
“Schools have historically been vulnerable targets for cyberattacks due to limited cybersecurity budgets and IT staffing, and the introduction of AI tools — many of which students and faculty adopt independently without institutional vetting — has dramatically expanded the attack surface.
“What’s particularly troubling is that this 41% likely represents only the incidents that schools have detected and reported, meaning the actual number could be considerably higher,” he said.
James McQuiggan, CISO advisor at KnowBe4, a security awareness training provider in Clearwater, Fla., agreed. “Based on how quickly schools are trying to adopt AI tools and most likely without any strong cybersecurity hygiene practices, this number could be conservative,” he told TechNewsWorld.
“Unfortunately,” he continued, “many schools do not have the necessary resources and governance to manage AI securely and safely for their students, which increases the risk of data exposure and misuse.”
“The number is not surprising when you consider those incidents include phishing emails,” added Paul Bischoff, consumer privacy advocate at Comparitech, a reviews, advice, and information website for consumer security products.
“Many phishing campaigns are not run by native English speakers,” he told TechNewsWorld. “The AI helps them craft more convincing phishing emails with fewer mistakes.”
According to the 2025 Verizon Data Breach Investigations Report, phishing accounts for 77% of breaches in the education sector, making it the most common attack method in that sector.
AI Adoption Spreads Across Schools
The study also found that AI is now a common part of classrooms and faculty offices. Eighty-six percent of institutions permit the use of AI tools by students, while only 2% have banned them outright. Among faculty, it added, adoption is even higher at 91%.
It reported that students are primarily using AI for supportive and exploratory tasks. The most common uses are research (62%), brainstorming (60%), and language assistance (49%). Creative projects (45%) and revision support (40%) follow, while more sensitive tasks, such as coding (30%) and completing assignments (27%), are more tightly controlled.
“While the report shows that 86% of schools allow student use of AI and 91% of faculty use it, the reality is that schools have largely lost the ability to meaningfully prohibit AI use even if they wanted to,” argued NJIT’s Bader. “AI tools are freely available on personal devices, and students are accessing them outside school networks regardless of institutional policies.”
“The more productive question isn’t whether to allow AI, but how to integrate it responsibly,” he said. “Schools face a choice not about whether AI will be used, but whether they’ll take a leadership role in shaping that use through education, ethical frameworks, and appropriate guardrails.”
“Attempting to ban AI would be both futile and counterproductive,” he added. “It would simply push usage underground while denying students the digital literacy skills they’ll need in an AI-augmented world.”
Sam Whitaker, vice president of social impact and strategic initiatives at StudyFetch, a Los Angeles-based AI-powered platform that transforms course materials into interactive study tools for students and educators, warned that there’s a difference between “AI use” and “responsible AI use.”
“Unrestricted use of productivity tools like ChatGPT or AI learning platforms that aren’t built for learning is not only dangerous, it’s potentially catastrophic for students’ long-term creativity and critical thinking,” he told TechNewsWorld.
“Schools have not only a choice but a responsibility to provide responsible solutions that are truly built for learning and not cheating,” he added.
School Policies Lag Behind AI Use
While schools and universities are building frameworks to govern AI use, the study noted, implementation is uneven. Policy development is still playing catch-up to practice, it explained, with just over half of institutions having detailed policies (51%) or informal guidance (53%) in place, and fewer than 60% deploying AI detection tools or student education programs.
With more than 40% already impacted, it continued, the fact that only a third (34%) have dedicated budgets and just 37% have incident response plans indicates a concerning gap in preparedness.
Keeper’s Cutler pointed out that relying on informal guidelines rather than formal policies leaves both students and faculty uncertain about how AI can safely be used to enhance learning and where it could create unintended risks.
“What we found is that the absence of policy is less about reluctance and more about being in catch-up mode,” she said. “Schools are embracing AI use, but governance hasn’t kept pace.”
“Policies provide a necessary framework that balances innovation with accountability,” she explained. “That means setting expectations for how AI can support learning, ensuring sensitive information such as student records or intellectual property cannot be shared with external platforms, and mandating transparency about when and how AI is used in coursework or research. Taken together, these steps preserve academic integrity and protect sensitive data.”
While policies governing AI are necessary, they shouldn’t be formulated with a cookie-cutter approach. “It is important to remember that a one-size-fits-all approach most likely won’t work here,” warned Elyse J. Thulin, a research assistant professor at the University of Michigan’s Institute for Firearm Injury Prevention. “A baseline of guidance is important, but different organizations will need to tailor their approaches based on their needs and infrastructure.”
“With any new technology comes potential for harm and wrongful use, as well as the benefits of advancement,” she added. “This does not necessarily mean the technology is harmful or wrong. We just need to work to identify ways to prevent it from being used improperly.”
“AI is included in that and is developing at an extremely rapid pace, so continued support for research in this area is absolutely critical,” she continued. “The more we can study and identify these use patterns, the better we establish evidence-based strategies and solutions to ultimately protect students and make school environments safer for everyone.”
https://www.technewsworld.com/story/41-of-schools-report-ai-cyber-incidents-179949.html