 
The Hidden Curriculum of AI Syllabus Statements: How Fear-Based Framing Shapes Student Behaviour and Academic Integrity

12/08/2025

By Brooklin Schneider

Image generated with Google’s Gemini 2.5 Flash

Across higher education, the syllabus is one of the first points of contact between students and their institution's expectations. Increasingly, it is also where students first encounter institutional guidance on artificial intelligence. Unfortunately, what they often find is not support or clarity, but a warning. When AI is framed primarily in terms of prohibition, misconduct, and punishment, institutions unintentionally anchor students to the belief that AI is inherently suspect or morally compromised.

This anchoring effect is well documented in the literature. Wang et al. (2019) argue that the first message students receive forms a cognitive anchor that shapes all subsequent interpretations. When the anchor presents AI as a threat, students internalize a powerful heuristic: AI = bad or AI = cheating. This insight helps explain why so many students experience anxiety around AI, even in classes where instructors have explicitly permitted its use.

What Students Are Experiencing

Recent research confirms this mismatch between what students need and what institutions provide. Farinosi and Melchior (2025) conducted a multi-method study across European institutions and found that nearly 85 percent of students perceived no active response from professors regarding AI. Many described faculty attitudes as abstract or oppositional. Yet these same students overwhelmingly wanted clearer guidance. The authors note that 80 percent desired explicit support for responsible AI use.

This disconnect points to a larger issue in the student experience. When institutions fail to articulate how AI can be used ethically, students default to fear-based assumptions. Even when instructors attempt to correct this, the early anchoring persists. In my own courses this term, I asked students early on whether they were considering exploring AI as they worked on their assignments. Only one student raised their hand, despite my course’s syllabus statement permitting AI and providing detailed, student-centered guidelines on its use. It was not even October, yet the anchoring bias appeared to have already set in.

How Fear Reshapes Learning

Fear-based framing dulls the pedagogical opportunities AI tools can provide. Emerging research illustrates how generative AI can support personalized and scaffolded learning that aligns with constructivist theories and Vygotsky’s Zone of Proximal Development. Studies by Kanont et al. (2024) and Zheldibayeva (2025) describe the possibilities for AI to act as a learning buddy that enhances listening, writing, and other literacy skills.

But when institutional framing casts AI as illicit, these learning supports become off-limits. This has troubling consequences for accessibility. Students with disabilities depend on AI-enhanced assistive technologies. Kuerban et al. (2025) demonstrate how specialized AI tools can provide essential support for learners with dyslexia by blending generative AI and augmented reality. For these students, AI is not an advantage but a bridge to equitable learning. Similarly, Zhao’s (2025) work shows how second-language (L2) learners use AI to strengthen reading, writing, and creative expression. A ban based on a biased heuristic punishes students for needing tools that make learning accessible. Such policies also conflict with universal design for learning (UDL). Choi and Seo (2024) argue that accessibility, usability, and UDL require flexibility, multiple modes of engagement, and design that accounts for diverse learners. Prohibiting AI wholesale ignores these principles and reinforces inequities.

What Institutions Can Do Differently

Evidence shows that students do not want unchecked permissiveness. They want guidance and transparency. Farinosi and Melchior’s (2025) participants explicitly stated that they favor regulation over prohibition. They want to understand when and how AI is appropriate within the learning process.

Some institutions are beginning to shift toward this approach. Danny Liu (2025, September 19) recently highlighted the University of Sydney’s decision to “[ban] the banning of AI” in non-observed assessments. The university’s aim is to reduce the rising climate of fear and replace a control-oriented mindset with an educative one. This shift aligns with Corbin et al.’s (2025) argument that discursive warnings are insufficient. They call instead for structural changes to assessment design that anticipate and help shape future educational needs. Kohen-Vacs et al. (2024) extend this argument by urging institutions to reimagine governance frameworks to meet the realities of AI-enhanced learning.

These approaches move beyond fear to create conditions where integrity, clarity, and responsibility can take root.

Toward an Integrity-Centered Model

To strengthen academic integrity in an AI-mediated world, institutions can reconsider the role of syllabus statements. Instead of anchoring students to fear, statements can clarify:

  • what responsible AI use looks like in the discipline
  • how AI can support learning goals without replacing core skills
  • which activities require human-only work and why
  • how accessibility considerations intersect with AI policy
  • where students can seek guidance without risk

This approach recognizes that transparency and partnership are stronger integrity strategies than prohibition alone.

Confronting the Hidden Curriculum

The content of a syllabus statement shapes more than compliance. It shapes trust, belonging, and the hidden curriculum of what the institution believes about its students. When AI is framed as a monster in the shadows, students learn to hide. When AI is framed as a tool that requires discernment, they learn to engage thoughtfully.

Rewriting the syllabus statement is not a superficial change. It is a chance for institutions to align academic integrity with learning science, accessibility, and equity. And it is a chance to replace fear with confidence, clarity, and shared responsibility. A syllabus statement cannot carry all of this work. But it can set the tone. And shifting that tone from suspicion to support may be one of the most meaningful steps institutions can take as AI becomes a permanent part of higher education.

References

Choi, G. W., & Seo, J. (2024). Accessibility, usability, and universal design for learning: Discussion of three key LX/UX elements for inclusive learning design. TechTrends, 68, 936–945. https://doi.org/10.1007/s11528-024-00987-6

Corbin, T., Dawson, P., & Liu, D. (2025). Talk is cheap: Why structural assessment changes are needed for a time of GAI. Assessment & Evaluation in Higher Education, 1–11. https://doi.org/10.1080/02602938.2025.2503964

Farinosi, M., & Melchior, C. (2025). ‘I use ChatGPT, but should I?’ A multi-method analysis of students’ practices and attitudes towards AI in higher education. European Journal of Education, 60(2), 1–15. https://doi.org/10.1111/ejed.70094

Google. (2025). Gemini 2.5 Flash (Sept 21 version) [Large language model]. https://gemini.google.com/app

Kanont, K., Pingmuang, P., Simasathien, T., Wisnuwong, S., Wiwatsiripong, B., Poonpirome, K., Songkram, N., & Khlaisang, J. (2024). Generative-AI, a learning assistant? Factors influencing higher-ed students' technology acceptance. Electronic Journal of E-Learning, 22(6), 18–33. https://doi.org/10.34190/ejel.22.6.3196

Kohen-Vacs, D., Amzalag, M., Weigelt-Marom, H., Gal, L., Kahana, O., Raz-Fogel, N., Ben-Aharon, O., Reznik, N., Elnekave, M., & Usher, M. (2024). Towards a call for transformative practices in academia enhanced by generative AI. European Journal of Open, Distance and E-Learning, 26(S1), 35–50. https://doi.org/10.2478/eurodl-2024-0006

Kuerban, Y., Oyelere, S. S., & Sanusi, I. T. (2025). ReadSmart: Generative AI and augmented reality solution for supporting students with dyslexia learning disabilities. International Journal of Technology in Education and Science, 9(1), 159–176. https://doi.org/10.46328/ijtes.599

Liu, D. (2025, September 19). In July the University of Sydney introduced new policies that effectively 'banned the banning of AI' in non-observed assessments [photo of student and faculty discussion panel]. LinkedIn. https://www.linkedin.com/posts/dannydotliu_generativeai-highereducation-academicintegrity-activity-7374177844152877056-NHaj?utm_source=share&utm_medium=member_desktop&rcm=ACoAAA0xsx8BmQYRrtds0ZlpIX_vmQCIl-83gLc

Wang, D., Yang, Q., Abdul, A., & Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19, Paper 601, pp. 1–15). Association for Computing Machinery. https://doi.org/10.1145/3290605.3300831

Zhao, D. (2025). The impact of AI-enhanced natural language processing tools on writing proficiency: An analysis of language precision, content summarization, and creative writing facilitation. Education and Information Technologies, 30(6), 8055–8086. https://doi.org/10.1007/s10639-024-13145-5

Zheldibayeva, R. (2025). GenAI as a learning buddy for non-English majors: Effects on listening and writing performance. Educational Process: International Journal, 14, e2025051. https://doi.org/10.22521/edupij.2025.14.51

 


Brooklin Schneider (MA, MEd Candidate) is a faculty member and educational developer at NorQuest College in Edmonton, Alberta, Canada. She is also a researcher and graduate student in the University of Calgary’s Generative AI and Educational Innovation program. Her current work examines the intersections of academic integrity and generative AI in college and polytechnic settings.

 

The author's views are their own.

