03/16/2026
Elevating Integrity in the Age of AI: Reflections from ICAI 2026
by Stephen Bunbury
Image credit: Author (ICAI Conference presentation)
Conferences often leave you with two things: new ideas and renewed energy. The recent ICAI conference certainly delivered both. With around 150 delegates attending sessions exploring the rapidly evolving world of artificial intelligence and academic integrity, the event created space for critical conversations about how higher education can respond thoughtfully and ethically to technological change. I had the privilege of contributing to this discussion through a session on inclusive assessment in the age of AI, drawing on my research on inclusive curriculum design and academic integrity. My central argument was simple but important: inclusion and integrity are not competing values; they reinforce one another when assessments are well designed.
Inclusive assessment as the foundation of integrity
A key message from my talk was that academic integrity is fundamentally a design issue, not simply a policing issue (Bertram Gallant & Rettinger, 2025). When assessments are overly rigid, inaccessible, or disconnected from meaningful learning, the conditions for misconduct increase. Research shows that students’ perceptions of fairness and accessibility strongly influence decisions about academic misconduct (Bertram Gallant & Rettinger, 2025). When learning environments feel inequitable or overly pressured, students are more likely to rationalise dishonest behaviour.
Inclusive curriculum design offers an important response. Proactive design that anticipates diversity reduces barriers before they arise and minimises the need for reactive adjustments. My earlier research (Bunbury, 2020) shows that staff often rely on a reactive “reasonable adjustments” model rather than embedding accessibility from the outset. Approaches such as Universal Design for Learning (UDL) (CAST, 2024) encourage educators to build flexibility into assessment design through multiple means of engagement, representation, and expression, enabling students to demonstrate their learning without unnecessary structural barriers.
Where AI meets inclusion
Artificial intelligence was unsurprisingly the dominant theme across the conference. Many discussions centred on the question: what should universities do about AI? The conversation is often framed around prohibition, but the reality is far more complex. AI tools can support students with diverse cognitive, linguistic and organisational needs by helping them structure ideas, summarise complex materials, translate language, and draft preliminary text. Used transparently, these tools can serve as scaffolding that supports learning rather than undermines it. This raises an important inclusion question. If we simply ban AI tools outright, we may inadvertently remove support mechanisms that some students rely on to access learning. Instead, the focus should shift toward designing assessments that require human reasoning, judgement and reflection, even when AI tools are available.
One practical approach is to assess the process rather than only the final product. Elevating integrity requires us to move with the times. In the past, essay assessments tended to focus primarily on the final answer; while research was always important, it was largely taken for granted as part of the process. Today, however, students engage with learning in different ways, and assessment design must reflect this shift. As educators, we have a responsibility to adapt and to ensure that the assessments we set are inclusive in all senses: not only from a disability perspective, but also by recognising diverse learners and different approaches to learning. For example, requiring students to submit drafts, reflections, or decision logs enables educators to evaluate how students think, not just what they produce. Similarly, asking students to critique or improve AI-generated outputs encourages critical engagement with technology. These strategies align closely with Universal Design for Learning (UDL) principles, offering multiple ways for students to demonstrate understanding and engage with complex ideas, thereby supporting a more inclusive learning environment.
Faculty attitudes and disciplinary realities
An interesting discussion emerged during and after the session about faculty attitudes toward AI. Many colleagues remain understandably cautious, particularly in disciplines where professional stakes are high. Medicine and law, for example, were frequently cited. The question posed was provocative: would we feel comfortable relying on AI in medical training or legal reasoning? The answer is not straightforward. But perhaps the better question is whether we should design assessments that help students critically engage with AI rather than ignore its presence. AI use is not uniform across disciplines, and assessment design must reflect this. What matters is appropriateness to the task and discipline, not blanket acceptance or rejection. In some contexts, AI may play a supporting role; in others, it may need to be limited or carefully structured.
Integrity in a rapidly changing environment
Another fascinating session I attended featured Dr Thomas Lancaster discussing the practical realities of academic integrity policy. One key takeaway was that policy frameworks often struggle to keep pace with technological change. By the time policies move through committees and ratification processes, the technological landscape has already shifted. This is particularly true with the emergence of AI “agents” and rapidly evolving tools. It reinforces the need for policies that are principle-based rather than overly prescriptive, allowing institutions to adapt as technologies develop.
I also attended a session on the STRIVE model (Anselmo et al., 2024), presented by Dr Alysia Wright, an emerging framework for designing assessments in the context of AI. Developed to guide educators in aligning technology with learning design, STRIVE emphasises principles such as student-centredness, transparency, responsibility, integrity, validity, and equity. Importantly, as the speaker noted, frameworks like STRIVE are not rigid rules but reflective tools that help educators think more carefully about how assessments can be designed to support both integrity and inclusion when AI is part of the learning environment.
The human side of integrity
One of the highlights of the conference was Paul Osincup’s keynote, which blended humour, research insight, and practical strategies for navigating the pressures of academic life. His talk was a reminder that integrity is not just about rules and policies. It is also about building healthier professional cultures where staff and students feel supported. In times when the challenges facing higher education can feel overwhelming, maintaining wellbeing and a sense of community is itself an act of sustaining integrity.
Looking forward
If there was one clear message emerging from the conference, it was this: AI cannot be ignored. But neither should it dominate the conversation to the point of obscuring the core purpose of education. The challenge ahead is not simply controlling AI but designing learning environments that continue to require human thinking, ethical reasoning and intellectual engagement. Inclusive assessment offers one of the most promising pathways forward. By embedding UDL principles, authentic assessment design, and transparent AI engagement, universities can simultaneously strengthen inclusion, support student learning, and maintain academic standards. In short, the future of academic integrity may depend less on detection technologies and more on thoughtful assessment design.
References
Anselmo, L., Eaton, S. E., Jivani, R., Moya, B., & Wright, A. (2024). STRIVE: Emerging considerations when designing assessments for artificial intelligence use.
Bertram Gallant, T., & Rettinger, D. A. (2025). The opposite of cheating: Teaching for integrity in the age of AI (Vol. 4). University of Oklahoma Press.
Bunbury, S. (2020). Disability in higher education – do reasonable adjustments contribute to an inclusive curriculum? International Journal of Inclusive Education, 24(9), 964–979.
CAST (2024). CAST Universal Design for Learning Guidelines version 3.0. Retrieved from https://udlguidelines.cast.org
Stephen Bunbury PFHEA, NTF is a Reader in Law/Associate Professor at the University of Westminster, Institutional Academic Integrity Lead, and an ICAI Board Director.

