Image credit: Author using ChatGPT 5.4 with prompt about students using free Gemini agentic AI
Imagine a student sitting down to complete an assignment.
Not too long ago, the process would begin with uncertainty. They would search for sources, open multiple tabs, read through articles that often contradicted each other, struggle to make sense of arguments, and slowly begin forming their own understanding. Then came the phase of prompting, re-prompting, and prompting again to try to get the answer their assignment required.
Today, that same student can simply instruct an AI agent: “Research this topic, compare viewpoints, and draft a response.”
Within seconds, the task is complete.
The output is structured, coherent, and confident.
The student submits it.
But pause for a moment and ask a simple question: What exactly did the student learn?
This question lies at the heart of agentic AI, a new generation of artificial intelligence systems embedded directly into browsers and workflows. Unlike earlier AI tools that waited for prompts and responded to specific instructions, agentic AI can autonomously browse, gather information, compare sources, synthesise arguments, and generate structured outputs.
Students are no longer just asking questions.
They are delegating thinking.
The Cognitive Industrial Revolution
History offers us a powerful lens to understand this shift.
Before the Industrial Revolution, craftsmen built things themselves. They understood materials, processes, and construction from beginning to end. Their knowledge was inseparable from the act of creation.
Then came automation.
Machines began performing the physical work, and humans transitioned into supervisors of automated systems. Productivity increased, but direct engagement with the process decreased.
Today, we are witnessing a similar transformation in cognition.
Students, once builders of the knowledge they acquired, risk becoming supervisors of automated reasoning.
This is not the same as copying and pasting from the internet, nor is it the same as using earlier generative AI tools. Even then, students had to search, evaluate, and decide. They had to engage with the intellectual process.
Agentic AI removes those intermediate steps.
This shift is no longer theoretical. In January 2026, agentic browsing capabilities were introduced into mainstream web browsers, enabling AI systems to autonomously navigate webpages, compare sources, synthesise information, and generate structured outputs across multiple tabs. For the first time, elements of the research process itself, not just writing, could be delegated. Tasks that previously required students to actively search, evaluate, and synthesise information can now be executed by the system within the same browser environment where learning already takes place. This capability extends beyond research alone. The student’s role shifts from solving the problem to supervising the solution.
It searches.
It compares.
It synthesises.
It concludes.
The student observes.
Learning, however, has never been defined by the correctness of the output. It has always been defined by the intellectual journey taken to arrive there. When that journey is fully delegated, learning itself is at risk of becoming observational rather than experiential.
Integrity in the Age of Delegated Cognition
Academic integrity has always been grounded in responsibility. Not simply responsibility for producing work, but responsibility for understanding, verifying, and standing behind that work.
Agentic AI introduces a new layer of ethical ambiguity.
When an AI system gathers sources, synthesises arguments, and produces conclusions, authorship becomes less visible. Students may submit work they cannot fully explain, not because they intended to deceive, but because they trusted the system to think on their behalf.
This creates a subtle but significant shift.
Integrity risks are no longer limited to intentional misconduct.
They arise from uncritical delegation.
Students may assume that if the AI agent produced the answer, it must be correct. This creates automation bias: the tendency to trust machine outputs without sufficient scrutiny.
Integrity, in this new context, must evolve. It is no longer enough to ask whether students completed the work themselves. We must ask whether students remain intellectually and morally accountable for the work they submit.
The Illusion of Learning
One of the most concerning risks of agentic AI is the illusion of understanding.
The output looks complete. It sounds authoritative. It feels intelligent.
But true learning requires struggle. It requires comparison, doubt, evaluation, and synthesis. These processes are not inefficiencies to be eliminated. They are the very mechanisms through which knowledge becomes internalised.
When AI compresses these processes into a single output, students may feel confident without developing competence.
Over time, this creates dependency, not just on technology, but on delegated cognition itself.
Students may become highly capable at directing AI systems, but less capable at reasoning independently.
This is not a failure of students. It is a reflection of how learning environments are being reshaped by technology.
Designing for Trust, Not Control
The response to this shift cannot simply be stricter policies, detection tools, or technological restrictions. These measures may address symptoms, but they do not address the underlying transformation.
Integrity has never been about control. It has always been about trust.
This is the focus of the Swiss MENA Leading House-funded Designing for Trust project, which I lead alongside Prof Elena Denisova-Schmidt. Our work examines how trust, transparency, and ethical responsibility can be intentionally embedded into AI-enabled learning environments.
Rather than asking how we prevent students from using AI, we ask more fundamental questions:
How do we ensure students remain accountable when AI assists their thinking?
How do we teach students to verify, question, and critically evaluate AI outputs?
How do we preserve human intellectual agency in AI-augmented environments?
Trust cannot be assumed. It must be intentionally designed into our pedagogical practices, our assessment structures, and our educational cultures.
The Role of Educators Moving Forward
As educators, our responsibility is not to resist technological progress, but to ensure that human development progresses alongside it.
We must design assessments that prioritise reasoning, not just results.
We must teach students that AI is a tool, not an authority.
We must help them understand that responsibility for knowledge cannot be delegated.
Because ultimately, education has never been about producing answers. It has always been about developing thinkers.
Agentic AI does not eliminate the importance of academic integrity. It makes it more essential than ever.
The question is not whether students will use AI.
They will.
The question is whether they will remain active participants in their own learning, or passive supervisors of automated cognition.
In the age of agentic AI, designing for trust is no longer optional.
It is foundational.
Reference
HES-SO. (n.d.). Designing for trust: Embedding moral responsibility in generative AI use in the classroom. Swiss MENA Leading House project. https://www.hes-so.ch/la-hes-so/international/leading-house-mena/projets/detail-projet/designing-for-trust-embedding-moral-responsibility-in-generative-ai-use-in-the-classroom-1
Dr Zeenath Reza Khan is Associate Professor at the University of Wollongong in Dubai, Founding President of the ENAI WG Centre for Academic Integrity in the UAE, Lead on a Dubai Future Foundation RDI grant on Trustworthy AI, and Co-Principal Investigator of the Swiss MENA Leading House “Designing for Trust” project.