Generative AI as a Modern Socrates: Why Integrity Begins With Better Questions

02/09/2026


by Earle Abrahamson

Image credit: OpenAI. (2026). AI-generated illustration created using DALL·E via ChatGPT-5 Mini [Image]. https://openai.com/chatgpt

 

What if the greatest gift of generative AI is not that it can answer our questions but that it exposes how poor many of our questions have become?

Much discussion of generative AI centres on efficiency. Faster outputs. Quicker decisions. Apparent expertise on demand. Within an Integrity Matters context, this framing is incomplete and at times misleading. Integrity is not about speed or volume. It is about responsibility, judgement, fairness and care. When we rush to answers, we often bypass those values. Generative AI brings this problem into sharp focus by responding fluently even when our questions are poorly framed, ethically shallow, or conceptually weak.

Questions are not neutral. They carry assumptions, priorities and values. When a question is narrow, it can exclude voices. When it is rushed, it can hide harm. When it is framed only around optimisation, it can ignore consequences. Integrity in AI use therefore begins not with outputs, but with inquiry. What are we really asking? Why are we asking it? Who benefits and who might be harmed by the way the question is framed?

There is a temptation to treat questions as functional tools rather than ethical acts. We often judge them as right or wrong, efficient or inefficient. This binary mindset mirrors assessment cultures and performance metrics, rather than reflective practice. From an integrity perspective, a better question is one that expands understanding, surfaces risk and invites accountability, even if it complicates decision making.

Generative AI makes this visible because it will answer almost anything we ask. It does not pause to consider whether the question itself is fair, proportionate or responsible, unless we explicitly ask it to. This places the ethical burden firmly back on the human user. The integrity challenge is not that AI lacks values, but that it will faithfully follow the values embedded in our questions.

Used with care, generative AI can support ethical questioning, rather than undermine it. It can help identify hidden assumptions, language bias and missing perspectives. Asking it to critique a question for fairness or to surface who is excluded by a particular framing shifts the interaction from extraction to reflection. In this way AI becomes a tool for ethical self-examination, rather than a shortcut to certainty.

This is particularly important in areas such as education, research governance, journalism and public policy, where questions shape outcomes with real-world consequences. For example, asking ‘what is the most efficient solution to a problem?’ is very different from asking ‘who bears the cost of that efficiency?’. Asking ‘what policy works best?’ is not the same as asking ‘whose interests does it serve?’ and ‘whose interests does it marginalise?’. Generative AI can assist in exploring these distinctions, but only if integrity is built into the inquiry.

Prompting therefore becomes an ethical practice, not merely a technical one. A prompt reflects intent. It reveals whether the user is seeking understanding or justification, whether they are open to challenge or simply confirmation. From an integrity standpoint, the most valuable prompts are those that invite scrutiny, rather than closure: ‘What assumptions am I making?’ ‘What values are embedded here?’ ‘What might I be overlooking?’ ‘What would responsible disagreement look like?’

There is growing evidence from human-AI collaboration research that the greatest benefits arise when AI is used to reframe problems, rather than resolve them. Integrity improves when questions are tested before answers are trusted. This approach resists automation bias and supports human judgement, rather than replacing it.

This is also where ethical risk is reduced. Poorly framed questions can lead to misleading outputs, which are then treated as objective. This can reinforce bias, legitimise weak decisions and obscure accountability. Integrity requires recognising that AI outputs inherit the limitations of the questions that produce them. Responsibility cannot be delegated.

The central ethical risk of generative AI is not that it will replace human thinking, but that it will mask its absence. When users rely on AI to generate answers without interrogating the question, they weaken their own judgement. When they use it to examine assumptions, explore consequences, and challenge framing, they strengthen it.

Integrity in AI use is therefore inseparable from the integrity of questioning. A valuable question is not one that produces a neat answer, but one that improves the quality of reasoning, decision making and ethical awareness. Discomfort, uncertainty and complexity are not failures. They are signals that integrity is being taken seriously.

Generative AI is not an authority. It is a mirror. It reflects the quality of our inquiry back to us. If we see shallow answers, we should look first at the questions that produced them. In doing so, we may rediscover that ethical understanding begins not with certainty, but with the courage to ask better questions.

 


Professor Earle Abrahamson, PhD, NTF, PFHEA, FISSOTL, is Professor in the Scholarship of Teaching and Learning and Head of Anatomy for the MBBS programme at the University of Hertfordshire, UK.

 

The author's views are their own.

