AI Policy in Higher Education: Navigating the New Academic Landscape

AI policy in higher education has reached a critical juncture. As universities worldwide scramble to adapt to generative AI, the gap between written rules and lived reality is widening. This page explores the findings of a recent study, Trajectories of AI policy in higher education: Interpretations, discourses, and enactments of students and teachers, published in Computers & Education: Artificial Intelligence.

The Reality of AI Policy in Higher Education

A critical challenge facing universities is how AI policy in higher education is actually experienced by those on the ground. Through interviews with 58 students and 12 teachers at the University of Hong Kong, this research reveals the messy reality beneath institutional guidelines.

The study applies Stephen Ball’s policy trajectory framework to show that policies aren’t simply implemented; they are actively interpreted, resisted, and remade through daily practice. This creates a complex web of ethical dilemmas regarding academic integrity, equity, and the nature of learning itself.

Key Findings: The Impact of Decentralized Policies

The research highlights several critical areas where current approaches to AI policy in higher education are struggling:

  1. Interpretive Burden: A decentralized approach, while intended to foster innovation, shifted an enormous burden onto teachers. Without clear guidance, faculty struggled to determine fair use and penalties.
  2. Student Anxiety: Students reported anxiety about “crossing invisible lines.” What counted as acceptable grammar help in one class was considered cheating in another due to inconsistent standards.
  3. Access Disparities: A major concern emerged regarding equity. Students who could afford premium AI tools gained significant advantages, while detection tools like Turnitin often created false positives that disproportionately affected non-native English speakers.
  4. Cognitive Dependence: Both students and teachers observed patterns where students “robotically” followed AI outputs without understanding the underlying concepts.


Implications for Policy and Practice

To improve AI policy in higher education, institutions must move beyond simple detection and surveillance.

Move Beyond Detection

Rather than escalating surveillance through tools like Turnitin Clarity, institutions should invest in authentic assessments. Oral defences, reflective portfolios, and fieldwork make AI assistance either irrelevant or transparent. This shift requires supporting teachers with concrete examples and reducing the invisible labour of individual policy interpretation.

Establish Equity-Focused Access

Universities can address the “digital divide 2.0” by providing institutional access to a broad range of AI tools, but equity also requires strengthening human support systems, such as writing centres and peer tutoring, so that students aren’t forced to rely solely on AI.

Design for Critical Engagement

Effective AI policy in higher education should intentionally create “AI-free spaces.” Institutions need to experiment and build capacity around teaching students not just how to use AI, but when not to use it. This involves understanding the difference between productive emulation (learning from examples and AI support) and “passive” consumption.

Figure: Policy text is interpreted, discussed, and enacted within particular material conditions, shaped by varying degrees of institutional control and individual agency.

For a deeper dive into these findings, you can read the full paper here:

Trajectories of AI policy in higher education: Interpretations, discourses, and enactments of students and teachers. Computers & Education: Artificial Intelligence, 8 (2025), Article 100496. https://doi.org/10.1016/j.caeai.2025.100496