Artificial Intelligence Is Not Just a Tool: Epistemic Interoperability and Sympoiesis
By Jack Tsao, Danielle H. Heinrichs, and Michael Camit
Proponents often promise that AI will revolutionize education by delivering epistemic interoperability—the ability to access and translate knowledge across boundaries. This study challenges the assumption that Large Language Models (LLMs) are neutral, universal translators. It proposes instead a “sympoietic” approach, in which humans and machines “make-with” one another as they navigate the complex realities of cross-cultural health translation.
About the Study
This participatory action research examines how university students in Hong Kong and Australia collaborated with LLMs to translate medical materials about neurofibromatosis for ethnic minority communities. The target languages included Chinese (Mandarin and Cantonese), Indonesian, and Russian.
Drawing on Donna Haraway’s concept of sympoiesis (making-with), the research moves beyond the hype of automated efficiency to investigate the “frictions” that arise when AI meets the nuances of real-world culture. Through surveys, focus groups, and AI-enabled photovoice reflections, the study documents how students acted as caregivers rather than mere translators: they had to “train,” “feed,” and “guide” the AI to produce culturally safe information.
Key Findings and Contributions
- LLMs demonstrated a “quasi-autonomous momentum,” often ignoring specific prompts or hallucinating content. Students had to intervene constantly to stop the AI from reverting to English-centric or literal translations.
- The AI showed significant gaps in “situated knowledge.” For example, it translated “little ones” (children) into a culturally insulting term in Russian, and rendered “investigation” in a criminal rather than a medical register in Indonesian.
- Students conceptualised their relationship with AI through metaphors of care, such as “training a robot dog” or “feeding a toddler.” This shift in perspective—from using a tool to nurturing a partner—was essential for overcoming the system’s limitations.
- Trust in AI-generated health materials was not inherent; it had to be constructed. Students acted as the “human-in-the-loop,” applying ethical care to ensure that translations did not stigmatise patients (e.g., correcting “disabled” to “children with disabilities”).
Implications for Education and Theory
- We must move beyond the idea of AI as a simple automation tool. Effective use requires “sympoietic” practices where human context and machine processing power are woven together.
- Education systems must teach students not just how to prompt, but how to recognise the digital knots and biases embedded in LLMs. Biliteracy and cultural sensitivity are prerequisites for effective AI collaboration.
- Rather than viewing human intervention as a failure of the technology, it should be viewed as a necessary component of knowledge production. Policy frameworks should value the “care work” involved in validating and refining AI outputs.
- To prevent the erasure of minority dialects and cultural nuances, users must actively resist the “flattening” effect of English-dominant training data through iterative, critical engagement.
Publication Details
Tsao, J., Heinrichs, D. H., & Camit, M. (2025). Artificial intelligence and epistemic interoperability: towards a sympoietic approach. Discourse: Studies in the Cultural Politics of Education.
https://doi.org/10.1080/01596306.2025.2579702