Towards a technical pedagogy
A coalition of educators believes that ChatGPT will kill the essay. But should we really fear the algorithms used in large language models? Anthropology holds generative potential for re-evaluating teaching practices that attend to the use of emergent technologies in the classroom.
Instagram, TikTok, and Mastodon posts highlight how students could use AI technologies to bypass complex study assignments: simply enter a prompt in a digital console and algorithms will cook up an ingenious answer for you. Or, put differently, paste in a course assignment and you’ll get a supposedly well-developed treatise within seconds. Caught by surprise, a range of educators are advocating, or even implementing, bans on such tools, and on ChatGPT in particular.
Since the advent of new media technologies, accessibility tools and artificial intelligence have become contentious among educators and education scholars. Part of the discourse revolves around the effectiveness of such tools in higher education; another part questions whether students should be allowed to use them at all. Spelling checkers in word processors? Yes, because they boost the readability of a text. Speech-to-text software? Yes, as it contributes to accessibility in education. Cloud-based typing assistants and proofreading tools? Eh, maybe not?
Anthropology offers tools and frameworks to scrutinise why AI might be seen as an emancipatory dream by some and a destructive force by others. Attending to the situated, diverse, and contradictory attributes of emergent technologies might engender more critical and contextual considerations.
Critical usership
Plagiarism prevention services are designed around the notion that students must submit original work. Underlying such services are arguments of fairness and originality: obtaining an academic degree means that you – as an individual – have achieved a range of discipline-specific learning goals. What scares some educators is the possibility that plagiarism prevention services won’t be able to detect whether students have used advanced algorithmic tools to display their knowledge and skills. As a result, it might be more challenging to assess whether a student has reached these learning goals by themselves. Or, more precisely, whether they’ve done so by only using tools that are considered legitimate.
A ban or a total overhaul of our assessment strategies sounds like a quick fix to a potentially massive problem. I’ve heard suggestions that we should return to in-class exams, to be completed with pens and pencils in an offline environment.
This might aid exam boards, but what other implications might such an approach have? At the very least, it echoes digital anthropologist Daniel Miller, who writes that “there is a constant tendency to simplify and romanticise the pre-digital world”, precisely “because the appraisal of new technologies is generally moralistic”.
At Leiden, anthropology students study “the everyday practices of individuals and groups around the world in relation to the complex global challenges of diversity, sustainability, and digitalisation.” We support students in their academic endeavour to engage with socio-cultural and political issues, particularly addressing intersecting injustices. Across the globe there is a call to attend more radically and reflexively to the societal issues addressed in anthropology curricula. This calls for critical usership of digital tools. For if educators categorically exclude AI-driven technologies from the curriculum, how can we support students in developing the capacity to critically engage with the processes of digitalisation permeating all socio-cultural phenomena?
Towards a technical pedagogy
The transnational research network Pirate Care offers a framework that engages with the socio-cultural assumptions of digital tools, including the user worlds that students and educators are, or might become, part of. The network proposes a technical pedagogy as an engaged mode of learning. Whereas the interfaces of many digital tools appear simple and user-friendly, “these simplified interaction patterns occlude the complexity of computing infrastructures and the intricacy of the social behaviors they generate”.
Put differently, our increasingly mediated realities do not necessarily lead to technological literacy. Not being able to orient ourselves in, and make sense of, AI-induced environments risks amplifying our dependency on big tech. Stronger still, for Pirate Care, “the introduction of ubiquitous digital technology coincides with a loss of autonomy that takes on the contours of an experience of deskilling”. The more specialised our knowledge becomes, the more difficult it will be to gauge the complexity and diversity of AI technologies in the worlds we inhabit.
There is a real possibility that extensive bans on large language models and other AI technologies haphazardly lead to technical deskilling, rather than the hoped-for empowerment of students. The Pirate Care network suggests that “we need to get to know the tools we use better” (p. 130): what is needed is a critical pedagogy of technical enskilment and critical usership.
Enskilment as pedagogical value
This is not to say that we must embrace all things artificial. But there’s more to large language models such as ChatGPT than “risks” and “opportunities”. Not all things digital or computational are binary in form or meaning.
By using algorithmic models as a resource (alongside traditional publications), we can invite students to scrutinise the cultural biases built into these models. By designing assignments that engage emerging technologies, we can acquire the skills needed to assess in what ways algorithms can be oppressive. By introducing AI technologies into courses that are historically ‘non-digital’, students and educators can collectively explore the pervasive entanglements between human and nonhuman technologies. In short, we might engender pedagogical approaches that – paraphrasing the Precarious Workers Brigade – “assist the project of radically amending a world deeply mired in social and ecological crises”.
As emergent technologies reproduce (or challenge) existing axes of marginalisation and oppression, their significance for the field of anthropology is indisputable. Rather than prescribing what a ‘good’ or ‘successful’ technological tool for education is or can be, educators can facilitate learning spaces that help us gauge what an increasingly digitalised society might look like. In so doing, students and teachers alike might practise critical usership or design alternative models, rather than ban consumption or perform moralistic acts of policing. Through technical enskilment we can try to unpack the socio-cultural assumptions and multiple meanings of technologies that are or will be.