27 January 2023

Humanities and social science educators must embrace ChatGPT (for now). Here’s why

It is entirely possible that November's release of ChatGPT by California company OpenAI will be remembered as a turning point in introducing a new wave of artificial intelligence to the wider public.

Opened up for mainstream use in late November 2022, and accessible (mostly) free of charge on its website and through other interfaces, the tool has already sparked a huge debate in the research and higher education space.

The application draws on areas of artificial intelligence (AI) including natural language processing (NLP) and soft computing. It performs incredible feats of analysis in response to a wide array of prompts.

It can, for example, give a reasonably good response to prompts to write articles or sort complicated data within a matter of seconds. This has caught the attention of academics and made it the latest perceived bête noire for lecturers across faculties and universities.

In the higher education sector, the take-home exam format gained increased use during Covid-19 lockdowns and has been retained in many institutions. In the wake of ChatGPT, the knee-jerk response has been to argue for a reversion, across the board, to the traditional exam, written in person and under invigilation.

Yet the exam is only the culmination of the learning done in a module. Part of the educational process in the humanities and social sciences is teaching students to conduct research and write original essays. 

On this front, many are worried about what they see as a diminished ability to verify the authorship of essay submissions. The essay is the cornerstone of a humanities and social sciences education and the primary tool by which students’ understanding, application and synthesis of complex concepts are put to the test.

“How sure can we be that a student really wrote the submission they have made, and that it was not produced by this cutting-edge AI?” seems to be the key concern.

As with all major innovations, two broad camps have emerged. One sees the AI tool as a threat to education; the other regards it as a welcome development for the sector.

I fall within the latter camp. ChatGPT is a good development, its arrival is irreversible, and its negative aspects can be contained. My brief “experiments” with the tool have left me unworried about this piece of AI, at least in its current rendition.

Two core reasons are at the root of my optimism. First, and most importantly, the tool has limitations, in part thanks to the complexity of people as individuals and of society as a whole. Second, and stemming from the first reason, there is no substitute for intellectual engagement. Let us explore each of these reasons in turn.

First, in my experience, ChatGPT can only give back so many answers to the same basic question. Educators can run their questions through the system and map out the range of answers it is likely to give. This is an engaging, and indeed possibly enjoyable, undertaking.

In essence, it allows the lecturer to see as many iterations as possible of potential answers to assessment questions before students submit. The lecturer can then run these answers through their university-mandated plagiarism detection software, Turnitin being the most widely used.

This mitigates the most common concerns about ChatGPT: that, because the tool is not connected to the internet in real time, its output appears in no source that detection software can match against, and that Turnitin cannot detect plagiarism from ChatGPT because almost every answer it gives is unique, the AI being based on soft computing.

Soft computing in this case refers to the capability of an AI system to “understand” language flexibly, in much the way a human would. In practice, it means that ChatGPT can still give sensible responses to prompts containing typos that would elicit an error message from a conventional, rigidly rule-based program.

Human nature will also kick in. Students, like all humans, are natural game theorists. That is, they are capable of weighing not only their own actions but also those of others against their own best interests. Given the task of writing papers on a limited range of topics, many, insofar as they are rational agents, will refrain from relying on the tool to write their work, aware that their classmates would be using the same software and producing similar answers.

Lecturers must design their curricula and assessments so that the assessments test principles and techniques, not only content. They must, as seasoned experts, “get ahead” of their students. This can only help us refine our tools and assumptions about our disciplines and what we teach.

The second reason for my optimism is that AI has limitations which incentivise engagement. While it can do impressive things, such as sorting, arranging and opining on complex topics at high speed, AI generally gives short and extremely superficial answers to questions, even to classic social science prompts such as “Write me an essay on Theory X and how it can be applied to XYZ, using historical and contemporary examples.”

Faced with such a brief, generic and inadequate response (which, as noted above, the lecturer could already have generated and logged with a plagiarism detector), the would-be dishonest student is left with little choice but to probe further, giving further prompts and requesting elaborations.

Through this futile exercise, they are likely to end up doing what they ought to have been doing to begin with: conducting honest and deep research. In addition, the tool cannot cite sources within the text and has no access to works published since 2021. This forces the student to engage with the literature and identify relevant sources for themselves.

ChatGPT is also only an average student at best. In an article, two psychology professors at Temple University in the United States describe how, in response to a question posed to it from an honours-level class in their department, the AI “cannot distinguish [a] ‘classic’ article in a field that must be cited from any other article that reviews the same content. The bot also tends to keep referencing the same sources over and over again.”

My own experiments with ChatGPT have yielded numerous self-contradictions from the AI when the questions have concerned specialised facts that are nonetheless common knowledge among specialists.

This means it cannot be used, at least (ironically) not without considerable innovation, to produce dissertations and PhD theses at the postgraduate level. Well-trained academics would be able to detect its surface-level takes, and many other safeguards exist at that level of education.

Rather than regressing and retreating to the familiar, academics should embrace this new source of change and showcase dynamism in how they approach teaching and learning. To do otherwise would be to ill-prepare our students for the complex and ever-shifting world that lies ahead of them and thus render the sector increasingly archaic and irrelevant. 

Indeed, as the philosopher Heraclitus observed more than 2 500 years ago, change is the only constant.

Bhaso Ndzendze is head and associate professor at the University of Johannesburg’s department of politics and international relations, as well as its 4IR and Digital Policy Research Unit. His books include Artificial Intelligence and International Relations Theories (2023) and Artificial Intelligence and Emerging Technologies in International Relations (2021).

The views expressed are those of the author and do not necessarily reflect the official policy or position of the Mail & Guardian.