How South African universities are leading the way in ethical AI practices

South African universities are adopting a human-centred, ethical, and responsible approach to the use of artificial intelligence (AI) to balance the threat of academic dishonesty with the need to produce ‘AI-literate’ graduates for a tech-driven world.

With the emergence of advanced AI tools such as ChatGPT, which allow students to prompt an AI to write an academic paper in seconds or solve a maths quiz in the blink of an eye, tertiary institutions have found themselves in a complex environment where they must draw the line on what is permissible in order to preserve academic integrity.

While most universities have AI guidelines, the North-West University (NWU) recently announced that it has become the first South African institution to adopt an official AI policy, transitioning from temporary guidelines to a formal AI Framework Policy.

Professor Anné Verhoef, director of the North-West University Artificial Intelligence Hub, said the university’s approach is human-centred, putting people first rather than technology, with the aim of using AI for the benefit of people, societies, and the environment.

“The policy makes it clear that the NWU wants to embrace AI in a human-centred, ethical, and responsible way. These three qualifications of our implementation and integration of AI in all our activities – teaching, learning, research, and management – form the core pillars of the new policy,” Verhoef said.

Using AI ethically is a continuous challenge, she said; in the context of academic integrity, it means always being honest and transparent about one’s use of AI. Using AI responsibly means using it sustainably, managing its risks, using it lawfully and critically, verifying its sources, identifying its biases, and more.

Verhoef added that the university has introduced a free online AI literacy course for students, titled ‘AI for Academic and Career Success’, which teaches students how AI systems learn from the data they are trained on, data that predominantly comes from Western countries and produces a bias towards Western knowledge and cultures.

For lecturers, she said, the university has developed a specialised AI course entitled ‘Winning the AI Assessment Game’, equipping them to judge when to prohibit the use of AI in their assessments and when and how to incorporate it effectively.

“Additionally, NWU has secured free access for all staff members to the international AI courses offered by the Digital Education Council (DEC), ensuring alignment with global standards. To further support our lecturers, we provide help, ad hoc training, workshops, and information through the NWU AI Hub, empowering them to integrate AI in accordance with our policy objectives,” she said.

On the threshold that separates editing from ghostwriting, she said the NWU permits the use of AI editing tools, in line with the policies of many scientific journals, but students must declare their use.

“Ideally, these tools should only be used to enhance English proficiency, not to rewrite content entirely. The risk lies in AI potentially misstructuring arguments or misusing subject-specific terminology. Therefore, human oversight and ownership are essential when employing these tools. If work edited by AI is submitted, the responsibility for that work remains with the individual,” Verhoef said.

She highlighted that ghostwriting occurs when AI is used to produce the entire work, and the student or researcher presents it as their own without acknowledging the use of AI. 

“This practice is unethical and dishonest. Consequently, we require students to declare that their work is their own and to clearly indicate where and how AI was utilised,” Verhoef stated.

Dr Hanelie Adendorff, a senior advisor for the Centre for Teaching and Learning at Stellenbosch University (SU), stated that the institution does not have a single fixed AI policy; instead, it has adopted an ethical position statement to support thoughtful, context-sensitive decision-making by staff and students.

She said the current approach is explicitly interim and emerged in response to practical questions and requests from students, supervisors, and examiners, rather than as a top-down enforcement mechanism.

“The current guidelines are structured around four interrelated considerations: authenticity, fairness, accountability, and transparency. These are not framed as compliance checks, but as thinking tools to help academics and students consider when, why, and how AI use may be appropriate, risky, or inappropriate in a given disciplinary and assessment context,” Adendorff stated.

She highlighted that a key principle is the protection of learning and human agency, shifting the focus from detecting AI use or enforcing tool-based checklists to fostering students’ ability to make informed decisions regarding AI use and to uphold the integrity of their work.

As part of this broader shift, she said, the university recently discontinued the use of Turnitin’s AI text detection functionality, a decision reflecting well-documented concerns internationally about the reliability and pedagogical value of automated AI detection tools, particularly in high-stakes assessment contexts. 

“A key message communicated to students is that outsourcing their learning to AI tools may offer short-term convenience but will be to their detriment in invigilated, oral, or other high-stakes summative assessments, where independent understanding and performance are required. In this sense, the redesigned approach places greater responsibility for learning on the student, which is both intentional and pedagogically appropriate,” she said.

Adendorff added that SU students have access to a short course on AI literacy offered through the Digital Education Council, and the university’s AI guidelines are shared with students and frequently workshopped by lecturers within modules.

The guidelines are currently being refined to place even stronger emphasis on learning processes, rather than solely on assessment products, Adendorff stated.

To ensure that students, especially postgraduates, are not penalised for ethical AI use, she said external examiners are given access to the university’s ethical position statement, along with guidelines and instructions for examiners, through the AI declaration form, which provides context for how AI use is approached at Stellenbosch University.

“The declaration is not intended as a compliance checklist or a tool-based disclosure exercise, but as confirmation of a reflective process that has taken place during the research journey.”

Describing the process, she said postgraduate students must complete an internal declaration form regarding their intended use of AI tools, detailing the purpose, extent, and rationale.

This initiates a discussion with their supervisor to establish a context-sensitive approach. Due to the dynamic nature of generative AI, students should maintain ongoing dialogue with their supervisor throughout their research, rather than viewing the initial decision as permanent.

She added that at submission, students provide a declaration confirming the discussions held with their supervisor and take full responsibility for the accuracy and integrity of their work, including any AI-assisted content.

Adendorff emphasised that the ethical use of AI, following supervisory guidance, is not penalised, and the responsibility for scholarly quality lies solely with the student.

The intention is not to ‘police’ students, but to empower them to engage critically and responsibly with AI, to understand the consequences of their choices, and to take full ownership of the academic work they submit, she stated. 

Professors Nyna Amin, Donrich Thaldar, and Thabo Msibi of the University of KwaZulu-Natal (UKZN) stated that the institution has adopted an approach that views AI as a transformative opportunity to be responsibly harnessed.

“The UKZN has developed principles that actively promote the use of AI and encourage ethical behaviour through education rather than surveillance. We do not require lecturers to submit ‘logs of brainstorming conversations with colleagues’ or issue certificates for the use of statistical software – why should AI be any different?” the academics stated.

The experts further said that UKZN’s AI guidelines are based on four interrelated principles: innovation, not intimidation; education instead of enforcement; transparency before surveillance; and trust, not control.

gcwalisile.khanyile@inl.co.za


