GenAI:N3

Dr Hazel Farrell

Introduction

Since the launch of ChatGPT in November 2022 and the subsequent surge in Generative Artificial Intelligence (Gen AI) technology, the education sector has been significantly impacted, prompting efforts to develop policies, strategies, and guidelines to support staff and students in navigating the changing landscape. While these technologies offer great potential for enhancing learning experiences, they also pose significant challenges to academic integrity. Traditional assessment methods, such as essays, unsupervised open-book or remote exams, and online quizzes, are increasingly vulnerable, as students can use AI tools to produce content that appears original but is not their own work. Although a variety of AI detection tools have been developed, their accuracy remains questionable and reliance on them is not recommended. The implication is that the focus needs to shift from detection to prevention, or as Cath Ellis espouses, from detecting cheating to detecting learning. This presents an urgent need for higher education institutions to reconsider their assessment strategies in order to uphold academic standards and ensure that assessments accurately reflect students' knowledge and skills.

Objectives

The goal of assessment redesign is to develop robust, fair, valid, and effective methods that can withstand the potential misuse of AI tools, while also providing students with the opportunity to demonstrate their learning meaningfully. By incorporating a variety of assessment types, balancing formative and summative as well as high- and low-stakes assessments, and emphasising process and understanding over the final product, educators can create a more reliable and integrity-focused assessment environment. In practice, this can be challenging for a wide variety of reasons, including time constraints and large class sizes. However, consideration of which assessments are appropriate for Gen AI usage is also necessary, and - ultimately - the alignment of assessments with programme and module learning outcomes remains the key guiding principle in ensuring that learners have achieved the requisite knowledge, skill and competence.

Scope

Reconsidering the Purpose of Assessment

In an AI-enhanced environment, reconsidering the purpose of assessment becomes imperative to foster a more meaningful and authentic learning experience. Traditional assessments often emphasise rote memorisation and the reproduction of knowledge, which are increasingly susceptible to manipulation through AI tools. Instead, the focus should shift towards assessing higher-order thinking skills, such as critical analysis, creativity, problem-solving, and the ability to synthesise and apply knowledge in novel contexts, as illustrated below in the Revised Bloom's Taxonomy developed by Lorin Anderson and David Krathwohl (2001).

[Figure: Revised Bloom's Taxonomy (Anderson and Krathwohl, 2001)]

By prioritising these competencies, assessments can better reflect real-world applications and prepare students for the complexities of the modern workforce. This shift also encourages deeper engagement with the material, promoting a learning environment where students are evaluated not just on what they know, but on how they think and adapt. Consequently, the redefined purpose of assessment should aim to cultivate lifelong learners equipped with the skills to navigate and innovate in an AI-driven world.

The types of tasks associated with the different levels of Bloom's Taxonomy are detailed below; while they were not specifically intended for assessment redesign in an AI context, their application for this purpose is entirely relevant.

[Figure: Types of tasks associated with each level of Bloom's Taxonomy]