ChatGPT in the classroom
Generative artificial intelligence, of which ChatGPT is perhaps the most famous example (although there are many others), has, in the few short months it’s been widely accessible, completely transformed the way we understand the potential of technology in our lives. It has thrown into stark relief the tension, in rapid technological advances, between the power for good and the power for, well, the less than ideal.
There have also been suggestions that generative AI challenges those areas of creation that, some claim, are central to defining our humanity: the creation of art, music and literature, for example. Algorithms can now do all of these things, increasingly in ways that can’t easily be distinguished from original human endeavour.
So, what does this mean for education?
Teaching is one of the oldest of human activities, and while it has evolved over time, many of its current practices (e.g., classroom-based, one-to-many, teacher-led learning) date to the Industrial Revolution. Much has been written in recent years about the potential for digital tools to modernise teaching and learning, but as yet, we haven’t seen whole-scale digital disruption and transformation in our teaching institutions. Will generative AI be the catalyst? And in the context of education, is generative AI a power for good or for ill?
Much of the public commentary on generative AI in classrooms has focused on its perceived risks, chiefly the concern that these tools will facilitate academic cheating and so undermine academic integrity.
This could mean that students fail to develop important skills like primary research and the construction of new arguments and original points of view, and it could even impede their ability to create original content. All of these concerns are valid, and there is no perfect control for them. However, tools are already being developed to help mitigate these risks.
For example, new software tools can help teachers determine the likelihood that an assignment was created using AI, much as existing software already tests for plagiarism. The concerns also seem to rest on the assumption that teachers are unable to distinguish between original and machine endeavour, and unable to develop pedagogical methods either to assess students’ competencies in “real time” or to teach students how to engage with AI in ways that enrich, rather than diminish, their learning experiences.
AI is already in our schools and universities
The philosophical debate about whether it should be will continue to rage in the background, but the fact is, it’s there, and it’s doing some good work.
Teachers are already using it to create lesson plans, develop rubrics, generate quizzes, compile resource lists and suggest personalised learning support for students. In this way it removes administrative and repetitive tasks from their workload and gives them more time to prepare and teach. And rather than letting the technology detract from students’ abilities to reflect and think critically, some teachers are inviting it into their classrooms: asking students, for instance, to generate AI answers to assignment questions, with the assignment itself being to critique what the generative AI engine has produced. In some instances, students might even draw on the new and emerging skill of “prompt engineering” to improve the outputs of the original generative AI query.
Teachers are themselves being creative and adaptive in their use of these tools. A useful earlier comparison is the arrival of word processors and personal computers: new learning happened quickly, simply because students had to learn to use a computer to complete work they had previously handwritten or typed.
Unlike those earlier classroom technologies, generative AI can create far more inclusive and personalised learning experiences for all students by providing real-time language translation, voice-to-text conversion and personalised lesson support. This helps improve equitable access to both the learning and the social elements of a rich education experience.
Our students, therefore, need to be taught how to use generative AI, and to appreciate its limitations along with its potential. For example, the quality of any response from an AI will only be as good as the question (or prompt) posed to it. Just as “garbage” data produces “garbage” results, poor questions deliver poor answers.
To that end, it becomes very important to teach students how to craft prompts that elicit the most relevant answers, and then to critically assess those answers against other data sets. Accordingly, “prompt engineering” is one of the competencies with which we should be equipping our learners, so that they develop mastery of the digital world they engage with, rather than being naïve players in it.
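To make that concrete, here is a minimal sketch of the difference a well-crafted prompt can make, assuming access to a chat model via the OpenAI Python library; the model name, the tutoring scenario and both prompts are illustrative assumptions of mine, not examples drawn from any classroom:

```python
# A minimal sketch of prompt engineering: the same question asked
# vaguely, then with role, audience, length and sourcing constraints.
# The model name and both prompts are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

vague_prompt = "Tell me about the French Revolution."

engineered_prompt = (
    "You are a history tutor for 14-year-olds. In under 200 words, "
    "explain two economic causes of the French Revolution, and for "
    "each cause name one primary source a student could consult to "
    "verify the claim."
)

for prompt in (vague_prompt, engineered_prompt):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model would do
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
    print("---")
```

The engineered prompt constrains role, audience, length and verifiability, which is precisely what gives students a concrete output they can then check against other sources.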
Correspondingly, education about how to engage with AI must include teaching about the limitations of historical data sets: the “smoothing” of statistical outliers in populations, and the unconscious (or perhaps conscious) biases of those who created the data. As algorithms “learn” from historical data sets, they assimilate those biases and stereotypes, and knowing this is crucial for anyone wanting to rely on generative AI.
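As a toy illustration of how that assimilation happens (my own synthetic sketch, not a study cited in this article), a simple classifier trained on hypothetical “historical” decisions that favoured one group will reproduce that preference even though the two groups are equally skilled:

```python
# Toy illustration: a classifier trained on biased historical
# decisions assimilates the bias. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)      # 0 = group A, 1 = group B
skill = rng.normal(0.0, 1.0, n)    # identical skill distributions

# Historical outcomes depended on skill AND on group membership:
# group A received a hidden +1.0 advantage in past decisions.
outcome = (skill + np.where(group == 0, 1.0, 0.0)
           + rng.normal(0.0, 0.5, n)) > 0.5

X = np.column_stack([group, skill])
model = LogisticRegression().fit(X, outcome)

# The learned weight on `group` is strongly negative: the model has
# absorbed the historical preference for group A, despite equal skill.
print("weight on group membership:", round(model.coef_[0][0], 2))
print("weight on skill:           ", round(model.coef_[0][1], 2))
```

Nothing in the training step is told to discriminate; the bias arrives entirely through the historical labels, which is exactly why students need to interrogate where a model’s data came from.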
The University of Sydney, in fact, believes that helping students learn to use and understand generative AI will help them become ethical leaders in the digital age, and that the university has a responsibility to teach students both the strengths and the limitations of AI so that they can harness it wisely and insightfully. Its approaches include deliberately engaging with AI as a research partner or lab assistant, asking it to analyse texts, and using the tool as the basis for existential discussions about the nature of humanity. This open approach, like others being adopted by teaching institutions around the world, allows students to learn how to make AI work for them, rather than seeking to expel it from the classroom.
Further, surveys of student experience regularly reveal that some of the greatest student frustrations arise in areas where generative AI (or hybrid) solutions could create better outcomes. These frustrations include being required to use niche or unfamiliar applications with limited support; courses that have not been adapted for online learning (too many slides, not enough interaction between faculty and students); and poor hosting and ineffectual communication by instructors and professors.
I share the belief of EY Oceania AI Leader Lisa Bouari that this technology will change the world profoundly. So, given that the role of our education institutions is to improve lives and drive progress, their active engagement with generative AI is a “no brainer.” The question isn’t if they should engage, but how they do so in a way that equips students to be responsible, ethical and creative: to fully realise their own humanity with the tools at their disposal.
About the author: Catherine Friday is EY Global Education Leader. The views reflected in this article are the views of the author and do not necessarily reflect the views of the global EY organisation or its member firms.