By: Anthony Copeland
As schools become more technologically enhanced, the seemingly infinite array of new EdTech platforms and services we encounter can give the impression that we are dealing with new technology all the time. These tools might contain new ideas, but they rarely involve new technology. Dissect any EdTech product, and we will generally find some mix of creating and/or sharing video, images, text, audio and/or data to fit some learning structure. For this reason, it has always been more important for schools to understand the pedagogy crystallized in the way new tools are built than the array of technologies they use. However, once in a while, a technology does become sufficiently advanced as to be, as Arthur C. Clarke famously wrote, “indistinguishable from magic.”
With its astounding ability to curate and transform knowledge from simple written prompts, ChatGPT is a new tool that has set the internet alight with just that sense of magic. Where Google indexed and curated the internet, ChatGPT appears to have read and understood the internet, and it now sits waiting to construct any kind of information from a simple prompt. Users can ask ChatGPT to write poems, legal documents, computer code and, yes, homework. Stephen Marche’s article in The Atlantic, “The College Essay Is Dead”, sparked concern among educators about the potential for large language models like ChatGPT to produce undergraduate-level essays. To test ChatGPT’s capabilities, I fed it recent assignments from three different subjects: English, Science, and Social Studies.
The language model did very well at analyzing a short story for English. Where it missed the mark, I was able to ask it to repeat the task and explain what the last response was missing. Because ChatGPT remembers context over the course of a conversation, each iteration was better than the last, and I had an excellent response within a couple of minutes. The science homework was a little more difficult because it challenged students to analyze text, images and data to diagnose a disease. Even so, ChatGPT did exceptionally well at using clues within the text and the data (yes, you can feed it CSV data) to write a good evaluation of the disease. The hardest part for a student trying to have ChatGPT do their homework would have been the final task, which asked them to submit a short explanation of their diagnosis using Flip. However, ChatGPT did manage to produce a short script for this, and I think most of us would overlook a student reading from a script as simply poor preparation.
For the third and final challenge, I decided to use ChatGPT a little differently. In social studies, students were asked to “Compare and contrast the American and French revolutions”. This time, I read through the essay guidance and the assessment rubric first, then broke the essay down into a series of smaller questions that used the criteria and the language of the assessment documents. When the responses to these questions were stitched together, the result was a 1600-word essay better tailored to the assessment criteria than a simple “Write me an essay to compare and contrast the American and French revolutions” might have achieved.
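For the technically minded, the “smaller questions” approach can be sketched in a few lines of Python. To be clear, this is an illustrative sketch, not my exact process: `ask_model` is a stand-in for a real call to a chat model (e.g. via the OpenAI API), and the rubric criteria and prompt wording here are invented placeholders.

```python
def ask_model(prompt: str) -> str:
    """Placeholder for a large-language-model call.

    In practice this would send the prompt to a chat model; here it
    simply echoes the prompt so the sketch runs offline.
    """
    return f"[model response to: {prompt}]"


# Hypothetical criteria lifted from the assessment rubric.
criteria = [
    "causes of each revolution",
    "role of Enlightenment ideas",
    "outcomes and lasting effects",
]

topic = "the American and French revolutions"

# One focused sub-question per criterion, phrased in the rubric's language.
sections = [
    ask_model(f"Compare and contrast {topic}, focusing on the {c}.")
    for c in criteria
]

# Stitch the responses together into a single draft essay.
essay = "\n\n".join(sections)
```

The point of the structure is that each sub-prompt carries the rubric’s own vocabulary, so the stitched result is steered toward the assessment criteria rather than toward a generic essay.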
Concerns about ChatGPT’s ability to complete student work are perfectly valid. It can. Often with small mistakes or logical misconceptions, but this only makes it more like the student work it might be passing for. To make matters worse, the response from specialists in plagiarism detection software has been less than promising. “What caught me off guard was how much of a leap forward it was,” said Eric Wang, VP of AI at Turnitin, in an article for Bloomberg. He went on to add, “It’s early days, the halcyon days of the field. We’ll look back in a year or two and think about these things with wonder and recognize how primitive they were”. In that last statement, Wang has an important point. It took only five days for ChatGPT to reach its first million users, and that’s already a heck of a lot of usage data to learn from. Already I’m noticing that the AI is better at answering questions I’d asked it at the initial release. Further to this, the growing excitement around ChatGPT and generative AI can only lead to more research and development in the field. Personally, I would advise against waiting for some miraculous AI technology to protect our old ways of assessment and suggest instead that it might be time for a change.
Many have pointed out that the degree to which the education space feels threatened by generative AI shines a light on a greater problem in education today. “What does it say about what we ask students to do in school that we assume they will do whatever they can to avoid it?” asks John Warner in his article “ChatGPT Can’t Kill Anything Worth Preserving”. Warner explains that a major reason AI models find it so easy to produce student work is that, in the pursuit of standardized education, we have over-systematized written assessment to the point that any generative AI trained to recognize and imitate patterns can recreate it without breaking a pixelated sweat. Unfortunately, this deeply impersonal method of assessment also makes it difficult for students to find meaning and value in actually doing the work. “For most professors, writing represents a form of thinking. But for some students, writing is simply a product, an assemblage of words repeated back to the teacher,” explains Beth McMurtrie in her article “AI and the Future of Undergraduate Writing”. To put it simply, if students feel the work is worth doing, then the existence of a tool like ChatGPT will never be an issue. Only in a system where students have learned to value grades more than actual progress would they be tempted to outsource their work to a generative AI.
The good news is that we’ve long known how to make learning meaningful and how to move beyond assessment-driven learning. Authentic, student-centred project-based learning (PBL) has been the desired state for many educators since the early nineties, and as the millennium approached, conversations also turned towards an education that uses structures like PBL to foster ‘21st-century skills’. We’re now almost a quarter of the way through that century, and I think it’s fair to say that change hasn’t come quickly or easily. All too often, standardization wins over personalization or authenticity, and education becomes little more to students than ‘doing work’. Could tools like ChatGPT be the final nail in the coffin for uninspiring, assessment-driven education? I hope so.
Not only does PBL focus on student-centred activities that commonly involve building a portfolio of mixed media and reflective writing (two things I don’t see generative AIs achieving anytime soon), but with sufficient training on how to use AI tools, our students’ ability to tackle meaningful projects could be magnified. My advice to schools as they navigate this new technology is to start considering how we can work with these tools rather than against them. This technology is going to play a massive role in shaping our students’ future, and they deserve to learn about it.
So how might we begin using this tool with our students? In an article by The New York Times, AI researcher Jeremy Howard discusses giving his six-year-old daughter access to ChatGPT to act as a virtual tutor. In doing so, he communicated a very important message to her: “Don’t trust everything it gives you,” he said. “It can make mistakes.” He went on to explain how ChatGPT and other AIs can seem very confident about their mistakes and how we should all learn to be very careful about the information they feed us. In an article on Stratechery, this bug is turned into a feature for educators with the idea of ‘Zero Trust Homework’. This involves teaching and assessing students’ ability to verify AI responses through independent research. To borrow a line from the article:
“In the case of AI, don’t ban it for students — or anyone else for that matter; leverage it to create an educational model that starts with the assumption that content is free and the real skill is editing it into something true or beautiful; only then will it be valuable and reliable.”
Not only do activities like this shine a light on the dangers of over-trusting AI-generated information, but they also offer a unique situation in which to practice critical thinking. In his article ‘How to… Use AI to Generate Ideas’, author Ethan Mollick demonstrates how this quirk can also be used for creative thinking. Mollick highlights that whilst ChatGPT can sometimes generate bizarre conclusions, in the idea-generation stage this can, in fact, aid creativity. Unsurprisingly, we see the emergence of many ‘21st-century skills’ when exploring learning structures that complement this 21st-century technology.
In summary, ChatGPT might just be the disruption that progressive education models like project-based learning have been waiting for. My hope is that we look forward, not back, and use this disruption as a catalyst for progress. In a world of ‘intelligence on demand’, there has never been a more important time for educators to reflect on the purpose of education and the joy of learning.
If you’d like to read more about some of the ways ChatGPT might be used in education, I’ve written extensively about the tool in the upcoming book “ChatGPT for Teaching and Learning” available on Amazon. If this article has inspired you to introduce more hands-on, project-based learning into your classroom, then you can also check out my website makerlearners.com, where I’ll be curating the best hands-on projects for subject-specific learning. To keep in touch, follow me on Twitter or LinkedIn.