July 17, 2023
Who Uses AI, How, and When?
By Matthew Voice, Department of Applied Linguistics, University of Warwick
As I mentioned in my previous WJETT blog post, I have participated in a strand of the ‘AI in Education’ learning circle during the last few months. Over the course of our strand’s meetings on the role of AI in formative feedback, our focus has primarily been on the nature of the emerging generative AI technology. Our research and conversation have a great deal to contribute with regard to what large language models mean for the future of formative feedback. However, given that these models are tools, it is worth reflecting on the user, too.
Our learning circle represents educators and students from a wide range of degree programmes, and motivations for engaging with higher education will vary from student to student across all courses. Throughout our discussions, conversation has largely focused on the role of AI in improving formative feedback for students who are motivated by a desire for academic excellence. Given that students in this position will likely be motivated to engage with extracurricular activities within the university (e.g. joining the WIHEA as a student member, partaking in surveys of student experience or voluntary training workshops), their voices will perhaps be heard most clearly during our present conversations.
But the experiences of these students are not representative of our cohort as a whole, or indeed of all points across an individual’s university journey. Many students may view academic writing – either across their degrees, or for certain assignments or modules – as an obstacle to be overcome. In these cases, the objective of academic writing shifts away from excellence, and towards the production of work which will simply allow the student to attain a passing grade. Engagement with GenAI for these students might not mean refinement and improvement; it may simply be a resource for the fastest or most convenient means of content generation.
Integrating GenAI into teaching and formative feedback requires a recognition of this spectrum of experience and motivation. In my previous WJETT blog I recommended that future discussion and planning should consider reasonableness and realism when supporting students to use and think critically about GenAI. By this, I mean:
1) Reasonableness: Is the task being asked of the GenAI (e.g. to produce formative feedback on a draft assignment) one that it is capable of achieving?
2) Realism: Does our (educators, policy planners, researchers) understanding of engagement with GenAI reflect actual independent student use cases?
Assessing reasonableness through understanding what GenAI is capable of achieving will require ongoing review, and the development of support and training for staff in order to keep pace with developments. This, I think, has largely been the focus of the work done by the ‘AI in Education’ learning circle and the final report this group has produced. Going forward, we also need to consider how well we understand our students’ independent engagement with this sort of assistive technology. What tools and resources are they familiar with? How do they understand them? What do they do with them, and at what point(s) in their academic writing?
Grounding future policy and pedagogic resource development in a realistic model of students’ use and understanding of GenAI will be a task as complex as anticipating the impact of future technological development in large language models. By acknowledging this, and undertaking this work, we best position ourselves to ensure that the outputs and resources which arise from working groups such as ours will be meaningful to staff and students working across the university.
June 26, 2023
Using AI for Formative Feedback: Current Challenges, Reflections, and Future Investigation
By Matthew Voice, Applied Linguistics at the University of Warwick
One strand of the WIHEA’s working group for AI in education has focused on the role of AI in formative feedback. As part of this strand, I have been experimenting with feeding my own writing to a range of generative AI (ChatGPT, Google Bard, and Microsoft Bing), to learn more about the sorts of feedback they provide.
The accompanying presentation documents my observations during this process. Some issues, such as the propensity of AI to ‘hallucinate’ sources, are well-documented concerns with current models. As discourse on student use of AI begins to make its way into the classroom, these challenges might provide a basis for critical discussion around the accuracy and quality of the feedback produced by language models, and the need for students to review any outputs produced by LLMs.
Other common issues present different challenges for students using LLMs to elicit formative feedback. For instance, the prompt protocol in the presentation revealed a tendency for AI to provide contradictory advice when its suggestions are queried, leading to a confusing stance on whether or not an issue raised actually constitutes a point for improvement within the source text. When tasked with rewriting prompt material for improvement, LLMs consistently misconstrued (and therefore left absent) some of the nuances of my original review, in a fashion which changed key elements of the original argumentation without acknowledgement. The potential challenges for student users which arise from these tendencies are discussed in more detail in the presentation’s notes.
In addition to giving some indication of the potential role of LLMs in formative feedback, this task has also prompted me to reflect on the way I approach and understand generative AI as an educator. Going forward, I want to suggest two points of reflection for future tasks used to generate and model LLM output in pedagogical contexts. Firstly: is the task a reasonable one? Using LLMs ethically requires using my own writing as a basis for prompt material, but my choice to use published work means that the text in question had already been re-drafted and edited to a publishable standard. What improvements were the LLMs supposed to find, at this point? In future, I would be interested to try eliciting LLM feedback on work in progress as a point of comparison.
Secondly, is the task realistic, i.e. does it accurately reflect the way students use and engage with AI independently? The review in my presentation, for example, presupposes that the process of prompting an LLM for improvements to pre-written text is comparable to student use of these programmes. But how accurate is this assumption? In the Department of Applied Linguistics, our in-progress Univoice project sees student researchers interviewing their peers about their academic process. Data from this project might provide clearer insight into the ways students employ AI in their learning and writing, providing a stronger basis for future critical investigation of the strengths and limitations in AI’s capacity as a tool for feedback.
This is blog 14 in our diverse assessment series; the two most recent blogs can be found here:
- Assessments: Capturing Lived Experience and Shaping the Future
- Building knowledge on the pedagogy of using generative AI in the classroom and in assessments