All entries for June 2023
June 26, 2023
Using AI for Formative Feedback: Current Challenges, Reflections, and Future Investigation
By Matthew Voice, Applied Linguistics at the University of Warwick
One strand of the WIHEA’s working group for AI in education has focused on the role of AI in formative feedback. As part of this strand, I have been experimenting with feeding my own writing to a range of generative AI tools (ChatGPT, Google Bard, and Microsoft Bing) to learn more about the sorts of feedback they provide.
The accompanying presentation documents my observations during this process. Some issues, such as the propensity of AI to ‘hallucinate’ sources, are well-documented concerns with current models. As discourse on student use of AI begins to make its way into the classroom, these challenges might provide a basis for critical discussion around the accuracy and quality of the feedback produced by language models, and the need for students to review any outputs produced by LLMs.
Other common issues present different challenges for students using LLMs to elicit formative feedback. For instance, the prompt protocol in the presentation revealed a tendency for AI to provide contradictory advice when its suggestions are queried, leading to a confusing stance on whether or not an issue raised actually constitutes a point for improvement within the source text. When tasked with rewriting prompt material for improvement, LLMs consistently misconstrued (and therefore omitted) some of the nuances of my original review, changing key elements of the original argumentation without acknowledgement. The potential challenges for student users which arise from these tendencies are discussed in more detail in the presentation’s notes.
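By way of illustration, a scripted version of this kind of prompt protocol (elicit feedback, query one of the suggestions, then ask for a rewrite) might look like the minimal sketch below. This is not the procedure used in the presentation, which worked through the public chat interfaces of ChatGPT, Bard, and Bing; the OpenAI Python client, the placeholder model name, and the example prompts are assumptions for illustration only.

```python
# Minimal sketch of a three-turn feedback protocol (assumptions: the openai
# Python package v1+, an API key in the OPENAI_API_KEY environment variable,
# and a placeholder model name).
from openai import OpenAI

client = OpenAI()          # reads OPENAI_API_KEY from the environment
MODEL = "gpt-4o-mini"      # placeholder; any chat-capable model would do

def ask(history, prompt):
    """Send one turn and keep the running conversation, so that later turns
    can query or challenge earlier suggestions, as described above."""
    history.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(model=MODEL, messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

draft = open("draft.txt").read()   # the writer's own text, used as prompt material
history = [{"role": "system",
            "content": "You give formative feedback on academic writing."}]

feedback = ask(history, f"Give formative feedback on this draft:\n\n{draft}")
challenge = ask(history, "Pick one of your suggestions and explain whether it "
                         "really is a weakness of the draft, or not.")
rewrite = ask(history, "Now rewrite the draft to address your feedback, "
                       "without changing its argument.")

print(feedback, challenge, rewrite, sep="\n\n---\n\n")
```

Comparing the ‘challenge’ and ‘rewrite’ turns across runs and models is where the contradictory advice and unacknowledged changes described above tend to surface.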
In addition to giving some indication of the potential role of LLMs in formative feedback, this task has also prompted me to reflect on the way I approach and understand generative AI as an educator. Going forward, I want to suggest two points of reflection for future tasks used to generate and model LLM output in pedagogical contexts. Firstly: is the task a reasonable one? Using LLMs ethically required using my own writing as the basis for prompt material, but my choice to use published work meant that the text in question had already been redrafted and edited to a publishable standard. What improvements were the LLMs supposed to find at this point? In future, I would be interested to try eliciting LLM feedback on work in progress as a point of comparison.
Secondly, is the task realistic, i.e. does it accurately reflect the way students use and engage with AI independently? The review in my presentation, for example, presupposes that the process of prompting an LLM for improvements to pre-written text is comparable to student use of these programmes. But how accurate is this assumption? In the Department of Applied Linguistics, our in-progress Univoice project sees student researchers interviewing their peers about their academic process. Data from this project might provide clearer insight into the ways students employ AI in their learning and writing, providing a stronger basis for future critical investigation of the strengths and limitations in AI’s capacity as a tool for feedback.
This is blog 14 in our diverse assessment series; the two most recent blogs can be found here:
- Assessments: Capturing Lived Experience and Shaping the Future
- Building knowledge on the pedagogy of using generative AI in the classroom and in assessments
June 22, 2023
Rethinking authentic assessment: work, well-being, and society by Jan McArthur
In this 2022 paper, Jan McArthur builds on “existing work on authentic assessment to develop a more holistic and richer concept that will be more beneficial to individual students and to the larger society of which they are part.” McArthur presents three key principles to help us rethink and broaden the concept of authentic assessment: 1) From real world/world of work to society; 2) From task performance to why we value the task; 3) From the status‑quo of real‑world/world of work to transforming society. If you are short on time, you might want to jump straight to page 8 where discussion of these three principles begins:
https://link.springer.com/article/10.1007/s10734-022-00822-y
June 19, 2023
Assessments: Capturing Lived Experience and Shaping the Future
Reflection on Project Outputs by Molly Fowler
This WIHEA-funded co-creation project aimed to capture and explore student and staff perspectives on diverse assessment. Neither group was able to clearly define a diverse assessment strategy, but interestingly their feelings about assessment, and their ideas about how it could be improved, were very similar. Students expressed a desire for greater choice, flexibility and equitable access to assessments. Equitable access encompasses a wide range of complex personal needs, including language requirements, disability, neurodiversity, caring responsibilities, and the need to work alongside studies. Staff echoed many of the same concepts but framed their ideas around pedagogical models. There was a strong emphasis on learning from assessments on both sides and a widespread longing for a culture shift towards designing assessments that model a fair and fulfilling education. Student co-creation was seen as a necessary tool to expedite the shift towards embedding assessments as part of the learning journey.
I am a final year student on the Health and Medical Sciences BSc programme. My role as a student co-creator in this research project was to collect and analyse data from students and staff pertaining to their beliefs around assessment. In the analysis stage of the project, I mainly focused on collating and summarising the student data. I am new to conducting primary research and I have thoroughly appreciated this experience. I enjoyed the challenge of leading interviews and focus groups and deciding when to explore a statement further or manoeuvre back to the set questions. Gaining first-hand insight into the research process has augmented my ability to understand and extract key information from research papers, which will be a life-long skill, and was particularly useful when I was conducting a systematic review for my dissertation. It has been very satisfying to observe my own personal development in this way.
This project has made me aware of my privilege in assessments as a neurotypical English speaker. I have been exposed to a range of different perspectives on assessment and I hope to be better equipped to identify problems and support those around me. For example, I was surprised to learn that international students feel more disadvantaged by multiple choice exams than essays, as MCQs often require a nuanced understanding of language and grammar. Similarly, I have always taken a pragmatic approach to assessments and centred my learning around them. I had not previously considered assessments as part of the learning journey or as a learning exercise. As I move into the next phase of my own education, I will try to extend my learning beyond assessments to gain knowledge that I can use in my profession. Undertaking this project has been an enriching experience as a student and as an individual. It has shaped my approach to my assessments, and I have become more aware of the complex needs of others who are completing the same assessment. Students and staff are calling for the same changes to assessment methodology, which can only be implemented if the University takes a holistic approach to restructuring assessments with students contributing to the process.
I look forward to bringing my knowledge from this assignment into my next research project. This is the 13th blog in our diverse assessment series. Previous blogs can be found here:
Blog 1: Launch of the learning circle (Isabel Fischer & Leda Mirbahai): https://blogs.warwick.ac.uk/wjett/entry/interested_in_diverse/
Blog 2: Creative projects and the ‘state of play’ in diverse assessments (Lewis Beer): https://blogs.warwick.ac.uk/wjett/entry/creative_projects_and/
Blog 3: Student experience of assessments (Molly Fowler): https://blogs.warwick.ac.uk/wjett/entry/a_student_perspective/
Blog 4: Assessment Strategy – one year after starting the learning circle (Isabel Fischer & Leda Mirbahai): https://blogs.warwick.ac.uk/wjett/entry/one_year_on/
Blog 5: Learnings and suggestions based on implementing diverse assessments in the foundation year at Warwick (Lucy Ryland): https://blogs.warwick.ac.uk/wjett/entry/learnings_suggestions_based/
Blog 6: How inclusive is your assessment strategy? (Leda Mirbahai): https://blogs.warwick.ac.uk/wjett/entry/blog_6_how/
Blog 7: Democratising the feedback process (Linda Enow): https://blogs.warwick.ac.uk/wjett/entry/democratising_the_feedback/
Blog 8: AI for Good: Evaluating and Shaping Opportunities of AI in Education (Isabel Fischer, Leda Mirbahai & David Buxton): https://blogs.warwick.ac.uk/wjett/entry/ai_for_good/
Blog 9: On ‘Opportunities of AI in Higher Education’ by DALL.E and ChatGPT (Isabel Fischer): https://blogs.warwick.ac.uk/wjett/entry/on_opportunities_of/
Blog 10: Pedagogic paradigm 4.0: bringing students, educators and AI together (Isabel Fischer): https://www.timeshighereducation.com/campus/pedagogic-paradigm-40-bringing-students-educators-and-ai-together
Blog 11: Ethically deploying AI in education: An update from the University of Warwick’s open community of practice (Isabel Fischer, Leda Mirbahai, Lewis Beer, David Buxton, Sam Grierson, Lee Griffin, and Neha Gupta): https://www.open.ac.uk/scholarship-and-innovation/scilab/ethically-deploying-ai-education
Blog 12: Building knowledge on the pedagogy of using generative AI in the classroom and in assessments (Isabel Fischer and Matt Lucas): https://blogs.warwick.ac.uk/wjett/entry/building_knowledge_on/
Join the Diverse Assessment Learning Circle: If you would like to join the learning circle please contact the co-leads: Leda Mirbahai, Warwick Medical School (WMS) (Leda.Mirbahai@warwick.ac.uk) and Isabel Fischer, Warwick Business School (WBS) (Isabel.Fischer@wbs.ac.uk). This LC is open to non-WIHEA members.
June 12, 2023
Building knowledge on the pedagogy of using generative AI in the classroom and in assessments
By Matt Lucas and Isabel Fischer (WBS)
Matt Lucas is a Senior Product Manager at IBM, and Isabel Fischer is an Associate Professor (Reader) of Information Systems at WBS (Warwick Business School). Isabel also co-convenes an IATL (Institute for Advanced Teaching and Learning) module. This blog represents their own opinions and not those of their employers.
After two terms of including generative AI (GenAI) in my teaching and assessments, I am still building my knowledge and understanding of the pedagogy of using GenAI. Students seem to like the entertainment of playing around with music and art tools (e.g. DALL.E 2 and Midjourney), creating images and memes, all of which are user-friendly on big screens and for huddling around one laptop as part of teamwork. Text outputs seem less intuitive for ‘collective use’: there does not yet seem to be an app that allows for hands-on collaborative refinement of prompts (e.g. similar to students working on the same Google Doc). And displaying a string of words on a shared screen clearly does not have the same entertainment value for students as ‘customers and consumers’.
In addition to this lack of entertainment value, I also found that students seem to appreciate word-based GenAI (e.g. ChatGPT and Bard) as ‘their secret tool’, there at their disposal. They appreciate it when lecturers show them exact prompts they can copy to make the most of ‘their secret tool’. They seem less keen on having to be transparent about using the tool themselves and having to justify and critically reflect on that usage. Not only does this mean additional work; more importantly, they dislike the thought of the tool’s hidden power being exposed. They appear even less keen for lecturers to use GenAI for lesson preparation and to be transparent about it, because otherwise, what is the ‘perceived added value’ of attending the lecture if they could simply have consulted GenAI themselves?
With this in mind, what are the skills that students can learn from using GenAI in the classroom and in assessments?
In the attached blog, Matt Lucas and I suggest that, by including innovative aspects in assessments, students can learn and practise four skills that are relevant for their future careers in a world disrupted by AI:
- Cognitive flexibility, abstraction and simplification
- Curiosity, including prompt engineering
- Personalisation, reflection and empathising to adapt to different audiences
- Critical evaluation of AI
For each of the four skills, the attached blog explains the relevance for student learning, with some illustrative examples, before outlining how we incorporated the four skills into students’ assessments in the most recent term.
June 05, 2023
Developing compassionate academic leadership: the practice of kindness
This two-page opinion piece argues “that universities are, or ought to be, ‘caregiving organisations’ that promote the practice of compassionate pedagogy, because of their role and primary task of helping students to learn.” However, “neoliberal ideology and higher education policy models … are seemingly at odds with the values and practice of compassionate pedagogy.” What do you think? How can we help to develop an approach to academic leadership that is underpinned by the practice of kindness?
https://jpaap.ac.uk/JPAAP/article/view/375/498