All 10 entries tagged AI


March 25, 2024

Collaborating with AI – Writing an assignment

Rob Liu-Preece is the Academic Technologist for PAIS, Sociology and Philosophy at the University of Warwick. He has been an IELTS marker for Writing and Speaking for 20 years and previously spent 20 years teaching Academic Skills to international students in the UK and overseas.

This is the second of two posts written by Rob about AI and the ANTF Project:

While many have expressed fears that the advent of AI may threaten future employment, others have emphasised that those able to work with AI may well be in the most secure forms of work. This assignment sets a relatively undemanding writing task on a topic of the student's own choice, which should motivate them to carry it out and sustain their interest through the stages of writing, reading an AI answer, re-drafting and then commenting on the result. I hope students will benefit from being unconstrained by academic demands as they compose a short piece of writing on a familiar topic, and that this will increase the learning take-aways they generate on completing the process. I have included an example to help students undertake the novel experience of writing with AI acting as a kind of writing assistant.

Aims of the Assignment

  • To provide a motivating topic for students to write about, review and redraft.
  • To give students practice in collaborating with AI on a piece of writing.
  • To enhance students' understanding and awareness of significant features of their own writing.
  • To improve their ability to compare human-created and AI-generated writing.
  • To practise collaborating with AI to produce an improved finished piece of writing.

Learning Objectives:

  • To increase understanding of the difference between a human-written and a machine-written response.
  • To improve the ability to collaborate with AI to enhance a written response.
  • To recognise the strengths and weaknesses of AI-generated content.

Instructions:

You need to complete 4 activities for this assignment.

  1. Choose a cultural artefact important to you. Write 500 – 600 words on why it’s important and what insights it carries.
  2. Then write a prompt for ChatGPT and generate an answer.
  3. Re-write your original, incorporating new content gleaned from ChatGPT. Highlight the additional content in italics.
  4. Write 2-4 paragraphs comparing human-created and AI-generated writing.
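For students who would rather script step 2 than paste into the ChatGPT web interface, the prompt can be assembled programmatically. This is a minimal sketch only: the function name and wording are my own illustration, not part of the assignment, and the commented-out call assumes the OpenAI Python client (the `openai` package) with an `OPENAI_API_KEY` set in the environment.

```python
def build_artefact_prompt(artefact: str, words: int = 550) -> str:
    """Compose a ChatGPT prompt for step 2 of the assignment:
    generate an AI answer on the student's chosen cultural artefact."""
    return (
        f"Write about {words} words on why the cultural artefact "
        f"'{artefact}' is important and what insights it carries."
    )

# Sending the prompt would look roughly like this (requires an API key):
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(
#     model="gpt-3.5-turbo",
#     messages=[{"role": "user", "content": build_artefact_prompt("the sari")}],
# )
# print(reply.choices[0].message.content)
```

The AI answer returned this way is the raw material for steps 3 and 4: re-drafting and comparison, not submission.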

March 18, 2024

AI Marking Criteria

Rob Liu-Preece is the Academic Technologist for PAIS, Sociology and Philosophy at the University of Warwick. He has been an IELTS marker for Writing and Speaking for 20 years and previously spent 20 years teaching Academic Skills to international students in the UK and overseas.

This is the first of two posts written by Rob about AI and the ANTF Project:

The sudden explosion in the availability and use of generative AI technology, especially by university students, has left education professionals playing catch-up. With ChatGPT gaining 1 million users in just 5 days and 100 million in 2 months, I feel educationalists have only just come up for air. As part of that process, I've written marking criteria for assessing students' use of AI when completing assignments.

I think that in a learning environment characterised by uncertainty and disruption, students will benefit from an explicit statement of how the university wants them to use AI. Applying a marking framework like this could also lessen the need for tutors to follow a punitive academic-integrity route when dealing with misuse of AI, by opening up and defining an alternative category of 'poor academic practice' more closely aligned to AI. I also hope this type of approach will help steer the development of pedagogy around AI, providing a structure for ongoing debate and discussion. Lastly, having a set of criteria like this allows training and coaching on AI, for both students and tutors, to be reverse-engineered from it.

To address these issues, I've written a set of marking criteria based on the existing Politics and International Studies assignment marking criteria for undergraduate students. I would anticipate students appending a short report to their written assignments covering their use of AI. The framework is based around two main categories: appropriacy of use and awareness of key issues. The criteria are by no means a finished piece of work, are not necessarily fit for purpose, and haven't undergone any road testing or standardisation. Rather, they are designed to signal a possible route forward for those of us concerned and interested in shaping the take-up of AI in education. This does raise some questions in my own mind about whether such an approach is the right way to go. Should we be setting or defining an orthodoxy for AI use in quite such tight terms? Is the literacy-model approach implied here the correct one, or would it be better to focus on conscious use of AI by students?

Please note I used Google Gemini to help with the overall structure of this blog and for the statistics in the first paragraph.


January 22, 2024

AI: Empowering Inclusive Education

Have a read of this post by Nalina Brahim-Said, a student at the University of Bolton, looking at inclusive education in the UK curriculum and integrating AI for SEN and neurodiversity:

https://nationalcentreforai.jiscinvolve.org/wp/2024/01/09/ai-empowering-inclusive-education/


January 15, 2024

Generative AI: Misconceptions and misunderstandings

This blog considers some of the misconceptions and misunderstandings around generative AI:

https://nationalcentreforai.jiscinvolve.org/wp/2023/12/01/debunking-myths-and-harnessing-solutions-navigating-generative-ai-in-education/


July 24, 2023

Using AI to evaluate film criticism – Lewis Beer

Introduction

Can ChatGPT evaluate the quality of insights in a student essay? In January 2023, Daisy Christodoulou published an article (Can ChatGPT mark writing?, No More Marking Blog) exploring this question. She tested ChatGPT using English essays, and found that while its grading and feedback were more or less aligned with her own, the AI was sometimes unable to spot fundamental problems in essays. Christodoulou offers some fascinating insights, but one thing she does not explore in any detail is the relationship between ChatGPT’s qualitative evaluation of an essay and the content of the essay itself.

In this post, I will share the results of my own brief experiment, in which I pasted two film reviews into ChatGPT and asked it to evaluate (and compare) the quality of insights in these reviews. My aim here was to use these texts as substitute ‘essays’, and consider how the AI-generated responses might help or hinder a marker in giving meaningful feedback.

The dialogues quoted from below were conducted on 23 March 2023, using the free ChatGPT 3.5. (I later repeated the dialogues with both this and ChatGPT 4.0, but found the responses from 23 March gave more substantial and interesting feedback. I will not discuss those later dialogues in this post.)
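For anyone wanting to repeat the experiment at scale rather than through the web interface, the same dialogue can be scripted. The sketch below is my own illustration (the function name is hypothetical); the commented-out call assumes the OpenAI Python client (`openai` package) and an `OPENAI_API_KEY` in the environment, and note that API responses will not exactly reproduce the March 2023 web-interface dialogues.

```python
def evaluation_messages(film: str, review_text: str) -> list:
    """Build the experiment's opening prompt: a bare request to evaluate
    a pasted review, deliberately supplying no marking criteria."""
    return [{
        "role": "user",
        "content": (
            f"Evaluate the quality of insight in the following review "
            f"of the film {film}:\n\n{review_text}"
        ),
    }]

# The follow-up prompt used later in the post would be appended as a
# second user turn in the same conversation:
FOLLOW_UP = {"role": "user",
             "content": "Identify some ways in which the review could be improved."}

# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(
#     model="gpt-3.5-turbo",
#     messages=evaluation_messages("Nomadland", bradshaw_review_text),
# )
# print(reply.choices[0].message.content)
```

Keeping both turns in one conversation matters: the 'suggested improvements' build on the model's own initial evaluation, as the dialogues below show.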

Review 1: Peter Bradshaw

To begin with, I asked ChatGPT to ‘Evaluate the quality of insight in the following review of the film Nomadland’, then pasted the text of Peter Bradshaw’s review of Nomadland from The Guardian. I wanted to see how it would respond to this vague request, in which I provided no criteria on which to base an evaluation.

The response did a good job of summarising the review using selected words from the text. For instance, at the end of its evaluation the chatbot said:

[Bradshaw] does offer a mild critique that the film could have been more explicit in its condemnation of the economic forces causing the characters' hardships. However, they still recognize the film's greatness and Chloé Zhao's exceptional filmmaking.

This is clearly pieced together from the following passage in Bradshaw’s review, in which I have highlighted the details used by ChatGPT:

[The film is] brought off with exceptional intelligence and style. Arguably it is not angry enough about the economic forces that are causing all this but it still looks superbly forthright. There is real greatness in Chloé Zhao’s film-making.

The paraphrasing techniques used here – transposing words like ‘exceptional’ and ‘greatness’ into slightly different contexts, and spelling out that ‘all this’ means ‘the characters’ hardships’ – are effective in ‘feedback’ terms, because they seem to indicate an understanding of the writer’s (i.e. student’s) argument. However, Bradshaw says the film is ‘not angry enough’, not that it is ‘not explicit enough’, and this may be a significant error.

In its final verdict on the review, ChatGPT said:

Overall, this review appears to be well-written and insightful, providing a thorough analysis of the film’s themes, characters, and style while also highlighting its strengths and potential weaknesses.

Without my having provided any criteria, the AI assumes that an insightful film review will explore themes, characters, and style, and that it will offer a balanced assessment of the film’s strengths and weaknesses.

More of ChatGPT’s ‘marking criteria’ emerged when I asked it to ‘Identify some ways in which the review could be improved’. It offered some valid points in response to this prompt. For example:

The review could have delved deeper into the film's technical aspects, such as its cinematography, editing, and sound design. While the review mentions the film's style and how it differs from traditional narrative-driven films, it could have gone into more detail about how the film achieves this style and what specific techniques the filmmakers used.

This is arguably an area where Bradshaw’s review falls short. Bradshaw does discuss how Nomadland differs from traditional narrative-driven films, and he does connect this with Zhao’s stylistic choices; in this part of his review, he might have discussed film-making technique in more detail. So the AI’s choice to structure feedback like this (‘While you did talk about x, you could have delved deeper into y...’) could be helpful for a student looking to improve their essay by building on their strengths.

On that note, here is another of ChatGPT’s constructive criticisms:

The review could have included a more critical evaluation of the film. While the review praises the film’s strengths, it could have offered a more balanced assessment that acknowledges both the film’s strengths and its weaknesses.

This picks up on the detail quoted above, from the AI’s initial evaluation, noting that Bradshaw’s critique of the film is ‘mild’. The AI has perhaps noticed that Bradshaw’s more negative language is limited to the very end of his review, and is couched in the word ‘arguably’. Again, if we imagine this as feedback being provided to a student, ChatGPT’s evaluations do a good job of mixing praise and criticism: ‘You balance your argument by acknowledging the film’s weaknesses, but you only do this briefly at the end – you could have included a more critical evaluation.’

Other responses, however, show ChatGPT’s limitations, and would constitute problematic essay feedback. For instance:

The review could have offered more specific examples of how the film explores its themes and characters. While the review mentions some of the film's themes, such as the impact of economic hardship on older Americans, it could have gone into more detail about how the film portrays these themes and how they are relevant to contemporary society.

This is not really a fair critique: Bradshaw does highlight specific examples of how the film explores ‘the impact of economic hardship on older Americans’, and he does allude to contemporary issues such as factory closures, the dominance of Amazon, and the importance of the tourist trade in this part of America:

...looking for seasonal work in bars, restaurants and – in this film – in a gigantic Amazon warehouse in Nevada, which takes the place of the agricultural work searched for by itinerant workers in stories such as The Grapes of Wrath.

Fern, a widow and former substitute teacher in Empire, Nevada – a town wiped off the map by a factory closure – who is forced into piling some possessions into a tatty van and heading off...

At times, the film looks like a tour of a deserted planet, especially when she heads out to the Badlands national park in South Dakota, where there is also tourist-trade work to be had.

ChatGPT also says:

The review could have provided more context for the film's production and reception. For example, the review could have mentioned the awards and critical acclaim that the film has received, or how it fits into Chloé Zhao's broader filmography.

Some of this is fair – the review was published after Nomadland’s Oscar success, so Bradshaw could have mentioned this – but it misses the contextual details Bradshaw includes about the film’s production:

Zhao was even allowed to film inside one of Amazon’s eerie service-industry cathedrals.

The movie is inspired by Jessica Bruder’s 2017 nonfiction book, Nomadland: Surviving America in the Twenty-First Century, and by the radical nomadist and anti-capitalist leader Bob Wells, who appears as himself.

The people she meets on the road are, mostly, real nomads who have vivid presences on screen.

As with the previous criticism, ChatGPT has not acknowledged key details of the review in its initial assessment, so its critique is not balanced: it is like a marker who blames a student for ‘not doing x’ when the student in fact spent several paragraphs on ‘x’. (Human markers sometimes do this, of course.)

Review 2: Beatrice Loayza

I then asked ChatGPT, ‘Is the following review of the film Nomadland more incisive than the previous one?’, and pasted the text of Beatrice Loayza’s review of Nomadland, from Sight & Sound. Again, I deliberately did not provide any assessment criteria. ChatGPT’s answer was ‘yes’, for several reasons – some valid, some less so. First of all, it said, Loayza ‘provides a detailed analysis of the film's themes and cinematography, as well as the performance of Frances McDormand’. This is fair, and picks up on one of the criticisms of Bradshaw cited above (namely his lack of attention to technical aspects). Loayza comments on specific camera techniques, naming the cinematographer and describing the light effects he achieves. She also does more than Bradshaw to explain why McDormand’s performance is so effective.

ChatGPT picks up on another of its own criticisms of Bradshaw by praising Loayza’s critical perspective on the film:

However, the review also criticizes the film's lack of force and clarity in its insights into labor in the 21st century and the exploitation of older Americans. The author points out that the film's depiction of workers exploited by Amazon feels too easygoing and questions the film's liberal naivete in addressing the conditions of the nomadic lifestyle. Overall, the review provides a more nuanced and thoughtful analysis of the film.

This draws upon the following passage in Loayza’s review; again, I have highlighted phrases that ChatGPT seems to have picked up on:

[The film’s] insights into labour in the 21st century, and the exploitation of an older generation of Americans, lack force and clarity. At the very beginning of the film, Fern is employed by Amazon’s CamperForce programme, which provides base wages and free parking space to seasonal workers in their 60s and 70s. In 2020, Amazon doubled its profits during a global pandemic, which makes Zhao’s easygoing depiction of workers exploited by the company feel rather toothless. That the film aims to capture the ways in which a kind of working-class American outsider struggles without fully addressing the conditions of that struggle casts over it the shadow of a questionable liberal naivete.

To re-cap:

  • In its initial assessment of Bradshaw’s review, ChatGPT noted that his critique of the film was ‘mild’
  • In suggesting improvements, it built on this comment by recommending a more balanced approach
  • And in drawing a comparison with Loayza’s review, it notes her more substantial version of Bradshaw’s criticism.

At each stage, the AI appears to be drawing upon specific evidence from the texts, rather than just ‘hallucinating’ these evaluative comments.

Elsewhere in its comparison between Bradshaw and Loayza, however, ChatGPT did hallucinate some differences in order to justify its verdict. I will not cite these here, as this post is already very long, but the inaccuracies were of a similar kind to those in the summary of Bradshaw discussed in the previous section.

Final thoughts

If these film reviews were formative essays that I had to mark, I could use ChatGPT's feedback to offer legitimate praise and criticism, suggest improvements, and judge the relative merits of the two essays against each other. However, I would also notice that ChatGPT misses important details in these texts and draws some unfounded contrasts between them.

In the course of this experiment, I tried several variations on the above prompts. Here are some things to note if you want to try a similar experiment yourself:

  • I fed the reviews into ChatGPT several times, and in a different order. When I asked it to make a comparative evaluation, it tended to prefer the second review (even if this was Bradshaw’s). When I asked if it could reverse its comparative evaluation (i.e. ‘Can you argue that the other review is more insightful than the first?’), its responses varied: sometimes it doubled down on its first opinion, sometimes it conceded that an alternative opinion could be justified. Again, the reasons given for these opinions ranged from ‘valid’ to ‘hallucinatory’.
  • This post demonstrates what ChatGPT is capable of in the hands of a technically ignorant, time-poor amateur like me, but with the right prompts and follow-up prompts it would no doubt be possible to collate more credible 'essay feedback', and then ask the AI to construct and present this in an appropriate way. Have a look at the other articles and resources linked on the AI in Education Learning Circle webpage, try an experiment of your own, and share the results in the comments below. In particular, you might think about the learning outcomes and marking criteria specific to your discipline, or your module, and consider how you might train ChatGPT to use these in evaluating a piece of text.
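One simple way to supply discipline-specific marking criteria, unlike the criteria-free prompts used in my experiment, is to embed them in a system message ahead of the text to be evaluated. The sketch below is illustrative only: the function name and example criteria are my own, not an official rubric, and the messages list is in the format expected by OpenAI's chat completions API.

```python
def rubric_messages(criteria, text):
    """Embed marking criteria in a system message, then pass the
    text to be evaluated as the user turn."""
    rubric = "\n".join(f"- {c}" for c in criteria)
    return [
        {"role": "system",
         "content": ("You are a marker. Evaluate the following text "
                     "strictly against these criteria:\n" + rubric)},
        {"role": "user", "content": text},
    ]

# Hypothetical module-level criteria; substitute your own rubric.
example = rubric_messages(
    ["depth of analysis", "use of evidence", "clarity of structure"],
    "PASTE ESSAY OR REVIEW TEXT HERE",
)
```

Whether the model actually adheres to the supplied criteria, rather than falling back on its default notions of a 'good' essay, is exactly the kind of question worth testing in your own experiment.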

July 17, 2023

Who Uses AI, How, and When?

By Matthew Voice, Department of Applied Linguistics, University of Warwick

As I mentioned in my previous WJETT blog post, I have participated in a strand of the 'AI in Education' learning circle during the last few months. Over the course of our strand's meetings on the role of AI in formative feedback, our focus has primarily been on the nature of the emerging generative AI technology. Our research and conversations have a great deal to contribute with regard to what large language models mean for the future of formative feedback. However, given that these models are tools, it is worth reflecting on the user, too.

Our learning circle represents educators and students from a wide range of degree programmes, and motivations for engaging with higher education will vary from student to student across all courses. Throughout our discussions, conversation has largely focused on the role of AI in improving formative feedback for students who are motivated by a desire for academic excellence. Given that students in this position are likely to engage with extracurricular activities within the university (e.g. joining the WIHEA as a student member, or taking part in surveys of student experience or voluntary training workshops), their voices will perhaps be heard most clearly in our present conversations.

But the experiences of these students are not representative of our cohort as a whole, or indeed of all points across an individual’s university journey. Many students may view academic writing – either across their degrees, or for certain assignments or modules – as an obstacle to be overcome. In these cases, the objective of academic writing shifts away from excellence, and towards the production of work which will simply allow the student to attain a passing grade. Engagement with GenAI for these students might not mean refinement and improvement; it may simply be a resource for the fastest or most convenient means of content generation.

Integrating GenAI into teaching and formative feedback requires a recognition of this spectrum of experience and motivation. In my previous WJETT blog I recommended that future discussion and planning should consider reasonableness and realism when supporting students to use and think critically about GenAI. By this, I mean:

1) Reasonableness: Is the task being asked of the GenAI (e.g. to produce formative feedback on a draft assignment) one that it is capable of achieving?

2) Realism: Does our (educators, policy planners, researchers) understanding of engagement with GenAI reflect actual independent student use cases?

Assessing reasonableness through understanding what GenAI is capable of achieving will require ongoing review, and the development of support and training for staff in order to keep pace with developments. This, I think, has largely been the focus of the work done by the ‘AI in Education’ learning circle and the final report this group has produced. Going forward, we also need to consider how well we understand our students’ independent engagement with this sort of assistive technology. What tools and resources are they familiar with? How do they understand them? What do they do with them, and at what point(s) in their academic writing?

Grounding future policy and pedagogic resource development in relation to a realistic model of students’ use and understanding of GenAI will be a task as complex as anticipating the impact of future technological development in large language models. By acknowledging this, and undertaking this work, we best position ourselves to ensure that outputs and resources which arise from working groups such as ours will be meaningful to staff and students working across the university.


July 10, 2023

AI Feedback Systems: A Student Perspective – Mara Bortnowschi

The buzz is endless – AI is taking Higher Education by storm. Since the launch of ChatGPT, everyone seems to have an opinion, and rightfully so: it is so new that we have yet to fully understand its potential and impact. Within academia, the general sentiment mostly surrounds concern for responsible use, as many students have heard their professors and lecturers warning them against it. However, its rapid growth and widespread adoption demonstrate that it is not going anywhere soon, so instead of avoiding it, we should take the time to understand the risks and challenges, but also the opportunities it presents. Furthermore, I think the student voice has been underrepresented in these discussions, yet students can be the key to harnessing this technological advancement as an asset for enhancing learning and education.

The WIHEA group have already explored a number of subjects regarding AI in education from student perspectives that can be found on the group’s Artificial Intelligence in Education webpage. These have included emerging questions AI presents, the risks and ethics of academic integrity, evolving assessment styles to mitigate and integrate AI into assessment and how teaching may evolve. I will now explore some of the opportunities that are being presented with the widening availability and access to AI tools for students to enhance their learning and to generate formative feedback. While summative marking has been and continues to be required to be marked by human markers according to the UK Quality Code in Assessment (UKSCQA, 2018), formative feedback has more flexibility, and we are now presented with an opportunity to test and utilise the capabilities of these AI technologies in providing timely, constructive, and developmental feedback.

Existing feedback systems

This notion will be explored particularly with regard to formative elements of summative assessments. Feedback should allow a student to understand the strengths and weaknesses of their work and, if engaged with effectively, can be used to improve academic performance and thus learning. Throughout the pandemic especially, we saw the role of feedback change massively: as more of education shifted online, reliance on formative assessments increased as assessments for learning, in contrast to summative assessments, which represent assessments of learning (Wyatt-Smith, Klenowski and Colbert, 2014). Formative assessments are also an opportunity for autonomous learning, developing one's own skills and relying on self-motivation. It would also be fair to say that formative feedback is a kind of self-assessment: even though it is generated externally, it is the engagement with it, and the learning you apply from it, that ultimately makes the difference in each student's performance.

AI generated feedback

So what could incorporating AI into these feedback systems change? The use of algorithms to generate feedback is not an entirely new concept. Tools such as Grammarly and Sketch Engine have been around for a while; they generate feedback on academic writing and are for the most part freely available, or students are granted access to them by their institutions. But more complex algorithms that use machine learning can provide specific, personalised feedback. To make this even more applicable, by integrating elements of summative marking criteria or rubrics, they could provide some of the most relevant feedback at a moment's notice.

This application is indeed being explored right here at the University of Warwick. Isabel Fischer, a WBS professor, is piloting a deep-learning formative feedback tool, developed with the WBS marking criteria at its core, that has the potential to provide more comprehensive feedback. By simply submitting a PDF or Word document, the student instantly receives a document of in-depth feedback on the four aspects of the WBS marking criteria. This could be just the start of similar feedback tools that take into account each department's own assignments, marking criteria, and writing styles for drafts of academic writing. While there are definitely some considerations to look out for, this is fascinating and shows great promise as a tool to increase student autonomy, letting students adapt how they approach assignments while still benefiting personally from formative feedback.

Considerations of using generative AI

The considerations I mentioned earlier are worth discussing, as students are turning to generative AI technologies like ChatGPT more and more. While these technologies are designed to simulate human intelligence, there are some things they are simply not capable of: for example, they can miss expressive language, and will not always grasp the nuances of your writing. In other words, any feedback you receive from AI should be approached critically. You decide what to implement from the feedback you receive, and you are responsible for identifying and understanding what will truly improve your work. This is all part of the responsible use of AI, but it really also goes for human-generated feedback. Your assignment will, at the end of the day, still be marked by a human marker with in-depth subject-specific knowledge and the skills they are asking you to learn and demonstrate. The quick, irresponsible and neglectful way to exploit resources like ChatGPT is to accept every response it generates and paste it into a piece of work, only to find that its references are wrong or don't exist at all. Firstly, this is not how we should use it, as it is blatant plagiarism; secondly, a critical approach is needed to (for example) verify references and to recognise that AI answers can lack important context. Regardless, the point stands: responsible applications of AI technologies should not be about using them to do your work, but about using them to enhance or improve your outputs.

Engagement with AI technologies and feedback

A new level of engagement with AI has been seen since the release of ChatGPT and DALL-E. Perhaps this is rooted in the great advance these tools represent or, more sinisterly, in the opportunity to exploit the technology to minimise workload. Regardless, everyone's interest has been piqued, and the level of engagement has been massive, exceeding what anyone might have expected, particularly from students. At the University of Warwick, students made a total of 850,000 visits to ChatGPT in its first two months on the university's Wi-Fi alone (SWNS, 2023). I think it's important to try to understand why this might be, in order to channel this traffic for good rather than simply fear the 'cheating epidemic' the media has dubbed it.

This contrasts with the older algorithmic tools mentioned previously, such as Grammarly and Sketch Engine, which have seen much more moderate levels of engagement and use. The reasons vary, from lack of awareness, to feedback limited largely to language, to lack of confidence in the feedback they provide. AI has surpassed some of these limiting factors in that it can generate a wider breadth of feedback covering language, style, structure and more. Confidence in the feedback it produces is, ironically, reinforced by the continuous concern from educators: if professors fear that AI technologies can be used to write entire assessments, then they must be capable of doing so.

Further benefits

As a result, we have seen students become much more open to using ChatGPT, and I think we should channel this eagerness in a way that encourages students to enhance their academic achievements. By introducing resources such as Isabel Fischer's feedback tool, or teaching students how to formulate prompts for ChatGPT that generate similarly constructive feedback, we can guide a smooth integration of ChatGPT into Higher Education practices. And there are many more potential benefits. For one, this lifts a massive workload off staff: if such tools can take care of writing style and structure, staff's role in formative feedback can remain focused on content. The speed (or even instantaneity) with which AI can produce feedback also makes feedback more accessible. Furthermore, students can engage with it as many times as they like, inputting multiple drafts, because they are less limited by staff capacity. Different students also work on different timescales and with different approaches when faced with an assignment, which further widens accessibility to students who might start assignments later than what would normally be a formative deadline. Communicating these advantages is key to achieving these outcomes and to harnessing this technology to enhance the educational experience for both staff and students.

Conclusion and personal experience

In my experience thus far with ChatGPT, I have had mixed feelings. On the one hand, I am apprehensive because its use is quite contentious at the moment, with some professors explicitly prohibiting its use or consultation. On the other hand, it is a resource that is available, and it feels foolish not to use what is at your disposal. Through the research that went into this article and discussions with faculty members about its potential to provide feedback, I have been introduced to a clearly constructive way to engage with ChatGPT that seems to make both staff and students happy. While we are still in the early stages of understanding the potential and risks of generative AI, this is a tool that will have implications for Higher Education, and we face the choice of either embracing it, for example to generate formative feedback, or letting it escape our control at the cost of academic integrity, because it is clear that prohibiting its use will not prevent people from exploiting it.

For further queries: marabortnowschi@yahoo.ca or mara.bortnowschi@warwick.ac.uk (may expire soon)



March 06, 2023

On ‘Opportunities of AI in Higher Education’ by DALL.E and ChatGPT

Prologue and epilogue by Isabel Fischer (Warwick Business School)

Image illustrating artificial intelligence

Since the creation of our task and finish groups on evaluating and shaping the opportunities of AI in Higher Education (for further information see here: https://blogs.warwick.ac.uk/wjett/entry/ai_for_good/) we have had many very insightful ‘human to human’ discussions. The aim of this blog is, however, to show what ‘machines’ draw and write on the topic, to illustrate both the strengths and the weaknesses of generative AI.

The images at the start of the blog represent DALL.E’s take on the topic. Having never been talented at drawing, I use DALL.E in my business teaching to show live how AI translates the topic we discuss by imitating distinct artists. Students enjoy how DALL.E (OpenAI) shows other, very creative, images while we wait for the AI ‘to think & draw’; four images come up for each prompt. Of course, we also discuss the risks, threats, and ethics behind AI (here is an open-access case study we wrote on AI ethics in the field of ESG investing: https://journals.sagepub.com/doi/full/10.1177/2043886920961782).

While DALL.E generates images that I could not have drawn, ChatGPT generates text that I could have written better and more concisely, with different logical priorities. I could - and actually did - create visualisations to illustrate my points. Nevertheless, ChatGPT did beat me by far on ‘speed’. For anybody who has not yet asked ChatGPT any questions, here are the ChatGPT responses based on five prompts:

  1. Opportunities for AI in HE
  2. Strategy for implementing and integrating AI in HE
  3. Developing research on the opportunities of AI in HE
  4. Developing an AI based marking and feedback tool
  5. Common areas of weakness of students’ academic writing

Opportunities for AI in HE

Artificial intelligence (AI) is transforming the world around us, and higher education (HE) is no exception. The potential opportunities of AI in HE are vast, ranging from improving learning outcomes and enhancing student engagement to streamlining administrative tasks and reducing costs. In this article, we explore some of the key opportunities of AI in HE and how they can be realized.

  1. Personalized learning: AI can help personalize the learning experience by adapting the content, pace, and style of instruction to the individual needs and preferences of each student. This can be achieved through intelligent tutoring systems, which use machine learning algorithms to analyze student performance data and provide customized feedback and support. Personalized learning can improve student engagement, increase retention rates, and enhance learning outcomes.
  2. Predictive analytics: AI can be used to analyze large datasets of student information, such as grades, attendance records, and behavioral data, to predict future performance and identify at-risk students who may need additional support. This can help educators intervene early and provide targeted interventions to improve student success.
  3. Virtual assistants: AI-powered virtual assistants can help students with routine administrative tasks such as scheduling, registration, and accessing resources. This can free up staff time and reduce wait times for students, improving their overall experience.
  4. Automated grading: AI can be used to automatically grade assignments and provide feedback to students, reducing the workload on instructors and improving the consistency and objectivity of grading. This can also provide students with immediate feedback, enabling them to improve their performance more quickly.
  5. Research and data analysis: AI can be used to analyze large datasets of research publications and help identify trends, patterns, and gaps in the literature. This can help researchers stay up-to-date with the latest developments in their field and generate new insights.
  6. Adaptive assessments: AI can be used to create adaptive assessments that adjust the difficulty and content of questions based on the student's level of knowledge and ability. This can help educators more accurately assess student performance and provide targeted feedback and support.
  7. Smart campus: AI can be used to optimize campus operations, such as energy management, facility maintenance, and security. This can reduce costs and improve the overall efficiency of campus operations.

While the opportunities of AI in HE are significant, there are also challenges that need to be addressed, such as ethical considerations, data privacy concerns, and ensuring that AI is used to enhance, rather than replace, human interaction and support. To fully realize the potential of AI in HE, it is essential that educators, researchers, and policymakers work together to develop responsible and effective strategies for implementing and integrating AI technologies.

Strategy for implementing and integrating AI in HE

Implementing and integrating AI technologies in higher education (HE) requires careful planning and consideration to ensure that these technologies are used in a responsible and effective manner. Here are some responsible and effective strategies that HE institutions can use to implement and integrate AI technologies:

  1. Develop a clear vision and strategy: HE institutions should develop a clear vision and strategy for how AI technologies will be used to enhance the student experience, improve learning outcomes, and support research. This should involve identifying specific use cases and goals for AI, as well as assessing the potential benefits and risks.
  2. Foster collaboration and communication: Implementing and integrating AI technologies requires collaboration and communication across departments and stakeholders. HE institutions should establish cross-functional teams that include educators, administrators, IT professionals, and students to ensure that all perspectives are considered and to foster buy-in and support for AI initiatives.
  3. Ensure transparency and ethical considerations: HE institutions should ensure that the use of AI technologies is transparent and ethical. This includes being clear about how AI technologies are being used, ensuring data privacy and security, and ensuring that AI is used to enhance, rather than replace, human interaction and support.
  4. Invest in staff training and development: AI technologies require specialized skills and knowledge to implement and use effectively. HE institutions should invest in staff training and development programs to ensure that educators and administrators have the necessary skills and knowledge to use AI technologies in their work.
  5. Pilot and evaluate AI initiatives: Before scaling up the use of AI technologies, HE institutions should pilot and evaluate their effectiveness. This involves testing AI technologies in small-scale pilots, collecting feedback from stakeholders, and using data to evaluate their impact on learning outcomes, student engagement, and other key metrics.
  6. Engage with the wider AI community: HE institutions should engage with the wider AI community to stay up-to-date with the latest developments and trends in AI. This includes attending conferences and workshops, collaborating with other institutions, and partnering with industry experts and researchers.
  7. Align AI initiatives with institutional goals and values: Finally, HE institutions should ensure that their AI initiatives are aligned with their institutional goals and values. This involves considering the unique needs and priorities of the institution, as well as the needs and preferences of students, educators, and other stakeholders. By aligning AI initiatives with institutional goals and values, HE institutions can ensure that they are using these technologies in a responsible and effective manner.

Developing research on the opportunities of AI in HE

There are several ways to develop ideas for future opportunities of AI in higher education and organize it as a research topic:

  1. Literature review: Conducting a literature review of existing research on AI in higher education can help identify the gaps and potential areas for future research. This can involve searching for relevant articles, books, and reports in databases such as Google Scholar, ACM Digital Library, and IEEE Xplore.
  2. Expert interviews: Talking to experts in the field, such as educators, researchers, and AI professionals, can help identify emerging trends and potential opportunities for AI in higher education.
  3. Case studies: Conducting case studies of AI implementation in higher education institutions can provide insights into the benefits and challenges of adopting AI technologies.
  4. Surveys: Conducting surveys of stakeholders such as students, faculty, and administrators can help identify their needs and preferences for AI in higher education.
  5. Workshops and conferences: Attending workshops and conferences on AI in higher education can provide opportunities to learn from experts and exchange ideas with peers.

Once you have developed ideas for future opportunities of AI in higher education, organizing it as a research topic can involve the following steps:

  1. Identifying research questions: Based on the ideas developed, identify research questions that are relevant, feasible, and significant.
  2. Defining the scope of the research: Clarify the boundaries of the research topic and define the key concepts and variables.
  3. Developing a research methodology: Decide on the appropriate research methods, such as qualitative, quantitative, or mixed methods, and develop a research design.
  4. Conducting data collection: Collect data using appropriate methods such as surveys, interviews, and case studies.
  5. Analyzing data: Analyze the data using appropriate statistical or qualitative analysis techniques.
  6. Drawing conclusions and making recommendations: Based on the findings, draw conclusions and make recommendations for future research and practice.
  7. Writing up the research: Write up the research in a clear and concise manner, following the conventions of academic writing.

When reviewing emerging trends of AI in higher education after conducting qualitative research, a theory that could be useful is grounded theory.

Grounded theory is a research methodology that involves systematically collecting and analyzing qualitative data to develop a theory or explanation of a phenomenon. The approach emphasizes the need to generate theory from the data itself, rather than imposing preconceived notions or theories on the data.

When conducting a qualitative study of emerging trends of AI in higher education, the grounded theory approach can help identify key themes and patterns in the data that may point towards emerging trends. These trends can then be used to develop a theory that explains how AI is being used in higher education, what the benefits and challenges are, and what the future implications may be.

Grounded theory can also be helpful in identifying gaps in the literature and areas for future research. By systematically analyzing the data, researchers can identify areas where more research is needed, and use the emerging trends to develop research questions that are relevant and meaningful.

Developing an AI based marking and feedback tool

Designing an AI system that can be used to automatically grade assignments and provide personalized feedback to students can be a complex task, but it can be accomplished by following these steps:

  1. Define the grading criteria: The first step is to define the grading criteria for the assignment. This involves breaking down the assignment into its key components and determining how each component will be evaluated. The grading criteria should be clearly defined and objective, so that the AI system can easily understand and apply them.
  2. Create a training dataset: To train the AI system, you need to create a training dataset that includes a set of annotated assignments. These annotated assignments should be graded by human experts according to the defined grading criteria. The dataset should be large enough to ensure that the AI system can learn to recognize a variety of different assignment types and styles.
  3. Choose the appropriate machine learning algorithm: There are several machine learning algorithms that can be used for automatic grading, including decision trees, support vector machines, and neural networks. The choice of algorithm will depend on the complexity of the grading criteria and the size of the training dataset.
  4. Train the AI system: Once you have chosen the appropriate machine learning algorithm, you can train the AI system using the annotated training dataset. The system should be trained to recognize patterns and relationships in the data, and to identify which components of the assignments are important for grading.
  5. Test the AI system: After training the AI system, you should test it using a set of unseen assignments that have not been included in the training dataset. This will help you evaluate the system's accuracy and identify any areas where it needs further improvement.
  6. Provide personalized feedback: To provide personalized feedback to students, you can use the AI system to identify areas where a student may need additional support or guidance. For example, if a student consistently struggles with a particular component of the assignment, the system can provide targeted feedback and resources to help the student improve.
  7. Continuously improve the AI system: Finally, you should continuously monitor and improve the AI system. This involves regularly updating the training dataset, evaluating the system's performance, and incorporating feedback from students and educators to refine the grading criteria and feedback mechanisms.

In summary, designing an AI system for automatic grading and personalized feedback involves defining the grading criteria, creating a training dataset, choosing the appropriate machine learning algorithm, training the system, testing it, providing personalized feedback, and continuously improving the system. By following these steps, you can develop an effective and efficient AI system that enhances the learning experience for students and educators alike.
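To make the training-and-testing loop above concrete, here is a deliberately minimal toy sketch, not a production grader and not any particular institution's tool: it "trains" on a tiny set of human-graded example texts (step 2) and grades an unseen essay (steps 4–5) as the average grade of its most lexically similar neighbours. All texts and grades below are invented for illustration.

```python
# Illustrative toy sketch of steps 2, 4 and 5 above: learn from
# human-annotated assignments, then grade an unseen one. Real systems
# use far richer grading criteria, features, and models.
from collections import Counter
import math

def features(text):
    # Step 1 stand-in: a crude bag-of-words proxy for grading criteria.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two bag-of-words vectors.
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def predict_grade(training_set, essay, k=3):
    # Steps 4-5: grade an unseen essay as the mean grade of its
    # k most similar human-graded neighbours.
    vec = features(essay)
    ranked = sorted(training_set,
                    key=lambda ex: cosine(features(ex[0]), vec),
                    reverse=True)
    top = ranked[:k]
    return sum(grade for _, grade in top) / len(top)

# Step 2: a (tiny, invented) training dataset of human-graded texts.
training = [
    ("the essay argues clearly with strong evidence and structure", 75),
    ("clear argument supported by evidence", 70),
    ("weak structure and no evidence for the claims", 45),
    ("no clear argument or supporting evidence", 40),
]

print(predict_grade(training,
                    "a clear and well structured argument with evidence",
                    k=2))  # prints 55.0
```

Notably, the toy grader averages in a low-graded but lexically similar example, which illustrates exactly why step 5 (testing on unseen work) and step 7 (continuous improvement) matter before trusting such a system.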

Common areas of weakness of students’ academic writing

Academic writing is an essential skill for students in higher education. However, there are common areas of weakness that students often struggle with. Here are some of the most common areas of weaknesses for students' academic writing in higher education:

  1. Poor organization and structure: Students often struggle with organizing their ideas and structuring their writing in a way that is clear and logical. This can make their writing difficult to follow and reduce the overall effectiveness of their argument.
  2. Weak thesis statements: A strong thesis statement is essential for guiding the direction of an academic paper. However, many students struggle to develop a clear and concise thesis statement that effectively summarizes the main argument of their paper.
  3. Inadequate research and citation skills: Effective academic writing requires the use of credible sources to support arguments. However, many students struggle with finding appropriate sources, evaluating their credibility, and correctly citing them in their writing.
  4. Poor grammar and syntax: Grammatical errors and poor syntax can make writing difficult to read and understand. This can detract from the clarity and effectiveness of students' arguments.
  5. Lack of clarity and concision: Academic writing should be clear and concise, but many students struggle with using language that is overly complex or wordy. This can make their writing difficult to read and reduce the impact of their argument.
  6. Inappropriate tone or style: Students may struggle to adopt an appropriate tone or style for academic writing, which can lead to writing that is overly informal or lacking in scholarly rigor.
  7. Failure to address the assignment prompt: Finally, students may struggle with addressing the assignment prompt in a way that fully meets the expectations of the instructor. This can result in writing that is off-topic or fails to fully address the main requirements of the assignment.

Overall, addressing these common areas of weakness in academic writing can help students improve their writing skills and achieve greater success in their academic pursuits.

Epilogue: For comparison to the above, here are some ChatGPT responses to prompts by Chaoran Yang, a student who is part of my consulting module. I suggested to all students of that cohort to try ChatGPT for idea generation at the start of a consulting exercise for the Warwick Esports centre. Similar to my findings from above, the students also felt that ChatGPT did not provide any major insights. Possibly because we did not ask the right questions, and, more importantly, because ChatGPT is still in its infancy.

In our team we have spent over 2,000 hours developing the Warwick AI Essay Analyst. We used a mixture of non-AI rule-based statistical features and deep-learning algorithms and frameworks, e.g., PyTorch and the Hugging Face Transformers library (for further information on our AI-based tool see here: https://nationalcentreforai.jiscinvolve.org/wp/2022/11/16/interested-in-receiving-formative-feedback-on-your-draft-essays-and-dissertations-on-demand-introducing-warwicks-ai-essay-analyst/).
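To give a flavour of what the "rule-based statistical" half of such a hybrid might look like, here is a hypothetical sketch (this is not the actual Warwick AI Essay Analyst): simple surface statistics computed from a draft, of the kind that could later be combined with deep-learning signals.

```python
# Hypothetical sketch of rule-based statistical features for draft
# feedback (NOT the actual Warwick AI Essay Analyst): surface
# statistics a hybrid tool might combine with deep-learning signals.
import re

def surface_stats(text):
    # Split into sentences on terminal punctuation, and into words.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    return {
        "sentences": len(sentences),
        "words": len(words),
        "avg_sentence_length": len(words) / len(sentences) if sentences else 0.0,
        "lexical_diversity": len(set(words)) / len(words) if words else 0.0,
    }

draft = "This essay argues one point. It argues it twice. It argues it clearly!"
print(surface_stats(draft))
```

Statistics like these are cheap, transparent, and easy to explain to students, which is one reason hybrid designs keep a rule-based layer alongside opaque deep-learning components.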

With the current progress in the field of generative AI, future tools will be developed faster. Let’s work together to ensure that all tools, whether developed in-house or bought or endorsed by the university, have robust ethical underpinnings. My final suggestion for readers is to review the Ethical guidelines on the use of artificial intelligence (AI) and data in teaching and learning for educators, produced by the Publications Office of the European Union: https://data.europa.eu/doi/10.2766/153756


February 13, 2023

AI for Good: Evaluating and Shaping Opportunities of AI in Education

By Isabel Fischer, Leda Mirbahai, and David Buxton

Following the rise of awareness of the opportunities (and threats) of artificial intelligence (AI) in education, we have created a task and finish group which aims to review and ‘imagine’ the opportunities and challenges of AI in education, including assessments. Our vision is to deploy AI as a tool that supports all students, independent of background and socio-demographic characteristics, in being successful in their studies and their future work, while ensuring academic integrity, and that helps educators feel confident in using AI effectively to promote learning. We are working in five (sub)groups:

  1. General AI in Education (AIEd) Opportunities & information sharing
  2. Novel and Diverse Assessment Designs
  3. Feedback, Marking, Authorship Detection
  4. Designing Teaching Content - ‘what is out there being developed?’
  5. 'Red Team': AI Ethics and Academic Integrity

As we are still keen for colleagues from within Warwick, from other institutions, and from the wider community of stakeholders to join us, here is some further information on each (sub)group:

1) General AI in Education (AIEd) Opportunities & information sharing: We review how to capture, shape, and disseminate the opportunities for both learner-facing and educator-facing AI, mainly in HE but also considering how HE can support the secondary and even primary school sector (e.g. how to help teachers experiment with different forms of AI in a low-stakes environment). We also consider the benefits that AI offers, such as reduced inequality, fairness, and democratisation, evaluating how we can support SDG 4 (equitable and quality education) and SDG 10 (reducing inequalities). We want to help educators understand how they might embrace recent AI developments in their professional practice. Together with sub-group / Strand 5, the ‘red team’, we also want to inform colleagues about research (similar to mini literature reviews) on topics such as Algorithmic Fairness.

Target Output: A WIHEA page that is informative for colleagues new to AIEd (explanations, links to other resources, links to discussions / brainstorming exercises / blogs, suggestions for their practice)

2) Designing Assessments: We review the opportunities for designing and setting diverse assessments (Learner-facing), including embedding our work within our different Learning Circle’s work. It is in this strand that most of the student co-creation will take place.

Target Output: WIHEA page, blogs, and talks

3) Feedback and Marking: We review the opportunities of using AI for formative feedback (learner-facing), summative feedback (educator-facing), ‘AES – automated essay scoring’ (educator-facing), and stylometry (authorship authentication), as well as ChatGPT detection. One aspect of this strand (but not constrained to it) is ‘Move fast, Break fast, Learn fast’: doing small-scale experiments and testing them (e.g., Consulting students will experiment with mind maps this term and can then, but don’t have to, submit their work to the Warwick AI Essay Analyst for formative feedback, and we can analyse their work).

Target Output: A WIHEA page that disseminates information and possibly diffusion of the actual Warwick AI Essay Analyst tool at Warwick, potentially producing research output

4) Designing Module and Lesson Content & Recommendations for institutional Developments / Purchases: Educator-facing, we review tools and initiatives that might help educators in planning and organising their modules and lessons, as well as in monitoring their email and forum entries. This group looks at all educator-facing areas besides designing assessments (group 2) and providing feedback on assessments (group 3). This group might also make recommendations to the institution on what software to build or to purchase.

Target Output: A WIHEA page that disseminates information, possibly making recommendations for in-house developments / purchase of external software packages

5) A ‘red team’ acknowledges that AI is here to stay and ensures we follow AI Ethics guidelines and that everybody is clear about the risks. This team also reviews and mitigates the challenges to Academic Integrity more broadly. Moreover, it reviews the risk of bought-in products from EdTech and Tech companies, ensuring that AI Ethics is applicable both for in-house and off the shelf, bought-in products.

Target Output: A WIHEA page that provides information for colleagues worried about AIEd (explanations, links to other resources, links to discussions), especially on the topic of AI Ethics and Academic Integrity (what is OK to do, what isn’t, and where students / educators should draw the line). Collaborating with strand 1, this group might want to explain (via a high-level literature review / links to important research) aspects of AI Ethics / Academic Integrity, such as the concept of ‘Algorithmic Fairness’. Building on work by other groups, e.g., last year’s ‘Online Assessment Data Ethics Group’, this group might want to develop a proposal for SLEEC (https://warwick.ac.uk/services/gov/committees/sleec/) and/or provide guidance and advice to EPQ on suitable policy and guidance where appropriate.

Proposed Overall Target for entire Task and Finish Group, i.e. across the five groups / strands: Have some tangible outputs (WIHEA page, blogs, talks) that support colleagues when they embrace change in an ethically sound way that respects all stakeholders, especially learners and educators. Ideally collaborating with other universities, other education providers, and industry. Possibly develop a proposal for SLEEC and/or provide guidance and advice to EPQ on suitable policy and guidance where appropriate.

Please email Isabel.fischer@wbs.ac.uk if you are interested in joining any of the groups.

Please email Leda.Mirbahai@warwick.ac.uk if you are interested in joining our open WIHEA Diverse Assessment Learning Circle with interesting talks, such as our talks this month on Synoptic Assessments and on Democratising Assessments.


October 25, 2021

AI Ethics for Assessments in Higher Education


AI Ethics for Assessments in Higher Education: A project example of an interdisciplinary social sciences undergraduate summer research scheme

By Isabel Fischer (Warwick Business School) and Thomas Martin (Economics)

Warwick’s Social Sciences offer students and faculty from economics, education and Warwick Business School (WBS) the opportunity to take part in an interdisciplinary summer research project, improving awareness and understanding of collaborative research work on a topic of the students’ choice. One of this summer’s groups focused on AI ethics for assessments, where students applied the EU Ethics Guidelines for Trustworthy AI to Higher Education assessment. Students concluded that, despite its limitations, AI has the potential to make assessment processes in higher education more effective and fairer. They suggested that AI should be embraced, but only with human oversight and agency, and with clear stakeholder communication in place.

Linden Davison, a student from the Department of Economics, commented: “Getting involved in the UG research scheme broadened my awareness beyond my single subject discipline - working in a field I wasn't aware existed when we started! It was a pleasure to work alongside students from different departments and be guided by such engaging, motivated staff alike.” Toby Pia, also from Economics, added: “Throughout my URSS experience I improved my ability to explain complex ideas in a more understandable way. I also got better in conveying a balanced argument as previously I tended to get more entrenched into one side of a debate rather than looking at it from both sides.”

At the end of the project each student produced a research poster depicting one of the seven overarching themes of the EU Ethics Guidelines for Trustworthy AI. Below is an example by Shubhangi Bhatt, a student from the Department of Education Studies, on Transparency.

poster_example.pdf

AI poster (text only version)

Finally, here is a link to five articles that explain how AI could be made ethical and trustworthy:

https://www.wbs.ac.uk/news/five-reads-you-need-to-make-ai-ethical-and-trustworthy/

You might also want to read about how AI can (positively) influence education more generally:

https://www.wbs.ac.uk/news/five-areas-ai-can-have-the-greatest-impact/

