All 14 entries tagged AI


August 14, 2024

TELSIG podcast episode

Have a look at the latest TELSIG podcast about how small language models can change education: https://www.youtube.com/watch?v=Ftxo3SduXog


June 03, 2024

Jagged frontiers of AI in academic practice

Dominik Lukeš (University of Oxford) and Isabel Fischer (Warwick Business School) explored with a group of Developing Consulting Expertise students the integration of AI into academic practice.

We looked at the concept of the Jagged Frontier of AI capabilities, which illustrates the mismatch between what people expect AI to be good at and what it is actually good at. We asked how best to explore the jagged frontier and what to pay attention to along the way.

Dominik introduced his framework for evaluating AI tasks along five dimensions. For each task, drawing on our exploration of various AI tools, we asked five questions:

  1. AI capability mismatch: What is the level of mismatch against known AI capabilities and limitations?
  2. Hallucination / unpredictability problem: How manageable is hallucination here? Where is it likely to show up?
  3. Prompt sensitivity: Is the output sensitive to how the prompt is formulated?
  4. Context window dependence: Can the AI tool see everything you present it?
  5. Variability across tools and models: Does the tool and/or the model the tool uses matter for the task?
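The five questions above can be captured as a simple rubric. Here is a minimal sketch: the dimension names come from the workshop, but the 1-5 scale, the helper functions and the example ratings are invented for illustration.

```python
# Five workshop dimensions for evaluating an AI task (names from the session;
# the 1-5 scale and the example ratings below are illustrative assumptions).
DIMENSIONS = [
    "capability_mismatch",
    "hallucination_risk",
    "prompt_sensitivity",
    "context_window_dependence",
    "tool_model_variability",
]

def rate_task(name, **ratings):
    """Build a rubric entry; every dimension must be rated 1 (low) to 5 (high)."""
    missing = set(DIMENSIONS) - set(ratings)
    if missing:
        raise ValueError(f"unrated dimensions: {sorted(missing)}")
    return {"task": name, "ratings": ratings}

# Hypothetical group ratings for two common academic tasks.
summarise = rate_task("summarise a journal article",
                      capability_mismatch=2, hallucination_risk=3,
                      prompt_sensitivity=2, context_window_dependence=4,
                      tool_model_variability=3)
references = rate_task("produce a reference list",
                       capability_mismatch=5, hallucination_risk=5,
                       prompt_sensitivity=2, context_window_dependence=1,
                       tool_model_variability=4)

def riskiest(entries):
    """Rank tasks by total rating, highest (most problematic) first."""
    return sorted(entries, key=lambda e: sum(e["ratings"].values()), reverse=True)

for entry in riskiest([summarise, references]):
    print(entry["task"], sum(entry["ratings"].values()))
```

A table like the one the student groups produced is essentially this structure with one row per task; the value lies in arguing about the numbers, not in the numbers themselves.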

The students worked in groups and came up with the following comparison of various common academic tasks. The table represents their work.

How AI handles different academic tasks.

While the table is not the final word on any of these tasks, it was the subject of a much wider conversation. Perceptions along the five dimensions are very much dependent on how the specifics of the task are conceived. A different group, at a different time, may come up with a table that is quite different. It is the conversation and joint exploration that matters. You can read more about the workshop on AI integration and explore some of the presentation materials here.


May 13, 2024

AI on Campus: Students' Perspectives podcast

Have a look at this initiative at University of Limerick (Ireland) where students discuss the innovative ways that GenAI tools enhance their educational experience. Topics covered include neurodiversity, Universal Design for Learning, authentic assessment, and day-to-day student pressures.

AI on Campus: Students' Perspectives podcast


April 29, 2024

Insights from a faculty session on integrating AI into teaching practice

This blog is in two parts and was written by Dr. Neha Gupta and Dr. Susanne Beck, Assistant Professors, ISMA Group, Warwick Business School.

Part 1: Planning to deliver a faculty session to integrate AI into teaching practice (Date: 19th April 2024)

This blog post shares ideas under consideration in preparation for leading a faculty session on integrating AI into teaching and learning practices, in various forms, in a higher education setting. The session will be one of the parallel breakout sessions at Warwick Business School's annual Teaching and Learning Symposium 2023-24, where faculty from all groups (i.e. various disciplines) engage in peer dialogue, discussion, and activities around how learning and teaching needs in the higher education landscape are evolving. The broad aim of the session is to inspire discussion and ideas about how to use Generative AI (GenAI) and emerging technologies to foster relevant skills that enhance students' employability.

The leading faculty (co-authors of this blog) plan to use a pool of resources from the WIHEA AI Learning Circle, JISC and a Harvard AI pedagogy project to stimulate discussion on the use of AI practices across higher education. A demo of hands-on examples of AI tools and prompts used by colleagues from WBS and beyond, such as Ethan Mollick, will help attendees see how they can engage with AI in practice, for example in setting up assessment tasks with ChatGPT. A notable aspect of the session will be the demonstration of AI tools. For example, Cody AI, a web-based tool that uses an LLM to generate bots that address student queries efficiently, will be demonstrated using an existing knowledge base drawn from the student handbook to answer dissertation-related queries. This demonstration will exemplify how AI can streamline administrative tasks, such as responding to common student inquiries, thereby optimising staff resources and minimising response times.
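The key idea behind such a bot is that answers are grounded in a supplied knowledge base rather than in the model's general training. The sketch below illustrates that grounding step only; it is not Cody AI's implementation (real tools use an LLM over embedding-based retrieval), and the handbook entries are invented. Plain keyword overlap stands in for retrieval.

```python
# Illustrative sketch of knowledge-grounded Q&A in the spirit of the bot
# described above. NOT Cody AI's actual implementation: the handbook entries
# are made up, and keyword overlap stands in for LLM-based retrieval.
HANDBOOK = {
    "dissertation deadline": "Dissertations must be submitted by 1 September.",
    "dissertation word count": "The dissertation word limit is 10,000 words.",
    "supervisor meetings": "Students meet their supervisor at least once a month.",
}

def tokens(text):
    """Lower-case a string and split it into a set of words."""
    return set(text.lower().replace("?", "").split())

def retrieve(query):
    """Return the handbook topic whose key shares the most words with the query."""
    return max(HANDBOOK, key=lambda k: len(tokens(k) & tokens(query)))

def answer(query):
    """Answer strictly from the retrieved knowledge-base passage."""
    return HANDBOOK[retrieve(query)]

print(answer("What is the word count for the dissertation?"))
```

Because the answer is always a passage from the knowledge base, this design directly addresses the hallucination concern raised in Part 2 below: the bot can only say what the handbook says.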

As the job market evolves, students must be equipped with both domain-specific knowledge and technological proficiency. Integrating AI into teaching not only prepares students for future careers but also empowers them to engage with and leverage technology responsibly. The technology is out there, and students are going to be using AI tools in their future workplaces; in job interviews they will be asked about these tools and for their opinions of them. As educators, it is our responsibility to give students opportunities to experiment with these tools during their learning journey and to let them form their own experiences and opinions. Educators should keep an open mind about experimenting with emerging AI tools, which offer immense potential for enhancing teaching and assessment practices. Yet their implementation must be guided by ethical considerations and a commitment to fostering critical thinking skills among students.

Part 2: Insights from a faculty session to integrate AI into teaching practice (Date: 25th April 2024)

This blog post shares insights from the delivery of the faculty session at the WBS Teaching and Learning Symposium (the session planned in Part 1 above). The potential of AI tools such as Cody AI to transform student-support tasks by answering queries sparked discussion amongst colleagues about implementation across various educational contexts. For example, colleagues were concerned about:

  1. Copyrighted information being shared on public platforms, a challenge that can be managed if such tools are contractually brought into the university ecosystem through a more formal implementation in a university setting.
  2. Hallucinations, i.e. information made up by AI tools given the underlying LLM layers. In the demonstrated case of the Cody AI bot, however, the answers the bot gives are primarily based on the knowledge base provided by the user.

Nevertheless, colleagues agreed that committing to innovation and integrating such AI tools into teaching practice holds the promise both of handling student queries more efficiently and of enhancing students' learning experiences in higher education.

The breakout session further delved into using AI for assessments. Hands-on examples of prompts and outcomes were shared, exploring benefits for both educators and students. Educators can use GenAI tools to develop, more efficiently, creative assignments that require students to engage critically with AI-generated content. For instance, instead of opening a class with a recap exercise asking students what they remember, teachers can ask students to critically review a text about a given topic, identify (purposefully included) false claims, and share their thoughts with their neighbours. Beyond subject knowledge, this exercise sensitises students to the fact that even text that sounds good may be factually wrong. Both the text and the instructions can be generated by ChatGPT in an instant, making the exercise easily replicable and customisable for educators (see another example, asking students to write a critical essay, here).

For students, such assignments can help them develop skills such as critical thinking. But through the use of GenAI they may also be empowered to leverage individualised learning opportunities and stimulate their curiosity. For example, in his recent book, Ethan Mollick showcases a potential methodology to encourage students to experiment with tasks they have no experience in. For his entrepreneurship class, he asks students to take the development of a business idea a step further and come up with a website or even develop an app for their business – especially when they have no experience with coding. This opens a new space for students to experiment and become creative, another skill enhancing their future employability.

For students to thrive through the use of GenAI in the classroom, however, the discussions in the session emphasised two important boundary conditions. First, students need to be given the space to experiment with AI and other emerging technologies; providing that space includes rewarding ambitious ideas rather than penalising students when those ideas fail to persuade. Second, responsible usage of AI needs to find its place in the students' curriculum. Teachers cannot expect students to be fully knowledgeable about the most recent capabilities and risks of such a dynamic technology; schools and educators need to provide them with the necessary training.

At the end of the breakout session, the attending faculty were invited to join a discussion, imagining themselves either a) in the student's role: What might students' concerns be on receiving an assignment that asks them to use AI, and what steps could an educator take to address those concerns?; or b) in the teacher's role: What might your concerns be when designing an assignment that asks students to use AI, and what would teachers need in order to address those concerns? The discussion generated the key takeaways below, which underscored the importance of ethical AI integration, ongoing teacher professional development in AI literacy, and the need for a balance between technological advancement and human-centric pedagogy:

  1. Invest time in training ourselves first, then share AI-related knowledge with our students.
  2. Avoid falling into the AI trap: students still need step-by-step guidance on what is expected of them in their assessment task, with minimum ambiguity in the instructions.
  3. Incorporate AI as a step towards innovation by evolving our teaching practices: go beyond the AI tool and remain valuable as a knowledge expert, both in setting up assessments and in teaching content (see also Mollick & Mollick, 2024).
  4. Align teaching and learning tasks to learning outcomes; do not incorporate AI just for its own sake or out of perceived pressure. AI and emerging technologies should be treated as powerful means of achieving learning outcomes more effectively.

Feel free to reach out to Dr. Neha Gupta neha.gupta@wbs.ac.uk for more details about the session.


March 25, 2024

Collaborating with AI – Writing an assignment

Rob Liu-Preece is the Academic Technologist for PAIS, Sociology and Philosophy at the University of Warwick. He has also been an IELTS marker for Writing and Speaking for 20 years and previously taught Academic Skills to international students both in the UK and overseas for 20 years.

This is the second of two posts written by Rob about AI and the ANTF Project:

While many have expressed fears that the advent of AI may threaten future employment, others have emphasised that those able to work with AI may well be in the most secure forms of work. This assignment deploys a relatively undemanding writing task, on a topic of the student's choice, which students will hopefully be motivated to carry out and which should sustain their interest through the stages of writing, reading an AI answer, re-drafting and then commenting on it. I hope students will benefit from being unconstrained by academic demands as they compose a short piece of writing on a familiar topic, and that this will increase the learning take-aways they generate from completing the process. I have included an example to help students undertake the novel experience of writing with AI operating as a kind of writing assistant.

Aims of the Assignment

  • To provide a motivating topic for students to write about, review and redraft.
  • To practise collaborating on writing with AI.
  • To enhance students' understanding and awareness of significant features of their writing.
  • To improve their appreciation of, and ability to compare, human-created and AI-generated writing.
  • To practise collaborating with AI to produce an improved finished piece of writing.

Learning Objectives:

  • To increase understanding of the difference between a human-written and a machine-written response.
  • To improve the ability to collaborate with AI to enhance a written response.
  • To recognise the strengths and weaknesses of AI-generated content.

Instructions:

You need to complete 4 activities for this assignment.

  1. Choose a cultural artefact important to you. Write 500–600 words on why it's important and what insights it carries.
  2. Then write a prompt for ChatGPT and generate an answer.
  3. Re-write your original, incorporating new content gleaned from ChatGPT. Highlight the additional content in italics.
  4. Write 2–4 paragraphs comparing human-created and AI-generated writing.

March 18, 2024

AI Marking Criteria

Rob Liu-Preece is the Academic Technologist for PAIS, Sociology and Philosophy at the University of Warwick. He has also been an IELTS marker for Writing and Speaking for 20 years and previously taught Academic Skills to international students both in the UK and overseas for 20 years.

This is the first of two posts written by Rob about AI and the ANTF Project:

The sudden explosion in the availability and use of generative AI technology, especially by university students, has left education professionals playing catch-up. With ChatGPT gaining 1 million users in just 5 days and 100 million in 2 months, I feel educationalists have only just come up for air. As part of that process, I've written marking criteria aimed at assessing students' use of AI in completing assignments.

I think that in a learning environment characterised by uncertainty and disruption, students will benefit from an explicit expression of how the university wants them to use AI. Applying a marking framework like this could also lessen the need for tutors to follow a punitive academic-integrity route when dealing with misuse of AI, by opening up an alternative: a definition of 'poor academic practice' more closely aligned to AI. I also hope this type of approach will help steer the development of pedagogy around AI, providing a structure for ongoing debate and discussion. Lastly, having a set of criteria like this enables training and coaching on AI to be reverse-engineered for both students and tutors.

To address these issues, I've written a set of marking criteria based on the existing Politics and International Studies assignment marking criteria for undergraduate students. I would anticipate students appending a short report to their written assignments covering their use of AI. The framework is based around two main categories: appropriacy of use and awareness of key issues. The criteria are by no means a finished piece of work, are not necessarily fit for purpose, and haven't undergone any road testing or standardisation. Rather, they are designed to signal a possible route forward for those of us concerned and interested in shaping the take-up of AI in education. They do raise some questions in my own mind about whether such an approach is the right way to go. Should we be setting or defining an orthodoxy for AI use in quite such tight terms? Is the literacy-model approach implied here the correct one, or would it be better to focus on students' conscious use of AI?

Please note I used Google Gemini to help with the overall structure of this blog and for the statistics in the first paragraph.


January 22, 2024

AI: Empowering Inclusive Education

Have a read of this post by Nalina Brahim-Said, a student at the University of Bolton, looking at inclusive education in the UK curriculum and integrating AI for SEN and neurodiversity:

https://nationalcentreforai.jiscinvolve.org/wp/2024/01/09/ai-empowering-inclusive-education/


January 15, 2024

Generative AI: Misconceptions and misunderstandings

This blog post considers some of the misconceptions and misunderstandings around generative AI:

https://nationalcentreforai.jiscinvolve.org/wp/2023/12/01/debunking-myths-and-harnessing-solutions-navigating-generative-ai-in-education/


July 24, 2023

Using AI to evaluate film criticism – Lewis Beer

Introduction

Can ChatGPT evaluate the quality of insights in a student essay? In January 2023, Daisy Christodoulou published an article (Can ChatGPT mark writing?, No More Marking Blog) exploring this question. She tested ChatGPT using English essays, and found that while its grading and feedback were more or less aligned with her own, the AI was sometimes unable to spot fundamental problems in essays. Christodoulou offers some fascinating insights, but one thing she does not explore in any detail is the relationship between ChatGPT’s qualitative evaluation of an essay and the content of the essay itself.

In this post, I will share the results of my own brief experiment, in which I pasted two film reviews into ChatGPT and asked it to evaluate (and compare) the quality of insights in these reviews. My aim here was to use these texts as substitute ‘essays’, and consider how the AI-generated responses might help or hinder a marker in giving meaningful feedback.

The dialogues quoted from below were conducted on 23 March 2023, using the free ChatGPT 3.5. (I later repeated the dialogues with both this and ChatGPT 4.0, but found the responses from 23 March gave more substantial and interesting feedback. I will not discuss those later dialogues in this post.)

Review 1: Peter Bradshaw

To begin with, I asked ChatGPT to ‘Evaluate the quality of insight in the following review of the film Nomadland’, then pasted the text of Peter Bradshaw’s review of Nomadland from The Guardian. I wanted to see how it would respond to this vague request, in which I provided no criteria on which to base an evaluation.

The response did a good job of summarising the review using selected words from the text. For instance, at the end of its evaluation the chatbot said:

[Bradshaw] does offer a mild critique that the film could have been more explicit in its condemnation of the economic forces causing the characters' hardships. However, they still recognize the film's greatness and Chloé Zhao's exceptional filmmaking.

This is clearly pieced together from the following passage in Bradshaw’s review, in which I have highlighted the details used by ChatGPT:

[The film is] brought off with exceptional intelligence and style. Arguably it is not angry enough about the economic forces that are causing all this but it still looks superbly forthright. There is real greatness in Chloé Zhao’s film-making.

The paraphrasing techniques used here – transposing words like ‘exceptional’ and ‘greatness’ into slightly different contexts, and spelling out that ‘all this’ means ‘the characters’ hardships’ – are effective in ‘feedback’ terms, because they seem to indicate an understanding of the writer’s (i.e. student’s) argument. However, Bradshaw says the film is ‘not angry enough’, not that it is ‘not explicit enough’, and this may be a significant error.

In its final verdict on the review, ChatGPT said:

Overall, this review appears to be well-written and insightful, providing a thorough analysis of the film’s themes, characters, and style while also highlighting its strengths and potential weaknesses.

Without my having provided any criteria, the AI assumes that an insightful film review will explore themes, characters, and style, and that it will offer a balanced assessment of the film’s strengths and weaknesses.

More of ChatGPT’s ‘marking criteria’ emerged when I asked it to ‘Identify some ways in which the review could be improved’. It offered some valid points in response to this prompt. For example:

The review could have delved deeper into the film's technical aspects, such as its cinematography, editing, and sound design. While the review mentions the film's style and how it differs from traditional narrative-driven films, it could have gone into more detail about how the film achieves this style and what specific techniques the filmmakers used.

This is arguably an area where Bradshaw’s review falls short. Bradshaw does discuss how Nomadland differs from traditional narrative-driven films, and he does connect this with Zhao’s stylistic choices; in this part of his review, he might have discussed film-making technique in more detail. So the AI’s choice to structure feedback like this (‘While you did talk about x, you could have delved deeper into y...’) could be helpful for a student looking to improve their essay by building on their strengths.

On that note, here is another of ChatGPT’s constructive criticisms:

The review could have included a more critical evaluation of the film. While the review praises the film’s strengths, it could have offered a more balanced assessment that acknowledges both the film’s strengths and its weaknesses.

This picks up on the detail quoted above, from the AI’s initial evaluation, noting that Bradshaw’s critique of the film is ‘mild’. The AI has perhaps noticed that Bradshaw’s more negative language is limited to the very end of his review, and is couched in the word ‘arguably’. Again, if we imagine this as feedback being provided to a student, ChatGPT’s evaluations do a good job of mixing praise and criticism: ‘You balance your argument by acknowledging the film’s weaknesses, but you only do this briefly at the end – you could have included a more critical evaluation.’

Other responses, however, show ChatGPT’s limitations, and would constitute problematic essay feedback. For instance:

The review could have offered more specific examples of how the film explores its themes and characters. While the review mentions some of the film's themes, such as the impact of economic hardship on older Americans, it could have gone into more detail about how the film portrays these themes and how they are relevant to contemporary society.

This is not really a fair critique: Bradshaw does highlight specific examples of how the film explores ‘the impact of economic hardship on older Americans’, and he does allude to contemporary issues such as factory closures, the dominance of Amazon, and the importance of the tourist trade in this part of America:

...looking for seasonal work in bars, restaurants and – in this film – in a gigantic Amazon warehouse in Nevada, which takes the place of the agricultural work searched for by itinerant workers in stories such as The Grapes of Wrath.

Fern, a widow and former substitute teacher in Empire, Nevada – a town wiped off the map by a factory closure – who is forced into piling some possessions into a tatty van and heading off...

At times, the film looks like a tour of a deserted planet, especially when she heads out to the Badlands national park in South Dakota, where there is also tourist-trade work to be had.

ChatGPT also says:

The review could have provided more context for the film's production and reception. For example, the review could have mentioned the awards and critical acclaim that the film has received, or how it fits into Chloé Zhao's broader filmography.

Some of this is fair – the review was published after Nomadland’s Oscar success, so Bradshaw could have mentioned this – but it misses the contextual details Bradshaw includes about the film’s production:

Zhao was even allowed to film inside one of Amazon’s eerie service-industry cathedrals.

The movie is inspired by Jessica Bruder’s 2017 nonfiction book, Nomadland: Surviving America in the Twenty-First Century, and by the radical nomadist and anti-capitalist leader Bob Wells, who appears as himself.

The people she meets on the road are, mostly, real nomads who have vivid presences on screen.

As with the previous criticism, ChatGPT has not acknowledged key details of the review in its initial assessment, so its critique is not balanced: it is like a marker who blames a student for ‘not doing x’ when the student in fact spent several paragraphs on ‘x’. (Human markers sometimes do this, of course.)

Review 2: Beatrice Loayza

I then asked ChatGPT, ‘Is the following review of the film Nomadland more incisive than the previous one?’, and pasted the text of Beatrice Loayza’s review of Nomadland, from Sight & Sound. Again, I deliberately did not provide any assessment criteria. ChatGPT’s answer was ‘yes’, for several reasons – some valid, some less so. First of all, it said, Loayza ‘provides a detailed analysis of the film's themes and cinematography, as well as the performance of Frances McDormand’. This is fair, and picks up on one of the criticisms of Bradshaw cited above (namely his lack of attention to technical aspects). Loayza comments on specific camera techniques, naming the cinematographer and describing the light effects he achieves. She also does more than Bradshaw to explain why McDormand’s performance is so effective.

ChatGPT picks up on another of its own criticisms of Bradshaw by praising Loayza’s critical perspective on the film:

However, the review also criticizes the film's lack of force and clarity in its insights into labor in the 21st century and the exploitation of older Americans. The author points out that the film's depiction of workers exploited by Amazon feels too easygoing and questions the film's liberal naivete in addressing the conditions of the nomadic lifestyle. Overall, the review provides a more nuanced and thoughtful analysis of the film.

This draws upon the following passage in Loayza’s review; again, I have highlighted phrases that ChatGPT seems to have picked up on:

[The film’s] insights into labour in the 21st century, and the exploitation of an older generation of Americans, lack force and clarity. At the very beginning of the film, Fern is employed by Amazon’s CamperForce programme, which provides base wages and free parking space to seasonal workers in their 60s and 70s. In 2020, Amazon doubled its profits during a global pandemic, which makes Zhao’s easygoing depiction of workers exploited by the company feel rather toothless. That the film aims to capture the ways in which a kind of working-class American outsider struggles without fully addressing the conditions of that struggle casts over it the shadow of a questionable liberal naivete.

To re-cap:

  • In its initial assessment of Bradshaw's review, ChatGPT noted that his critique of the film was 'mild'.
  • In suggesting improvements, it built on this comment by recommending a more balanced approach.
  • In drawing a comparison with Loayza's review, it noted her more substantial version of Bradshaw's criticism.

At each stage, the AI appears to be drawing upon specific evidence from the texts, rather than just ‘hallucinating’ these evaluative comments.

Elsewhere in its comparison between Bradshaw and Loayza, however, ChatGPT did hallucinate some differences in order to justify its verdict. I will not cite these here, as this post is already very long, but the inaccuracies were of a similar kind to those in the summary of Bradshaw discussed in the previous section.

Final thoughts

If these film reviews were formative essays that I had to mark, I could use ChatGPT's feedback to offer legitimate praise and criticism, suggest improvements, and judge the relative merits of the two essays in relation to each other. However, I would also notice that ChatGPT misses important details in these texts and draws some unfounded contrasts between them.

In the course of this experiment, I tried several variations on the above prompts. Here are some things to note if you want to try a similar experiment yourself:

  • I fed the reviews into ChatGPT several times, and in a different order. When I asked it to make a comparative evaluation, it tended to prefer the second review (even if this was Bradshaw’s). When I asked if it could reverse its comparative evaluation (i.e. ‘Can you argue that the other review is more insightful than the first?’), its responses varied: sometimes it doubled down on its first opinion, sometimes it conceded that an alternative opinion could be justified. Again, the reasons given for these opinions ranged from ‘valid’ to ‘hallucinatory’.
  • This post demonstrates what ChatGPT is capable of in the hands of a technically ignorant, time-poor amateur like me, but by using the right prompts and follow-up prompts, it would no doubt be possible to collate more credible 'essay feedback', and then ask the AI to construct and present this in an appropriate way. Have a look at the other articles and resources linked to on the AI in Education Learning Circle webpage, try an experiment of your own, and share the results in the comments below. In particular, you might think about the learning outcomes and marking criteria specific to your discipline, or your module, and consider how you might train ChatGPT to use these in evaluating a piece of text.
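The order effect described in the first bullet can be checked systematically: run the same comparison in both orders and flag verdicts that flip. A minimal sketch, with a deliberately biased toy judge standing in for a real ChatGPT call:

```python
# Sketch of the order-swap check described above: run a pairwise comparison in
# both presentation orders and flag position bias. The toy judge below
# (which always prefers the text it sees second) stands in for ChatGPT.
def toy_judge(first, second):
    """Placeholder judge with deliberate position bias: the second text wins."""
    return second

def position_bias(judge, review_a, review_b):
    """Return True if the judge's winner depends on presentation order."""
    winner_ab = judge(review_a, review_b)
    winner_ba = judge(review_b, review_a)
    return winner_ab != winner_ba

print(position_bias(toy_judge, "Bradshaw review", "Loayza review"))
```

With a real model as the judge, a flipped verdict is a signal that the 'preference' is an artefact of ordering rather than a judgement about the texts.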

July 17, 2023

Who Uses AI, How, and When?

By Matthew Voice, Department of Applied Linguistics, University of Warwick

As I mentioned in my previous WJETT blog post, I have participated in a strand of the 'AI in Education' learning circle during the last few months. Over the course of our strand's meetings on the role of AI in formative feedback, our focus has primarily been on the nature of the emerging generative AI technology. Our research and conversations have a great deal to contribute with regard to what large language models mean for the future of formative feedback. However, given that these models are tools, it is worth reflecting on the user, too.

Our learning circle represents educators and students from a wide range of degree programmes, and motivations for engaging with higher education vary from student to student across all courses. Throughout our discussions, conversation has largely focused on the role of AI in improving formative feedback for students who are motivated by a desire for academic excellence. Given that students in this position are likely to engage with extracurricular activities within the university (e.g. joining the WIHEA as a student member, partaking in surveys of student experience or voluntary training workshops), their voices will perhaps be heard most clearly during our present conversations.

But the experiences of these students are not representative of our cohort as a whole, or indeed of all points across an individual’s university journey. Many students may view academic writing – either across their degrees, or for certain assignments or modules – as an obstacle to be overcome. In these cases, the objective of academic writing shifts away from excellence, and towards the production of work which will simply allow the student to attain a passing grade. Engagement with GenAI for these students might not mean refinement and improvement; it may simply be a resource for the fastest or most convenient means of content generation.

Integrating GenAI into teaching and formative feedback requires a recognition of this spectrum of experience and motivation. In my previous WJETT blog I recommended that future discussion and planning should consider reasonableness and realism when supporting students to use and think critically about GenAI. By this, I mean:

1) Reasonableness: Is the task being asked of the GenAI (e.g. to produce formative feedback on a draft assignment) one that it is capable of achieving?

2) Realism: Does our (educators, policy planners, researchers) understanding of engagement with GenAI reflect actual independent student use cases?

Assessing reasonableness through understanding what GenAI is capable of achieving will require ongoing review, and the development of support and training for staff in order to keep pace with developments. This, I think, has largely been the focus of the work done by the ‘AI in Education’ learning circle and the final report this group has produced. Going forward, we also need to consider how well we understand our students’ independent engagement with this sort of assistive technology. What tools and resources are they familiar with? How do they understand them? What do they do with them, and at what point(s) in their academic writing?

Grounding future policy and pedagogic resource development in relation to a realistic model of students’ use and understanding of GenAI will be a task as complex as anticipating the impact of future technological development in large language models. By acknowledging this, and undertaking this work, we best position ourselves to ensure that outputs and resources which arise from working groups such as ours will be meaningful to staff and students working across the university.

