All 150 entries tagged Teaching
August 07, 2023
Embedding compassionate micro skills of communication in Higher Education
This short paper reports on the implementation and evaluation of a strategy to embed compassionate communication in group work. The evaluation “demonstrated several benefits, including positive experiences of group work and increased group/social cohesion”. The authors give a more detailed account of the implementation and evaluation data collected in this recorded presentation (46 mins).
July 31, 2023
"Don’t think they know it” – Neha Gupta
My reflections on being interviewed for Asst. Professor as an internal candidate
This blog discusses my personal reflections on interviewing for an Assistant Professor role, which I recently applied for after a fixed-term teaching contract here at Warwick. I thought being an internal candidate would offer a unique advantage: I am well positioned and capable of showing my best side in an interview. Yet despite the teaching accolades and accreditation that made me an eligible candidate for interview, I felt uncomfortable and partly disappointed with my performance. I know I could have done better. I have therefore written up this experience in the hope that it might prepare fellow colleagues for similar endeavours in future.
Pros: Being an internal candidate for the Assistant Professor interview at the University has its fair share of advantages. On the plus side, I was already familiar with the institution’s culture, values, and expectations. This familiarity gave me a unique advantage in preparing for questions such as ‘Why would you like to work here?’ To answer this, I leveraged my existing knowledge of school initiatives, academic programmes, teaching and research goals, and project synergies. This allowed me to seamlessly integrate my own knowledge and experience into the existing framework and to show how I could contribute to the institution’s academic mission. Also, all the examples I shared in the interview, whether module feedback from students or course-related statistics, were already known to some of the faculty members on the panel, which perhaps gave me the confidence to bring across the points I was making.
Cons: Being an internal candidate also presents bigger challenges, which I had not expected and which left me perplexed when I faced them. One significant challenge is dealing with the confusion of both knowing and not knowing the interview panel. In past interviews the people sitting across the table were completely unknown to me, which gave me the opportunity to present my best self, articulating my skills and knowledge in the most compelling way to win the job. In contrast, the panel I faced here included senior faculty with whom I had been working for the last couple of years. There was therefore a tendency to hold back information which they might already know. I repeatedly asked myself: am I overdoing it by telling them what some of them already know? At the same time, I told myself that this interview should be treated in isolation from the positive performance I have evidenced through my work here at the University. I was constantly dealing with this confusion during the interview, and as a result I did not mention that I hold an FHEA or that I am a WIHEA fellow, both of which are very relevant points for the interview. These should have come across even though they were already in my CV and application letter, and I should have steered my answers to some of the academic questions in a way that linked my qualifications and accreditations to show that I am a good fit for the job advertised.
When it comes to interviews, I think it is important to approach the panel as unknown individuals, just as in any other interview. Although it may be difficult, this mindset is crucial for performing well. Treat the panel as if they are unaware of your capabilities, and have a prepared plan for addressing any confusion that may arise in your mind during the interview. By doing so, you can excel regardless of whether the panel is familiar to you or not. Last but not least, it is vital to have ready a set of questions you can ask the panel (even if you are already aware of the initiatives your institution is taking). These could perhaps relate to the higher education sector as a whole and not just your own institution.
July 24, 2023
Using AI to evaluate film criticism – Lewis Beer
Introduction
Can ChatGPT evaluate the quality of insights in a student essay? In January 2023, Daisy Christodoulou published an article (Can ChatGPT mark writing?, No More Marking Blog) exploring this question. She tested ChatGPT using English essays, and found that while its grading and feedback were more or less aligned with her own, the AI was sometimes unable to spot fundamental problems in essays. Christodoulou offers some fascinating insights, but one thing she does not explore in any detail is the relationship between ChatGPT’s qualitative evaluation of an essay and the content of the essay itself.
In this post, I will share the results of my own brief experiment, in which I pasted two film reviews into ChatGPT and asked it to evaluate (and compare) the quality of insights in these reviews. My aim here was to use these texts as substitute ‘essays’, and consider how the AI-generated responses might help or hinder a marker in giving meaningful feedback.
The dialogues quoted below were conducted on 23 March 2023, using the free ChatGPT 3.5. (I later repeated the dialogues with both this and ChatGPT 4.0, but found that the responses from 23 March gave more substantial and interesting feedback. I will not discuss those later dialogues in this post.)
Review 1: Peter Bradshaw
To begin with, I asked ChatGPT to ‘Evaluate the quality of insight in the following review of the film Nomadland’, then pasted the text of Peter Bradshaw’s review of Nomadland from The Guardian. I wanted to see how it would respond to this vague request, in which I provided no criteria on which to base an evaluation.
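(If you wanted to reproduce this step as a script rather than through the chat interface, a minimal sketch using the OpenAI Python client might look like the following. This is my assumption about how the same prompt could be sent via the API; the original dialogue was conducted in the free ChatGPT web interface, and the model name and file path below are placeholders.)

```python
# A minimal sketch (not the method used in this post): sending the same
# open-ended evaluation prompt via the OpenAI API instead of the web interface.
# Assumes the openai Python package (v1.x) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder path: the full text of the review being evaluated.
review_text = open("bradshaw_nomadland_review.txt").read()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # stands in for the free ChatGPT 3.5 used here
    messages=[{
        "role": "user",
        "content": "Evaluate the quality of insight in the following review "
                   "of the film Nomadland:\n\n" + review_text,
    }],
)

print(response.choices[0].message.content)
```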
The response did a good job of summarising the review using selected words from the text. For instance, at the end of its evaluation the chatbot said:
[Bradshaw] does offer a mild critique that the film could have been more explicit in its condemnation of the economic forces causing the characters' hardships. However, they still recognize the film's greatness and Chloé Zhao's exceptional filmmaking.
This is clearly pieced together from the following passage in Bradshaw’s review, in which I have highlighted the details used by ChatGPT:
[The film is] brought off with exceptional intelligence and style. Arguably it is not angry enough about the economic forces that are causing all this but it still looks superbly forthright. There is real greatness in Chloé Zhao’s film-making.
The paraphrasing techniques used here – transposing words like ‘exceptional’ and ‘greatness’ into slightly different contexts, and spelling out that ‘all this’ means ‘the characters’ hardships’ – are effective in ‘feedback’ terms, because they seem to indicate an understanding of the writer’s (i.e. student’s) argument. However, Bradshaw says the film is ‘not angry enough’, not that it is ‘not explicit enough’, and this may be a significant error.
In its final verdict on the review, ChatGPT said:
Overall, this review appears to be well-written and insightful, providing a thorough analysis of the film’s themes, characters, and style while also highlighting its strengths and potential weaknesses.
Without my having provided any criteria, the AI assumes that an insightful film review will explore themes, characters, and style, and that it will offer a balanced assessment of the film’s strengths and weaknesses.
More of ChatGPT’s ‘marking criteria’ emerged when I asked it to ‘Identify some ways in which the review could be improved’. It offered some valid points in response to this prompt. For example:
The review could have delved deeper into the film's technical aspects, such as its cinematography, editing, and sound design. While the review mentions the film's style and how it differs from traditional narrative-driven films, it could have gone into more detail about how the film achieves this style and what specific techniques the filmmakers used.
This is arguably an area where Bradshaw’s review falls short. Bradshaw does discuss how Nomadland differs from traditional narrative-driven films, and he does connect this with Zhao’s stylistic choices; in this part of his review, he might have discussed film-making technique in more detail. So the AI’s choice to structure feedback like this (‘While you did talk about x, you could have delved deeper into y...’) could be helpful for a student looking to improve their essay by building on their strengths.
On that note, here is another of ChatGPT’s constructive criticisms:
The review could have included a more critical evaluation of the film. While the review praises the film’s strengths, it could have offered a more balanced assessment that acknowledges both the film’s strengths and its weaknesses.
This picks up on the detail quoted above, from the AI’s initial evaluation, noting that Bradshaw’s critique of the film is ‘mild’. The AI has perhaps noticed that Bradshaw’s more negative language is limited to the very end of his review, and is couched in the word ‘arguably’. Again, if we imagine this as feedback being provided to a student, ChatGPT’s evaluations do a good job of mixing praise and criticism: ‘You balance your argument by acknowledging the film’s weaknesses, but you only do this briefly at the end – you could have included a more critical evaluation.’
Other responses, however, show ChatGPT’s limitations, and would constitute problematic essay feedback. For instance:
The review could have offered more specific examples of how the film explores its themes and characters. While the review mentions some of the film's themes, such as the impact of economic hardship on older Americans, it could have gone into more detail about how the film portrays these themes and how they are relevant to contemporary society.
This is not really a fair critique: Bradshaw does highlight specific examples of how the film explores ‘the impact of economic hardship on older Americans’, and he does allude to contemporary issues such as factory closures, the dominance of Amazon, and the importance of the tourist trade in this part of America:
...looking for seasonal work in bars, restaurants and – in this film – in a gigantic Amazon warehouse in Nevada, which takes the place of the agricultural work searched for by itinerant workers in stories such as The Grapes of Wrath.
Fern, a widow and former substitute teacher in Empire, Nevada – a town wiped off the map by a factory closure – who is forced into piling some possessions into a tatty van and heading off...
At times, the film looks like a tour of a deserted planet, especially when she heads out to the Badlands national park in South Dakota, where there is also tourist-trade work to be had.
ChatGPT also says:
The review could have provided more context for the film's production and reception. For example, the review could have mentioned the awards and critical acclaim that the film has received, or how it fits into Chloé Zhao's broader filmography.
Some of this is fair – the review was published after Nomadland’s Oscar success, so Bradshaw could have mentioned this – but it misses the contextual details Bradshaw includes about the film’s production:
Zhao was even allowed to film inside one of Amazon’s eerie service-industry cathedrals.
The movie is inspired by Jessica Bruder’s 2017 nonfiction book, Nomadland: Surviving America in the Twenty-First Century, and by the radical nomadist and anti-capitalist leader Bob Wells, who appears as himself.
The people she meets on the road are, mostly, real nomads who have vivid presences on screen.
As with the previous criticism, ChatGPT has not acknowledged key details of the review in its initial assessment, so its critique is not balanced: it is like a marker who blames a student for ‘not doing x’ when the student in fact spent several paragraphs on ‘x’. (Human markers sometimes do this, of course.)
Review 2: Beatrice Loayza
I then asked ChatGPT, ‘Is the following review of the film Nomadland more incisive than the previous one?’, and pasted the text of Beatrice Loayza’s review of Nomadland, from Sight & Sound. Again, I deliberately did not provide any assessment criteria. ChatGPT’s answer was ‘yes’, for several reasons – some valid, some less so. First of all, it said, Loayza ‘provides a detailed analysis of the film's themes and cinematography, as well as the performance of Frances McDormand’. This is fair, and picks up on one of the criticisms of Bradshaw cited above (namely his lack of attention to technical aspects). Loayza comments on specific camera techniques, naming the cinematographer and describing the light effects he achieves. She also does more than Bradshaw to explain why McDormand’s performance is so effective.
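(Again purely as an illustration, and not how these dialogues were actually produced: if you scripted this via the API, the comparative follow-up would need the earlier turns resent as conversation history, since the web interface keeps that context for you. The model name and file paths below are placeholders.)

```python
# Sketch of the comparative follow-up as a scripted, multi-turn exchange.
# Assumes the openai Python package (v1.x); paths and model name are placeholders.
from openai import OpenAI

client = OpenAI()

bradshaw_review = open("bradshaw_nomadland_review.txt").read()
loayza_review = open("loayza_nomadland_review.txt").read()

history = [{
    "role": "user",
    "content": "Evaluate the quality of insight in the following review of the "
               "film Nomadland:\n\n" + bradshaw_review,
}]

first = client.chat.completions.create(model="gpt-3.5-turbo", messages=history)
history.append({"role": "assistant", "content": first.choices[0].message.content})

# The comparative prompt, again with no assessment criteria supplied.
history.append({
    "role": "user",
    "content": "Is the following review of the film Nomadland more incisive "
               "than the previous one?\n\n" + loayza_review,
})

second = client.chat.completions.create(model="gpt-3.5-turbo", messages=history)
print(second.choices[0].message.content)
```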
ChatGPT picks up on another of its own criticisms of Bradshaw by praising Loayza’s critical perspective on the film:
However, the review also criticizes the film's lack of force and clarity in its insights into labor in the 21st century and the exploitation of older Americans. The author points out that the film's depiction of workers exploited by Amazon feels too easygoing and questions the film's liberal naivete in addressing the conditions of the nomadic lifestyle. Overall, the review provides a more nuanced and thoughtful analysis of the film.
This draws upon the following passage in Loayza’s review; again, I have highlighted phrases that ChatGPT seems to have picked up on:
[The film’s] insights into labour in the 21st century, and the exploitation of an older generation of Americans, lack force and clarity. At the very beginning of the film, Fern is employed by Amazon’s CamperForce programme, which provides base wages and free parking space to seasonal workers in their 60s and 70s. In 2020, Amazon doubled its profits during a global pandemic, which makes Zhao’s easygoing depiction of workers exploited by the company feel rather toothless. That the film aims to capture the ways in which a kind of working-class American outsider struggles without fully addressing the conditions of that struggle casts over it the shadow of a questionable liberal naivete.
To re-cap:
- In its initial assessment of Bradshaw’s review, ChatGPT noted that his critique of the film was ‘mild’
- In suggesting improvements, it built on this comment by recommending a more balanced approach
- And in drawing a comparison with Loayza’s review, it noted that she makes a more substantial version of Bradshaw’s criticism.
At each stage, the AI appears to be drawing upon specific evidence from the texts, rather than just ‘hallucinating’ these evaluative comments.
Elsewhere in its comparison between Bradshaw and Loayza, however, ChatGPT did hallucinate some differences in order to justify its verdict. I will not cite these here, as this post is already very long, but the inaccuracies were of a similar kind to those in the summary of Bradshaw discussed in the previous section.
Final thoughts
If these film reviews were formative essays that I had to mark, I could use ChatGPT’s feedback to offer legitimate praise and criticism, suggest improvements, and judge the relative merits of the two essays in relation to each other. However, I would also notice that ChatGPT misses important details in these texts and draws some unfounded contrasts between them.
In the course of this experiment, I tried several variations on the above prompts. Here are some things to note if you want to try a similar experiment yourself:
- I fed the reviews into ChatGPT several times, and in different orders. When I asked it to make a comparative evaluation, it tended to prefer the second review (even if this was Bradshaw’s). When I asked if it could reverse its comparative evaluation (i.e. ‘Can you argue that the other review is more insightful than the first?’), its responses varied: sometimes it doubled down on its first opinion, sometimes it conceded that an alternative opinion could be justified. Again, the reasons given for these opinions ranged from ‘valid’ to ‘hallucinatory’.
- This post demonstrates what ChatGPT is capable of in the hands of a technically ignorant, time-poor amateur like me, but by using the right prompts and follow-up prompts, it would no doubt be possible to collate more credible ‘essay feedback’, and then ask the AI to present and structure this in an appropriate way. Have a look at the other articles and resources linked to on the AI in Education Learning Circle webpage, try an experiment of your own, and share the results in the comments below. In particular, you might think about the learning outcomes and marking criteria specific to your discipline, or your module, and consider how you might train ChatGPT to use these in evaluating a piece of text (one rough way of doing this is sketched below).
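By way of illustration only, here is a hedged sketch of how discipline-specific marking criteria could be supplied to the model as a system message before pasting in a student text. The criteria, file path and model name are invented placeholders, not anything drawn from a real module or tested in this experiment.

```python
# Sketch: supplying marking criteria as a system message before asking for feedback.
# Assumes the openai Python package (v1.x); the criteria below are invented examples.
from openai import OpenAI

client = OpenAI()

marking_criteria = """
- Depth and originality of insight into the film discussed
- Use of specific evidence (scenes, techniques, quotations) to support claims
- Balance: strengths and weaknesses are both acknowledged
- Awareness of relevant context (production, reception, critical debate)
"""

essay_text = open("student_draft.txt").read()  # placeholder path

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "system",
            "content": "You are a marker giving formative feedback on a piece of "
                       "film criticism. Evaluate it only against these criteria:\n"
                       + marking_criteria +
                       "\nFor each criterion, quote the passage of the text your "
                       "comment is based on.",
        },
        {"role": "user", "content": essay_text},
    ],
)

print(response.choices[0].message.content)
```

Asking the model to quote the passage each comment is based on is one possible (though far from foolproof) way of making the kind of unsupported critique described above easier to spot.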
July 17, 2023
Who Uses AI, How, and When?
By Matthew Voice, Department of Applied Linguistics, University of Warwick
As I mentioned in my previous WJETT blog post, I have participated in a strand of the ‘AI in Education’ learning circle during the last few months. Over the course of our strand’s meetings on the role of AI in formative feedback, our focus has primarily been on the nature of the emerging generative AI technology. Our research and conversation have a great deal to contribute with regard to what large language models mean for the future of formative feedback. However, given that these models are tools, it is worth reflecting on the user, too.
Our learning circle represents educators and students from a wide range of degree programmes, and motivations for engaging with higher education will vary from student to student across all courses. Throughout our discussions, conversation has largely focused on the role of AI in the improvement of formative feedback for students who are motivated by a desire for academic excellence. Given that students in this position are likely to be motivated to engage with extracurricular activities within the university (e.g. joining the WIHEA as a student member, or taking part in surveys of student experience or voluntary training workshops), their voices will perhaps be heard most clearly during our present conversations.
But the experiences of these students are not representative of our cohort as a whole, or indeed of all points across an individual’s university journey. Many students may view academic writing – either across their degrees, or for certain assignments or modules – as an obstacle to be overcome. In these cases, the objective of academic writing shifts away from excellence, and towards the production of work which will simply allow the student to attain a passing grade. Engagement with GenAI for these students might not mean refinement and improvement; it may simply be a resource for the fastest or most convenient means of content generation.
Integrating GenAI into teaching and formative feedback requires a recognition of this spectrum of experience and motivation. In my previous WJETT blog I recommended that future discussion and planning should consider reasonableness and realism when supporting students to use, and think critically about, GenAI. By this, I mean:
1) Reasonableness: Is the task being asked of the GenAI (e.g. to produce formative feedback on a draft assignment) one that it is capable of achieving?
2) Realism: Does our (educators, policy planners, researchers) understanding of engagement with GenAI reflect actual independent student use cases?
Assessing reasonableness through understanding what GenAI is capable of achieving will require ongoing review, and the development of support and training for staff in order to keep pace with developments. This, I think, has largely been the focus of the work done by the ‘AI in Education’ learning circle and the final report this group has produced. Going forward, we also need to consider how well we understand our students’ independent engagement with this sort of assistive technology. What tools and resources are they familiar with? How do they understand them? What do they do with them, and at what point(s) in their academic writing?
Grounding future policy and pedagogic resource development in a realistic model of students’ use and understanding of GenAI will be a task as complex as anticipating the impact of future technological development in large language models. By acknowledging this, and undertaking this work, we best position ourselves to ensure that the outputs and resources which arise from working groups such as ours will be meaningful to staff and students working across the university.
July 03, 2023
Embedding and assessing compassion in the university curriculum
In this short video (21 mins), Theo Gilbert explains the principles and rationales for rooting the science of compassion into the 21st century degree programme. It is the first of a series that aims to support colleagues in learning more about the research on this topic and how, in simple and practical ways, you can apply this research to your own practices.
You can access the full playlist of short videos in this series on this webpage.
June 22, 2023
Rethinking authentic assessment: work, well-being, and society by Jan McArthur
In this 2022 paper, Jan McArthur builds on “existing work on authentic assessment to develop a more holistic and richer concept that will be more beneficial to individual students and to the larger society of which they are part.” McArthur presents three key principles to help us rethink and broaden the concept of authentic assessment: 1) From the real world/world of work to society; 2) From task performance to why we value the task; 3) From the status quo of the real world/world of work to transforming society. If you are short on time, you might want to jump straight to page 8, where discussion of these three principles begins:
https://link.springer.com/article/10.1007/s10734-022-00822-y
June 12, 2023
Building knowledge on the pedagogy of using generative AI in the classroom and in assessments
By Matt Lucas and Isabel Fischer (WBS)
Matt Lucas is a Senior Product Manager at IBM, and Isabel Fischer is an Associate Professor (Reader) of Information Systems at WBS (Warwick Business School). Isabel also co-convenes an IATL (Institute for Advanced Teaching and Learning) module. This blog represents their own opinions and not those of their employers.
After two terms of including generative AI (GenAI) in my teaching and assessments, I am still building my knowledge and understanding of the pedagogy of using GenAI. Students seem to enjoy playing around with music and art tools (e.g. DALL·E 2 and Midjourney), creating images and memes, all of which are user-friendly on big screens and for huddling around one laptop as part of teamwork. Text outputs seem less intuitive for ‘collective use’: there does not yet seem to be an app that allows hands-on collaborative refinement of prompts (similar, for example, to students working on the same Google doc), and displaying a string of words on a shared screen clearly does not have the same entertainment value for students as ‘customers and consumers’.
Beyond the lack of entertainment value, I also found that students seem to appreciate word-based GenAI (e.g. ChatGPT and Bard) as ‘their secret tool’, at their disposal and theirs to use. They appreciate it when lecturers show them the exact prompts they can copy to make the most of ‘their secret tool’. They seem less keen on having to be transparent about using the tool themselves and having to justify and critically reflect on that usage: not only does this mean additional work, but, more importantly, they dislike the thought of the tool’s hidden power being exposed. They appear even less keen for lecturers to use GenAI for lesson preparation and to be transparent about it, because otherwise, what is the ‘perceived added value’ of attending the lecture if they could simply have consulted GenAI themselves?
With this in mind, what are the skills that students can learn from using GenAI in the classroom and in assessments?
In the attached blog, Matt Lucas and I suggest that by including innovative aspects in assessments, students can learn and practise four skills that are relevant for their future careers in a world disrupted by AI:
- Cognitive flexibility, abstraction and simplification
- Curiosity, including prompt engineering
- Personalisation, reflection and empathising to adapt to different audiences
- Critical evaluation of AI
For each of the four skills, the attached blog explains the relevance for student learning with some illustrative examples, before outlining how we have incorporated these skills into students’ assessments in the recent term.
May 22, 2023
Spotlight collation: the art of collegiality and why it matters
This collection from THE Campus offers resources on nurturing a spirit of companionship and cooperation between colleagues within the institution and beyond. Take a few moments to scroll through the various categories of resources, including friendship and teamwork, communities of practice, leadership and supervision, and bookmark one or two pieces to return to later.
May 15, 2023
Reflections on adaptive teaching – Asimina Georgakopoulou
My initial perception of “adaptive teaching” was that it was synonymous with differentiation, a term which is still used in teaching publications (DfE, 2021, T.S.5, p.11) and which the Times Educational Supplement has used to describe the practice of “putting the student first” (Amass, 2021). As the two terms are often used interchangeably, I began my practice unsure about which approach I was truly implementing. A broad understanding of both terms suggests that differentiation involves assigning certain needs to students while planning, assuming an objective can only be met in a certain way, whereas adaptive teaching involves adjusting to pupils’ progress by providing scaffolding or challenge to support achievement of a unified objective in a flexible way (Deunk et al., 2018, pp.31-54). After focused conversations with my placement colleagues, I was intrigued by the general consensus that the main difference between the two concepts in practice centres on the teacher’s understanding of “high expectations”.
I struggled with this concept originally, as my understanding lacked practical depth. During English writing lessons, I was expected to scribe for certain children after prompting them to articulate their ideas. I found this problematic, as it assumed that these children could express themselves orally and only struggled with writing. I understood that this was a genuine effort to avoid differentiating by task and to communicate to the children that they were capable of completing the same task. In reality, the children were not expressing any ideas, and this resulted in them copying from the board. Upon questioning them, I discovered that they still perceived their task as different, because they were not doing it independently.
Discussing this with my teacher, we ascertained that high expectations could be more effectively communicated by expecting all children to work independently and regularly changing support groups (CCF, 5.20). Although it seems like the same few pupils require constant small group support, I now realise that adaptive teaching is an approach meant to broaden our understanding of how to provide support. When the children were given a word mat that indicated meaning with symbols, they were able to start expressing their understanding independently, with little guidance. While other children did not have this support, all children were working independently and were given equal attention. I observed the positive psychological impact on students who felt that we were raising prior expectations.
As Coe et al. (2020, p.6) highlight, feelings of competence and autonomy are pivotal in promoting “learner motivation”. Additionally, they point out that “progressing…from structured to more independent learning” aids pupils to activate “hard thinking”. Adaptive teaching has the potential to lift children from the cycle of constantly requiring support to superficially meet an outcome that will not progress their understanding and will only lead to them requiring more support in future.
Although I regret not taking the initiative sooner, as I will not be able to observe long-term improvements in outcomes, my developed understanding of high expectations and adaptive teaching will have strong implications for my next placement, as I have grown in confidence and resourcefulness in supporting children appropriately. This is a point in my teaching where the WTV of creativity will greatly support development. By finding creative ways to scaffold learning, it is possible to communicate high expectations and create a supportive learning environment.
References
Amass, H. (2021) “Differentiation: the dos and don’ts,” Tes Global Ltd, 16 April.
Coe, R. et al. (2020) Great teaching toolkit: Evidence Review, Great Teaching Toolkit. Cambridge Assessment International Education. Available at: https://assets.website-files.com/5ee28729f7b4a5fa99bef2b3/5ee9f507021911ae35ac6c4d_EBE_GTT_EVIDENCE%20REVIEW_DIGITAL.pdf?utm_referrer=https%3A%2F%2Fwww.greatteaching.com%2F (Accessed: April 14, 2023).
DfE (2019) ITT Core Content Framework. Available at: https://www.gov.uk/government/publications/initial-teacher-training-itt-core-content-framework
Department for Education (2011) Teachers' Standards. Available at: https://www.gov.uk/government/publications/teachers-standards
Deunk, M., Smale-Jacobse, A., de Boer, H., Doolaard, S. and Bosker, R. (2018) ‘Effective differentiation practices: A systematic review and meta-analysis of studies on the cognitive effects of differentiation practices in primary education’, Educational Research Review (24), pp.31-54.
May 09, 2023
Reflections on planning – Phoebe Thompson
My understanding of the principles of effective planning has developed considerably, because at the start of my placement I was extremely naïve and did not think planning would be hard. I assumed I could write a few notes on the lesson plan and the pupils would come away with a deep understanding of the learning. I believed that because my classroom teacher did not use formal plans, I could do the same. I found that effective planning is a difficult skill, and that for a lesson to have purpose it must have certain elements. Ashcraft (2014) makes clear that to be an effective teacher you must have effective lesson plans. For example, my first lesson plan lacked substance and I found myself getting stressed when teaching. The stress caused by a lack of detailed planning did not make me an effective teacher, as I started to panic that the learning was not clear. However, it has been argued that detailed daily lesson planning is a ‘box-ticking’ activity that adds to teachers’ workload (Teacher Workload Review Group, 2016). I would argue, though, that this applies to experienced teachers.
My planning has been a journey. At the beginning of the placement, I put more emphasis on the activity than on the learning. This became evident at the end of the lesson when I asked the class questions about the learning and they could not answer. I swiftly changed this and put more effort into making sure my lesson objective was clear to the class and that the activities reflected the lesson objectives.
I was very fortunate to have a supportive class teacher who encouraged me to take risks when teaching. Hattie (2012) argues that the most effective planning happens when teachers support each other and discuss what is most important to teach and the impact of their teaching on their pupils. In our shared PPA time the class teacher would suggest ways that I could adapt my teaching for all needs in the class. However, I am aware that this might not be the case at every placement. For example, Mutton et al. (2011) mention that it can be a struggle for student teachers to teach other teachers’ classes. Yet, in the future, I would like to work with my class teacher or mentor to plan low-threshold, high-ceiling lessons so that all needs are met.
Throughout my placement I struggled with my workload. As mentioned previously, I quickly understood the need to plan thought-provoking lessons in which all pupils’ learning flourished. I was encouraged to use the schemes that the school subscribed to and the class teacher’s previous resources. I found the schemes extremely helpful, but I had to treat them as a guide because they were generic and I had to adapt them to the different needs of the pupils in my class. However, I did feel some pressure to use the class teacher’s resources in certain lessons. In the future, I want to be confident enough to use my own creativity to make resources, using the class teacher’s as a guide in the same way I did with the scheme.
References
Ashcraft, N. (2014) Lesson Planning. Alexandria, VA: TESOL Press.
Hattie, J. (2012) Visible Learning for Teachers: Maximising Impact on Learning. Routledge, pp. 67-74.
Mutton, T., Hagger, H. and Burn, K. (2011) “Learning to plan, planning to learn: The developing expertise of beginning teachers,” Teachers and Teaching, 17(4), pp. 399–416. Available at: https://doi.org/10.1080/13540602.2011.580516.
Teacher Workload Review Group (2016) Eliminating unnecessary workload around planning and teaching resources: Report of the Independent Teacher Workload Review Group. Available at: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/511257/Eliminating-unnecessary-workload-around-planning-and-teaching-resources.pdf (Accessed: 20 December 2022).