All 136 entries tagged Teaching


March 13, 2023

Recording and Sharing Mentoring Best Practice

by Jonty Leese and Anna Donnelly

What we did:

In Initial Teacher Education (ITE), the role of coaching a student teacher is crucial. For new and established mentors and trainees alike, seeing good examples of coaching is a powerful way to understand and experience the process. Inspired by Knight’s coaching model (Knight, 2021), and combined with the unique lens of the Warwick Teacher Values (Centre for Teacher Education, 2022), we built a framework to support understanding of the equality of roles within effective coaching. This process supports university tutors as well as school-based mentors.

To facilitate this, we recorded and edited post-lesson mentor meetings from a variety of phases (KS1–KS4) in different contexts to build a clearer understanding of effective mentoring with all current stakeholders.

These were real staff and students, so it was an authentic learning conversation and formed part of their regular weekly meeting.

We processed these to add transcripts for accessibility (W3C, 2018) and used software (otter.ai) which created clickable audio and text versions of the conversations.
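The transcript-to-caption step can be sketched as follows. This is a minimal illustration of the W3C WebVTT caption format that most video players accept, not the actual workflow we used; the segment timings and dialogue below are invented:

```python
# Minimal sketch: turning timed transcript segments into a WebVTT caption
# file. Segment data here is hypothetical; a real workflow would export
# timings from the transcription tool.

def to_timestamp(seconds: float) -> str:
    """Format seconds as a WebVTT timestamp (HH:MM:SS.mmm)."""
    hours, rem = divmod(seconds, 3600)
    minutes, secs = divmod(rem, 60)
    return f"{int(hours):02d}:{int(minutes):02d}:{secs:06.3f}"

def to_webvtt(segments) -> str:
    """segments: list of (start_seconds, end_seconds, text) tuples."""
    lines = ["WEBVTT", ""]
    for start, end, text in segments:
        lines.append(f"{to_timestamp(start)} --> {to_timestamp(end)}")
        lines.append(text)
        lines.append("")
    return "\n".join(lines)

segments = [
    (0.0, 4.2, "Mentor: How do you feel the lesson went?"),
    (4.2, 9.8, "Trainee: The starter worked well, but the transition was slow."),
]
print(to_webvtt(segments))
```

A file produced this way can be uploaded alongside the video so that captions stay synchronised with the recording.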

These were then shared on a SiteBuilder site, broken down into different categories for mentors depending on whether they were new to the role or more experienced. See image below:

Screen shot of mentoring website buttons

How it will be integrated into future practice:

These have been embedded into current practice for our online PGCE and will be shared through mentor training, with these as a support resource going forward for all routes.

These can be utilised as an “on demand” resource for mentors and trainees.

This forms the basis of a growing body of real-world examples, which will be expanded with future recordings to ensure that a range of contexts and examples reflecting the unique role of mentorship is captured.

Lessons Learnt:

It’s very time-consuming to record, edit and upload video; even with an organised workflow, it took longer than anticipated.

AI transcription is fallible; proofreading of automated transcripts is necessary to ensure the accuracy and appropriateness of what has been transcribed.

It’s our responsibility to build time into training plans, rather than expecting these to be additional to core mentor requirements. This could form part of the 20 hours of mentor training that all mentors are mandated to complete from the start of the 2024/25 academic year.

Recording and disseminating a mentoring conversation, which is intrinsically private, provides a powerful window into another person’s world. This can lift practitioners out of a silo mindset and open them up to practical examples of different approaches within the Warwick framework, giving them agency to transform their practice and to reflect upon their own methodologies.

The recordings do not need high production values to be valuable.

Watching a video as a standalone resource, without guided reflection and dialogue, may be a missed opportunity; watching, pausing and reflecting as part of a dialogue can stimulate rich conversations (Alexander, 2021). Using the videos alongside pertinent questions to evoke reflection and transformation of practice is the next step.

References:

Alexander, R. (2021) ‘Dialogic teaching and the study of classroom talk’.

Centre for Teacher Education (2022) Developing an ambitious ITE Curriculum. Available at: https://warwick.ac.uk/fac/soc/cte/about/ite-curriculum/ (Accessed: 3 March 2023).

Knight, J. (2021) The Definitive Guide to Instructional Coaching: Seven Factors for Success. Available at: https://www.amazon.co.uk/Definitive-Guide-Instructional-Coaching-Factors/dp/141663066X (Accessed: 7 March 2022).

W3C (2018) Web Content Accessibility Guidelines (WCAG) Overview | Web Accessibility Initiative (WAI) | W3C, Web Accessibility Initiative. Available at: https://www.w3.org/WAI/standards-guidelines/wcag/ (Accessed: 2 January 2019).


March 06, 2023

On ‘Opportunities of AI in Higher Education’ by DALL.E and ChatGPT

Prologue and epilogue by Isabel Fischer (Warwick Business School)

Image illustrating artificial intelligence

Since the creation of our task and finish groups on evaluating and shaping the opportunities of AI in Higher Education (for further information see here: https://blogs.warwick.ac.uk/wjett/entry/ai_for_good/) we have had many very insightful ‘human to human’ discussions. The aim of this blog is, however, to show what ‘machines’ draw and write on the topic, to illustrate both the strengths and the weaknesses of generative AI.

The images at the start of the blog represent DALL.E’s take on the topic. Having always been untalented in drawing, I use DALL.E in my business teaching to show live how AI translates the topic we discuss by imitating distinct artists. Students enjoy the other, very creative, images that DALL.E (OpenAI) shows while we wait for the AI ‘to think & draw’ our images; four images come up for each prompt. Of course, we also discuss the risks, threats, and ethics behind AI (here is an open-access case study which we wrote on AI ethics in the field of ESG investing: https://journals.sagepub.com/doi/full/10.1177/2043886920961782).

While DALL.E generates images that I could not have drawn, ChatGPT generates text that I could have written better and more concisely, with different logical priorities. I could, and actually did, create visualisations to illustrate my points. Nevertheless, ChatGPT beat me by far on speed. For anybody who has not yet asked ChatGPT any questions, here are the ChatGPT responses to five prompts:

  1. Opportunities for AI in HE
  2. Strategy for implementing and integrating AI in HE
  3. Developing research on the opportunities of AI in HE
  4. Developing an AI based marking and feedback tool
  5. Common areas of weakness of students’ academic writing

Opportunities for AI in HE

Artificial intelligence (AI) is transforming the world around us, and higher education (HE) is no exception. The potential opportunities of AI in HE are vast, ranging from improving learning outcomes and enhancing student engagement to streamlining administrative tasks and reducing costs. In this article, we explore some of the key opportunities of AI in HE and how they can be realized.

  1. Personalized learning: AI can help personalize the learning experience by adapting the content, pace, and style of instruction to the individual needs and preferences of each student. This can be achieved through intelligent tutoring systems, which use machine learning algorithms to analyze student performance data and provide customized feedback and support. Personalized learning can improve student engagement, increase retention rates, and enhance learning outcomes.
  2. Predictive analytics: AI can be used to analyze large datasets of student information, such as grades, attendance records, and behavioral data, to predict future performance and identify at-risk students who may need additional support. This can help educators intervene early and provide targeted interventions to improve student success.
  3. Virtual assistants: AI-powered virtual assistants can help students with routine administrative tasks such as scheduling, registration, and accessing resources. This can free up staff time and reduce wait times for students, improving their overall experience.
  4. Automated grading: AI can be used to automatically grade assignments and provide feedback to students, reducing the workload on instructors and improving the consistency and objectivity of grading. This can also provide students with immediate feedback, enabling them to improve their performance more quickly.
  5. Research and data analysis: AI can be used to analyze large datasets of research publications and help identify trends, patterns, and gaps in the literature. This can help researchers stay up-to-date with the latest developments in their field and generate new insights.
  6. Adaptive assessments: AI can be used to create adaptive assessments that adjust the difficulty and content of questions based on the student's level of knowledge and ability. This can help educators more accurately assess student performance and provide targeted feedback and support.
  7. Smart campus: AI can be used to optimize campus operations, such as energy management, facility maintenance, and security. This can reduce costs and improve the overall efficiency of campus operations.

While the opportunities of AI in HE are significant, there are also challenges that need to be addressed, such as ethical considerations, data privacy concerns, and ensuring that AI is used to enhance, rather than replace, human interaction and support. To fully realize the potential of AI in HE, it is essential that educators, researchers, and policymakers work together to develop responsible and effective strategies for implementing and integrating AI technologies.

Strategy for implementing and integrating AI in HE

Implementing and integrating AI technologies in higher education (HE) requires careful planning and consideration to ensure that these technologies are used in a responsible and effective manner. Here are some responsible and effective strategies that HE institutions can use to implement and integrate AI technologies:

  1. Develop a clear vision and strategy: HE institutions should develop a clear vision and strategy for how AI technologies will be used to enhance the student experience, improve learning outcomes, and support research. This should involve identifying specific use cases and goals for AI, as well as assessing the potential benefits and risks.
  2. Foster collaboration and communication: Implementing and integrating AI technologies requires collaboration and communication across departments and stakeholders. HE institutions should establish cross-functional teams that include educators, administrators, IT professionals, and students to ensure that all perspectives are considered and to foster buy-in and support for AI initiatives.
  3. Ensure transparency and ethical considerations: HE institutions should ensure that the use of AI technologies is transparent and ethical. This includes being clear about how AI technologies are being used, ensuring data privacy and security, and ensuring that AI is used to enhance, rather than replace, human interaction and support.
  4. Invest in staff training and development: AI technologies require specialized skills and knowledge to implement and use effectively. HE institutions should invest in staff training and development programs to ensure that educators and administrators have the necessary skills and knowledge to use AI technologies in their work.
  5. Pilot and evaluate AI initiatives: Before scaling up the use of AI technologies, HE institutions should pilot and evaluate their effectiveness. This involves testing AI technologies in small-scale pilots, collecting feedback from stakeholders, and using data to evaluate their impact on learning outcomes, student engagement, and other key metrics.
  6. Engage with the wider AI community: HE institutions should engage with the wider AI community to stay up-to-date with the latest developments and trends in AI. This includes attending conferences and workshops, collaborating with other institutions, and partnering with industry experts and researchers.
  7. Align AI initiatives with institutional goals and values: Finally, HE institutions should ensure that their AI initiatives are aligned with their institutional goals and values. This involves considering the unique needs and priorities of the institution, as well as the needs and preferences of students, educators, and other stakeholders. By aligning AI initiatives with institutional goals and values, HE institutions can ensure that they are using these technologies in a responsible and effective manner.

Developing research on the opportunities of AI in HE

There are several ways to develop ideas for future opportunities of AI in higher education and organize it as a research topic:

  1. Literature review: Conducting a literature review of existing research on AI in higher education can help identify the gaps and potential areas for future research. This can involve searching for relevant articles, books, and reports in databases such as Google Scholar, ACM Digital Library, and IEEE Xplore.
  2. Expert interviews: Talking to experts in the field, such as educators, researchers, and AI professionals, can help identify emerging trends and potential opportunities for AI in higher education.
  3. Case studies: Conducting case studies of AI implementation in higher education institutions can provide insights into the benefits and challenges of adopting AI technologies.
  4. Surveys: Conducting surveys of stakeholders such as students, faculty, and administrators can help identify their needs and preferences for AI in higher education.
  5. Workshops and conferences: Attending workshops and conferences on AI in higher education can provide opportunities to learn from experts and exchange ideas with peers.

Once you have developed ideas for future opportunities of AI in higher education, organizing it as a research topic can involve the following steps:

  1. Identifying research questions: Based on the ideas developed, identify research questions that are relevant, feasible, and significant.
  2. Defining the scope of the research: Clarify the boundaries of the research topic and define the key concepts and variables.
  3. Developing a research methodology: Decide on the appropriate research methods, such as qualitative, quantitative, or mixed methods, and develop a research design.
  4. Conducting data collection: Collect data using appropriate methods such as surveys, interviews, and case studies.
  5. Analyzing data: Analyze the data using appropriate statistical or qualitative analysis techniques.
  6. Drawing conclusions and making recommendations: Based on the findings, draw conclusions and make recommendations for future research and practice.
  7. Writing up the research: Write up the research in a clear and concise manner, following the conventions of academic writing.

When reviewing emerging trends of AI in higher education after conducting qualitative research, a theory that could be useful is grounded theory.

Grounded theory is a research methodology that involves systematically collecting and analyzing qualitative data to develop a theory or explanation of a phenomenon. The approach emphasizes the need to generate theory from the data itself, rather than imposing preconceived notions or theories on the data.

When conducting a qualitative study of emerging trends of AI in higher education, the grounded theory approach can help identify key themes and patterns in the data that may point towards emerging trends. These trends can then be used to develop a theory that explains how AI is being used in higher education, what the benefits and challenges are, and what the future implications may be.

Grounded theory can also be helpful in identifying gaps in the literature and areas for future research. By systematically analyzing the data, researchers can identify areas where more research is needed, and use the emerging trends to develop research questions that are relevant and meaningful.

Developing an AI based marking and feedback tool

Designing an AI system that can be used to automatically grade assignments and provide personalized feedback to students can be a complex task, but it can be accomplished by following these steps:

  1. Define the grading criteria: The first step is to define the grading criteria for the assignment. This involves breaking down the assignment into its key components and determining how each component will be evaluated. The grading criteria should be clearly defined and objective, so that the AI system can easily understand and apply them.
  2. Create a training dataset: To train the AI system, you need to create a training dataset that includes a set of annotated assignments. These annotated assignments should be graded by human experts according to the defined grading criteria. The dataset should be large enough to ensure that the AI system can learn to recognize a variety of different assignment types and styles.
  3. Choose the appropriate machine learning algorithm: There are several machine learning algorithms that can be used for automatic grading, including decision trees, support vector machines, and neural networks. The choice of algorithm will depend on the complexity of the grading criteria and the size of the training dataset.
  4. Train the AI system: Once you have chosen the appropriate machine learning algorithm, you can train the AI system using the annotated training dataset. The system should be trained to recognize patterns and relationships in the data, and to identify which components of the assignments are important for grading.
  5. Test the AI system: After training the AI system, you should test it using a set of unseen assignments that have not been included in the training dataset. This will help you evaluate the system's accuracy and identify any areas where it needs further improvement.
  6. Provide personalized feedback: To provide personalized feedback to students, you can use the AI system to identify areas where a student may need additional support or guidance. For example, if a student consistently struggles with a particular component of the assignment, the system can provide targeted feedback and resources to help the student improve.
  7. Continuously improve the AI system: Finally, you should continuously monitor and improve the AI system. This involves regularly updating the training dataset, evaluating the system's performance, and incorporating feedback from students and educators to refine the grading criteria and feedback mechanisms.

In summary, designing an AI system for automatic grading and personalized feedback involves defining the grading criteria, creating a training dataset, choosing the appropriate machine learning algorithm, training the system, testing it, providing personalized feedback, and continuously improving the system. By following these steps, you can develop an effective and efficient AI system that enhances the learning experience for students and educators alike.

Common areas of weakness of students’ academic writing

Academic writing is an essential skill for students in higher education. However, there are common areas of weakness that students often struggle with. Here are some of the most common areas of weaknesses for students' academic writing in higher education:

  1. Poor organization and structure: Students often struggle with organizing their ideas and structuring their writing in a way that is clear and logical. This can make their writing difficult to follow and reduce the overall effectiveness of their argument.
  2. Weak thesis statements: A strong thesis statement is essential for guiding the direction of an academic paper. However, many students struggle to develop a clear and concise thesis statement that effectively summarizes the main argument of their paper.
  3. Inadequate research and citation skills: Effective academic writing requires the use of credible sources to support arguments. However, many students struggle with finding appropriate sources, evaluating their credibility, and correctly citing them in their writing.
  4. Poor grammar and syntax: Grammatical errors and poor syntax can make writing difficult to read and understand. This can detract from the clarity and effectiveness of students' arguments.
  5. Lack of clarity and concision: Academic writing should be clear and concise, but many students struggle with using language that is overly complex or wordy. This can make their writing difficult to read and reduce the impact of their argument.
  6. Inappropriate tone or style: Students may struggle to adopt an appropriate tone or style for academic writing, which can lead to writing that is overly informal or lacking in scholarly rigor.
  7. Failure to address the assignment prompt: Finally, students may struggle with addressing the assignment prompt in a way that fully meets the expectations of the instructor. This can result in writing that is off-topic or fails to fully address the main requirements of the assignment.

Overall, addressing these common areas of weakness in academic writing can help students improve their writing skills and achieve greater success in their academic pursuits.

Epilogue: For comparison with the above, here are some ChatGPT responses to prompts by Chaoran Yang, a student on my consulting module. I suggested that all students of that cohort try ChatGPT for idea generation at the start of a consulting exercise for the Warwick Esports centre. In line with my findings above, the students also felt that ChatGPT did not provide any major insights, possibly because we did not ask the right questions and, more importantly, because ChatGPT is still in its infancy.

In our team we have spent over 2,000 hours developing the Warwick AI Essay Analyst. We used a mixture of non-AI rule-based statistical features and deep-learning algorithms and libraries, e.g. PyTorch, the Hugging Face framework, and Transformers (for further information on our AI-based tool see here: https://nationalcentreforai.jiscinvolve.org/wp/2022/11/16/interested-in-receiving-formative-feedback-on-your-draft-essays-and-dissertations-on-demand-introducing-warwicks-ai-essay-analyst/).
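To give a flavour of what ‘non-AI rule-based statistical features’ can mean, here is a small sketch. This is not the actual Warwick AI Essay Analyst; it computes a few generic features (sentence and word counts, words per sentence, and the standard Flesch Reading Ease score) using a crude vowel-group syllable heuristic:

```python
import re

def count_syllables(word: str) -> int:
    """Crude heuristic: count groups of consecutive vowels (incl. y)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def essay_features(text: str) -> dict:
    """A few rule-based statistical features of a draft essay."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    words_per_sentence = len(words) / len(sentences)
    syllables_per_word = syllables / len(words)
    return {
        "sentences": len(sentences),
        "words": len(words),
        "words_per_sentence": round(words_per_sentence, 1),
        # Flesch Reading Ease: 206.835 - 1.015*(words/sentence)
        #                              - 84.6*(syllables/word)
        "flesch_reading_ease": round(
            206.835 - 1.015 * words_per_sentence
            - 84.6 * syllables_per_word, 1),
    }

sample = ("Assessment design should be holistic. "
          "Feedback supports the learning process.")
print(essay_features(sample))
```

Features like these are cheap, transparent, and explainable to students, which is partly why hybrid designs combine them with deep-learning components rather than relying on either alone.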

With the current progress in the field of generative AI, future tools will be developed faster. Let’s work together to ensure that all tools, whether developed in-house or bought or endorsed by the university, have robust ethical underpinnings. My final suggestion for readers is to review the Ethical guidelines on the use of artificial intelligence (AI) and data in teaching and learning for educators, produced by the Publications Office of the European Union: https://data.europa.eu/doi/10.2766/153756


February 20, 2023

Blog 6: How inclusive is your assessment strategy?

By Leda Mirbahai (Warwick Medical School)

Leda Mirbahai photo

Assessments are a fundamental part of the student experience, with students learning by doing, i.e. by engaging with assessment tasks and then, after submission, through the feedback they receive on their performance and progress. Internal and External Examiners often ensure that assessment strategies are reliable, effective, and accurate; however, whether they are inclusive is often overlooked.

Considering the diversity of our student population, it is clear that a ‘one size fits all’ approach to assessment design and delivery is not an inclusive assessment strategy, which links with the diversification of our assessment approaches (a point we will come back to shortly). The UK Quality Code for Higher Education from the Quality Assurance Agency (QAA) lists ten guiding principles on expectations and practices for assessment:

  1. Assessment methods and criteria are aligned to learning outcomes and teaching activities.
  2. Assessment is reliable, consistent, fair and valid.
  3. Assessment design is approached holistically.
  4. Assessment is inclusive and equitable.
  5. Assessment is explicit and transparent.
  6. Assessment and feedback are purposeful and support the learning process.
  7. Assessment is timely.
  8. Assessment is efficient and manageable.
  9. Students are supported and prepared for assessment.
  10. Assessment encourages academic integrity.

As mentioned, one criterion of a good assessment ‘strategy’ is to be inclusive and equitable. If one really reflects on the ten principles, they are all interlinked. For example, it is very difficult to demonstrate an assessment strategy that is inclusive where assessment loads are not manageable for our students or where students don’t receive an equitable level of support. With toolkits such as the ‘Embedding inclusive assessment reflective toolkit’, a project funded by the QAA, becoming available, we can reflect on the assessment strategy of our courses and programmes by considering how well we align to the attributes of an inclusive assessment. The toolkit encourages a triangulated critique of our assessment approaches by involving the major stakeholders (students, academics and leaders) in the process and asking some key reflective questions. Reflecting on the nine attributes of inclusive assessment set out in this toolkit, most questions essentially reflect the principles of good assessment, which in my view indicates that if your assessment aligns to the principles, it should promote inclusivity.

As the academic lead for the assessment strategy of a new UG course at WMS, I have been involved in the planning and delivery of a course-level assessment strategy. Because the course is new, this removed some of the challenges of trying to alter and adjust existing assessments for individual modules. This raises an important concept: assessment strategies should be seen at course/programme level. Going back to the principles of good assessment, we need to demonstrate how our assessments map to course-level, year-level and module-level learning outcomes, as well as ensuring students are supported and prepared for assessments. This highlights the gradual building of our learners’ skills and knowledge in a spiral curriculum, enabling them to receive continuous feedback on their progress in a meaningful way. Learners need to be able to visualise and reflect on their progress across the programme; to achieve this, our assessments in one module (skills, attributes and knowledge) need to build meaningfully on the previous modules, hence a programme-level approach is more desirable.

Using diverse assessments is a great way of acknowledging that a ‘one size fits all’ approach to assessment design and delivery is not an inclusive assessment strategy. However, this doesn’t mean that we should sprinkle our assessment programmes with as many different modes of assessment as possible. Introducing too many modes that are not revisited, or that do not utilise or build on the skills and knowledge of a learner, would just add to the stress our learners encounter, as it means our students need to learn a new and unfamiliar assessment approach just to use it once! Therefore, diversification should be achieved in a meaningful way rather than just for the sake of introducing new assessment modes.

Finally, in my view one of the most important aspects of a good assessment strategy is continuous reflection and improvement. There is no fixed perfect assessment strategy, as our learners and their requirements are constantly changing. An assessment strategy that is developed and never revised will soon become unfit for the purpose for which it was developed. I know this because I am already making a list of changes that we need to embed into our assessment strategy, and the course is only three years old! So I leave you with one question: how inclusive is your assessment strategy?

References

The Quality Assurance Agency (QAA) for Higher Education (2018) UK Quality Code for Higher Education, Advice and Guidance: Assessment.

Embedding Inclusive Assessment Reflective Toolkit (2022), a QAA-funded project: Developing a Set of Inclusive Assessment Design Attributes for use Across the Higher Education Sector (qaa.ac.uk).

This is the 6th blog in our diverse assessment series. Previous blogs can be found here:

Blog 1: Launch of the learning circle (Isabel Fischer & Leda Mirbahai): https://blogs.warwick.ac.uk/wjett/entry/interested_in_diverse/

Blog 2: Creative projects and the ‘state of play’ in diverse assessments (Lewis Beer): https://blogs.warwick.ac.uk/wjett/entry/creative_projects_and/

Blog 3: Student experience of assessments (Molly Fowler): https://blogs.warwick.ac.uk/wjett/entry/a_student_perspective/

Blog 4: Assessment Strategy – one year after starting the learning circle (Isabel Fischer & Leda Mirbahai): https://blogs.warwick.ac.uk/wjett/entry/one_year_on/

Blog 5: Learnings and suggestions based on implementing diverse assessments in the foundation year at Warwick (Lucy Ryland): https://blogs.warwick.ac.uk/wjett/entry/learnings_suggestions_based/

Join the Diverse Assessment Learning Circle: If you would like to join the learning circle please contact the co-leads: Leda Mirbahai, Warwick Medical School (WMS) (Leda.Mirbahai@warwick.ac.uk) and Isabel Fischer, Warwick Business School (WBS) (Isabel.Fischer@wbs.ac.uk). This LC is open to non-WIHEA members.


February 13, 2023

AI for Good: Evaluating and Shaping Opportunities of AI in Education

By Isabel Fischer, Leda Mirbahai, and David Buxton

Following the rise in awareness of the opportunities (and threats) of artificial intelligence (AI) in education, we have created a task and finish group which aims to review and ‘imagine’ the opportunities and challenges of AI in education, including assessments. Our vision is to deploy AI as a tool to support all students, independent of background and socio-demographic characteristics, to be successful in their studies and in their future work, while ensuring academic integrity, and to support educators in feeling confident about using AI effectively to promote learning. We are working in five (sub)groups:

  1. General AI in Education (AIEd) Opportunities & information sharing
  2. Novel and Diverse Assessment Designs
  3. Feedback, Marking, Authorship Detection
  4. Designing Teaching Content - ‘what is out there being developed?’
  5. 'Red Team': AI Ethics and Academic Integrity

As we are still interested in colleagues from within Warwick, other institutions and the wider community of stakeholders joining us, here is some further information on each (sub)group:

1) General AI in Education (AIEd) Opportunities & information sharing: We review how to capture, shape, and disseminate the opportunities for both learner-facing and educator-facing AI, mainly in HE but also considering how HE can support the secondary and even primary school sector (e.g. how to help teachers experiment with different forms of AI in a low-stakes environment). We also consider the benefits that AI offers, such as reducing inequality and increasing fairness and democratisation, evaluating how we can support SDG 4 (equitable and quality education) and SDG 10 (reducing inequalities). We want to help educators understand how they might embrace recent AI developments in their professional practice. Combined with sub-group / Strand 5, the ‘red team’, we also want to inform colleagues about research (similar to mini literature reviews) on topics such as Algorithmic Fairness.

Target Output: A WIHEA page that is informative for colleagues new to AIEd (explanations, links to other resources, links to discussions / brainstorming exercises / blogs, suggestions for their practice)

2) Designing Assessments: We review the opportunities for designing and setting diverse assessments (Learner-facing), including embedding our work within our different Learning Circle’s work. It is in this strand that most of the student co-creation will take place.

Target Output: WIHEA page, blogs, and talks

3) Feedback and Marking: We review the opportunities of using AI for formative feedback (Learner-facing), summative feedback (Educator-facing), ‘AES – automated essay scoring’ (educator-facing), and stylometry (authorship authentication) as well as ChatGPT detection. One aspect of this strand (but not constrained to this strand) is also ‘Move fast, Break fast, Learn fast’ – doing small scale experiments and testing them (e.g., Consulting Students will experiment with mind maps this term and then can, but don’t have to, submit their work to the Warwick AI Essay Analyst for formative feedback and we can analyse their work).

Target Output: A WIHEA page that disseminates information, possibly wider diffusion of the Warwick AI Essay Analyst tool itself at Warwick, and potentially research output

4) Designing Module and Lesson Content & Recommendations for Institutional Developments / Purchases: Educator-facing, we review tools and initiatives that might help educators in planning and organising their modules and lessons, as well as in monitoring their email and forum entries. This group looks at all educator-facing areas besides designing assessments (group 2) and providing feedback on assessments (group 3). It might also make recommendations to the institution on what software to build or purchase.

Target Output: A WIHEA page that disseminates information, possibly making recommendations for in-house developments / purchase of external software packages

5) A 'red team': This group acknowledges that AI is here to stay, ensures we follow AI Ethics guidelines, and makes sure everybody is clear about the risks. It also reviews and mitigates the challenges to Academic Integrity more broadly. Moreover, it reviews the risks of bought-in products from EdTech and tech companies, ensuring that AI Ethics is applied to in-house and off-the-shelf, bought-in products alike.

Target Output: A WIHEA page that provides information for colleagues worried about AIEd (explanations, links to other resources, links to discussions), especially on the topics of AI Ethics and Academic Integrity (what is OK to do and what isn't, i.e. where students / educators should draw the line). Collaborating with Strand 1, this group might want to explain aspects of AI Ethics / Academic Integrity (via a high-level literature review and links to important research), such as the concept of 'Algorithmic Fairness'. Building on work by other groups, e.g. last year's 'Online Assessment Data Ethics Group', this group might also develop a proposal for SLEEC (https://warwick.ac.uk/services/gov/committees/sleec/) and/or provide guidance and advice to EPQ on suitable policy and guidance where appropriate.

Proposed Overall Target for the entire Task and Finish Group, i.e. across the five groups / strands: produce tangible outputs (WIHEA pages, blogs, talks) that support colleagues in embracing change in an ethically sound way that respects all stakeholders, especially learners and educators, ideally in collaboration with other universities, other education providers, and industry. Possibly develop a proposal for SLEEC and/or provide guidance and advice to EPQ on suitable policy and guidance where appropriate.

Please email Isabel.fischer@wbs.ac.uk if you are interested in joining any of the groups.

Please email Leda.Mirbahai@warwick.ac.uk if you are interested in joining our open WIHEA Diverse Assessment Learning Circle with interesting talks, such as our talks this month on Synoptic Assessments and on Democratising Assessments.


January 30, 2023

How higher education leaders are putting digital at the heart of institutional strategy

Nick Skelton from JISC posts about research he has undertaken looking at how higher education leaders are putting digital at the heart of institutional strategy:

https://www.jisc.ac.uk/blog/how-higher-education-leaders-are-putting-digital-at-the-heart-of-institutional-strategy-24-jan-2023


January 23, 2023

Embracing a compassionate approach in higher education

This short THE Campus piece discusses the importance of compassionate communication and the benefits that it brings to students. It provides advice on how to embed compassionate communication into group working activities when teaching online.

https://www.timeshighereducation.com/campus/embracing-compassionate-approach-higher-education


January 17, 2023

Aligning our values through accessibility – Teaching in Higher Ed podcast

In this podcast, the discussion focuses on the use of Alt Text and how it can enhance the accessibility of online images, gifs and infographics. There are some particularly useful tips for those who make use of Twitter in their teaching. All the resources discussed in the podcast are available on the episode webpage.

https://teachinginhighered.com/podcast/aligning-our-values-through-accessibility/


December 05, 2022

Learnings & suggestions based on implementing diverse assessments – Lucy Ryland

Learnings and suggestions based on implementing diverse assessments in the foundation year at Warwick

By Lucy Ryland

Diverse assessment, sometimes known as 'alternative assessment', is a growing area of interest in higher education, and Warwick is no exception. As a member of the WIHEA Diverse Assessment Learning Circle, I delivered a presentation on my own practice in relation to diverse assessments. This triggered a discussion around how my collection of assessments could be expressed in terms of an 'assessment strategy'. Here I highlight my rationale for diversifying assessment and reflect on my experiences of implementing a diverse assessment diet with international foundation students.

Diverse assessment can be defined as an assessment method “that is less familiar (to you and/or the students) and increases the range of assessments in your discipline” (O’Neill and Padden 2021:3). Interested in this concept and the potential benefits it could bring, in the 2021-22 academic year, as convener of three modules on the Warwick International Foundation Programme in Social Science, I used 11 different assessment methods with my group of students, and embedded choice into almost all of them.

Research suggests there are many potential benefits to adopting a diverse assessment strategy. I identified six advantages that I hoped to bring to my teaching and my students’ learning through this diversity.

  1. In some instances, diverse assessment was required in order for the intended learning outcomes of my modules to be assessed, to ensure the constructive alignment of my programme. One such ILO was that students should be able to “justify informed solutions to social science problems to a varied audience using a variety of media formats”. Diverse assessment methods of blog post writing, infographic design and film production were therefore appropriate methods to assess students’ attainment of this learning outcome.
  2. Using a variety of assessment methods, and embedding choice into students’ assessments, is argued by Bloxham and Boyd (2007:190) to enhance the inclusive nature of assessment. It enables students to pursue their own interests, preferences, and strengths. If students are always assessed using written, essay-style tasks, this will advantage students who are strong in the relevant skills, and disadvantage those whose skills and strengths lie in different areas. Therefore, including written assessments alongside oral presentations and more visual or creative methods of assessment is likely to make the students’ assessment diet more inclusive.
  3. Part of the purpose of higher education has to be to prepare students for life beyond studies, in the ‘real world’. There is therefore value in encouraging students to develop skills and knowledge that is going to help them pursue their future goals (Knight and Yorke 2003). A number of the different assessment methods I use, including blog writing, annotated bibliographies, case studies and research projects, are designed to be authentic, to use real world contexts and facilitate the growth of skills that the students may well need in their future studies and careers. They have a function beyond simply receiving a grade at the end of the process.
  4. In a similar vein, evidence suggests that 'traditional' examination-type assessment has limited value for developing students' skills, focusing instead on simple knowledge recall (Dunn et al. 2004; Bloxham and Boyd 2007:195). It is also apparent that more traditional timed examinations induce high levels of stress in students, which can limit how effectively they can show their full capability. Conversely, diverse assessments that are more inclusive and authentic may be less stressful for students and enable them to perform to the best of their ability (Hong-Meng Tai et al. 2022:6).
  5. Preparing and submitting the same type of assessment on multiple occasions can be monotonous for students, especially in an intense year of study and assessment. Introducing variety and authenticity into the assessments can enhance students’ motivations to complete them: they can do something different, something new and relevant for them (Bloxham and Boyd 2007:191; Struyven et al. 2005). There is also motivation for the teacher in reviewing and marking a variety of assessment types rather than reading multiple 2000-word essays throughout the year.
  6. Finally, diverse assessments open many opportunities for more peer to peer and group learning experiences, which have value in themselves. Compared to a simple essay or exam, adding comments to each other’s blog posts, acting as the audience for each other’s presentations, watching each other’s film productions, and collaborating in case study assessments, provide numerous opportunities for students to learn from each other and work together, enhancing their learning experience (Hong-Meng Tai et al. 2022:9).

As an outcome of the above, across my three 30 CAT modules of the students’ 120 CAT programme, I implemented the following assessment diet:

FP009 Politics & International Relations: Individual presentation (20%); 1500-word essay (40%); Take-home exam (40%)

FP053 Inquiry and Research Skills in Social Science: Infographic (10%); 1000-word blog post (20%); Case study (30%); Film project (40%)

FP036 Understanding Society: Online Moodle course completion (10%); Annotated bibliography (25%); Reflective log book (25%); Research project and academic poster presentation (40%)


On reflection, after implementing this for one academic year, I found that the diversity and variety did indeed facilitate the nurturing of new skills, both for me and the students, that are likely to be useful in their future careers. In a feedback survey conducted at the end of the academic year, all of the respondents (6 of the 8 registered on the course) agreed or strongly agreed that the assessments helped them develop skills that will be useful to them in the future. I found the variety motivating, and it helped to keep things exciting, both in terms of the types of assessment and the students' choice within them. Within Understanding Society, for example, for the infographic, the blog post and the film project, students could choose to focus on any aspect of the given topic that interested them, leading to a broad range of pieces of work which were a joy to mark. All of the students who completed the survey either agreed or strongly agreed that the variety of assessment methods was exciting and motivated them to work hard.

I had been concerned about how student performance might be affected by the variety of assessment and the cognitive load that necessarily comes with it, but students actually performed very well: the mean score for FP009 was 11 points higher than in the previous academic year, and three points higher in FP053, while FP036 saw a three-point fall in the mean overall outcome. All students who responded to my survey agreed or strongly agreed that the variety of assessments enabled them to demonstrate the full range of their knowledge and skills, and that at least some of the assessment types catered to their personal strengths.

Nonetheless, there were challenges along the way. Introducing lots of new skills to the students alongside new knowledge was a heavy load both for me as the teacher and for the students. Students reported at the end of the year feeling sufficiently prepared to complete all of the assessments, but a lot of time and energy had gone into this preparation. Further challenges from the teacher's perspective included preparing appropriate assessment criteria for all the different forms of assessment, and considering parity across assessment types. For example, I had to consider whether a 1500-word essay in Politics is of comparable workload and complexity to a 5-minute film production in Understanding Society, since each is worth 40% of a 30 CAT module.

The only one of my reasons for introducing such assessment diversity that was not borne out in practice was the reduction in the stress students felt around their assessments. Two students disagreed that moving away from purely traditional exams made assessments less stressful, and one response was neutral; three did agree, but none strongly agreed. Interestingly, the assessments that students reported enjoying the most were quite different from those they reported having learned the most from, so it may not always be the case that we learn the most when we are enjoying the experience. This appears to go against much research in the area of enjoyment and learning (Lucardie 2014).

There are thus many aspects for me to continue to reflect on and develop, including how my practice can be expressed in terms of an assessment 'strategy' (see the earlier post from the WIHEA Learning Circle). The WIHEA Learning Circle is an excellent space for me and anyone else interested in diverse assessment to discuss good practice and share experiences as we look to improve our students' experiences of assessment. If you would like to join this learning circle, please contact the co-leads: Leda Mirbahai, Warwick Medical School (WMS) (Leda.Mirbahai@warwick.ac.uk) and Isabel Fischer, Warwick Business School (WBS) (Isabel.Fischer@wbs.ac.uk).

Reference list:

Bloxham, S and Boyd, P (2007) Developing Effective Assessment in Higher Education: A Practical Guide, London: McGraw-Hill Education

Dunn, L et al. (2004) The Student Assessment Handbook: New Directions in Traditional and On-line Assessment, London: RoutledgeFalmer

Hong-Meng Tai, J. et al. (2022) ‘Designing assessment for inclusion: an exploration of diverse students’ assessment experiences’ in Assessment & Evaluation in Higher Education DOI: https://doi.org/10.1080/02602938.2022.2082373

Knight, P. T and Yorke, M (2003) Assessment, Learning and Employability, Maidenhead: OUP

Lucardie, D (2014) ‘The impact of fun and enjoyment on adults’ learning’ in Procedia – Social and Behavioural Sciences 142, pp.439-446

O’Neill, G and Padden, L (2021) ‘Diversifying assessment methods: Barriers, benefits and enablers’ in Innovations in Education and Teaching International DOI: https://doi.org/10.1080/14703297.2021.1880462

Struyven, K et al. (2005) ‘Students’ perceptions about evaluation and assessment in higher education: a review’ in Assessment and Evaluation in Higher Education 30(4) pp.325-341

This is the 5th blog in our diverse assessment series, the previous blogs can be found here:

Blog 1: Launch of the learning circle: https://blogs.warwick.ac.uk/wjett/entry/interested_in_diverse/

Blog 2: Creative projects and the ‘state of play’ in diverse assessments: https://blogs.warwick.ac.uk/wjett/entry/creative_projects_and/

Blog 3: Student experience of assessments: https://blogs.warwick.ac.uk/wjett/entry/a_student_perspective/

Blog 4: Assessment Strategy – one year after starting the learning circle https://blogs.warwick.ac.uk/wjett/entry/one_year_on/



November 28, 2022

Well-being pedagogies: activities and practices to improve the student experience online

In this short THE Campus piece, Warwick’s own Elena Riva shares some helpful practices that can boost student well-being in the online teaching and learning environment:

https://www.timeshighereducation.com/campus/wellbeing-pedagogies-activities-and-practices-improve-student-experience-online


November 21, 2022

Making connections – embedding values in the student journey

Maha Bali is an educator who writes and speaks frequently about social justice, critical pedagogy, and open and online education. You can find her blog on this webpage. She is co-facilitator of Equity Unbound which, in collaboration with OneHE, has curated a range of open educational resources focused on community-building activities that educators can use with their students. You can also watch Maha’s keynote address at our recent TEALfest 2022 event by following this link: Building community online with equity and care.

