All 14 entries tagged Isabel Fischer
June 12, 2023
Building knowledge on the pedagogy of using generative AI in the classroom and in assessments
By Matt Lucas and Isabel Fischer (WBS)
Matt Lucas is a Senior Product Manager at IBM, and Isabel Fischer is an Associate Professor (Reader) of Information Systems at WBS (Warwick Business School). Isabel also co-convenes an IATL (Institute for Advanced Teaching and Learning) module. This blog represents their own opinions and not those of their employers.
After two terms of including generative AI (GenAI) in my teaching and assessments, I am still building my knowledge and understanding of the pedagogy of using GenAI. Students seem to enjoy playing around with music and art tools (e.g. DALL.E 2 and Midjourney), creating images and also memes, all of which are user-friendly on big screens and for huddling around one laptop as part of teamwork. Text outputs seem less intuitive for ‘collective use’: there does not yet seem to be an app that allows hands-on collaborative refinement of prompts (e.g. similar to students working on the same Google doc), and displaying a string of words on a shared screen clearly does not have the same entertainment value for students as ‘customers and consumers’.
Beyond the lack of entertainment value, I also found that students seem to appreciate word-based GenAI (e.g. ChatGPT and Bard) as ‘their secret tool’, at their disposal and for them to use. They appreciate it when lecturers show them exact prompts they can copy to make the most of ‘their secret tool’. They seem less keen on having to be transparent about using the tool themselves and having to justify and critically reflect on that usage: it not only means additional work but, more importantly, they dislike the thought of the tool’s hidden power being exposed. They appear even less keen for lecturers to use GenAI for lesson preparation and to be transparent about it, because otherwise, what is the ‘perceived added value’ of attending the lecture if they could simply have consulted GenAI themselves?
With this in mind, what are the skills that students can learn from using GenAI in the classroom and in assessments?
In the attached blog Matt Lucas and I suggest that by including innovative aspects into assessments, students can learn and practise four skills that are relevant for their future careers in a world disrupted by AI:
- Cognitive flexibility, abstraction and simplification
- Curiosity, including prompt engineering
- Personalisation, reflection and empathising to adapt to different audiences
- Critical evaluation of AI
For each of the four skills, the attached blog explains its relevance for student learning with some illustrative examples, before outlining how we have incorporated these skills into students’ assessments in the recent term.
March 27, 2023
Is interdisciplinarity learnable in 10 weeks?
A Student - Staff co-analysis from IATL’s Creating Digital Futures module
by Anil Dhariwal, Isabel Fischer, Jamal Lawal, Matthew Leslie, Claire Rocks, Shwetha Roy, Bryon Wandera, and Devon Williams
We created this blog collectively in one hour as part of our last Creating Digital Futures workshop. In the interest of time, we inserted weblinks instead of using Harvard referencing.
In this blog we look back at a 10-week module which brought together students from different fields for the collaborative development of a digital product, which we pitched to the Warwick #Creative Futures Incubator (https://warwickinnovationdistrict.com/creative-futures-incubator-application/). The purpose of the blog is to reflect on whether students can effectively learn and apply the fundamentals of interdisciplinarity, and if 10 weeks is a suitable amount of time for the benefits of interdisciplinarity to take effect.
Source: DALL.E 2 by OpenAI - retrieved 15/3/23 using the prompt: ‘Interdisciplinarity Creating Digital Futures’
From an educator’s perspective, our research has shown that a key pedagogical challenge is to encourage students to move beyond scanning information towards critical engagement and action (Preuss, Fischer, Luiz, 2023, see: https://onlinelibrary.wiley.com/doi/10.1111/hequ.12429). Payne (1999, p.173) emphasises the potential of interdisciplinarity to enhance the ‘democratisation of knowledge construction processes in society’, with Klein (2000, p.18) explaining that ‘Interdisciplinary knowledge strengthens connections between disciplines and in that process it weakens the division of labour in disciplines, exposes gaps, stimulates cross fertilisation and creates new fields of focus for knowledge inquiry’.
Problem-based learning is often thought of as a pedagogical approach that can facilitate interdisciplinary learning. Adopting this approach, Creating Digital Futures asked students to create a product or service at the intersection of Digital Technology, Creativity and the Global Sustainable Development Goals. Focussing first on identifying a problem using Design Thinking was intended to encourage students to step outside their disciplinary boundaries in searching for solutions. As research has shown that we cannot take for granted that interdisciplinary learning will take place, and that it is necessary to address interdisciplinary learning explicitly (Stentoft et al, 2017, https://doi.org/10.1177/1469787417693510), in Creating Digital Futures we scaffolded the interdisciplinary learning by providing 10 weekly taught sessions covering topics such as working in interdisciplinary teams, design thinking, ethical frameworks, storytelling with data, and presentation skills.
A ChatGPT (2023) response tells us: ‘Interdisciplinarity has become a buzzword in academia, with many universities and institutions promoting collaborative learning and research. But can students truly learn the skills and mindset needed for interdisciplinary work in just 10 weeks? In this blog, we explore [if] it is possible to truly break down the boundaries between disciplines and instil a collaborative mindset in a matter of weeks.’ (ChatGPT, 2023a)
We - the students - found that our interdisciplinary module has a number of benefits, including:
- We were able to develop real world skills such as: critical thinking; communication; collaboration and the ability to synthesise information (Anil, Econ)
- Interdisciplinarity gives the opportunity to see areas that you may lack in due to your subject background and allows you to learn and improve them with the help of others with different perspectives (Byron, Biomed)
- Providing a better understanding of the utility of one’s own skills from their field in the context of the real world (Devon, Comp Sci)
- Really highlights the inherent interdisciplinarity of traditional subjects (Jamal, PPE)
- The helpfulness of applying theory in a practical manner, experiencing, growing and learning from and with others (Matthew, WBS)
- Allows you to produce an impactful and cohesive piece of work that brings together the ideas from different disciplines (Shwetha, Econ).
In conclusion, students found many aspects of the module rewarding, and also reflected that it was good preparation for embracing interdisciplinarity in the workplace. A problem-based approach seems well suited to fostering interdisciplinary learning in just 10 weeks. More importantly though, students appreciated having been taught by members of five different departments. The module convenors - Claire Rocks (Computer Science) and Isabel Fischer (WBS) - would therefore like to thank Elke Thonnes (Stats) and Robert O’Toole (Arts) as well as Rachel Davis, Clare Green and Holly Smeeton (Warwick’s Creative Futures Incubator / Warwick Enterprise) for making time to take this cohort of IATL students on a journey of interdisciplinary discovery.
ChatGPT prompt to help with the initial thoughts on the blog:
ChatGPT 2023 prompt used on 15/3/23: ‘Is interdisciplinarity learnable in 10 weeks, write a short intro to hook readers in for a blog with this title with an interesting question’.
March 06, 2023
On ‘Opportunities of AI in Higher Education’ by DALL.E and ChatGPT
Prologue and epilogue by Isabel Fischer (Warwick Business School)
Since the creation of our task and finish groups on evaluating and shaping the opportunities of AI in Higher Education (for further information see here: https://blogs.warwick.ac.uk/wjett/entry/ai_for_good/) we have had many very insightful ‘human to human’ discussions. The aim of this blog is, however, to show what ‘machines’ draw and write on the topic, to illustrate both the strengths and the weaknesses of generative AI.
The images at the start of the blog represent DALL.E’s take on the topic. Having always been untalented at drawing, I use DALL.E in my business teaching to show live how AI translates the topic we discuss by imitating distinct artists. Students enjoy how DALL.E (OpenAI) shows other, very creative, images while we wait for the AI ‘to think & draw’ our images – four images come up for each prompt. Of course, we also discuss the risks, threats, and ethics behind AI (here is an open-access case study which we wrote on AI ethics in the field of ESG investing: https://journals.sagepub.com/doi/full/10.1177/2043886920961782).
While DALL.E generates images that I could not have drawn, ChatGPT generates text that I could have written better and more concisely, with different logical priorities. I could - and actually did - create visualisations to illustrate my points. Nevertheless, ChatGPT beat me by far on ‘speed’. For anybody who has not yet asked ChatGPT any questions, here are the ChatGPT responses to five prompts:
- Opportunities for AI in HE
- Strategy for implementing and integrating AI in HE
- Developing research on the opportunities of AI in HE
- Developing an AI based marking and feedback tool
- Common areas of weakness of students’ academic writing
Opportunities for AI in HE
Artificial intelligence (AI) is transforming the world around us, and higher education (HE) is no exception. The potential opportunities of AI in HE are vast, ranging from improving learning outcomes and enhancing student engagement to streamlining administrative tasks and reducing costs. In this article, we explore some of the key opportunities of AI in HE and how they can be realized.
- Personalized learning: AI can help personalize the learning experience by adapting the content, pace, and style of instruction to the individual needs and preferences of each student. This can be achieved through intelligent tutoring systems, which use machine learning algorithms to analyze student performance data and provide customized feedback and support. Personalized learning can improve student engagement, increase retention rates, and enhance learning outcomes.
- Predictive analytics: AI can be used to analyze large datasets of student information, such as grades, attendance records, and behavioral data, to predict future performance and identify at-risk students who may need additional support. This can help educators intervene early and provide targeted interventions to improve student success.
- Virtual assistants: AI-powered virtual assistants can help students with routine administrative tasks such as scheduling, registration, and accessing resources. This can free up staff time and reduce wait times for students, improving their overall experience.
- Automated grading: AI can be used to automatically grade assignments and provide feedback to students, reducing the workload on instructors and improving the consistency and objectivity of grading. This can also provide students with immediate feedback, enabling them to improve their performance more quickly.
- Research and data analysis: AI can be used to analyze large datasets of research publications and help identify trends, patterns, and gaps in the literature. This can help researchers stay up-to-date with the latest developments in their field and generate new insights.
- Adaptive assessments: AI can be used to create adaptive assessments that adjust the difficulty and content of questions based on the student's level of knowledge and ability. This can help educators more accurately assess student performance and provide targeted feedback and support.
- Smart campus: AI can be used to optimize campus operations, such as energy management, facility maintenance, and security. This can reduce costs and improve the overall efficiency of campus operations.
While the opportunities of AI in HE are significant, there are also challenges that need to be addressed, such as ethical considerations, data privacy concerns, and ensuring that AI is used to enhance, rather than replace, human interaction and support. To fully realize the potential of AI in HE, it is essential that educators, researchers, and policymakers work together to develop responsible and effective strategies for implementing and integrating AI technologies.
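To make the predictive-analytics idea above concrete, here is a purely illustrative sketch (not part of the ChatGPT output, with invented names and thresholds): flagging potentially at-risk students from attendance and grade data. A real system would learn its thresholds from historical outcome data rather than hard-coding them.

```python
from dataclasses import dataclass

@dataclass
class StudentRecord:
    name: str
    attendance_rate: float  # fraction of sessions attended, 0.0-1.0
    average_grade: float    # percentage, 0-100

def flag_at_risk(records, attendance_threshold=0.6, grade_threshold=45.0):
    """Return names of students whose attendance or grades fall below
    the (illustrative, invented) thresholds, so that educators can
    intervene early with targeted support."""
    return [r.name for r in records
            if r.attendance_rate < attendance_threshold
            or r.average_grade < grade_threshold]

records = [
    StudentRecord("A", 0.9, 68.0),
    StudentRecord("B", 0.5, 72.0),  # low attendance
    StudentRecord("C", 0.8, 40.0),  # low grades
]
print(flag_at_risk(records))  # → ['B', 'C']
```

In practice this rule-based flagging would be replaced by a model trained on past cohorts, and any such system raises exactly the data-privacy and ethics questions noted above.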
Strategy for implementing and integrating AI in HE
Implementing and integrating AI technologies in higher education (HE) requires careful planning and consideration to ensure that these technologies are used in a responsible and effective manner. Here are some responsible and effective strategies that HE institutions can use to implement and integrate AI technologies:
- Develop a clear vision and strategy: HE institutions should develop a clear vision and strategy for how AI technologies will be used to enhance the student experience, improve learning outcomes, and support research. This should involve identifying specific use cases and goals for AI, as well as assessing the potential benefits and risks.
- Foster collaboration and communication: Implementing and integrating AI technologies requires collaboration and communication across departments and stakeholders. HE institutions should establish cross-functional teams that include educators, administrators, IT professionals, and students to ensure that all perspectives are considered and to foster buy-in and support for AI initiatives.
- Ensure transparency and ethical considerations: HE institutions should ensure that the use of AI technologies is transparent and ethical. This includes being clear about how AI technologies are being used, ensuring data privacy and security, and ensuring that AI is used to enhance, rather than replace, human interaction and support.
- Invest in staff training and development: AI technologies require specialized skills and knowledge to implement and use effectively. HE institutions should invest in staff training and development programs to ensure that educators and administrators have the necessary skills and knowledge to use AI technologies in their work.
- Pilot and evaluate AI initiatives: Before scaling up the use of AI technologies, HE institutions should pilot and evaluate their effectiveness. This involves testing AI technologies in small-scale pilots, collecting feedback from stakeholders, and using data to evaluate their impact on learning outcomes, student engagement, and other key metrics.
- Engage with the wider AI community: HE institutions should engage with the wider AI community to stay up-to-date with the latest developments and trends in AI. This includes attending conferences and workshops, collaborating with other institutions, and partnering with industry experts and researchers.
- Align AI initiatives with institutional goals and values: Finally, HE institutions should ensure that their AI initiatives are aligned with their institutional goals and values. This involves considering the unique needs and priorities of the institution, as well as the needs and preferences of students, educators, and other stakeholders. By aligning AI initiatives with institutional goals and values, HE institutions can ensure that they are using these technologies in a responsible and effective manner.
Developing research on the opportunities of AI in HE
There are several ways to develop ideas for future opportunities of AI in higher education and organize it as a research topic:
- Literature review: Conducting a literature review of existing research on AI in higher education can help identify the gaps and potential areas for future research. This can involve searching for relevant articles, books, and reports in databases such as Google Scholar, ACM Digital Library, and IEEE Xplore.
- Expert interviews: Talking to experts in the field, such as educators, researchers, and AI professionals, can help identify emerging trends and potential opportunities for AI in higher education.
- Case studies: Conducting case studies of AI implementation in higher education institutions can provide insights into the benefits and challenges of adopting AI technologies.
- Surveys: Conducting surveys of stakeholders such as students, faculty, and administrators can help identify their needs and preferences for AI in higher education.
- Workshops and conferences: Attending workshops and conferences on AI in higher education can provide opportunities to learn from experts and exchange ideas with peers.
Once you have developed ideas for future opportunities of AI in higher education, organizing it as a research topic can involve the following steps:
- Identifying research questions: Based on the ideas developed, identify research questions that are relevant, feasible, and significant.
- Defining the scope of the research: Clarify the boundaries of the research topic and define the key concepts and variables.
- Developing a research methodology: Decide on the appropriate research methods, such as qualitative, quantitative, or mixed methods, and develop a research design.
- Conducting data collection: Collect data using appropriate methods such as surveys, interviews, and case studies.
- Analyzing data: Analyze the data using appropriate statistical or qualitative analysis techniques.
- Drawing conclusions and making recommendations: Based on the findings, draw conclusions and make recommendations for future research and practice.
- Writing up the research: Write up the research in a clear and concise manner, following the conventions of academic writing.
When reviewing emerging trends of AI in higher education after conducting qualitative research, a theory that could be useful is grounded theory.
Grounded theory is a research methodology that involves systematically collecting and analyzing qualitative data to develop a theory or explanation of a phenomenon. The approach emphasizes the need to generate theory from the data itself, rather than imposing preconceived notions or theories on the data.
When conducting a qualitative study of emerging trends of AI in higher education, the grounded theory approach can help identify key themes and patterns in the data that may point towards emerging trends. These trends can then be used to develop a theory that explains how AI is being used in higher education, what the benefits and challenges are, and what the future implications may be.
Grounded theory can also be helpful in identifying gaps in the literature and areas for future research. By systematically analyzing the data, researchers can identify areas where more research is needed, and use the emerging trends to develop research questions that are relevant and meaningful.
Developing an AI based marking and feedback tool
Designing an AI system that can be used to automatically grade assignments and provide personalized feedback to students can be a complex task, but it can be accomplished by following these steps:
- Define the grading criteria: The first step is to define the grading criteria for the assignment. This involves breaking down the assignment into its key components and determining how each component will be evaluated. The grading criteria should be clearly defined and objective, so that the AI system can easily understand and apply them.
- Create a training dataset: To train the AI system, you need to create a training dataset that includes a set of annotated assignments. These annotated assignments should be graded by human experts according to the defined grading criteria. The dataset should be large enough to ensure that the AI system can learn to recognize a variety of different assignment types and styles.
- Choose the appropriate machine learning algorithm: There are several machine learning algorithms that can be used for automatic grading, including decision trees, support vector machines, and neural networks. The choice of algorithm will depend on the complexity of the grading criteria and the size of the training dataset.
- Train the AI system: Once you have chosen the appropriate machine learning algorithm, you can train the AI system using the annotated training dataset. The system should be trained to recognize patterns and relationships in the data, and to identify which components of the assignments are important for grading.
- Test the AI system: After training the AI system, you should test it using a set of unseen assignments that have not been included in the training dataset. This will help you evaluate the system's accuracy and identify any areas where it needs further improvement.
- Provide personalized feedback: To provide personalized feedback to students, you can use the AI system to identify areas where a student may need additional support or guidance. For example, if a student consistently struggles with a particular component of the assignment, the system can provide targeted feedback and resources to help the student improve.
- Continuously improve the AI system: Finally, you should continuously monitor and improve the AI system. This involves regularly updating the training dataset, evaluating the system's performance, and incorporating feedback from students and educators to refine the grading criteria and feedback mechanisms.
In summary, designing an AI system for automatic grading and personalized feedback involves defining the grading criteria, creating a training dataset, choosing the appropriate machine learning algorithm, training the system, testing it, providing personalized feedback, and continuously improving the system. By following these steps, you can develop an effective and efficient AI system that enhances the learning experience for students and educators alike.
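As a heavily simplified, hypothetical sketch of steps 2–5 above (synthetic training data, one invented toy feature; real automated essay scoring uses far richer features and models), training and applying a grader might look like this:

```python
def essay_features(text):
    """One toy feature: type-token ratio (vocabulary richness)."""
    words = text.lower().split()
    return len(set(words)) / len(words) if words else 0.0

def fit_linear(xs, ys):
    """Closed-form simple linear regression: y ≈ a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Step 2: a (synthetic) training set of human-graded essays.
training = [("the cat sat on the mat the cat sat", 40.0),
            ("novel arguments supported by diverse credible evidence", 75.0),
            ("repeated repeated repeated repeated words words", 35.0),
            ("each sentence introduces a genuinely distinct idea here", 80.0)]

# Steps 3-4: choose a model (here, linear) and train it on the feature.
xs = [essay_features(t) for t, _ in training]
ys = [g for _, g in training]
a, b = fit_linear(xs, ys)

# Step 5: score an unseen essay.
predict = lambda text: a * essay_features(text) + b
print(round(predict("a fresh and varied argument with wide vocabulary"), 1))
```

A production system would of course test against a held-out set of unseen, annotated essays (step 5 in the list above) before any grade is trusted, and would pair scores with the personalised feedback described in step 6.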
Common areas of weakness of students’ academic writing
Academic writing is an essential skill for students in higher education. However, there are common areas of weakness that students often struggle with. Here are some of the most common areas of weaknesses for students' academic writing in higher education:
- Poor organization and structure: Students often struggle with organizing their ideas and structuring their writing in a way that is clear and logical. This can make their writing difficult to follow and reduce the overall effectiveness of their argument.
- Weak thesis statements: A strong thesis statement is essential for guiding the direction of an academic paper. However, many students struggle to develop a clear and concise thesis statement that effectively summarizes the main argument of their paper.
- Inadequate research and citation skills: Effective academic writing requires the use of credible sources to support arguments. However, many students struggle with finding appropriate sources, evaluating their credibility, and correctly citing them in their writing.
- Poor grammar and syntax: Grammatical errors and poor syntax can make writing difficult to read and understand. This can detract from the clarity and effectiveness of students' arguments.
- Lack of clarity and concision: Academic writing should be clear and concise, but many students struggle with using language that is overly complex or wordy. This can make their writing difficult to read and reduce the impact of their argument.
- Inappropriate tone or style: Students may struggle to adopt an appropriate tone or style for academic writing, which can lead to writing that is overly informal or lacking in scholarly rigor.
- Failure to address the assignment prompt: Finally, students may struggle with addressing the assignment prompt in a way that fully meets the expectations of the instructor. This can result in writing that is off-topic or fails to fully address the main requirements of the assignment.
Overall, addressing these common areas of weakness in academic writing can help students improve their writing skills and achieve greater success in their academic pursuits.
Epilogue: For comparison with the above, here are some ChatGPT responses to prompts by Chaoran Yang, a student on my consulting module. I suggested that all students in that cohort try ChatGPT for idea generation at the start of a consulting exercise for the Warwick Esports centre. In line with my findings above, the students also felt that ChatGPT did not provide any major insights – possibly because we did not ask the right questions and, more importantly, because ChatGPT is still in its infancy.
In our team we have spent over 2,000 hours developing the Warwick AI Essay Analyst. We used a mixture of non-AI rule-based statistical features and deep-learning algorithms and frameworks, e.g., PyTorch, the Hugging Face ecosystem, and transformer models (for further information on our AI-based tool see here: https://nationalcentreforai.jiscinvolve.org/wp/2022/11/16/interested-in-receiving-formative-feedback-on-your-draft-essays-and-dissertations-on-demand-introducing-warwicks-ai-essay-analyst/).
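To give a flavour of what a non-AI rule-based statistical feature means in this context, here is a minimal illustrative sketch. This is not the Essay Analyst's actual code, and the thresholds and checks are invented for illustration only:

```python
import re

def formative_feedback(text, max_avg_sentence_len=25):
    """Toy rule-based checks on a draft: average sentence length and
    the presence of linking words. Thresholds are illustrative."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if not sentences:
        return ["No text found."]
    avg_len = sum(len(s.split()) for s in sentences) / len(sentences)
    feedback = []
    if avg_len > max_avg_sentence_len:
        feedback.append(f"Average sentence length is {avg_len:.0f} words; "
                        "consider splitting long sentences.")
    if not re.search(r"\b(however|moreover|therefore|nevertheless)\b",
                     text, re.IGNORECASE):
        feedback.append("Few linking words found; signposting may help flow.")
    return feedback or ["No rule-based issues flagged."]

draft = ("This essay argues one point. It then argues another point. "
         "However, the evidence is mixed.")
print(formative_feedback(draft))  # → ['No rule-based issues flagged.']
```

Features of this kind are cheap and explainable; the deep-learning side of a tool like ours handles the judgements (argument quality, coherence) that simple rules cannot.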
With the current progress in the field of generative AI, future tools will be developed faster – let’s work together to ensure that all tools, whether developed in-house or bought / endorsed by the university, have robust ethical underpinnings. My final suggestion for readers is to review the Ethical guidelines on the use of artificial intelligence (AI) and data in teaching and learning for educators, produced by the Publications Office of the European Union: https://data.europa.eu/doi/10.2766/153756
February 13, 2023
AI for Good: Evaluating and Shaping Opportunities of AI in Education
By Isabel Fischer, Leda Mirbahai, and David Buxton
Following the rise in awareness of the opportunities (and threats) of artificial intelligence (AI) in education, we have created a task and finish group which aims to review and ‘imagine’ the opportunities and challenges of AI in education, including assessments. Our vision is to deploy AI as a tool to support all students, independent of background and socio-demographic characteristics, to be successful in their studies and in their future work, while ensuring academic integrity, and to help educators feel confident in using AI effectively to promote learning. We are working in five (sub)groups:
- General AI in Education (AIEd) Opportunities & information sharing
- Novel and Diverse Assessment Designs
- Feedback, Marking, Authorship Detection
- Designing Teaching Content - ‘what is out there being developed?’
- 'Red Team': AI Ethics and Academic Integrity
As we are still keen for colleagues from within Warwick, other institutions, and the wider community of stakeholders to join us, here is some further information on each (sub)group:
1) General AI in Education (AIEd) Opportunities & information sharing: We review how to capture, shape, and disseminate the opportunities for both learner-facing and educator-facing AI, mainly in HE but also considering how HE can support the secondary and even primary school sector (e.g. helping teachers experiment with different forms of AI in a low-stakes environment). We also consider the benefits that AI offers, such as reducing inequality and promoting fairness and democratisation, evaluating how we can support SDG 4 (equitable and quality education) and SDG 10 (reducing inequalities). We want to help educators understand how they might embrace recent AI developments in their professional practice. Combined with sub-group / Strand 5, the ‘red team’, we also want to inform colleagues about research (similar to mini literature reviews) on topics such as Algorithmic Fairness.
Target Output: A WIHEA page that is informative for colleagues new to AIEd (explanations, links to other resources, links to discussions / brainstorming exercises / blogs, suggestions for their practice)
2) Designing Assessments: We review the opportunities for designing and setting diverse assessments (Learner-facing), including embedding our work within our different Learning Circle’s work. It is in this strand that most of the student co-creation will take place.
Target Output: WIHEA page, blogs, and talks
3) Feedback and Marking: We review the opportunities of using AI for formative feedback (learner-facing), summative feedback (educator-facing), ‘AES – automated essay scoring’ (educator-facing), and stylometry (authorship authentication), as well as ChatGPT detection. One aspect of this strand (but not constrained to it) is ‘Move fast, Break fast, Learn fast’ – running and testing small-scale experiments (e.g., consulting students will experiment with mind maps this term and can then, but do not have to, submit their work to the Warwick AI Essay Analyst for formative feedback, so that we can analyse their work).
Target Output: A WIHEA page that disseminates information and possibly diffusion of the actual Warwick AI Essay Analyst tool at Warwick, potentially producing research output
4) Designing Module and Lesson Content & Recommendations for Institutional Developments / Purchases: Educator-facing, we review tools and initiatives that might help educators plan and organise their modules and lessons, as well as monitor their email and forum entries. This group looks at all educator-facing areas besides designing assessments (group 2) and providing feedback on assessments (group 3). It might also make recommendations to the institution on what software to build or purchase.
Target Output: A WIHEA page that disseminates information, possibly making recommendations for in-house developments / purchase of external software packages
5) A ‘red team’ acknowledges that AI is here to stay, ensures we follow AI ethics guidelines, and makes sure everybody is clear about the risks. This team also reviews and mitigates the challenges to academic integrity more broadly. Moreover, it reviews the risks of bought-in products from EdTech and tech companies, ensuring that AI ethics applies both to in-house and off-the-shelf, bought-in products.
Target Output: A WIHEA page that provides information for colleagues worried about AIEd (explanations, links to other resources, links to discussions), especially on the topic of AI Ethics and Academic Integrity (what is OK to do and what isn’t – where students / educators should draw the line). Collaborating with strand 1, this group might want to explain (via a high-level literature review / links to important research) aspects of AI Ethics / Academic Integrity, such as the concept of ‘Algorithmic Fairness’. Building on work by other groups, e.g., last year’s ‘Online Assessment Data Ethics Group’, this group might want to develop a proposal for SLEEC (https://warwick.ac.uk/services/gov/committees/sleec/) and/or provide guidance and advice to EPQ on suitable policy and guidance where appropriate.
Proposed Overall Target for entire Task and Finish Group, i.e. across the five groups / strands: Have some tangible outputs (WIHEA page, blogs, talks) that support colleagues when they embrace change in an ethically sound way that respects all stakeholders, especially learners and educators. Ideally collaborating with other universities, other education providers, and industry. Possibly develop a proposal for SLEEC and/or provide guidance and advice to EPQ on suitable policy and guidance where appropriate.
Please email Isabel.fischer@wbs.ac.uk if you are interested in joining any of the groups.
Please email Leda.Mirbahai@warwick.ac.uk if you are interested in joining our open WIHEA Diverse Assessment Learning Circle with interesting talks, such as our talks this month on Synoptic Assessments and on Democratising Assessments.
November 14, 2022
One year on: Progress update on our Diverse Assessments Learning Circle
Isabel Fischer (WBS) and Leda Mirbahai (WMS)
One year ago we created an open WIHEA Learning Circle on Diverse Assessments. Since then we seem to have been building a reputation as the ‘godparents of assessments’. In addition to contributing to diversifying assessment strategies across Warwick, we aim to work towards equity in our assessment practices and to improve the student experience. Assessments, if used effectively, are key to promoting learning for our students.
To encourage reflection and to drive change in how we use and view assessments in our programmes, we hosted a series of keynote speeches to start our regular meetings. Here is one example, from Kerry Dobbins, Academic Development Centre, on how to create an effective assessment strategy (drilling down – or up – from institutional, via course, to module level):
My aim for this presentation was to highlight the conflation that often occurs between assessment ‘strategies’ and assessment ‘methods’. The term ‘strategy’ is often used when we are actually referring to the mode of assessment, e.g. ‘our assessment strategy is coursework or an online exam’. It is important to disentangle these terms so that we can take an explicitly strategic approach to designing assessments that supports inclusion at all levels, i.e., module, course/programme and institution. An assessment strategy develops a shared and holistic view of the course/programme between students and academics. At a macro level, there needs to be constructive alignment between module learning outcomes (LOs), course/programme LOs and graduate attributes. In this way, a programme level view is taken to what LOs are being assessed across modules and how. For diverse assessments this is extremely important because it ensures that a holistic view is taken in relation to how comfort with, skills for and literacy of different types of assessment methods are developed and scaffolded for students as the programme progresses. This strategic and holistic view also recognises the various transition points of the students’ journey; so first year assessments may start to introduce elements of doing things differently, that are built on in the second year, etc.
In essence then, a strategic approach is vital for inclusive assessment practices as it provides an explicit framework for developing assessment literacy skills and for assignment feedback to be clearly directed towards feeding forward into future assessment activities. Taking a strategic approach also provides greater opportunities for teams to develop a coherent view about the purposes and values of assessment; and how those shared values are threaded through the course or programme. Assessment is not value-free as we are always conveying value messages to students about what we assess and how. A programme strategy allows us to really consider our values and what we are trying to achieve with our assessment practices and processes overall.
Assessment strategy also occurs at the module level. Again, at this level the strategy is not the mode of assessment but how support to achieve within the assessment is structured into the module. For example, how is assessment and feedback literacy designed into the module curriculum? What does the pre- and post-assessment support look like? What is the rationale for the mode of assessment being used? How is assessment (formative and summative) being used within the module to support learning, not just quantify it?
You might find the attached presentation and some of the texts below useful to review:
Boud and Associates (2010) Assessment 2020: Seven propositions for assessment reform in higher education
Brunton et al (2016) Designing and developing a programme-focused assessment strategy: a case study
Scholtz (2016) (PDF) The assessment strategy: An elusive curriculum structure (researchgate.net)
If you are interested in this area, I would welcome you to get in touch: Kerry.Dobbins@warwick.ac.uk
For our learning circle we have also managed to secure funding to undertake a research project to capture both student and staff views of diverse assessments. Although the project is still ongoing, our student project officers, Molly Fowler and Pula Prakash, have managed to gather valuable data with an aim to feed into institutional considerations around assessment strategies.
Finally, if you want to find out more about our Learning Circle you can visit our webpage and you can read our previous blogs here:
Blog 1: Launch of the learning circle: https://blogs.warwick.ac.uk/wjett/entry/interested_in_diverse/
Blog 2: Creative projects and the ‘state of play’ in diverse assessments: https://blogs.warwick.ac.uk/wjett/entry/creative_projects_and/
Blog 3: Student experience of assessments: https://blogs.warwick.ac.uk/wjett/entry/a_student_perspective/
If you would like to join this learning circle please contact the co-leads: Leda Mirbahai, Warwick Medical School (WMS) (Leda.Mirbahai@warwick.ac.uk) and Isabel Fischer, Warwick Business School (WBS) (Isabel.Fischer@wbs.ac.uk).
October 10, 2022
Introducing Warwick’s “AI Essay-Analyst”
By Isabel Fischer, Zhewei Zhang, Lichuan Xiang, Aiqi Jiang, Yiran Xu and Joe Nandhakumar
Three years after its first conceptualisation, we are pleased to introduce the “AI Essay-Analyst”, an academic writing tool in support of the mission of Warwick Business School (WBS) “to enable our stakeholders to realise their full potential” and the University of Warwick’s 2030 strategy “to ensure that, irrespective of background, disability, faith, gender, race and sexual orientation, all students have access to equal opportunities to thrive and progress at Warwick”.
A recent WBS survey showed that the majority of students perceive poor academic writing as their main barrier to success. In response, a group of WBS faculty and students developed and piloted a machine-generated automated formative essay feedback tool in-house which is now being made available to an increasing number of students on an optional basis.
Academic writing feedback tools have the potential to provide students with “personalised feedback that is currently only available to a privileged minority”[1] and can enhance students’ self-determined learning: formative assessments are seen as “one of the most important mechanisms for improving student learning. Self and peer-assessment are particularly effective in formative learning as they require students to engage more fully with the assessment process”[2]. Currently, students on selected modules are offered the opportunity to trial the software by submitting their draft essays or dissertations prior to their submission deadlines. Participating students receive a personalised AI-generated feedback report of approximately 15 pages. The report includes images, charts and graphs, which students are encouraged to review prior to the formal submission of their assignment.
External providers, such as Grammarly, Turnitin Draft Coach, Bartleby, Writefull, and Hemingway Editor, also offer feedback to students; however, most of these tools focus on grammar and spelling. In addition, in most cases students have to agree that external providers can use their data. The “AI Essay-Analyst” does not use student data and is substantially more comprehensive. For example, by also including the CABS ABS ranking we can check the quality of the journal articles that are cited and referenced. In addition, we offer visualisations such as knowledge graphs and argumentative zoning[3], which are expressed as pie charts and knowledge graphs[4]. These visualisations are very much appreciated by students.
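The implementation details of the journal-quality check are not published here, but the underlying idea of matching a reference list against a ranking table can be sketched roughly as follows. This is a minimal illustration only: the ranking entries below are hypothetical placeholders, not actual CABS ABS data, and `rate_references` is an invented name, not a function from the tool.

```python
# Hypothetical excerpt of a journal ranking table (title -> star rating).
# The real AI Essay-Analyst uses the CABS ABS ranking; these values are
# placeholders for illustration only.
RANKING = {
    "journal of management studies": 4,
    "information systems journal": 4,
    "journal of business research": 3,
}

def rate_references(references):
    """For each reference string, return a (journal, rating) pair if a
    ranked journal title appears in it, or None if no title matches."""
    results = []
    for ref in references:
        matched = None
        for title, stars in RANKING.items():
            if title in ref.lower():
                matched = (title, stars)
                break
        results.append(matched)
    return results

refs = [
    "Smith, J. (2020) 'Digital work', Journal of Management Studies, 57(2).",
    "Lee, K. (2019) 'AI in HE', Unknown Quarterly, 3(1).",
]
print(rate_references(refs))
```

A production tool would need fuzzier matching (abbreviations, punctuation, edition changes of the ranking), but the sketch shows how a simple lookup already lets feedback reports flag unranked sources.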
Students who opted to take part in the project so far were very satisfied, commenting: “The overall feedback is very useful for the general understanding of your academic writing skills”, “It is quite cool and it is a new approach I never tried before”, “I have enjoyed the visualisations most since they are interactive and easy to understand” and “Grammar suggestions are useful since they show some spelling and small mistakes that I ignored before.”
Detailed student feedback on specific features included:
- The most useful are grammar suggestions, because it helps me revise the essay most directly.
- The spider graph is useful to help me understand where the essay is lacking.
- The Word Cloud is useful to help me check if the essay is on topic.
- Systematic stages of negation is helpful as it let me know if my critical thinking has been fully applied.
- For readability, this is an aspect that I usually find difficult to notice, because everything is readable in my own mind. So that is very helpful.
- The knowledge graph allowed me to see the bigger picture at a time when I was too focused on the detail. It helped me to break down my essay and also showed the correct as well as incorrect relationships between key concepts.
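The readability feature students mention above is internal to the tool, but a generic readability estimate of the kind such feedback reports commonly rely on can be sketched with the classic Flesch reading-ease formula. This is an assumption-laden illustration, not the AI Essay-Analyst’s actual implementation; the syllable counter in particular is a crude heuristic.

```python
import re

def count_syllables(word):
    """Rough syllable estimate: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    """Flesch reading-ease score: higher scores indicate easier text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * len(words) / len(sentences)
            - 84.6 * syllables / len(words))

simple = "The cat sat on the mat."
dense = ("Organisational transformation necessitates "
         "considerable interdisciplinary collaboration.")
print(flesch_reading_ease(simple) > flesch_reading_ease(dense))
```

Even this toy version captures why readability feedback is useful: as the student above notes, “everything is readable in my own mind”, whereas a formula scores the text as a reader would encounter it.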
For comments or questions please contact the project lead Isabel.fischer@wbs.ac.uk
[1] https://oro.open.ac.uk/46517/1/LAK16%20Writing%20Analytics%20Wkshp%20-%20FINAL.pdf
[2] https://www.tandfonline.com/doi/full/10.11120/plan.2010.00230040
[3] https://www.cl.cam.ac.uk/~sht25/az.html
June 13, 2022
Instructional Innovation
Including Esports in the curriculum to prepare for the Metaverse(s)
By Dr Isabel Fischer, Reader in Information Systems, WBS
Following the launch of our Warwick Esports centre, in collaboration with the Esports team Jack Fenton and Elenore Jiawen Li, we included Esports in four WBS modules (Digital Transformation, Design Thinking for Digital Innovation, Digital Marketing Technology and Management, and Developing Consulting Expertise). This allowed management students to experiment with and to reflect on digital innovations and to find solutions to problems. Specifically, we wanted to encourage students to develop tangible ideas for the future metaverse ecosystem.
While learning about technologies and business models related to Esports and the Metaverse, students were also able to reflect on the convergence of technology, ethics, science, psychology and digital wellbeing, as well as on the impact on environmental and social sustainability. Teaching delivery was accompanied by authentic assessments, with students able to choose their own topics. These novel assessment formats (vlogs, blogs, board papers) had been introduced in the previous year, which allowed for comparisons.
On the one hand, we found that the quality of submitted assignments improved, with students seemingly much more creative and technologically ‘savvy’, both in their choice of topics and content and in their delivery formats. On the other hand, module evaluations showed that including the Metaverse and Esports early in the module ‘hyped’ students and whetted their appetite to ‘fully’ understand the potential applications of the Metaverse, despite the Metaverse(s) still being conceptualised. While previous students were happy with cardboard-based headsets using their own mobile phones, some of this year’s students would have appreciated working with sophisticated VR headsets, possibly because our teaching delivery was further hyped up by coinciding with Microsoft’s $68.7bn acquisition of Activision Blizzard, which clearly shows the current potential of the gaming industry.
Finally, here are some testimonials from students new to Esports, provided after the initial seminar on Esports:
- I am doing the Design Thinking module where we had a chance to join the esports world. This enables us to think creatively! I love our Warwick Esports Centre.
- Thank you for letting us use the Esports Centre - it was really fun and easy to learn.
- First time and it was a very enjoyable experience. Very well organized and easy for beginners.
- Really enjoyable experience, great equipment.
- The game is really fun to play. It’s quite unexpecting and fun which the seminar works. I enjoyed it a lot. Thank you for the experience.
- Thank you for giving students such a good opportunity on campus. It really is a good way to bring people from different backgrounds together.
For further information on this initiative you might want to listen to this podcast: https://anchor.fm/ldcuwarwick/episodes/Blogging--AI-Marking--Online-Learning--Communication--Metaverse--Esports-e1d8efu
March 14, 2022
Interested in diverse assessments? – Isabel Fischer et al.
Interested in diverse assessments? Join our learning circle for an exciting grant-funded project on the future of assessments.
Authors: Isabel Fischer; Leda Mirbahai; Lewis Beer; Dawn Collins; Peter Fossey; Celine Martin; Natasha Nakariakov; Pula Prakash; Farrah Vogel-Javeri
We have recently created an interdisciplinary learning circle which aims to optimise the learning opportunities for Warwick students. We want to ensure that teaching and learning opportunities are inclusive and cater for our diverse student community. Assessment and feedback are critical stages in the learning process. Using diverse assessments will ensure that students are not unfairly disadvantaged or advantaged by a specific form of assessment, although it is worth noting that what may count as diverse assessment in one faculty may not necessarily be seen as a diverse assessment approach in another. Using diverse assessments therefore comes with its own challenges and barriers, such as:
- Diversification without sufficient opportunity for students to practise and become familiar with new and different forms of assessment, disadvantaging groups of students who may not be as familiar with certain styles of assessment
- The resource and time commitment needed for familiarisation, which reduces uptake of, and engagement with, more innovative assessment approaches by faculty
Therefore, the aim of this learning circle is to capture both staff and student experience of diverse assessments and to involve students, staff, and other stakeholders in shaping the future of assessments. Furthermore, the learning circle aims to develop practical recommendations on overcoming some of the challenges associated with the use of diverse assessments, which will significantly benefit the community.
To achieve this we need your support!
If we have not yet convinced you to join our learning circle, then read on:
Vision: Our vision is to foster an inclusive environment where assessments are designed and developed in partnership with students, staff, and external stakeholders, to effectively promote learning, valuing students’ uniqueness and considering their future employment(s) and wellbeing, as well as the social and environmental responsibility and sustainability of the wider community.
Mission. Our mission is to:
- Gather existing data on practices around use of diverse and inclusive assessments, including Warwick staff and student experiences
- Develop shared understanding of principles and practice of diverse assessment
- Develop an evaluative framework for measuring the success of diversified assessment strategies at module, year, and course level
- Capture student and staff views on diverse assessments
To help us achieve our mission we have successfully applied for and been awarded WIHEA funding, which will enable us to capture staff and student experiences of some of the diverse and innovative assessment approaches used in different disciplines, addressing key questions around: 1) perceptions of diverse assessments from a staff and student perspective, 2) practical tips for successful application of the assessment method and marking, 3) communicating assessed skills and requirements, and 4) overcoming challenges. To achieve this, we will conduct interviews with staff and students and will share resources, including but not limited to examples of assessments, marking and feedback rubrics, and assessment briefs.
For further information on joining the learning circle please contact the co-leads: Leda Mirbahai, Warwick Medical School (WMS) (Leda.Mirbahai@warwick.ac.uk) and Isabel Fischer, Warwick Business School (WBS) (Isabel.Fischer@wbs.ac.uk)
Stay tuned for further updates and blogs on our initiative, such as: Creative Projects and the ‘state of play’ in diverse assessments – Lewis Beer
January 10, 2022
Scholarship of Teaching and Learning (SoTL): Considering and measuring impact
By Dr Kerry Dobbins (Assistant Professor, Academic Development Centre) and Dr Isabel Fischer (Reader in Information Systems at Warwick Business School)
Questions about how to evaluate and measure the impact of Scholarship of Teaching and Learning (SoTL) activities have always generated much lively discussion and debate within the SoTL community. For example, a recent LTHE Tweetchat focused on the topic of ‘Breaking boundaries: career progression and education-focused roles’. At the heart of the discussion was the nature and purpose of SoTL in relation to career advancement. Two specific questions discussed were ‘what forms and outputs can SoTL take?’ and ‘how should quality and impact be measured?’ In recent times, these questions have become more important than ever for all colleagues on education or teaching-focused contracts due to their connection to institutional promotion criteria. Routes to promotion for these colleagues will often involve criteria including:
- Making a significant contribution to pedagogy.
- Gaining recognition for quality and impact of scholarship.
- Demonstrating educational impact and influence within and beyond the University.
- Producing scholarly outcomes that advance learning, teaching and assessment.
Outputs and demonstrable impact of SoTL underpin these criteria and highlight why the recent Tweetchat was such a topical discussion. Considering the various forms or outputs that SoTL can take connects to how impact from those outputs may be measured or viewed to have made a significant contribution. The Twitter community identified a wide range of forms that SoTL outputs can take. These included:
- Conference/webinar presentations
- Publications
- Podcasts, blogs, vlogs
- Infographics
- Content for repositories, e.g. the National Teaching Repository
- Books
- Invited talks
- Staff/student development training
- Well-designed courses/modules
There was much leaning towards open access forms and critical scrutiny occurring through the teaching community, rather than tying outputs largely to journal publications and scrutiny through the standard academic peer review process. This leaning is still in line with the SoTL principles of sharing findings for communal use and review, but importantly the Twitter community placed considerable value on practitioner-focused scrutiny.
There was also recognition that different outputs serve different purposes and that any output can have an impact. This takes us into the realms of questioning what ‘impact’ means in SoTL and how it might be measured. Again, the Twitter community identified various ways that ‘impact’ of SoTL could be considered and/or measured:
- Policy changes – national, institutional, departmental, etc.
- Creation of staff development provision.
- Explicit recognition and reward of teaching through, for example, promotion and recruitment criteria.
- Improved module/course evaluations and student grades.
- Enhanced student engagement.
- Paper citations, usage metrics.
- Engagement in networks or communities of practice.
- Improved resourcing for teaching/development.
- Changes in colleagues’ practices.
- Engagement in collaborative activities.
It is interesting to see from the responses the different levels at which impact may play out (e.g. individual, institutional, etc). Of course, one of the ultimate aims of SoTL is to understand how students learn effectively so that learning experiences can be enhanced. However, SoTL is also recognised as an essential component of academic professional development and a mechanism through which HE teaching as a profession is advanced. As the responses above show, this wide remit of SoTL means that impacts from its activities can take many forms.
It is also apparent that some of the impact examples offered above may take longer to realise than others and be harder to measure in a quantifiable sense. A key question emerging in the Twitter discussion was whether SoTL impact needs to be measurable. This again takes us back to the question of what ‘impact’ means in SoTL and more specifically, what type of impact is being sought.
Different stakeholders (e.g., individuals, community groups, institutions) may of course be seeking different types of impact from SoTL activities but in reality, the needs of the different stakeholders are not so easily separated. There continues to be a challenging balancing act being played out in SoTL between it being an activity to develop individual and community practices, and it becoming a significant feature of aspects like promotions criteria. Whilst inclusion in promotions criteria demonstrates institutional recognition and reward of SoTL, it adds a layer of trying to measure or capture a narrative of identifiable and evidence-based impact that resonates with a panel who may or may not be familiar with the various lenses and dynamics of SoTL.
What all of this leads to is the need to keep engaging in institutional conversations about SoTL so that the purposes, outputs and impacts of this activity are not viewed in overly reductive terms and the various lenses and levels identified by the Twitter community are not overlooked. This is the vast flavour of SoTL and if we are too reductive, we may only taste elements of the impacts that it could have.
Two further questions should also be asked in the context of these institutional discussions:
- How to align expectations about output and impact with time allocated for SoTL, e.g. what should be the output for somebody who receives a workload allocation of, say, two to three hours a week versus somebody who receives one to two full days per week?
- How might expectations about output and impact be aligned at an inter-disciplinary, institutional level to encourage transferability?
It is important to ensure that discussions to define impact more clearly (and broadly) at an institutional level do not deter us from seeing the sharing of SoTL work and findings as an activity worthy in and of itself. At Warwick, we aim to create and encourage opportunities for the sharing of SoTL activities as part of our internal communities of practice. This way inspirational SoTL findings can be spread and impact upon colleagues’ practices, independent of views about measurability.
December 02, 2021
Using Machine Learning to offer students optional feedback on their draft essays: A joint initiative with Progressay
By Rebecca Mace (Progressay), Moktar Alqaderi (Progressay), and Dr Isabel Fischer (Reader in Information Systems at Warwick Business School)
As part of their Digital Marketing and Technologies module, WBS students had the opportunity this summer to receive feedback on their draft assignment essays generated by Progressay, an EdAI organisation. The initiative was in response to students asking for more academic writing support, and the project received formal ethics approval from the university. Students who decided to participate found the recommendations they received useful, e.g., ‘I found the graph that showed where the references come from very useful. It is good to see what kind of research my peers do’ and ‘I think it is relatively objective and trustable, because the different evaluations it gave were consistent with my expectations.’
We have already discussed the importance of ethics for this type of project in previous articles and blogs, e.g.:
- https://journals.sagepub.com/doi/10.1177/2043886920961782#articleCitationDownloadContainer
- https://www.wbs.ac.uk/news/five-reads-you-need-to-make-ai-ethical-and-trustworthy/
- https://blogs.warwick.ac.uk/wjett/entry/ai_ethics_for/
Rebecca Mace, from Progressay, explains her thoughts here on the ethical foundations of Progressay generally, and of this project in particular:
Progressay is a disrupter in the EdTech world, not only due to the technology, gamification and feedback systems it offers, but also due to its deeply human desire to change things from the inside. We are educationalists, not technologists, at heart. Although we work developing EdTech and EdAI, our fundamental aim is to make things better for those who struggle to find ease of access with regard to learning. Our ethical value system is almost hardcoded into everything we do. Here is how:
- We are acutely aware of the potential for algorithmic bias and seek to avoid it by working with schools and universities that have significant diversity in their student populations. This goes a long way towards ensuring that the training data for the machine learning model does not reflect common problems such as race or class bias.
- We firmly believe that access to education is a human right; however, having access is about so much more than the ability to attend – it is about feeling you can fully participate in the process. Truly understanding the teaching and learning available is fundamental to a deeper understanding of ‘access’. We facilitate this through gamified and adaptive learning activities for students.
- We focus especially upon making higher achievement an understandable process, and we understand what reduced transparency within the marking process can do to student aspiration. Our tool marks the essay and shows the student/lecturer in detail how the generated grade was arrived at. It does this using written feedback and infographic dashboards, as well as a series of targets for how to improve. Aspiration is translated into achievable reality.
- We adopt an honest and open approach that gives students, lecturers and parents access to, and an understanding of, how the system works. This is presented in understandable and easily accessible dashboards, and the information is designed to be immediately usable. Students and lecturers can feel informed and knowledgeable about fine-grained information relating to their work. Transparency is facilitated through fairness and trust.
- Humanity, not technology, is at the core of everything we do. Education is about the quality of relationships that can be developed, fostered and maintained. We have specifically designed our entire platform to retain this educational ideal, adopting an “augmented” approach where humans are helped rather than replaced. This positively impacts areas such as marking workload, leaving increased time (and energy) for lecturers to focus on in-depth knowledge of their students’ strengths and weaknesses. It also retains a human in the loop throughout, with lecturers having the option to override the system, change feedback, or offer alternative comments to their students. Furthermore, the system has been developed to promote student agency: deeper engagement with one’s learning through transparency of grading, coupled with gamification to enhance understanding, has been shown to increase student efficacy and to have positive impacts on motivation and engagement.
In short, we have deliberately and mindfully developed our platform to reflect our ethical values. At its heart is a drive to enhance social mobility by democratising access to education. Those involved in developing the platform have an in-depth understanding of educational theory, as well as years of experience teaching, lecturing, and working with students at all levels. They know what limited transparency, conscious and unconscious bias, a lack of motivation, discrimination, and reduced expectations can do to a student’s educational aspiration. It is out of personal experience and a real desire for change that we have sought to develop a tool that speaks to these issues directly.
For more information about this project please contact: