April 17, 2023

Establishing Classroom Relationships

by Robert Smith

Developing a safe and predictable environment can have an enormous impact, both on pupils generally and on a specific identified pupil. Bohn, Roehrig, and Pressley (2004) found that effective teachers introduce and maintain routines at the beginning of their relationship with each group. Doing so enables pupils to predict events throughout the lesson, helping them to feel safe. This feeling of security results in higher engagement and less problem behaviour. While this supports the behaviour and learning of all pupils, it can be particularly beneficial to individuals with higher needs; for example, though all pupils benefit when alerted to upcoming change, this predictability is especially supportive for individuals who are distressed by unexpected change (Kern and Clemens, 2007).

Similar to the early introduction of routines, the immediate establishment of rules affects how safe a pupil feels in the classroom. Kern and Clemens (2007) advise that there be no more than five rules to aid memorisation, that each one be positively worded, and that they be displayed prominently in the classroom, serving both as a reminder to pupils and as supporting artefacts for the teacher when reinforcing the rules. They suggest that engaging students in the creation of the class rules is beneficial because it gives pupils a sense of involvement, shows that the teacher cares about what they think, and offers them choice; providing opportunities for choice is listed by Kern and Clemens (2007) as important at both class-wide and individual levels.

Kern and Clemens (2007) collated literature on the benefits of antecedent interventions and concluded that implementing such strategies can create a structured learning environment beneficial to most pupils. They divide antecedent strategies into two groups: class-wide and individual. They recommend that school-wide strategies be implemented before class-wide ones are considered, although school-wide strategies fall outside the scope of their research. School-wide implementation improves pupils' behaviour, which in turn makes class-wide interventions more successful. Once class-wide strategies are in place, teachers can introduce individualised interventions to support pupils who have not adequately responded to general strategies. These individual interventions can only succeed if they are tailored to the needs of the specific pupil and environment, so the teacher needs a solid understanding of what those needs are.

I am particularly interested in Kern and Clemens’ suggestion to involve students in rule making, creating as it does an element of accountability. I will incorporate this into my practice when I have the opportunity. Despite the mostly sequential nature of maths, I can also see plenty of opportunities for letting classes choose how to approach learning (visual scaffolding, bullet-pointed steps, etc.) as well as larger decisions, such as the order they learn non-sequential topics.

Typically, maths is taught using large numbers of repetitive questions that all practise the same new skill, yet this also increases the likelihood of problem behaviour (Kern and Clemens, 2007). I was interested to read about the benefits of interspersing simple, loosely related questions within the current work, and that pupils preferred this type of work despite there being more questions to answer (Kern and Clemens, 2007). These simpler questions could be used effectively as a form of retrieval practice, further benefiting the learning.

References:

Bohn, C.M., Roehrig, A.D., and Pressley, M. (2004) 'The first days of school in the classrooms of two more effective and four less effective primary-grades teachers.' Elementary School Journal, 104, pp. 269–287. https://doi.org/10.1086/499753

Kern, L. and Clemens, N.H. (2007) 'Antecedent strategies to promote appropriate classroom behaviour.' Psychology in the Schools, 44, pp. 65–75. https://doi.org/10.1002/pits.20206


April 11, 2023

Warwick Wellbeing Pedagogies Library

This resource has been ‘co-developed by students and teachers to share practice and support our community to embed and sustain wellbeing in the curriculum, when planning modules, courses, and assessments’. The collation of pedagogical practices aims to support the enhancement of: student-centred environments; intercultural and international integration; emotional intelligence; and staff and student wellbeing.

https://warwick.ac.uk/fac/cross_fac/academy/funding/2016-17fundedprojects/wellbeingpedagogieslibrary


April 03, 2023

Democratising the feedback process

Blog 7 of our Diverse Assessment Learning Series, based on a recent keynote address as part of the Diverse Assessments WIHEA Learning Circle


By Dr Linda Enow, from Newman University, Birmingham UK

We need to rethink feedback in Higher Education (HE). To examine feedback in HE, this contribution engages with democratic principles and, through Audio-Visual Feedback (AVF), interrogates current feedback practices. Current gaps in feedback research include understanding the nature of student engagement and interaction with feedback, the relational constituents of feedback processes, and the role of technology in supporting feedback processes. This paper posits that the power imbalance within feedback is a barrier to effective feedback engagement and interaction. Embedded in the now-challenged conceptualisation of a feedback giver and a feedback recipient is an implicit power imbalance. A further challenge for feedback is the over-reliance on written feedback, within which is arguably entrenched the ‘product’ conceptualisation of feedback. Through the exploration of the democratic and cognitive requirements of feedback, this contribution traces an outlook which values equitable relationships and emphasises the positioning of feedback as a process. Insights on feedback processes in this piece have emerged from empirical work on audio-visual feedback.

Where are we in HE with assessment design?

Significant strides are being taken to design diverse assessment tasks in HE. These tasks are designed with due consideration of the diverse student population and established knowledge of andragogy. Typically, assessment design aims to showcase inclusion. With graduate outcome requirements for HEIs in the UK, assessments also engage with the more utilitarian concern of applicability in workplace settings. Perhaps assessments should be more dynamic, for instance generated from cohort identity and depth of knowledge of students' practice requirements, rather than remaining static. In any case, the progress made with assessment design and the thinking behind assessment practices demonstrates the ongoing effort of Higher Education Institutions (HEIs) to be diverse in their practices, and this is a welcome effort in the right direction.

As we diversify assessments, is feedback left behind?

In the context of assessment, the corresponding element of feedback is left behind. Written feedback remains the dominant feedback mode. A small proportion of staff engage with audio feedback, typically as a one-off. Audio-Visual Feedback (AVF), despite its demonstrated multimodal composition (see West & Turner, 2016), is minimally used compared to written and audio feedback (Lowenthal, 2021; Nicol, 2012). Audio-visual feedback is inclusive, multimodal, and precise, and it retains its quality over time, hence guaranteeing portability. The nature of AVF embeds verbal feedback with access to the written assignment, and the use of technology enables precision. Verbalising naturally offers more depth of explanation, with the added advantage of capturing the relational constituents of feedback. AVF is a suitable format for the thinking, or the cognitive composition, of the feedback to become explicit (see Nicol, 2021). For all involved in the feedback process, engagement and interaction are both enhanced. A summation of these strengths is reflected in Payne et al.'s (2022) positioning of audio and video feedback modes as humanising.

We no longer ‘give’ feedback

Contemporary research on feedback processes tracks the evolution of feedback from the now defunct perception of information given to students, to the contemporary understanding that, first, feedback is a process and, second, feedback is dialogic, ongoing and embedded with andragogical expectations, as well as aspirations of transferability and portability (e.g. de Kleijn, 2023; Winstone et al., 2022a). Equipped with the understanding that saying you are ‘giving’ feedback is no longer acceptable from an andragogical perspective (Winstone et al., 2022b), this paper makes the case for re-thinking feedback processes on the basis of democratic principles. Educating, especially in HEIs in the UK, rests on foundational democratic principles imbued with a variety of duties, rights and responsibilities. Power imbalance is therefore in conflict with these democratic principles. From the position of evaluating HEIs in the UK, Winstone et al. (2022b: 1524) raise the concern that National Student Survey (NSS) questions “promote an outdated view of feedback as information transmitted from teacher to student in a timely and specific manner, largely ignoring the role of the student in learning through feedback processes”.

How do we democratise feedback?

Democratic principles advocate freedoms, rights and responsibilities. If we think about feedback following democratic principles, we interrogate primarily the power imbalance in the lecturer-student relationship. The lecturer is in the powerful position of owning the feedback which they ‘give’ to the student (see Matthews et al., 2021). The sense of ownership of the feedback content is taken away from the student, and the student becomes a recipient. This power imbalance means the student has a skewed relationship with feedback which is being imposed on them. What is the student to do with this feedback which has been given to them? What if the student does not identify with the feedback which has been given? Carless (2015: 28) declares that impact on learning is limited unless students are actively engaging with feedback processes and ultimately acting on feedback. Consensus on what these processes constitute is yet to be reached, as de Kleijn (2023) calls for clarity on the activities and strategies which must be applied in these feedback processes. When democratic principles are not followed, distance is created between the student and the feedback process.

In contrast to the lecturer-student feedback element, there is strong uptake of peer feedback. Peer feedback (see Deneen & Hoo, 2023) supports feedback dialogue, evidences students’ feedback literacy and supports the development of self-regulation skills. The strong interest in peer feedback is a result of these significant benefits. Nicol and McCallum (2022) assert that powerful insights are generated from student feedback, at times more powerful than teacher feedback. In the research on which this contribution on democratising feedback draws, the strength of peer feedback is being explored and understood from the position of power. There is an opportunity to redesign assessments to capture and support these peer feedback partnerships, linking them to assessment design. Moving away from the conceptualisation of feedback as a product, towards feedback as a process, serves to further the democratisation of feedback. Careful consideration in incorporating peer feedback within this process contributes to improving feedback dialogue and to establishing tangible steps. Continuing this dialogue using the audio-visual mode opens up relational spaces and further enhances feedback processes.

Audio-Visual Feedback (AVF)

The discursive nature of AVF assists in personalising feedback and modelling the expectation of engaging with feedback. The multimodal nature of audio-visual feedback (Lacković and Popova, 2021) necessitates the active designation of time not only to engage with feedback, but to interact with it. AVF supports the transition of feedback into a process. There is a challenge embedded in this: assessment design minimally factors in feedback avenues and a suitable allocation of time for AVF. This is understandable, as written feedback is dominant in HEIs in the UK. For AVF to attain its full potential, HEIs need to re-work their assessment and feedback policies. Re-working policies is not about extending turnaround times; rather, it is about allocating more time to effectively embedding AVF as a viable feedback avenue. AVF is not without its potential challenges; one example is large class sizes and staffing limitations in some HEIs. A potential solution is to consider AVF for group assessments and group feedback. Teaching in HEIs is inherently multimodal, combining written, audio and audio-visual content. Why is feedback disproportionately in one format?

Some points to facilitate change
  1. Undertake an audit of feedback formats; recording written format, audio format, and audio-visual format.
  2. Interrogate power imbalance relating to feedback and explore institution-focused and relevant ways to improve the power imbalance.
  3. Co-design feedback processes in line with contemporary co-designing of assessments. Ensure feedback is not an add-on or an after-thought to assessments (see Ajjawi and Boud, 2018).
  4. Review assessment and feedback policies.
  5. Equity: establish feedback processes reflective of advances in learning, teaching and scholarship.
What next?

Feedback is a process involving students and lecturers in a shared space. Cognitive, sociocultural and social constructivist positions, amongst others, are at play, and space needs to be created for democratic principles. Alongside making sense of feedback, cognitive drivers pass judgement on the utility of feedback, and decision making guides the portability of feedback. This contribution advocates the externalisation of these cognitive constituents as a precursor to democratising feedback. The argument is that the student has a right to know. Knowing in this way is empowering. Through democratisation, the student evolves from the position of a recipient to that of a partner in the feedback process. Partnership conjures images of ownership, responsibility and duties. Democratising feedback through the medium of audio-visual feedback (AVF) removes the power imbalance, showcases the dialogic nature of feedback, and enhances the subsequent portability of knowledge and skills. Whilst there is empirical work on feedback as a process, implementation in the structures of HEIs is yet to follow.

This is the 7th blog in our diverse assessment series. Previous blogs can be found here:

Blog 1: Launch of the learning circle (Isabel Fischer & Leda Mirbahai): https://blogs.warwick.ac.uk/wjett/entry/interested_in_diverse/

Blog 2: Creative projects and the ‘state of play’ in diverse assessments (Lewis Beer): https://blogs.warwick.ac.uk/wjett/entry/creative_projects_and/

Blog 3: Student experience of assessments (Molly Fowler): https://blogs.warwick.ac.uk/wjett/entry/a_student_perspective/

Blog 4: Assessment Strategy – one year after starting the learning circle (Isabel Fischer & Leda Mirbahai): https://blogs.warwick.ac.uk/wjett/entry/one_year_on/

Blog 5: Learnings and suggestions based on implementing diverse assessments in the foundation year at Warwick (Lucy Ryland): https://blogs.warwick.ac.uk/wjett/entry/learnings_suggestions_based/

Blog 6: How inclusive is your assessment strategy? (Leda Mirbahai): https://blogs.warwick.ac.uk/wjett/entry/blog_6_how/

Join the Diverse Assessment Learning Circle: If you would like to join the learning circle please contact the co-leads: Leda Mirbahai, Warwick Medical School (WMS) (Leda.Mirbahai@warwick.ac.uk) and Isabel Fischer, Warwick Business School (WBS) (Isabel.Fischer@wbs.ac.uk). This LC is open to non-WIHEA members.

References

Ajjawi, R. and Boud, D. (2018) 'Examining the nature and effects of feedback dialogue', Assessment & Evaluation in Higher Education, 43(7), pp. 1106–1119. https://doi.org/10.1080/02602938.2018.1434128

Carless, D. (2015) Excellence in university assessment: learning from award-winning practice. London: Routledge.

Deneen, C.C. and Hoo, H-T. (2023) 'Connecting teacher and student assessment literacy with self-evaluation and peer feedback', Assessment & Evaluation in Higher Education, 48(2), pp. 214–226. https://doi.org/10.1080/02602938.2021.1967284

de Kleijn, R.A.M. (2023) 'Supporting student and teacher feedback literacy: an instructional model for student feedback processes', Assessment & Evaluation in Higher Education, 48(2), pp. 186–200. https://doi.org/10.1080/02602938.2021.1967283

Lacković, N. and Popova, B. (2021) 'Multimodality and socio-materiality of lectures in global universities’ media: accounting for bodies and things', Learning, Media and Technology, 46(4), pp. 531–549. DOI: 10.1080/17439884.2021.19286


March 27, 2023

Is interdisciplinarity learnable in 10 weeks?

A Student–Staff co-analysis from IATL’s Creating Digital Futures module

by Anil Dhariwal, Isabel Fischer, Jamal Lawal, Matthew Leslie, Claire Rocks, Shwetha Roy, Bryon Wandera, and Devon Williams

We created this blog collectively in one hour as part of our last Creating Digital Futures workshop. In the interest of time, instead of using Harvard referencing we entered weblinks.

In this blog we look back at a 10-week module which brought together students from different fields for the collaborative development of a digital product, which we pitched to the Warwick #Creative Futures Incubator (https://warwickinnovationdistrict.com/creative-futures-incubator-application/). The purpose of the blog is to reflect on whether students can effectively learn and apply the fundamentals of interdisciplinarity, and if 10 weeks is a suitable amount of time for the benefits of interdisciplinarity to take effect.


Source: DALL·E 2 by OpenAI, retrieved 15/3/23 using the prompt ‘Interdisciplinarity Creating Digital Futures’


From the educator side, our research has shown that a key pedagogical challenge is to encourage students to move beyond scanning information towards critical engagement and action (Preuss, Fischer, Luiz, 2023, see: https://onlinelibrary.wiley.com/doi/10.1111/hequ.12429). Payne (1999, p.173) emphasises the potential of interdisciplinarity to enhance the ‘democratisation of knowledge construction processes in society’, with Klein (2000, p.18) explaining that ‘Interdisciplinary knowledge strengthens connections between disciplines and in that process it weakens the division of labour in disciplines, exposes gaps, stimulates cross fertilisation and creates new fields of focus for knowledge inquiry’.

Problem-based learning is often thought of as a pedagogical approach that can facilitate interdisciplinary learning. Adopting this approach, Creating Digital Futures asked students to create a product or service at the intersection of digital technology, creativity and the Global Sustainable Development Goals. Focusing first on identifying a problem using Design Thinking was intended to encourage students to step outside their disciplinary boundaries to search for solutions. As research has shown that we cannot take for granted that interdisciplinary learning will take place, and that it is necessary to address issues of interdisciplinary learning specifically (Stentoft et al., 2017, https://doi.org/10.1177/1469787417693510), in Creating Digital Futures we scaffolded the interdisciplinary learning by providing 10 weekly taught sessions covering topics such as working in interdisciplinary teams, design thinking, ethical frameworks, storytelling with data and presentation skills.

A ChatGPT (2023) prompt tells us: ‘Interdisciplinarity has become a buzzword in academia, with many universities and institutions promoting collaborative learning and research. But can students truly learn the skills and mindset needed for interdisciplinary work in just 10 weeks? In this blog, we explore [if] it is possible to truly break down the boundaries between disciplines and instil a collaborative mindset in a matter of weeks.’ (ChatGPT, 2023a)

We - the students - found that our interdisciplinary module has a number of benefits, including:

  • We were able to develop real world skills such as: critical thinking; communication; collaboration and the ability to synthesise information (Anil, Econ)
  • Interdisciplinarity gives the opportunity to see areas that you may lack in due to your subject background and allows you to learn and improve them with the help of others with different perspectives (Byron, Biomed)
  • Providing a better understanding of the utility of one’s own skills from their field in the context of the real world (Devon, Comp Sci)
  • Really highlights the inherent interdisciplinarity of traditional subjects (Jamal, PPE)
  • The helpfulness of application of theory in a practical matter, experiencing, growing and learning from and with others (Matthew, WBS)
  • Allows you to produce an impactful and cohesive piece of work that brings together the ideas from different disciplines (Shwetha, Econ).

In conclusion, students found many aspects of the module rewarding, and also reflected that it was good preparation for embracing interdisciplinarity in the workplace. A problem-based approach seems well suited to fostering interdisciplinary learning in just 10 weeks. More importantly though, students appreciated having been taught by members of five different departments. The module convenors - Claire Rocks (Computer Science) and Isabel Fischer (WBS) - would therefore like to thank Elke Thonnes (Stats) and Robert O’Toole (Arts) as well as Rachel Davis, Clare Green and Holly Smeeton (Warwick’s Creative Futures Incubator / Warwick Enterprise) for making time to take this cohort of IATL students on a journey of interdisciplinary discovery.

ChatGPT prompt to help with the initial thoughts on the blog:

ChatGPT 2023 prompt used on 15/3/23: ‘Is interdisciplinarity learnable in 10 weeks, write a short intro to hook readers in for a blog with this title with an interesting question’.


March 20, 2023

Coaching using video – developing teachers internationally: challenges and opportunities

by Anna Donnelly and Jonty Leese

What we did:

Whilst there is tremendous value in learning face to face, there are some situations in which online learning is the only viable option: international teacher education is a case in point. With the growth of demand for HEIs in the UK to undertake teacher education and development internationally, the University of Warwick sought some practical solutions to developing teachers, often thousands of miles and many time zones away.

Using video to support teacher learning (Major and Watson, 2017) and reflection (Hamel and Viau-Guay, 2019) is not a new practice, but the teacher values (Centre for Teacher Education, 2022) that thread through our course and the bespoke approach to coaching at the University of Warwick shaped how we approached using this technology.

We wanted to develop tools so that, when video was used on platforms capable of time-stamping or pausing either pre-recorded or live recordings of teachers, stakeholders could then make a variety of learning comments on the recording. We shaped the responses to encourage supportive, nurturing and developmental feed-forward.

Where is it?

We selected a platform which had the capability for key stakeholders to live-record content (trainee teachers teaching, mentor and trainee coaching meetings) and/or examine pre-recorded content with a reflective and intellectually curious lens (Centre for Teacher Education, 2022). Comments from all stakeholders could then be time-stamped to aspects of practice, and specific coaching or teaching attributes could be flagged and discussed within the platform’s comment facility. These were linked to our Professional Practice Units (PPUs), which have a specific emphasis relating to teacher education. This gave the opportunity for focused, evidence-based reflections from mentor, mentee, and university professional, closely integrating theory and practice, as well as providing the opportunity to link learners with readings and other content to strengthen the theory-practice link for all involved.

How will it be integrated into future practice?

As we develop our international course and look to expand our remote programme into teacher education ‘cold spots’ across the UK, this platform could provide a useful tool for both teacher education and professional mentor development, as well as a step change in how HEI colleagues may undertake part of their role as teacher educators.

Lessons Learnt:

Challenges
  • When recording involved people under 16, there were significant barriers in terms of negotiating the safeguarding concerns and approaches of individual schools across multiple legal jurisdictions. This is a challenge that the company providing the online platform is still addressing through the development of support materials for schools.
  • The wider issues around ethics for collecting data, GDPR policies, and CCTV protocols for under-16s proved to be an ongoing challenge and not one that was solved within the scope of this project.
Opportunities
  • There were tremendous gains to be had in terms of purposeful and focussed teacher reflection on practice, as video proved a rich source of statistics and evidence on which to base a coaching conversation. It had the potential to give agency to teacher construction of their own learning as well as supporting coaches in their examinations of the ‘reality’ of the impact of the student teaching on the children’s learning (Knight, 2021).

References:

Centre for Teacher Education (2022) Developing an ambitious ITE Curriculum. Available at: https://warwick.ac.uk/fac/soc/cte/about/ite-curriculum/ (Accessed: 3 March 2023).

Hamel, C. and Viau-Guay, A. (2019) ‘Using video to support teachers’ reflective practice: A literature review’, Cogent Education. Edited by B. Nkuyubwatsi, 6(1), p. 1673689. Available at: https://doi.org/10.1080/2331186X.2019.1673689.

Major, L. and Watson, S. (2017) ‘Using video to support in-service teacher professional development: the state of the field, limitations and possibilities’, Technology, Pedagogy and Education, 27, in press. Available at: https://doi.org/10.1080/1475939X.2017.1361469.


March 13, 2023

Recording and Sharing Mentoring Best Practice

by Jonty Leese and Anna Donnelly

What we did:

In Initial Teacher Education (ITE), the role of coaching a student teacher is crucial. For new and established mentors and trainees, seeing good examples of coaching is a powerful way to experience this process. Inspired by Knight’s coaching model (Knight, 2021) and combined with the unique lens of the Warwick Teacher Values (Centre for Teacher Education, 2022), we were able to build a framework to support understanding of the equality of roles within effective coaching. This process supports university tutors as well as school-based mentors.

To facilitate this, we recorded and edited post-lesson mentor meetings from a variety of phases (KS1-KS4) in different contexts to build a clearer understanding of effective mentoring with all current stakeholders.

These were real staff and students, so it was an authentic learning conversation and formed part of their regular weekly meeting.

We processed these to add transcripts for accessibility (W3C, 2018) and used software which created clickable, linked audio and text versions of the conversations (otter.ai).
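
For colleagues without access to a commercial transcription service, a similar time-stamped transcript can be generated with open-source tooling. The sketch below is illustrative only and is not the workflow we used: it assumes OpenAI’s open-source Whisper model (installed via `pip install openai-whisper`) and an invented file name, and its output would still need the proofreading described under ‘Lessons Learnt’.

```python
# Minimal, illustrative sketch: producing a time-stamped transcript of a recorded
# mentor meeting with the open-source Whisper model. This is an assumed alternative
# to the commercial tool (otter.ai) used in the project, not the authors' workflow.
import whisper

model = whisper.load_model("base")               # small model; larger models are more accurate
result = model.transcribe("mentor_meeting.mp3")  # hypothetical file name

# Write a simple time-stamped transcript that can be proofread before publication.
with open("mentor_meeting_transcript.txt", "w", encoding="utf-8") as f:
    for segment in result["segments"]:
        start, end, text = segment["start"], segment["end"], segment["text"].strip()
        f.write(f"[{start:7.1f}s - {end:7.1f}s] {text}\n")
```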

These were then shared on a SiteBuilder site, broken down into different categories depending on whether mentors were new to the role or more experienced. See the image below:

[Screenshot: mentoring website category buttons]

How it will be integrated into future practice:

These have been embedded into current practice for our online PGCE and will be shared through mentor training, with these as a support resource going forward for all routes.

These can be utilised as an “on demand” resource for mentors and trainees.

This forms the basis of a growing body of real-world examples, which will expand as future recordings are incorporated, ensuring that a range of contexts and examples reflecting the unique role of mentorship is captured.

Lessons Learnt:

It’s very time-consuming to record, edit and upload video – even with an organised workflow, it took longer than anticipated.

The AI used in this technology is fallible – proofreading of automated transcripts is necessary to ensure the accuracy and appropriateness of what has been transcribed.

It’s our responsibility to build time into training plans, rather than expect that these activities are additional and extra to core mentor requirements. This could form part of the compulsory 20 hours of mentor training that all mentors are mandated to complete from the start of the 2024/25 academic year.

Recording and disseminating a mentoring conversation, which is intrinsically private, offers a powerful window into another person’s world. This can help lift practitioners out of a silo mindset and open them up to seeing practical examples of different approaches within the Warwick framework – giving them agency to transform their practice and to reflect upon their own methodologies.

The videos do not need high production values in order to be valuable.

Watching a video as a standalone resource, without guided reflection and dialogue, may be a missed opportunity; watching, pausing and reflecting as part of a dialogue can stimulate rich conversations (Alexander, 2021). Using the videos alongside pertinent questions to evoke reflection and transformation of practice is the next step.

References:

Alexander, R. (2021) ‘Dialogic teaching and the study of classroom talk’.

Centre for Teacher Education (2022) Developing an ambitious ITE Curriculum. Available at: https://warwick.ac.uk/fac/soc/cte/about/ite-curriculum/ (Accessed: 3 March 2023).

Knight, J. (2021) The Definitive Guide to Instructional Coaching: Seven Factors for Success. Available at: https://www.amazon.co.uk/Definitive-Guide-Instructional-Coaching-Factors/dp/141663066X/ref=sr_1_3?keywords=The+Definitive+Guide+to+Instructional+Coaching%3A+Seven+Factors+for+Success&qid=1646652872&sr=8-3 (Accessed: 7 March 2022).

W3C (2018) Web Content Accessibility Guidelines (WCAG) Overview | Web Accessibility Initiative (WAI) | W3C, Web Accessibility Initiative. Available at: https://www.w3.org/WAI/standards-guidelines/wcag/ (Accessed: 2 January 2019).


March 06, 2023

On ‘Opportunities of AI in Higher Education’ by DALL.E and ChatGPT

Prologue and epilogue by Isabel Fischer (Warwick Business School)

Image illustrating artificial intelligence

Since the creation of our task and finish groups on evaluating and shaping the opportunities of AI in Higher Education (for further information see here: https://blogs.warwick.ac.uk/wjett/entry/ai_for_good/) we have had many very insightful ‘human to human’ discussions. The aim of this blog is, however, to show what ‘machines’ draw and write on the topic, to illustrate both the strengths and the weaknesses of generative AI.

The images at the start of the blog represent DALL.E’s take on the topic. Having always been untalented in drawing, I use DALL.E in my business teaching to show live how AI translates the topic we discuss by imitating distinct artists. Students enjoy how DALL.E (OpenAI) shows other – very creative - images while we wait for AI ‘to think & draw’ our images – there are always four images coming up for each prompt. Of course, we also discuss the risks, threats, and ethics behind AI (here an open-access case study which we wrote on AI ethics in the field of ESG investing: https://journals.sagepub.com/doi/full/10.1177/2043886920961782).

While DALL.E generates images that I could not have drawn, ChatGPT generates text that I could have written better and more concisely, with different logical priorities. I could - and actually did - create visualisations to illustrate my points. Nevertheless, ChatGPT did beat me by far on ‘speed’. For anybody who has not yet asked ChatGPT any questions, here are the ChatGPT responses based on five prompts:

  1. Opportunities for AI in HE
  2. Strategy for implementing and integrating AI in HE
  3. Developing research on the opportunities of AI in HE
  4. Developing an AI based marking and feedback tool
  5. Common areas of weakness of students’ academic writing

Opportunities for AI in HE

Artificial intelligence (AI) is transforming the world around us, and higher education (HE) is no exception. The potential opportunities of AI in HE are vast, ranging from improving learning outcomes and enhancing student engagement to streamlining administrative tasks and reducing costs. In this article, we explore some of the key opportunities of AI in HE and how they can be realized.

  1. Personalized learning: AI can help personalize the learning experience by adapting the content, pace, and style of instruction to the individual needs and preferences of each student. This can be achieved through intelligent tutoring systems, which use machine learning algorithms to analyze student performance data and provide customized feedback and support. Personalized learning can improve student engagement, increase retention rates, and enhance learning outcomes.
  2. Predictive analytics: AI can be used to analyze large datasets of student information, such as grades, attendance records, and behavioral data, to predict future performance and identify at-risk students who may need additional support. This can help educators intervene early and provide targeted interventions to improve student success.
  3. Virtual assistants: AI-powered virtual assistants can help students with routine administrative tasks such as scheduling, registration, and accessing resources. This can free up staff time and reduce wait times for students, improving their overall experience.
  4. Automated grading: AI can be used to automatically grade assignments and provide feedback to students, reducing the workload on instructors and improving the consistency and objectivity of grading. This can also provide students with immediate feedback, enabling them to improve their performance more quickly.
  5. Research and data analysis: AI can be used to analyze large datasets of research publications and help identify trends, patterns, and gaps in the literature. This can help researchers stay up-to-date with the latest developments in their field and generate new insights.
  6. Adaptive assessments: AI can be used to create adaptive assessments that adjust the difficulty and content of questions based on the student's level of knowledge and ability. This can help educators more accurately assess student performance and provide targeted feedback and support.
  7. Smart campus: AI can be used to optimize campus operations, such as energy management, facility maintenance, and security. This can reduce costs and improve the overall efficiency of campus operations.

While the opportunities of AI in HE are significant, there are also challenges that need to be addressed, such as ethical considerations, data privacy concerns, and ensuring that AI is used to enhance, rather than replace, human interaction and support. To fully realize the potential of AI in HE, it is essential that educators, researchers, and policymakers work together to develop responsible and effective strategies for implementing and integrating AI technologies.

Strategy for implementing and integrating AI in HE

Implementing and integrating AI technologies in higher education (HE) requires careful planning and consideration to ensure that these technologies are used in a responsible and effective manner. Here are some responsible and effective strategies that HE institutions can use to implement and integrate AI technologies:

  1. Develop a clear vision and strategy: HE institutions should develop a clear vision and strategy for how AI technologies will be used to enhance the student experience, improve learning outcomes, and support research. This should involve identifying specific use cases and goals for AI, as well as assessing the potential benefits and risks.
  2. Foster collaboration and communication: Implementing and integrating AI technologies requires collaboration and communication across departments and stakeholders. HE institutions should establish cross-functional teams that include educators, administrators, IT professionals, and students to ensure that all perspectives are considered and to foster buy-in and support for AI initiatives.
  3. Ensure transparency and ethical considerations: HE institutions should ensure that the use of AI technologies is transparent and ethical. This includes being clear about how AI technologies are being used, ensuring data privacy and security, and ensuring that AI is used to enhance, rather than replace, human interaction and support.
  4. Invest in staff training and development: AI technologies require specialized skills and knowledge to implement and use effectively. HE institutions should invest in staff training and development programs to ensure that educators and administrators have the necessary skills and knowledge to use AI technologies in their work.
  5. Pilot and evaluate AI initiatives: Before scaling up the use of AI technologies, HE institutions should pilot and evaluate their effectiveness. This involves testing AI technologies in small-scale pilots, collecting feedback from stakeholders, and using data to evaluate their impact on learning outcomes, student engagement, and other key metrics.
  6. Engage with the wider AI community: HE institutions should engage with the wider AI community to stay up-to-date with the latest developments and trends in AI. This includes attending conferences and workshops, collaborating with other institutions, and partnering with industry experts and researchers.
  7. Align AI initiatives with institutional goals and values: Finally, HE institutions should ensure that their AI initiatives are aligned with their institutional goals and values. This involves considering the unique needs and priorities of the institution, as well as the needs and preferences of students, educators, and other stakeholders. By aligning AI initiatives with institutional goals and values, HE institutions can ensure that they are using these technologies in a responsible and effective manner.

Developing research on the opportunities of AI in HE

There are several ways to develop ideas for future opportunities of AI in higher education and organize it as a research topic:

  1. Literature review: Conducting a literature review of existing research on AI in higher education can help identify the gaps and potential areas for future research. This can involve searching for relevant articles, books, and reports in databases such as Google Scholar, ACM Digital Library, and IEEE Xplore.
  2. Expert interviews: Talking to experts in the field, such as educators, researchers, and AI professionals, can help identify emerging trends and potential opportunities for AI in higher education.
  3. Case studies: Conducting case studies of AI implementation in higher education institutions can provide insights into the benefits and challenges of adopting AI technologies.
  4. Surveys: Conducting surveys of stakeholders such as students, faculty, and administrators can help identify their needs and preferences for AI in higher education.
  5. Workshops and conferences: Attending workshops and conferences on AI in higher education can provide opportunities to learn from experts and exchange ideas with peers.

Once you have developed ideas for future opportunities of AI in higher education, organizing it as a research topic can involve the following steps:

  1. Identifying research questions: Based on the ideas developed, identify research questions that are relevant, feasible, and significant.
  2. Defining the scope of the research: Clarify the boundaries of the research topic and define the key concepts and variables.
  3. Developing a research methodology: Decide on the appropriate research methods, such as qualitative, quantitative, or mixed methods, and develop a research design.
  4. Conducting data collection: Collect data using appropriate methods such as surveys, interviews, and case studies.
  5. Analyzing data: Analyze the data using appropriate statistical or qualitative analysis techniques.
  6. Drawing conclusions and making recommendations: Based on the findings, draw conclusions and make recommendations for future research and practice.
  7. Writing up the research: Write up the research in a clear and concise manner, following the conventions of academic writing.

When reviewing emerging trends of AI in higher education after conducting qualitative research, a theory that could be useful is grounded theory.

Grounded theory is a research methodology that involves systematically collecting and analyzing qualitative data to develop a theory or explanation of a phenomenon. The approach emphasizes the need to generate theory from the data itself, rather than imposing preconceived notions or theories on the data.

When conducting a qualitative study of emerging trends of AI in higher education, the grounded theory approach can help identify key themes and patterns in the data that may point towards emerging trends. These trends can then be used to develop a theory that explains how AI is being used in higher education, what the benefits and challenges are, and what the future implications may be.

Grounded theory can also be helpful in identifying gaps in the literature and areas for future research. By systematically analyzing the data, researchers can identify areas where more research is needed, and use the emerging trends to develop research questions that are relevant and meaningful.

Developing an AI based marking and feedback tool

Designing an AI system that can be used to automatically grade assignments and provide personalized feedback to students can be a complex task, but it can be accomplished by following these steps:

  1. Define the grading criteria: The first step is to define the grading criteria for the assignment. This involves breaking down the assignment into its key components and determining how each component will be evaluated. The grading criteria should be clearly defined and objective, so that the AI system can easily understand and apply them.
  2. Create a training dataset: To train the AI system, you need to create a training dataset that includes a set of annotated assignments. These annotated assignments should be graded by human experts according to the defined grading criteria. The dataset should be large enough to ensure that the AI system can learn to recognize a variety of different assignment types and styles.
  3. Choose the appropriate machine learning algorithm: There are several machine learning algorithms that can be used for automatic grading, including decision trees, support vector machines, and neural networks. The choice of algorithm will depend on the complexity of the grading criteria and the size of the training dataset.
  4. Train the AI system: Once you have chosen the appropriate machine learning algorithm, you can train the AI system using the annotated training dataset. The system should be trained to recognize patterns and relationships in the data, and to identify which components of the assignments are important for grading.
  5. Test the AI system: After training the AI system, you should test it using a set of unseen assignments that have not been included in the training dataset. This will help you evaluate the system's accuracy and identify any areas where it needs further improvement.
  6. Provide personalized feedback: To provide personalized feedback to students, you can use the AI system to identify areas where a student may need additional support or guidance. For example, if a student consistently struggles with a particular component of the assignment, the system can provide targeted feedback and resources to help the student improve.
  7. Continuously improve the AI system: Finally, you should continuously monitor and improve the AI system. This involves regularly updating the training dataset, evaluating the system's performance, and incorporating feedback from students and educators to refine the grading criteria and feedback mechanisms.

In summary, designing an AI system for automatic grading and personalized feedback involves defining the grading criteria, creating a training dataset, choosing the appropriate machine learning algorithm, training the system, testing it, providing personalized feedback, and continuously improving the system. By following these steps, you can develop an effective and efficient AI system that enhances the learning experience for students and educators alike.

Common areas of weakness of students’ academic writing

Academic writing is an essential skill for students in higher education. However, there are common areas of weakness that students often struggle with. Here are some of the most common areas of weaknesses for students' academic writing in higher education:

  1. Poor organization and structure: Students often struggle with organizing their ideas and structuring their writing in a way that is clear and logical. This can make their writing difficult to follow and reduce the overall effectiveness of their argument.
  2. Weak thesis statements: A strong thesis statement is essential for guiding the direction of an academic paper. However, many students struggle to develop a clear and concise thesis statement that effectively summarizes the main argument of their paper.
  3. Inadequate research and citation skills: Effective academic writing requires the use of credible sources to support arguments. However, many students struggle with finding appropriate sources, evaluating their credibility, and correctly citing them in their writing.
  4. Poor grammar and syntax: Grammatical errors and poor syntax can make writing difficult to read and understand. This can detract from the clarity and effectiveness of students' arguments.
  5. Lack of clarity and concision: Academic writing should be clear and concise, but many students struggle with using language that is overly complex or wordy. This can make their writing difficult to read and reduce the impact of their argument.
  6. Inappropriate tone or style: Students may struggle to adopt an appropriate tone or style for academic writing, which can lead to writing that is overly informal or lacking in scholarly rigor.
  7. Failure to address the assignment prompt: Finally, students may struggle with addressing the assignment prompt in a way that fully meets the expectations of the instructor. This can result in writing that is off-topic or fails to fully address the main requirements of the assignment.

Overall, addressing these common areas of weakness in academic writing can help students improve their writing skills and achieve greater success in their academic pursuits.

Epilogue: For comparison to the above, here are some ChatGPT responses to prompts by Chaoran Yang, a student who is part of my consulting module. I suggested that all students in that cohort try ChatGPT for idea generation at the start of a consulting exercise for the Warwick Esports Centre. Similar to my findings above, the students also felt that ChatGPT did not provide any major insights – possibly because we did not ask the right questions and, more importantly, because ChatGPT is still in its infancy.

In our team we have spent over 2,000 hours developing the Warwick AI Essay Analyst. We used a mixture of non-AI rule-based statistical features and deep-learning algorithms and libraries, e.g., PyTorch, the Hugging Face framework, and Transformer models (for further information on our AI-based tool see here: https://nationalcentreforai.jiscinvolve.org/wp/2022/11/16/interested-in-receiving-formative-feedback-on-your-draft-essays-and-dissertations-on-demand-introducing-warwicks-ai-essay-analyst/).
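
For readers curious what the core of such a scoring pipeline can look like in code, below is a deliberately minimal sketch using TF-IDF features and ridge regression on invented toy data. It is an illustrative stand-in only – not the Warwick AI Essay Analyst, which combines rule-based statistical features with deep-learning models – and any real system would need far more data, validation, and human oversight.

```python
# Minimal, illustrative sketch of automated essay scoring: TF-IDF features plus
# ridge regression trained on a toy set of annotated drafts. This is NOT the
# Warwick AI Essay Analyst; the texts and marks below are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Tiny annotated training set: (draft text, human-assigned mark out of 100).
training_drafts = [
    ("This essay argues, with reference to the literature, that ...", 72),
    ("AI is good. It helps people. In conclusion AI is good.", 45),
    ("The analysis triangulates survey data with interview findings ...", 78),
    ("Some points about the topic without a clear thesis statement.", 50),
]
texts, marks = zip(*training_drafts)

# Fit a simple text-regression pipeline: word/bigram TF-IDF -> ridge regression.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge(alpha=1.0))
model.fit(texts, marks)

# Predict a provisional mark for a new draft. In a real tool this score would be
# one signal among many, returned alongside formative feedback, never on its own.
new_draft = "This draft evaluates the evidence and states a clear thesis ..."
print(round(model.predict([new_draft])[0], 1))
```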

With the current progress in the field of generative AI, the development of future tools will be faster – let’s work together to ensure that all tools, whether developed in-house or bought / endorsed by the university, have robust ethical underpinnings. My final suggestion for readers is to review the Ethical guidelines on the use of artificial intelligence (AI) and data in teaching and learning for educators, produced by the European Commission and published by the Publications Office of the European Union: https://data.europa.eu/doi/10.2766/153756


February 27, 2023

A pedagogy of kindness: the cornerstone for student learning and wellness

This short THE Campus piece outlines a pedagogy founded on human connection, care and compassion that improves student learning outcomes. It suggests 12 ways that you can incorporate a pedagogy of kindness into your teaching practice:

https://www.timeshighereducation.com/campus/pedagogy-kindness-cornerstone-student-learning-and-wellness


February 20, 2023

Blog 6: How inclusive is your assessment strategy?

By Leda Mirbahai (Warwick Medical School)


Assessments are a fundamental part of the student experience, with students learning by doing – that is, by engaging with assessment tasks and then, after submission, through the feedback they receive on their performance and progress. Internal and External Examiners often ensure that assessment strategies are reliable, effective, and accurate; however, whether they are inclusive is often overlooked.

Considering the diversity of our student population, it is clear that a ‘one size fits all’ approach to assessment design and delivery is not an inclusive assessment strategy, which links with the diversification of our assessment approaches (a point we will come back to shortly). The Quality Assurance Agency (QAA), in the UK Quality Code for Higher Education, lists 10 guiding principles on expectations and practices for assessment, which are:

  1. Assessment methods and criteria are aligned to learning outcomes and teaching activities.
  2. Assessment is reliable, consistent, fair and valid.
  3. Assessment design is approached holistically.
  4. Assessment is inclusive and equitable.
  5. Assessment is explicit and transparent.
  6. Assessment and feedback are purposeful and support the learning process.
  7. Assessment is timely.
  8. Assessment is efficient and manageable.
  9. Students are supported and prepared for assessment.
  10. Assessment encourages academic integrity.

As mentioned, one criterion of a good assessment ‘strategy’ is that it be inclusive and equitable. If one really reflects on the 10 principles, they are all interlinked. For example, it is very difficult to demonstrate an inclusive assessment strategy where assessment loads are not manageable for our students or where students don’t receive an equitable level of support. With toolkits such as the ‘Embedding inclusive assessment’ reflective toolkit, a project funded by the QAA, becoming available, we can reflect on the assessment strategy of our courses and programmes by considering how well we align with the attributes of an inclusive assessment. The toolkit encourages a triangulated critique of our assessment approaches by involving the major stakeholders – students, academics and leaders – in the process and asking some key reflective questions. Reflecting on the 9 attributes of inclusive assessment listed in this toolkit, most questions essentially reflect the principles of good assessment, which in my view indicates that if your assessment aligns with the principles, it should promote inclusivity.

As an academic member of staff leading the assessment strategy for a new UG course at WMS, I have been involved in the planning and delivery of a course-level assessment strategy. As the course is new, this has removed some of the challenges of trying to alter and adjust existing assessments for individual modules. This raises an important concept: assessment strategies should be seen at course/programme level. Going back to the principles of good assessment, we need to demonstrate how our assessments map to course-level, year-level and module-level learning outcomes, as well as ensuring students are supported and prepared for assessments. This highlights the gradual building of our learners’ skills and knowledge in a spiral curriculum, enabling them to receive continuous feedback on their progress in a meaningful way. Learners need to be able to visualise and reflect on their progress across the programme and, to achieve this, our assessments in one module (skills, attributes and knowledge) need to build meaningfully on the previous modules; hence a programme-level approach is more desirable.

Using diverse assessments is a great way of acknowledging that a ‘one size fits all’ approach to assessment design and delivery is not an inclusive assessment strategy. However, this doesn’t mean that we should sprinkle our assessment programmes with as many different modes of assessment as possible. Introducing too many different modes of assessment that are not revisited, or that do not utilise or build on the skills and knowledge of a learner, would just add to the level of stress encountered by our learners, as it means our students need to learn a new and unfamiliar assessment approach just to use it once! Therefore, diversification should be achieved in a meaningful way rather than just for the sake of introducing new assessment modes.

Finally, in my view one of the most important aspects of a good assessment strategy is continuous reflection and improvement. There is no fixed, perfect assessment strategy, as our learners and their requirements are constantly changing. An assessment strategy that is developed and never revised will soon become unfit for the purpose for which it was developed. I know this because I am already making a list of changes that we need to embed into our assessment strategy, and the course is only 3 years old! So I leave you with one question: how inclusive is your assessment strategy?

References

The Quality Assurance Agency (QAA) for Higher Education (2018) UK Quality Code for Higher Education advice and guidance: Assessments.

Embedding Inclusive Assessment – Reflective Toolkit (2022), a QAA-funded project: Developing a Set of Inclusive Assessment Design Attributes for Use Across the Higher Education Sector (qaa.ac.uk)

This is the 6th blog in our diverse assessment series. Previous blogs can be found here:

Blog 1: Launch of the learning circle (Isabel Fischer & Leda Mirbahai): https://blogs.warwick.ac.uk/wjett/entry/interested_in_diverse/

Blog 2: Creative projects and the ‘state of play’ in diverse assessments (Lewis Beer): https://blogs.warwick.ac.uk/wjett/entry/creative_projects_and/

Blog 3: Student experience of assessments (Molly Fowler): https://blogs.warwick.ac.uk/wjett/entry/a_student_perspective/

Blog 4: Assessment Strategy – one year after starting the learning circle (Isabel Fischer & Leda Mirbahai): https://blogs.warwick.ac.uk/wjett/entry/one_year_on/

Blog 5: Learnings and suggestions based on implementing diverse assessments in the foundation year at Warwick (Lucy Ryland): https://blogs.warwick.ac.uk/wjett/entry/learnings_suggestions_based/

Join the Diverse Assessment Learning Circle: If you would like to join the learning circle please contact the co-leads: Leda Mirbahai, Warwick Medical School (WMS) (Leda.Mirbahai@warwick.ac.uk) and Isabel Fischer, Warwick Business School (WBS) (Isabel.Fischer@wbs.ac.uk). This LC is open to non-WIHEA members.


February 13, 2023

AI for Good: Evaluating and Shaping Opportunities of AI in Education

By Isabel Fischer, Leda Mirbahai, and David Buxton

Following the rise in awareness of the opportunities (and threats) of artificial intelligence (AI) in education, we have created a task and finish group which aims to review and ‘imagine’ the opportunities and challenges of AI in education, including assessments. Our vision is to deploy AI as a tool to support all students, independent of background and socio-demographic characteristics, to be successful in their studies and in their future work, while ensuring academic integrity, as well as to help educators feel confident in using AI effectively to promote learning. We are working in five (sub)groups:

  1. General AI in Education (AIEd) Opportunities & information sharing
  2. Novel and Diverse Assessment Designs
  3. Feedback, Marking, Authorship Detection
  4. Designing Teaching Content - ‘what is out there being developed?’
  5. 'Red Team': AI Ethics and Academic Integrity

As we would still welcome colleagues from within Warwick, from other institutions, and from the wider community of stakeholders joining us, here is some further information on each (sub)group:

1) General AI in Education (AIEd) Opportunities & information sharing: We review how to capture, shape, and disseminate the opportunities for both learner-facing and educator-facing AI, mainly in HE but also considering how HE can support the secondary and even primary school sector (e.g. how to help teachers experiment with different forms of AI in a low-stakes environment). We also consider the benefits that AI offers, such as reducing inequality and promoting fairness and democratisation, evaluating how we can support SDG 4 (equitable and quality education) and SDG 10 (reducing inequalities). We want to help educators know how they might embrace recent AI developments in their professional practice. Combined with sub-group / Strand 5, the ‘red team’, we also want to inform colleagues about research (similar to mini literature reviews) on topics such as Algorithmic Fairness.

Target Output: A WIHEA page that is informative for colleagues new to AIEd (explanations, links to other resources, links to discussions / brainstorming exercises / blogs, suggestions for their practice)

2) Designing Assessments: We review the opportunities for designing and setting diverse assessments (Learner-facing), including embedding our work within our different Learning Circle’s work. It is in this strand that most of the student co-creation will take place.

Target Output: WIHEA page, blogs, and talks

3) Feedback and Marking: We review the opportunities of using AI for formative feedback (Learner-facing), summative feedback (Educator-facing), ‘AES – automated essay scoring’ (educator-facing), and stylometry (authorship authentication) as well as ChatGPT detection. One aspect of this strand (but not constrained to this strand) is also ‘Move fast, Break fast, Learn fast’ – doing small scale experiments and testing them (e.g., Consulting Students will experiment with mind maps this term and then can, but don’t have to, submit their work to the Warwick AI Essay Analyst for formative feedback and we can analyse their work).

Target Output: A WIHEA page that disseminates information and possibly diffusion of the actual Warwick AI Essay Analyst tool at Warwick, potentially producing research output
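
To give a concrete flavour of the stylometry element of strand 3, here is a minimal, illustrative sketch comparing the style of a new submission with a student’s previous work using character n-gram TF-IDF vectors and cosine similarity. The example texts and threshold are invented, and this is not the group’s tool or a reliable detector; a signal like this could only ever prompt a human conversation, in line with strand 5’s ethical remit.

```python
# Minimal, illustrative stylometry sketch (strand 3): compare a new submission's
# writing style with a student's previous essays via character n-gram TF-IDF and
# cosine similarity. Example texts and the threshold are invented; this is not a
# reliable authorship detector and not the task-and-finish group's tool.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

previous_essays = [
    "In this essay I argue that, on balance, the evidence suggests ...",
    "My analysis proceeds in three stages; firstly, I outline the theory ...",
]
new_submission = "This report critically evaluates the proposition that ..."

# Character 3-5-grams (within word boundaries) capture habits of punctuation,
# spelling and phrasing that tend to persist across a writer's texts.
vectoriser = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5))
vectors = vectoriser.fit_transform(previous_essays + [new_submission])

# Similarity of the new submission to each previous essay.
n_prev = len(previous_essays)
similarities = cosine_similarity(vectors[n_prev], vectors[:n_prev])[0]
print("Similarity to previous work:", [round(s, 2) for s in similarities])

# A low average similarity might prompt a supportive human conversation --
# never an automatic accusation (see strand 5 on ethics and academic integrity).
if similarities.mean() < 0.2:   # illustrative threshold only
    print("Style differs noticeably from previous submissions; review with care.")
```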

4) Designing Module and Lesson Content & Recommendations for institutional Developments / Purchases: Educator-facing, we review tools and initiatives that might help educators in planning and organising their modules and lessons, as well as in monitoring their email and forum entries. This group looks at all educator-facing areas besides designing assessments (group 2) and providing feedback on assessments (group 3). This group might also make recommendations to the institution on what software to build or to purchase.

Target Output: A WIHEA page that disseminates information, possibly making recommendations for in-house developments / purchase of external software packages

5) A ‘red team’ acknowledges that AI is here to stay and ensures we follow AI Ethics guidelines and that everybody is clear about the risks. This team also reviews and mitigates the challenges to Academic Integrity more broadly. Moreover, it reviews the risks of bought-in products from EdTech and tech companies, ensuring that AI Ethics is applied both to in-house and to off-the-shelf, bought-in products.

Target Output: A WIHEA page that provides information for colleagues worried about AIEd (explanations, links to other resources, links to discussions), especially on the topic of AI Ethics and Academic Integrity (what is OK to do, what isn’t – where should students / educators draw the line). Collaborating with strand 1, this group might want to explain (via a high-level literature review / links to important research) aspects of AI Ethics / Academic Integrity, such as the concept of ‘Algorithmic Fairness’. Building on work by other groups, e.g., last year’s ‘Online Assessment Data Ethics Group’, this group might want to develop a proposal for SLEEC (https://warwick.ac.uk/services/gov/committees/sleec/) and/or to provide guidance and advice to EPQ on suitable policy and guidance where appropriate.

Proposed Overall Target for entire Task and Finish Group, i.e. across the five groups / strands: Have some tangible outputs (WIHEA page, blogs, talks) that support colleagues when they embrace change in an ethically sound way that respects all stakeholders, especially learners and educators. Ideally collaborating with other universities, other education providers, and industry. Possibly develop a proposal for SLEEC and/or provide guidance and advice to EPQ on suitable policy and guidance where appropriate.

Please email Isabel.fischer@wbs.ac.uk if you are interested in joining any of the groups.

Please email Leda.Mirbahai@warwick.ac.uk if you are interested in joining our open WIHEA Diverse Assessment Learning Circle with interesting talks, such as our talks this month on Synoptic Assessments and on Democratising Assessments.

