July 10, 2023

AI Feedback Systems: A Student Perspective – Mara Bortnowschi

The buzz is endless: AI is taking Higher Education by storm. Since the launch of ChatGPT, everyone seems to have an opinion, and rightfully so. The technology is so new that we have yet to fully understand its potential and the impact it will have. Within academia, the general sentiment mostly centres on concern about responsible use, and many students have heard their professors and lecturers warn them against using it. However, its rapid growth and widespread adoption show that it is not going anywhere soon; rather than avoiding it, we should take the time to understand not only the risks and challenges but also the opportunities it presents. Furthermore, I think the student voice has been underrepresented in these discussions, yet students can be the key to harnessing this technological advancement as an asset for enhancing learning and education.

The WIHEA group have already explored a number of AI-in-education topics from student perspectives, which can be found on the group’s Artificial Intelligence in Education webpage. These include the emerging questions AI raises, the risks and ethics of academic integrity, how assessment styles might evolve to mitigate and integrate AI, and how teaching may change. Here I explore some of the opportunities that widening availability of, and access to, AI tools presents for students to enhance their learning and to generate formative feedback. While the UK Quality Code for Assessment (UKSCQA, 2018) requires summative work to be marked by human markers, formative feedback has more flexibility, and we now have an opportunity to test the capabilities of these AI technologies in providing timely, constructive, and developmental feedback.

Existing feedback systems

I will explore this notion particularly with regard to formative elements of summative assessments. Feedback should allow a student to understand the strengths and weaknesses of their work and, if engaged with effectively, can be used to improve academic performance, and thus learning. Throughout the pandemic especially, the role of feedback changed massively: as more of education shifted online, reliance on formative assessments increased as assessments for learning, in contrast to summative assessments, which represent assessments of learning (Wyatt-Smith, Klenowski and Colbert, 2014). Formative assessments are also an opportunity for autonomous learning, developing one’s own skills and relying on self-motivation. It would also be fair to say that formative feedback is a form of self-assessment: even though the feedback is generated externally, it is the engagement with it, and the learning you apply from it, that ultimately makes the difference in each student’s performance.

AI generated feedback

So what could incorporating AI into these feedback systems change? The use of algorithms to generate feedback is not an entirely new concept. Tools such as Grammarly and Sketch Engine have been around for a while; they generate feedback on academic writing and are for the most part freely available, or students are granted access to them by their institutions. With more sophisticated algorithms that use machine learning, however, we can provide specific, personalised feedback. To make this even more applicable, by integrating elements of summative marking criteria or rubrics, such tools could provide some of the most relevant feedback at a moment’s notice.

This application is indeed being explored right here at the University of Warwick. Isabel Fischer, a WBS professor, is piloting a deep-learning formative feedback tool, developed with the WBS marking criteria at its core, that has the potential to provide more comprehensive feedback. By simply submitting a PDF or Word document, students instantly receive a document of in-depth feedback on the four aspects of the WBS marking criteria. This could be just the start of similar department-specific feedback tools that take into account department-specific assignments, marking criteria, and writing styles for drafts of academic writing. While there are certainly considerations to look out for, this is fascinating and shows great promise as a tool to increase student autonomy, letting students adapt how they approach assignments while still benefiting personally from formative feedback.
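In outline, a tool of this kind assembles a feedback request around the department’s marking criteria and the student’s draft. The sketch below is purely illustrative and is not the WBS tool itself: the criterion names are invented placeholders, and a real system would send the assembled prompt to a trained model rather than print it.

```python
# Illustrative sketch only: the criteria below are placeholders, NOT
# the actual WBS marking criteria, and a real tool would pass the
# assembled prompt to a feedback model rather than simply print it.

CRITERIA = [
    "Argument and analysis",
    "Use of evidence",
    "Structure and organisation",
    "Academic writing style",
]

def build_prompt(draft: str, criteria: list[str]) -> str:
    """Assemble a formative-feedback request that asks for comments
    against each marking criterion, but explicitly not a mark."""
    criteria_lines = "\n".join(f"- {c}" for c in criteria)
    return (
        "You are giving formative feedback on a student draft.\n"
        "For each criterion below, give one strength and one concrete\n"
        "suggestion for improvement. Do not award a mark.\n\n"
        f"Criteria:\n{criteria_lines}\n\n"
        f"Draft:\n{draft}"
    )

prompt = build_prompt("The first draft of an essay...", CRITERIA)
print(prompt)
```

Keeping the rubric in one editable list is what would make such a sketch adaptable to different departments’ criteria and assignment types.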

Considerations of using generative AI

The considerations I mentioned earlier are worth discussing, as students are turning to generative AI technologies like ChatGPT more and more. While these technologies are designed to simulate human intelligence, there are some things they are simply not capable of: for example, they will not always grasp the nuances or expressive language in your writing. In other words, any feedback you receive from AI should be approached critically. You decide what you implement from the feedback you receive, and you are responsible for identifying and understanding what can truly improve your work. This is all part of the responsible use of AI, but it equally applies to human-generated feedback. Your assignment will, at the end of the day, still be marked by a human marker with the in-depth subject-specific knowledge and skills they are asking you to learn and demonstrate.

The quick, irresponsible and neglectful way people have exploited resources like ChatGPT is to accept every response it generates and paste it into a piece of work, only to find that its references are wrong or do not exist at all. Firstly, this should not be how we use it, as it is blatant plagiarism; secondly, a critical approach means, for example, verifying references and understanding that AI answers can lack important context. Regardless, the point still stands: responsible use of AI technologies is not about getting them to do your work, but about using them to enhance or improve your outputs.
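One small, practical step in that critical checking is pulling the DOI-shaped strings out of an AI-generated reference list so that each one can be looked up by hand (for instance via doi.org). The sketch below is illustrative only: the sample reference list is invented, and the second entry’s DOI is deliberately fake to show the kind of fabrication worth catching; extracting a DOI does not verify it, it only gives you something concrete to check.

```python
import re

# Hypothetical AI-generated reference list for illustration; the
# second entry's DOI is deliberately fake, and the third has no DOI,
# so it would need a manual search instead.
references = [
    "Wyatt-Smith, C. et al. (2014). doi:10.1007/978-94-007-5902-2_1",
    "Smith, J. (2021). Learning with AI. doi:10.9999/not-a-real-doi",
    "An entry with no DOI at all, which needs a manual search.",
]

# A DOI starts with "10.", then a registrant code, a slash, and a suffix.
DOI_PATTERN = re.compile(r"\b10\.\d{4,9}/\S+\b")

def extract_dois(refs: list[str]) -> list[str]:
    """Collect every DOI-shaped string so each can be checked by hand."""
    found: list[str] = []
    for ref in refs:
        found.extend(DOI_PATTERN.findall(ref))
    return found

print(extract_dois(references))
# → ['10.1007/978-94-007-5902-2_1', '10.9999/not-a-real-doi']
```

Both strings match the DOI shape, which is exactly the point: a plausible-looking DOI still has to be resolved before it can be trusted.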

Engagement with AI technologies and feedback

A new level of engagement with AI has been evident since the release of ChatGPT and DALL-E. Perhaps this is rooted in the great advancement they represent or, more sinisterly, in the opportunity to exploit the technology to minimise workload. Regardless, everyone’s interest has been piqued, and the level of engagement has been massive, exceeding what anyone might have expected, particularly from students. At the University of Warwick alone, students made a total of 850,000 site visits to ChatGPT in the first two months, and that is only on the university’s Wi-Fi (SWNS, 2023). I think it is important to understand why this might be, in order to channel this traffic for good rather than simply fear the alleged ‘cheating epidemic’ the media has dubbed it.

This contrasts with the older algorithmic tools, such as the previously mentioned Grammarly and Sketch Engine, which have seen much more moderate levels of engagement and use. The reasons vary, from lack of awareness, to feedback limited largely to language, to a lack of confidence in the feedback they provide. AI has surpassed some of these limiting factors in that it can generate a wider breadth of feedback covering language, style, structure and more. Confidence in the feedback it produces is, ironically, reinforced by the continuous concern from educators: if professors fear that AI technologies can be used to write entire assessments, then they must be capable of doing so.

Further benefits

As a result, we have seen students become much more open to trying ChatGPT, and I think we should channel this eagerness in ways that encourage students to enhance their academic achievement. By introducing resources such as Isabel Fischer’s feedback tool, or by teaching students how to formulate prompts for ChatGPT that generate similarly constructive feedback, we can guide a smooth integration of generative AI into Higher Education practices. And there are many more potential benefits. For one, this lifts a massive workload from staff: if such tools can take care of writing style and structure, staff’s role in formative feedback can remain focused on content. The speed, even instantaneity, with which AI can produce feedback also makes feedback more accessible. Furthermore, students can engage with it as many times as they like, submitting multiple drafts, because they are not limited by staff capacity. Different students also work on different timescales and with different approaches when faced with an assignment; this further widens accessibility for students who might start assignments later than a typical formative deadline. Communicating these advantages is key to achieving these outcomes and to harnessing this technology to enhance the educational experience for both staff and students.

Conclusion and personal experience

My experience with ChatGPT so far has left me with mixed feelings. On the one hand, I am apprehensive because its use is quite contentious at the moment, with some professors explicitly prohibiting its use or consultation. On the other hand, it is a resource that is available, and it feels foolish not to use what is at your disposal. Through the research for this article and discussions with faculty members about its potential to provide feedback, I have been introduced to a clearly constructive way to engage with ChatGPT that seems to satisfy both staff and students. We are still in the early stages of understanding the potential and risks of generative AI, but at the end of the day this is a tool that will have implications for Higher Education. We face a choice: embrace it, for instance to generate formative feedback, or let it escape our control at the cost of academic integrity, because it is clear that prohibiting its use will not stop people from exploiting it.

For further queries: marabortnowschi@yahoo.ca or mara.bortnowschi@warwick.ac.uk (may expire soon)

Reference List

SWNS (2023). University of Warwick fears cheating epidemic as data reveals huge number of students visiting AI website during exams. [online] Kenilworth Nub News. Available at: https://kenilworth.nub.news/news/local-news/university-of-warwick-fears-cheating-epidemic-as-data-reveals-huge-number-of-students-visiting-ai-website-during-exams-176836 [Accessed 19 Jun. 2023].

UKSCQA (2018). UK Quality Code for Higher Education Advice and Guidance Assessment. [online] Available at: https://www.qaa.ac.uk/docs/qaa/quality-code/advice-and-guidance-assessment.pdf?sfvrsn=ca29c181_4 [Accessed 16 Jun. 2023].

Wyatt-Smith, C., Klenowski, V. and Colbert, P. (2014). Assessment Understood as Enabling. The Enabling Power of Assessment, pp. 1–20. doi:10.1007/978-94-007-5902-2_1.


July 03, 2023

Embedding and assessing compassion in the university curriculum

In this short video (21 mins), Theo Gilbert explains the principles and rationales for rooting the science of compassion into the 21st century degree programme. It is the first of a series that aims to support colleagues in learning more about the research on this topic and how, in simple and practical ways, you can apply this research to your own practices.

You can access the full playlist of short videos in this series on this webpage.


June 26, 2023

Using AI for Formative Feedback: Current Challenges, Reflections, and Future Investigation

By Matthew Voice, Applied Linguistics at the University of Warwick

One strand of the WIHEA’s working group for AI in education has focused on the role of AI in formative feedback. As part of this strand, I have been experimenting with feeding my own writing to a range of generative AI (ChatGPT, Google Bard, and Microsoft Bing), to learn more about the sorts of feedback they provide.

The accompanying presentation documents my observations during this process. Some issues, such as the propensity of AI to ‘hallucinate’ sources, are well-documented concerns with current models. As discourse on student use of AI begins to make its way into the classroom, these challenges might provide a basis for critical discussion around the accuracy and quality of the feedback produced by language models, and the need for students to review any outputs produced by LLMs.

Other common issues present different challenges for students using LLMs to elicit formative feedback. For instance, the prompt protocol in the presentation revealed a tendency for AI to provide contradictory advice when its suggestions are queried, leading to a confusing stance on whether or not an issue raised actually constitutes a point for improvement within the source text. When tasked with rewriting prompt material for improvement, LLMs consistently misconstrued (and therefore left absent) some of the nuances of my original review, in a fashion which changed key elements of the original argumentation without acknowledgement. The potential challenges for student users which arise from these tendencies are discussed in more detail in the presentation’s notes.

In addition to giving some indication of the potential role of LLMs in formative feedback, this task has also prompted me to reflect on the way I approach and understand generative AI as an educator. Going forward, I want to suggest two points of reflection for future tasks used to generate and model LLM output in pedagogical contexts. Firstly: is the task a reasonable one? Using LLMs ethically requires using my own writing as a basis for prompt material, but my choice to use published work means that the text in question had already been re-drafted and edited to a publishable standard. What improvements were the LLMs supposed to find, at this point? In future, I would be interested to try eliciting LLM feedback on work in progress as a point of comparison.

Secondly, is the task realistic, i.e. does it accurately reflect the way students use and engage with AI independently? The review in my presentation, for example, presupposes that the process of prompting an LLM for improvements to pre-written text is comparable to student use of these programmes. But how accurate is this assumption? In the Department of Applied Linguistics, our in-progress Univoice project sees student researchers interviewing their peers about their academic process. Data from this project might provide clearer insight into the ways students employ AI in their learning and writing, providing a stronger basis for future critical investigation of the strengths and limitations in AI’s capacity as a tool for feedback.

This is blog 14 in our diverse assessment series; the two most recent previous blogs can be found here:


June 22, 2023

Rethinking authentic assessment: work, well-being, and society by Jan McArthur

In this 2022 paper, Jan McArthur builds on “existing work on authentic assessment to develop a more holistic and richer concept that will be more beneficial to individual students and to the larger society of which they are part.” McArthur presents three key principles to help us rethink and broaden the concept of authentic assessment: 1) from the real world/world of work to society; 2) from task performance to why we value the task; 3) from the status quo of the real world/world of work to transforming society. If you are short on time, you might want to jump straight to page 8, where the discussion of these three principles begins:

https://link.springer.com/article/10.1007/s10734-022-00822-y


June 19, 2023

Assessments: Capturing Lived Experience and Shaping the Future

Reflection on Project Outputs by Molly Fowler


This WIHEA-funded co-creation project aimed to capture and explore student and staff perspectives on diverse assessment. Neither group was able to clearly define a diverse assessment strategy, but interestingly their feelings about assessment, and their ideas about how it could be improved, were very similar. Students expressed a desire for greater choice, flexibility and equitable access to assessments. Equitable access encompasses a wide range of complex personal needs, including language requirements, disability, neurodiversity, caring responsibilities, and the need to work alongside studies. Staff reiterated many of the same concepts but framed their ideas around pedagogical models. There was a strong emphasis on learning from assessments on both sides, and a widespread longing for a culture shift towards designing assessments that model a fair and fulfilling education. Student co-creation was seen as a necessary tool to expedite the shift towards embedding assessments as part of the learning journey.

I am a final year student on the Health and Medical Sciences BSc programme. My role as a student co-creator in this research project was to collect and analyse data from students and staff pertaining to their beliefs around assessment. In the analysis stage of the project, I mainly focused on collating and summarising the student data. I am new to conducting primary research and I have thoroughly appreciated this experience. I enjoyed the challenge of leading interviews and focus groups and deciding when to explore a statement further or manoeuvre back to the set questions. Gaining first-hand insight into the research process has augmented my ability to understand and extract key information from research papers, which will be a life-long skill, and was particularly useful when I was conducting a systematic review for my dissertation. It has been very satisfying to observe my own personal development in this way.

This project has made me aware of my privilege in assessments as a neurotypical English speaker. I have been exposed to a range of different perspectives on assessment and I hope to be better equipped to identify problems and support those around me. For example, I was surprised to learn that international students feel more disadvantaged by multiple choice exams than essays, as MCQs often require a nuanced understanding of language and grammar. Similarly, I have always taken a pragmatic approach to assessments and centred my learning around them. I had not previously considered assessments as part of the learning journey or as a learning exercise. As I move into the next phase of my own education, I will try to extend my learning beyond assessments to gain knowledge that I can use in my profession. Undertaking this project has been an enriching experience as a student and as an individual. It has shaped my approach to my assessments, and I have become more aware of the complex needs of others who are completing the same assessment. Students and staff are calling for the same changes to assessment methodology, which can only be implemented if the University takes a holistic approach to restructuring assessments with students contributing to the process.

I look forward to bringing my knowledge from this assignment into my next research project. This is the 13th blog in our diverse assessment series. Previous blogs can be found here:

Blog 1: Launch of the learning circle (Isabel Fischer & Leda Mirbahai): https://blogs.warwick.ac.uk/wjett/entry/interested_in_diverse/

Blog 2: Creative projects and the ‘state of play’ in diverse assessments (Lewis Beer): https://blogs.warwick.ac.uk/wjett/entry/creative_projects_and/

Blog 3: Student experience of assessments (Molly Fowler): https://blogs.warwick.ac.uk/wjett/entry/a_student_perspective/

Blog 4: Assessment Strategy – one year after starting the learning circle (Isabel Fischer & Leda Mirbahai): https://blogs.warwick.ac.uk/wjett/entry/one_year_on/

Blog 5: Learnings and suggestions based on implementing diverse assessments in the foundation year at Warwick (Lucy Ryland): https://blogs.warwick.ac.uk/wjett/entry/learnings_suggestions_based/

Blog 6: How inclusive is your assessment strategy? (Leda Mirbahai): https://blogs.warwick.ac.uk/wjett/entry/blog_6_how/

Blog 7: Democratising the feedback process (Linda Enow): https://blogs.warwick.ac.uk/wjett/entry/democratising_the_feedback/

Blog 8: AI for Good: Evaluating and Shaping Opportunities of AI in Education (Isabel Fischer, Leda Mirbahai & David Buxton): https://blogs.warwick.ac.uk/wjett/entry/ai_for_good/

Blog 9: On ‘Opportunities of AI in Higher Education’ by DALL.E and ChatGPT (Isabel Fischer): https://blogs.warwick.ac.uk/wjett/entry/on_opportunities_of/

Blog 10: Pedagogic paradigm 4.0: bringing students, educators and AI together (Isabel Fischer): https://www.timeshighereducation.com/campus/pedagogic-paradigm-40-bringing-students-educators-and-ai-together

Blog 11: Ethically deploying AI in education: An update from the University of Warwick’s open community of practice (Isabel Fischer, Leda Mirbahai, Lewis Beer, David Buxton, Sam Grierson, Lee Griffin, and Neha Gupta): https://www.open.ac.uk/scholarship-and-innovation/scilab/ethically-deploying-ai-education

Blog 12: Building knowledge on the pedagogy of using generative AI in the classroom and in assessments (Isabel Fischer and Matt Lucas): https://blogs.warwick.ac.uk/wjett/entry/building_knowledge_on/

Join the Diverse Assessment Learning Circle: If you would like to join the learning circle please contact the co-leads: Leda Mirbahai, Warwick Medical School (WMS) (Leda.Mirbahai@warwick.ac.uk) and Isabel Fischer, Warwick Business School (WBS) (Isabel.Fischer@wbs.ac.uk). This LC is open to non-WIHEA members.


June 12, 2023

Building knowledge on the pedagogy of using generative AI in the classroom and in assessments

By Matt Lucas and Isabel Fischer (WBS)

Matt Lucas is a Senior Product Manager at IBM, and Isabel Fischer is an Associate Professor (Reader) of Information Systems at WBS (Warwick Business School). Isabel also co-convenes an IATL (Institute for Advanced Teaching and Learning) module. This blog represents their own opinions and not those of their employers.

After two terms of including generative AI (GenAI) in my teaching and in assessments, I am still building my knowledge and understanding of the pedagogy of using GenAI. Students seem to like the entertainment of playing around with music and art tools (e.g. DALL.E 2 and Midjourney), creating images and memes, all of which are user-friendly on big screens and for huddling around one laptop as part of teamwork. Text outputs seem less intuitive for ‘collective use’: there does not yet seem to be an app that allows hands-on collaborative refinement of prompts (similar, say, to students working on the same Google Doc). And displaying a string of words on a shared screen clearly does not have the same entertainment value for students as ‘customers and consumers’.

In addition to the lack of entertainment value, I also found that students seem to appreciate word-based GenAI (e.g. ChatGPT and Bard) as ‘their secret tool’, at their disposal and for them to use. They appear to appreciate it if lecturers show them exact prompts they can copy to make the most of ‘their secret tool’. They seem less keen on having to be transparent about using the tool themselves, and on having to justify and critically reflect on that usage. It not only means additional work; more importantly, they dislike the thought of the tool’s hidden power being exposed. They appear even less keen for lecturers to use GenAI in lesson preparation and to be transparent about it, because otherwise, what is the ‘perceived added value’ of attending the lecture if they could have just consulted GenAI themselves?

With this in mind, what are the skills that students can learn from using GenAI in the classroom and in assessments?

In the attached blog Matt Lucas and I suggest that by including innovative aspects into assessments, students can learn and practise four skills that are relevant for their future careers in a world disrupted by AI:

  1. Cognitive flexibility, abstraction and simplification

  2. Curiosity, including prompt engineering

  3. Personalisation, reflection and empathising to adapt to different audiences

  4. Critical evaluation of AI

For each of the four skills, we explain in the attached blog its relevance for student learning, with some illustrative examples, before outlining how we incorporated these skills into students’ assessments in the recent term.


June 05, 2023

Developing compassionate academic leadership: the practice of kindness

This two-page opinion piece argues “that universities are, or ought to be, ‘caregiving organisations’ that promote the practice of compassionate pedagogy, because of their role and primary task of helping students to learn.” However, “neoliberal ideology and higher education policy models…are seemingly at odds with the values and practice of compassionate pedagogy.” What do you think? How can we help to develop an approach to academic leadership that is underpinned by the practice of kindness?

https://jpaap.ac.uk/JPAAP/article/view/375/498


May 30, 2023

Ungrading: more possibilities than some might think

Assessment is often the biggest cause of student anxiety and distress. Some have begun to explore ‘ungrading’ as a way to enhance the developmental rather than judgemental aspects of assessment. Ungrading can be implemented in various ways and is a process of decentring summative grades or marks. In this 17-minute video, Martin Compton from UCL explores the potential of ‘ungrading’ and the various ways that he has implemented elements of it.

https://www.youtube.com/watch?v=xdBYm8K_pVI&list=PLAbF8wnSF-e9H54nDtvsCXehw4xDS9xQb&index=2


May 22, 2023

Spotlight collation: the art of collegiality and why it matters

This collection from THE Campus offers resources on nurturing a spirit of companionship and cooperation between colleagues within the institution and beyond. Take a few moments to scroll through the various categories of resources, including friendship and teamwork, communities of practice, leadership and supervision, and bookmark one or two pieces to return to later.


May 15, 2023

Reflections on adaptive teaching – Asimina Georgakopoulou

My initial perception of “adaptive teaching” was that it was synonymous with differentiation, a term which is still used in teaching publications (DfE, 2021, T.S.5, p.11) and which the Times Educational Supplement used to describe the practice of “putting the student first” (Amass, 2021). As the two terms are often used interchangeably, I began my practice unsure about which approach I was truly implementing. A broad understanding of both terms suggests that differentiation involves assigning certain needs to students while planning, assuming an objective can only be met in a certain way, whereas adaptive teaching involves adjusting to pupils’ progress, providing scaffolding or challenge to support achievement of a unified objective in a flexible way (Deunk et al., 2018, pp. 31–54). After focused conversations with my placement colleagues, I was intrigued by the general consensus that the main practical difference between the two concepts centres on the teacher’s understanding of “high expectations”.

I struggled with this concept originally, as my understanding lacked practical depth. During English writing tasks, I was expected to scribe for certain children after probing them to articulate their ideas. I found this problematic, as it assumed that these children could express themselves orally and only struggled with writing. I understood that this was a genuine effort to avoid differentiating by task and to communicate to the children that they were capable of completing the same task. In reality, the children were not expressing any ideas, and this resulted in them copying from the board. Upon questioning them, I discovered that they still perceived their task as different, because they were not doing it independently.

Discussing this with my teacher, we ascertained that high expectations could be communicated more effectively by expecting all children to work independently and by regularly changing support groups (CCF, 5.20). Although it can seem as if the same few pupils require constant small-group support, I now realise that adaptive teaching is an approach meant to broaden our understanding of how to provide support. When the children were given a word mat that indicated meaning with symbols, they were able to start expressing their understanding independently, with little guidance. While other children did not have this support, all children were working independently and were given equal attention. I observed the positive psychological impact on pupils who felt that our expectations of them had been raised.

As Coe et al. (2020, p.6) highlight, feelings of competence and autonomy are pivotal in promoting “learner motivation”. Additionally, they point out that “progressing…from structured to more independent learning” helps pupils to activate “hard thinking”. Adaptive teaching has the potential to lift children out of the cycle of constantly requiring support to superficially meet an outcome that does not progress their understanding and only leads to them requiring more support in future.

Although I do regret not taking the initiative sooner, as I will not be able to observe long-term improvement in outcomes, my developed understanding of high expectations and adaptive teaching will have strong implications for my next placement, as I have grown in confidence and resourcefulness in supporting children appropriately. This is a point in my teaching where the WTV of creativity will greatly support my development: by finding creative ways to scaffold learning, it is possible to communicate high expectations and create a supportive learning environment.

References

Amass, H. (2021) “Differentiation: the dos and don’ts,” Tes Global Ltd, 16 April.

Coe, R. et al. (2020) Great teaching toolkit: Evidence Review, Great Teaching Toolkit. Cambridge Assessment International Education. Available at: https://assets.website-files.com/5ee28729f7b4a5fa99bef2b3/5ee9f507021911ae35ac6c4d_EBE_GTT_EVIDENCE%20REVIEW_DIGITAL.pdf?utm_referrer=https%3A%2F%2Fwww.greatteaching.com%2F (Accessed: April 14, 2023).

DfE (2019) ITT Core Content Framework available online at: https://www.gov.uk/government/publications/initial-teacher-training-itt-core-content-framework

Department for Education (2011) Teachers' Standards. Available at: https://www.gov.uk/government/publications/teachers-standards

Deunk, M., Smale-Jacobse, A., de Boer, H., Doolaard, S. and Bosker, R. (2018). Effective differentiation practices: A systematic review and meta-analysis of studies on the cognitive effects of differentiation practices in primary education. Educational Research Review, 24, pp. 31–54.

