May 28, 2024

Building Effective Client Proposals

by Matthew Lucas

Effective pitching to business leaders is a critical skill, particularly for those involved (or aspiring to be involved) in consultancy and business transformation. Drawing on extensive experience at IBM and teaching at Warwick Business School, Matthew Lucas has produced a new article that outlines key strategies for students to craft and deliver successful business proposals. The process is broken down into four main stages: research, formulation, development, and delivery:

  1. Research: The research phase is crucial for grounding the proposal in a thorough understanding of the client's problem and business context. The article recommends starting by investigating the client and their requirements, and discusses information sources such as company reports, external messaging, and competitive analysis that help build a comprehensive view.
  2. Formulation: In this stage, ideas are developed that address the client's requirements, often using a process such as Design Thinking. The article recommends a brainstorming session to generate a wide range of ideas, which are then narrowed down to one or two feasible, high-impact concepts. It describes the key considerations when refining your ideas, including the benefits for various stakeholders, success criteria (using SMART targets), associated costs, potential risks, alternative solutions, ethical considerations, and implementation guidance.
  3. Development: The development phase involves creating a presentation or other documentation to convey your ideas effectively. The article recommends a presentation structure designed to be clear and comprehensive, giving the proposal the best chance of success.
  4. Delivery: Effective delivery of the material is as important as the content itself. This section gives recommendations on how to prepare for and pitch any client presentation, with techniques to control nerves and mitigate the chances of things going wrong on the day.

By following the strategies in this article, students can enhance their ability to influence and persuade business leaders effectively.


May 20, 2024

Graphical abstract as a form of assessment

by Andre Pires da Silva

A graphical abstract is a pictorial summary of the main findings of a research paper. It is typically used by journals to highlight the paper's key points in a concise visual format.

The format of graphical abstracts varies by journal. Some require a single panel where everything is summarised, while others may have multiple panels showing the introduction, methods, results, and conclusions. Graphical abstracts follow specific conventions:

  • They have a clear start and end, reading from top to bottom or left to right.
  • They provide context for the results, such as the type of tissue represented.
  • The figures are different from those in the main paper, emphasising new findings.
  • They do not include data but show the findings conceptually.
  • They exclude excessive details from previous literature and anything speculative.
  • They have simple labels and minimal text, with no distracting clutter.

To test the capabilities of generative AI in creating graphical abstracts, an example from a complex paper on nematode sexual forms was used. The original graphical abstract clearly depicted the main points of the paper. However, when generative AI attempted to produce a graphical abstract based on the same paper, the result was confusing, cluttered, and failed to capture the main points accurately.
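The article does not name the tool or prompt used in this attempt; purely to illustrate the kind of request involved, a minimal sketch using the OpenAI Python SDK (the model name, prompt wording, and image size are all assumptions) might look like this:

    # Illustrative sketch only: the post does not specify which tool was used.
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    prompt = (
        "Create a single-panel graphical abstract for a research paper on "
        "nematode sexual forms: read left to right, show only the main finding "
        "conceptually (no data), and use simple labels with minimal text."
    )

    # "dall-e-3" is an assumed model name; any image-generation model could be used.
    result = client.images.generate(model="dall-e-3", prompt=prompt, size="1024x1024")
    print(result.data[0].url)  # URL of the generated image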

Analysing this failure through the lens of Bloom's Taxonomy, a hierarchical framework for cognitive skills, can provide insights. AI excels at lower-level skills like remembering and understanding but struggles with higher-level skills like analysing, evaluating, and creating.

While AI can remember and list information it has been trained on, many scientific fields lack sufficient training data, leading to potential inaccuracies. AI can produce abstracts by analysing information, but may miss the most important aspects that require nuance. Creativity, the highest cognitive skill, remains a significant challenge for AI.

In assessing students' understanding in a developmental biology course, various methods were employed, including multiple-choice questions, short answers, and graphical abstracts. The multiple-choice questions required interpreting datasets not directly solvable by AI, as the context was provided during lectures. The short-answer questions involved analysing complex anatomical figures from papers not readily available for AI training.

For the graphical abstract assignment, students were given simple instructions on the format and a word limit for the legend summarising key conclusions. They could use various digital tools or hand-drawings. The assigned paper discussed two theories of embryonic patterning: positional information and reaction-diffusion.

When the paper was submitted to generative AI to produce a graphical abstract, the result was cluttered and nonsensical, failing to represent the main ideas accurately. Even with simplified instructions, the AI-generated graphical abstract remained inadequate.

In contrast, student-produced graphical abstracts effectively communicated the key concepts. Some clearly depicted the relationship between the two theories, whether one was upstream or downstream of the other, or if they interacted in parallel. Others used effective visual representations, although some lacked sufficient guiding text or clarity in conveying the relationship between the theories.

The experience of grading the graphical abstract assignments was efficient, taking only a few minutes per submission. Creating new exams based on this format is straightforward, as instructors can select different research papers for each iteration.

From the students' perspective, the graphical abstract assignment is valuable as it requires them to communicate complex ideas clearly and critically select the most important aspects of a paper.

While companies offer graphical abstract creation services, they are currently time-consuming and expensive, limiting their widespread adoption.

Looking ahead, implementing other assessment formats like short video productions, as done in science communication classes, could further challenge AI capabilities in this domain.

Overall, the graphical abstract assignment provides a valuable assessment tool that requires higher-order cognitive skills, promotes scientific communication, and remains a challenge for current AI systems to generate effectively.


May 13, 2024

AI on Campus: Students' Perspectives podcast

Have a look at this initiative at the University of Limerick (Ireland), where students discuss the innovative ways that GenAI tools enhance their educational experience. Topics covered include neurodiversity, Universal Design for Learning, authentic assessment, and day-to-day student pressures.

AI on Campus: Students' Perspectives podcast


April 29, 2024

Insights from a faculty session to integrate AI into teaching practice

This blog is in two parts and was written by Dr. Neha Gupta and Dr. Susanne Beck, Assistant Professors, ISMA Group, Warwick Business School.

Part 1: Planning to deliver a faculty session to integrate AI into teaching practice (Date: 19th April 2024)

This blog shares ideas under consideration in preparation for leading a faculty session about integrating AI into teaching and learning practices, in various forms, in a higher education setting. The session will be one of the parallel breakout sessions at Warwick Business School's annual Teaching and Learning Symposium 2023-24, where faculty from all groups (i.e. various disciplines) engage in peer dialogue, discussion, and activities around how learning and teaching needs in the higher education landscape are evolving. The broad aims of the session are to inspire discussion and ideas about how to use Generative AI (GenAI) and emerging technologies to foster relevant skills that enhance students' employability.

The leading faculty (co-authors of this blog) plan to use a pool of resources from the WIHEA AI Learning Circle, JISC and the Harvard AI Pedagogy Project to stimulate discussion on the use of AI practices across higher education. A demonstration of hands-on examples of AI tools and prompts used by colleagues from WBS and beyond, such as Ethan Mollick, will help attendees see how they can engage with AI in practice, for example in setting up assessment tasks with ChatGPT. A notable aspect of the session will be the demonstration of AI tools. For example, Cody AI, a web-based tool that uses an LLM to generate bots that answer student queries efficiently, will be demonstrated using the existing knowledge base from the student handbook to answer dissertation-related queries. This demonstration will exemplify how AI can streamline administrative tasks, such as responding to common student inquiries, thereby optimising staff resources and minimising response times.
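Cody AI's internals are not public, so the following is only a minimal sketch of the general pattern the demonstration illustrates: retrieve relevant passages from a knowledge base and let an LLM answer strictly from them. It assumes the OpenAI Python SDK, an assumed model name, and a hypothetical search_handbook helper standing in for an index of the student handbook; it is not Cody AI's API.

    # Minimal knowledge-base Q&A sketch (not Cody AI's actual implementation).
    from openai import OpenAI

    client = OpenAI()

    def search_handbook(question: str) -> list[str]:
        # Hypothetical helper: a real system would query an index built from
        # the student handbook and return the most relevant passages.
        return ["Placeholder handbook excerpt about dissertation deadlines ..."]

    def answer_student_query(question: str) -> str:
        excerpts = "\n".join(search_handbook(question))
        response = client.chat.completions.create(
            model="gpt-4o",  # assumed model name
            messages=[
                {"role": "system",
                 "content": ("Answer the student's question using ONLY the handbook "
                             "excerpts provided. If the answer is not in them, say so.")},
                {"role": "user",
                 "content": f"Handbook excerpts:\n{excerpts}\n\nQuestion: {question}"},
            ],
        )
        return response.choices[0].message.content

    print(answer_student_query("When do I need to submit my dissertation?"))

Grounding answers in retrieved excerpts, rather than in the model's general training data, is what keeps responses tied to the handbook; this design choice also relates to the hallucination concern discussed in Part 2 below.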

As the job market evolves, students must be equipped with both domain-specific knowledge and technological proficiency. Integrating AI into teaching not only prepares students for future careers but also empowers them to engage with and leverage technology responsibly. The technology is already out there, students are going to use AI tools in their future workplaces, and in job interviews they will be asked about these tools and for their opinions of them. As educators, it is our responsibility to provide students with opportunities to experiment with these tools during their learning journey and to allow them to form their own experiences and opinions. Educators should likewise keep an open mind about experimenting with emerging AI tools, which offer immense potential for enhancing teaching and assessment practices. Yet their implementation must be guided by ethical considerations and a commitment to fostering critical thinking skills among students.

Part 2: Insights from a faculty session to integrate AI into teaching practice (Date: 25th April 2024)

This blog shares insights from the delivery of the faculty session at the WBS Teaching and Learning Symposium (described in the post above). The potential of AI tools such as Cody AI to revolutionise student support by answering queries sparked discussion amongst colleagues about its implementation across various educational contexts. For example, colleagues were concerned about:

  1. Copyrighted information being shared in the public domain, unless such tools are contractually brought into the university ecosystem, where these challenges can be managed through a more formal implementation of AI tools in a university setting.
  2. Hallucinations, or information made up by AI tools, given the underlying LLM layers. In the demonstrated case of the Cody AI bot, however, the answers the bot gives are based primarily on the knowledge base provided by the user.

Nevertheless, colleagues reached a consensus that committing to innovation and integrating such AI tools into teaching practice holds the promise of both more efficient handling of student queries and an enhanced learning experience in higher education.

The breakout session further delved into using AI for assessments. Hands-on examples of prompts and outcomes were shared within the session, exploring benefits for both educators and students. For educators, GenAI tools can be used to develop, more efficiently, creative assignments that require students to critically engage with AI-generated content. For instance, instead of preparing a recap exercise at the beginning of a class asking students what they remember, teachers can ask students to critically review a text about a given topic, identify (purposefully included) false claims, and share their thoughts with their neighbours. Besides building subject knowledge, this exercise sensitises students to the fact that even text that sounds plausible may be factually wrong. Both the text and the instructions can be generated by ChatGPT in an instant, making the exercise easily replicable and customisable for educators (see another example, asking students to write a critical essay, here); a rough sketch of how such a text could be generated programmatically is shown below.
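As a minimal sketch of how such an exercise text could be generated programmatically, assuming the OpenAI Python SDK, an assumed model name, and an illustrative topic (the session itself used ChatGPT directly):

    # Sketch: generate a short recap text with deliberately false claims for students to spot.
    from openai import OpenAI

    client = OpenAI()

    topic = "the resource-based view of the firm"  # illustrative topic, not from the session

    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name
        messages=[{
            "role": "user",
            "content": (
                f"Write a 200-word recap of {topic} for an undergraduate class, "
                "deliberately including three plausible-sounding but false claims. "
                "Afterwards, list the three false claims separately as an answer "
                "key for the teacher."
            ),
        }],
    )
    print(response.choices[0].message.content)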

For students, such assignments can help them develop skills such as critical thinking. But through the use of GenAI they may also be empowered to leverage individualised learning opportunities and stimulate their curiosity. For example, in his recent book, Ethan Mollick showcases a potential methodology to encourage students to experiment with tasks they have no experience in. For his entrepreneurship class, he asks students to take the development of a business idea a step further and come up with a website or even develop an app for their business – especially when they have no experience with coding. This opens a new space for students to experiment and become creative, another skill enhancing their future employability.

For students to thrive through the use of GenAI in the classroom, however, the discussions in the session emphasised two important boundary conditions. First, students need to be given the space to experiment with AI and other emerging technologies; providing that space includes rewarding ambitious ideas rather than penalising students when those ideas fail to persuade. Second, responsible use of AI needs to find its place in the students' curriculum. Teachers cannot expect students to be fully knowledgeable about the most recent capabilities and risks of such a dynamic technology; schools and educators need to provide them with the necessary training.

At the end of the breakout session, the attending faculty were invited to join a discussion, imagining themselves either a) in the student's role, considering what students' concerns might be when receiving an assignment that asks them to use AI and what steps an educator could take to address those concerns; or b) in the teacher's role, considering what their own concerns might be when designing an assignment that asks students to use AI and what they would need in order to address them. The discussion generated the key takeaways below, which underscored the importance of ethical AI integration, ongoing teacher professional development in AI literacy, and the need for a balance between technological advancement and human-centric pedagogy:

  1. Invest time in training ourselves first, then share AI-related knowledge with our students.
  2. Avoid falling into the AI trap – students still need step-by-step guidance on what is expected of them in their assessment task, with minimal ambiguity in the instructions.
  3. Incorporate AI as a step towards innovation by evolving our teaching practices: go beyond the AI tool and remain valuable as a knowledge expert, both in setting up assessments and in teaching content (see also Mollick & Mollick, 2024).
  4. Align teaching and learning tasks to learning outcomes rather than incorporating AI for its own sake or because of perceived pressure; AI and emerging technologies should be considered powerful means to achieve learning outcomes more effectively.

Feel free to reach out to Dr. Neha Gupta neha.gupta@wbs.ac.uk for more details about the session.


April 22, 2024

Green Space 2024

Today is the CTE annual Green Space conference. The conference page link has the schedule and biographies of the presenters. Once the conference is over, any recordings of the keynote and parallel sessions will be available from the same link.


April 15, 2024

Theory into practice

Have a look at the new Theory into Practice blog for tips on linking theory and practice.


April 08, 2024

BERA blog

The British Educational Research Association (BERA) blog has a range of topical subjects you may be interested in reading such as AI and sustainability:

https://www.bera.ac.uk/blog


March 25, 2024

Collaborating with AI – Writing an assignment

Rob Liu-Preece is the Academic Technologist for PAIS, Sociology and Philosophy at the University of Warwick. He has also been an IELTS marker for Writing and Speaking for 20 years and previously taught Academic Skills to international students both in the UK and overseas for 20 years.

This is the second of two posts written by Rob about AI and the ANTF Project:

While many have expressed fears that the advent of AI may threaten future employment, others have emphasised that those able to work with AI may well be in the most secure forms of work. This assignment deploys a relatively undemanding writing task on a topic of the student's choice, which will hopefully motivate them to carry it out and sustain their interest through the stages of writing, reading an AI answer, redrafting, and then commenting on it. I hope students will benefit from being unconstrained by academic demands as they compose a short piece of writing on a topic familiar to them, and that this will increase the learning take-aways they generate from completing the process. I have included an example to help students undertake the novel experience of writing with AI acting as a kind of writing assistant.

Aims of the Assignment

  • To provide a motivating topic for students to write about, review and redraft.
  • To practise collaborating on writing with AI.
  • To enhance students' understanding and awareness of significant features of their writing.
  • To improve their appreciation of, and ability to compare, human-created and AI-generated writing.
  • To practise collaborating with AI to produce an improved finished piece of writing.

Learning Objectives:

  • To increase understanding of the difference between a human-written and a machine-written response.
  • To improve the ability to collaborate with AI to enhance a written response.
  • To recognise the strengths and weaknesses of AI-generated content.

Instructions:

You need to complete 4 activities for this assignment.

  1. Choose a cultural artefact important to you. Write 500 – 600 words on why it’s important and what insights it carries.
  2. Then write a prompt for ChatGPT and generate an answer.
  3. Re-write your original piece, incorporating new content gleaned from ChatGPT. Highlight the additional content in italics.
  4. Write 2-4 paragraphs comparing human-created and AI-generated writing.

March 18, 2024

AI Marking Criteria

Rob Liu-Preece is the Academic Technologist for PAIS, Sociology and Philosophy at the University of Warwick. He has also been an IELTS marker for Writing and Speaking for 20 years and previously taught Academic Skills to international students both in the UK and overseas for 20 years.

This is the first of two posts written by Rob about AI and the ANTF Project:

The sudden explosion in the availability and use of generative AI technology, especially by university students, has left education professionals playing catch-up. With ChatGPT gaining 1 million users in just 5 days and 100 million in 2 months, I feel educationalists have only just come up for air. As part of that process, I've written marking criteria aimed at marking students' use of AI when completing assignments.

I think that in a learning environment characterised by uncertainty and disruption, students will benefit from an explicit expression of how the university wants them to use AI. Applying a marking framework like this could also lessen the need for tutors to follow a punitive academic-integrity route for dealing with misuse of AI, by opening up an alternative: a definition of 'poor academic practice' more closely aligned to AI. I also hope this type of approach will help steer the development of pedagogy and AI, providing a structure for ongoing debate and discussion. Lastly, having a set of criteria like this enables training and coaching on AI to be reverse-engineered for both students and tutors.

To address these issues, I've written a set of marking criteria based on the existing Politics and International Studies assignment marking criteria for undergraduate students. I would anticipate students including a short report with their written assignments covering their use of AI. The framework is based around two main categories: appropriacy of use and awareness of key issues. The criteria are by no means a finished piece of work, are not necessarily fit for purpose, and haven't undergone any road testing or standardisation. Rather, they are designed to signal a possible route forward for those of us concerned with, and interested in, shaping the take-up of AI in education. They do raise some thoughts in my own mind about whether such an approach is the right way to go. Should we be setting or defining an orthodoxy for AI use in quite such tight terms? Is the literacy-model approach implied here the correct one, or would it be better to focus on students' conscious use of AI?

Please note I used Google Gemini to help with the overall structure of this blog and for the statistics in the first paragraph.


March 11, 2024

Reflection in the creative arts

Have a look at this TalkingHE podcast by Dr Annamarie Mckie on using reflection in the creative arts.

