Graphical abstract as a form of assessment
by Andre Pires da Silva
A graphical abstract is a pictorial summary of the main findings of a research paper. It is typically used by journals to highlight the paper's key points in a concise visual format.
The format of graphical abstracts varies by journal. Some require a single panel summarising everything, while others allow multiple panels showing the introduction, methods, results, and conclusions. Graphical abstracts follow specific conventions:
- They have a clear start and end, and are read from top to bottom or left to right.
- They provide context for the results, such as the type of tissue represented.
- The figures are different from those in the main paper, emphasising new findings.
- They do not include data but show the findings conceptually.
- They exclude excessive details from previous literature and anything speculative.
- They have simple labels and minimal text, with no distracting clutter.
To test the capabilities of generative AI in creating graphical abstracts, an example from a complex paper on nematode sexual forms was used. The original graphical abstract clearly depicted the main points of the paper. However, when generative AI was asked to produce a graphical abstract from the same paper, the result was confusing and cluttered, and failed to capture the main points accurately.
Analysing this failure through the lens of Bloom's Taxonomy, a hierarchical framework for cognitive skills, helps explain why. AI excels at lower-level skills such as remembering and understanding, but struggles with higher-level skills such as analysing, evaluating, and creating.
While AI can remember and list information it has been trained on, many scientific fields lack sufficient training data, which can lead to inaccuracies. AI can produce abstracts by analysing information, but it may miss the most important aspects, which require nuance to identify. Creativity, the highest cognitive skill, remains a significant challenge for AI.
In assessing students' understanding in a developmental biology course, various methods were employed, including multiple-choice questions, short-answer questions, and graphical abstracts. The multiple-choice questions required interpreting datasets that AI could not solve directly, because the necessary context was provided during lectures. The short-answer questions involved analysing complex anatomical figures from papers not readily available for AI training.
For the graphical abstract assignment, students were given simple instructions on the format and a word limit for the legend summarising the key conclusions. They could use a variety of digital tools or hand drawings. The assigned paper discussed two theories of embryonic patterning: positional information and reaction-diffusion.
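As an illustrative aside not drawn from the original post or the assigned paper, the reaction-diffusion theory mentioned above is usually formalised as a pair of coupled partial differential equations for an activator and an inhibitor. A minimal Turing-style sketch, with generic reaction terms f and g, looks like this:

```latex
% Generic activator-inhibitor reaction-diffusion system (illustrative sketch,
% not the specific model from the assigned paper).
% u: activator concentration, v: inhibitor concentration
% f, g: generic reaction terms; D_u, D_v: diffusion coefficients
\begin{aligned}
  \frac{\partial u}{\partial t} &= f(u, v) + D_u \nabla^{2} u \\
  \frac{\partial v}{\partial t} &= g(u, v) + D_v \nabla^{2} v
\end{aligned}
% Spatial patterns can emerge spontaneously when the inhibitor diffuses much
% faster than the activator (D_v >> D_u), in contrast to positional
% information, where cells read their position from a morphogen gradient.
```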
When the paper was submitted to generative AI to produce a graphical abstract, the result was cluttered and nonsensical, failing to represent the main ideas accurately. Even with simplified instructions, the AI-generated graphical abstract remained inadequate.
In contrast, student-produced graphical abstracts effectively communicated the key concepts. Some clearly depicted the relationship between the two theories, showing whether one acted upstream or downstream of the other, or whether the two interacted in parallel. Others used strong visual representations, although some lacked sufficient guiding text or clarity in conveying the relationship between the theories.
Grading the graphical abstract assignments was efficient, taking only a few minutes per submission. Creating new exams in this format is also straightforward, as instructors can select a different research paper for each iteration.
From the students' perspective, the graphical abstract assignment is valuable as it requires them to communicate complex ideas clearly and critically select the most important aspects of a paper.
While companies offer graphical abstract creation services, these are currently time-consuming and expensive, which limits their widespread adoption.
Looking ahead, implementing other assessment formats like short video productions, as done in science communication classes, could further challenge AI capabilities in this domain.
Overall, the graphical abstract assignment provides a valuable assessment tool that requires higher-order cognitive skills, promotes scientific communication, and remains a challenge for current AI systems to generate effectively.