Dominik Lukeš (University of Oxford) and Isabel Fischer (Warwick Business School) explored the integration of AI into academic practice with a group of Developing Consulting Expertise students.
We looked at the concept of the Jagged Frontier of AI capabilities, which illustrates the mismatch between what people expect AI to be good at and what it is actually good at. We then asked how best to explore this jagged frontier and what to pay attention to along the way.
Dominik introduced his framework for evaluating AI tasks along five dimensions. For each AI task, drawing on our exploration of various AI tools, we asked five questions:
- AI capability mismatch: How large is the mismatch between the task and known AI capabilities and limitations?
- Hallucination / unpredictability problem: How manageable is hallucination here? Where is it likely to show up?
- Prompt sensitivity: Is the output sensitive to how the prompt is formulated?
- Context window dependence: Can the AI tool see everything you present to it?
- Variability across tools and models: Does the tool and/or the model the tool uses matter for the task?
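For readers who want to turn the five questions into a repeatable checklist, here is a minimal sketch of how a group might record its judgements in code. The `TaskAssessment` structure, the 1-5 scale, and the example scores are illustrative assumptions for this post, not outputs of the workshop.

```python
from dataclasses import dataclass

@dataclass
class TaskAssessment:
    """One group's judgement of an academic task along the five dimensions.

    All scores use an assumed 1-5 scale, where 1 means the dimension poses
    little difficulty for the task and 5 means it poses a serious problem.
    """
    task: str                 # e.g. "summarising a journal article"
    capability_mismatch: int  # 1 = matches known AI strengths, 5 = large mismatch
    hallucination_risk: int   # 1 = easy to manage, 5 = hard to detect or manage
    prompt_sensitivity: int   # 1 = robust to wording, 5 = highly sensitive
    context_dependence: int   # 1 = all material fits in context, 5 = tool cannot see it all
    tool_variability: int     # 1 = similar across tools/models, 5 = results vary widely

# Hypothetical example scores, purely for illustration.
example = TaskAssessment(
    task="literature search",
    capability_mismatch=4,
    hallucination_risk=5,
    prompt_sensitivity=3,
    context_dependence=2,
    tool_variability=4,
)
print(example)
```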
Working in groups, the students came up with the following comparison of common academic tasks; the table below represents their work.
While the table is not the final word on any of these tasks, it was the subject of a much wider conversation. Perceptions along the five dimensions depend very much on how the specifics of each task are conceived: a different group, at a different time, may well come up with quite a different table. It is the conversation and joint exploration that matter. You can read more about the workshop on AI integration and explore some of the presentation materials here.