Future learning technologies (ELE not VLE)
People keep asking me if I think that Warwick should have a VLE. I keep answering NO!! – or at least nothing that looks like a conventional VLE (browser-based content transmission). My justifications: 1. VLE technologies are about to become obsolete; 2. the “environment” metaphor is a mistake. We need to think about extended learning ecologies (ELE) not virtual learning environments (VLE).
This vision of the near future illustrates some of the technologies and practices that will be commonplace very soon.
We’re in a high-tech open access learning centre. We walk over to a collaboration space. The large touch screen lights up, having recognized the members of the group (our mobile devices have already connected to it). Our shared notebook is on the screen, displaying the contents that we are likely to be using for this session (it already knows which seminar, which module etc). It is a version of the digital course-pack for this module, but with our seminar group’s additions and customizations. Our tablet computers and laptops (some of us are using the iPad 4) have also opened up the notebook in sync, with the same contents.
We can work on documents, and see our updates replicated across all the devices. On the big screen we create a mind map together. It is instantly replicated onto our own individual devices, so that later on we can carry on using it individually.
During the session, someone mentions that a useful documentary will be on tonight. Some of us add it to our personal calendars. But it clashes with a dinner that I’m going to. No problem, the calendar has already set up whatever is required for me to record the programme and watch it at a later date (it’s been added to my to-watch list; when I get to my television, I see it listed there).
Part way through the seminar, one of the students gives a short talk, pushing a sequence of slides (containing text, images, audio and video) from her iPad onto the screen and onto our own tablet devices. If we want, all of the audio of her presentation and our discussion is recorded along with the slides. We can also annotate them in our own digital notebooks.
As she talks, a resource finder identifies keywords that might be relevant to our work. It uses in-built intelligence and previous experience with the group to search for and make available (to the whole class) relevant resources. There’s a mention of the Port Royal Logic in the discussion. Instantly it’s there ready to use. As she starts to talk about the PRL, I realize that it’s really interesting to me. I start recording what she is saying into an audio note, but did I miss the bit just before I pressed record? No, the system was pre-emptively recording that, so I can include a few minutes of the pre-record in my note. The resource finder also knows about articles and books that will be of use to follow up after the session, some of which have been manually chosen, and some of which are intelligent suggestions.
Later, when I get back to my study, I’m able to access all of this as a timeline of events and as a collection of resources, replicated and presented onto my desktop. I can pick out key ideas, develop them, link to other resources, and build them into a more considered, more complete product – almost a complete essay, which might form part of my assessed work – I’m not sure. So I sleep on it. Then over the next few days, I revisit the essay (on my iPhone when I’m on the bus, on my iPad when I’m in a café, on my desktop computer, and even as audio read back to me as I exercise in the gym).
Finally, I decide that I like what I’ve written, and I submit it into the peer support app for my tutor group. Other members receive a notification (on their various devices) telling them that I’ve submitted an essay for consideration. They can access it and give me feedback, in the document’s workspace using whichever device (mobile, desktop etc) that is at hand. I receive notifications, and once I’m happy with it, I publish the essay to a couple of different “zones” – the module tutors, and also a student research network. It appears on their devices through the apps that they have subscribed to.
I’ve used a lot of new ideas in this work of fiction. But they are all things that are just becoming reality right now. Some of them are named by flashy buzz-words:
Ubiquitous computing: powerful, net-connected devices always at hand in an appropriate form, allowing immediate access to information, people, choices, productivity-functions etc, intuitively and unobtrusively. We can do sophisticated IT without interrupting the flow of ideas and events. Enabled by mobile, web-enabled devices like the iPhone and iPad, as well as smart connected devices (internet-enabled printers, televisions etc).
Cloud computing: our data, files etc are stored over the network on servers, and replicated immediately across all of our devices (and potentially other devices such as presentation screens). Going further, the software that we use runs “in the cloud” and is replicated (in different platform-adapted forms) across all of the devices that we use.
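The replication described above can be sketched in a few lines. This is a toy model under my own assumptions (a single canonical store, last-write-wins, invented names like `CloudStore` and `Device`); a real sync service would need conflict resolution, offline queues and authentication.

```python
class Device:
    """A user's device, holding a local replica of the cloud data."""
    def __init__(self, name):
        self.name = name
        self.local = {}


class CloudStore:
    """Holds the canonical copy of each document and pushes every
    update out to all registered devices."""
    def __init__(self):
        self.documents = {}
        self.devices = []

    def register(self, device):
        self.devices.append(device)
        device.local = dict(self.documents)   # initial full sync

    def update(self, doc_id, content):
        self.documents[doc_id] = content
        for device in self.devices:           # replicate everywhere
            device.local[doc_id] = content


store = CloudStore()
ipad, desktop = Device("iPad"), Device("desktop")
store.register(ipad)
store.register(desktop)
store.update("mind-map", "seminar group notes")
print(desktop.local["mind-map"])   # prints: seminar group notes
```

The point of the sketch is that the devices never talk to each other directly: the cloud store is the single source of truth, which is what makes the “same notebook on every screen” experience possible.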
Pre-emptive adaptive search: the computer listens to what I am saying, observes what I am doing, and makes guesses about what might help me – for example, by searching for and listing resources that might be relevant. If I start talking about Cezanne and Deleuze, having Cezanne’s paintings of apples ready at hand would be most useful.
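Stripped of the speech recognition and the intelligence, the core mechanism is keyword spotting against a resource index. A minimal sketch, with an entirely hypothetical index and function name (`RESOURCE_INDEX`, `suggest_resources`):

```python
# A tiny hand-built resource index; in the scenario this would be a
# search service with real adaptive intelligence behind it.
RESOURCE_INDEX = {
    "port royal logic": "Arnauld & Nicole, Logic or the Art of Thinking",
    "cezanne": "Cezanne's still lifes (image collection)",
    "deleuze": "Deleuze, Francis Bacon: The Logic of Sensation",
}

def suggest_resources(transcript):
    """Scan a running transcript for indexed keywords and surface
    the matching resources as they are mentioned."""
    text = transcript.lower()
    return [resource for keyword, resource in RESOURCE_INDEX.items()
            if keyword in text]

print(suggest_resources("She starts talking about Cezanne and Deleuze..."))
```

The adaptive part of the vision is what would replace the static dictionary: an index that learns, from the group’s history, which matches were actually used.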
Digital short term memory: the stream of events is constantly being recorded, but we don’t need a permanent record of everything; we just need to keep the important things. However, we often don’t realize right away that something is important. Having a digital short term memory allows us, at any point, to select the last few minutes to be permanently stored. The rest is deleted.
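This is essentially a rolling buffer with a “keep” operation. A minimal sketch, assuming a fixed time window and invented names (`ShortTermMemory`, `record`, `keep`):

```python
from collections import deque

class ShortTermMemory:
    """Keeps only the most recent window of events; older ones expire.
    Calling keep() promotes the current window to permanent storage."""

    def __init__(self, window_seconds=300):
        self.window = window_seconds
        self.buffer = deque()      # (timestamp, event) pairs
        self.permanent = []

    def record(self, event, now):
        self.buffer.append((now, event))
        # Discard anything older than the rolling window.
        while self.buffer and now - self.buffer[0][0] > self.window:
            self.buffer.popleft()

    def keep(self):
        # Promote everything still in the window to permanent storage.
        self.permanent.extend(event for _, event in self.buffer)


stm = ShortTermMemory(window_seconds=300)
stm.record("slide 1 audio", now=0)
stm.record("slide 2 audio", now=200)
stm.record("Port Royal Logic mention", now=400)   # slide 1 has expired
stm.keep()   # "did I miss the bit just before I pressed record?" -- no
print(stm.permanent)   # ['slide 2 audio', 'Port Royal Logic mention']
```

Pressing “record” in the scenario corresponds to `keep()`: the last few minutes are already in the buffer, so nothing before the button-press is lost.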
Digital cloud synced notebook: whichever device I am using, I can record notes, snapshots, audio recordings, links, videos into my notebook. My notes are then replicated across all of my devices. I could take a snapshot on my phone, go back to my desktop, find it in my notebook, and add text to the note to expand upon the record. This is possible now using tools like Evernote.
Digital cloud synced course-pack: the resources for a course are packaged and available to students for download. The pack is then imported into their digital notebooks. They can use the materials, annotate, add to them, and share their additions. Again this is possible using Evernote, although not yet with all of the collaboration tools that we might desire.
Notifications: not a new idea, but one made more powerful as more devices become connected and access becomes ubiquitous. Notifications, and the ability to subscribe to channels of information, are the basis for social networking (Facebook etc). We should see more flexibility in our ability to define what we want to be notified about, and how we want to receive notifications.
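The underlying pattern is publish/subscribe: channels with subscribers, and a fan-out on publish. A minimal sketch, with invented names (`NotificationHub`, the “tutor-group” channel) standing in for the “zones” of the scenario:

```python
class NotificationHub:
    """Each channel ('zone') keeps a list of subscriber callbacks;
    publishing fans the message out to every subscriber."""

    def __init__(self):
        self.channels = {}

    def subscribe(self, channel, callback):
        self.channels.setdefault(channel, []).append(callback)

    def publish(self, channel, message):
        for callback in self.channels.get(channel, []):
            callback(message)


hub = NotificationHub()
inbox = []
hub.subscribe("tutor-group", inbox.append)                    # desktop app
hub.subscribe("tutor-group", lambda m: inbox.append(f"phone: {m}"))
hub.publish("tutor-group", "New essay submitted")
print(inbox)   # ['New essay submitted', 'phone: New essay submitted']
```

The flexibility argued for above amounts to letting users attach their own filters and delivery preferences in place of these raw callbacks.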
App-based channels: Starwalk is one of my favourite sources of information. It is an “app” (software designed to run on mobile platforms) that I have on my iPad and on my iPod Touch. It presents up-to-date astronomical information. I’m always interested to hear about new satellite launches. They get “pushed” into the app automatically over the net. I also get notifications on both devices, so I know that new information is there. The information is presented in the context of the app. I can view the locations of the new satellites in the night sky on its digital planetarium. I can use the information in ways appropriate to the field of study, afforded by the app. Increasingly, information will be presented in this contextually designed form.
That kind of rich interconnectivity, intelligence, adaptive flow, immediacy and collaborative productivity is from an entirely different world to the conventional VLE. But it’s not sci-fi. In response to the recession and to market saturation, tech companies have been innovating with unparalleled intensity. Who is driving these changes? Who are they targeting? Not geeks, not scientists, not James Bond. These are all consumer-oriented developments: students will become familiar with these tools in their non-academic activities faster than they are adopted in education.
These economic and competitive conditions have a second, perhaps more revolutionary potential. Consumers are becoming designers – that is to say, the rich range of options and interconnectivity means that ordinary people are starting to think about how they are constructing their information and technology capabilities. They are creating new species of cognitive agent, assembled from hardware and software choices. The success of these species is determined by many factors. They evolve and adapt, feeding back positively and negatively, and forming order through the operation of network effects. It’s not an environment. It’s a full-blown ecology, and should be treated as an Extended Learning Ecology not a Virtual Learning Environment.