All 12 entries tagged Networks
September 14, 2007
Many web publishing systems allow content to be “tagged” with keywords. For example, a Sitebuilder page may be given one or more tags. Similarly a blog entry is classified using tags. Social bookmarking systems like del.icio.us use keyword tagging. Bibliographical databases also commonly use taxonomies of keywords. Most of these systems can be searched using keywords. Many of these search interfaces can be accessed programmatically.
Most keyword tagging and searching systems, such as that used in Sitebuilder, are not taxonomical. There is no taxonomical structure to which an author or researcher can refer when tagging resources or searching for resources.
I have identified a set of real use cases in which a taxonomical approach to keyword searching would be beneficial. These cases also imply that resources are tagged according to the keywords available in the taxonomy.
Here is a description of how the tool could work:
The search interface contains a tree structure representing the taxonomy of keywords.
The tree structure can be explored, drilling down through its branches.
Individual keywords, or whole branches of keywords can be selected.
There is also a text box allowing the user to type in keywords (with autosuggest giving options as they type).
As keywords are selected, they are listed in a text area showing all currently selected search terms.
Once all of the required keywords have been chosen, a search is done, returning a list of all matching resources.
The search results could be ordered in several different ways, for example with resources that match more of the selected keywords coming at the top of the list.
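The selection-and-ranking logic above can be sketched in a few lines of Python. The taxonomy, resource tags and page URLs below are all invented for illustration; the real tool would query Sitebuilder rather than an in-memory dict:

```python
# Sketch of taxonomy-based search with match ranking.
# The taxonomy is a nested dict: keyword -> sub-branches ({} for a leaf).
taxonomy = {
    "philosophy": {
        "kant": {},
        "deleuze": {"logic-of-sense": {}},
    },
    "e-learning": {"quizbuilder": {}},
}

def branch_keywords(tree):
    """Collect every keyword in a branch, including nested branch names."""
    keywords = []
    for name, subtree in tree.items():
        keywords.append(name)
        keywords.extend(branch_keywords(subtree))
    return keywords

def search(resources, selected):
    """Rank resources by how many selected keywords they are tagged with."""
    selected = set(selected)
    matches = [(len(selected & set(tags)), url) for url, tags in resources.items()]
    return [url for score, url in sorted(matches, reverse=True) if score > 0]

# Selecting the whole "philosophy" branch selects all keywords beneath it.
selected = branch_keywords(taxonomy["philosophy"])
resources = {
    "/page-a": ["kant", "deleuze"],
    "/page-b": ["quizbuilder"],
    "/page-c": ["deleuze"],
}
print(search(resources, selected))  # ['/page-a', '/page-c']
```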
Here is an illustration of how the interface might work (it doesn’t yet do the search):
Configuring the taxonomy
The taxonomy could be stored as an xml file on the web (for example in the same location as the search tool).
The xml file could be hand coded, or the search tool could provide an interface for creating a taxonomy file and editing its tags.
The taxonomy file to be used could be specified by the author of the web page on which it appears.
The end user could be allowed to choose a taxonomy file; it could even be possible to search for taxonomy files in Sitebuilder.
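These notes don't fix a file format for the taxonomy. A minimal sketch, assuming a simple nested `<keyword>` schema (the element and attribute names here are invented, not an agreed format), parsed with Python's standard library:

```python
import xml.etree.ElementTree as ET

# A hypothetical taxonomy file -- element names invented for illustration.
TAXONOMY_XML = """
<taxonomy name="philosophy-research">
  <keyword name="philosophy">
    <keyword name="kant"/>
    <keyword name="deleuze"/>
  </keyword>
  <keyword name="e-learning">
    <keyword name="quizbuilder"/>
  </keyword>
</taxonomy>
"""

def load_keywords(xml_text):
    """Flatten the nested keyword tree into (depth, name) pairs."""
    def walk(element, depth):
        for child in element.findall("keyword"):
            yield depth, child.get("name")
            yield from walk(child, depth + 1)
    return list(walk(ET.fromstring(xml_text), 0))

for depth, name in load_keywords(TAXONOMY_XML):
    print("  " * depth + name)
```

The nesting directly expresses the branch structure, so the same file can drive both the tree view and the autosuggest list.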
Configuring the sources
The search tool will need to know about the web application in which resources are stored (for example Sitebuilder).
It will need a method for searching each web application (how to build the search url).
One or more sources could be specified by the author of the page on which the tool is deployed – it could be set to search Sitebuilder, Blogs, etc.
A version of the tool could be made available allowing the end user to choose sources.
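Building the search URL for each configured source could look something like this (the endpoints and query parameter below are assumptions for illustration, not the actual Sitebuilder or Blogs search API):

```python
from urllib.parse import urlencode

# Hypothetical search endpoints -- the real URLs and parameters would differ.
SOURCES = {
    "sitebuilder": "https://www2.warwick.ac.uk/search",
    "blogs": "https://blogs.warwick.ac.uk/search",
}

def build_search_url(source, keywords):
    """Build the search URL for one configured source."""
    base = SOURCES[source]
    return base + "?" + urlencode({"keywords": ",".join(keywords)})

url = build_search_url("blogs", ["networks", "small-world"])
print(url)  # https://blogs.warwick.ac.uk/search?keywords=networks%2Csmall-world
```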
Saving and sharing searches
Searches could be saved onto the end user's local machine (Shared Object in Flash, filesystem in AIR).
Searches could also be shared with other users.
Using the taxonomy to tag resources
The search tool could be used to assist in tagging resources using a selected taxonomy.
For example, a user selects a series of keywords, and the tool displays the text to paste into the keyword tag field of the Sitebuilder page properties.
- Find out how to save an xml file from Flash into Sitebuilder.
July 24, 2007
- Think about how you might want to be able to search/aggregate/organise your content, how other people might want to search it and see it organised, and how you want other people to think about your content (especially if you are trying to establish a schematic structure in their minds).
- Be systematic, especially with punctuation, spaces, spellings.
- Maintain a list of your tags. A concept map is a good mechanism for maintaining this list. You could use MindManager to create the list.
- Cooperate – use the same tags that other people use, develop tagsonomies with other people (formally or informally).
- Use a combination of very specific and more general tags (for example: e-learning, elab, quizbuilder) – think about your tags as being arranged in a tree structure (specific at the leaves, general at the branches).
- Combine different tags that identify different aspects of the tagged content (for example, use a tag to identify what kind of content it is (essay, review) and what it is about (philosophy, Kant)).
- Tagging your work with a unique identifier associated with yourself allows you to aggregate your work from wherever it appears (e.g. robert_o-toole in Sitebuilder gets you this result).
July 06, 2007
I’m gonna do this….
2) The search tool can load its tagxonomy from an xml file stored on the web.
3) I’m going to add a feature that allows different tagxonomies to be loaded as required. If the xml files are stored in Sitebuilder and tagged as “tagxonomy” then the search tool could even auto-discover tagxonomies for the user to choose from.
4) A tagxonomy file could also be tagged with the unique identifier of a person, group or department (using the web groups codes for these), thus associating the file with the profile.
5) I could allow for the tagxonomy to be edited in the search tool (adding new tags or tag collections for example).
7) The tagxonomy could then be uploaded onto Sitebuilder; if the user has permissions, it could replace an existing tagxonomy; if not, they can create their own new tagxonomy.
8) It will also be possible to select a series of tags from the tree, and have a string automatically generated from that. The string can be pasted into any content tagging application. Importantly, the string should include tags that identify the tagxonomy from which the tags were taken.
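That last point amounts to joining the selected tags and appending one extra tag naming the tagxonomy. The `tagxonomy-` prefix below is an invented convention, not an established one:

```python
def tag_string(selected_tags, tagxonomy_name):
    """Build a paste-ready tag string that also records which tagxonomy was used."""
    # The "tagxonomy-..." marker is a hypothetical convention for
    # identifying the source file the tags were taken from.
    return " ".join(selected_tags + ["tagxonomy-" + tagxonomy_name])

print(tag_string(["networks", "deleuze"], "philosophy-research"))
# networks deleuze tagxonomy-philosophy-research
```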
Best of all, I’ll try to build it as an Adobe AIR desktop application (not sure how Single Sign On will work with that).
I reckon that will take a few weeks to build (as I’m in the middle of another big project).
This example will load the tags defined for the Renaissance Studies project:
January 17, 2007
There are of course more than three problems, that being the story of human conflict and struggle. Sometimes communities cannot form effectively because of disruptions and lack of continuity. And other times individuals within a community may fail to value others or the channels that connect them. Technology can do little in the short term to ease or avoid these difficulties. In considering learning communities, there are, however, three recurring problems that have been successfully addressed with technology.
1. Problems of size and complexity
In some cases the student is overwhelmed by the size and complexity of the institution. They need to break it down into smaller and more manageable segments, so as to be able to judge the value and usefulness of its features, and to form stable and repeatable connections. But all they see is a crowd. There is some truth in the claim that the best communities are in fact composed of small groups of people, perhaps as few as eight, with each group well connected to other groups. This supports adaptation and diversity, whereas mass communications and transactions negate diversity and prevent adaptation. It certainly is the case that people struggle with meaningful activity within big groups.
2. Beyond the clique
In other cases, the student has a strong, even dominant, small network of friends, peers and tutors. This network provides for some but not all of the individual’s needs. Connections to other groups are mediated by the group, and thus may be severely restricted. They therefore need to go beyond the clique to find new connections, new resources. But how? It’s a big and scary world out there. This problem can be particularly acute for distance learning or part time students, who have little time or opportunity to go out exploring on their own.
3. Loss and dispersal
Many students at Warwick are expected to undergo phase transitions that break them away from their established networks. They commonly experience a loss of community. This is common in the Arts Faculty, with many students spending their second year abroad.
July 07, 2006
The long tail model claims that the following three modifications to capitalism have occurred as a result of new technology:
- Democratize the tools of production – cheap and easy tools for creating products;
- Cut the cost of consumption – make distribution cheaper;
- Connect supply and demand – the availability of mechanisms that connect consumers and producers amongst a complex and diverse marketplace.
The first two effects are quite obviously happening, leading to the so called 'long tail' on the supply–demand curve, in which niche producers proliferate. However, the nature of the third point is contentious. A naive view would claim that these mechanisms are emergent from some increasingly self–organised and democratic network effect. I disagree. My account argues that in fact the long tail is simply the product of corporate capitalism becoming more sophisticated in dealing with massively increased and rapid demand, thus avoiding stagnation of products and consumer desire. I argue as follows:
- The number of individual consumers has increased massively. There are simply more people with more money able to buy more units.
- Each individual has more time in which to purchase products. Indeed the line between the activity of shopping (until recently considered to be a form of work occupied by the housewife) and leisure has dissolved, with a continuum between shopping for necessity and shopping for fun.
- Demand has therefore increased, but not necessarily in favour of niche producers against big brands. Rather, for most people, the big brands have just got bigger, alongside more peripheral spaces for niche products.
- And furthermore, the majority of people balance a set of big brands, against a set of niche products. The big brands are usually the products that must be bought quickly with as little hassle as possible, but with the ensuing degree of lock–in. The niche products are less essential, and hence can be treated with more consideration, greater risk but less lock–in.
- The corporations behind the big brands are entirely supportive of this second parallel set of markets. The availability of a diversity of peripheral products helps corporations in identifying and developing new desires and new products, without the cost and risk of full scale core product [re]development. The corporations know that they can use scale and association with celebrity to engender their products with a degree of recognitional legitimacy and trustworthiness, giving a competitive edge over niche producers amongst a shifting and fragmented population for whom simplicity and consistency are rare and valued. In fact it may be that as the tail extends, consumers increasingly struggle to navigate a complex market place, and are driven again back to simple brands and recognisable labels, albeit ones that constantly mutate around their core identity. The trick that big corporations must master is this: allow consumers enough slack to explore new niche markets, but prime them with values, symbols and identities that allow the consumers to act as a bridge through which the new territories can be colonised if required.
June 30, 2006
Firstly, some definitions:
Of course such convoluted routes are usually circumvented through the use of bookmarks. The user jumps right into the required context, or at least one close enough to it so as to save time. In such cases other tricks are required to speed up the communication of context. In the Sitebuilder web content management system, each sub site (department, research project, sometimes course and sometimes even individual module) has its own visual design (within the boundaries allowed by Sitebuilder). This design provides instant visual contextualisation. Warwick Blogs, however, is much more flat in terms of context. Each individual blog has its selected design (sometimes customised) and title. All entries are presented in that context. Even pages that represent entries with a single tag vary little in contextualisation. So for example, my Philosophy Research page looks just like my Baby Lawrence O-Toole page.
Thematic navigation may therefore connect content across contexts. This has obvious advantages, in that related content is automatically aggregated together. But more interestingly for learning and teaching, it encourages the end user to view related content from different contexts, and consider the differences between those contexts explicitly. So for example, a topic may appear in the module of a lecturer in Continental Philosophy, and in a module by a lecturer in Analytic Philosophy.
An obvious question to raise at this point is this: surely this could be achieved by the student doing a free–text search of all of the pages (or even the whole web)? Yes, but, if each of the lecturers were to consciously assign themes to their pages from a pre–figured set of themes (possibly even a taxonomical model), and the student were provided with that model as a guide to understanding the course, then they are given clearly specified and consistent concepts for understanding a diverse set of content or activities. This happens to be an important and widely respected pedagogical method. Curricula are in fact often designed as a combination of contextual navigation (modules, lectures, seminars, assessments) and thematics that run across those contexts (skills, concepts, competencies, objectives, values etc). Almost all of the modules that I am asked to look at work in this way. A diverse series of activities is undertaken, which have their own developmental logic. But alongside that diversity, a core set of concepts (often skills) are supposed to be the objective focus of development. However, frequently the problem is that the students and lecturers do not understand the themes, or lose sight of them in the diversity.
June 02, 2006
Writing about web page /caseyleaver/entry/mle_learning_platform/
Yes, the rotten decaying body of the corporate Managed Learning Environment stinks. We should bury it.
Hold on a minute, I detect a heartbeat. Can it be revived? Should it be revived? Perhaps it will come back having undertaken some kind of near–death moral transformation. Born again.
Sorry, I'll get to the point. We are seeing the emergence of a kind of self–assembled, loosely coupled, lightly managed learning environment (LCL–MLE?). This is made possible by the increasing ubiquity of RSS data feeds, single sign on, and keyword tagging, along with service development and provision strategies such as agile development and managed diversity.
The idea behind the old fashioned centralist MLE (OFC–MLE?) was that the user could see a range of data about the learning process, all in one place. So they would see their timetables, list of courses, marks, tasks, course content etc all together. And furthermore, it would be possible to join them up. OFC–MLE systems would contain all of this data in a single repository, as a tightly coupled system. Years of painful experience demonstrate that such monolithic systems are hard to develop, difficult to maintain, and harder still to win engagement from the wide range of people and processes involved. The answer has been to grow more independent services, with responsibility distributed more widely and designed to meet the requirements of each type of user (academics, students, administrators, communications professionals).
The trick is for each of these to make its content available openly to the people and systems who need to use it, but in a filterable, secure and timely manner. This adds up to a LCL–MLE. And that means a data environment in which people can:
- advertise information so that it gets to the right people (using directories and search based upon keyword tagging);
- find relevant information (using directories and search based upon keyword tagging);
- recommend information to others (by building their own del.icio.us style directories or by adding additional tagging);
- combine information in a single location, and present it in a useful way (see how RSS feeds are blended in the left hand panel of the E-learning at Warwick web site);
- allow the user to return information to the systems from which it was harvested, or to get diverse information to interact;
The last of these is enabled by Single Sign On, which is the key to allowing people to easily go from information, presented anywhere, to functionality that allows them to act on it. For example, on a page that I have constructed from a combination of sources, I could see that there is an interesting event happening, and easily add that event to my personal calendar without having to go off into a separate system.
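Returning to the "combine information in a single location" point above: blending several RSS feeds is little more than parsing each one and pooling the items. A minimal sketch with Python's standard library; the two feeds are inlined here for illustration, where in practice they would be fetched from Sitebuilder, Warwick Blogs, etc.:

```python
import xml.etree.ElementTree as ET

# Two minimal RSS 2.0 feeds, inlined for the sake of a self-contained example.
FEED_A = """<rss version="2.0"><channel><title>Blogs</title>
<item><title>Small world networks</title><link>http://example/a</link></item>
</channel></rss>"""
FEED_B = """<rss version="2.0"><channel><title>Sitebuilder</title>
<item><title>E-learning events</title><link>http://example/b</link></item>
</channel></rss>"""

def combine(feeds):
    """Blend items from several feeds into one list of (source, title, link)."""
    items = []
    for feed in feeds:
        channel = ET.fromstring(feed).find("channel")
        source = channel.findtext("title")
        for item in channel.findall("item"):
            items.append((source, item.findtext("title"), item.findtext("link")))
    return items

for source, title, link in combine([FEED_A, FEED_B]):
    print(f"[{source}] {title} - {link}")
```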
Keyword tagging also contains some revolutionary force. Remember how OFC–MLE systems were built on the assumption that learning processes were constructed by a single individual (or well coordinated team) with a strong overview of all of the contents and connections that should be contained in the learning experience? That has always been the antithesis of the kind of research based learning (RBL) that makes a top UK university what it is. RBL is more like a mentoring and guidance model, in which less centred and hierarchical teams develop a shared understanding of the direction in which the students should be steered, and then input resources, links to resources, and feedback that does the work of moving the students in the right direction. The students themselves are expected to gradually (or sometimes quite quickly) take over the helm and navigational responsibility. OFC–MLEs tend to work against this. But imagine a technology that allows the teaching team to create and select resources, and then annotate, tag and connect them for the students. The students can then explore these resources, and even create their own tagging, annotation and networks of them, to be shared with others or even assessed by the teaching team.
The E–learning Advisor Team are already working on several projects that exploit these possibilities. Our web architecture (Sitebuilder, Warwick Blogs, Warwick Forums etc) provides many of the tools that we need to make this a success.
See this interesting paper on Connectivism presented to Google by George Siemens.
October 20, 2005
After spending a few weeks away from my research project, I usually restart my thinking by quickly reading a popular-science/history/art-theory book of some kind. My means of choosing such a book is usually quite aleatoric: the criteria being "something that sounds good, is easy and fast to read, and which may provide some unforeseen empirical data for my conceptual activities".
Mark Buchanan's book, selected from the popular-science section of Waterstone's in an instant, without much consideration, met these criteria well. The aim of the book is to convey and contextualize what is a fairly simple idea. But what makes that simplicity into something more significant is that it is an idea that seems to have been overlooked, and which once brought into consideration, gives explanatory sense to lots of seemingly unrelated phenomena. For me as a philosopher, that is interesting: what thoughts were impossible before the arrival of this concept? what errors were made?
The concept is, superficially, that of the "small world network": that is to say, how networks seem to form naturally from highly chaotic and random situations into simplified but still random organisations, connected together by specially privileged nodes that do much of the work of maintaining order and flow in the system. As a result of this privileged position, and the dynamics of its connectivity, all kinds of sophisticated behaviours (economic, social, cognitive, ecological) can be seen to emerge: results of what has been called the network effect (although there is much more to this than the business model). It plausibly demonstrates how it is quite feasible for one random person to be only six degrees of separation from another random person. And then it extends this model to many further domains (physical, ecological, computational etc).
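The Watts–Strogatz construction behind this idea is simple enough to sketch: start from a ring lattice, rewire a small fraction of edges at random, and the average path length collapses while most of the local structure survives. A self-contained illustration (the network size and parameters are chosen arbitrarily):

```python
import random
from collections import deque

def ring_lattice(n, k):
    """Each node connects to its k nearest neighbours on each side."""
    return {i: {(i + j) % n for j in range(-k, k + 1) if j != 0} for i in range(n)}

def rewire(graph, p, seed=0):
    """Watts-Strogatz style rewiring: move each edge's far end with probability p."""
    rng = random.Random(seed)
    g = {i: set(nbrs) for i, nbrs in graph.items()}
    nodes = list(g)
    for i in nodes:
        for j in sorted(graph[i]):
            if j > i and rng.random() < p:  # consider each undirected edge once
                choices = [m for m in nodes if m != i and m not in g[i]]
                if choices:
                    m = rng.choice(choices)
                    g[i].discard(j); g[j].discard(i)
                    g[i].add(m); g[m].add(i)
    return g

def average_path_length(g):
    """Mean shortest-path length over all reachable pairs (BFS from every node)."""
    total, pairs = 0, 0
    for start in g:
        dist = {start: 0}
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in g[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(d for node, d in dist.items() if node != start)
        pairs += len(dist) - 1
    return total / pairs

lattice = ring_lattice(60, 2)
small_world = rewire(lattice, p=0.1)
print("lattice:", average_path_length(lattice))
print("rewired:", average_path_length(small_world))
```

With only a handful of shortcuts, the rewired network's average path length drops well below the lattice's, which is the "small world" effect the book describes.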
I'm thinking: Kant, sensus communis – what if he knew about small world networks? Or conversely: Nietzsche – what did he know? And of course it is there in Deleuze and Guattari (concepts such as transversality), but rarely with the very definite examples that we now know of. A fascinating question to consider would be: when exactly did the concept of "network" arise? – and at what point did people start realising that the conditions for the establishment and operation of a network may cause certain behaviours, patterns and organisations to emerge (the network effect)?
Connect it to Andy Clark's extended cognition theses (with its tightly coupled systems). And throw in our experiences with the small world system known as Warwick Blogs. Interesting. Very.
Even more so because it turned out that, as I discovered on page 16 as the author described the kind of surprising coincidence that a small-world network can cause, I am in fact only two degrees of separation from Mark Buchanan! He is a friend of a friend. As I read…
I moved a few years ago from the United States to London to take up an editorial position with Nature
…I dropped the book when I realised that he may well share an office with my friend Karl, who is also an editor with Nature, also working in the physical sciences. Karl is a Moosehead, with whom I occasionally drink, eat chillis, and bellow loudly. On Friday night (whilst in the Bilash in Botley) I explained this to Travis (also a Moosehead, if not the head Moosehead). Travis was actually planning, that night, to write an email to Mark Buchanan. It may even be that I have met the author at some Moosing event at some time (although I may have been drunk and therefore incapable of remembering the names of any new acquaintance). Spooky. Or just the result of a small world network.
Buchanan's book does well to rapidly explain the work of Watts and Strogatz, Granovetter, and other pioneers. It is entertaining and full of fascinating examples throughout – especially when dealing with the ingenious experiments of the sociologist Stanley Milgram. The connecting-up of cases from such a wide range of domains begs many questions (important philosophical questions that I think Deleuze and Guattari address effectively in What is Philosophy?). But it is very much worthwhile because of that.
I'll give the book 4 stars (not 5, as it could do with a bit more detail on the mathematics and mechanics of the networks).
August 01, 2005
A response to Peter Taylor's BBC documentary.
Not much to say really, and not much of a discussion primer either. All I can say is "so what?". The journalism was conventional and unimaginative – looking for physical connections when only distant connections would suffice, and making suppositions based on fragments of evidence. He barely addressed the really important issue:
al-Qaeda and other new terrorist networks may well be quite different to the terrorist organisations of the past
The connection to organized crime was made, and should be explored a great deal more. But the strange motivations of the terrorists, the most important thing to understand, were barely mentioned. Interestingly, we were told that at least two of the suspects lived 'double lives' as both 'criminal playboys' and Jihadists – that is fascinating and highly significant. There are strange subcultures of violence, criminality, and male power behind this, but that doesn't quite fit with the Islamic conspiracy theory.
Sadly this documentary failed. We need journalists with imagination, capable of creating adequate concepts to match the innovations of the terrorists and the forces behind them.
This raises an interesting question about the purpose of TV documentaries. The BBC are wrong in thinking that even a good investigative journalist is capable of creating a solid and thorough case of evidence in such a complex situation as this. Too much detail is required, hence the high degree of supposition and grasping at connections. In reality it is so complicated and difficult that national governments, judiciaries and intelligence agencies struggle. Why then does the BBC think it can do the job?
I would argue that the role of the TV documentary is to come up with alternative ways of seeing the world, examine the implications, and suggest ways in which those theories might be tested. Unfortunately, TV journalists are either not brave enough or just not up to that challenge. It's easier to play the role of the detective.
June 21, 2005
I have just read and greatly enjoyed Manuel De Landa's A Thousand Years of Non-linear History. In fact, I'm so excited by its approach to creating dynamical models of the world, that I'm using it all the time with a wide range of applications. De Landa takes the ethological approach of Deleuze and Guattari, considering how stratified bodies (organic and inorganic) are built up and eroded by the emergent and self-organising expression of network effects (including geological, biological, social and economic networks). So here's a few conjectures based on this...
The Nazis emerged through a meshwork of radical individuals on the periphery of a range of disciplines: mystical, military, medical, commercial, bureaucratic, artistic and the media. Their individual ideas were not particularly innovative, being mostly concerned with the intensification and purification of existing processes. However, it was their intense and fundamental will to application regardless of cost that marked them out in an otherwise consolidating and cautious climate. The meshwork of these diverse forces was consequently drawn together by the combination of their shared peripheral status along with a powerful belief in the necessity (ethical) and certainty (metaphysical, historical) of the foundation of a new world from the traits (in need of purification and authentication) that they could see all around.
Such peripheral forces exist within any large and relatively homogeneous body of individuals. They are the product of its genetic drift, deviations necessary for the existence of adaptive potential. In some cases, selection and replication mechanisms may form that act to single out, purify and intensify traits within the ceaseless drift. And it is not unusual for a small set of such deviations from sometimes very different bodies to become associated through their co-identification as 'outsiders', despite the fact that they may be concerned with quite different traits. An increase in the mobility of such diverse radical agents is often a catalyst for this co-identification. This was certainly a factor in the emergence of the Nazis, with the increase of mobility and resulting inter-connectivity during and following WWI.
But they don't often grow into the kind of wildly aberrant monster that was the Nazis. What then might have been the extra condition that catalysed the transformation of the Nazis from fringe to global threat? One way of answering this would be to look at the 'network effects' internal to the Nazi meshwork. De Landa discusses several 'abstract machines' that exploit network effects in different ways. For example, the 'group and grid' model proposed by the anthropologist Mary Douglas. The 'grid' refers to organisations that maintain their identity through an intensity of centralized regulation (typically propagated through hierarchies). The 'group', on the other hand, operates through an intensity of group allegiance (typically propagated through memes, propaganda etc). Most organisations exist with a mix of both group and grid. However, at the extremes, there are some groups that are highly grid structured, having little opportunity to propagate memes (due to external controls). And there are other organisations that have no grid, and propagate via indirect means (memes). We can take this a step further by arguing that any organisation that is able to master all of the combinations, and switch between them as required, will be able to maintain its consistency regardless of external controls.
My conjecture is that the Nazis crystallized around the collapse of a state hierarchy. As the hierarchy collapsed and became less rigid and certain, access to key elements of its operative functions opened up. In response, the Nazis occupied positions of power within the fragmenting hierarchies, and thus formed their own internal grid based hierarchies from them. This resulted in 'immune responses' from the grid based hierarchies of the official organisations that they parasitised, which responded by attacking the emergent grid of the Nazis, who in turn were forced to return to their meme based 'group' roots, until they were again able to consolidate their control of (or just replace) the state hierarchies. This symbiotic relationship continued throughout the 1930s, with the Nazi party gaining increased mastery of the trick of switching modes of operation.
In a very real sense, the Nazis were subjected to a learning process as they were forced between each mode of operation. Perhaps the lesson that should be learnt from this is:
During a time of rapid change and collapsing hierarchies, the auto-immune response of the state to the evolution of networks may actually provide the ideal learning experience for extremist organisations.
It may well be that in response to the expansion of the internet, capitalism and the release of its sedentary populations, China is providing just such a learning experience for a new generation of extremists.
If you have something interesting to contribute to this, please contact me
March 08, 2005
Expression is the operation of a programme of deterritorialization and reterritorialization. Repeatedly carrying elements formed in context or territory A into a new relation with elements formed in context B. The deterritorialized elements being the content or payload carried by the expression.
The effect of the delivery may be entirely disruptive when reterritorialization occurs. But more productively, it could trigger a response in the second context B that dissipates the effect, but at the same time releases further elements that may be deterritorialized and reterritorialized within the first context A.
Furthermore, if the deterritorialization from A into B is effectively handled by B, without the immediate destruction of B, and the transmission of released elements from B back to A (feedback) is beneficial to A, then A will tend towards repeating the exchange. Having responded to A effectively in the first instance, B is more likely to be prepared for the second instance, and in fact has started to stabilise around the expectation. And in turn A is stabilising around the sensation of the return delivery of content.
In this way each context has its exposed surfaces or senses repetitively disrupted. In isolation this interference could only ever be noise, but when part of such a mutually beneficial relationship, the exchanges are sensible. This is the case even though the payload exchanged and the programmatic form by which it is carried in each direction may have no necessary relation. In fact it can be said that the meaning only exists in the overall process, between the lines carved out by each separate context, combining both contexts and the movements between them in an asymmetrical synthesis of the sensible.
September 15, 2004
Following from that is the semantic theory developed by Deleuze in the Logic of Sense. Meaning works as follows. There is an emerging 'problem' space constituted by the way in which different foldings work against each other. And there is the 'solution' which is a second layer of articulations that repeatedly succeeds in operating on the problem space. This second level is itself a problem space requiring solutions (and so on). The applicability of one problem space onto another as a solution is the site of meaning. The twin articulations make sense of each other. Of course the articulations are not entirely separate: transversal interactions occur between them, complicated feedback loops shift the relationships between the articulations, and hence change their formations.
This raises a question about the semantic web, which attempts to move us to a model for linking content that works with these emergent semantic processes. The idea seems to be this. A document is the static representation of a web of meaning (problem spaces and solutions interacting with each other). That web of meaning is seen to come from the author. A so called 'intelligent agent' is capable of drawing out and representing these meanings with some kind of schema that links them up with other documents and makes them accessible to others using the same schema.
Such a system was demonstrated in the keynote speech this morning. It was admitted that the information structure into which the documents are mapped is no different to a database schema, the advantage lying in the data harvesting tools that can do the job more efficiently than manual approaches. It should be obvious that, considering Deleuze's work on meaning, this is an extremely pale imitation of the dynamic transversal relationship that actually takes place in semantics, in the creation and interpretation of documents. The semantic web could allow the reader of a document to identify a web of significant elements themselves, and then seek connections outside that work with that web. Instead, such systems are prejudging the significance of words within the text, applying a set schema, and linking only to other documents that can be similarly analysed. Nothing creative can come of this.
Perhaps a better approach would be to allow the reader to create their own conceptual map of the significant elements of a document, and then search for other similar maps that apply to other documents. That really would be a semantic web.
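As a rough sketch of how two readers' maps might be compared: represent each map as a set of links between significant elements, and a simple Jaccard overlap would do as a first pass (the maps and element names below are invented for illustration):

```python
def similarity(map_a, map_b):
    """Jaccard overlap between two concept maps, each a set of element pairs."""
    links_a, links_b = set(map_a), set(map_b)
    return len(links_a & links_b) / len(links_a | links_b)

# Each map records the links a reader drew between significant elements.
reader_1 = [("sense", "problem-space"), ("problem-space", "solution"), ("solution", "feedback")]
reader_2 = [("sense", "problem-space"), ("problem-space", "solution"), ("solution", "meaning")]

print(similarity(reader_1, reader_2))  # 0.5
```

A search for "other similar maps" then becomes a ranking of documents by this score against the reader's own map, rather than by a pre-imposed schema.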