All 5 entries tagged Co-Evolution


August 21, 2005

Research Notes: still unconvinced about cognitive 'science'

Follow-up to Research Notes: co–evolution and the limits of explanation from Transversality - Robert O'Toole

Despite some interruptions, I'm now up to chapter 8 of Andy Clark's Being There. The chapter on The Neuroscientific Image presented a plausible model of cognition based upon brain research, combining functional-hierarchical, distributed, embedded and dynamical approaches. But I am still not satisfied that the question of methodology is addressed sufficiently.

The important point is that dynamical and computational explanations are not exclusive; rather, they describe genuinely distinct forms of organisation and mechanism, and hence our task is to identify when one or the other is more appropriate (and indeed when combinations of the two apply).

That is good, but rather than giving a systematic guide as to how we can apply these various approaches, we instead see a cognitive 'science' built upon a patchwork of interdependent conjectures concerning the many distinct aspects of embedded and evolutionary cognition. The conjectures add up to an 'engineering model' of the intelligent organism, with the aim of offering a plausible story. They concern, amongst many other aspects:

  • Components available for the construction of the system, and the energetic and physiological limitations imposed on those components (both in the brain and the rest of the body);
  • The ontogenetic generation and regeneration of those components within the life of the individual, in isolation and through complex co-evolutionary relationships;
  • The diversity of the modes of organisation of those components, and their interactions (including computational and dynamical operations in combination);
  • The generation and regeneration of those organisations;
  • Components in the environment available for simplifying and extending cognitive processes;
  • The feedback and feedforwards loops between these internal processes and the external environment within which the organism is embedded;
  • The limitations and requirements (temporal, spatial) imposed upon the individual by the environment;
  • Co-evolutionary links with other complex entities in the environment of the organism;
  • The phylogenetic development of all of the above in the evolution of the species and its environment.

But how can these individual conjectures be 'falsified'? Much of the work of cognitive science is to assess the fit of one of these conjectures with the many others. Of course that may lead to complex but consistent theories that seem plausible but which turn out to be entirely wrong. What other means do we have for assessing the plausibility of the conjectures? Clark relies on two:

  1. Selective neural damage evidence – the mainstay of neuroscience, showing how damage to a part of the brain has specific effects on behaviour;
  2. Economic plausibility – assessing whether a conjecture describes mechanisms that are just too extravagant and costly to be likely (for the individual, species or environment).

Method 1 is reliable where a theory depends upon the existence of a localized centre of mental functionality or control that is susceptible to damage. However, even in these cases there may be important distributed and dynamical elements that are difficult to assess (and hence get ignored by the theory).

Clark gives some relatively trivial examples of method 2. However, I am not convinced that such 'economies' of resource (including time) are often simple or stable enough for easy analysis. And furthermore, if we consider the organism not to be an individual, but rather an assemblage of highly mobile components (especially at the 'higher' levels, at which an individual may become possessed by powerful, nomadic memes driving its behaviour), economies become much less predictable and reducible.

More fundamentally, I'm not at all convinced that there is some 'rule of efficiency' driving and limiting all phenomena. That, I think, is the really interesting question.

If you are interested in this entry, please contact me


August 08, 2005

Research Notes: co–evolution and the limits of explanation

Follow-up to Research Notes: Multiplicity, co–involution, Being abstract but not generalized from Transversality - Robert O'Toole

By Chapter 5 of Being There, Clark has reviewed the relevant work in robotics, cognitive science and developmental psychology. The slightly understated conclusion seems to be that a vital ingredient is missing: reality, messy, complex, non-linear reality.

This is a key paragraph:

This approach ignores one of the factors that most strongly differentiate real evolutionary adaption from other forms of learning: the ability to coevolve problems and solutions. p.93

The question is, as I read on, will he propose some kind of mechanism for introducing this factor into simulations, and thus quantifying its likely effects and patterns? Or perhaps he will explore the non-linearity of co-evolution further, concluding that it renders a science of embedded cognition of limited explanatory power?

My bet would be on his coming up with a toolkit for identifying the situations in which co-evolution occurs within limits, as distinct from cases in which its non-linearity renders problems obsolete faster than solutions can emerge: a set of systems co-involuting through a shared Body without Organs, with degrees of stability and relative velocities.



July 04, 2005

Research Notes: Naive Deleuzianisms, the war on terror, the valorization of self–organizing systems

Follow-up to Research Notes: Fascism within networks: China and the internet from Transversality - Robert O'Toole

My reading of Germinal Life has reached the third chapter, with Keith's call for a temporary and critical 'suspension' of Deleuze and Guattari's attempted equation 'ethics = ethology'. This suspension opens them up to an awkward but necessary critique.

And at the same time, I have been thinking more in the style of Manuel De Landa, applying his method of 'non-linear' history to the analysis of extremist and terrorist bodies. I am considering their emergence from pre-individual singularities on the machinic phylum to individuated and efficient learning machines. This raises some interesting issues concerning naive readings of the schizoanalytic project.

Consider this: are the various armed groups in Iraq benefiting from the continued presence of the US in a way that a naive schizoanalysis would praise? There were clearly many disparate splinters formed from the explosion of the Saddam Hussein regime of hierarchies, each itself a pre-individual singularity. And in response to the crudely striated tactics of the US military, are these otherwise unconnected singularities finding common currency, points of convergence, catalysts for the creation of their own internal consistency? As with the Nazis, I would say this is likely.

It would seem that the ethology leads to an ethics in which al-Qaeda might be valorized. Clearly there is something wrong, something out-of-order with this. Perhaps it is the same imprecision and confusion of differences that leads to the problem described by Keith in Germinal Life:

the various 'becomings' that characterize 'evolution', and serve to make it nongenealogical and nonfiliative, cannot be treated as if they were all the same, so that, for example, we could move simply but far too quickly, from talking about the transversal movement of the 'C' virus that is connected to both baboon DNA and the DNA of certain domestic cats, so talking about the 'becoming-baboon in the cat', to talking about the becoming molecular-dog of a human being, as if they were of an equivalent order. p.188-189

De Landa's free use of 'abstract machines' made me nervous. But what principle can there be to guide us as to the required level of detail, of specificity?

The answer from Deleuze and Guattari, and which I think Keith is about to give in the next section, is that understanding each deterritorialization's relationship to its own specific Body without Organs, and its passage into the possible constitution of an abstract machine, is the way to understand the appropriateness of that abstract machine to the specific case.


If you have something interesting to contribute to this, please contact me


June 21, 2005

The ethical character of Bergson's method of intuition

De Landa's A Thousand Years of Non-linear History left me with a sense that Deleuze and Guattari have the most effective and exciting practical approach to creating active and dynamical models of the world. But that book is one of examples underpinned by a few key concepts; it aims to show how far those concepts can be taken, and I suspect that it intentionally leaves some philosophical challenges unsatisfied. That is a niche that Keith Ansell Pearson's Germinal Life: The Difference and Repetition of Deleuze fills more than adequately. Here are my thoughts on reading the first chapter.

The 'ethical' character of this method of philosophy resides, therefore, in the cultivation of a 'sympathetic communication' that it seeks to establish between the human and the rest of living matter. Ansell Pearson, Germinal Life, 1999, p.33

Keith's emphasis on the 'ethical' dimension of Bergson's method of intuition is very significant (and he notes, few others have made this link). The significance for me follows from the idea that the ethical dimension requires a consideration of something beyond any singular act or entity (as the sufficient reason of the act), but which does not assume any kind of totality or finality. I'm not usually interested in talk of Being (with a capital 'B'), although it is often more effective than counting sheep. But there is something in this angle on it that has made me take it much more seriously. And that something is in the negative ethical implications of thinking becoming without Being.

The argument seems to demonstrate how a concept of Being is an essential precursor to an encounter with duration, the key concept invented by Bergson. These encounters with duration connect us with the temporal problematics that (it is claimed) drive all activity and differentiation: real time, or the asymmetrical synthesis of the sensible – that is, the sufficient reason behind the richness of the world.

Importantly, the encounter with duration is not singular and purely metaphysical, to be done in one philosophical-historic-eschatological event (it's not Hegel). Rather it is a pedagogical method that must be re-applied, with the aim of leading us away from conceptual confusions ('badly analyzed composites'), along lines that differentiate but at the same time follow virtual tendencies, to an understanding and acceptance of specific differences in kind – for example, to apprehend historical singularities (as De Landa does so brilliantly).

Even more importantly, we should recognize the active nature of this method. It takes us away from a passive relation between a subject and an object. It is an act of perception, intelligence and consciousness, but one that is always an active operation on and in the world. Keith provides a great passage on this from Bergson:

to perceive consists in condensing enormous periods of an infinitely diluted existence into a few more differentiated moments of an intenser life, and in thus summing up a very long history. To perceive means to immobilize. Matter and Memory, p.208, cited in Ansell Pearson, Germinal Life, 1999, p.34

The method of intuition is therefore both a means of leading us to a comprehension of differences in kind and, at the same time, through its immanence to the world in which it perceives, actively creates new differences in kind. It is a method that places thought absolutely in the world. We should always remember that the return of thought and philosophy [in]to the world is really what Deleuzianisms (or neo-Bergsonisms) are about.

But this then raises the big question: why philosophy? – why this tendency towards conceptual activity and the apprehension of differences in kind? – why this method of intuition? The answer to this varies slightly but importantly between Bergson and Deleuze (but the principle is the same). Philosophy is the perception of nature, or nature’s own perception (later Deleuze will see perception as a property existing beyond the human). Differentiation is never a simple or ontologically foundational act, but rather is already complex. How the world differs from itself is not reducible to a mechanism or dialectic. In each case the actual mode of its differentiation is that which is indeterminate in its differentiation (the radical difference). If it were otherwise, nature would never differ from itself. There could be no asymmetry, no drive to overcome and reconnect, no real time, no elan vital, no life. The indeterminacy introduced by this radical difference is essential:

The crucial element that Bergson wishes to grant to life is not a mysterious force but rather a principle of 'indetermination'. It is this indetermination, and with it the capacity for novel adaption, that he sees as being 'engrafted' onto the necessity of physical forces, so as making possible a 'creative', as opposed to a purely mechanistic or deterministic, evolution. ibid p.48

But at this point we risk losing any connecting principle between the differentiations. Does radical difference leave us with an absolute becoming? In what sense is there anything to differentiate from? The world has lost itself, cannot perceive itself, is inert and lifeless. In Bergson’s terms, the elan vital is gone. Saving us from this undifferentiated becoming, we have the ‘ethical’ turn. It is an ethics that seeks to posit some principle of reconnection beyond the differentiation. Some exchange and interlocking between the differences. Some expression that carries content across between the two differentiated worlds. A principle assumed in both sides (but not itself outside of the world) that acts as a virtuality in which the differentiation is played out: a Being that they assume.

The important point to realise is that it is on the virtual plane that unification is to be sought. The 'whole' is 'pure virtuality'. Moreover, differentiation is only an actualization to the extent that it presupposes a unity, which is the primordial virtual totality that differentiates itself according to lines of divergence but which still subsists in its unity and totality in each line. ibid p.67

For me this is where Being gets interesting: being virtual. For a virtuality always has a technics, the coding and decoding mechanisms of intelligence. As Keith indicates, a technology is the solution to indeterminacy, a virtuality that operates in parallel to real time. At this point technology, ethics, philosophy and metaphysics conjoin. And most importantly for me, creativity is shown to be underpinned with technology.

The next question is this: to what extent is this virtuality contained within and maintainable by an organism, an internally differentiating germ? And to what extent is it always reliant upon a third term, an externally constituted and relatively autonomous viral plane cutting transversally across? Both are true to an extent in different specific situations. Here Deleuze discovers an ethology of such types of differentiation: abstract machines. From an ethics to an ethology.

And I will continue reading Germinal Life.




September 15, 2004

Bergson's intuition and reflection in learning

…negative freedom is the result of manufactured social prejudices where, through social institutions, such as education and language, we become enslaved by 'order-words' that identify for us ready-made problems which we are forced to solve. This is not 'life', and it is not the way life itself has 'creatively' evolved. Therefore, true freedom, which can only be a positive freedom, lies in the power to decide through hesitation and indeterminacy and to constitute problems themselves.

Ansell Pearson, Germinal Life, Routledge 1999, p.23

This 'experimental and ethical pedagogy' (ibid, p.14) employs the Bergsonian method of intuition, which involves a reflection on the difference manifest in creative thought. When one realises that a currently held concept simply could not have existed nor could have been analytically deduced at a previous time in a previous state, one gets a sense of time as pure difference, despatialized. That feeling is creative, and the philosophical method that draws people into this reflection is Bergson's intuition. Only once the reliance on ready-made problems is abandoned can creativity occur.

The word 'implication' has a special meaning in this. Imagine reality as a large sheet of fabric. The fabric is folded to present you with one aspect, which you may grasp at. The fold (French – pli) is an aspect. You struggle to hold onto that fold, and find that you can only do so by holding onto other folds that follow on to it. As you try to grasp other folds, to unfold the folds, to follow the im-pli-cations, your actions on the further folds cause the first fold to be pulled and distorted in your grip. Out of this feedback loop the specific problem of this set of folds emerges. At some point you are able to stabilise the folds in relation to each other, and have a solution.

When you grasp the fact that a new problem has emerged, that the positing of the problem is beyond your control, and that you must evolve in relation to the problem in a way that was previously both unthinkable and impossible, you have intuition in Bergson's sense. Intuition is a reflection on learning, a creative learning.

And that's why Deleuze makes such a big issue out of the role of fabric in baroque art (le Pli, Leibniz and the Baroque), the role of the curtain in the paintings of Bacon (Logic of Sensation), and the relationship between canvas, paint and brush-stroke.