November 15, 2005

Research Notes: porous minds and cracked-up agents

Follow-up to Research Notes: how radical can extended cognition be? from Transversality - Robert O'Toole

I just found this in a text file whilst tidying up my laptop. It doesn't seem to have been published yet. There may be a reason why I abandoned it. But here it is anyway...

Section 10.6 of Andy Clark's book Being There is headed by the question "Where does the mind stop and the world begin?" For philosophy this is a very significant question. For cognitive science and AI, much less so (it's just a design issue). Why not just adopt the latter position? Would that be such a scandal?

Clark's answer to the question is both pragmatic and realistic, whilst promoting a proportionate, specific and sufficiently detailed investigation of real minds and environments. This is quite a contrast to the vague generalizations of some phenomenological models.

For someone with an AI/cog-sci background (one that I in part share), the identification of a boundary (even a porous one) should only be significant when it could contribute to our understanding of the capabilities, limitations and development of real cognitive processes. Our boundary-marking conditions would have to be ones that really make a difference to the cognitive process itself. For example, one interesting boundary-marking condition would be:

how replaceable or otherwise is a specific (internal or external) cognitive artefact? Could the individual agent simply swap the artefact with another similar or even totally different artefact? And to what extent would this change the character of the agent?

A related, equally important, but different question is:

how dependent is the development of an agent upon a specific artefact, such that its absence makes a significant difference to that agent?
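
To make the first of these boundary tests concrete: a minimal sketch, in Python, of what "swap the artefact and see whether the character of the agent changes" might look like. Everything here (the Agent class, the two memory artefacts, the consistency measure) is hypothetical, invented purely for illustration:

    import random

    class ListMemory:
        """One candidate artefact: a reliable associative store."""
        def __init__(self):
            self.items = {}
        def store(self, stimulus, response):
            self.items.setdefault(stimulus, []).append(response)
        def recall(self, stimulus):
            return self.items.get(stimulus, [])

    class LossyMemory(ListMemory):
        """A similar but different artefact: recall fails half the time."""
        def recall(self, stimulus):
            return super().recall(stimulus) if random.random() > 0.5 else []

    class Agent:
        """Toy agent whose 'character' is the consistency of its habits."""
        def __init__(self, memory):
            self.memory = memory  # the internal-or-external cognitive artefact
        def act(self, stimulus):
            past = self.memory.recall(stimulus)
            # repeat the commonest past response, otherwise improvise
            choice = max(set(past), key=past.count) if past else random.choice("ab")
            self.memory.store(stimulus, choice)
            return choice

    def consistency(agent, trials=3000):
        """How often the agent gives its own most common answer to 'x'."""
        responses = [agent.act("x") for _ in range(trials)]
        return max(responses.count("a"), responses.count("b")) / trials

    random.seed(0)
    print(consistency(Agent(ListMemory())))   # ~1.0: habits lock in at once
    print(consistency(Agent(LossyMemory())))  # ~0.75: habits stay labile

The point is only that the condition is checkable: swapping one artefact for a superficially similar one measurably changes how stable the agent's habits are, whilst leaving everything else about the agent untouched.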

This second question gets close to our understanding of what an agent actually is: it has a relatively consistent and pervasive character, existing over time and to some extent surviving changes to the environment in which it exists, whilst at the same time its development and continuation depend upon the existence of key artefacts within that environment. It is, as Clark says, closely coupled. Furthermore, the agent tends to influence its environment so as to promote the continuation of these characteristics, so that an agent tends to be associated with an environment (reverse evolution), whilst the environment tends to promote certain characteristics in the agent and in classes of agents (evolution).

This, to readers of recent dynamical systems theory (and the likes of Deleuze and Guattari), is quite an obvious model: 1) there are arrangements of mechanisms that interact with and consume other mechanisms through processes of ordering, selection, managed preservation and controlled degradation; 2) these mechanisms have selective principles (the character traits) that are repetitively applied over time; 3) some of these repetitive mechanisms reproduce the conditions of their own production and reproduction; 4) and fewer still reproduce the conditions that make their own reproduction more likely, more desired by the environment in which they exist. Or in short: they are desiring machines.
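
For anyone wanting to see the four steps moving, they are simple enough to simulate. A minimal sketch, assuming nothing beyond the paragraph above (the trait values, fitness weights and update rates are all invented for illustration):

    import random

    random.seed(1)
    population = [random.random() for _ in range(100)]  # trait values in [0, 1]
    environment = 0.9  # the trait value the environment currently favours

    for generation in range(200):
        # (1)-(2): a selective principle applied repetitively -- mechanisms
        # whose trait matches the environment reproduce more reliably
        weights = [1.0 - abs(trait - environment) for trait in population]
        parents = random.choices(population, weights=weights, k=len(population))
        # reproduction is noisy copying, clipped back into [0, 1]
        population = [min(1.0, max(0.0, t + random.gauss(0, 0.02))) for t in parents]
        # (3)-(4): the surviving mechanisms drag the environment towards
        # themselves, making their own reproduction more likely next round
        mean_trait = sum(population) / len(population)
        environment += 0.05 * (mean_trait - environment)

    print(f"environment: {environment:.2f}   mean trait: {mean_trait:.2f}")

Run it and the environment and the population converge on each other: selection by the environment (evolution) and the reshaping of the environment by its inhabitants (reverse evolution) in one coupled loop.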

I would say that this is stating the obvious. Certainly there is a degree of convergence towards such a model in evolutionary biology, and I'm sure there will be a similar convergence in AI development. So why is it likely that philosophers will still consider it controversial? Why does it seem OK in biology, but radical when applied, by the psychotherapist Félix Guattari for example, to the problem of fixing broken minds and bodies?

Thinking is selecting, is doing.

