November 16, 2005
Key features of scientific activity that are commonly thought of as being essentially human are:
- Attending to certain phenomena to be observed, quantified, classified, manipulated;
- Collecting and isolating phenomena into experimental arrangements;
- Postulating connections between phenomena as models of reality;
- Hypothesising the existence of objects not represented by presented phenomena, and seeking them out;
- Prioritising and valuing certain instances of the above over other instances;
- Planning the execution of all of the above.
I don't think it is implausible that these activities could occur without human involvement. Note that an anti-humanist does not need to demonstrate that a single clearly individuated intelligence, a robot scientist, need be responsible for all of these activities. It is just as valid, and perhaps more realistic, to argue that some of them are carried out by widely dispersed agencies (networks, environments, ecologies).
An anti-humanist conception of science is certainly plausible. To prove the point, do we need to point to an activity that includes all of the above processes, but without human intervention? Or perhaps it is enough just to show that each of these processes could be carried out by a non-human agency?
Eric Matthews, in his The Philosophy of Merleau-Ponty, argues that Merleau-Ponty's recasting of the phenomenological reduction is driven by a need to make science, and the objectivist view of the world that it encourages, realise that there is always a human element to it: perception, and the phenomenology thereof. For Merleau-Ponty, perception is always already inextricably tied to a human perspective, with psychological, historical, political, and social specificities. Following this, we could say that Merleau-Ponty argues for a humanist appreciation of science.
This makes some sense. Consider, for example, a science that conforms to the most rational and well-ordered model: Popper's, say. There is still something at the heart of such a science that we could recognize as essentially human: perception, the attentional force that drives its selectivity, and the scientific imagination that pushes its investigative focus beyond the obvious, thus making new conjectures.
Is there an anti-humanist response? It would be necessary to demonstrate that a science without humans could perceive with intelligent and selective attention, going beyond the obvious, forming new conjectures. Could there then be an AI scientist? A science without humans?
Considering the failure of the AI business, one might be inclined to reject, even laugh at, the idea of a robot scientist. But another argument has arisen from the failure of traditional AI. Andy Clark has argued that the kind of cognitive perceptual processes that we are describing may actually happen out in the world, as the operations of an extended cognitive apparatus. This is, in part, a deliberate application of Merleau-Ponty to AI. But its side effect could be to undermine some of the humanism of Merleau-Ponty. The extended cognition thesis could demonstrate that processes such as the scientific imagination are actually much less human than we commonly think.
But we should still be cautious in calling this an anti-humanist position, suggesting an anti-humanist conception of science. Clark seems to believe in an ineliminable human element deriving from some super-subjective level. To see an example of that refrain abandoned, we could turn to a more extreme position: Deleuze and Guattari. In a similar way, D&G see perception and thought as being the property of rhizomes (networks) of machines (processors). The networks and processors of human and scientific thought are multifarious, distributed and in most cases inhuman. Or rather, humans are in fact spread out across these assemblages, which include social and economic organisations that control us more than we control them. This is a genuinely anti-humanist position.
But they go further. There is no recourse to an organizing, driving super-subject. The drive behind perception, attention and innovation, that which can be seen as ineliminable to scientific activity, its desire, is said to be an emergent property of the assemblages of networks and processes: the ghost in the network. In his War in the Age of Intelligent Machines, the Deleuzian Manuel De Landa demonstrates how AI science is the product of non-human forces (the military machine). In fact it is more likely that real working AI will be assembled out in the field from components combined without the conscious design of humans.
Note that this argument goes much further than the sociology of science in that it abandons the model of "rational subjects trapped in and manipulated by social, political and economic circumstance". If there is any rationality, it is out there amongst the machines. A long way from the phenomenology of perception.
If you are interested in this research, then contact me.