Saturday, February 6, 2010

I recently caught a mention of Andy Clark and extended cognition in New Scientist. I couldn't quite make out what was going on in the piece, though. It seems to say that lifeless drops of oil can appear to "solve" a maze if the maze is treated with a pair of chemicals in a gradient. The droplets apparently "correct" themselves when they come to a "wrong turn" and keep at it until they exit.

Does anyone know more about this experiment than that? I'm curious about the details of the gradient. Is the gradient configured so as to practically encode the solution to the maze? (I.e., higher concentration at points deeper in "wrong-turn territory," lower at points closer to the maze's solution.) Or is the gradient more coarse-grained, higher and lower simply according to absolute distance from the exit? If the latter, then how does a droplet "know" to turn back when it hits a "wrong turn"? If the former, how is this particularly interesting from an ex-cog perspective? (It must be, since Andy Clark is talking about it in that connection!) If the gradient is fine-grained enough to simply lay down a path to the end of the maze, why would we say there's any sense in which the droplet itself "solves" the maze? If anything solves the maze, it's the droplet-gradient system. Myself, for my illustrations of extended cognition, I'd prefer the cognition (or cognition-analogue, as the case may be) to "belong" properly to a particular part of the cognizing system, rather than simply to the system as a whole. For if it belongs to the system as a whole, then it seems we simply have a cognitive system, not an extended cognitive system.
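For what it's worth, the difference between the two gradients I'm imagining can be made concrete with a toy simulation. This is purely illustrative and not the actual chemistry: the maze, the greedy rule, and both gradient fields here are my own assumptions. A "droplet" that always moves to the lowest-concentration neighbor exits when the gradient encodes walking distance through the maze (i.e., it encodes the solution), but gets stuck in a dead end when the gradient only tracks distance to the exit as the crow flies:

```python
# Toy illustration only -- not the actual oil-droplet chemistry. A "droplet"
# greedily moves to whichever open neighbor has the lowest "concentration."
from collections import deque

# 0 = open cell, 1 = wall. Exit at top right; a dead-end corridor along the
# top row comes straight-line close to the exit but is walled off from it.
MAZE = [
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
EXIT = (0, 4)

def neighbors(r, c):
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 5 and 0 <= nc < 5 and MAZE[nr][nc] == 0:
            yield nr, nc

def path_gradient():
    # BFS from the exit: concentration = walking distance through the maze.
    # This gradient effectively encodes the maze's solution.
    dist, queue = {EXIT: 0}, deque([EXIT])
    while queue:
        cell = queue.popleft()
        for n in neighbors(*cell):
            if n not in dist:
                dist[n] = dist[cell] + 1
                queue.append(n)
    return dist

def crow_flies_gradient():
    # Coarse gradient: distance to the exit ignoring walls
    # (Manhattan distance, to keep the numbers whole).
    return {(r, c): abs(r - EXIT[0]) + abs(c - EXIT[1])
            for r in range(5) for c in range(5) if MAZE[r][c] == 0}

def follow(gradient, start, max_steps=30):
    pos, path = start, [start]
    for _ in range(max_steps):
        if pos == EXIT:
            break
        best = min(neighbors(*pos), key=lambda n: gradient[n])
        if gradient[best] >= gradient[pos]:
            break  # local minimum: a dead end the gradient cannot escape
        pos = best
        path.append(pos)
    return path

print(follow(path_gradient(), (4, 0))[-1])        # reaches the exit (0, 4)
print(follow(crow_flies_gradient(), (4, 0))[-1])  # stuck in the dead end
```

On the coarse gradient the droplet here has no way to "know" it should back out of the dead end; it would need something extra (the chemical "memory" the article hints at, perhaps). On the path gradient it exits every time, but only because the gradient already contains the answer, which is what makes me want to credit the droplet-gradient system rather than the droplet.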

In other words, if writing a dissertation is an act of extended cognition, I'd prefer not to say that there's a cognitive agent consisting of the writer plus his materials plus parts of the internet and so on. Rather, I'd prefer to say that there's a cognitive system consisting of all of that, which belongs to a cognitive agent that is only part of that system. ("Belongs to" in the sense that the things happening in the system are mental activities of the agent.)

Now loom large problems of demarcation, and of the distinction between (and the drawing of desiderata concerning) extended and merely embedded cognition. Must think!

A Paper At A Conference

I'll be presenting a paper titled "Extended Cognition and Personal Identity" at the 2010 meeting of the Southern Society for Philosophy and Psychology. The paper is more about personal identity than extended cognition--it takes ex-cog for granted, then asks what this means for identity--but still, it seems relevant to the blog. I'll let you guys know how it goes.