Monday, March 29, 2010

Mark of the Cognitive Posts

Last year I blogged through some of the principal ideas that went into a paper on Noë's Enactivism.  (Link to the paper below.)  Thanks to Gennady Erlikhman and Anthony Morse for helpful comments via the blog and drafts.  (Thanks as well to Nivedita Gangopadhyay and Larry Shapiro for comments on a draft.)

Now I'm sort of blogging my way through ideas regarding the mark of the cognitive here.  I'm not yet sure there is a paper here though.  Feedback welcome.


Saturday, March 27, 2010

Author Meets Critics on Rupert's "Cognitive Systems and the Extended Mind"

102nd Meeting of the Southern Society for Philosophy and Psychology
April 15-17, 2010
Westin Peachtree
Atlanta, GA

Saturday Afternoon                  International D
2:30 p.m. - 4:45 p.m.           
PHILOSOPHY SESSION XIX

Book Symposium on Rob Rupert’s Cognitive Systems and the Extended Mind

Chair: Kristofer Rhodes, University of California, Irvine

2:30    Colin Klein, University of Illinois, Chicago
Integration, Invariance, and Rupert's System-Based Approach to Demarcation
   
    Commentator: Rob Rupert, University of Colorado

3:30    Ken Aizawa, Centenary College of Louisiana
    Rupert’s Extended Cognition
   
    Commentator: Rob Rupert, University of Colorado

Saturday, February 6, 2010

I recently caught a mention of Andy Clark and extended cognition in New Scientist. I couldn't quite make out what was going on in the piece, though. It seems to say that lifeless drops of oil can appear to "solve" a maze if the maze is treated with a pair of chemicals in a gradient. The droplets apparently "correct" themselves when they come to a "wrong turn" and keep at it until they exit.

Does anyone know any more than that about this experiment? I'm curious about the details of the gradient. Is the gradient configured in such a way as to practically encode the solution to the maze? (I.e., a higher gradient at points deeper in "wrong turn territory," lower gradients at points closer to the maze's solution.) Or is the gradient more coarse-grained, higher and lower simply according to absolute distance from the exit? If the latter, then how does a droplet "know" to turn back when it hits a "wrong turn"? If the former, how is this particularly interesting from an ex-cog perspective? (It must be, since Andy Clark is talking about it in that connection!) If the gradient is fine-grained enough to simply be a laid-down path to the end of the maze, why would we say there's any sense in which the droplet itself "solves" the maze? If anything solves the maze, it's the droplet-gradient system. (There's a toy sketch of this fine-grained reading at the end of this post.) Myself, for my illustrations of extended cognition, I'd prefer the cognition (or cognition-analogue, as the case may be) to "belong" properly to a particular part of the cognizing system, rather than simply to the system as a whole. For if it belongs to the system as a whole, then it seems we simply have a cognitive system, not an extended cognitive system.

In other words, if writing a dissertation is an act of extended cognition, I'd prefer not to say that there's a cognitive agent consisting of the writer plus his materials plus parts of the internet and so on. Rather, I'd prefer to say that there's a cognitive system consisting of all of that, which belongs to a cognitive agent that is only part of that system. ("Belongs to" in the sense that the things happening in the system are mental activities of the agent.)

Now loom large problems of demarcation, and of the distinction between (and the drawing of desiderata concerning) extended and merely embedded cognition. Must think!
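Here's the toy sketch I promised above, of the fine-grained reading. This is my own illustration in Python, not a model of the actual New Scientist experiment: I'm assuming the "gradient" amounts to a distance-to-exit field laid over the maze, and that the droplet does nothing but drift to whichever neighboring cell has the lowest value. The maze, the names, and the greedy rule are all made up for the sake of the point.

```python
from collections import deque

# Toy maze: '#' = wall, 'S' = droplet's start, 'E' = exit. The "chemical
# gradient" is modeled as distance-to-exit, computed by flooding outward
# from the exit (BFS) -- the fine-grained case, where the gradient
# practically encodes the maze's solution.
MAZE = [
    "#########",
    "#S..#...#",
    "##.##.#.#",
    "#..#..#.#",
    "#.##.##.#",
    "#....#.E#",
    "#########",
]

def find(ch):
    for y, row in enumerate(MAZE):
        if ch in row:
            return (y, row.index(ch))

def neighbors(y, x):
    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if MAZE[y + dy][x + dx] != '#':
            yield (y + dy, x + dx)

# "Lay down" the gradient by flooding outward from the exit.
exit_pos = find('E')
grad = {exit_pos: 0}
queue = deque([exit_pos])
while queue:
    cell = queue.popleft()
    for n in neighbors(*cell):
        if n not in grad:
            grad[n] = grad[cell] + 1
            queue.append(n)

# The "droplet": pure greedy descent on the gradient -- no memory,
# no lookahead, no backtracking logic of its own.
pos = find('S')
path = [pos]
while grad[pos] > 0:
    pos = min(neighbors(*pos), key=grad.get)
    path.append(pos)

print(path)  # reaches 'E' every time, because grad already encodes the answer
```

On this reading the droplet is just a marker drifting through an answer the chemistry has already computed, which is exactly why I'd say the "solving" belongs to the droplet-gradient system rather than to the droplet alone.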

A Paper At A Conference

I'll be presenting a paper titled "Extended Cognition and Personal Identity" at the 2010 meeting of the Southern Society for Philosophy and Psychology. The paper is more about personal identity than extended cognition--it takes ex-cog for granted, then asks what this means for identity--but still, it seems relevant to the blog. I'll let you guys know how it goes.

Monday, January 25, 2010

That wouldn't be a very interesting experiment

Several of Rupert's arguments (and some from Adams and Aizawa as well) rely on the fact that, particularly in an experimental setting, organismically bounded cognitive performance can differ wildly from purported examples of extended cognitive performance. For example, if I am asked to memorize a list of items, I can generally remember up to seven, and over a relatively short period of time I will tend to forget more and more of these items, as long as I am not concentrating on them. But give me a notepad, and I can "remember" a practically endless number of items, and short of losing the notepad or its suffering some kind of unlikely damage, I will probably never "forget" them. Similarly, if I'm asked to read a book, put it away, and then answer questions about its content, I will perform very differently than if I am allowed to keep the book and refer to it while answering. (On some accounts, in the right circumstances, a reader and a book together form an extended cognitive system, with many of the reader's beliefs about the subject matter of the book residing in the book itself.)

Since performance differs so widely between the two cases, the argument goes, skepticism is called for on a few grounds. The main idea, I think, is that the wide difference in performance suggests we have a difference in natural kind. They don't behave similarly under relevantly similar circumstances, so they're not the same kind of thing. They don't give similar results under similar experimental conditions, so the science which performs these experiments shouldn't treat them as being of a kind.

In a JoP article, if I recall correctly, Rupert even puts it this way. There simply is no interesting cognitive science experiment we can do on a person's memory if we allow for his written notes to count as "memory."

I agree (or, if I'm remembering my Rupert wrong, then I posit) that no particularly interesting cogsci experiment could be done on such a subject. But importantly, there's a distinction between a cogsci experiment and an interesting cogsci experiment. If I were to set up, right now, a study seeing how long it takes people to forget an item after memorizing it as part of a list of ten items, this also wouldn't be very interesting--for many such experiments have already been done and the results are fairly well established. Nevertheless, the fact that it wouldn't be interesting doesn't make it fail to be an experiment studying a subject in cognitive science. On the contrary, it's a paradigmatic example of such a study.

Similarly, I'd argue, an experiment on subjects including their notebooks, allowing their notebooks to be counted as part of their "memory," may not be interesting--because we all know exactly what would happen beforehand, for example--but that doesn't mean it's not a study of cognitive systems.

But what about the wide difference in performance mentioned above? The relevance of this is unclear to me. Individuals do differ in their cognitive abilities, after all. Some people can remember more than ten items. A few rare individuals--not all of them with disorders such as autism!--can apparently remember a practically endless number of items in a list. Do studies of these subjects not count as studies of cognitive agents? Of course not. They're cognitive, despite the fact that they perform very differently from most other cognitive agents we've studied.

It seems to me that cognition is a problem that different systems solve in different ways. Most human organisms solve it in a way that brings along with it limitations like "only seven items" and so on. Some human organisms seem to solve it in some other ways. Other organisms solve it in still other ways. And the use of a notebook to record memories is yet another way to solve the cognition problem.

It is valuable for many practical and scientific reasons to study the ways human organisms tend to solve the cognition problem when denied the use of external resources. This tells us something about the general class of "brain-based" or "brain-bound" cognition methods. But this doesn't mean that's the only kind of cognition there is, of course!

But it should be asked, what's the use of a general category of "cognition" if the interesting scientific work gets done about only particular means of cognition? Aren't the various means the real kinds here, and isn't generalized "cognition" itself something of a red herring?

One suggestion here might be that it's too easy to be misled by examples like the notebook used to memorize lists. That's a trivial task. But more complex tasks of various sorts, carried out with the aid of a notebook, may yield more interesting correlations between important variables. And it doesn't seem implausible, at least, to think that, as we vary the complexity of tasks incrementally, along with the amount and kind of external resources available, in some range of complexity we'll find an interesting continuation of patterns that begin at the point of no resource availability. (In other words, we might find a function that aptly, using few variables, describes the subject's performance continuously all the way from a state of zero resource availability to a state of great resource availability, given sufficient task complexity. There's a toy sketch of what such a fit might look like below.)
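To make that concrete: here's a sketch of what fitting such a function might look like. Everything in it is invented for illustration--the logistic functional form, the parameter names, and every data point are mine, not drawn from Rupert or from any actual study. The idea is just that if one smooth, few-variable surface fits performance across both the no-notebook and with-notebook conditions, that's the kind of continuity I'm gesturing at.

```python
import numpy as np
from scipy.optimize import curve_fit

def performance(X, a, b, c):
    """Hypothetical 3-parameter model: recall accuracy as a logistic
    function of task complexity k and external-resource availability r."""
    k, r = X
    return 1.0 / (1.0 + np.exp(a * k - b * r - c))

# Made-up data points: (complexity, resource availability) -> accuracy.
# r = 0 is the "no notebook" condition, r = 1 the "with notebook" one.
k   = np.array([1, 3, 5, 7, 9, 1, 3, 5, 7, 9], dtype=float)
r   = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1], dtype=float)
acc = np.array([.99, .95, .80, .50, .20, .99, .98, .95, .90, .80])

params, _ = curve_fit(performance, (k, r), acc, p0=[1.0, 1.0, 1.0])
print(params)
# If a single smooth surface like this fits both conditions about as well
# as separate per-condition fits do, that's (weak) evidence of continuity
# across the organism/resource boundary; a sharp break would tell against it.
```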

That's just a suggestion--maybe the beginning of a suggestion of one way to experimentally confirm or disconfirm extended cognition, at least of certain types or given certain accounts.

Another suggestion is that even if extended cognition turns out not to be a good scientific kind, it may well be a conception that needs to be adhered to for humanistic, political, or moral purposes. Perhaps what Otto is doing is quite discontinuous, empirically speaking, from what any of us are doing when we remember things. And yet perhaps interfering with Otto's notebook is best conceived of, for moral purposes, as interfering with his personal coherence in the same way that interfering with my memory would be.

Can good moral categories really fail to track the scientific ones in this way? I'm not sure.

Friday, January 22, 2010

Cognition and Digestion

Digestion is a way that an organism turns part of the world external to itself into a part of the world internal to itself. In the case of human beings, this process involves bringing the external bit into contact with the organism, and acting on the external bit in various ways that make it apt for absorption into the organism. There's no clear boundary between the time at which the external bit is fully external and the time at which it is fully absorbed. The boundary is fuzzy--but there are clear areas on either side of the boundary. The food on my plate is clearly external to me. The protein molecules doing their work inside my cells are clearly internal.

That's how humans digest. Amoebas digest a little differently, moving a part of themselves closer to the external object rather than bringing the external object closer to themselves.

Both approaches are ways to rearrange the environment so that external objects can be appropriated, their parts being made parts of the organism.

If beehives can be considered organisms in their own right (a notion taken seriously by a few biologists, google "superorganism") then they digest in yet a third way. Rather than reaching parts of themselves (still connected to themselves) out toward external objects, and rather than bringing external objects wholesale into themselves, they send out pieces of themselves as "agents" to digest the external object where it stands. The agents then bring the digested product back to the organism itself--the beehive.

I visualize extended cognition as working something like this third mode of digestion.

(For "organism," read "cognitive system." Right now I think the significance of Extended Cognition is that the boundary of the cognitive system is not the same as the boundary of the organism. In other words, "organism" is a good natural kind, "cognitive system" is a good natural kind, and instances of the latter are not always inside or coextensive with the latter. They sometimes merely overlap organisms. Extended cognitive systems would be examples. And they could potentiall exist overlapping no organism at all. Some form of artificial intelligence instantiated purely on electronic hardware would count as an example of that.)

The analogy isn't exact. My cognitive system doesn't send disconnected "agents" out to the environment to alter it in a way that makes it part of my cognitive structure. But neither, like an amoeba, does my cognitive system "envelop" the external. Rather, it sends out agents that remain connected to me--for example, my hands as they manipulate a pencil or a set of blocks. And the "manipulation" performed by my "agents" to make the external part of my cognitive system can be pretty subtle--having perhaps no physical effect on the external object at all but altering the norms that govern that physical object. (Which in turn necessitates, on my account of norms, which I don't have space to write about here, that the external object now has different dispositions in virtue of its relations to its environment, so it's not as though there's no physical cash-out to the account of manipulation I'm describing here.) A structure I'm using as an external memory store may never be physically contacted by me, but my cognitive acts in relation to that structure alter the norms governing it--i.e., make it the case that the structure should accurately reflect the state of something else, perhaps--in a way that incorporates the structure into my own cognitive structure.

Like digestion, cognition can be fuzzy. Food in my mouth is sort of outside me, sort of inside me. The structure I just mentioned, similarly, is "sort of" part of my cognitive structure, "sort of" not. Or, put differently, it's only weakly part of my cognitive structure--not very robustly incorporated into me at all. But it's somewhere on the line, not all the way at the "not cognitive" end of the scale. It's moved over a little bit, on account of my cognitive manipulations of the environment designed to make that structure part of my cognitive structure and, I think, part of myself. Sort of! (Fuzzily, just a little.)

Otto's notebook is much further into the cognitive end of things. An implant on my skull directly connected to my brain is even further. And so on.

That's what I think right now. Cognition (more properly, "cognitive incorporation") and digestion are both ways of making my environment part of myself. Digestion is a way of making my environment part of my metabolism, and chemical properties are probably the ones most relevant to this process. Cognitive incorporation is a way of making my environment part of my thinking (my information-guided, goal-directed, norm-governed, purpose-unifying structure), and informational/representational properties are probably the ones most relevant to this process. Things can be more or less digested, and things can be more or less cognitively incorporated. Things can be digested at a distance, and things can be cognitively incorporated at a distance.

Not an argument of course! I'm just describing and analogizing.

I wrote the above in response to something in Rupert's new book, but I've strayed a bit from the point I was responding to, and this is already long enough, so I'll say something more substantive, and more directly about Rupert's book, in a future post.