Several of Rupert's arguments (and some from Adams and Aizawa as well) rely on the fact that, particularly in an experimental setting, organismically bounded cognitive performance can differ wildly from purported examples of extended cognitive performance. For example, if I am asked to memorize a list of items, I can generally remember up to seven, and over a relatively short period of time I will tend to forget more and more of these items, as long as I am not concentrating on them. But give me a notepad, and I can "remember" a practically endless number of items, and short of losing the notepad or its suffering some kind of unlikely damage, I will probably never "forget" them. Similarly, if I'm asked to read a book, put it away, and then answer questions about its content, I will perform very differently than if I am allowed to keep the book and refer to it while answering. (On some accounts, in the right circumstances, a reader and a book together form an extended cognitive system, with many of the reader's beliefs about the subject matter of the book residing in the book itself.)
Since performance differs so widely between the two cases, the argument goes, skepticism is called for on a few grounds. The main idea, I think, is that the wide difference in performance suggests a difference in natural kind. The two systems don't behave similarly under relevantly similar circumstances, so they're not the same kind of thing; they don't give similar results under similar experimental conditions, so the science that performs these experiments shouldn't treat them as being of a kind.
In a JoP article, if I recall correctly, Rupert even puts it roughly this way: there simply is no interesting cognitive science experiment we can do on a person's memory if we allow his written notes to count as "memory."
I agree (or, if I'm remembering my Rupert wrong, then I posit) that no particularly interesting cogsci experiment could be done on such a subject. But importantly, there's a distinction between a cogsci experiment and an interesting cogsci experiment. If I were to set up, right now, a study measuring how long it takes people to forget an item after memorizing it as part of a list of ten items, this also wouldn't be very interesting--many such experiments have already been done, and the results are fairly well established. Nevertheless, its being uninteresting doesn't make it fail to be an experiment studying a subject in cognitive science. On the contrary, it's a paradigmatic example of such a study.
Similarly, I'd argue, an experiment on subjects including their notebooks, allowing their notebooks to be counted as part of their "memory," may not be interesting--because we all know exactly what would happen beforehand, for example--but that doesn't mean it's not a study of cognitive systems.
But what about the wide difference in performance mentioned above? The relevance of this is unclear to me. Individuals do differ in their cognitive abilities, after all. Some people can remember more than ten items. A few rare individuals--not all of them with disorders such as autism!--can remember, apparently, a practically endless number of items in a list. Do studies of these subjects not count as studies of cognitive agents? Of course not. They're cognitive, despite performing very differently than most other cognitive agents we've studied.
It seems to me that cognition is a problem that different systems solve in different ways. Most human organisms solve it in a way that brings along with it limitations like "only seven items" and so on. Some human organisms seem to solve it in some other ways. Other organisms solve it in still other ways. And the use of a notebook to record memories is yet another way to solve the cognition problem.
It is valuable for many practical and scientific reasons to study the ways human organisms tend to solve the cognition problem when denied the use of external resources. This tells us something about the general class of "brain-based" or "brain-bound" cognition methods. But this doesn't mean that's the only kind of cognition there is, of course!
But it should be asked: what's the use of a general category of "cognition" if the interesting scientific work gets done only on particular means of cognition? Aren't the various means the real kinds here, and isn't generalized "cognition" itself something of a red herring?
One suggestion here might be that it's too easy to be misled by examples like the notebook used to memorize lists. That's a trivial task. But more complex tasks of various sorts, carried out with the aid of a notebook, may yield more interesting correlations between important variables. And it doesn't seem implausible, at least, to think that, as we vary the complexity of tasks incrementally, along with the amount and kind of external resources available, then in some range of complexity we'll find an interesting continuation of patterns that begin at the point of no resource availability. (In other words, we might find a function that, using few variables, aptly describes the subject's performance continuously all the way from a state of zero resource availability to a state of great resource availability, given sufficient task complexity.)
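To make the shape of that suggestion concrete, here is a purely illustrative sketch. Nothing in it comes from any real experiment: the function, its form, and its constants are all invented for the example. The point is only what such a "few variables" model would look like--one smooth curve in which zero resource availability (brain-bound memory) sits on the same continuum as notebook-assisted memory, with no break at the skin.

```python
import math

# Purely hypothetical model: recall performance as one continuous
# function of task complexity and external-resource availability.
# The functional form and constants are invented for illustration only.

def recall_performance(complexity: float, resources: float) -> float:
    """Fraction of items recalled, between 0 and 1.

    complexity: task difficulty, > 0 (e.g., list length relative to
                the familiar "about seven items" capacity)
    resources:  external-resource availability; 0 means none
                (brain-bound), larger means more notebook-like support
    """
    # Brain-bound capacity degrades as complexity grows; external
    # resources offset that degradation smoothly, rather than kicking
    # in as a discontinuous, different-in-kind process.
    effective_capacity = 1.0 + resources
    return 1.0 - math.exp(-effective_capacity / complexity)

# The continuity claim sketched in the text: the same function covers
# the no-notebook and with-notebook cases, and more resources help.
no_notebook = recall_performance(complexity=2.0, resources=0.0)
with_notebook = recall_performance(complexity=2.0, resources=5.0)
assert 0.0 < no_notebook < with_notebook < 1.0
```

If something like this fit the data across the whole range of resource availability, that would be (weak, defeasible) evidence for treating the extended and non-extended cases as one kind; a sharp discontinuity at zero resources would cut the other way.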
That's just a suggestion--maybe the beginning of a suggestion of one way to experimentally confirm or disconfirm extended cognition, at least of certain types or given certain accounts.
Another suggestion is that even if extended cognition turns out not to be a good scientific kind, it may well be a conception that needs to be adhered to for humanistic, political, or moral purposes. Perhaps what Otto is doing is quite discontinuous, empirically speaking, from what any of us are doing when we remember things. And yet perhaps interfering with Otto's notebook is best conceived of, for moral purposes, as interfering with his personal coherence in the same way that interfering with my memory would be.
Can good moral categories really fail to track the scientific ones in this way? I'm not sure.
Monday, January 25, 2010
Friday, January 22, 2010
Cognition and Digestion
Digestion is a way that an organism turns part of the world external to itself into a part of the world internal to itself. In the case of human beings, this process involves bringing the external bit into contact with the organism, and acting on the external bit in various ways that make it apt for absorption into the organism. There's no clear boundary between the time at which the external bit is fully external and the time at which it is fully absorbed. The boundary is fuzzy--but there are clear areas on either side of the boundary. The food on my plate is clearly external to me. The protein molecules doing their work inside my cells are clearly internal.
That's how humans digest. Amoebas digest a little differently, moving a part of themselves closer to the external object rather than bringing the external object closer to themselves.
Both approaches are ways to rearrange the environment so that external objects can be appropriated, their parts being made parts of the organism.
If beehives can be considered organisms in their own right (a notion taken seriously by a few biologists, google "superorganism") then they digest in yet a third way. Rather than reaching parts of themselves (still connected to themselves) out toward external objects, and rather than bringing external objects wholesale into themselves, they send out pieces of themselves as "agents" to digest the external object where it stands. The agents then bring the digested product back to the organism itself--the beehive.
I visualize extended cognition as working something like this third mode of digestion.
(For "organism," read "cognitive system." Right now I think the significance of Extended Cognition is that the boundary of the cognitive system is not the same as the boundary of the organism. In other words, "organism" is a good natural kind, "cognitive system" is a good natural kind, and instances of the latter are not always inside or coextensive with instances of the former. They sometimes merely overlap organisms. Extended cognitive systems would be examples. And they could potentially exist overlapping no organism at all. Some form of artificial intelligence instantiated purely on electronic hardware would count as an example of that.)
The analogy isn't exact. My cognitive system doesn't send disconnected "agents" out to the environment to alter it in a way making it part of my cognitive structure. But neither, like an amoeba, does my cognitive system "envelop" the external. Rather, it sends out agents that remain connected to me--for example, my hands as they manipulate a pencil or a set of blocks. And the "manipulation" performed by my "agents" to make the external part of my cognitive system can be pretty subtle--having perhaps no physical effect on the external object at all, but altering the norms that govern that physical object. (Which in turn necessitates, on my account of norms, which I don't have space to write about here, that the external object now has different dispositions in virtue of its relations to its environment, so it's not as though there's no physical cash-out to the account of manipulation I'm describing here.) A structure I'm using as an external memory store may never be physically contacted by me, but my cognitive acts in relation to that structure alter the norms governing that structure--i.e., make it the case that the structure should accurately reflect the state of something else, perhaps--in a way that incorporates the structure into my own cognitive structure.
Like digestion, cognition can be fuzzy. Food in my mouth is sort of outside me, sort of inside me. The structure I just mentioned, similarly, is "sort of" part of my cognitive structure, "sort of" not. Or, put differently, it's only weakly part of my cognitive structure--not very robustly incorporated into me at all. But it's somewhere on the line, not all the way at the "not cognitive" end of the scale. It's moved over a little bit, on account of my cognitive manipulations of the environment designed to make that structure part of my cognitive structure and, I think, part of myself. Sort of! (Fuzzily, just a little.)
Otto's notebook is much further into the cognitive end of things. An implant on my skull directly connected to my brain is even further. And so on.
That's what I think right now. Cognition (more properly, "cognitive incorporation") and digestion are both ways of making my environment part of myself. Digestion is a way of making my environment part of my metabolism, and chemical properties are probably the ones most relevant to this process. Cognitive incorporation is a way of making my environment part of my thinking (my information-guided, goal-directed, norm-governed, purpose-unifying structure), and informational/representational properties are probably the ones most relevant to this process. Things can be more or less digested, and things can be more or less cognitively incorporated. Things can be digested at a distance, and things can be cognitively incorporated at a distance.
Not an argument of course! I'm just describing and analogizing.
I wrote the above in response to something in Rupert's new book, but I've strayed a bit from the point I was responding to, and this is already long enough, so I'll say something more substantive and more directly about Rupert in a future post.