One criticism of the Extended Mind hypothesis (EMH) turns on the idea of a distinction between “original” and “derived” content. (Adams and Aizawa offer an important argument against EMH along these lines in The Bounds of Cognition.)
A representation has “derived” content inasmuch as it has the content it has because of the way intentional agents regard it. The paradigmatic example is written text: written text is supposed to have its meaning derivatively, because intentional agents have developed and applied conventions to the text that are meant to determine its meaning.
Meanwhile, a representation has “original” content inasmuch as it has its content independently of the way any agent regards it. Many people suppose that brains make use of internal representations that have their content, not because someone interprets them as having that content, but rather, simply “originally.”
How there can be original content is the subject of some dispute, but most people agree that there must be original content. Haugeland (Having Thought, p. 128) argues for this very succinctly: “Derivative intentionality, like an image in a photocopy, must derive eventually from something that is not similarly derivative; that is, at least some intentionality must be original (nonderivative).”
Dennett, of course, argues that there is no original intentionality. (How can he answer Haugeland’s succinct argument just quoted? That’s a bit beyond the scope here, but briefly, in my view, he doesn’t think there’s any such thing as derived intentionality either. Taking the intentional stance toward something doesn’t bring anything into existence, whether intentionality, representation, or anything else, that wasn’t there before. He’s something of a fictionalist about these things. Yet he thinks the terms can be used in a way that is responsive to real patterns in the world. I’ve said too much, and too much that is confusing, so I’ll just leave off.)
What does the original/derived distinction have to do with the Extended Mind? Take the famous example of Otto from Clark and Chalmers’s original paper. (See the description in the previous blog post.) Otto uses his notebook by writing in it and reading from it. These are paradigmatic examples of the use of representations for their derived content. Yet many people think that it is a mark of the mental that it involves original content. Arguably, a normal person recalls the kinds of things Otto recalls using representations in her brain that have original content, and that’s why what she’d be doing would really be thinking. What Otto is doing, involving derived content in the way that it does, can’t count as real thinking. It is a substitute for thinking, but it’s not the real thing.
One might quibble over whether normal cases of remembering always involve original content in the relevant way. Can’t it be that we human organisms develop and apply natural conventions toward many of our mental states, making them into representations with derived intentionality? Wouldn’t Otto’s notebook activities be relevantly like this?
I don’t want to take that tack, though. Rather, I want to suggest that when Otto reads from his notebook (or when any of us reads naturally in our native language) he’s (we’re) using the written representations for original content, not derived content. I want to suggest, in other words, that our use of written symbols in the normal reading and writing process exploits meaning those symbols have independently of the way intentional agents treat or regard them. How can I say this? It seems blatantly wrong!
Since this post has become longer than I expected, I’ll have to beg off until I make my next post. (Preview, to maintain plausibility: It’s important in the development of scientific and technological concepts that they not turn only on the history of a system, but also on its present causal powers. This runs afoul of certain popular views, especially in the philosophy of biology, but I’ll address that too. Anyway, note that the sense in which written texts have derived intentionality might turn only on these texts’ history.)
In the meantime, the present post perhaps provides fodder for discussion of the place that these kinds of criticisms (ones starting from the original/derived distinction) have in discussions of EMH. Or perhaps it could be interesting to discuss whether Dennett’s view escapes the kind of criticism stated succinctly by Haugeland as quoted above. Adams and Aizawa think not: they think Dennett “nowhere comes to grips” with what they call the problem of the “lone thinker.” (You can probably guess from the context what a problem like that of a “lone thinker” would amount to for a view that says there is no original intentionality. I’ll clarify in comments if need be, but I don’t want to take up any more space here.)
I’ll post again soon.
Adams and Aizawa defend the derived/non-derived content distinction against the type of objections raised by Dennett in Aizawa, K. & Adams, F. (2005), “Defending Non-Derived Content,” Philosophical Psychology, 18, 661–669.
Dennett and many others who object to this distinction seem to blur the difference between being causally derived (there is a causal process that generates the first mind with the first content) and being semantically derived (Mind A, with content, imposes meaning on a symbol in Mind B, so that what the symbol in Mind B means is what Mind A intended it to mean, not what Mind B uses the symbol to represent in its own cognitive economy).
Adams and Aizawa (2001 and 2008) maintain that the first mind on Earth had only non-derived content (we are not theists). We also maintain that to build a mind out of a machine, the machine would need to employ symbols that represent the world and that are meaningful to that system due to its own interactions with its environment.
Remember Searle in the Chinese room. Squiggles and Squoggles were meaningful to Chinese speakers, but not to him. To be meaningful to him, they would have to play an intentional role in his cognitive economy because of what the symbols mean to him, not only because of what they mean to others.
Think of Turing. To build a computer that can think, the symbols have to play a role in the system in virtue of their contents, and the contents have to be meaningful to the system, not only to the programmers.
Think of Dretske’s or Fodor’s causal/informational theories of naturalized semantics. Their theories explain how symbols derive their content: from informational origins (which cannot be false) to symbols with meanings (which can be false, which can misrepresent). And the contents of the symbols are acquired by the systems in virtue of their informational and causal/counterfactual origins. The connections are between the symbols and the things the symbols are about. There are no other “meaners” in the causal chains from which the contents of the symbols could semantically derive.
Adams & Aizawa think that this is a necessary condition of getting a mind up and running. For this reason, and for others that they discuss in their papers and book, they do not believe that Clark and Chalmers or any other true believers in the extended mind have yet produced examples in which what is external to the mind plays the proper role to constitute cognitive processing of symbols with non-derived content or meaning.
Good luck with the blog, Kris.
Yours,
fa
Fred Adams
Linguistics & Cognitive Science
University of Delaware
In challenging our invocation of non-derived content, Kris promises a theory according to which texts have non-derived content. That looks to be an uphill battle, since texts are paradigms of things that are supposed to bear derived content.
But what are the roads not taken? One is offered by David Chalmers (on his blog) and Justin Fisher (in a forthcoming review of The Bounds of Cognition). They propose counterexamples: cases of cognitive states that have derived content. They propose that occurrent beliefs derive their content from dispositional beliefs. Imaginative states derive their content from perceptual states. States of thinking derive their content from perceptual states. Some concepts are acquired with the help of pre-existing concepts.
Coincidentally, Justin Fisher's review of The Bounds of Cognition (a critical notice, really) has just appeared in the August 2008 issue of the Journal of Mind and Behavior:
http://www.umaine.edu/jmb/current.html
Hi Kris,
Your suggestion that the writings in Otto's notebook have original content sounds interesting. I'm looking forward to more posts on this. Just a quick comment on this suggestion. As Andy Clark notes, supposing that the squiggles in the notebook have original/intrinsic content doesn't exclude them from also having derived content:
"...just because the symbols in the notebook happen to look like English words and require some degree of interpretative activity when retrieved and used, that need not rule out the possibility that they also come to satisfy the demands of being, in virtue of their role within the larger system, among the physical vehicles of various forms of intrinsic content." (Supersizing the Mind, p. 90)
(I guess the degree of "interpretative activity" must be quite minimal here though - or must have occurred in the past, when Otto learned to rely on the notebook - at least if a criterion for something to count as part of the cognitive system that is Otto is supposed to be that its use is automatic and the notebook transparent in use...)
The implication of this, of course, is that you might be able to agree with Ken Aizawa that "texts are paradigms of things that are supposed to bear derived content" (which is true of course), but still claim that the texts might bear original content under certain conditions...
Anyway, nice blog!
Cheers
Olle
PhD student, philosophy
University of Edinburgh
Olle said:
ReplyDelete"The implication of this, of course, is that you might be able to agree with Ken Aizawa that "texts are paradigms of things that are supposed to bear derived content" (which is true of course), but still claim that the texts might bear original content under certain conditions..."
Exactly right. This is part of what I want to suggest.
I'll post something further on this in the next couple of days.
As Haugeland understands it, derived content is content something has in virtue of something else that has the same content. If the meaning of the word ‘dog’ is derived from our concepts, then we have concepts that have the same content as ‘dog.’ So how might words/sentences of a natural language have content that is not derived? Pointing out that the content of a word or sentence somehow depends on the contents of concepts/thoughts is not sufficient to show that such content is derived, for it might be that the relevant thought contents are not the same as the contents of the resulting words/sentences.
To illustrate the point, consider how informational theories account for the content of, e.g., HORSE (understood as a symbol in the language of thought). According to these stories, HORSE means horse because of some nomic relation that obtains between the presence of horses and tokenings of HORSE. But as any perceptual psychologist will tell you, such a relation is mediated by internal states that themselves plausibly have content (though perhaps such states do not have ‘conceptual content,’ as this phrase is often understood). To avoid circularity, these mediating states had better not themselves contain HORSE. So if the HORSE/horse connection obtains in this manner, then though the content of HORSE will not be derived from the contents of the mediating representations, that HORSE has the content it does will depend on something else with content.
Now, perhaps something like this holds of terms in a natural language. They have contents, and having the contents they do does depend on the contents of mental representations; it’s just that their contents are not derived from the contents of such representations (again, as Haugeland understands this). Of course, this is compatible with another way of running an argument for original content: there must be some content that does not depend on other content. But it is risky for critics of EMT to use this as an argument against EMT, on pain of it turning out that the paradigm cases of states with non-derived content (e.g., beliefs) themselves have derived content.
I think that Anonymous' last post is excellent. The last bit is very much like a point Justin Fisher raises in his critical notice of The Bounds of Cognition. Unfortunately, it is not clear to me how to avoid/solve this problem.
Incidentally, where does Haugeland propose/defend this condition on derived content?
In footnote 6 to “Intentionality All-Stars,” Haugeland writes: “According to my distinction…intentionality is original just in case it isn’t derivative (namely, from the intentionality of something else with the same content).”
Thanks, Anon.