Wednesday, May 13, 2009

Orig/Deriv pt. 2: When I read, must the words I’m reading have meaning for me only derivedly?

When I read written text, I am using the representations in the text for their content. Could this content be original, as opposed to derived?

Different writers have made the distinction between original and derived content in different ways. For our purposes, it will serve to make the following two points about the distinction.

  1. In general, a representation's content is supposed to be derived if it has that content by virtue of intentional agents' handling of the representation. In general, a representation's content is supposed to be original if it's not derived.
  2. Whatever the distinction between original and derived content is, that which marks the distinction can be cast in terms of the present causal powers and dispositions of representations, and not just in terms of the histories of representations.

1), or something near enough, is true, I think, of all the original/derived distinctions that people have made over the years.

What about 2)? At least in the discussion over Extended Cognition, the original/derived distinction seems to be treated as what I'll call a technological distinction. Many people think that not all distinctions in Science need to be castable in terms only of present causal powers. (So, for example, many philosophers think that the distinction between biologically functional and nonfunctional objects rests not on their present causal powers, but on their [for example, natural-selectional] histories. Two things could be causally identical, in causally identical environments, yet one be a heart, and the other fail to be a heart.) But if we are concerned to know what we can do now, what we can build, what we can do with this or that tool, then we are going to be concerned centrally with technological distinctions and not (directly) with scientific ones. I hope I've gestured sufficiently in the direction of "technological distinction" to give you an idea of what I mean by it. For a definition, I think it will do in a pinch to say that a technological distinction just is a distinction between kinds that can't be cast except in terms of present causal powers and dispositions.

As I said, in the discussion over Ex-Cog, it seems to me that the original/derived distinction is a technological one. On this very blog, for example, Adams began his response to my first orig/deriv post by making a claim about the significance of original intentionality for the project of _building_ (my emphasis) a mind out of a machine.

But why have written texts been supposed to be paradigmatically derived in their intentionality and not original? One idea is that they have derived content because they have had their meanings assigned to them by a community of readers. But that is to mark a distinction based on history, not based on present causal powers or dispositions.

Another idea might be to claim that when we read, as we are reading, we are assigning meanings to symbols. But is this so? Possibly not. For though we must obviously represent words as symbols and choose which meanings to apply to them as we are learning to read, once we know how to read, it's not so clear that we are applying meanings to the text anymore. Rather, it may be that by learning how to read, we've made ourselves such that texts now simply trigger meanings, rather than our in any sense assigning those meanings as we read. The assignment happened in the past, as we were learning; but that in itself doesn't make the text's meaning "derived" in the sense relevant to the Ex-Cog discussion, since marking the distinction that way relies only on the history of the habit of assigning that meaning to that symbol.
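To sharpen the trigger/assign contrast a little, here is a loose computational analogy. It is mine, not a claim from the literature, and the toy lexicon and all names in it are hypothetical: "assigning" involves representing the mark as a symbol and explicitly selecting among candidate meanings, while "triggering" is a learned disposition firing directly.

```python
# A loose computational analogy (mine, not the post's) for "assign" versus
# "trigger". The toy lexicon and all names here are hypothetical.

CANDIDATE_MEANINGS = {"bat": ["flying mammal", "club used in cricket"]}
LEARNED_DISPOSITION = {"bat": "flying mammal"}

def assign(mark: str, cue_words: set) -> str:
    """Novice reading: represent the mark *as* a symbol, survey the
    candidate meanings, and explicitly select one in light of context."""
    options = CANDIDATE_MEANINGS[mark]
    for meaning in options:
        if cue_words & set(meaning.split()):
            return meaning          # an explicit act of selection
    return options[0]

def trigger(mark: str) -> str:
    """Fluent reading: a learned disposition simply fires; the mark is
    never handled as a symbol awaiting interpretation."""
    return LEARNED_DISPOSITION[mark]

print(assign("bat", {"cricket"}))   # -> club used in cricket
print(trigger("bat"))               # -> flying mammal
```

The analogy is admittedly imperfect (a lookup is still a mapping), but it locates the difference where the post locates it: in whether there is a present act of selection over represented alternatives.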

If a representation has meaning for an agent without that agent representing the representation as a representation and explicitly applying a meaning to that representation, then, I suspect, that representation has its meaning for that agent originally. And I also suspect that when we read texts in our native language with facility, the symbols in those texts have meaning for us without our representing those representations as representations or our explicitly applying meanings to those representations. I suspect these texts have their meaning for us "automatically" in a way which makes them, for us, original rather than derived meanings.

These posts always turn out longer than I expect. I want to invite discussion of what I've said so far, and in a few days, I'll follow up with reasons that I suspect that when we read written texts, they have their meaning for us without our representing them as representations and applying meanings to them.

Questions that could be discussed about the present post are the following. Am I right to insist that in the discussion about Ex-Cog, the original/derived distinction must be understood in terms of present causal powers? Am I right to suggest that what it means for something to have derived content for an agent in this sense is for the agent to be representing the representation as a representation to which it might apply any of a number of possible meanings? Can my use of terms like "trigger" and "assign" be sharpened in some way in order to make it clear whether I'm onto something or off my rocker? I've set up a dichotomy between representations having their meaning via representation as representation on the one hand, and on the other hand representations having their meaning "automatically" or by a simple "triggering" for an agent. Is this dichotomy valid? Or could there be other ways for representations to have their meaning, ways which would count as constituting a representation as having its content derivedly?

7 comments:

  1. I am not sure that you can assume that the derived/non-derived distinction must be made ahistorically. Moreover, I don't think you can assume that derived content must be understood ahistorically. There have been many discussions of "swampman" and moving back and forth between Earth and Twin-Earth that are concerned with historical conditions.

  2. Ken, thanks for the reply.

I was trying to indicate how I would answer an objection like yours by pointing to the way it seems the original/derived distinction, at least in the discussion over ex-cog, is a "technological" distinction. But it occurs to me that there's an apparent counterexample to my claim that if we're asking "how to build" something we must be concerned only with the thing's present causal dispositions. One such counterexample is as follows. If I want to "build" ("make" is probably the better word here but the idea is the same) a dollar bill, it matters how I _made_ it and not just what the thing I made can presently do. I can make a perfect facsimile of a dollar bill, with all the same causal powers as a dollar bill, and yet if the thing doesn't have the right history, I haven't made ("built") a dollar bill. I've made a fake.

    I'll respond to that point in a moment. But first, to answer your own point more directly:

So far I've seen two types of defenses of the idea that Swampman's states do not have biological (and, ipso facto, cognitive) functions.

    1. Some people (Millikan in "On Swampkinds" and Neander in "Swampcow" come to mind) argue that if we were to treat Swampman as of a kind with human beings, then we would have no ground from which to make biological generalizations about human beings. For the grounding for biological generalizations involves the fact that humans are produced by replication (with mutation) from other humans. This fact about them grounds the inference that humans tend to share functional characteristics. But if Swampman is a human as well, we no longer have a ground for that kind of inference about humans, and we can no longer do Biology.

2. Insisting that Swampman is biofunctionally and cognitively like us involves reliance on something called the "Internalist Intuition," the view (if I recall correctly, not having the article at my fingertips) that functional characteristics must supervene on physical structure; and on Dretske's theory, I.I. must be false. (You can guess that the person I'm thinking of who gives an argument of this second type is Dretske. To be clear, he is talking about an entity called "Twin Tercel," but his argument applies to Swampman as well.)

Regarding argument type 2, I won't present a knock-down argument for I.I. here because there's no space. (Also, incidentally, I don't have such an argument to give!) But what I'm going to say about the original/derived distinction doesn't rely on I.I., so the point is moot.

    Regarding argument type 1, the point is well taken as concerns Swampman, but it doesn't apply to the original/derived distinction as it is used in discussions over ex-cog. There's a practical reason not to admit Swampman as being of a biological kind with human beings. If you did so, you'd no longer have good reasons for biological generalizations. That leaves you in the cold when it comes to the practice of Biology. But there is no such practical reason for thinking that cognizers must have a particular kind of history. At least, I can't think of one. Can you? Would you affirm the following, and if so, why?

    Claim A: If the original/derived distinction could be made in terms purely of present causal powers or dispositions, then we could no longer make scientific generalizations about cognizers.

    If you wouldn't affirm Claim A, then it seems the kind of reasoning given by teleofunctionalists who think Swampman has no biological functions doesn't apply in the case of the original/derived distinction. The mention of Swampman in this context would appear to be a dead end.

I'm not generally a fan of "shifting the burden of proof" (in my experience, both sides should understand themselves as having something to prove), but in this case, I find I simply can't see a reason to think that we must understand original content as essentially involving history in order to do cognitive science. I can see how this would go for biofunction and biology, but I can't see how this would go for original content and cognitive science. I don't mean to be shifting the burden of proof; rather, I'm saying "I just don't see it, but can you show me?"

It may be that I simply haven't read enough of the right research regarding the practice of cognitive science to have a good enough idea of what would be required to ground the kinds of generalizations it makes.

    Now, about the dollar bill. The reason the counterexample works is that part of what it is to be a dollar bill involves how agents ought to handle the dollar bill. We care about the history because facts about the history make a difference as to how we should handle the object. But is it plausible that anyone (or anything?) should care about the history of a representation for any analogous reason? Do facts about the history of a representation make a difference as to how anyone (or anything?) should handle the representation? It seems to me any functionally (in the computational sense) equivalent representation can do the same job--I can see no reason an organism should treat representations differently based on their histories. (There it is again: "I can't see it, but can you show me?")
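    The computational claim in that last paragraph can be made concrete with a minimal sketch. It is my own illustration, not anything from the thread, and all names in it are hypothetical: two representations with identical present input-output profiles are interchangeable for any consumer, whatever their histories.

    ```python
    # A minimal sketch (my illustration, not the thread's) of functional
    # equivalence: a consumer of a representation has access only to its
    # present input-output behavior, never to its history.

    def rep_learned(x: int) -> bool:
        """Pretend this predicate was trained into the system over years."""
        return x % 2 == 0

    def rep_swamp(x: int) -> bool:
        """Pretend this one congealed into existence a moment ago."""
        return x % 2 == 0

    def consumer(is_even, xs):
        """A downstream process using the representation. Nothing in its
        operation could be sensitive to where is_even came from."""
        return [x for x in xs if is_even(x)]

    # Identical present causal profile, hence identical downstream work:
    assert consumer(rep_learned, range(6)) == consumer(rep_swamp, range(6))
    ```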

    It might be that in the usual course of events, the history of a representation is a good indicator of its usefulness for this or that job, so the history might matter in a practical sense. But that doesn't seem to support the notion that a particular representation _just doesn't count_ as original unless it's got the right history. A thing _just doesn't count_ as a dollar bill unless it's got the right history. There seems to be no analogous necessity attached in the case of original content. This seems to me to be the crucial distinction between the dollar bill case and the original content case.

    Making a representation with original content seems more akin to a task like "make a ball four inches wide" than a task like "make a hammer." Something's being a hammer depends on how it is used by agents. Something's being a ball four inches wide does not.

Searle makes a distinction in this area in his book The Construction of Social Reality. The terms he uses for the distinction aren't springing to my mind at the moment. But it's the distinction between properties like "being heavy" and "being Tuesday". The former are properties had by things independently of the way we treat them. The latter are properties had only because of the way we treat them. (Or something like that.) The funny thing is, I remember at the time thinking he hadn't done a satisfactory job of showing there is any such distinction to be made, but now I'm relying on something like it. Looks like I need to go back and re-read.

  3. Perhaps Claim A should be weakened to: If the original/derived distinction could be made in terms purely of present causal powers or dispositions, then we could no longer use it to make scientific generalizations about cognizers.

    (The difference is, I added the words "use it to".)

  4. Kris,

    You write:

    “There's a practical reason not to admit Swampman as being of a biological kind with human beings. If you did so, you'd no longer have good reasons for biological generalizations. That leaves you in the cold when it comes to the practice of Biology. But there is no such practical reason for thinking that cognizers must have a particular kind of history. At least, I can't think of one.”

    A familiar argument given for thinking that cognizers must have a particular kind of history goes something like this: cognizing involves representing. Nothing is a representation in virtue of its intrinsic properties (e.g., marks on a blackboard do not represent anything in virtue of their intrinsic properties). Instead, what makes something a representation depends on how it is used. But since we need to make room for the possibility of misrepresentation, we need to distinguish how a representation is used from how it is supposed to be used. To say that a system is supposed to use a representation in a certain way is to say that the system has a certain function. Functions are not intrinsic properties, but historical properties. Thus, since cognition presupposes representation, representation presupposes function, and function presupposes history, cognition presupposes history.

    A bit quick, to be sure. But I think it conveys the gist.

I should clarify: of course I know of the reasons most people think that for something to be a representation, it has to have the right history. What I don't know of is a good reason for people to think that in order for a representation to have _original content_ it has to have the right history.

It happens that I also don't think representations have to have the right history in order to represent what they represent. I don't agree that functions are historical properties. But that's beside the point in this particular thread of conversation. (The volume _Representation in Mind_, ed. Clapin, Staines and Slezak, contains several nice essays on this topic, arguing in a few different ways that teleofunctions can be understood in terms of a system's present dynamics rather than its historical etiology.)

Hi, I want to start a library of blogs on my and my pals' philosophy site; it is here:

    dissidentphilosophy.lifediscussion.net

How can I contact you? I am not familiar with blogs ...

  7. Comment on Kris Rhodes' Wednesday, May 13th Posting

    Natika Newton
    natika.newton@gmail.com

This is a follow-up to Kris Rhodes' argument that present causal powers, not historical origins, should determine whether representations are original or derived.

Where do underived representations come from? Adams and Aizawa propose a version of the causal theory of Fodor, Dretske, and Cummins: representations represent via some natural process that originally caused them, or brought them into being. Rhodes proposes that present causal powers are determinative. I think he is right, but I would characterize the role of original content in representations not as that of a causal factor related to the process, which would imply that it might be external to the process itself, but as that of an inseparable constituent of the process, both conceptually and empirically. In my argument I will use the term "representations" after Prinz, who holds that emotions represent one's core relations to the environment. As such, Prinz's "gut feelings" are representations that are intrinsic to cognition.
    If underived content is crucial to cognition, what originally generates the content? The user of the underived representations understands them as having meaning for her. Since she conferred the meaning, her understanding cannot be undermined by someone else's reinterpretation. She knows what she meant. How does this bond between user and representations become so strong, such that only use of these non-derived representations generates true cognition?
    There seems to be only one possibility: the original content includes the representation of some goal of the user. The user is motivated toward some achievement, such as to reach a decision or solve a problem. If this motivation is to be the mark of the cognitive, then it must determine the entire process as cognitive, not just the beginning. Normally a process such as solving a logical proof consists of stages, each requiring a selection of the next path. Selection requires (a) a goal, (b) evaluation of the alternatives as routes to the goal, and (c) some signal that one path is the best option. Without this ongoing motivation, which takes the form of representations in human cognizers, the system is incomplete. It is just a tree diagram branching exponentially, with nothing to distinguish the various branches. The system has no goal because each choice point leads to a different end, and there is nothing in the system either to distinguish one end from the others or to cause the various branches to converge. Its only content is the whole set of value-neutral branches.
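    Newton's requirements (a) through (c) read naturally as a description of evaluation-guided search. Here is a minimal sketch, my illustration rather than hers, with all names hypothetical: without a goal and an evaluation signal, the process is just the exponentially branching tree she describes; with them, the same choice points yield a single directed path.

    ```python
    # A minimal sketch (my illustration, not Newton's) of her contrast between
    # an undirected branching tree and a goal-directed selection process.

    def children(state: int) -> list:
        """A toy choice point: each state branches into two successors."""
        return [2 * state, 2 * state + 1]

    def bare_tree(state: int, depth: int) -> list:
        """No goal, no evaluation: the whole set of value-neutral branches."""
        if depth == 0:
            return [state]
        return [leaf for c in children(state) for leaf in bare_tree(c, depth - 1)]

    def directed(state: int, depth: int, goal: int) -> int:
        """(a) a goal, (b) evaluation of the alternatives at each stage, and
        (c) a signal that one path is the best option."""
        for _ in range(depth):
            if state == goal:       # reaching the goal terminates the process
                break
            state = min(children(state), key=lambda c: abs(c - goal))  # (b) + (c)
        return state

    print(len(bare_tree(1, 10)))        # 1024 undistinguished end states
    print(directed(1, 10, goal=1023))   # one path converging on the goal: 1023
    ```

    The point of the sketch is only structural: the evaluation at each stage is what distinguishes one branch from the rest, which is exactly what Newton says is missing from the bare tree.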
    Even if the non-derived content is a simple “Ah, there!” in some primitive creature (A/A), that assignment of content is a motivated, goal-directed “act” understood by the agent in some such terms as “Ah, there is something important to my well-being!”
    It might be thought that the relevant priorities can be programmed into a machine that is thereby a cognizer, like the choice heuristics programmed into Deep Blue, the computer chess master. Perhaps it could be. But the present issue is what the mark of the cognitive is in human thinkers who may use inanimate objects as tools to aid their thinking, and meaning and understanding are essential to that thinking. Meaning and understanding are found in the brain and are not intrinsic to handy tools themselves.
    It might also be thought that this ongoing evaluation is simply an external causal feature in the cognitive process. It is true that it has a causal function, as does every stage in a process that leads to another stage. But without the evaluation, as mentioned above, there is only an abstract tree of possibilities, and no active process that one could call the process of cognition.
