Not directly related to extended cognition, but these remarks spun out of my thinking about Swampman, which spun out of my thinking about extended cognition.
Too long to put here, but here's a link:
Swampman Takes The Turing Test
The linked document is an organized collection of remarks shaping into an argument. I wouldn't even call it a "draft" yet.
Basically, I argue that the Turing Test provides no evidence that a machine is thinking, because in order for it to provide such evidence to us, we'd have to make certain assumptions which either trivialize or cancel out any conclusions we might have drawn from our observations during the test. The Turing Test provides no evidence for psychology in the same way that an examination of Swampman provides no evidence for biofunction.
Friday, October 16, 2009
I was really confused by this. I admit that I only read the first part closely, but it seemed really off base. Plausible accounts of inductive logic usually involve Bayesian updating or some such. But then we do have evidence for thinking that Swampman has a heart, etc., because Swampman's history is not our only evidence about him: there is also the fact that he behaves, looks, etc. (as stipulated) just like a human being. In fact, when performing inductive generalizations, the history of a thing is of relatively minor importance compared to its immediately observable external features. When I find a fossil, see a duck, meet an old friend on the street, etc., strictly speaking I never have knowledge of the history of these objects. But I do treat them as a fossil, duck, friend, etc. on the basis of the features I observe.
Of course, in an absolute sense, this inference isn't justified, but that's just Hume's problem, and it applies to any inductive generalization. The whole point of Hume's problem is to demonstrate that *any* argument of the form "it looks like it behaves in accordance with rule x, therefore it behaves in accordance with rule x" is not sound. That's just as good as inductive reasoning gets. We don't (in fact *can't*) "know" that "anything that looks human is human" any more or less than we (can) know that "if it can behave like a psychological agent for five years, then it is appropriately treated as a psychological agent". These arguments have logically the same structure and fail in soundness independent of any contingent facts about the world we may have observed. That's the very point.
But, correspondingly, this also means they are equally valid as argument forms. If you buy one, you must buy the other.
I think I can answer your concerns. The reason we can make inductions despite Hume's problem is that we are able to make certain innocuous assumptions about the uniformity of nature. Nature doesn't _have_ to be uniform, but in fact, it is. Lucky for us. And this uniformity is what lets certain inductions work for us.
Biological generalizations and inductions work because of a certain uniformity in the biological world, a uniformity that comes from the fact that biological entities arise through the process of natural selection. That process tends to preserve biological features. That's the "uniformity" so to speak that we must (and do) assume to hold in order to make biological generalizations and inductions. (Thus sayeth Millikan, anyway, and I think she's basically right.)
But an entity like Swampman doesn't share that selection history, so the uniformity principle just referred to doesn't apply to him. This means biological generalizations don't properly apply to him either. That's the argument that Swampman doesn't have a heart--an argument I _don't_ agree with, though I _do_ agree that we couldn't properly apply the concept "heart" to the lump of flesh inside Swampman's chest if we were to encounter him. (I think he has a heart, but I think we can't know that if we actually encounter him in our own world.)
In the set of remarks I linked to, I argued that similar considerations apply to a machine taking the Turing Test. In order to conclude that it's a thinker, you must make certain uniformity assumptions about its behavior--and those assumptions are justified when applied to human thinkers, but not when applied to computers running programs designed by humans. In other words, it makes sense to assume that humans behaving similarly have similar psychological profiles, but it _doesn't_ make sense to assume that _just anything_ behaving similarly to a human shares a human's psychological profile. Humans share traits and histories that ground a uniformity assumption about the psychological nature of their behavior. There are no such shared traits or histories to ground such an assumption about a computer's behavior. (Or rather, there _could_ be such shared traits or histories if the computer were designed the right way, but that is to be determined by examination of the computer's inner workings, not just by examination of its behavior.)
Did I make it worse?
First, uniformity assumptions: as these are cashed out in Hume and in most models of inductive reasoning, the frequency of co-occurrence of traits is used to support inferences from the presence of one trait to the presence of another. By definition, Swampman shares many traits with an ordinary human being, so on any of these models of inference we could move to the assumption that he shares other, perhaps not directly observable, traits.
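For concreteness, here's a minimal sketch (in Python) of that style of inference: Bayesian updating on observed traits, where the prior and the co-occurrence frequencies are invented numbers for illustration only, not anything drawn from the post above.

```python
# A toy model of trait-co-occurrence inference via Bayes' rule.
# Hypothesis H: "this entity has a heart" (an unobserved trait).
# All probabilities below are made up for illustration.

def update(prior, p_trait_if_h, p_trait_if_not_h):
    """Return P(H | trait) given P(H) and the trait's likelihoods."""
    joint_h = prior * p_trait_if_h
    joint_not_h = (1 - prior) * p_trait_if_not_h
    return joint_h / (joint_h + joint_not_h)

belief = 0.5  # agnostic prior about the unobserved trait

# Observed traits with hypothetical co-occurrence frequencies:
# (description, P(trait | H), P(trait | not-H))
observed = [
    ("looks just like a human", 0.99, 0.05),
    ("behaves just like a human", 0.95, 0.10),
]

for name, p_if_h, p_if_not_h in observed:
    belief = update(belief, p_if_h, p_if_not_h)
    print(f"after '{name}': P(has a heart) = {belief:.3f}")
```

On these (made-up) numbers, two human-like traits push the probability of the unobserved trait from 0.5 to above 0.99, which is the commenter's point: shared observable traits alone can drive the inference, with no appeal to history.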
Second, the problem about Swampman's heart: my understanding is that the argument goes like this - we want to define a heart in terms of its function, but functions require correctness conditions, and we want to cash these out in some naturalistic way. The teleological solution is to cash function out in terms of the history of the organism / organ / whatever. Now, IF you subscribe to this strategy for deriving functional norms from evolutionary history, and IF you want to define organs like the heart in terms of their function, THEN you're forced to the conclusion that Swampman doesn't have a heart.
This line of argument is never going to work against the Turing test, however. The reason is that the Turing test itself constitutes a different strategy for defining function. Turing says, here's an operational definition of intelligence. IF you subscribe to a definition of intelligence like the one that led to the Swampman problem, i.e., IF you think it must be cashed out in terms of function, *and* IF you think that function can only be cashed out in teleological terms, THEN of course the history of the device *and not whether it can pass the Turing test* is what's relevant for attributing intelligence to it.
The point is this: Millikan and Turing are offering different analyses of function. The Millikan analysis does not subvert the Turing test in any way because the Turing test just depends on a different analysis. For my money, a much better one! Instead of trying to imagine swampman taking the Turing test, we should be figuring out how to take Turing's strategy and apply it to swampman's heart in order to avoid the ridiculous conclusion that he doesn't have one!
[Maybe something like this (?): take a human heart and a swampman organ found in the corresponding place. Transplant them both into patients who need heart transplants. If no statistical difference in survival or recovery rate can be discerned between those receiving heart transplants and those receiving swampman organ transplants, then the swampman organ is a heart.]
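To make the bracketed proposal concrete, here's a minimal sketch of the kind of statistical check it describes: a two-sided two-proportion z-test comparing survival rates across the two transplant arms. The trial sizes and survival counts are invented purely for illustration.

```python
# Sketch of the operational test proposed above: compare survival
# rates between heart recipients and swampman-organ recipients.
# Counts are invented for illustration only.
from math import sqrt, erf

def two_proportion_ztest(successes_a, n_a, successes_b, n_b):
    """Two-sided z-test for equality of two proportions."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical trial: 100 patients per arm.
z, p = two_proportion_ztest(88, 100, 86, 100)
print(f"z = {z:.2f}, p = {p:.3f}")
if p > 0.05:
    # (Strictly, a dedicated equivalence test would be more apt,
    # but this mirrors the "no discernible difference" proposal.)
    print("No discernible difference: by the operational criterion, "
          "the swampman organ counts as a heart.")
```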