Saturday, October 15, 2005

fodor's modules

from my recent report handout:

Fodor is vague, but can perhaps be pinned down to some general claims:
He doesn’t get much more specific than saying that modules are “input systems”.
Says they’re not to be thought of as touch, smell, sight, hearing, taste, plus language. Instead something “more in the spirit of” Gall’s phrenology.
He will commit to saying...
- they’re domain specific,
- their operation is mandatory,
- their representations are mostly inaccessible to central systems,
- they’re fast,
- they’re informationally encapsulated,
- they have shallow outputs,
- they’re associated with fixed neural architecture,
- they exhibit characteristic breakdown patterns.
All of this seems difficult to argue with, partly because it's vague and partly because it seems like it must be true to some degree, given computational efficiency, basic brain anatomy, and intuition/experience...

The more interesting part is where he talks about central systems. Basically it’s the familiar argument that if all this other stuff is encapsulated, there must be domain-general capacities (a little man) that pull it all together. These central systems, he says, are non-modular.
- Belief fixation is a process of “rational nondemonstrative inference”.
- Central processes are “Quinean/isotropic”.
This seems to mean that a central process like belief fixation takes into account all other beliefs and their global properties, but we don't understand how it works or how it gets around issues like the Frame Problem. Presumably this would be accomplished through unstable, instantaneous connectivity and diffuse, changing neural connections.
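The contrast can be put in pseudo-computational terms. Here's a toy sketch (my own illustration, not anything Fodor formalizes): an "encapsulated" module computes its output from its proprietary input alone, while a "central" process may consult the entire belief store when fixing a new belief, which is exactly what makes it Quinean/isotropic and hard to scale.

```python
def edge_detector(retinal_input):
    """Encapsulated module: sees only its own input, never the belief store."""
    return [i for i, (a, b) in enumerate(zip(retinal_input, retinal_input[1:]))
            if abs(a - b) > 1]

def contradicts(p, q):
    # crude stand-in for real inference: "not X" contradicts "X"
    return p == "not " + q or q == "not " + p

def fix_belief(candidate, beliefs):
    """Central/'Quinean' process: acceptance depends on global coherence,
    i.e. potentially on every belief already held."""
    return all(not contradicts(candidate, b) for b in beliefs)

beliefs = ["the sky is blue", "dogs bark"]
print(edge_detector([0, 0, 5, 5, 0]))          # module output depends only on its input
print(fix_belief("not dogs bark", beliefs))    # False: conflicts with the belief store
print(fix_belief("cats meow", beliefs))        # True: coheres with everything held
```

The point of the sketch is just that `fix_belief` has to loop over *all* beliefs, while `edge_detector` never touches them, which is the encapsulation/isotropy distinction in miniature.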

Conclusion:
- perception and language are encapsulated, and have fixed neural architecture.
- “thought” is all-knowing, connected to everything, and has flexible architecture.
So if I want to argue against this idea, I just have to show that thought and perception/language don’t have such different neural architecture, and/or there is no rational all-knowing central processing in this sense.

more on modules:
In section III.6, which is ostensibly about how input analysers have 'shallow' outputs, he spends a long time talking about language and categorization and whether context matters. Contextual influences on language understanding would constitute breaches of informational encapsulation. He gives some of the (now) typical examples about how we make observations and comments like "there's a dog" or "there's a chair" much more often than ones like "there's a miniature poodle" or "there's a piece of furniture" (i.e., at a medium level of abstraction rather than a more specific or more general one). He links this to the idea that in communication we have to balance how informative our utterances are against how much effort it takes to generate them. He claims that modularity is somehow tied to this preference for medium-abstraction categories: they're the ones that require the least contextual information and can be computed most quickly. The long and the short of it is that he thinks this sort of quick understanding/recognition of categories gets at what kinds of modules there are, and that it's done without any top-down information.

The obvious objection to this, without even getting into any facts about neurophysiology or development, is that in different contexts, the categories that are recognized quickly and automatically shift. At a dog show, a judge would automatically identify dogs by breed, while in day-to-day life he or she would probably make observations at the "dog" level. Or a chess player automatically sees a board in terms of the available moves rather than as just a chess board, depending on whether the arrangement is one that makes sense in a game context. Maybe Fodor could defend his theory by saying that the most salient categories differ between people (experts versus non-experts) but stay the same within each individual. I don't think that's true; any kind of priming experiment would suggest otherwise.
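The objection can be made concrete with another toy sketch (again my own illustration, with made-up labels, not data from any actual experiment): if the category a perceiver produces fastest depends on recent context or expertise, then the quick output isn't fully sealed off from top-down information.

```python
# Hypothetical category hierarchy for a single stimulus:
# superordinate -> basic -> subordinate.
CATEGORY_LEVELS = {"fido": ["animal", "dog", "miniature poodle"]}

def quick_label(stimulus, context=None):
    """Return the category produced fastest. The default is the basic
    ('dog') level, but a priming context shifts the fast response."""
    superordinate, basic, subordinate = CATEGORY_LEVELS[stimulus]
    if context == "dog show":   # expert or primed context
        return subordinate      # breed-level recognition comes first
    return basic                # otherwise the medium level wins

print(quick_label("fido"))                      # basic-level default
print(quick_label("fido", context="dog show"))  # context shifts the output
```

If something like `quick_label` is the right picture, the fast categorization Fodor points to varies with context within one individual, which is just what encapsulation is supposed to rule out.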
