How to Be a Meaning Atomist

Eric Lormand
University of Michigan
email: lormand@umich.edu

Under review: April, 1996

Meaning atomists hold, roughly, that there is a small stock of semantically primitive representations in a linguistic or mental system (the "atoms"), and every other representation in the system is completely definable by these. The main difficulties for atomism are the apparent scarcity of genuine definitions and the lack of a principled, plausible specification of the atoms. My aim is to develop a solution to these problems, at least for systems of mental representations.

1 What is meaning atomism?

Meaning atomism is a theory about the spreading of meanings within a representational system.<1> On meaning atomism about a system S, the meanings of all the representations in S can be specified completely in terms of a relatively small set of semantically atomic (or simple, primitive, basic) representations in the system. In other words, meaning atomists about a language, mind, or other system S hold that a small stock of representations in S are sufficient for defining the meaning of all the rest.<2> How small a set of atoms does meaning atomism require? For familiar natural languages, an atomist believes that syntactically complex representations ("brown cow") are typically definable using their syntactic components ("cow"), but also believes that syntactically simple representations are typically definable. Similarly, atomists think that the meanings of mental representations "corresponding" to simple morphemes of natural language (e.g., one’s idea of cows) are typically definable. By contrast, a nonatomist takes syntactically simple linguistic representations and their mental counterparts to be typically indefinable.

Meaning atomists must satisfy two requirements. First, they must provide a theory of atoms—a specification of the representations that are semantic atoms. Second, atomists must provide a theory of definitions—a specification of how each nonatom can be defined in terms of ensembles of atoms. These projects have fallen on hard times in contemporary philosophy, and meaning atomism has fallen along with them. Traditional theories of atoms have appealed to one or another epistemological criterion: the atoms are taken to be those representations that are applied incorrigibly (e.g., to "sense data") or at least noninferentially (e.g., by "direct" observation). Traditional theories of definitions hold that atomic definitions provide analytically necessary and sufficient conditions for the applicability of nonatoms. Such traditional theories are open to very strong objections. They yield particularly strong versions of the two "dogmas" made infamous by Quine (1953): semantic reductionism (e.g., of theoretical to observational terms, or of "distal" observational terms to "proximal" sense data), and commitment to the existence—and even abundance—of truths of meaning in familiar systems of representation.

Over time, opponents of atomism have objected to these dogmas by arguing that the reductionist set of atoms is too meager, that attempts to enlarge the set are unprincipled (see section 4), that analyticities are at best scarce (in familiar languages or minds), and that people typically do not appeal to necessary and sufficient conditions for the applicability of their representations (see section 2). There is not much hope for saving traditional meaning atomism from these objections. My aim in this paper, then, is to supply an alternative theory of definitions that weakens the properties required of them (see sections 2 and 3), and an alternative theory of atoms that avoids reductive, epistemologically privileged, foundations (see sections 4 and 5).

Of course, if one insists on atomism at all costs, there are innumerable theories of atoms and definitions that will suit. Any arbitrary selection of a few representations in a system (to play the role of "atoms"), coupled with any arbitrary method of pairing other representations with ensembles of these (to play the role of "definitions"), could be used to satisfy meaning atomism. What reasonable constraints on theories of meaning might block such maneuvers? I think we can find some by focusing on a notion of meaning that is suitable for use in cognitive sciences such as cognitive psychology, cognitive neuroscience, psycholinguistics, and artificial intelligence, as they might (or might not) be opposed to common sense. By and large, cognitive scientists are concerned primarily with mental representations (e.g., contentful psychological items, perhaps akin to beliefs and ideas), and only secondarily with public-linguistic representations (e.g., English assertions, sentences, or words). Against eliminativism, behaviorism, and instrumentalism, I will assume without argument that there really are mental representations (though perhaps we haven’t discovered all the types, and perhaps the types we currently posit don’t exist), and that they are causal intermediaries between perceptual and motor organs. Nevertheless, I will try to avoid controversial claims about their specific form or function.<3>

I take it that, at a minimum, cognitive science appeals to the meaning of mental representations for three theoretical purposes: (i) to help specify the mental representations, (ii) to help express generalizations about their (actual or idealized) functional role—generalizations relating mental representations not only to external conditions, via processes such as perception and action, but also to other mental representations, via processes such as inference, and (iii) to help explain these generalizations. I will be interested primarily in the pursuit of these goals—in psychosemantics rather than linguosemantics or semantics as a whole.<4> In order to fulfill the purposes of psychological explanation, psychosemantic kinds should be psychological "natural kinds" rather than kinds arbitrary with regard to (actual or idealized) psychological generalizations. I assume, then, that theories of atoms and definitions should be nonarbitrary from the point of view of cognitive-scientific explanation, so that they should track psychologically interesting natural kinds and distinctions. If we accept theories that are not psychologically natural, we risk compromising the potential role of meaning in psychological explanation.

My main claim on behalf of my proposed version of meaning atomism is modest; I will argue that it is tenable in the face of standard objections. However, let me suggest why it may be preferable to theories that reject meaning atomism altogether. As a preliminary point, notice that even if we can reach correct theories of atoms and definitions, meaning atomism is only part of a complete theory of meaning. A complete atomistic theory of meaning must also satisfy a third requirement: it must specify the meanings of the atoms. In other words, some theory has to be given about what it is for atoms to have the particular meanings they have. Without this, we may know which ensembles of atoms spread meaning to which nonatoms, but we will not know which meanings they spread. In this paper I do not mean to offer any such theory of meaning for the atoms, but I assume that the correct one will consist, at least partially, of a theory of reference conditions for the atoms—a theory specifying the ways the world has to be in order for an atom to be about some existing entity (i.e., to have a referent). Philosophers interested in reference hold a wide range of views relevant to this issue, accounting for reference conditions in terms of causation, information, verification, evidence, teleological function, psychological explanation, interpretation, translation, and similarity (not to mention disquotational theories and the increasingly popular itself-and-not-another-thing theories of reference). Call such a theory a "substantive" theory of reference, as opposed to a theory of definitions which specifies how reference conditions, whatever they are, spread from one set of representations to another. In this paper, I mean to be neutral as regards all substantive theories of reference.

The strategic reason for wanting an atomist theory is that atomism promises greatly to simplify the task of providing a substantive theory of reference. Although both nonatomists and atomists need to explain reference, atomists can break the explanation into two easier stages. At the first stage, given a theory of atoms, we only need a substantive account that applies directly to the small set of atoms. At the second stage, given a theory of definitions, we have an indirect account of the reference of the much larger set of nonatoms. Without such a two-stage process, nonatomists would need a substantive account of reference that applies directly to all representations (or at least to all syntactically simple ones).

This latter task is much more difficult, as we may judge from nearly every attempt to provide a substantive theory of reference. For example, causal, informational, or teleological theories of reference conditions are typically more plausible for cases like "dark", "curved", and "this" than for cases like "sofa", "water", "unicorn", and "Santa Claus". If the latter terms are nonatomic—say, definable with terms like the former—then a substantive theory of reference conditions need not apply directly to them. When theorists of reference are modest, they explicitly announce that their theories are only applicable to a restricted set of representations—e.g., observational predicates, or logical particles, or demonstratives. When theorists are bold, counterexample industries spring up overnight.<5> The interesting thing is that the counterexamples nearly always involve apparent nonatoms such as (the mental correlates of) "cow", "water", "unicorn", and "set", rather than apparent atoms. Defenders of any of the available theories of reference, therefore, might find it useful to plug meaning atomism into their proposals, by focusing only on the reference of the atoms, and letting atomism spread meaning to the nonatoms. Instead of trying to develop a theory of reference conditions that can apply directly to troublesome representations such as "God", "unicorn", and "cow", we can make do with a theory that applies only to the atoms. Otherwise, in my view, it is too hard to make a nonatomist theory of reference work. Barring some major breakthrough, the only way we're going to get an adequate theory of reference is to accept the widespread presence of definitions in terms of atoms, so we had better find a way to live with this, despite the objections to definitions and atoms.<6>

2 Troubles for atomism

In addition to a widespread and understandable pessimism based on a history of failed theories of atoms and definitions, atomism faces two objections which apply in advance of the search. These threats stem from what has become known as "inferential role semantics" (IRS, also sometimes called "conceptual", "functional", "psychological", or "causal" role semantics; see Block, 1986). IRS holds that the meaning of any mental representation depends at least partially on other mental representations that are inferentially—e.g., causally, functionally, and rationally—related to it.<7> A full IRS theory of meaning may appeal to more than inferential relations, such as causal relations between mental representations and nonmental phenomena. But according to IRS, such noninferential relations are not the only ones relevant to meaning. Perhaps the main attraction of IRS is that it promises to underwrite psychological purposes for being interested in meaning, by tying meaning partially to the psychological role of mental representations.

The first IRS objection to atomism is simply that atoms would violate IRS. Consider a system S, containing a representation a that the atomist alleges to be atomic. It seems that any serviceable theory of atoms will have to treat a’s meaning as independent of all other representations in S—how else could it be semantically primitive? But such an account of a’s meaning threatens to undercut psychosemantic purposes (i)-(iii), by making a’s meaning completely independent of a’s particular relations to other representations.

The second objection concerns definitions and a branch of IRS called "meaning holism". Consider a representation n, in S, that an atomist alleges to be nonatomic. The atomist must hold that n is completely definable using a small subset of the other representations in S, namely, an ensemble of the atoms. It seems then that any serviceable theory of definitions will have to draw a (sharp or vague) distinction between (a) the other representations in S that are part of n’s definition, and so are relevant to or constitutive of n’s meaning, and (b) those that are not part of n’s definition, and so are irrelevant to or merely collateral for n’s meaning. In short, any theory of definitions seems to need a constitutive/collateral distinction. Of course, any arbitrary method of ordering the inferential relations of a representation, coupled with any arbitrary method of dividing the series in two, could be used to generate a meaning-relevant vs. meaning-irrelevant distinction. But I am assuming that semantic kinds and distinctions are nonarbitrary from the point of view of cognitive-scientific explanation. I argue elsewhere, given some plausible and live psychological speculations, that there may well be no psychologically explanatory distinction that should be used to separate some representations relevant to n’s meaning from some representations irrelevant to n’s meaning (Lormand, 1996). But I do not mean to defend IRS or holism in this paper. They enter as worst-case scenarios for atomism; my aim is to develop theories of atoms and of definitions that would survive these most extreme threats.

Here is the strategy I recommend to the atomist for compatibility with IRS and meaning holism. One root worry is that the atomist commitment (1) precludes the IRS claim (2):

(1) An atom has a primitive meaning, independent of inferentially related representations.

(2) The meaning of any representation depends on some inferentially related representations.

The second root worry is that the atomist commitment (3) precludes the holist commitment (4):

(3) A nonatom is defined in terms of a small subset of inferentially related representations.

(4) The meaning of any representation depends on all inferentially related representations.

But the apparent incompatibility between (1) and (2), and between (3) and (4), is based on a certain implicit presupposition about representations and meanings. Essentially, the presupposition is that there is a one-to-one correspondence between token representations and meanings, so that token representations are unambiguous meaning-bearers. The apparent inconsistency between atomism and IRS/holism depends on (2)’s and (4)’s talk of "the" meaning of a representation, construed as the single meaning. But if a representation has multiple meanings, all at once, (1) and (2) are compatible, and so are (3) and (4). (We can construe "the meaning" of a representation as numerically indefinite, covering all a representation’s meanings-with-an-s, just as we speak of "the effect" of an event meaning an indefinite number of its effects-with-an-s.) An atom can have a primitive meaning—satisfying (1)—while also having other nonprimitive meanings—satisfying (2). Each definition of a representation can be in terms of a small subset of atoms—satisfying (3)—even if its total set of meanings depends on all other representations in its system—satisfying (4). This reconciles meaning atomism with IRS and meaning holism.

To illustrate this, I need to introduce more details about multiple meanings, although it is not my purpose in this paper to provide a full development and defense of the multiple meanings view (MM). Suppose that for any given (token or type) representation r in S, the set of other representations in S (with which r is potentially used in inference) can be divided into several (suitably characterized, perhaps overlapping) "units" such that r expresses several meanings, perhaps one for each of these units. For purposes of illustration, think of each unit for a representation as a separable rough test for the acceptable use of that representation.<8> To take a simplified example, suppose that a small child, Larry, uses a token mental representation [bird] with the following three tests among many others: [feathered flying animal], [thing similar enough to B1,...,Bn] (where the [Bi] represent alleged birds), and [thing called "a bird" by mommy]. Suppose, implausibly but for the sake of simplicity, that the representations in these three tests are wholly composed of atoms (we will have to explain what that comes to in sections 4 and 5). In this case, on the view I propose, each unit is a definition—as opposed to "the" definition—of [bird], and [bird] has at least three meanings, one associated with each of these three definitions.<9>
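
To fix ideas, here is a purely illustrative sketch of this bookkeeping (the data structure and names are invented for exposition; nothing here is offered as a psychological model): each representation is paired with its units, and each unit yields one meaning.

```python
# A minimal sketch of the multiple meanings (MM) picture: a representation
# is paired with several "units" (rough tests for its acceptable use), and
# each unit that bottoms out in atoms counts as a separate definition, hence
# a separate meaning. All names here are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Representation:
    label: str                                  # e.g. "[bird]"
    units: list = field(default_factory=list)   # each unit is one rough test

larrys_bird = Representation(
    label="[bird]",
    units=[
        "[feathered flying animal]",
        "[thing similar enough to B1,...,Bn]",
        "[thing called 'a bird' by mommy]",
    ],
)

def meanings(rep):
    # On MM, the representation has (at least) one meaning per unit,
    # rather than one privileged definition.
    return [(rep.label, unit) for unit in rep.units]

print(meanings(larrys_bird))
```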

MM can be accepted without accepting meaning holism; even if r has multiple units, taken together these may not contain every other representation in S. But if each other representation does find its way into some unit(s) for r, like proper holists we don’t have to distinguish between representations that are and aren’t meaning-relevant to r. Each unit would provide a separate meaning for r, and would be vitally relevant to that meaning, and completely irrelevant to others. But the total set of meanings of r—i.e., its "meaning" in the numerically indefinite sense—would depend on each unit. Via some meaning or other, r would depend semantically on every other representation in S, which would satisfy meaning holism for r.<10>

If suitable notions of units and atoms can be developed, the multiple meanings proposal also secures meaning atomism and its advantages regarding the theories of reference and concept acquisition (see section 1). Perhaps we can give the meanings of a nonatom in terms of its associated units, and give the meanings of the representations in these units in terms of the meanings of their associated units, and so on down to units wholly composed of atoms. We can even have IRS and meaning holism for the atoms, since an atomic representation can have some atomic meanings (that do not depend on other representations), while its other meanings are nonatomic (and do so depend), so that even its total set of meanings depends on all the other representations. This requires (a) showing how chains of associated tests can lead to atoms rather than merely in circles (see section 3), (b) giving a theory of atoms (see section 4), and (c) giving a theory of the atomic meanings—e.g., reference conditions—of atoms.

3 Toward a theory of definitions

A definition for a representation r is, at a minimum, an ensemble of other representations in r’s system that shares a meaning with r. Intuitively, we would expect two synonymous or semantically equivalent representations to be "treated the same" in inference, i.e., for the system to include an actual or idealized disposition (to a degree, on balance, given time, interest, etc.) to "substitute" them for each other in a suitably specified range of mental contexts (see ch. 2 of Sperber and Wilson, 1986, for some related suggestions). To take a traditional-looking example, we would expect a person to have some disposition to infer [X is a mother of a parent] from [X is a grandmother], or to infer [every grandmother is F] from [every mother of a parent is F]. First, I will try to explain what I mean by "contexts", then by a "suitably specified range" of contexts.

I mean "mental context" as a catch-all term here, since we don’t know which sorts of contexts there are. If mental representations have other representations as syntactic parts, then a context for a formula is simply a formula that contains it.<11> What "substitution" of r2 for r1 in such a context comes to, then, is causing r2 to have the same part-to-whole relations that r1 had previously. The story is similar if, as in some versions of connectionism, mental representations are unstructured states, functionally connected in certain ways. In this case, a context for a representation is simply other representations to which it is connected. Substitution of r2 for r1, in this case, is causing r2 to have the same connections as r1.

The thorny problem is "suitably specifying" the range of contexts in which substitutability should be required for sameness of meaning. The most natural strategy for an atomist is to require that a person be disposed to make such substitutions in arbitrary contexts. This might work if people use what are often called "classical" definitions, ones that might be supposed to provide necessary and sufficient conditions for the applicability of a representation. Specific examples of classical definitions tend to be very controversial, but a short list can at least illustrate what atomists have traditionally hoped for. A system might be disposed to substitute [aunt] for [(sister of a parent) or (wife of a brother of a parent)], [bicycle] for [two-wheeled pedaling land vehicle], [cat] for [animal that is or was a kitten], or [gold] for [element with atomic number 79]. Such commitments may be maximally psychologically strong or deep, in some sense; if enough are, they form a (sharp or vague) nonarbitrary psychological kind that might be used to generate a constitutive/collateral distinction, and with it a theory of definitions. But given that we mean to allow for the possible truth of meaning holism, it is too stringent to require classical definitions, maximal psychological strength, or substitutability in arbitrary contexts. People are disposed to substitute each representation in many nonclassical ways.<12> If the holist is right, none of these are strongest in arbitrary contexts, and neither are classical substitution dispositions. For any substitution disposition, there are contexts in which it is overridden in favor of competing substitution dispositions, even under idealized reflection (see Lormand, 1996). For this reason, we should not require definitions to underwrite substitution in arbitrary contexts.

Instead, we should invoke the multiple meanings view, with an eye toward accepting that many or all of the substitution dispositions mentioned in the previous paragraph, both classical and nonclassical, yield definitions. On this view, [cat] simultaneously shares one meaning with [(ex)kitten], another with [purring animal], and others with [thing with enough of: chases mice, scratches furniture, ...], [thing similar to Cat1,...,Catn], [thing called "cat" by experts], [thing of a natural kind that best fits enough alleged-cats], etc. A proper theory of definitions should offer a principle by which something gets on this list, of course. Given the aim of shaping a notion of meaning that is serviceable for psychology, we can expect that something is on the list if it is of the same psychological kind as these examples, without legislating in advance what that kind is (presumably we’d need contrasting cases of things apparently not on the list of semantic equivalents for [cat], such as [animal] or [Eiffel Tower]). We should try to speculate about this kind, so long as the resulting suggestions are taken in this spirit.

Given MM we can be very generous about what gets on the lists of semantic equivalents, without worries about meaning holism. So my basic response to the threat of holism is to relax, and require of a definition only that it underwrite dispositions to substitute in arbitrary contexts unless overruled by stronger coalitions of substitution-dispositions. We need not favor classical over nonclassical substitution dispositions, or draw any other kind of constitutive/collateral distinction, to secure a theory of definitions, and meaning atomism.

Although MM allows us to admit generously of many definitions, without fear of holism, a few more constraints help us zero in on the most appropriate psychological kind. Consider, first, how to separate mere entailments from genuine definitions. Sometimes one substitutes [mother] and [grandmother] for each other—e.g., one infers [X is a mother] from [X is a grandmother]; or one infers [every grandmother is F] from [every mother is F]. Why don’t these substitutions create a semantic equivalence between [mother] and [grandmother]? Because they do not reflect bidirectional dispositions to substitute [mother] for [grandmother] and vice versa in the same contexts, with the same strength. For example, we aren’t (so) disposed to infer [X is a grandmother] from [X is a mother], or to infer [every mother is F] from [every grandmother is F]. Second, perhaps we can rule out some substitutes that wholly depend on other substitutes. For example, [very old mother] may substitute for [grandmother] only because [mother of a parent] does. (So were [mother of a parent] to cease substituting, [very old mother] would cease. But not vice versa.) By accepting this sort of dependence condition we might avoid extreme meaning holism, and treat [grandmother]’s meaning as independent of [very old]’s meaning.<13>

Philosophers of language are familiar with troublesome contexts that generate failures of substitutability, such as quotation contexts, intentional contexts, and modal contexts. While we might normally substitute "mother of a parent" for "grandmother", we don’t do so in the context "the word ‘grandmother’ has 11 letters". It is common to dismiss quotation contexts, since these contexts are said to contain not the word "grandmother" but a name of the word, specifically, "‘grandmother’". Perhaps on the same grounds we can set aside mental quotation contexts, if there are any, since they contain not [grandmother] but [[grandmother]].<14> Intentional contexts are a further problem, if there are mental intentional contexts without quotation (e.g., [S has a belief that grandmothers are kindly]). Such contexts are (notoriously) ambiguous: we can substitute [mothers of parents] for [grandmothers] on a de re construal, but we cannot clearly do so on a de dicto construal. On the multiple meanings view, this ambiguity of [belief that] should arise from its having purely de re and purely de dicto substitutions. However, the most natural way to formulate a purely de dicto substitute is to use mental quotation: [S has a belief with the same content as [grandmothers are kindly]], or perhaps Stich’s (1984) version [S has a belief similar to the one in me that governs "grandmothers are kindly"], etc. If so, then substitution problems in mental intentional contexts reduce to problems about quotation contexts. By contrast, most modal contexts among linguistic and mental representations seem irreducibly nonquotational: e.g., [necessarily, grandmothers are kindly]. To capture our target psychological natural kind, perhaps we should simply require definitions to underwrite (weak) dispositions to substitute in such modal contexts.

To mark all these psychological restrictions on the substitutability relation required for definitions, let me call "psubs" pairs of representations that meet whatever the best such requirements are for capturing the psychological natural kind I have been illustrating. Then we can say that if two representations are psubs, they are synonymous or semantically equivalent. Drawing a deep breath, I might express my best guess at what psubs are as follows: psubs are pairs of mental representations that a thinker is psychologically disposed to substitute for each other, to the same degree, in arbitrary nonquotational mental contexts, unless overruled by or wholly dependent on other psubs. If this really is a psychological natural kind, then perhaps we will eventually discover a simpler way to specify it—e.g., as pairs connected by a certain kind of psychologically real psubstitution rule. But for the time being, I want to insist that the sample psubs carry more weight than my attempt to specify them generally.

Thus far, I have tried to say what it is for two mental representations to be directly psubstitutable. This is not enough to save atomism, for the direct psubs of a representation may not suffice to specify its meaning in terms of atoms. In the rest of this section, then, I will try to characterize indirect psubstitution, which is, roughly, the relation obtained by suitably constrained chains of direct psubs. There are two main problems with characterizing such chains: providing an asymmetric direction for definitions, and avoiding circular chains of definitions.

Since the members of a pair of psubs stand in a symmetric relation (as should be expected of synonyms or semantic equivalents), either may be viewed as a definition of the other. Which "direction" of definition leads closer to the atoms? Syntactic simplicity yields an answer. We should use psubstitutability only to determine the meanings of syntactically simple representations; syntactically complex representations don’t have a (meaning-relevant) list of psubs. The meaning of syntactically complex (mental) representations is given by the meaning of their parts, so it is irrelevant whether [my best friend] psubs for [Gandhi’s grandma].<15> So the idea is that to specify the meaning of a syntactically simple representation, one finds its direct psubs. Then one finds (and substitutes) the direct psubs of the syntactically simple parts of these psubs, and so on until one reaches representations that consist wholly of semantically atomic parts.
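
The procedure just described can be rendered schematically as follows (a sketch only, with an invented psub table and an invented atom set; circular chains are treated in the next two paragraphs):

```python
# Follow chains of direct psubs: expand a syntactically simple representation
# by one of its psubs, then expand the simple parts of that psub, and so on
# until only atomic parts remain. The table and atom set are invented.

ATOMS = {"female", "of-a", "thing", "with-offspring"}
PSUBS = {
    "grandmother": ("mother", "of-a", "parent"),
    "mother":      ("female", "parent"),
    "parent":      ("thing", "with-offspring"),
}

def expand(rep):
    if isinstance(rep, tuple):
        return tuple(expand(part) for part in rep)
    if rep in ATOMS or rep not in PSUBS:
        return rep
    return expand(PSUBS[rep])           # assumes no circular chains (see below)

print(expand("grandmother"))
# (('female', ('thing', 'with-offspring')), 'of-a', ('thing', 'with-offspring'))
```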

If a given chain of psubs is eventually to reach the atoms, none of the psubs should be circular relative to the chain. In other words, a psub should not reintroduce a representation that has already been eliminated by an earlier psub in the chain. This noncircularity requirement raises a worry about "local holism". A group of representations is locally holistic if all chains of psubs starting from any of them lead through the others and back to the original, so that their meanings seem only to be specifiable in terms of one another. Ideas of beliefs and desires are typical examples: we normally take beliefs and desires to bear their (causal, explanatory, etc.) relations to behavior by working together. It may be, then, that we do not understand either of these notions in isolation from the other. If so, how can atomism be saved? Of course, the atomist can simply assert that in every locally holistic group of representations, some or all are atomic. But since atomism should seek to minimize the number of atoms, it should handle locally holistic representations such as [belief] and [desire] without assuming that they are atomic.

Suppose that one has the following psubs for [belief] and [desire], respectively: [thing that is B and explains behavior with a desire], and [thing that is D and explains behavior with a belief].<16> To make matters worse for atomism, let these be the only representations associated with [belief] and [desire]—otherwise, there might be some independent way to specify their meanings. Now, try to form a chain out of these two psubs. The psub for [belief] contains [desire], so we substitute the psub for [desire] into the psub for [belief], yielding an indirect psub for [belief], [thing that is B and explains behavior with a (thing that is D and explains behavior with a belief)]. If neither [belief] nor [desire] is atomic, how can we avoid this circle? When a chain of psubs is circular—when it reintroduces a representation that has already been eliminated—then we can "Ramsify" or existentially quantify over the new occurrence (see Lewis, 1983). This yields for [belief] the semantic equivalent [thing that is B and explains behavior with a thing that is D and explains behavior with something] or, eliminating the redundancy, [thing that is B and explains behavior with a thing that is D].<17> In this way atomism can handle local holisms.
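
In schematic form (again only a toy, with invented structures), the Ramsifying move blocks circles by replacing any reintroduced representation with an existentially quantified variable:

```python
# When expansion re-encounters a representation already eliminated earlier in
# the chain, quantify over it instead of expanding it again. This handles the
# locally holistic [belief]/[desire] pair from the text.

import itertools

PSUBS = {
    "belief": ("B", "explains-behavior-with", "desire"),
    "desire": ("D", "explains-behavior-with", "belief"),
}

fresh = (f"x{i}" for i in itertools.count(1))

def ramsey_expand(rep, eliminated=frozenset()):
    if isinstance(rep, tuple):
        return tuple(ramsey_expand(part, eliminated) for part in rep)
    if rep in eliminated:                       # circular re-entry
        return ("exists", next(fresh))
    if rep in PSUBS:
        return ramsey_expand(PSUBS[rep], eliminated | {rep})
    return rep

print(ramsey_expand("belief"))
# ('B', 'explains-behavior-with', ('D', 'explains-behavior-with', ('exists', 'x1')))
```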

Let me summarize the theory of definitions I am recommending. Two mental representations share a meaning iff one results from the other by (repeated) elimination of syntactically simple representations by their psubs, and (repeated) elimination of previously eliminated representations by variables. I suppose that a thinker has many psubs for his (syntactically simple) mental representations. To find out what a given representation means—that is, to determine one of its meanings—we "follow around" suitable chains of psubs until we reach those semantic equivalents for the mental representation that involve only atomic representations. (Then, having secured atomism, we pass the buck to some theory of meaning for atoms.) But which are the atomic representations? Where does this process come to an end?

4 Toward a theory of atoms

Empiricists have always thought of atomic representations as those having some special epistemological status. Philosophers who consider empirical observation and logic to be the fountains of all knowledge tend to choose their atoms accordingly. In this section I discuss difficulties for resulting versions of semantic reductionism (in Quine’s 1953 sense), and I describe how the meaning atomist can exploit constructs in cognitive science to provide an alternative criterion for atoms. In section 5, I use this theory to describe several specific semantic atoms.

The most extreme form of reductionism is phenomenalism (see, e.g., Carnap, 1967). Phenomenalists typically want atoms to figure in infallible or incorrigible beliefs. They recognize two such kinds of atoms. First are representations at the very sensory periphery, allegedly untainted by theoretical presuppositions. These atoms are the so-called "sense data" or, nearly enough, transduced proximal representations such as retinal ones. Along with sense data, phenomenalists choose a sprinkling of logical connectives as their atoms.<18> Their hope is to ensure the rigor of science by reducing all scientific claims to logical constructions out of sense data. However, this form of reductionism does not even come close to working. Such a basis is insufficient for generating the meanings even of observational representations of "distal" stimuli (e.g., [square surface]), much less the meanings of all theoretical representations (e.g., [electron]). We simply do not represent distal conditions as logical constructions of proximal conditions—the difference, after all, is marked by the difference in meaning between "distal" and "proximal"!

Faced with the failure of phenomenalism, verificationists and positivists relax their epistemological scruples, switching to a distal, but still observational, criterion for atoms. The result is a basis consisting roughly of Locke’s "simple Ideas". Locke takes his simple Ideas to be those generated by (inner or outer) observation (e.g., ideas of colors, shapes, distances, pains), plus some logical connectives. Although observational beliefs are not incorrigible, the motivation behind these views is still epistemological, since observational conditions are supposed to be knowable without inference. However, a distal, observational criterion seems purely arbitrary, since many contemporary scientists think perception itself involves rapid, unconscious inference to the best available explanation of proximal stimuli.<19> Why should distal, observational representations be atomic, when representations of proximal stimuli seem to be more atomic? Furthermore, Lockean proposals do not avoid reductionism, since an observational basis is not sufficient to characterize all meaning, especially the meanings of theoretical representations. Locke thought that the idea of causation met his criterion, which would help make the set of atoms less meager. However, as Hume suggests, causal facts may not be known purely observationally.

The most recent version of atomism is due to Miller and Johnson-Laird (1976). In recognition of the failure of phenomenal and observational reductionism, they embrace an enlarged set of atoms, including theoretical representations of causation, intention, time, and so on. However, by further relaxing the empiricist’s epistemological criteria for atoms, they present their set of atoms as a complete hodge-podge. They are left with no criterion for atoms, except perhaps whatever they need to make their definitions work out right. And since their definitions, predictably, don’t tend to work out right (see Fodor, 1981), their strategy would naturally lead to the conclusion that virtually every representation is atomic. In short, previous conceptions of atoms fail to provide a nonarbitrary way to enlarge the set of atoms just enough to avoid objectionable forms of reductionism. Without the multiple meanings view, definitions are likely to be few and far between, so a set of atoms large enough to serve as a basis will be as large as the set of words in a natural language, plus the set of mental representations that have no natural language counterparts. But with MM and the widespread availability of definitions, we can make do with fewer atoms.

What we would like is a way to enlarge the set of Locke’s atoms in the direction of Miller and Johnson-Laird’s atoms, that (a) is principled, (b) keeps the set comfortably small, and (c) avoids reaching down to phenomenal sense-data. As with the discussion of substitutability in section 3, there is some temptation to appeal to whatever psychological natural kind unifies [red], [linear], [cause], [and], etc. But unless we do fill in some details, a reasonable guess is that the only relevant natural kind is that of all mental representations, or at least all mental representations associated with syntactically simple linguistic items.

A distal, observational basis is less meager than a proximal, phenomenal one, and so is more attractive from the standpoint of meaning atomism. However, it raises the following puzzle: why should observational representations be considered atomic when they, like theoretical representations, are produced by inference? I think the key to the solution of this puzzle stems from recent efforts in cognitive science to draw a psychologically interesting distinction at roughly the place considered by Locke and the verificationists. Even if perceptual systems are inferential, it is striking that Locke’s atoms coincide more-or-less with representations at the "interface" between what Fodor (1983) calls the "central" system and what he calls the various "modular" systems (or "modules"). According to this picture of the mind, transducers (sense organs) dump representations of proximal conditions into the modules, which use them inferentially to dump representations of distal conditions into the central system. I want to suggest that this notion of a (computational) "system" can be used as part of a psychological criterion of atoms. First, I will try to explain more generally what distinguishes one system from another, and then consider the psychosemantic relevance of such distinctions.

For convenience, in previous sections of this paper, I have been using the phrase "representational system" to cover whole minds (and idiolects). However, I want to take seriously the idea that a mind is divided into several relevant representational systems. What separates one system in a mind from another? A system is a group of representations and processes that are "isolated" from other groups—roughly, this holds when the processes in one group are not alterable in rationally explicable ways by the representations in the other group. One system is then said to be "impenetrable" by the other (Pylyshyn, 1984). The early perceptual processes comprising the modules are prime examples, and so is the central system, if there is such a thing. (It is an open question whether what Fodor calls "the" central system itself factors into separable systems.) Of course, separate systems must communicate, by exchanging representations. This is accomplished by "direct" causal connections between processes in one system and representations in another. For example, there may be processes in the visual systems that produce (some) [red]-representations in the central system—such as the representation [this is red]—and there may be processes in the central system that produce (some) [red]-representations in the visual systems—such as the query [is this red?]. Even though processes in one system can, by producing representations, activate processes in another system, processes in one system have no control over the internal workings of processes in another.<20>

As I mentioned, the dividing line between perceptual modules and central systems coincides nicely with the observational representations postulated by traditional meaning atomists. Therefore, it is tempting to try to squeeze a criterion for atoms out of the distinction between central systems and the perceptual modules. However, we can’t plausibly stipulate that atoms are representations at the boundary between modules and central systems, without an account of why the boundary between modules and the central system should be relevant to atomicity. Also, we need a theory of atoms (and more generally, of meaning) that applies to all mental representations, not only the representations involved in central systems. I think we can motivate a more general claim if we suppose that atomicity is relative to a given system, that a representation can be atomic for one system even though it is complex for another system. For example, as might be expected on MM, [red] may have one, atomic, meaning within the central system, even though it has another, complex, meaning within a visual system. Even though [red] may be produced by complex inference within the perceptual modules, and may even have visual or near-retinal "psubs", [red] is plausibly atomic for the central system, since the central system has no "grasp" of the complex visual "psubs" of [red].

Accordingly, I suggest the following sufficient (but not necessary, just yet) condition for a mental representation r to be atomic for a particular system S1: r is atomic if r is at the interface of S1 and some other system S2. More precisely, r is atomic for S1 if r is directly connected (as either cause or effect) to a process in another system S2, one which is not penetrable by S1. (A causal connection is "direct", here, if it is impenetrable by S1.) In other words, r must be produced in S1 by activity in S2, or else S1 must use r to produce activity in S2. It’s okay for this activity in S2 to be inferential, or structured in whatever sense it may be, so long as the system in question, S1, does not have the ability to control the internal nature of this activity.

The distinction between perceptual modules and central systems yields, at best, the observational basis postulated by the verificationists. A more interesting source of atomic representations is what Pylyshyn (1984) calls the "functional architecture" of a system. Every computational system contains representations and processes, but some of these processes must be primitive—that is, such that their inner structure is not penetrable by the other representations and processes in the system. These primitive processes constitute the system’s functional architecture. Now, so far I have identified the atomic representations of a system S1 with those that are associated with processes impenetrable by S1 in other systems. It would be arbitrary, however, not to include as atomic those representations of S1 (if any) that are associated with impenetrable processes in S1 itself. I suggest as a sufficient and necessary condition, then, that a representation r is atomic for a system S iff r is directly connected to a process impenetrable by S, whether that process is part of S or not.<21> So representations associated with the functional architecture of a system are atomic for the system. We can distinguish this "internally" generated atomicity from the "externally" generated atomicity discussed above. As I will suggest in the next (and final) section, it is the fact that atomicity can be generated either internally or externally that unifies, as a psychologically natural kind, the logical connectives and sensory/perceptual representations of traditional meaning atomists. Furthermore, the internal source of atomicity promises a way for meaning atomism to avoid objectionable reductionism, without arbitrariness.
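
Put schematically (only as a gloss on the criterion just stated, with invented inputs), the condition is simply this:

```python
# r is atomic for system S iff r is directly connected, as cause or effect,
# to some process that S cannot penetrate, whether that process belongs to S
# (its functional architecture) or to another system. Data below is invented.

def atomic_for(r, S, connected_processes, penetrable_by):
    # connected_processes(r): processes directly connected to r
    # penetrable_by(p, S): can S alter p's internal workings in
    # rationally explicable ways?
    return any(not penetrable_by(p, S) for p in connected_processes(r))

connections = {"[red]": ["early-vision-color-process", "central-inference"]}
impenetrable_by_central = {"early-vision-color-process"}

print(atomic_for(
    "[red]", "central",
    connected_processes=lambda r: connections[r],
    penetrable_by=lambda p, S: p not in impenetrable_by_central,
))  # True: [red] comes out atomic for the central system
```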

5 Some candidate atoms for the central system

In order to illustrate the criteria for atoms, I will adopt Fodor’s hypothesis that there is such a thing as "the" central system. (As I mentioned in section 4, it is open whether or not central representations and processes separate into many functionally distinct systems, or are combined into a single, huge one.) This assumption makes it more difficult for atomists to account for all the meanings of central representations, since presumably (a) the larger the central system is, the more semantically nonatomic representations there are in it, and (b) the fewer central systems there are, the fewer semantic atoms arise from interfaces between systems. To a first and last approximation, the atomic representations of the central system are those associated with processes that are, in Pylyshyn’s (1984) terms, "cognitively" impenetrable. (Cognitive impenetrability is simply a special case of "system impenetrability", where the system is the central one.) Thus, a representation A is atomic for the central system iff it is associated with some cognitively impenetrable process, either an external one (in another system) or an internal one (a part of the functional architecture of the central system itself).

Observational and behavioral atoms

First, I will focus on externally generated atoms for the central system. If certain processes of sense perception are cognitively impenetrable, and if these processes activate representations in the central system, then these perceptual "outputs" are atomic for the central system. It is open precisely what these representations are, but chances are good that they are near to Lockean observational atoms such as representations of colors ([red]), shapes ([symmetric]), locations ([between]), smells, sounds, etc.<22> Perceptual tracking processes may provide "demonstrative" representations ([this], [that]) and identity comparisons between the referents of these representations ([same-as]). (A mental demonstrative might be [thing-#582], or an unstructured [thing-tracked-by-process-#17], or [red-surface-in-front-of-my-eyes], which we sometimes try to express by tracking an object and saying "this" or "that".) Modular processes that compare perceptual similarity may provide other atoms ([looks-like], [feels-like]). Similarly, the phonetic/syntactic module associated with public language input converts acoustic (or graphemic) patterns into representations of items in public language (e.g., the word "cat", the letter "c", or the phoneme /k/). These representations may be output into the central system, where they may achieve the status of atomic representations. The perceptual modules may treat these outputs as complex, in one way or another, but the central system may treat them as atomic.

The perceptual modules needn’t be the only external sources of atoms, however, since some representations in the central system may activate cognitively impenetrable processes in motor or glandular modules. In this way some representations of simple actions ([move], [touch]) may be atomic, as well as representations of properties specifically associated with glandular processes such as adrenaline flow ([tasty], [dangerous], [sexy]). Importantly, given multiple meanings, an atomist theory does not need to say that the (total) meaning of any of these behavioral or observational representations is exhaustively determined by their dedicated modular processes. Presumably, these representations have several other meanings derived from one’s central beliefs involving them, i.e., from their psubs in the central system.

Logical and modal atoms

Next, turn to internal sources of atoms, particularly those related to the sample psubs from section 3. First, processes sensitive to the logical properties of mental representations seem to be part of the functional architecture of the central system. We don’t control the internal structure of these processes; plausibly, for example, one’s central system is disposed to deduce [q] from [p and q], or from [p] and [if p then q], regardless of whether one "believes in" conjunction or modus ponens (Carroll, 1895). If any mental representations have something like logical form, then there are probably distinctive cognitively impenetrable processes associated with the central representations [and], [is_a], [thing], and other elements of predicate logic. Many AI theories are devoted to the way these representations guide processes of predicating, enumerating, deducing, and detecting contradictions. In addition to predicate logic, "counting" or "measuring" processes corresponding to the use of quantifiers ([all], [most], [some], etc.) may also be impenetrable. Distinctive impenetrable processes underlying our ability to imagine counterfactual situations, and to integrate this reasoning with "factual" reasoning, may provide atomic sentence operators like [possibly], [necessarily], or [actually]. This is what provides a unity between the logical connectives and the sensory/perceptual representations of the empiricists. The unifying feature is a kind of psychological, not epistemological, primitiveness.

Stereotype-forming atoms

To explain stereotypical categorization, cognitive psychologists have postulated two basic kinds of processes: one involving "feature lists", and one involving "exemplars" (see the examples in note <12>). A feature list for a category is a representation of properties taken to be possessed by typical members of the category. Categorization of a novel object proceeds not by checking whether it has all of the features in the list, but by checking whether it has "enough" of them, that is, whether it has a sufficient weighted sum of them. Now, which sum is treated as sufficient varies from category to category, and is cognitively penetrable. But the processes that determine what it is for a sum to be compared with a threshold, and what basic steps depend on the result of the comparison, are cognitively impenetrable. Perhaps these processes are associated with a mental representation [x has more than y of z] which stands for a relation among an object x, a threshold value y, and a feature list z. In the same way, the atom [x is y-similar to z] may be associated with a process of comparing an object x to some exemplars z along dimension y. One can cognitively penetrate which objects, exemplars, and dimensions to use for a given category, but one probably cannot penetrate how the comparison takes place given these inputs.
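
The two processes can be pictured as follows (a standard toy rendering, with invented features, weights, and thresholds; nothing turns on the details):

```python
# [x has more than y of z]: a thresholded weighted sum over a feature list z.
# [x is y-similar to z]: a comparison of x to exemplars z along dimension y.
# Which features, weights, exemplars, and thresholds to use is cognitively
# penetrable; how the sum and comparison work, on this picture, is not.

def has_enough(x_features, threshold, weighted_feature_list):
    score = sum(w for feature, w in weighted_feature_list.items()
                if feature in x_features)
    return score > threshold

def similar_enough(x, exemplars, dimension, tolerance):
    return any(abs(dimension(x) - dimension(e)) <= tolerance for e in exemplars)

CAT_FEATURES = {"purrs": 0.4, "chases mice": 0.3, "scratches furniture": 0.3}

print(has_enough({"purrs", "chases mice"}, 0.5, CAT_FEATURES))   # True
print(has_enough({"scratches furniture"}, 0.5, CAT_FEATURES))    # False
```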

Metalinguistic atoms

When the central system receives representations of public-language words and sentences from the phonetic/syntactic module, what does it do with them? It can simply store these representations in memory, enabling us to remember the words. But another thing it does is to associate these representations of words ["cat"] with corresponding conceptual representations [cat], enabling us to understand the words, or at least to use them competently. Although which representations are associated is cognitively penetrable, the process by which they are associated is not. Perhaps this process provides us with the atomic representation [represents] (or [is called], [refers to], etc.), as in ["cat" represents cats], or [cats are things actually represented by "cat"] (cf. note <12>). In addition, we can accept what someone says as [true] independently of understanding it or associating it with a conceptual representation. Again, which linguistic statements are accepted is penetrable, but the processes in the functional architecture by which they are accepted are not.

Explanatory atoms

The final examples I will describe concern processes of abductive inference, or inference to the best available explanation. We do not yet have a very thorough understanding of the psychological processes of abductive inference, or of nondemonstrative inference construed more generally. Nevertheless, available simplified proposals are plausible enough to excuse speculation based upon their broad features. In particular, I will rely on details of the abductive component of Thagard’s (1988) system PI (for "processes of induction"). PI is a rule-based inference system, like most others in AI. Although these rules are sometimes written in the form [if p then q], they are processed not as mere material implications but as [p might-explain q] would be. Given that q needs to be explained, what PI does is to search for rules with [q] as consequent; in other words, PI searches for potential explanations of q. This is not epistemologically fancy, since a "potential explanation" is treated as any (possibly horrible, yet psychologically salient) competitor for the title of "best" explanation. Nevertheless, [might-explain] is atomic for PI, since there is no cognitive penetration of the associated search and retrieval processes.

The interesting relevant features of PI are its processes for comparing the virtues of potential explanations. Simplifying a little, PI contains two such processes, associated with the representations [more-consilient] and [simpler]. The [more-consilient] process takes as input two competing potential explanations, and compares them to see what other things they potentially explain. As a fixed—that is, cognitively impenetrable—function of these sorts of considerations, the system can come to accept, for example, [p1 is a more-consilient explanation than p2 for q]. The [simpler] process compares competing explanations to see how many "cohypotheses" they require. As a cognitively impenetrable function of this, the system can come to accept, for example, [p1 is a simpler explanation than p2 for q]. Also, as a fixed function of comparisons of consilience and simplicity, the system can come to accept [p1 is a better explanation than p2 for q]. (Once again, the system can penetrate which [p]’s and [q]’s go into these processes, but cannot penetrate how the processes work, given the [p]’s and [q]’s.) Even if PI forms only a part of the processes underlying explanatory inference in humans, these representations might have at least one atomic meaning, by being distinctively connected to a cognitively impenetrable process.
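
To make the shape of these comparisons vivid, here is a heavily simplified sketch loosely modeled on the description of PI above (the rules, scoring, and tie-breaking are my own inventions, not Thagard's actual system):

```python
# Rules are read as [p might-explain q]; candidates for explaining q are
# retrieved by matching consequents; consilience counts what else a candidate
# antecedent explains; simplicity counts (negatively) its cohypotheses; and
# "better explanation" is a fixed function of the two comparisons.

RULES = [
    # (antecedent, cohypotheses, consequent)
    ("raining",         [],                  "streets wet"),
    ("raining",         [],                  "umbrellas out"),
    ("street cleaning", ["cleaners active"], "streets wet"),
]

def might_explain(q):
    return [r for r in RULES if r[2] == q]      # impenetrable search/retrieval

def consilience(p):
    return sum(1 for a, _, _ in RULES if a == p)

def simplicity(p):
    return -max((len(co) for a, co, _ in RULES if a == p), default=0)

def better_explanation(q):
    candidates = might_explain(q)
    return max(candidates, key=lambda r: (consilience(r[0]), simplicity(r[0])))

print(better_explanation("streets wet"))
# ('raining', [], 'streets wet')
```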

Perhaps with enough ingenuity cognitive scientists will find evidence of representations connected to impenetrable processes associated with temporal memory (e.g., [time], [later], or [simultaneous]), causal reasoning (e.g., [causes]), planning or understanding action (e.g., [intends], and perhaps a [self]-idea), desire formation (e.g., [good]), etc. It is these sorts of representations, together with the internally generated ones I have mentioned, that allow atomism to generate the meanings of theoretical representations, without implausibly reducing these meanings to those of purely observational representations. The impenetrable central processes do not provide anything like verification procedures or even evidential procedures (in all cases) for the applicability of their associated representations. Thus one need not be a verificationist or a reductionist to adopt a principled atomism, properly construed.

 

NOTES

<1> A system is a group of representations poised to work together as (parts of) reasons, e.g., as (co)premises or (co)conclusions of inference. Natural languages and individual minds (at a time) may serve as familiar examples, although minds perhaps divide into smaller systems whose representations are inferentially shielded from one another (see section 4). By a "representation" I mean any entity—object, state, event, process, act, etc.—with semantic content. Words and sentences are representations, as are utterances of words and sentences. Ideas and propositional attitudes are representations, regardless of whether they involve representational objects in addition to representational states.

<2> In this usage, a definition of a representation r is a strictly synonymous or semantically equivalent ensemble of representations for r. Sometimes "definition" is used more loosely to cover, say, a semanticist’s theoretical description that refers to (rather than has) r’s meaning. This kind of description will not in general help atomism, since the representations in the description need not be in r’s system at all. Similarly, I don’t think atomism can rest with a relation weaker than definition, such as "reference fixing" (Kripke, 1972, pp. 55ff.). On some views, for example, "the predominant liquid that we drink from faucets, sail on in lakes, and bathe in" fixes the reference of but does not define "water". Although proper reference-fixing descriptions for r will in general be in r’s system, they need not move from r toward the atoms—there is little reason to consider "faucet", "lake", or "bathe" more atomic than "water". Also, reference fixers do not serve atomism because they do not provide complete specifications of meaning.

<3> Since I shall often need to refer to specific (alleged) mental representations in giving examples, it will help to establish some descriptive conventions. I shall refer to mental representations using formulae enclosed within square-bracket "mental quotation marks", although I do not assume that mental representations must have all of the typical properties of formulae—internal syntactic structure, ability to be written and copied, and so on. (Thus my terms "representations" and "ideas" are catch-alls for items in a language of thought, in a connectionist network, in a system of mental imagery, and so on.) For convenience, I shall normally use formulae of English, as in [snow is white]. When I use a linguistic representation in brackets, I mean to specify a mental representation that distinctively "governs" at least some of one’s uses of the linguistic representation. In expressing a thought involving the representation [snow is white], one is likely to utter the words "snow is white". On occasion, I mean only that the mental representation is in some specified way similar to those which people may use to govern their linguistic representations.

<4> There are corresponding theoretical interests in the meaning of nonmental representations, especially "idiolect" meaning or the alleged meaning one’s words have "to one" independently of anyone else’s use or understanding of these words. But concern with mental representations is more likely to yield interesting generalizations, since the behavior of mental representations is plausibly more regular than that of linguistic representations. Perceptions and thoughts about redness are much more likely to bear lawful or other systematic relations to red things (and to other mental states) than utterances about redness are to bear such relations to red things (and to other utterances).

Other reasons for focusing first on mental representations are internal to meaning atomism. Even traditional atomists who were centrally interested in linguistic meaning found themselves looking to mental representations for their stock of atoms. This is no accident, since few if any linguistic representations meet the traditional epistemological criteria for atoms. Sense data, generally speaking, do not have corresponding public language terms; for example, we do not normally use words referring to particular retinal conditions. And while we do have words for observational conditions such as colors, we do not have words or phrases to express every color that we can distinguish in perception.

It is common to reserve the word "content" for mental meaning, and the word "meaning" for linguistic content, but I will speak unreservedly. Despite the focus on mental rather than linguistic meaning, psychosemantics does not necessarily "change the subject" of meaning or adopt a new sort of meaning, such as narrow content. It is perfectly possible for psychosemantic interests to converge with other interests in meaning. However, I will neither assume nor deny this at the outset. I regret that space does not allow a proper treatment of the many options for extending the present discussion to public language.

<5> Industry watchers can begin with Fodor (1990), Loewer and Rey (1991), and Stich and Warfield (1994).

<6> Atomism might also simplify the theory of concept acquisition. An atomist can hold that new ideas are typically acquired by defining them in terms of old ones, and that only a small set of atomic ideas, at most, are innately available or otherwise unlearned. An opponent of atomism (e.g., Fodor, 1981) may be compelled to hold, by contrast, that almost all ideas are innately available to be "triggered" into use by experience. One difficulty for this view is explaining what the pretriggered ideas consist in, so that one can account for a difference between (say) the allegedly indefinable pretriggered idea of carburetors and that of mufflers. This would also have to be done in such a way as to make it plausible that the pretriggered ideas can be effects (at least, side effects) of familiar evolutionary forces. Nonatomists who want to avoid this extreme nativism must cast about for a nondefinitional theory of concept learning, that is, one that allows for a meaning "gap" between new concepts and constructs out of old ones. The difficulty here is to explain how this gap is bridged by learning, or else to come up with a relevant mechanism of conceptual change besides triggering and learning. Even if these difficulties are not fatal (and I am not suggesting they are), we neatly avoid them if meaning atomism is true.

<7> Any inferential role theory of meaning needs a noncircular—and so, roughly, nonsemantic—account of which causal relations among representations count as "inferential" for purposes of the theory. Is the "associative" causal connection between [salt] and [pepper] inferential? How about the causal relation between representations early in the visual system and perceptual beliefs? I hope my discussion is abstract enough to be consistent with any natural answers to such questions.

<8> In addition to the illustrations immediately following, I will describe several more such tests in section 3. There I will try to characterize the right notion of a unit not in terms of tests but in terms of certain inferential dispositions to substitute representations for one another.

<9> As I mentioned in section 1, I remain neutral about how (the suitably specified) atomic representations get their meanings, and so neutral with respect to which meanings they provide for their associated nonatomic representations. But note that it is entirely possible for the various atomic definitions to differ in reference. These tests or definitions can "come apart"—that is, can determine different properties or classes of things. In this case Larry’s [bird] has multiple referents (or multiple conditions for reference) as well as multiple definitions. For further elaboration and defense of the multiple meanings view, see Lormand (1996).

<10> Nevertheless, we would also be able to secure a large degree of meaning "stability", or constancy of meaning despite differences in inferential relations. Two representations can have different total sets of units but still share meaning, if the sets "overlap", or share at least one unit. Even after a change in one unit, a representation still has many of the same meanings it did before.

<11> Semantically "complete" formulae such as mental sentences can also be in contexts, such as arguments.

<12> Here are three other important kinds of substitution. First, there is stereotyping (Smith and Medin, 1981): substituting [aunt] for [bonneted kisser of an uncle], [bicycle] for [thing in a box marked "Schwinn"], [cat] for [purring mouse chaser], [gold] for [precious yellow metal], [game] for [thing with enough of: fun, competition, ...], or [chair] for [thing similar to Chair1,...,Chairn]. There is also metalinguistic deference to experts or to looser groups of language users (Putnam, 1975): substituting [cat] for [thing called "cat" by mommy], [... by zoologists], [... by most English speakers], [... by enough of the people I talk to], etc. Finally, there is appeal to (unknown) hidden properties of exemplars: substituting [cat] for [thing of a natural kind that best fits ...], [thing made of the same stuff as ...], or [thing that shares enough (deep, important) explanations (laws, causes, effects) with ...], where each "..." is filled by [enough alleged-cats].
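
As a toy illustration only (the record type and names below are my inventions, not part of the theory), these three kinds of substitution could be recorded in a common format, each pairing the complex formula to be replaced with the simpler representation that replaces it:

    # Illustrative sketch: mental formulae are stood in for by strings, and each
    # rule records which kind of substitution licenses replacing one by another.
    from collections import namedtuple

    Substitution = namedtuple("Substitution", ["kind", "replaced", "replacement"])

    RULES = [
        Substitution("stereotype", "bonneted kisser of an uncle", "aunt"),
        Substitution("stereotype", "purring mouse chaser", "cat"),
        Substitution("deference", 'thing called "cat" by zoologists', "cat"),
        Substitution("hidden-property",
                     "thing made of the same stuff as enough alleged-cats", "cat"),
    ]

    def substitutions_into(target):
        """All rules licensing the substitution of 'target' for something else."""
        return [r for r in RULES if r.replacement == target]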

<13> However, I am not inclined to speculate that there is much complete, nonmutual dependence. To the extent that there is, I am more than willing not to endorse extreme holism—that would only help the cause of atomism. My chief aim here is to secure meaning atomism, and to show that it is compatible with whatever degree of meaning holism we find ourselves forced into accepting.

<14> However, since we are trying to provide (parts of) a theory of meaning, this would require a nonsemantic way of specifying, at least roughly, which contexts are quotational. This is difficult since, at least in English, marks that "look" like quotation marks are sometimes used for other purposes. I imagine that an IRS should try to find a functional criterion for mental quotation, but I don’t have one to suggest.

<15> Perhaps an IRS should mark exceptions—e.g., syntactically complex mental idioms such as [kick the bucket]—by the absence of certain inferential relations to their parts—e.g., [kick] and [bucket].

<16> Let [B] and [D] be representations of features one takes to distinguish beliefs from desires. For example, [B] might be [sensitive to sensory evidence] while [D] might be [sensitive to needs]. Or [B] might be [called "a belief"] while [D] is [called "a desire"]. There are other possibilities. (Also, here I use [explains] to stand for a variety of potentially relevant relations—causation, rationalization, etc.)

<17> Similarly, in defining [desire] we would reach [thing that is D and explains behavior with a thing that is B and explains behavior with a desire], and so, simplifying, [thing that is D and explains behavior with a thing that is B].

<18> Sense-data beliefs and logical beliefs may not be literally infallible or incorrigible, but a phenomenalist might appeal instead to whatever distinctive epistemologically interesting properties these beliefs have.

<19> See, e.g., Marr (1982) and Rock (1983). The range of explanations available in perception may be severely limited, or fixed innately, but this does not undermine the claim that there are inferences.

<20> There are several difficult questions about what does and does not count as impenetrability. For example, consider processes of digesting food, reaching orgasm, falling asleep, or changing moods. These seem to be impenetrable by the central system, since what one thinks does not seem to control these processes. To be sure, one can, as a rationally explicable result of thinking, put oneself under conditions in which it is likely that one will or will not digest food, reach orgasm, fall asleep, or change moods. The key point seems to be that although thinking can help to activate these processes, there are aspects of their internal workings that thinking cannot change. At second glance, however, there are problems with this formulation. Any given microprocess within digestion—such as a cell’s converting sugar to stored energy—can be controlled as a rationally explicable result of thinking. All one needs to do, for example, is deliberately take surgeon’s tools in hand and tamper with the cell. But if there are no impenetrable processes, then, on the theory of atoms I will suggest, there would be no atoms. So we need some constraints on how thinking is "allowed" to change a process. We cannot plausibly stipulate that using hands and knives is "no fair". For all we know, minds may use tiny hands and knives to implement normal mental transactions. My shaky inclination is to require that a process is impenetrable by a system if the system cannot change the process using only devices (and their parts) whose function is to implement the system’s normal transactions, and using them according to that function. I have no account of functions to offer, though I recommend Millikan’s work (1984).

<21> This claim needs sharpening in half-niggling, half-interesting ways. Every representation is connected (say) to impenetrable gravitational processes, or to general, impenetrable storage processes. But we lose meaning atomism if we suppose that every representation is atomic. To exclude these cases, we might require that an atom is distinctively connected to an impenetrable process, or (more strongly, but more obscurely) that it functions as a "tag" or "subroutine label" for the process, or (even more strongly and obscurely) that the process is some nonrepresentational, one-way analog of a psub for the representation.

<22> I don’t suppose (or deny) that the atoms (here and below) themselves govern the use of the public-linguistic items I use to specify them. The English terms are rough, semantically suggestive placeholders for whatever representations are associated with the processes I mention.

REFERENCES

Block, N. 1986: "Advertisement for a Semantics for Psychology", in Midwest Studies in Philosophy, volume 10.

Carnap, R. 1967: The Logical Structure of the World. London: Macmillan.

Carroll, L. 1895: "What the Tortoise Said to Achilles", in Mind, volume 4.

Fodor, J. 1981: "The Present Status of the Innateness Controversy", in Representations. Cambridge: MIT Press.

——— 1983: The Modularity of Mind. Cambridge: MIT Press.

——— 1990: A Theory of Content. Cambridge: MIT Press.

Kripke, S. 1972: Naming and Necessity. Cambridge: Harvard University Press.

Lewis, D. 1983: "How to Define Theoretical Terms", in Philosophical Papers vol. 1. Oxford: Oxford University Press.

Loewer, B. and G. Rey, eds. 1991: Meaning in Mind. Oxford: Blackwell.

Lormand, E. 1996: "How to Be a Meaning Holist", in The Journal of Philosophy, volume 93.

Marr, D. 1982: Vision. San Francisco: W.H. Freeman.

Miller, G. and P. Johnson-Laird 1976: Language and Perception. Cambridge: Harvard University Press.

Millikan, R. 1984: Language, Thought and Other Biological Categories. Cambridge: MIT Press.

Putnam, H. 1975: "The Meaning of ‘Meaning’", in Mind, Language, and Reality. Cambridge: Cambridge University Press.

Pylyshyn, Z. 1984: Computation and Cognition. Cambridge: MIT Press.

Quine, W. 1953: "Two Dogmas of Empiricism", in From a Logical Point of View. Cambridge: Harvard University Press.

Rock, I. 1983: The Logic of Perception. Cambridge: MIT Press.

Smith, E. and D. Medin 1981: Categories and Concepts. Cambridge: Harvard University Press.

Sperber, D. and D. Wilson 1986: Relevance. Cambridge: Harvard University Press.

Stich, S. 1984: From Folk Psychology to Cognitive Science. Cambridge: MIT Press.

——— and T. Warfield, eds. 1994: Mental Representation. Oxford: Blackwell.

Thagard, P. 1988: Computational Philosophy of Science. Cambridge: MIT Press.