LINGUIST List 15.3342
Tue Nov 30 2004
Disc: Final Posting: Disc: Deep Structure/Initial PP
Editor for this issue: Naomi Fox <fox@linguistlist.org>
1. Thomas Hoffmann, Re: 15.3231, Disc: Deep Structure/Initial PP
2. Ahmad Reza Lotfi, RE: 15.3318, Disc: Deep Structure/Initial PP
Message 1: Re: 15.3231, Disc: Deep Structure/Initial PP
From: Thomas Hoffmann <thomas.hoffmann@sprachlit.uni-regensburg.de>
Subject: Re: 15.3231, Disc: Deep Structure/Initial PP
Philip Carr's note actually raises a few interesting questions: if one
adopts a computational theory of mind, how can one avoid postulating mental
processes? I fully agree with that. If, as assumed, e.g., in the Minimalist
Program (Chomsky 1995), syntax is considered an optimal solution to
interface conditions (i.e. constraints imposed by the Conceptual-Intentional
and the Articulatory-Perceptual systems), then all postulated syntactic
steps are interpreted as mental processes/operations.
Now, I'm a theoretical linguist and would gladly be corrected by psycho-
and neurolinguists, but the way I see it, we nevertheless have the same
problems as in the 1960s/1970s:
1. How can we measure the number/complexity of mental processes?
Philip Carr mentions the ill-fated Derivational Theory of Complexity
(DTC), under which postulated syntactic derivations were taken to be
analyses of on-line mental operations. As Fodor et al. (1974) showed,
experimental data seemed to undermine the DTC.
One problem, e.g., was that sentences apparently involving more
transformations sometimes turned out to be easier to process: in the
Standard Theory, adjectives in pre-modifier function, as in 'the small
cat', were sometimes supposed to be derived from underlying relative
clauses, i.e. 'the cat which is small' (cf. Fodor et al. 1974: 327).
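(To make the DTC's bookkeeping concrete, here is a toy Python sketch of
the step-counting it presupposes; the two operation inventories below are
my own illustrative assumptions about the competing analyses of 'the small
cat', not anything taken from Fodor et al.:)

# Toy sketch: under a naive DTC, processing cost = number of
# derivational steps. Both derivations are illustrative assumptions.

STANDARD_THEORY = [
    "generate underlying relative clause: 'the cat [the cat is small]'",
    "relative clause reduction (whiz-deletion): 'the cat small'",
    "adjective preposing: 'the small cat'",
]

MINIMALIST = [
    "merge Adj directly in pre-modifier position: 'the small cat'",
]

def dtc_cost(derivation):
    """Naive DTC: each postulated transformation is one mental operation."""
    return len(derivation)

print("Standard Theory:", dtc_cost(STANDARD_THEORY), "steps")  # 3
print("Minimalist:     ", dtc_cost(MINIMALIST), "step")        # 1

(On the Standard Theory analysis, the DTC thus predicts 'the small cat' to
be harder to process than 'the cat which is small', which is exactly what
the experiments failed to confirm.)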
Note, first of all, that a lot of the sentences considered more complex in
the Standard Theory would receive a much simpler Minimalist analysis
(direct merger of Adj in pre-modifier position, no underlying relative
clause). So I think looking at the results of the 1960s/1970s experiments
and reinterpreting them from a Minimalist perspective might actually yield
a few interesting results. [For the Minimalist junkies out there: this
might not be as easy as I first thought. Chomsky (2000), e.g., claims that
long-distance AGREE is simpler than MOVE, which is actually
COPY+MERGE+AGREE. If we just look at syntax, this might suggest that AGREE
should be less complex than MOVE. Yet I wonder whether long-distance AGREE
isn't more complex for LF, since it involves another PROBE-GOAL search,
whereas identifying copies of moved elements might be easier.]
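(Purely as a toy illustration of the counting problem: if each primitive
is assigned a unit cost, the decomposition just mentioned can be tallied
as below. The unit costs, and the LF caveat in the final comment, are my
assumptions; nothing here settles the question.)

# Toy sketch: counting narrow-syntax primitives under an (assumed)
# unit-cost model. MOVE = COPY + MERGE + AGREE, per Chomsky (2000);
# AGREE itself presupposes a PROBE-GOAL search.

UNIT_COST = 1

AGREE_ONLY = ["PROBE-GOAL search", "AGREE"]
MOVE = ["PROBE-GOAL search", "AGREE", "COPY", "MERGE"]

def narrow_syntax_cost(ops):
    return len(ops) * UNIT_COST

print("long-distance AGREE:", narrow_syntax_cost(AGREE_ONLY))  # 2
print("MOVE:               ", narrow_syntax_cost(MOVE))        # 4

# Caveat (my assumption): this counts narrow syntax only. At LF,
# re-running a search for the AGREE-d goal might cost more than
# identifying the copies left behind by MOVE, reversing the ranking.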
However, another point is that the human parser might use non-grammar
information in addition to grammar, i.e. some kind of heuristic principle
(cf. Fodor et al. 1974; and as far as I know there are still enough
psycholinguists out there who subscribe to this hypothesis, but feel free
to correct me if I'm wrong). So is there a parser that doesn't just use
grammatical info, and if so, how can we disambiguate the influence of
grammar from that of such heuristics?
2. What predictions do other theories make about grammatical complexity
and mental workload?
Take, e.g., one of the latest grammatical theories:
Construction Grammar (cf. Kay and Fillmore 1999; Goldberg 2003). In
Kay and Fillmore's (1999; also Goldberg's 2003) version of Construction
Grammar, a sentence can also be the combination/parallel activation of a
number of constructions.
So Goldberg, e.g., considers 'What did Liza buy the child' to consist of
six types of constructions: 1) the six lexical items, 2) the ditransitive
construction, 3) the question construction, 4) the Subj-Aux inversion
construction, 5) the VP construction, and 6) the NP construction, used
three times (2003).
Has anyone ever thought about testing whether an increased number of
constructions leads to a greater computational workload?
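(For concreteness, here is a naive way such an experiment could quantify
its predictor variable, again as a toy Python sketch; only the
construction counts come from Goldberg's analysis as summarized above,
while the 'workload' metric itself is a hypothetical assumption of mine.)

# Toy sketch: total number of parallel construction activations for
# 'What did Liza buy the child', following Goldberg's (2003) counts.

CONSTRUCTIONS = {
    "lexical items": 6,        # what, did, Liza, buy, the, child
    "ditransitive": 1,
    "question": 1,
    "Subj-Aux inversion": 1,
    "VP": 1,
    "NP": 3,                   # the NP construction, used three times
}

def workload(counts):
    """Hypothetical metric: one unit per construction activation."""
    return sum(counts.values())

print("activations:", workload(CONSTRUCTIONS))  # 13

One could then test whether this count correlates with reaction times or
with neuronal activation across sentences.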
3. And what about neuronal activity? As far as I can see, a lot of the
1960s/1970s studies showed that allegedly higher complexity didn't result
in prolonged comprehension time. But what about increased neuronal activity?
Has anyone so far carried out an fMRI (functional magnetic resonance
imaging) study on increased neuronal network activation as an effect of
syntactic complexity? And if so, do these studies also contrast production
and processing of sentences?
Chomsky, N. 1995. The Minimalist Program. Cambridge,
Mass.: MIT Press.
Chomsky, N. 2000. Minimalist inquiries: the framework.
In: Roger Martin, David Michaels and Juan Uriagereka,
eds., Step by Step: Essays on Minimalist Syntax in
Honor of Howard Lasnik. Cambridge, Mass.: MIT Press.
89-155.
Fodor, J. A., T. G. Bever and M. F. Garrett. 1974. The
Psychology of Language: An Introduction to
Psycholinguistics and Generative Grammar. New York:
McGraw-Hill. [cf. esp. 318ff.]
Goldberg, A. E. 2003. Constructions: a new theoretical
approach to language. TRENDS in Cognitive Sciences
7,5: 219-224.
Jackendoff, R. 2002. Foundations of Language: Brain,
Meaning, Grammar, Evolution. Oxford: Oxford University Press.
Kay, P. and C. J. Fillmore. 1999. Grammatical
constructions and linguistic generalizations: The
What's X doing Y? construction. Language 75,1: 1-33.
Department of English and American Studies
University of Regensburg
Message 2: RE: 15.3318, Disc: Deep Structure/Initial PP
From: Ahmad Reza Lotfi <ahmadreza_lotfi@hotmail.com>
Subject: RE: 15.3318, Disc: Deep Structure/Initial PP
Philip Carr <...@univ-montp3.fr> wrote:
>Ahmad Lotfi's points (Linguist 15.3303) are interesting. We see here an
>attempt to sustain a process-based interpretation of 'psychological
>realism', but defined in terms of parallelism, rather than sequential
>processing. But it's still a process-based approach, and I find it hard
>to see how that can fit with the idea that one is attempting to
>characterise (Chomskyan) mental *states* ('knowledge'), as opposed to
>*processes* ('use'). One might think that declarative frameworks would be
>better suited to characterising static mental states, rather than mental
>activities, but even with declarative approaches, one sees appeal
>(implicit or otherwise) to the idea of processes (such as
>structure-building).
Although it may sound too radical (if not absurd!) to some readers, I find
the division of the world into states and processes rather artificial
(though perhaps still legitimate given man's limitations in understanding
what's going on around/within him): as the world is in permanent motion and
change, it's only the human mind that takes one snapshot out of a process
and terms it a state; a single slide taken away from the film in progress
on the screen for scrutiny. What Chomsky does in characterising mental
states (knowledge of language) is to make this real-time mental process
stand still momentarily in order to see what's going on there. It's a
forced move on the scientist's part to make sense of the reality, but not
the reality itself.
While Chomsky has never claimed his theories to be theories of mental
processing/performance but of mental states/competence, the very
terminology he has employed ever since ST, through GB, and finally in the
MP strongly suggests he is well aware of the potential of his competence
model to pave the way for a performance model embracing mental processing.
Given the existing gap between competence and performance (i.e. between
states and processes), which is due to (a) the complexity of the world,
and (b) our present limitation of seeing the process only frame by frame
rather than in its flow, we might decide to approach Chomskyan ideas
cautiously once in the realm of psychological processing. This does not
mean, however, that the potential of generative models should be left
unexplored.
Ahmad R. Lotfi
Assistant Professor of linguistics,
Chair of English dept.
Azad University at Khorasgan (IRAN)