On Allosemy
It seems like I am always complaining about the status of
semantics in the theory of grammar. I complain when it's ignored, and then I
complain when it's done in a way I don't like; I complain and complain. Today is not going to be any different.
At the ROOTS IV conference, we had a number of lexical
semantics talks, which clearly engaged with meaning and generalizations about
root meaning. Then we had the morphology talks. But I'm not convinced those two groups of
people were actually talking to each other.
Now, the thing about Distributed Morphology is that it doesn’t believe
in a generative lexicon, so all of the meaning generalizations that are in the
lexicon for the lexical semanticists have to be recouped (if at all) in the functional structure, for DM and its
fellow travellers, me included. This is not a deep problem if we are focusing
on the job of figuring out what the meaning
generalizations actually are in the first place, which seems independent of
arguing about the architecture. But there is also a danger that the
generalizations that the lexical semanticists are concerned about are perceived
as orthogonal to the system of sentence construction that morphosyntacticians are looking at. Within DM, the separation of the system into
ROOT and functional structure already creates a sharp division whereby meaty
conceptual content and grammatically relevant meanings are separated
derivationally. This in turn can lead to
a tendency to ignore lexical conceptual semantics if you are interested in
functional morphemes, and to suspect that the generalizations of the lexical
semanticists are simply not relevant to your life (i.e. that they are not part
of the 'generative system'). To the
extent that there are generalizations and patterns that need to be accounted
for, we need to look to the system of functional heads proposed to sit above
the verbal root in the little vP. But
more challengingly, we need to relate them via selectional frames to the sorts
of ROOTS they combine with in a non-ad-hoc manner. If, in addition, we require a constrained
theory of polysemy, the problem becomes even more complex. I think we are nowhere close to being able to
solve these problems. Perhaps because of
this, I think that standard morphological and syntactic theories do not yet
engage properly with the patterns in verb meaning, by which I mean both
constraints on possible meanings, and the existence of constrained
polysemies. I contend that the architecture
that strictly separates the conceptual content of the root from the functional
structure in a derivational system must resort to crude templatic descriptive
stipulations to handle selection.
This architecture also obscures the generalizations surrounding
polysemy.
One of the interesting talks at the conference, and one
of the few that attempted to integrate worries about meaning into a system
with DM-like assumptions, was the contribution by Neil Myler. Neil was
interested in tackling the fact that the verb have in English is found in a wide variety of
constructions, and in giving a unified explanation of that
basic phenomenon. In that respect, I
thought Neil's contribution was excellent, and I agreed with the motivation,
but I found myself uncomfortable with
some of the particular tools he used to put his story for have together. The issue in question involves the deployment
of Allosemy.
Let me first complain about the word Allosemy. It's
pronounced aLOSSemi, right? That's how
we are supposed to pronounce it. Of course, doing so basically destroys all
recognition of the morphemes that go into making it, and renders the word
itself semantically opaque even though it is perfectly compositional.
I hate it when stress shift does that.
Curiously, the problem with the pronunciation is similar to
the problem I have with its existence in
the theory, namely that it actually obscures the semantics of what is going on,
if we are not careful with it.
Let's have a look at how Allosemy is deployed in a series of recent works by Jim Wood, Alec
Marantz and Neil Myler (we could maybe call them The NYU Constructivists for short). I am supposed to be a fellow
traveller with this work, but then why do I feel like I want to reject most of
what they are saying? Consider the
recent paper by Jim Wood and Alec Marantz, which you can read here.
So to summarize briefly, the idea seems to be that instead
of endowing functional heads with a semantics that has to remain constant
across all of its instantiations, we give a
particular functional head, like little v, N possible meanings, and then say
that it is allosemic. In other words, it is N-ways ambiguous
depending on the context. This allows
syntax to be pure and autonomous. As a
side effect, this means that meaning can potentially be built up in different ways, and the same structure can
have different meanings.
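To make the mechanics concrete, here is the kind of rule involved, as I understand it (a schematic in my own notation, not a quotation from the paper): one syntactic head, a list of denotations, and contextual conditions determining which denotation is inserted, exactly parallel to the subcategorization frames used for contextual allomorphy on the PF side:

⟦v⟧ ↔ λx.λe.agent(x,e)   / in the context of an agentive ROOT
⟦v⟧ ↔ λx.λe.holder(x,e)  / in the context of a stative ROOT
⟦v⟧ ↔ identity            / elsewhere

The syntax sees a single head v throughout; the interpretive component chooses an alloseme by inspecting the local environment. So what does this flexibility cost?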
COST 1: In
addition to all the other listed frames
for selection and allomorphy, we now have to list for every item a subcategorization
frame that determines the allosemic variants of the functional items in the
context of insertion. (Well, if you like construction
grammar…)
COST 2: Since the mapping between syntactic structure
and meaning can no longer be relied upon, there is no chance of semantic and
syntactic bootstrapping for the poor infant trying to learn their
language. I personally do not see how
acquisition gets off the ground without bootstrapping of this kind.
COST 3: (This is
the killer.) Generalizations about hierarchy-meaning correspondences, like
the (I think exceptionless) one that syntactic embedding never inverts
causational structure, are completely mysterious and cannot fall out naturally
from such a system: a causative head embedding a change-of-state predicate is
always interpreted as cause(become(P)), never as become(cause(P)) (see this paper of mine for discussion).
PAYOFF: Syntax gets to be autonomous again.
But wait. Why exactly do we want this? Because Chomsky showed us the generative
semanticists were wrong back in the sixties?
And anyway, isn’t
syntax supposed to be quite small and minimal now, with a lot of the richness
and structure coming from the constraints at the interface with other aspects
of cognition? Doesn’t this lead us to expect that abstract syntactic structures
are interpreted in universally reliable ways?
Allosemy says that the only generalities are syntactic ones.
Like 'I have an EPP feature' or 'I introduce an argument'. It denies that there
are any generalities at the level of abstract semantics. I would argue rather that the challenge is to
give these heads a semantics that is general and underspecified enough
that normal compositional interaction with the structures they combine
with gives rise to the different polysemies seen on the surface. Allosemy is not the same as compositionally potent underspecification. The
strategy of the Wood and Marantz paper is to go for a brute-force semantic
ambiguity which is controlled by listing selectional combinations. It is perfectly clear that this architecture
can describe anything it wants to. And while one might be able to do it in a
careful and sensible way so as to pave the way for explanation later on, it is
also perfectly clear that this particular analytic tool allows you to describe
loads of things that don’t actually exist!
So, isn’t this going backwards, retreating from explanatory adequacy?
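To be concrete about the alternative, here is the sort of single underspecified entry I have in mind (again a schematic of my own, assuming a broadly Kratzerian event semantics, not anyone's official proposal):

⟦v⟧ = λx.λe.Part(x,e)   'x is the distinguished participant of the eventuality e'

Whether that distinguished participant is understood as an agent, a causer, or a holder then follows from what kind of eventuality e is, which is fixed compositionally by the complement of v. Nothing is listed, and no choice is made at the point of insertion; the surface polysemy is the output of ordinary composition over a single meaning.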
Of course, the rhetoric of the Wood and Marantz paper
sounds lovely and high-minded. The head that introduces arguments (i*) is
abstract and underspecified. The kind
of thing a syntactician can love. (There
is also another version of i* which is modulated by the fact that a ROOT is adjoined to it, and this version is
the one that introduces adjuncts and is influenced by the semantics of the ROOT
that adjoins to it). However, core i* is
nothing new; in fact, it is a blast from the past (not in a bad
way). It is just a notational variant of
the original classical idea of the specifier, which was the locus for the
subject of predication (as in the classic and insightful paper by Tim
Stowell from 1982, Subjects across Categories here). And the i* with stuff adjoined to it is what
happens when you have an argument introduced by a preposition. So i* is only
needed now because we got rid of specifiers and the generality of what it means
to be a specifier.
So. Allosemy. Can we just not do this?