Monday 14 September 2015

Allosemy---- No thanks.

On Allosemy


It seems like I am always complaining about the status of semantics in the theory of grammar. I complain when it's ignored, and then I complain when it's done in a way I don't like. I complain and complain. Today is not going to be any different.

At the ROOTS IV conference, we had a number of lexical semantics talks, which clearly engaged with meaning and with generalizations about root meaning. Then we had the morphology talks. But I'm not convinced those two groups of people were actually talking to each other. Now, the thing about Distributed Morphology is that it doesn't believe in a generative lexicon, so all of the meaning generalizations that live in the lexicon for the lexical semanticists have to be recouped (if at all) in the functional structure, for DM and its fellow travellers, me included. This is not a deep problem if we are focusing on the job of figuring out what the meaning generalizations actually are in the first place, which seems independent of arguments about the architecture. But there is also a danger that the generalizations the lexical semanticists are concerned about get perceived as orthogonal to the system of sentence construction that morphosyntacticians are looking at.

Within DM, the separation of the system into ROOT and functional structure already creates a sharp division whereby meaty conceptual content and grammatically relevant meanings are separated derivationally. This in turn can lead to a tendency to ignore lexical conceptual semantics if you are interested in functional morphemes, and to suspect that the generalizations of the lexical semanticists are simply not relevant to your life (i.e. that they are not part of the `generative system'). To the extent that there are generalizations and patterns that need to be accounted for, we need to look to the system of functional heads proposed to sit above the verbal root in the little vP. More challengingly, we need to relate those heads via selectional frames to the sorts of ROOTS they combine with, in a non-ad hoc manner. If, in addition, we require a constrained theory of polysemy, the problem becomes even more complex. I think we are nowhere close to solving these problems. Perhaps because of this, standard morphological and syntactic theories do not yet engage properly with the patterns in verb meaning, by which I mean both constraints on possible meanings and the existence of constrained polysemies. I contend that an architecture that strictly separates the conceptual content of the root from the functional structure in a derivational system must resort to crude templatic descriptive stipulations to handle selection, and that it obscures the generalizations surrounding polysemy.

One of the most interesting talks at the conference, and one of the few that attempted to integrate worries about meaning into a system with DM-like assumptions, was the contribution by Neil Myler. Neil was interested in tackling the fact that the verb have in English is found in a wide variety of different constructions, and in giving a unified explanation of that basic phenomenon. In that respect I thought Neil's contribution was excellent, and I agreed with the motivation, but I found myself uncomfortable with some of the particular tools he used to put his story for have together. The issue in question is the deployment of Allosemy.

Let me first complain about the word Allosemy. It's pronounced aLOSSemi, right? That's how we are supposed to pronounce it. Of course, doing so basically destroys all recognition of the morphemes that go into making it, and renders the word itself semantically opaque even though it is perfectly compositional.
I hate it when stress shift does that.
Curiously, the problem with the pronunciation is similar to the problem I have with its existence in the theory: if we are not careful with it, it actually obscures the semantics of what is going on.

Let's have a look at how Allosemy is deployed in a series of recent works by Jim Wood, Alec Marantz and Neil Myler (we could maybe call them The NYU Constructivists for short). I am supposed to be a fellow traveller of this work, so why do I feel like I want to reject most of what they are saying? Consider the recent paper by Jim Wood and Alec Marantz, which you can read here.

So, to summarize briefly, the idea seems to be that instead of endowing a functional head with a semantics that has to remain constant across all its instantiations, we give a particular functional head like little v N possible meanings, and then say that it is allosemic. In other words, it is N-ways ambiguous depending on the context. This allows syntax to be pure and autonomous. As a side effect, meaning can potentially be built up in different ways, and the same structure can have different meanings.
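
Here is a toy sketch of the shape of such a system, as I understand it. The conditioning environments and denotations below are illustrative inventions of mine, not Wood and Marantz's actual entries; the point is only the architecture: an arbitrary listed mapping from insertion contexts to meanings.

```python
# A context-indexed list of denotations for a single head, little v.
# (Schematic strings stand in for real denotations.)
ALLOSEMES_OF_LITTLE_v = {
    "agentive, external argument": "λx.λe. agent(e, x)",
    "no external argument":        "λe. become(e)",
    "stative":                     "λe. state(e)",
}

def interpret(allosemes, environment):
    # A pure lookup: nothing requires the listed meanings to be related
    # to one another, or to the environments that condition them.
    return allosemes[environment]

print(interpret(ALLOSEMES_OF_LITTLE_v, "no external argument"))  # λe. become(e)
```

The cost?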

COST 1: In addition to all the other listed frames for selection and allomorphy, we now have to list for every item a subcategorization frame that determines the allosemic variants of the functional items in the context of insertion. (Well, if you like construction grammar…)

COST 2:  Since the mapping between syntactic structure and meaning can no longer be relied upon, there is no chance of semantic and syntactic bootstrapping for the poor infant trying to learn their language.  I personally do not see how acquisition gets off the ground without bootstrapping of this kind.

COST 3: (This is the killer.) Generalizations about hierarchy and meaning correspondences, like the (I think exceptionless) one that syntactic embedding never inverts causational structure (if head A embeds head B in the syntax, A's eventuality can cause B's, but never vice versa), become completely mysterious and cannot fall out naturally from such a system (see this paper of mine for discussion).

PAYOFF:  Syntax gets to be autonomous again.
But wait. Why exactly do we want this? Because Chomsky showed us the generative semanticists were wrong back in the sixties?

And anyway,  isn’t syntax supposed to be quite small and minimal now, with a lot of the richness and structure coming from the constraints at the interface with other aspects of cognition? Doesn’t this lead us to expect that abstract syntactic structures are interpreted in universally reliable ways?

Allosemy says that the only generalities are syntactic ones, like `I have an EPP feature' or `I introduce an argument'. It denies that there are any generalities at the level of abstract semantics. I would argue rather that the challenge is to give these heads a general enough, underspecified semantics, so that normal compositional interaction with the rest of the structure gives rise to the different polysemies seen on the surface. Allosemy is not the same as compositionally potent underspecification. The strategy of the Wood and Marantz paper is to go for brute-force semantic ambiguity controlled by listed selectional combinations. It is perfectly clear that this architecture can describe anything it wants to. And while one might deploy it in a careful and sensible way so as to pave the way for explanation later on, it is also perfectly clear that this particular analytic tool lets you describe loads of things that don't actually exist! So isn't this going backwards, retreating from explanatory adequacy?
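
To make the contrast with the earlier sketch concrete, here is an equally toy rendering (my own, under simplifying assumptions) of compositionally potent underspecification: one denotation, with the surface variation coming from ordinary composition rather than from a listed choice.

```python
# One underspecified denotation for the head; the apparent polysemy comes
# from what it composes with, so every surface reading stays systematically
# related to its conditioning context. (Schematic strings again.)

def little_v(complement):
    # The head contributes the same skeletal meaning in every use:
    # "there is an eventuality e characterized by the complement".
    return f"λe. {complement}(e)"

print(little_v("become-broken"))  # inchoative-flavoured reading
print(little_v("run"))            # activity-flavoured reading
```

Nothing here has to be listed per context, and the readings cannot fail to be related, because they share a single source.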


Of course, the rhetoric of the Wood and Marantz paper sounds lovely and high-minded. The head that introduces arguments (i*) is abstract and underspecified. The kind of thing a syntactician can love. (There is also another version of i* which is modulated by the fact that a ROOT is adjoined to it; this version is the one that introduces adjuncts and is influenced by the semantics of the ROOT that adjoins to it.) However, core i* is nothing new; in fact it is a blast from the past (not in a bad way, in fact). It is just a notational variant of the original classical idea of the specifier, the locus for the subject of predication (as in the classic and insightful paper by Tim Stowell from 1982, Subjects across Categories here). And the i* with stuff adjoined to it is what happens when you have an argument introduced by a preposition. So i* is only needed now because we got rid of specifiers, and of the generality of what it means to be a specifier.

So. Allosemy. Can we just not do this?  


7 comments:

  1. What's the difference between allosemy and Allosemy? Languages do seem to have elements that have more than one meaning depending on their syntactic contexts. Perhaps the solution you're hinting at is one where the semantic contribution of an element is a bunch of very abstract relationships that are filled in by dependence on context.

    I think this way about roots: they supply a bunch of Dowty's proto-roles that can be gathered together and assigned as θ-roles if the syntax provides appropriate arguments. Some roots are bivalent or trivalent and demand that their collections of proto-roles be mapped to argument structure, so that if you want an intransitive from a bivalent root then you have to detransitivize it. Then there are roots that are monovalent, so any proto-roles available beyond their one requirement have to be mapped through causativizing structures. Noun classification and applicatives fall out of this, with the additional arguments being mapped as restrictions rather than saturations to the same proto-role realizations (θ-roles).
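
    As a toy sketch of what I mean (all the implementation details here are assumptions of mine, just for illustration):

    ```python
    # Roots carry a bag of proto-role specifications plus a valence demand;
    # mismatches between that demand and what the syntax supplies have to be
    # absorbed by detransitivizing or causativizing/applicative structure.
    from dataclasses import dataclass

    @dataclass
    class Root:
        name: str
        proto_roles: list  # e.g. ["proto-agent", "proto-patient"]
        valence: int       # how many must map to argument structure

    def license(root, n_syntactic_args):
        if n_syntactic_args == root.valence:
            return f"assign {root.proto_roles[:root.valence]} as θ-roles"
        if n_syntactic_args < root.valence:
            return "needs detransitivizing structure"
        return "extra arguments go through causativizing/applicative structure"

    print(license(Root("break", ["proto-agent", "proto-patient"], 2), 1))
    # -> needs detransitivizing structure
    print(license(Root("run", ["proto-agent"], 1), 2))
    # -> extra arguments go through causativizing/applicative structure
    ```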

    1. Yes, I agree there are loads of cases where we want to do precisely this--- give the lexical item some underspecified or abstract semantics which is then filled in or modified in a systematic way depending on the context. There are many approaches along these lines, in principle, depending on how you set up the starting specifications and what kinds of changes are triggered by what contextual factors. The thing that I DON'T want to do is say that the position is devoid of semantics and that a particular meaning is chosen from an arbitrary list based on selection. In this latter strategy, there is no reason for any of the semantic choices to bear any kind of systematic relationship to each other (or to the conditioning factors, other than by the stipulation of a listed co-occurrence frame). I think this is what Wood and Marantz are proposing when they talk about allosemy, unless I have misunderstood something.

    2. I think the approach with unstructured lists is probably a reaction to how hard it is to figure out the semantics for some things. It’s not necessarily a *good* reaction, but I can see the motivation for doing so. It’s saying “this widget does a bunch of things that don’t superficially seem to be unifiable”. Unstructured lists are basically always a sign of not working out the underlying topology of something. But working that out in these kinds of cases usually means you have to be (a) really dedicated to working out the semantics, and (b) extremely knowledgeable about the lexical inventory and its syntactic interactions.

    3. True. Nobody said linguistics was easy. :) When it comes to allosemy, I just don't want us to be dazzled by rhetoric; I want to make sure we call a spade a spade.

  2. This comment has been removed by the author.

  3. I agree that giving "v" multiple meanings, without also claiming that there is simply a v_1, v_2, v_trans, v_unerg, v_unacc (which in fact may be lexicalized differently, but that's another matter), seems to be going too far. But it's a natural extension to such heads of the way we analyze allosemy in all sorts of other, overt head cases, right? In fact, given that Peter Svenonius has shown that there is no theoretically interesting distinction between "lexical" and "functional" in DM/Minimalism (in his 2014 Theoretical Linguistics paper), we should be utterly surprised if "v" etc did *not* in fact show this kind of behavior. I'm thinking of idiomization, "grammaticalization" (which is the same thing), and all such phenomena of partial meaning maps that fall under the rubric "conventionalization" (which is in fact all of them--it's all just a matter of degree). So whatever we say about "strings" in "pull strings" or "beans" in "spill the beans", or "is to" in "He is to be arraigned today", we can say about "v". Though I worry very much about losing the beautiful generalizations that Gillian has so wonderfully written about, my main objection to the Wood and Marantz paper is that it is formally incoherent to state negative universal conditioning environments as they do in their (30):

    ⟦v⟧ ↔ λP. … / __ (no external argument) [sic]

    Why not just use the Elsewhere Principle for this? I don't know.
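
    To spell out the alternative I have in mind, as a toy (the rule contents are placeholders of mine, not the paper's): order the rules so that a positively specified case applies first and a general elsewhere case covers the remainder.

    ```python
    # The Elsewhere Principle as rule ordering: the specific, positively
    # stated case blocks the general one, so no negative environment
    # ("no external argument") ever needs to be stated.
    RULES = [
        ("external argument", "λx.λe. agent(e, x)"),  # specific case
        (None,                "λP. …"),               # elsewhere case
    ]

    def interpret_v(environment):
        for condition, denotation in RULES:
            if condition is None or condition == environment:
                return denotation

    print(interpret_v("external argument"))  # specific rule wins
    print(interpret_v("unaccusative"))       # falls through to elsewhere
    ```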

    Plus, their suggestion about how "on" comes to be a Preposition--by "valu[ing] the categorial feature of i* as p"--is certainly putting the cart before the horse. The workings of the feature system, the ontology of feature values, and how "valuing" works are utterly opaque, and the move seems designed to rescue the obscure (and presumably false) notion that roots never select for anything. (Of course, many instances of "on" cannot have a uniform lexical semantics, but are nevertheless still Ps... in idioms, in phrasal verbs, in many selected PP contexts. I guess this whole part just loses me.)

  4. Yes, I agree that there are different `contents' in some sense to v, just as there are different `contents' to P. But DM's problem is entirely of its own making. By insisting on a serialized separation between functional items on the one hand and content words in the form of roots on the other, they make it impossible to say this in a direct and transparent way. (The new root adjunction strategy is one step towards trying to get basic descriptive patterns back, but I agree it's completely technically opaque.) They would not have this problem if they simply admitted that form and content exist in parallel---- conceptual content is smeared all over the structural semantic skeleton (which indeed is pretty abstract, but not empty)---- and adopted an architecture that directly represents that. Also, I do not deny the existence of some cases of allosemy, which we need for idioms etc. (just as allomorphy also exists). But adopting this particular analytic tool for EVERYTHING essentially asserts that there are no semantic generalizations or unities (only syntactic ones), and it allows us to abandon the search for suitably abstract and muscular underspecified denotations.
