Thursday, 18 June 2015

Anticipation: Roots

ROOTS

The recent meeting of syntacticians in Athens has whetted my appetite for big gatherings with lots of extremely intelligent linguists thinking about the same topic, because it was so much fun.

At the same time, it has also raised the bar for what I think we should hope to accomplish with such big workshops. I have become more focused and critical about what the field should be doing within its ranks as well as with respect to communication with the external sphere(s).

The workshop I am about to attend on Roots (the fourth such), to be held in New York from June 29th to July 3rd, offers a glittering array of participants (see the preliminary program here: http://wp.nyu.edu/roots4/wp-content/uploads/sites/1403/2015/02/roots4_program.pdf) and is organized by Alec Marantz and the team at NYU.

Not all the participants share a Distributed Morphology (DM)-like view of `roots’,  but all are broadly engaged in the same kinds of research questions and share a generative approach to language. The programme also includes a public forum panel discussion to present and discuss ideas that should be more accessible to the interested general public. So Roots will be an experiment in having the internal conversation as well as the external conversation. 

One of the things I tend to like to do is fret about the worst case scenario. This way I cannot be disappointed. What do I think is at stake here, and what is there to fret over in advance, you ask? Morphosyntax is in great shape, right?

Are we going to communicate about the real questions, or will everyone talk about their own way of looking at things and simply talk past one another? Or will we bicker about small implementational issues, such as whether roots should be acategorial or not? Should there be a rich generative lexicon or not? Are these in fact, as I suspect, matters of implementation, or are they substantive matters that make genuinely different predictions? I need a mathematical linguist to help me out here. But my impression is that you can take any phenomenon that one linguist flaunts as evidence that their framework is best, and with a little motivation, creativity and tweaking here and there, you can give an analysis in the other framework’s terms as well. Because in the end these analyses are still higher-level descriptions: the result may look a little different, but you can still always describe the facts.

DM in particular equips itself with an impressive arsenal of tricks and magicks to get the job done. We have syntactic operations of course, because DM prides itself on being `syntax all the way down’. But in fact we also have a host of purely morphological operations to get things in shape for spellout (fission, fusion, impoverishment, lowering, what have you), which are not normal actions of syntax and sit purely in the morphological component. Insertion comes next, which is regulated by competition and the elsewhere principle, where the effects of local selectional frames can be felt (contextual allomorphy and subcategorization frames for functional context). After spellout, notice that you still get a chance to fix some stuff that hasn’t come out right so far, namely by using `phonological’ readjustment rules, which don’t exist anywhere else in the language’s natural phonology. And this is all before the actual phonology begins. So sandwiched in between independently understood syntactic processes and independently understood phonological processes, there’s a whole host of operations whose shape and inherent nature look quite unique. And there’s lots of them. So by my reckoning, DM has a separate morphological generative component which is different from the syntactic one. With lots of tools in it.
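
For readers who like their architecture spelled out, here is what that sandwich looks like as a little runnable toy. To be clear, this is my own caricature, not anyone’s official formalization of DM: the labels, the feature sets, the vocabulary items and the particular operations are all invented purely to make the ordering of components visible.

```python
# A toy caricature of the DM-style pipeline described above. Everything concrete
# here (labels, features, vocabulary items) is invented for illustration only.

def syntax(numeration):
    # Stand-in for the narrow syntax (Merge/Move); here it just hands the
    # terminal nodes on in order.
    return [dict(node) for node in numeration]

def morphological_ops(terminals):
    # Post-syntactic, pre-insertion operations (fission, fusion, impoverishment,
    # lowering). Example: fuse adjacent v and T nodes into a single terminal.
    out = []
    for node in terminals:
        if out and out[-1]["label"] == "v" and node["label"] == "T":
            out[-1] = {"label": "v+T",
                       "features": out[-1]["features"] | node["features"]}
        else:
            out.append(node)
    return out

def vocabulary_insertion(terminals, vocabulary):
    # Competition regulated by the Elsewhere Principle: among the items whose
    # features are a subset of the terminal's features, the most specified wins.
    for node in terminals:
        matches = [vi for vi in vocabulary if vi["features"] <= node["features"]]
        node["exponent"] = max(matches, key=lambda vi: len(vi["features"]))["exponent"]
    return terminals

def readjustment(terminals):
    # 'Phonological' readjustment rules: exponent-specific fixes that are not
    # part of the language's regular phonology.
    for node in terminals:
        if node["exponent"] == "give" and "past" in node["features"]:
            node["exponent"] = "gave"
    return terminals

def phonology(terminals):
    # Only now does the ordinary phonology get its hands on the string.
    return " ".join(node["exponent"] for node in terminals)

# A toy derivation of "she gave it":
numeration = [
    {"label": "D", "features": frozenset({"pron", "fem", "nom"})},
    {"label": "v", "features": frozenset({"root:GIVE"})},
    {"label": "T", "features": frozenset({"past"})},
    {"label": "D", "features": frozenset({"pron", "neut", "acc"})},
]
vocabulary = [
    {"features": frozenset({"pron"}), "exponent": "it"},                 # elsewhere pronoun
    {"features": frozenset({"pron", "fem", "nom"}), "exponent": "she"},  # more specific item
    {"features": frozenset({"root:GIVE"}), "exponent": "give"},
]
terminals = vocabulary_insertion(morphological_ops(syntax(numeration)), vocabulary)
print(phonology(readjustment(terminals)))  # -> "she gave it"
```

Whatever one thinks of any individual step, the point the toy makes is simply how many distinct kinds of operation have to sit between the syntax at the top and the phonology at the bottom.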

But I don’t really want to go down that road, because one woman’s Ugly is another woman’s Perfectly Reasonable, and I’m not going to win that battle. I suspect that these frameworks are intertranslatable and that we do not have, even in principle, the evidence from within purely syntactic theorising to choose between them.

However, there might be deep differences when it comes to deciding which operations are within the narrow computation and which ones are properties of the transducer that maps between the computation and the other modules of the mind/brain. So it’s the substantive question of what that division of labour is, rather than the actual toolbox, that I would like to make progress on.

To be concrete, here are some mid-level questions that could come up at the Roots meeting.

Mid-Level Questions:
A. Should generative aspects of meaning be represented in the syntax or the lexicon? (DM says syntax)
B.  What syntactic information is borne by roots? (DM says none)
C. Should there be late insertion or  should lexical items drive projection? (DM says late insertion)

Going down a level, if one accepts a general DM architecture, one needs to ask a whole host of important lower level questions to achieve a proper degree of explicitness:

Low-Level Questions
DM1: What features can syntactic structures bear as the triggers for insertion?
DM2: What is the relationship between functional items and features? If it is not one-to-one, can we put constraints on the number of `flavours’ these functional heads can come in?
DM3: What morphological processes manipulate structure prior to insertion, and can any features be added at this stage?
DM4: How is competition regulated? (A toy illustration follows just after this list.)
DM5: What phonological readjustment rules can apply after insertion?
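
For what it’s worth, DM4 is the one with a fairly settled textbook answer, as I understand it: competition is regulated by the Subset Principle, with the most highly specified matching item winning, and with contextual restrictions doing the work of allomorphy. Here is that answer as a toy sketch; the feature labels, the vocabulary items and the particular `context’ mechanism are all my own illustrative assumptions, not anyone’s actual analysis.

```python
# A toy rendering of Subset-Principle competition: an item is a candidate only if
# its features are a subset of the terminal's features; the most highly specified
# candidate wins; a contextual restriction (here, a set of licensing roots) models
# contextual allomorphy. All labels and items are invented for illustration.

VOCABULARY = [
    # (features spelled out, required root context or None, exponent)
    (frozenset({"T", "past"}), frozenset({"BEND", "LEAVE"}), "-t"),  # restricted allomorph
    (frozenset({"T", "past"}), None, "-ed"),                         # regular past
    (frozenset({"T", "3sg"}), None, "-s"),                           # 3sg present agreement
    (frozenset({"T"}), None, "-0"),                                  # elsewhere tense item
]

def insert(terminal_features, adjacent_root):
    """Choose an exponent for a terminal node, Subset-Principle style."""
    candidates = [
        (features, context, exponent)
        for features, context, exponent in VOCABULARY
        if features <= terminal_features                      # Subset Principle
        and (context is None or adjacent_root in context)     # contextual allomorphy
    ]
    # Most highly specified candidate wins; ties go to the contextually
    # restricted item (one common way of cashing out 'more specific').
    _, _, exponent = max(candidates, key=lambda c: (len(c[0]), c[1] is not None))
    return exponent

print(insert(frozenset({"T", "past"}), "BEND"))  # -> -t   (restricted item blocks -ed)
print(insert(frozenset({"T", "past"}), "WALK"))  # -> -ed
print(insert(frozenset({"T", "3sg"}), "WALK"))   # -> -s
print(insert(frozenset({"T", "1sg"}), "WALK"))   # -> -0   (elsewhere item)
```

And this is exactly where my earlier worry bites: it is not obvious to me which of the residual choices in a sketch like this (how `more specified’ is counted, how the contextual restrictions are stated) are substantive and which are just bookkeeping.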

There is some hope that there will be a discussion of the issues represented by A, B and C above. But the meeting may end up concentrating on DM1-5.

Now, my hunch is that in the end, even A vs. B vs. C are all NON-ISSUES. Therefore, we should not waste time and rhetoric trying to convince each other to switch `sides’. Having said that, there is good evidence that we want to be able to walk around a problem and see it from different framework-ian perspectives, so we don’t want homogeneity either. And we do not want an enforced shared vocabulary and set of assumptions. This is because a particular way of framing a general space of linguistic inquiry lends itself to noticing different issues or problems, and to seeing different kinds of solutions. I will argue in my own contribution to this workshop on Day 1 that analyses which adopt as axiomatic the principle of acategorial roots prejudge and obscure certain real and important issues that are urgent for us to solve. So I think A, B and C need an airing.

If we end up wallowing in DM1-5 the whole time, I am going to go to sleep. And this is not because I don’t appreciate explicitness and algorithmic discipline (which Gereon Mueller was imploring us to get more serious about at the Athens meeting), because I do. I think it is vital to work through the system, especially to detect when one has smuggled in unarticulated assumptions, and to make sure the analysis actually delivers and generates the output it claims to generate. The problem is that I have different answers to B than the DM framework does, so when it comes to the nitty-gritty of DM2, 3 and 5 in particular, I often find it frustratingly hard to convert the questions into ones that transcend the implementation. But ok, it’s not all about me.

But here is some stuff that I would actually like to figure out, where I think the question transcends frameworks, although it requires a generative perspective. 

A Higher Level Question I Care About
Question Z.  If there is a narrow syntactic computation that manipulates syntactic primes and  has a regular relationship to the generation of meaning, what aspects of meaning are strictly a matter of syntactic form, and what aspects of meaning are filled in by more general cognitive processes and representations? 

Another way of asking this question is in terms of minimalist theorizing. FLN must generate complex syntactic representations and semantic skeletons that underwrite the productivity of meaning construction in human language. What parts of what we traditionally consider the `meaning of a verb’ are contributed by (i) the narrow syntactic computation itself, (ii) the transducer from FLN to the domain of concepts, and (iii) the conceptual flesh and fluff on the other side of the interface that the verb is conventionally associated with?

Certain aspects of the computational system for a particular language must surely be universal, but perhaps only rather abstract properties of it, such as hierarchical structuring and the relationship between embedding and semantic composition. It remains an open question whether the labels of the syntactic primes are universal or language specific, or a combination of the two (as in Wiltschko’s recent proposals). This makes the question concerning the division of labour between the skeleton and the flesh of verbal meaning also a question about the locus of variation. But it also makes the question potentially much more difficult to answer. To answer it we need evidence from many languages, and we need diagnostics for which types of meaning we put on which side of the divide. In this discussion, narrow language-particular computation does not equate to universal; I think it is important to acknowledge that. So we need to make a distinction between negotiable meaning and non-negotiable meaning and be able to apply it more generally. (The DM version of this question would be: what meanings go into the roots and the encyclopedia, as opposed to meaning that comes from the functional heads themselves?)

There is an important further question lurking in the background to all of this which is of how the mechanisms of storage and computation are configured in the brain, and what  the role of the actual lexical item is in that complex architecture.  I think we know enough about the underlying patterns of verbal meaning and verbal morphology to start trying to talk to the folks who have done experiments on priming and  the timing of lexical access both in isolation and integrated in sentence processing.   I would have loved to see some interdisciplinary talks at this workshop, but it doesn’t look like it from the programme. 

Still, I am going to be happy if we can start comparing notes and coming up with a consensus on what we can say at this stage about higher level question Z. (If you remember the old Dr Seuss story, Little Cat Z was the one with VOOM, the one who cleaned up the mess).


When it comes to the division of labour between the knowledge store that is represented by knowing the lexical items of one’s language, and the computational system that puts lexical items together, I am not sure we know if we are even asking the question in the right way. What do we know of the psycholinguistics of lexical access and deployment that would bear on our theories? I would like to get more up to date on that. Because the minimalist agenda and the constructivist rhetoric essentially force us to ask the higher level question Z, and we are going to need some help from the psycholinguists to answer it. But that perhaps will be a topic for a different workshop.

Tuesday, 2 June 2015

Athens: Final Instalment

I would like to start off by saying, in case it wasn’t already perfectly obvious, that the posts I have been making are my own highly subjective highlights and interpretations from an extremely contentful and interesting event. I am not even attempting to provide a proper transcript, and these are not even Minutes. They also have no official status, in the sense that the organizers have no idea what I am writing. I thought I would start my last post in this fashion, because the final instalment will probably be even more subjective and interpretational than the previous ones.

I ended my last post with the assertion that it is hard to agree on the content and formulation of our field’s MLGs. To illustrate this I take a toy example from the realm of argument structure and think out loud for a bit. Suppose I arrange my own commitments and things I consider consensual in a kind of hierarchical ranking going from most general to most specific. The most general level is shared, I expect, by all generative syntacticians, while the lowest reaches might start to get more contentious.

GG1: The Language System is discrete and symbolic, and makes crucial reference to hierarchy in its complex representations.

GG2: A linguistic representation includes the formation of dependencies and relations.  These all seem to come with their own specific locality conditions.


MLGs (?) For Verbal Syn-Sem

1. There is a grammatically relevant notion of SUBJECT that cannot be defined purely by reference to thematic/semantic properties.

2. In  the linguistic expression of an event where both agentive and patientive participants are obligatorily represented,  the nominal constituent representing the Agent is always hierarchically superior to the nominal constituent representing the Patient in the syntactic representation (SYN-SEM generalization).

3. A monoclausal verbal structure cannot express more than one temporally non-overlapping dynamic portion (SYN-SEM generalization).

4. ARGUMENTS (thematic and notionally obligatory participants related to a verbal expression) behave in a linguistically distinct way from ADJUNCTS.
(lots of sub-generalizations here related to the formation of dependencies into the two types).

5. Argument structure and aktionsart generalizations  are properties of  the verbal projection, not  properties of  verbal lexical items.
(Depending on who you talk to, there are different sorts of feeding relations between the lexical verb and the verbal structure it appears with).

6. In a phrase structure representing the verbal event, argument structure projections such as CAUSE and PASSIVE appear inside of (i.e. hierarchically closer to the root than) inflectional projections such as ASPECT and TENSE. (A toy illustration of 2 and 6 follows just below.)
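
Since 2 and 6 are, at bottom, claims about hierarchy, here is a tiny runnable illustration of what they assert, using nested Python tuples as a crude stand-in for a tree, with depth of embedding as a crude proxy for hierarchical superiority. The labels and the particular clausal spine are invented for exposition only and are not meant as anyone’s actual analysis (mine included).

```python
# A toy illustration of generalizations 2 and 6: the Agent sits higher than the
# Patient, and CAUSE sits closer to the lexical root than ASPECT and TENSE.
# The spine and labels are invented for exposition; depth of embedding is used
# as a crude proxy for hierarchical superiority.

def node(label, *children):
    return (label, list(children))

clause = node("TENSE",
           node("ASPECT",
             node("CAUSE",
               node("Agent"),
               node("VP",
                 node("root"),
                 node("Patient")))))

def depth(tree, label, d=0):
    """Depth of the first node carrying this label, or None if absent."""
    this_label, children = tree
    if this_label == label:
        return d
    for child in children:
        found = depth(child, label, d + 1)
        if found is not None:
            return found
    return None

# Generalization 2: the Agent is hierarchically superior to the Patient.
assert depth(clause, "Agent") < depth(clause, "Patient")

# Generalization 6: CAUSE is more deeply embedded (closer to the lexical root)
# than ASPECT, which is in turn more deeply embedded than TENSE.
assert depth(clause, "TENSE") < depth(clause, "ASPECT") < depth(clause, "CAUSE")

print("both generalizations hold in this toy tree")
```

A toy tree proves nothing, of course; the content of 2 and 6 is the claim that natural languages keep arranging themselves this way.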


Ok, that list was just off the top of my head, and I was trying to state the MLG level in terms that would be acceptable to the maximum number of people who would consider themselves generative syntacticians. Notice that I didn’t put in Burzio’s Generalization, or express (5) in terms of acategorial roots. For the former, that’s because I couldn’t think of a way to express it in primes that I accept in a way that makes it both contentful and true; for the latter, I would not agree with (5) if I had to accept that extra analytic step.

There are also a lot of other things I could write down there that I believe are correct (with a fair amount of good reason), but which I reckon too many other people would take issue with, so they didn’t make it. But where is the cut-off?

Another thing. Groups of syntacticians that share more terms of art will have more specific commitments in common. But are they MLGs really, or are they just agreements about how to use the toolbox?

Finally, some of the things one might want to write down as an MLG have been demonstrated and tested on only a small (and typologically narrow) set of the world’s languages. They are up there because they look good so far. There would be nothing on the list if we had to confine ourselves to things that are true of every world language. I think it is fair to concede that deep engagement with the facts and properties of currently less well understood languages can sometimes radically change the terms of the MLGs that ultimately turn out to be correct. (Dechaine was the leading voice of caution here.)

It is worth emphasizing that the list above is both provisional and highly descriptive. Some of the items may end up having a fan of sub-generalizations; some of them might end up being just tendencies, or confined to certain language groups.

In all of this, we must not lose sight of the fact that this list is not a list of Universals in the sense of Universal Grammar, since we all think that whatever languages have in common must be the abstract things that underwrite and give rise to these patterns and tendencies. Once this is recognized, even tendencies and conditional generalizations are valuable, because they give insight into what those commonalities might be. It is an empirical issue at what level of abstraction the common UG properties might exist. It might be just MERGE, plus a range of cognitive tendencies, learning biases, and 3rd factor design properties.



I found it a good exercise to try to write some of these things down. And I also found it an interesting exercise to see a whole room full of generative syntacticians trying to brainstorm a list together. We can all agree broadly, but it is much harder to make the fine-grained judgements that this kind of list requires in a consensual way.

But perhaps absolute consensus is neither possible nor desirable. The effort to transcend parochiality is good, but the list should have a more flexible and pluralistic status if it is going to have any good effects.

It certainly seems true to me that if we had such a list, however imperfect, it could be immensely useful in guiding research questions and providing a platform for genuinely cumulative advance, especially after we have made the effort to state our commitments in the maximally general way possible so as to communicate across frameworks, and ultimately across disciplines.


What did We Accomplish?

I seriously hope that the subcommittee set up impromptu on the floor at Athens will manage to negotiate the minefield of The List and come up with something that at the very least can serve as a springboard for discussion and further hypothesis testing (replications and extensions).


We also had a nice affirming experience in Athens, in the sense that it was impossible to leave that event without thinking that syntacticians are serious, smart and committed, and doing a lot of good and responsible work.

Finally, we came up with a number of practical suggestions for how we can manage the outreach to schools, to the public, and to academics in other disciplines.  This was something we could all agree on.

So it’s all Good, Right?

The syntacticians at the Athens meeting are real live people, and so they straddle the whole spectrum of personality types with respect to thoughts on the Road Ahead and the reasons for the call: Happy, Bashful, Sneezy, Sleepy, Grumpy, Dopey and Doc. I want to concentrate for a moment on Happy and Grumpy.
Happy is the syntactician who was a little baffled by the terms of the call, and thinks that internal to syntax there is no problem, no crisis, and no reason at all for this meeting. Grumpy is the syntactician who sort of darkly suspects that the reason we have been so bad at communicating outside our own tribe is that we have some internal issues to resolve as well. I speak as one who would classify herself as Grumpy in this regard. I think, for example, my friend and colleague David Adger is Happy. (I hope David will not yell at me for this, but I think we have actually had this conversation.) This could just be a personality thing. But if I can generalize (and I know I am getting myself into trouble here), I would say that Happy is a syntactician working in the US or the UK who is comfortable using the canonical minimalist toolbox, terms and framework language. Grumpy was usually living in non-English-speaking Europe, and often had fewer mainstream commitments at the implementational level. I think Grumpy would be much happier if syntactic theorizing used a less parochial toolbox, emphasized generalizations at the MLG level more, and was a little bit more multilingual in its engagement with other implementational languages and with the bridging discourses to other disciplines.

I’m sure I’ve forgotten something, but it’s gone on too long already.


Monday, 1 June 2015

Athens Day 3

Day 3 consisted of no new panel sessions--- just summaries and discussion.  In the morning, we tried to raise and discuss in more detail the most important issues that emerged from the meeting.

One interesting issue in the morning discussion was the idea of 3rd Factor principles and the extent to which they can, in practical terms, constrain the theories we develop. This had been a recurring theme, and people clearly pulled in different directions here. Hornstein had already spoken out strongly in favour of top-down general principles that are used to constrain the questions we ask and the kinds of solutions we seek at the linguistic level. 3rd Factor principles, such as general cognitive limitations and proclivities, and logical facts about memory, or learning, or the way symbolic systems configure themselves for maximum efficiency, etc., are the boundary conditions for what the grammar ends up looking like, and they have contentful implications that their abstractness might seem to belie. Rizzi expressed, in a clear, calm way, what I take to be the source of resistance many working linguists feel in practice, which is that we don’t have clear enough ideas to make a priori choices about what those 3rd factor principles are. There is an empirical dimension to these facts, and to our choices of which are relevant where, that makes it hard to use them at a concrete level. What is the `perfect solution’ to the problems imposed by the interfaces? And why do we think that what our human logic considers to be conceptually perfect, or simple, is what biology considers to be perfect? Maybe biology thinks repetition and overlap and reuse is `perfect’. We are all just speculating here.
I think Rizzi is right in this, but I also think Hornstein is right to emphasize the role of top-down theorizing and the drive for abstractness in keeping syntax healthy.

In the afternoon we attempted a much more positive contribution by trying to brainstorm as a collective to come up with a consensual list of  major results and mid level generalizations that had emerged within the generative linguistic tradition in particular.

This seemed important for a number of reasons.

A.   It is important because it reminds  us of what our current state of knowledge is and allows the next questions to be asked in a way that will be genuinely cumulative and productive. 

Generative syntax has often been accused of being narrow, jargonistic, or impenetrable, so that gives us a second reason.  

B.  It is important to try to state the important discoveries and generalizations of the field in terms that are general enough to transcend frameworks and particular implementations, as a healthy mental exercise for ourselves. 

Actual day to day syntactic analysis requires formal discipline and precision in practice, to ensure that the model one builds is indeed capturing the data one has described, and specialized terminology is unavoidable. (If I understand Müller correctly, the substance of his complaint about our field was that much recent work seems to have lost the skills or inclination to pursue a syntactic analysis at this `algorithmic’ level. I don’t know if this is true or not. It might be. There are unfortunately a fair number of shoddy syntactic papers out there, but there is good and bad work in every field.) At the same time, we certainly do not want to say that the results of the generative enterprise are confined to those discovered in one particular framework or architecture. Or that those discovered in different frameworks and in different eras are not commensurable. Nobody in the room wanted to say that. And with respect to older versions of Chomskian grammar such as GB, it was explicitly affirmed that GB was responsible for establishing certain MLGs for the field that are a central part of the cumulative legacy of generative syntax.

 Our mathematical linguist on the ground here in Athens, Thomas Graf,  kept insisting and reminding us that he could not see that anything substantive could hinge on the differences at the level of  grammar architecture that the field often seems to be preoccupied with arguing about.  This came up strongly in the derivations vs. representations session, and again in the discussions of how the formal grammar interacts with the parser. 

In discussions over lunch and dinner, we syntacticians felt compelled to pursue this with Graf in more detail. There is a strong intuition among us that some theories (with a little t) are just better at capturing particular generalizations than others. We did not like to be told that these `small-t-differences’ were illusory, and not worth arguing about. Graf, in turn, voiced a completely different perspective on small-t-differences that resonated strongly with me. In mathematics, the reality of a phenomenon transcends small-t-differences, but the choice of small-t framework is a powerful determinant of how easy it is to solve a particular analytic problem, whether generalizations emerge in a way that is intuitive to the human perceiver, and even the kinds of issues one looks at. So translating into different small-t frameworks is actually a Thing--- a positive tool that allows you to walk all the way around a particular problem and see its front side and its back side and its top side, etc. Some small-t-theories allow a particular pattern to emerge gracefully to our senses where their translation into a notational equivalent would not. Being able to be precise about our small-t-theories, but also in some cases to switch between them, should be seen as a source of strength for problem solving, not as point-scoring fodder for article publications.

In addition, there is a third reason why attempting to state the mid-level generalizations is important.

C. It  is important because it allows us to scale up our claims to a level of granularity at which we are able to interface with our colleagues in related academic disciplines, potentially narrowing the commensurability gap.


Having said that, the exercise of trying to do this in a group of one hundred people, all syntacticians of different persuasions, was difficult, frustrating and instructive in equal measure.

It is easy to agree when the discussion takes abstract form.
But as soon as an actual list was attempted, disagreements started to emerge about what counted as a mid-level generalization: whether and why we should be doing this at all; how theoretically charged the terms should be; judgements of size and/or importance; level of crosslinguistic generality; how much consensus there needs to be on something for it to make it onto the list. It's a minefield.

After three days of stimulating discussion, and No Shouting (unlike what I had predicted), irritations started to emerge. But there was so much residual good will in the room that most of us stuck at it and battled through. If we get a List that anybody likes at all or thinks is important, it will be a triumph of unity. And I still have hope that such a list will emerge. A committee has been put in charge of taking the 50 or so bullet points on our communally generated first draft and making them into something that makes sense. But I fear that the difficulties here are representative of deep internal disagreements of perspective about the current state of the field.


In my next post, I will continue with my summary of where I think the internal faultlines are. I will reiterate my support for the exercise in articulating MLGs, and I will give an example of my own version of the  list in one small subdomain of syntax that I actually know something about (Argument Structure).