DAY 1: Summary
First of all, I just want to say how surprisingly inspiring it is to be here. I feel like I am part of a big conversation with super smart people who share my nerdy passion, the kind of Big Conversation one rarely gets to have. All grumpiness on my part is in danger of vanishing in the heady atmosphere of just having a damn good time listening to people I respect, debating the big issues surrounding my favourite topic.
A recurring theme of day 1 was the balance of power between language description, analysis and implementation issues, and higher-level theorizing. An important concept in the discussions was the term 'mid-level generalizations' (MLGs), which refers to the concrete results of bread-and-butter generative syntax (whether GB, LFG or HPSG) that would not have been discovered without the explicit goals and methodologies of generative grammar. We all seem to agree that these are many, and I will try to cobble together a list for a later blog post. (Thanks to Deal, incidentally, for insisting that Hornstein call this the MLG level and not GB, as he had been referring to it.) For example, Merchant in his contribution in the morning made the explicit claim that the discovery of locality in grammar (suitably relativized to individual phenomena) is one of the big and important MLGs in the field.
Hornstein thinks that minimalism is an exercise in meta level grammatical theorizing, and that its job is not to create MLGs at all. My impression is that Hornstein feels quite happy with the current bag of MLGs and that we are ready to move to the next step (even though there is still more to be done to harvest MLGs).
Dechaine’s contribution in that session was interesting to me because she expressed a view that I am very sympathetic towards: that our MLGs are still hopelessly limited. Specifically, our failure (so far) to internalize the facts of more unfamiliar language types casts doubt on the generality of our so-called MLGs, so that much more work needs to be done before we can be satisfied, even at that level. The syntactician’s job of generating MLGs, presumably at successive degrees of abstraction, is thus far from over. The Typology session, which consisted of Dechaine, Baker and Bobaljik, drove home this point with respect to the problem of examining a representative sample of languages when attempting to approach genuine universals and a genuine formal typology.
Bobaljik also explicitly took up the theme of the relationship of data to high theory. They are not separate strands; they need to sit together at the same table. If you do one without the other, you risk trying to do high-level generalization over the WRONG mid-level generalizations. Bobaljik says, for example, that we don’t have good generalizations about Case yet.
The Typology session was inspiring from the point of view of generating concrete, productive suggestions for making progress. Dechaine promoted the idea of more collaborative research to fill out the typological requirement in cases where one wants to make arguments about universals. This involves changing some of the sociology of the field. One cannot do this kind of thing on one’s own, but there might be ways to start building databases and lists of generative questions that we would like to have answers to, for each language that gets studied or described. What we essentially want is databases with MLGs, triggered by the questions that only generativists ask. (As all the panelists pointed out, in many cases this work is urgent, since many of the more understudied languages are on the brink of extinction.)
Pesetsky pointed out that there really needs to be a proper venue for unremarkable but potentially useful findings, which led to the semi-facetious suggestion of setting up a Journal of Unremarkable Results. I think this is a great title. Once it gets popular, we should have a Journal of Experimental Nonreplications.
In the session on Derivations vs. Representations, Müller argued that recent syntactic work has largely neglected explanation and explicit proofs of the adequacy of the proposed analysis, what he called the 'algorithmic level'. He looked at LI issues from the nineties compared to the noughties and counted the number of numbered 'example'-like thingies that corresponded to data and the number of example-like thingies that corresponded to the statement of principles or theorems. He claimed that the number of principles had gone down drastically, and that this was evidence for his assessment. I actually could not make sense of his methodology or what those numbers meant. I also found that he seemed to be worrying about the stuff that I WASN'T worried about. But that might be a matter of personal taste (I've never liked housekeeping and cleaning, although I force myself to do them). I do agree that explicitness is necessary if you want to say anything at all about anything. It is at this technical, implementational level that the issue of representations vs. derivations comes up, although Müller was careful to point out that his algorithmic level was neutral between the two. (Indeed, he thought there were independent reasons to favour derivations, but this was a separate question.)
In the old days this was a major big-picture topic. Sells was a great introduction to this session because he was there on the ground at Stanford when interest in this question was at its peak, and when the choices between GB, LFG and HPSG were strongly informed by where you stood on this issue. I also remember it being the big meta issue of my own graduate career. Graf, from the audience, pointed out that things such as derivations vs. representations that provoke so much internal debate can be proven to be notational variants at a very deep level. He’s a mathematical/computational linguist, so I assume he knows what he is talking about at a formal level. More generally, it seemed to me that the young people in the audience were very unimpressed by the derivations vs. representations issue and were inclined to dismiss it as a non-issue. We Grey-Heads kept going back to it, trying to push on what potential evidence there might be for choosing between the two. In fact, I really want the mathematical guys to win this one, because I am SO bored of this particular issue.
In general, we had a number of differences of opinion on whether higher level considerations such as Darwin’s problem should drive our research programme.
Sells attacked the a priori notion of perfection, saying that language is messy like people, and that it is amazing that we see the amount of order that we do. A priori elimination of imperfections seems like a poor top-down imperative. Hornstein thought that the ONLY way to generate hypotheses (which are crucial for doing science) is to have top-down, higher-level considerations, since hypotheses themselves do not emerge naturally from data. I completely agree with this, and I think most of us in the room, as scientists, agreed too. The real question seems to be which higher-order considerations, but also, I think, what the abstraction gap is between the MLGs and the top-down considerations that are driving theory construction. Roberts came out strongly against using Darwin’s problem as any kind of motivation, and would be in favour of abandoning all discussions of language evolution. Note that the nature of the high-level generalization driving research is clearly subject to trends and fashion. Top-down is all very well, and is scientifically necessary, but how do we choose our favourite higher-order principle? Some high-level considerations change over time, and for different reasons. This could be a natural, organic change, as in the case of Plato’s problem, which one might argue has disappeared from the syntactic discourse because it has been so thoroughly internalized. Different is the case of parameters and parameter setting, which seems to have fractionated and dissolved beyond recognition. (We had a couple of strong Parameters people in the room, however, so this is of course an exaggeration.)
Here’s MY question:
What is the most productive and creative distance between top down principles and bottom up surface generalizations?
My personal taste is that Darwin’s Problem is at too high a level of abstraction, and the lack of commensurability between the primes of cognition and the primes of syntactic theorizing makes it strikingly unproductive in some cases, and often misleading and wrong when wielded clumsily.
Couple this with the suspicion that most of our understanding at the MLG level is still woefully incomplete, and it looks like we are in very dangerous territory.
So how bad is it? Most people in the room think that we know way more important stuff now than was known before. Moreover, there are questions that could not even have been posed if it weren’t for the generative enterprise.
Ok, Ok, I’m going to go round the room and make a list. Stay tuned.