DAY 1: Summary
First of all I just want to say how surprisingly inspiring
it is to be here. I feel like I am part of a big conversation with super smart
people who share my nerdy passion. Such
a Big Conversation one rarely gets to have. All grumpiness on my
part is in danger of vanishing in the
heady atmosphere of just having a damn
good time listening to people I respect,
debating the big issues surrounding my favourite topic.
A recurring theme of day 1 was the balance of power between language description, analysis and implementation issues on the one hand, and higher-level theorizing on the other. An important concept in the discussions was the term ‘mid-level generalizations’ (MLGs), which refers to the concrete results of bread-and-butter generative syntax (whether GB, LFG or HPSG), results that would not have been discovered without the explicit goals and methodologies of generative grammar. We all seem to agree that these are many, and I will try to cobble together a list for a later blog post. (Thanks to Deal, incidentally, for insisting that Hornstein call this the MLG level rather than the GB level, as he had been referring to it.)
For example, Merchant, in his morning contribution, made the explicit claim that the discovery of locality in grammar (suitably relativized to individual phenomena) is one of the big and important MLGs in the field.
Hornstein thinks that minimalism is an exercise in meta-level grammatical theorizing, and that its job is not to create MLGs at all. My impression is that Hornstein feels quite happy with the current bag of MLGs and that we are ready to move to the next step (even though there is still more to be done to harvest MLGs).
Dechaine’s contribution in that session was interesting to me because she expressed a view that I am very sympathetic towards: that our MLGs are still hopelessly limited. Specifically, our failure (so far) to internalize the facts of more unfamiliar language types casts doubt on the generality of our so-called ML‘G’s, so much more work needs to be done before we can be satisfied even at that level. The syntactician’s job of generating MLGs, presumably at successive degrees of abstraction, is thus far from over. The Typology session, which consisted of Dechaine, Baker and Bobaljik, drove home this point with respect to the problem of examining a representative sample of languages when attempting to approach genuine universals and a genuine formal typology.
Bobaljik also explicitly took up the theme of the relationship of data to high theory. They are not separate strands; they need to sit together at the same table. If you do one without the other, then you risk trying to do high-level generalization over the WRONG mid-level generalizations. Bobaljik said, for example, that we don’t have good generalizations about Case yet.
The Typology session was inspiring from the point of view of generating concrete, productive suggestions for making progress. Dechaine promoted the idea of more collaborative research to fill out the typological requirement in cases where one wants to make arguments about universals. This involves changing some of the sociology of the field. One cannot do this kind of thing on one’s own, but there might be ways to start building databases and lists of generative questions that we would like to have answers to, for each language that gets studied or described. What we essentially want is databases with MLGs, triggered by the questions that only generativists ask. (As all the panelists pointed out, in many cases this work is urgent, since many of the more understudied languages are on the brink of extinction.)
Pesetsky pointed out that there really needs to be a proper
venue for unremarkable but potentially useful findings, which led to the
semi-facetious suggestion of setting up a Journal
of Unremarkable Results. I think
this is a great title. Once it gets
popular, we should have a Journal of
Experimental Nonreplications.
In the session on Derivations vs. Representations, Müller argued that recent syntactic work has largely neglected explanation and explicit proofs of the adequacy of the proposed analysis, what he called the ‘algorithmic level’. He compared LI issues from the nineties with issues from the noughties, counting the number of numbered ‘example’-like thingies that corresponded to data and the number that corresponded to statements of principles or theorems. He claimed that the number of principles had gone down drastically, and that this was evidence for his assessment. I actually could not make sense of his methodology or of what those numbers meant.
I also found that he seemed to be worrying about the stuff that I WASN’T worried about. But that might be a personal taste thing (I've never liked housekeeping and cleaning, although I force myself to do it). I do agree that explicitness is necessary if you want to say anything at all about anything. It is at this technical, implementational level that the issue of representations vs. derivations comes up, although Müller was careful to point out that his algorithmic level was neutral between the two. (Indeed, he thought there were independent reasons to favour derivations, but that was a separate question.)
In the old days this was a major big-picture topic. Sells provided a great introduction to this session because he was on the ground in Stanford when interest in this question was at its peak, and when the choice between GB, LFG and HPSG was strongly informed by where you stood on this issue. I also remember it being the big meta-issue of my own graduate career. Graf pointed out from the audience that distinctions such as derivation vs. representation, which provoke so much internal debate, can be proven to be notational variants at a very deep level. He’s a mathematical/computational linguist, so I assume he knows what he is talking about at a formal level. More generally, it seemed to me that the young people in the audience were very unimpressed by the derivations vs. representations issue, and were inclined to dismiss it as a non-issue. We Grey-Heads kept going back to it, trying to push on what potential evidence there might be for choosing between the two. In fact, I really want the mathematical guys to win this one because I am SO bored of this particular issue.
In general, we had a number of differences of opinion on whether higher-level considerations such as Darwin’s problem should drive our research programme.
Sells attacked the a priori notion of perfection, saying that language is messy, like people, and that it is amazing that we see the amount of order that we do. But a priori elimination of imperfections seems like a poor top-down imperative. Hornstein thought that the ONLY way to generate hypotheses (which are crucial for doing science) is to have top-down, higher-level considerations, since hypotheses themselves do not emerge naturally from data. I completely agree with this, and I think most of us in the room, as scientists, agreed too. The real question seems to be which higher-order considerations to adopt, and also, I think, what the right abstraction gap is between the MLGs and the top-down considerations that are driving theory construction. Roberts came out strongly against using Darwin’s problem as any kind of motivation, and would be in favour of abandoning all discussion of language evolution. Note that the nature of the high-level generalization driving research is clearly subject to trends and fashion. Top-down is all very well, and is scientifically necessary, but how do we choose our favourite higher-order principle?
Some high-level considerations change over time, and for different reasons. The change can be natural and organic, as in the case of Plato’s problem, which one might argue has disappeared from the syntactic discourse because it has in fact been so thoroughly internalized. Different is the case of parameters and parameter setting, an issue which seems to have fractionated and dissolved beyond recognition. (We had a couple of strong Parameters people in the room, however, so this is of course an exaggeration.)
Here’s MY question:
What is the most productive and creative distance between top-down principles and bottom-up surface generalizations?
My personal taste is that Darwin’s Problem is at too high a level of abstraction, and the lack of commensurability between the primes of cognition and the primes of syntactic theorizing makes it strikingly unproductive in some cases, and often misleading and wrong when wielded clumsily.
Couple this with the suspicion that most of our
understanding at the MLG level is still woefully incomplete, and it looks like
we are in very dangerous territory.
So how bad is it?
Most people in the room think that we know way more important stuff now than was known before. Moreover, there are questions that could not even have been posed if it weren’t for the generative enterprise.
Ok, Ok, I’m going to go round the room and make a list. Stay
tuned.
Well, sounds like Gereon is using 'algorithmic level' in a very weird way. A grammatical theory is a computational-level specification of what the actual link between sound/sign and meaning is, irrespective of derivations or constraints or whatever. And I'm obviously younger than I think, because I agree with Graf here: it's probably notational, and even if the notations differ in terms of how complex they have to be to deal with, say, opacity, we're not at the point where that's a desideratum.
Agree with you that Darwin's problem is at too high a level of abstraction, or at least I find it isn't a driver in my own work. My own feeling is that we should be theoretically pluralistic and opportunistic, and as long as we're doing good science, we'll make progress. I think there's a crucial role at the analytical level, trying to understand phenomena within and across languages using the tools of analysis at our disposal, and using successful analysis to challenge theoretical principles. That said, we need people to be thinking at the theoretical level to develop those principles.
As you know, I'm a sunny optimistic person, so it's not bad at all!
Wish I could have been there with you guys. But why ask for the moon, we have New York!
fun fun fun! keep it coming! :) Thanks!
ReplyDelete"but there might be ways to start building data bases and lists of generative questions that we would like to have answers to, for each language that gets studied or described." Frans Plank has been building one of exactly these for years. Check out http://typo.uni-konstanz.de/archive/intro/
As to Gillian's question: I guess LFG takes the inductive strategy. Look at a lot of empirical data, figure out generalizations, create theories, check them with new data, revise or make new ones as more evidence is accumulated.
Thanks for the link, Miriam! I will take a look at it, although my first impression is that it is at a slightly more surfacey level than what the linguists in the room were after. I mean that we want something that is allowed to use theoretical units that have been established by our theories, like the notion of SUBJECT, or dependencies, or clitics. I also don't necessarily just want a list of universals; I want certain properties to be tracked. Like, I might want to know whether you can combine the language's epistemic modal with non-stative verbs. Or whether you can put the bottom of a dependency inside a complex NP.
Stuff like that.
First, @Gillian: thanks for these posts! As someone who is not in Athens for this gathering but interested to hear what's going on, they are invaluable. Also, thank you for leaving the posts open for comments so we can have a conversation here. I'm actually writing to reply directly to something Amra wrote; I hope that's okay.
@Amra: To borrow from Bobaljik's statement for the Athens event, one needs both induction and deduction (and a "tug of war" between the two, as he puts it). I dare say that induction alone, much like deduction alone, is close to pointless. Induction in the absence of some kind of restrictive theory (or at least a general idea) of what possible and impossible outputs of the inductive process might look like amounts to an exercise in summarizing the data. Since no one doubts that the data can be summarized (to _some_ nonzero extent), we have no way of knowing whether the data having been so summarized constitutes a significant result or not.
To be clear, this is not a pro-minimalism or anti-LFG stance; people proposing analyses in constraint-based lexicalist frameworks surely have a sense of "this is or isn't an insightful/interesting/significant analysis" when they're looking at one. That sense implies a criterion against which the results of induction are measured (formally or informally). And so I think we're all in the same boat, here.
In the same vein, deduction in the absence of induction risks a loss of touch with the facts themselves. I think this is something that a few of the statement-writers were bemoaning, at least the way I read their remarks.
And so, as Bobaljik said, the interesting action happens when both forces are exerted.
Yeah, I agree with Bobaljik too. I think he is simply articulating what most good linguists actually do in practice, whether they put it to themselves that way or not. I agree with Miriam that LFG seems to emphasize or favour one of the directions in the tug of war at the expense of the other, which it rhetorically underplays. But there's no doubt that they have it. In any case, I think it is always better to be explicit about your top-down influences and not pretend that you can consider data purely and objectively.
ReplyDeleteI think that there are projects now in progress that are beginning to create data bases based on units of theoretical interest in a typologically diverse set of languages. Here is one such NSF funded project at UCLA with Dominique Sportiche as the PI: http://grantome.com/grant/NSF/BCS-1424336
and there is the related Syntactic Structures of the World's Languages: http://sswl.railsplayground.net
Thanks for these summaries, Gillian. Much appreciated.
I'm puzzled by the idea that Plato's problem has disappeared from the discourse because it has been "so thoroughly internalized". It's true that it's something that folks are well versed in alluding to, but it seems to play almost no practical role in guiding analytical choices. That's unfortunate, since we have much better opportunities to take it seriously nowadays than we did 20-30 years ago. In particular, we can get a far better idea than in the past about what learners' experience looks like, and hence we can make more confident claims about what is plausibly observable or non-observable in learners' input. This should be quite useful in constructing accounts of constrained cross-linguistic variation.
Again, thanks for posting.
Gillian, thx for this. I think that my views are a little different from the way you describe them. I do not think that all MLG work has been done. I believe that many are roughly right, that some might be wrong, and that certainly others await discovery. However, I don't think that this means that we need to wait to begin to see how to unify them. Just as Explanatory Adequacy does not wait for the last word about descriptive adequacy (we study UG even if we have not yet perfectly described any given G), so too with factoring features of UG. Moreover, the only way to learn how to do this kind of thing is to do it. So cutting one's teeth on MLGs is a good way to proceed. Again, I take 'On Why Mvmt' to be a paradigm of the kind of work this would be.
Last point: I agree with both David and Colin: we should be very pluralistic and opportunistic and sadly IMO PP has been largely ignored and only very partially internalized.
Ah, well, I got you wrong then in claiming that you think the work of MLGs is largely done. Point taken. You just think that it's not too early to be working on the next stage up.
On Plato's problem being only partially internalized, you might be right. But it would be wrong, I think, to judge its effect on syntactic work just by looking at whether it is explicitly mentioned or not. I have noticed syntacticians pointing to native speakers' robust grammaticality judgements of unusual combinations as being particularly relevant, precisely because it is highly unlikely those combinations could have been explicitly learned or heard before, which is a version of a poverty-of-the-stimulus argument. But I think I probably need to do some counting before I stick out my neck again as to whether it has more or less influence on theorizing now than in, for example, the eighties.
Thanks Gillian. Your response on PP raises an interesting point. If the standard of comparison is "Does PP get the same attention that it did in the 80s?", then the answer is probably something like "Not as much explicit mention, but day-to-day practices are relatively similar". If the standard of comparison is "Does PP get comparable attention, relative to what is currently needed and possible?", then the answer is perhaps quite different. 35 years ago we knew a lot less about syntactic variation and the best that we could do was make educated guesses about learners' experience. So a smart syntactician could get a long way using a back-of-the-envelope approach. That same approach is less likely to cut it nowadays, because we know so much more about variation (it often seems dauntingly complex), because we can relatively easily make more-than-guesses about what is observable, and because we're getting closer to being able to say sensible things about what kind of experience is needed to drive what kinds of inferences. Your comment makes me wonder whether people have quite different views on the current status of PP, because they're judging it relative to different starting points.
Why does this matter to syntactic theories? Because there's a tendency to want to constrain all syntactic variation in a similar way. And because the kinds of phenomena that lend themselves most readily to large-scale cross-language surveys (and to deep dialect syntax projects) are also the kinds of phenomena that are more readily observable. This leads to a situation where we face sometimes dizzying variation, and where a 1980s-style account of constrained variation seems like a hopeless dream. I wonder if things would look less daunting if we more systematically distinguished readily observable variation (i.e., PP not an issue) from not-so-observable variation. For example, we wouldn't expect to see the kind of variation in scope phenomena that we see in clitic placement. That's just a hunch, but to go down that path would certainly require more than the old back-of-the-envelope approach to PP. (To be clear, I love good back-of-the-envelope arguments, but sometimes they're just the start.)
"Your comment makes me wonder whether people have quite different views on the current status of PP, because they're judging it relative to different starting points." I think that is exactly right.
In the Athens meeting, Legate made a couple of interventions to the effect that one should not wield PP arguments cavalierly within the context of purely syntax papers without knowing something about the actual acquisition literature. But now I think maybe the argument should be made more forcefully: every syntactician should be better educated, with respect to the things they are working on, about the results of acquisition and variation research. It's only in that way that we can integrate considerations from PP properly into our theorizing. It makes things harder for the day-to-day working syntactician, of course. But as Rizzi said at some point during Day 2: "You're just saying that Science is Hard."
Thanks. I agree that these things should be part of a syntactician's toolkit. But there would be better or worse ways of implementing it. I think that there are specific questions and tools relating to PP that should be very helpful, and would ideally be incorporated into the curriculum for syntax, or for generative grammar more broadly. What I think would be less helpful would be to have courses that focus on the primary results of language acquisition research. That field, even the more linguistically inclined part, tends to have less to contribute to what a syntactician really needs. It's more focused on "Interesting things that kids do". I've seen cases where a language acquisition course was instituted as a required part of the linguistics curriculum, and it backfired, likely for that reason.
Just wanted to say thanks a lot for reporting on the events in Athens! It's invaluable for the geographically disadvantaged ;)