Thursday, September 25, 2003



Why David Adesnik is really wrong

When I started reading David Adesnik's "jeremiad" against political science while he was guest-blogging at the Volokh Conspiracy, I started to cringe. Then I got mad.

David is a very smart guy, but there's a lot in this post that Adesnik either distorts or gets flat-out wrong. Chris Lawrence has already taken him to task -- though Laura of Apt. 11D sympathizes with Adesnik -- but there's so much that's wrong with his post that I'm going to have to indulge in a quasi-fisking here. Adesnik's post is in italics and indented:

The secret to success in America's political science departments is to invent statistics. If you can talk about regressions and r-squared and chi-squared and probit and logit, then you can persuade your colleagues that your work is as rigorous as that of a chemist, a physicist, or (at worst) an economist.

There's a very big difference between creating new data and using new statistical techniques to analyze old data. I strongly suspect Adesnik's source of irritation is the latter. The former is way too rare in the discipline, especially in international relations. Mostly that's because building new data sets takes a lot of time and the rewards in terms of professional advancement are not great, whereas relying on old data has no fixed costs.

This is one reason why Pape's article is worthy of note -- he actually collected new data, which leads to results that Adesnik himself admits are "surprising."

David mistakenly conflates creating new data with the use of fancy statistical techniques when they're not necessary. The latter can be an occupational hazard -- though I'd argue that the greater danger is the proliferation of sophisticated regression analysis software like STATA to people who don't have the faintest friggin' clue whether their econometric model corresponds to their theoretical model.
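To see how low the barrier really is, here's a toy sketch -- in Python with statsmodels rather than STATA, and with entirely invented data -- of the hazard: the software will happily fit whatever specification you hand it, and nothing in the output tells you whether the econometric model corresponds to the theoretical one.

    import numpy as np
    import statsmodels.api as sm

    # Invented data, purely for illustration -- not anyone's actual dataset.
    rng = np.random.default_rng(0)
    n = 500
    occupation = rng.integers(0, 2, n)   # hypothetical ex ante indicator
    religion = rng.integers(0, 2, n)     # another hypothetical indicator
    # In this toy setup, the outcome is driven by occupation alone.
    attack = rng.binomial(1, 0.1 + 0.3 * occupation)

    # Fit a specification that omits the variable actually driving the
    # outcome. The logit runs without complaint and prints an
    # authoritative-looking table either way.
    misspecified = sm.Logit(attack, sm.add_constant(religion)).fit(disp=False)
    print(misspecified.summary())

The point isn't that the tool is bad -- it's that the tool can't do the theoretical work for you.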

And unsurprisingly, in the rush to invent statistics, political scientists have made whatever assumptions they needed to justify their work. As historian John Gaddis has shown, political scientists actually have a very poor understanding of what science is. As a result, their work has suffered considerably.

Sigh. Of all the social sciences -- including economics -- I'll bet that political scientists actually spend the most time discussing what constitutes proper scientific work. This is partly due to insecurity, but it's also due to a refreshing humility about the difficulty of the enterprise. For good examples of this sort of debate, click here for one example, and here's another. And, for good measure, click here, here, and here. Note that some of these works disagree with each other -- and I certainly disagree with some of them. [So, has any good come from these books?--ed. Sometimes I think this has generated a healthy debate within the discipline, and other times I think it's just navel-gazing.]

While I haven't read Pape's APSR study, the points he makes in the NYT are pretty much nonsense. For example in support of the assertion that suicide attacks are not connected to religion, Pape points out that...

[quote here about the Tamil Tigers being responsible for the most suicide terrorist attacks--DD]

The [Tamil] Tigers' behavior only reinforces the belief that suicide bombing is a product of ideological extremism. But since you can't put a number on extremism, political scientists have a hard time studying it. (Which is why there are historians and anthropologists.)

I have no doubt that historians can, through closely argued scholarship, identify which groups are extremist -- ex post. The key is to find descriptive characteristics that can be identified ex ante. Without ex ante markers to identify proper explanatory variables, theories degenerate into tautologies. Islamic affiliation is a descriptive category that can be identified ex ante, and Pape's discovery that it's not correlated with suicide attacks is a relevant and counterintuitive finding.
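For the curious, here's what testing an ex ante category against an outcome looks like in miniature -- a toy 2x2 table with made-up counts (emphatically not Pape's data), in Python:

    import numpy as np
    from scipy.stats import chi2_contingency

    # Made-up counts, for illustration only -- not Pape's data.
    # Rows: groups coded ex ante as Islamic-affiliated or not.
    # Columns: used suicide attacks / did not.
    table = np.array([[5, 40],
                      [6, 35]])

    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, p = {p:.2f}")  # a large p = no detectable association

The test is only meaningful because the row labels were assigned before looking at who attacked. Code groups as "extremist" based on the attacks themselves and the same table tells you nothing -- that's the tautology.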

To be fair, Pape has some good points. As his study shows, democracies are the almost exclusive targets of suicide attacks, because liberal political systems are vulnerable to terror. Moreover, he is probably right that there is an element of rational calculation behind such attacks, since even extremists have an interest in success. Still, it is absolutely impossible to explain the tactics of Al Qaeda or Hamas without reference to their perverse ideologies.

This is a nice summary of Pape's value-added. On the "perverse ideologies" question, I don't think Pape would disagree. Without the ideology, it's impossible to delineate these groups' substantive preferences.

The real problem is that Pape, like so many political scientists, abandons all nuance in deriving policy programs from his work.

As I see it, the cause of this unsubtle approach is political scientists' obsession with statistics, a pursuit that dulls their sensitivity to the complexity of real-world political events. If numbers are your thing, you're going to have a hard time explaining why Israelis and Palestinians have spent five decades fighting over narrow tracts of land.

I agree with Adesnik that one can draw different conclusions from Pape's findings than he does -- and this is a weakness in the paper. However, to attribute this to Pape's obsession with statistics is amusing on a number of levels, many of which Chris Lawrence explained. Let's just say that Bob Pape would not be considered welcome at a meeting of the large-N brotherhood at APSA. Indeed, Pape fully supports the Perestroika movement that I've discussed previously.

So then, what is to be done? As you might have heard, many political science programs require training in statistics but not foreign languages. That trend has to be sharply reversed. Learning foreign languages promotes immersion in foreign cultures and ideas, which in turn make it hard to ignore the role of those cultures and ideas in the realm of politics. Given that politics is an art rather than a science, there is no substitute for getting inside the minds of those we study.

I'm perfectly happy to see more cultural immersion, but the notion that such training will automatically induce greater understanding is horses@&t. Witness the self-criticisms -- or rather, the lack thereof -- within the Middle Eastern Studies community in the wake of 9/11. These people are deeply immersed in the culture and language of the Arab peoples. Is Adesnik really suggesting that people like Edward Said can enlighten us about the region?

In conclusion, politics is an art and a science, a simple fact that many people within and without political science seem incapable of understanding.

And for Pete's sake, read the whole paper before penning a jeremiad like that.

UPDATE: Adesnik continues on his jeremiad in this post (though he's right on Moneyball). He gets it wrong again when he says:

The great flaw of modern political science is its desire to imitate microeconomists (and share in their prestige) by developing theorems that explain and predict the behavior of rational actors. Of course, that is exactly the wrong way to go about things. It is only when political scientists recognize that ideas and values are what drive politicians and voters that they will begin to produce something worthy of the name "science".

Chris Lawrence explains what's wrong with this statement.

ANOTHER UPDATE: David Adesnik responds in non-jeremiad fashion. See also Josh Chafetz.

posted by Dan on 09.25.03 at 12:48 AM




Comments:

One of the things that amuses me about this debate is that, with few exceptions--Gaddis being one of the more notable ones--historians tend to be very methodologically sloppy. They do good archival work but don't feel any compunction about picking the facts that fit their theory and telling the story they want to tell.

posted by: James Joyner on 09.25.03 at 12:48 AM [permalink]



I agree with everything said in this post. I think a great number of qualitative researchers will also be offended by Adesnik's anthropological view of what the discipline ought to be like. By the way, most departments still have a language requirement or allow substitution between languages and statistics. Political science is actually quite a diverse discipline and will remain so.

Another nice example of "large-N work" that gives different insights than deeply immersing oneself in a single case is that of James Fearon and David Laitin on civil wars. While they don't argue that ethnic, economic, and religious grievances are unimportant, they point out that such grievances exist in a great many countries but civil wars erupt in only a few of them. This leads them to an analysis of the conditions under which such grievances are likely to lead to insurgency, a point missed by most case studies. For a nice summary, see: http://news-service.stanford.edu/news/september25/civilwar-925.html

posted by: Erik on 09.25.03 at 12:48 AM [permalink]



"Is Adesnik really suggesting that people like Edward Said can enlighten us about the region?"

I think the general point has very little to do with Said, though he makes for a convenient punching bag. For a lot of the things political science studies there are no datasets, and in many cases learning the language and/or culture of a foreign country is the best way to do the research. I don't think the argument is that this leads "automatically" to greater understanding. (Much like having a great p-value doesn't make for more rigorous or scientific results.)

posted by: Seb on 09.25.03 at 12:48 AM [permalink]



Since I don't know anything about the PS field, for me the thing that popped out about David's essay was this:

"For example in support of the assertion that suicide attacks are not connected to religion....The [Tamil] Tigers' behavior only reinforces the belief that suicide bombing is a product of ideological extremism..."

But the idea that it's extremist ideology, not religion per se, was Pape's whole point, wasn't it? David actually seems to be agreeing with him, while saying he's not.

posted by: Kevin Drum on 09.25.03 at 12:48 AM [permalink]



I'm sure you didn't know this at the time, and I have no love for Edward Said's work or beliefs at all, but it is unfortunate that you criticize him by name right when he has just died.

posted by: John Thacker on 09.25.03 at 12:48 AM [permalink]



Hey! Don't misquote me. I wasn't agreeing with Adesnik. I just think that the discipline should not exclude qualitative research. APSR hasn't published a qualitative paper in ages. This is leading to some resentment against quant-jocks. But not from me!

posted by: Laura on 09.25.03 at 12:48 AM [permalink]



Laura, Pape's paper is, for all intents and purposes, a qualitative paper. He presents data, but all the inference is qualitative (eyeballing the numbers and drawing conclusions based on that, rather than using any regressions or chi-squares or anything remotely quantitative).

Speaking of the August APSR, my copy seems to have disappeared... I wonder what I did with it? Grr.

posted by: Chris Lawrence on 09.25.03 at 12:48 AM [permalink]



BTW, Josh Chafetz now has a post up at OxBlog dissenting in part and agreeing in part.

posted by: Chris Lawrence on 09.25.03 at 12:48 AM [permalink]



Glad to see people are interested in this esoteric topic. Not glad that no one agrees with me!

In any event, my response is now up on OxBlog.

posted by: David Adesnik on 09.25.03 at 12:48 AM [permalink]



Although I tend to agree with Dave on the abuse of statistics, I'm rather more in favor of people knowing enough about statistics that they can attempt to abuse them in the first place.

That is to say: at least with a language - albeit a mathematical one - we can disagree on the same terms. Otherwise Poli-Sci might as well go by Poli-div(inity).

How about this: I'll support your argument to have language requirements replace stats at the university - if you support my movement to cram a statistics requirement up the soft-gray mass of the nation's sixth-graders?

posted by: Art Wellesley on 09.25.03 at 12:48 AM [permalink]



David is RIGHT on the money to send a warning shot to political "scientists" attempting to quantify their ART.

I am a 20-year practitioner in finance and an author of several academic journal articles. The academic portion of finance has been hijacked for the last 50 years by quants, and they are in the process of ruining the practicing portion with their ideas of causality and precision. (They have already ruined the economics profession - now a laughing stock in the marketplace.)

The problem for the quants in the social sciences is that their models etc. must predict HUMAN BEHAVIOUR. It is not enough to explain what HAPPENED (we know this already) or relationships (we know these intuitively); you must be able to predict what WILL happen. In short: time-varying parameters (and variables) and shifting distributions over time. Lack of autocorrelation, yada yada. It can't be done. Give it up. It's crap.

Go back to your world of intellectual insecurity. You are smarter than you think.

posted by: Ken on 09.25.03 at 12:48 AM [permalink]



Quoth Ken:

"The problem for the quants in the social sciences is that their models etc must predict HUMAN BEHAVIOUR. It is not enough to explain what HAPPENED (we know this already)or relationships (we know this intuitively), you must be able to predict what WILL happen."

Ken, I think that you misplaced the emphasis in your post; the emphasis should have been on the word PREDICT, and not on the words following (human behavior).
If this (prediction) is the criterion of "scientificness," then meteorology AND aeronautical engineering are NOT sciences, since both confront real-world problems that are analytically intractable, and therefore neither field is able to deterministically predict outcomes. (Incidentally, they get around this problem by simulation, which, in my opinion, is the direction in which political science should be moving.)

posted by: oneangryslav on 09.25.03 at 12:48 AM [permalink]



The more I do analysis of large-n data sets, the less I like it. Measurement issues and model specification all too often give researchers crucial leeway. I'm not talking John Lott here; instead I'm talking about decisions like which measure of poverty to control for, etc. The term of art in the academy is "publication bias," but it can have an ideological bent as well.

And all too often we fetishize our measurements. (Which, in retrospect, is an interesting choice of words, but there it is.) For example, maternal education is frequently used as a proxy for parental background. It does leave out a lot of other relevant info, but if you have maternal education data, you've covered yourself. Speaking of education, we use any test data we can find as an outcome measure, even if the test isn't based on the curriculum, has a ridiculous psychometric margin of error, etc. And don't get me started on the legislative studies practice of using interest group ratings to control for legislative ideology, such that models explain the votes of members of Congress based on how they voted in the past. Predictive, yes. Adding to our understanding? Not likely.

Can quantitative analysis, maximum likelihood estimation, etc. add to our understanding? Yes. Of course. But here I'm reminded of John Lott again. It can increase your understanding even more than a gun can keep your home safe. But it is still subject to similar misfirings.

posted by: riume on 09.25.03 at 12:48 AM [permalink]








