Friday, October 29, 2004
The scientific method revealed!!
Henry Farrell posts on a tongue-in-cheek article in PS: Political Science & Politics. As Chris Lawrence observes, the highlight of the short essay is a footnote explaining the scientific method:
It's funny because, all too often, it's true.

posted by Dan on 10.29.04 at 06:39 PM

Comments:

They forgot the step in between 2 and 3 -- "Apply for Grant."

posted by: Kieran Healy on 10.29.04 at 06:39 PM [permalink]

I don't get it. What's the joke?

posted by: Flipper on 10.29.04 at 06:39 PM [permalink]

Flipper: I am not a political scientist or any other kind of scientist, but I think I get it. There's a reality out there. A scientist should study the literature and from this literature deduce a hypothesis about that reality. He should then collect data from reality and test his hypothesis. If you take data first, then look for patterns, and then construct a hypothesis post hoc, you are cheating. The literature, not post hoc evaluations, should direct research. #3 refers to the straw-man technique. Honest debate and inquiry require an acknowledgement and debunking of alternative theories. If you only use alternative theories that are weak, or deliberately oversimplified or misrepresented, you aren't being an honest scientist.

I think, however, this footnote is missing a few important steps:

1. Massage your data according to the opinions of those on the tenure committee.
2. Massage your data according to the opinions of those you want to have sex with at the next conference.

As I said, I am not a scientist, but I did help maintain computers at a lab once, and I learned about the scientific method doing shots with the scientists in the computer room.

posted by: Tito on 10.29.04 at 06:39 PM [permalink]

"Massage your data according to the opinions of those you want to have sex with at the next conference." This apparently proves I'm going to the wrong conferences.

posted by: Chris Lawrence on 10.29.04 at 06:39 PM [permalink]

I'm sorry, but those five steps lead to clearly biased conclusions. I think you can reduce the error by adding Step 3.5, where the hypotheses are ranked by a large number of people from a variety of educational, socioeconomic, religious, and political backgrounds. Sometimes it helps to extract related information from other studies that sound similar and pool the data for meta-analysis based on the observations from Step 4.

(Uh - I had to hyphenate that dirty word above - seems Dan's site is averse to those who get wood through chemical means. It contains 'see-al-iss' (oh my god - I'm blushing) in the middle.) (This is cr*p; we don't need four more years of this kind of ass-ignorant censorship!)

posted by: germ on 10.29.04 at 06:39 PM [permalink]

So this must be the method by which you chose to choose Kerry, eh?

posted by: chrisapps on 10.29.04 at 06:39 PM [permalink]

I think this is why good social scientists run several different statistical tests using multiple and independent measures of their variables. They also test their hypotheses against critical cases (cases where their hypothesis is *least* likely to hold true). It's also an argument for using different methods to test the same hypothesis again and again (since stats only show potential relationships and patterns; they do not necessarily show causality). It's like the old Snickers bar commercial: if no matter how you slice the data, it still comes up "nuts," then maybe you've got something there. But if you have to turn your head and squint to see the relationship, then try, try again. Sounds like someone is on a hiring committee this year, lol?

posted by: Zak Taylor on 10.29.04 at 06:39 PM [permalink]

"So this must be the method by which you chose to choose Kerry, eh?" Sorry to disappoint you, chrisapps. This is what your current administration did to the bioethics committee (Scientific American, May 2004). This is just a drop in the bucket of examples that the Bush administration only listens to what it wants to hear, and will stop at nothing to stifle legitimate, but opposing, discourse.

posted by: germ on 10.29.04 at 06:39 PM [permalink]

Addressed to Tito (above): In the natural sciences, it is certainly not cheating to take data first and then form a hypothesis (political science? please...). It is necessary to test that hypothesis on an independent data set, though. Most people outside the sciences have a very weird conception of how it is done, probably because textbook science is "cleaned up" and simplified. No one, and I mean no one, sits in their office daydreaming about their chosen subject, comes up with a testable hypothesis, designs the experiment, collects the data, and publishes the results. That is just not the way real science is done. It may be the way political science is done (sorry! I just can't help myself! Obviously the rigorous, sound, brilliant scientists of this site, CT, and other notables are excepted from this).

posted by: Paul Orwin on 10.29.04 at 06:39 PM [permalink]

Well, Paul, it is the "deductive ideal" as taught in textbooks. I don't know any empirically oriented political scientists who follow the "deductive ideal," though. Of course, the normativists (i.e., the people who study politics using other methods of inquiry) then use the fact that we don't live up to the ideal to beat us over the head, so we have to pretend we do. Or something.

posted by: Chris Lawrence on 10.29.04 at 06:39 PM [permalink]

No offense, but the deductive ideal is bullshit. It is an interesting idea in philosophy, but like so much "philosophy of science," it has nothing to do with the ostensible questions of that field: "What is science?" and "How do you do science?" I think it stems from a philosophical bond to the Platonic ideal: that concepts are floating in some ineffable ether, waiting to be plucked out and tested by scientists in white coats and horn-rimmed glasses.

It seems to me that most science works about like this: A senior-level (read: full professor) scientist has a long track record of experimentation and some open questions he wants answered. He (and it almost always is a he) gives you a choice of questions to answer, and you go off and try. While you are doing this, anomalous results crop up, and you try to figure out why. Eventually, some unexpected twist takes you to the magic "LPU" (all scientists know what that stands for). You publish your results, get your degree, and move on. Eventually, you get your own lab and start trying to figure out what to do. Either you 1) follow on with stuff someone else you worked for did, or 2) try to think of something new. In the name of tenure, most choose 1). In the course of this, some weird result that you keep getting leads you to a new idea, and then you follow that for a while. If you are smart and focused, you can develop a pretty interesting research program based essentially on figuring out what went wrong in a few preliminary experiments! And that, I'm afraid, is how science works for most people.

posted by: Paul Orwin on 10.29.04 at 06:39 PM [permalink]
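Orwin's point about independent data sets (and Tito's worry about post-hoc pattern-hunting) can be illustrated with a short simulation. This is a hypothetical sketch, not anything from the thread: the variable names and the `pearson` helper are invented for illustration. In pure noise, the "best" of many candidate predictors tends to look impressive on the very data used to find it, and the effect shrinks toward zero when that same pre-chosen hypothesis is re-tested on fresh, independent data.

```python
# Why post-hoc pattern-hunting needs an independent test set: search
# pure noise for the "best" of many predictors, then re-test only that
# pre-chosen predictor on fresh noise and watch the correlation fade.
import random
import statistics

random.seed(0)
N_ROWS, N_PREDICTORS = 200, 20


def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5


def noise_dataset():
    """Outcome and all predictors are independent Gaussian noise."""
    outcome = [random.gauss(0, 1) for _ in range(N_ROWS)]
    predictors = [[random.gauss(0, 1) for _ in range(N_ROWS)]
                  for _ in range(N_PREDICTORS)]
    return outcome, predictors


# Step 1: "exploratory" data -- pick the predictor that fits best post hoc.
y_explore, xs_explore = noise_dataset()
corrs = [pearson(x, y_explore) for x in xs_explore]
best = max(range(N_PREDICTORS), key=lambda i: abs(corrs[i]))
print(f"post-hoc best predictor #{best}: r = {corrs[best]:.3f}")

# Step 2: independent data -- re-test only that pre-chosen predictor.
y_test, xs_test = noise_dataset()
r_test = pearson(xs_test[best], y_test)
print(f"same predictor on fresh data:  r = {r_test:.3f}")
```

The exploratory correlation is biased upward precisely because it was selected as the maximum over twenty tries; the second run is the honest test, which is all Orwin is asking for.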