A debate among political scientists erupted this week over a New York Times op-ed by Dr. Jacqueline Stevens arguing that "Political Scientists are Lousy Forecasters." The debate comes amid efforts in Congress to bar political science research from competing for National Science Foundation grants.
The very discipline whose primary purpose is to deepen our understanding of democratic processes and institutions, and to pass that knowledge along through publishing and teaching, is being attacked as both too partisan and too inconsequential to deserve public support.
"It's an open secret in my discipline: in terms of accurate political predictions (the field's benchmark for what counts as science), my colleagues have failed spectacularly and wasted colossal amounts of time and money," wrote Stevens, professor of political science at Northwestern University.
The context of Stevens' op-ed is a House of Representatives vote in May that would eliminate National Science Foundation (NSF) funding for political scientists. The vote fell mostly along party lines, with Republicans voting to eliminate the funding and Democrats voting to preserve it. An effort is currently underway to pass a similar measure in the Senate.
Political science is the only social science that would be eliminated from consideration for NSF grants. Historians, economists and sociologists, for example, could still compete for the grants. This would not be the first time that political science has been the target of Republican lawmakers. In 2009, Sen. Tom Coburn (R-Okla.) introduced an amendment that would have done the same.
Political scientists went to their blogs to defend themselves against their colleague's attack.
Henry Farrell, associate professor of political science at George Washington University, gave the most thorough critique of Stevens at The Monkey Cage, one of the most popular political science blogs.
Stevens' claim that accurate predictions represent the "benchmark for what counts as science" was roundly condemned by her critics.
"There really isn't much work at all by political scientists that aspires to predict what will happen in the future," Farrell wrote.
Steve Saideman, Canada Research Chair in International Security and Ethnic Conflict at McGill University, agreed, writing, "most of us do not aspire to provide accurate predictions of single events. Most of us seek to understand the causes of outcomes, ... Indeed, having more understanding should allow us to develop expectations. Having less understanding or no understanding is probably not the pathway to predicting anything."
"Thumb through any of our scholarly journals, and you will find almost no forecasts of future political events. We are in the explaining business, not the predicting business," added Seth Masket, associate professor of political science at the University of Denver.
One of the main criticisms of Stevens concerns her characterization of the study that served as the primary evidence for her conclusions.
Philip Tetlock, a political psychology professor at the University of Pennsylvania, published a study in 2005 that evaluated the prognostication abilities of 284 experts over 20 years.
Tetlock concluded that, in Stevens' words, "chimps randomly throwing darts at the possible outcomes would have done almost as well as the experts."
(This quote became the muse for The New York Times graphics department, which drew a monkey throwing darts at dartboards labeled with words like "China," "Middle East," "change," "peace" and "crisis" to accompany Stevens' op-ed.)
Stevens' critics found much wrong, and a bit of irony, in her use of Tetlock to defend her thesis.
Stevens' criticism is aimed at political scientists in academia, and at those who conduct quantitative research in particular. Tetlock's sample, though, includes a far wider range of experts than academic political scientists doing quantitative work, Farrell pointed out. Tetlock counted as an expert anyone whom the media would designate one. Indeed, one would not even need a Ph.D., much less one in political science, to be included in Tetlock's sample.
Tetlock's study was not aimed at evaluating how well political scientists make predictions. Rather, it studied the predictive ability of experts who inform public debate and advise governments, noted Erik Voeten, associate professor of geopolitics and justice at Georgetown University.
Interestingly, Tetlock also concluded that the media tend to favor those who are poor predictors, because poor predictions are often easier to package for readers. Accurate predictions, in other words, are often too complicated and messy for the media to explain to their audiences.
Ironically, Voeten draws the opposite conclusion from Tetlock's research: since expert opinion is so poor at prediction, quantitative models, like the kind that NSF funding supports, have comparatively better predictive ability and become all the more important for advancing human knowledge.
"The second point is that simple quantitative models generally do better at prediction than do experts, regardless of their education. This is not because these models are that accurate or because experts don't know anything but because people are terrible at translating their knowledge into probabilistic assessments of what will happen," Voeten writes.
Voeten supports this view by pointing to research showing that a political science model predicted the outcomes of Supreme Court cases much better than constitutional law experts did.
Several of the critics wrote that Stevens has a fundamental misunderstanding of her own discipline.
For instance, Anton Strezhnev, a Ph.D. candidate in government at Harvard, wrote, "Professor Stevens' definition of what constitutes scientific knowledge is remarkably limiting. Prof. Stevens is staking out a very extreme position by implying that the existence of randomness – 'messy realities' as she calls it – makes all attempts at quantification meaningless."
At the end of her argument, Stevens does not suggest doing away with NSF funding for political science, as House Republicans have voted to do, but changing the way it is allocated. She would prefer a lottery "to shield research from disciplinary biases of the moment."
Saideman believes this indicates that Stevens' op-ed actually represents an older debate, known as the "Perestroika" movement, over what constitutes legitimate political science research.
For her last sentence, Stevens writes, "I look forward to seeing what happens to my discipline and politics more generally once we stop mistaking probability studies and statistical significance for knowledge."
Saideman sees in this sentence a reflection of Perestroika, which has been around for a couple of decades now. Those within the movement had grown frustrated that many of the top political science journals were mostly publishing quantitative research (studies using statistical techniques and large data sets) rather than qualitative research (such as interviews, focus groups and participant observation).
"Aha!" Saideman wrote, "See, this was not about predicting stuff, it was about quantitative work. Her animus is not really about failing to predict the end of the Cold War – which qualitative analysts also did not predict. Her target is, despite the token lines in the conclusion about getting some data, quantitative work."
Saideman was also angered that Stevens, "a left-wing political scientist," was providing fuel for Republican attempts to defund NSF political science grants.
On her blog, Stevens described her critics' arguments as "ferocious, incoherent, illogical and often ad hominem." She has not yet responded specifically to any of the criticisms, but promises to do so sometime next week.
Disclosure: This reporter is a member of the American Political Science Association.