Harvard University's Michèle Lamont on navigating the world of fundraising and grant writing for educators.
Lamont is Robert I. Goldman Professor of European Studies and Professor of Sociology and African and African American Studies, and Senior Adviser on Faculty Development and Diversity in the Faculty of Arts and Sciences at Harvard University. She is the author of How Professors Think: Inside the Curious World of Academic Judgment (Harvard University Press, 2009).
With the passage of the Recovery Act, the National Science Foundation and the National Institutes of Health will be funding research that had been rejected in previous rounds of evaluation. President Obama announced this program with much fanfare, describing it as "the biggest increase in basic research funding in the long history of America’s noble endeavor to better understand our world . . . Just as President Kennedy sparked an explosion of innovation when he set America’s sights on the moon, I hope this investment will ignite our imagination once more, spurring new discoveries and breakthroughs that will make our economy stronger, our nation more secure, and our planet safer for our children."
This rhetorical flourish contrasts with the simple fact that the administration is asking funding agencies to "revisit" past panel decisions; good proposals that went unsupported in previous rounds due to budgetary constraints will now receive funding. This is quite extraordinary from the perspective of academic evaluation, and the turnabout raises the question: How good is good enough? How do we know that proposals that were turned down three months ago should now be funded? Where is the bar?
Academics live and die by peer review. Unbeknownst to most people, we spend an inordinate amount of our time evaluating one another by comparing all manner of scholarly output – articles, grant proposals, promotion files, book manuscripts, and so on. Making fine-grained distinctions requires extensive classification systems, which we use to determine what research has already been conducted and which questions matter from a range of perspectives. This is how academics come to define the frontiers of knowledge and the questions worth asking, and it is the topic I explore in How Professors Think: Inside the Curious World of Academic Judgment (Harvard University Press, 2009). The book sheds light on this world of evaluation by opening the black box of peer review: based on interviews with experts and observation of funding panels, I analyze how university professors go about evaluating one another, making decisions, and making sense of their activities and the world they inhabit.
Academics often talk about a "bar" we strive to remain above but sometimes find ourselves below. This legendary bar is intangible, yet most of us persist in looking for it. The same holds for "excellence" – a polysemic term if ever there was one. It is an empty vessel into which academics pour whatever they choose – often whatever inspires them. At times they place the most weight on originality; at other times, on significance, feasibility, or more evanescent qualities such as clarity, the ever-elusive elegance, intellectual sophistication, or the display of erudition. This is not to say that excellence doesn’t exist – simply that it is open-ended and variable. It all depends on what is being compared. Thus the "movable" (and at times, "removable") bar.
To better understand this process, think about the real estate market. Your prospective real estate agent visits your house, makes notes, returns to his office, looks at the comparables, and comes up with a tentative price for your house, based on the characteristics of the house itself and on the prices of similar houses in the area that have sold recently. If you are astute, you do some comparison shopping: you ask for estimates from several experienced realtors and take the average to figure out what your house is worth. For them as for you, the relevant points of comparison are somewhat indeterminate, as is the object being evaluated. Which houses should yours be compared with? What are the most appropriate dimensions for comparison? It all depends, as value is relationally defined and context is everything. Ditto for grant proposals.
NSF and NIH can now revisit their funding decisions because the "rejects" were relational rejects – rejects in a universe of very strong proposals, in which many "greats" went unfunded for a host of reasons, ranging from a poor match with the objectives of the funding program, to having too much or too little theory, to not being explicit enough about methods or deliverables. Knowing which groups of proposals any one proposal was compared with would be essential to determining its true value, because scholarly evaluation suffers from much inconsistency. Sometimes proposals are grouped by discipline, period, or geographic area, and sometimes by where the applicant’s name falls in the alphabet (woe to the Zs, whose proposals are read when the committee is weary). Sometimes they are grouped by topic (all the proposals concerning neglected authors hailing from an underrepresented group, for instance). Moreover, although proposals are often ranked on a single list, that ranking typically rests on several criteria that are not always commensurate – criteria that can be cross-cutting or even incompatible (as methodological rigor and originality are sometimes thought to be). This is so because different types of knowledge shine under different lights, with economics and history representing different types of academic excellence.
Where does that leave us? If "the bar," like "excellence," is largely a rhetorical trick of the trade – a way for academics to talk about this weird process they are constantly engaged in – then it refers to nothing concrete beyond the activity of judging, selecting, and most often, deselecting. In a world of overabundant scientific output, sustained by several thousand American institutions of higher education, there are far more competitors than there are resources.
Thus, there is no reason to lose sleep over our movable bar, since it has never really existed – at least not as a rigid reality. As long as American academics are able to survive the many checkpoints of quality control that define their lives while continuing to deliver the goods, American taxpayers should be able to put up with a bit of inconsistency and unpredictability. But we should not forget that the tragedy of expert judgment is that those who judge are also those best positioned to benefit from the system. In this, academics are no different from many other high-status professional groups who stand to gain from the Recovery Act.