"Peer review"
Frequently I've seen the claim that articles published in historical or medical journals are somehow more reliable than those in law reviews, since the former are subject to "peer review," i.e., review by other scholars in the field, whereas law reviews are edited by student editors. As the career of Mr. Bellesiles suggests, peer review doesn't prevent little things like the wholesale invention of evidence or the gross misreading of authorities. I suspect that, unless the peers have a lot of specific expertise in the subject, to the point where they know the evidence by heart, the review amounts to little more than "I like his conclusion."
UPDATE: A comment below makes an excellent point. In the hard sciences, peer review functions basically to keep out crackpots and people with highly questionable theses; there, science is grounded in reality and established method. But in fields like history, it is precisely the "innovation," i.e., what could well be crackpot, that is cherished, and only a very narrow thesis is objectively provable. A reviewer could do what law review editors do -- check every footnote to make sure the authorities cited actually support the statement -- or just skip that and judge whether the argument sounds interesting and doesn't clearly conflict with anything the reviewer knows to be true. (In the case of gun ownership as evidenced in original probate records, odds are the reviewer knows nothing of the question. I do wonder that none of Bellesiles's reviewers bothered to read the militia statutes he claimed to cite, though.)
The other problem comes when a discipline meets an entirely new extension, e.g., treating gun violence as if it were a disease. The average medical expert knows little of criminology or its analytics, so "sounds like a good idea to me" will prevail. An argument that walnut hulls will cure cancer will meet with skepticism and detailed verification of its claims; an argument about whether guns are a net benefit in light of self-defense will not.
Clayton Cramer has a post on this article in Inside Higher Ed, concerning a study that involved actually reading published, peer-reviewed articles and comparing them to the sources they cited.
"31 percent of the references in public health journals contained errors, and three percent of these were so severe that the referenced material could not be located.”
“[A]uthors’ descriptions of previous studies in public health journals differed from the original copy in 30 percent of references; half of these descriptions were unrelated to the quoting authors’ contentions.”
So much for peer review.
Hat tip to Joe Olson...
6 Comments
You're barking up the wrong tree, Mr. Hardy. Peer review isn't meant to verify references or proofread footnotes. It's meant to check that the paper's conclusions follow from the data and reasoning the paper presents (an internal consistency check) and -- more importantly -- don't blatantly contradict known reality in the field: a smell test of whether the paper under review is actually credible. The "peer" part of peer review is meant to ensure that the noses doing this smell test know what's reasonable within the field and what isn't.
Whether these goals are actually served by peer review as currently practiced in any given field is a valid and useful question. But merely noting that bad references and clumsy footnotes pass peer review isn't necessarily a condemnation of the system, since that sort of editorial proofreading is not really what peer review is meant to do.
What you're pointing out is basically that a lot of peer reviewers seem to be very sloppy, which is a bad sign but not a fatal flaw. Hard science papers do not stand or fall on their references and footnotes, after all; they depend on their evidence and on whether their conclusions match observable reality.
How well this works in the softer sciences, I really don't know. I am particularly ignorant of how reviewers within the field of history judge a paper. It doesn't seem that the criteria used in the hard sciences, or even in math, would apply well there -- you'd have to ask a historian, or maybe a philosopher of science who's spent time talking to historians, I suppose.
(In perfect fairness, I'm just as ignorant about how lawyers handle review. After all, don't you guys basically make up your "reality" as you go along? Well, by way of convincing the relevant court to go along with a new precedent, anyway...)
The term, "Peer Review" is very misleading. Ideally, it implies a review by someone of equal education and ability. All to often, this process deteriorates to a review by someone capable of urination. I presume this is to preclude review by the differentially living; i.e. dead.
Another problem with the peer review process is that no matter how inept, incomrehensible or inadequate a report may be, the reviewer is constrained from honestly reviewing the report for fear that person reviewed will return the favor. Another constraint to honest peer reviews is the possibilty of a poor review may jeopardize the funding of the report writer.
Peer review does vary by field. I am in information systems, and reviews of conference papers are often done by those who have submitted papers to that conference. Often this means Ph.D. students, but that is not necessarily bad; by that point a Ph.D. student is usually extremely well read in the field. The problem is that they are not usually equipped to say whether the research was actually done right.
A second problem, which has been addressed in some of the literature, is poor interpretation of prior work: people do not read an entire article but cherry-pick quotes that seem to support their viewpoint. Some of this is not deliberate, just a byproduct of the cherry-picking. It is often caught with very well known papers, but much harder to catch with papers that are not well known.
Within the IS field, review is done blind, and since there are many people in the field, one can say a paper is bad without fearing revenge. That does depend on the reviewer, though, because you are supposed to give solid reasons why the paper is bad, and that is key to raising the level of the papers.
Reviews for journals are a bit different, in that the reviewers are usually at the top of the profession and are looking to keep the journal's reputation high. Most journals therefore try to make sure a paper is of high quality and that everything in it is accurate.
After more than 25 years performing reviews as a Ph.D. scientist, my opinion is that most peer reviews are bunk. Papers by "accepted" authors, good ol' boys, get published; those outside the mainstream do not. I worked for a decade to establish a double-blind system, so that the reviewer did not know whom he/she was reviewing and the reviewee did not know who performed the review; that information was to be held in confidence by the current senior editor. This works sometimes. But often papers are accepted simply because the author is a buddy or a friend, and rejected because the author is not PC or is just disliked by the reviewer. On one of my first papers, I was attacked because the reviewer didn't like my dissertation advisor; I had to write a six-page scientific rebuttal (for a five-page paper) to get the paper published. I've turned down papers that were extremely questionable, supplying copious explanations as to why, only to have the editor say the other reviewers didn't have any problem with it. So the editor just picks another reviewer and away they go.
Peer review is intended to make scientific publication the best it can be, but all too often the same forces that drive everything else drive the review process.
I am in the medical science field, and during my Ph.D. most of the peer review was done by the grad students. My mentor would get an article and then send it out to the lab, asking who wanted to review it. Not really much different from law journals.