Frequently I've seen the claim that articles published in historical or medical journals are somehow more reliable than those in law reviews, since the former are subject to "peer review," i.e., review by other scholars in the field, whereas law reviews are edited by student editors. As the career of Mr. Bellesiles suggests, peer review doesn't prevent little things like wholesale invention of evidence or gross misreading of authorities. I suspect that, unless the peers have specific expertise in the subject, to the point where they know the evidence by heart, the review amounts to little more than "I like his conclusion."
UPDATE: A comment makes an excellent point. In the hard sciences, peer review basically functions to keep out crackpots and people with highly questionable theses; there, the work is grounded in reality and established method. But in fields like history, it is precisely the "innovation" -- that is, what may well be crackpot -- that is cherished, and only the very narrow thesis is objectively provable. A reviewer could do what law review editors do -- check every footnote to make sure the authorities cited actually support the statement -- or could skip that and simply judge whether the argument sounds interesting and doesn't clearly conflict with anything the reviewer knows to be true. (And in the case of gun ownership as evidenced in original probate records, odds are the reviewer knows nothing of the question. I do wonder, though, that none of Bellesiles's reviewers bothered to read the militia statutes he claimed to cite.)
The other problem comes when a discipline is met with an entirely new extension, i.e., treating gun violence as if it were a disease. The average medical expert has little grasp of criminology or its analytics, and "sounds like a good idea to me" will prevail. An argument that walnut hulls will cure cancer will meet with skepticism and detailed verification of its claims; an argument about whether guns are a net benefit in light of self-defense will not.
Clayton Cramer has a post on this article in Inside Higher Ed, which describes a study in which researchers actually read published, peer-reviewed articles and compared them to the sources they cited.
"31 percent of the references in public health journals contained errors, and three percent of these were so severe that the referenced material could not be located.”
“[A]uthors’ descriptions of previous studies in public health journals differed from the original copy in 30 percent of references; half of these descriptions were unrelated to the quoting authors’ contentions.”
So much for peer review.
Hat tip to Joe Olson...