I find this article very interesting, both for its main quantitative result and for its inconclusiveness about that result!
Let me explain. The journals' peer review process is believed to be the gatekeeper of scientific quality, and many studies have tried to measure whether such a system "works". This article compiles quantitative results on the question of whether independent reviewers of the same article agree with each other. Several indicators now allow us to conclude that reviewers agree only slightly more often than would be expected by chance alone.
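To make "agree only slightly more often than chance" concrete, here is a minimal sketch of Cohen's kappa, a standard chance-corrected agreement indicator used in inter-reviewer studies. The reviewer verdicts below are hypothetical, invented purely for illustration; the article's own data and choice of indicators may differ.

```python
# Cohen's kappa: observed agreement corrected for the agreement
# two independent raters would reach by chance alone.
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two raters (kappa statistic)."""
    assert len(ratings_a) == len(ratings_b) and ratings_a
    n = len(ratings_a)
    # Observed agreement: fraction of identical verdicts.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Expected agreement if the two raters were statistically independent.
    freq_a = Counter(ratings_a)
    freq_b = Counter(ratings_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical verdicts from two independent reviewers on 10 manuscripts.
rev1 = ["accept", "reject", "reject", "accept", "reject",
        "reject", "accept", "reject", "reject", "reject"]
rev2 = ["accept", "reject", "accept", "reject", "reject",
        "reject", "reject", "reject", "accept", "reject"]
print(round(cohens_kappa(rev1, rev2), 2))  # -> 0.05
```

Here the two reviewers agree on 6 of 10 verdicts (60%), yet kappa is only about 0.05: because both reviewers mostly say "reject", chance alone already predicts 58% agreement. A kappa near zero is exactly the "little more often than chance" pattern the article reports.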
Then, just when most readers - myself included - would expect a discussion of the meaning of such a result, we get a wise "so what?". Indeed, on the one hand, low agreement might mean that articles are effectively selected at random. On the other hand, it can also be read as an indicator of the success of peer review, since "it represents a wide sample of the various views on what is good and valuable research".
I therefore believe the real questions raised by the article are: what do we expect from journal peer review? Can you think of any measurable quantitative indicator that could unambiguously show whether journal peer review works? Think about it! It is not such a simple task!
Here are my personal answers and how I interpret the main result of this article.
Peer review is the foundation of science: it is an adversarial debate based on scientific arguments that eventually gets resolved and converges on consensus. It takes time, and that consensus is what promotes a scientific thesis to the status of scientific fact. By contrast, in the context of a journal, what is referred to as "peer review" is not a debate but a selection process carried out by roughly two people (and which requires anonymity for that very reason, unlike a debate), with a beginning and an end decided by an editor. I suggest calling it "peer trial" rather than peer review.
The inconclusiveness of the article stems from conflating these two notions. Contradiction is the necessary starting point of a scientific debate (i.e. proper peer review), but our collective experience tells us that in the context of a journal it is more often the end point: rejection by an editor who wants to "play it safe" rather than engage in a process of constructive critique.
If we agree that scientific peer review and peer trial are different things, then I believe this article in fact has a clear conclusion: peer trial (i.e. journal peer review) is a terrible failure as a process for selecting articles by quality. Proper scientific peer review cannot take place within a journal.