JC Lehmann 2006 Measuring scientific quality

back to journal club

Measures for measures by Sune Lehmann1, Andrew D. Jackson2 and Benny E. Lautrup2

1. Sune Lehmann is at the Department of Informatics and Mathematical Modeling, Technical University of Denmark, Lyngby.
2. Andrew D. Jackson and Benny E. Lautrup are at The Niels Bohr Institute, Blegdamsvej 17, DK-2100, Copenhagen, Denmark.

Are some ways of measuring scientific quality better than others? (there's no real abstract for this article)

summary

The authors test the usefulness of various measures of scientific achievement. Using the citation records of authors in the SPIRES database (high-energy physics), they rank authors by each measure and divide them into 10 equal bins. A measure's quality is then judged by how often an author initially assigned to a given bin would be predicted to lie in a different bin on the basis of their individual papers' citations. Alphabetical order serves as a control measure that carries no quality information; it is compared against papers per year, the h-index, and the mean number of citations per paper. [1]
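As a rough illustration of the binning step (not the paper's actual Bayesian analysis; the scores and bin counts below are made up), in Python:

def decile_bins(scores, n_bins=10):
    # Rank authors by a measure and assign each to one of n_bins
    # equal-occupancy bins (0 = lowest bin, n_bins-1 = highest).
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    bins = [0] * len(scores)
    for rank, idx in enumerate(order):
        bins[idx] = rank * n_bins // len(scores)
    return bins

# hypothetical mean-citations-per-paper scores for 5 authors
scores = [2.5, 40.0, 7.1, 0.3, 12.9]
print(decile_bins(scores, n_bins=5))  # [1, 4, 2, 0, 3]

A measure with no information content (like alphabetical order) scatters an author's predicted bin uniformly across all bins; a good measure concentrates the prediction on the author's actual bin.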

Surprisingly, papers/year fares about as well as alphabetical order, i.e. it is essentially no measure of quality at all. The h-index and the mean number of citations per paper both do better, with the mean being slightly superior.
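For reference, the two citation-based measures can be computed from an author's per-paper citation counts as follows; a minimal sketch, with made-up citation counts:

def h_index(citations):
    # Largest h such that the author has h papers with
    # at least h citations each (Hirsch's definition).
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

def mean_citations(citations):
    # Mean number of citations per paper.
    return sum(citations) / len(citations) if citations else 0.0

papers = [42, 17, 9, 6, 3, 1, 0]  # hypothetical citation counts
print(h_index(papers))                   # 4
print(round(mean_citations(papers), 1))  # 11.1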

The authors state that many institutions use doubtful measures of quality. They point out that the impact factor does not reflect the impact of a single publication, since it describes only the journal as a whole: citation rates of individual papers are largely uncorrelated with the journal's impact factor. They also comment that one of the most widely used measures of scientific quality, the average number of papers published by an author per year, is at best a measure of industry rather than ability.

comments

This is a great paper and an overdue critical look at the usefulness of some indicators that our institutions and funding agencies are using/abusing as measures of quality. A must read. Jasu 10:11, 5 April 2007 (EDT)

links