JC Lehmann 2006 Measuring scientific quality
Measures for measures

by Sune Lehmann¹, Andrew D. Jackson² and Benny E. Lautrup²

1. Sune Lehmann is at the Department of Informatics and Mathematical Modeling, Technical University of Denmark, Lyngby.
2. Andrew D. Jackson and Benny E. Lautrup are at The Niels Bohr Institute, Blegdamsvej 17, DK-2100, Copenhagen, Denmark.

Are some ways of measuring scientific quality better than others? (no designated abstract)

* PMID 17183295 (only title & links)
* Nature website: http://www.nature.com/nature/journal/v444/n7122/full/4441003a.html (subscription required)

summary

The authors test the usefulness of various measures of scientific achievement. They bin their test group of SPIRES authors (physics) into 10 groups according to each measure and determine how often authors initially assigned to a given group are predicted to lie in a different one. Alphabetical order of author names is included as a control, a sorting criterion that carries no information about quality, and is compared to papers per year, the h-index, and the mean number of citations per paper. [1]
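The measures themselves are simple to compute. The following Python sketch (not the authors' actual SPIRES analysis; the Author record and the example data are hypothetical) shows one way to calculate the h-index, mean citations per paper, and papers per year for an author, and to bin a set of authors into 10 groups by a chosen measure.

 from dataclasses import dataclass
 
 @dataclass
 class Author:
     """Hypothetical per-author record; the real study uses SPIRES data."""
     name: str
     career_years: float   # length of publishing career in years
     citations: list[int]  # citation count of each paper
 
 def h_index(citations: list[int]) -> int:
     """Largest h such that the author has h papers with at least h citations each."""
     ranked = sorted(citations, reverse=True)
     return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)
 
 def mean_citations(citations: list[int]) -> float:
     """Mean number of citations per paper."""
     return sum(citations) / len(citations) if citations else 0.0
 
 def papers_per_year(author: Author) -> float:
     """Publication rate: industry rather than ability."""
     return len(author.citations) / author.career_years
 
 def bin_into_groups(authors: list[Author], measure, n_groups: int = 10) -> dict[str, int]:
     """Sort authors by a measure and assign each to one of n_groups equal bins (0 = lowest)."""
     ordered = sorted(authors, key=measure)
     n = len(ordered)
     return {a.name: min(n_groups * i // n, n_groups - 1) for i, a in enumerate(ordered)}
 
 if __name__ == "__main__":
     # Toy data, for illustration only.
     authors = [
         Author("A", 12.0, [40, 25, 10, 8, 3, 1]),
         Author("B", 5.0, [2, 2, 1, 0]),
         Author("C", 20.0, [120, 60, 15, 5]),
     ]
     for a in authors:
         print(a.name, h_index(a.citations),
               round(mean_citations(a.citations), 1),
               round(papers_per_year(a), 2))
     print(bin_into_groups(authors, lambda a: h_index(a.citations)))

The sketch covers only the bookkeeping (computing each measure and assigning group membership); the predictive test described above, i.e. checking whether an author's initially assigned group is recovered, is not reproduced here.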

Surprisingly, papers per year fares about as well as alphabetical order, i.e. it is not a good measure of quality. The h-index and the mean number of citations per paper both perform better, with the mean being slightly the better of the two.

The authors state that many institutions use doubtful measures of quality. The impact factor, they point out, does not reflect the impact of a single publication, since it describes only the journal as a whole: the citation rate of an individual paper is largely uncorrelated with the impact factor of the journal it appears in. They also comment that one of the most widely used measures of scientific quality, the average number of papers published by an author per year, is at best a measure of industry rather than ability.

comments

This is a great paper and an overdue critical look at the usefulness of some indicators that our institutions and funding agencies are using/abusing as measures of quality. A must read. Jasu 10:11, 5 April 2007 (EDT)

links