JC Lehmann 2006 Measuring scientific quality

From OpenWetWare

Current revision (05:12, 6 November 2007)

{{Back to journal club}}__NOTOC__

{| border=1 width=600px cellpadding=10
|bgcolor="lightcyan"|
<font size="+1">'''Measure for measures'''</font>

by Sune Lehmann<sup>1</sup>, Andrew D. Jackson<sup>2</sup> and Benny E. Lautrup<sup>2</sup>

<small>1. Sune Lehmann is at the Department of Informatics and Mathematical Modeling, Technical University of Denmark, Lyngby.<br>
2. Andrew D. Jackson and Benny E. Lautrup are at The Niels Bohr Institute, Blegdamsvej 17, DK-2100, Copenhagen, Denmark.
</small>

'''Are some ways of measuring scientific quality better than others?''' (no designated abstract)

* PMID 17183295 (only title & links)
* [http://www.nature.com/nature/journal/v444/n7122/full/4441003a.html Nature website] [[Image:Padlock-closed.png]]
|}
== summary ==
The authors test the usefulness of various measures of scientific achievement. They divide their test group of SPIRES authors (physics) into 10 groups according to one measure and then check how often authors initially assigned to a given group are predicted to lie in a different group by another measure. Alphabetical order serves as a baseline that carries no information about quality; it is compared against papers per year, the h-index, and the mean number of citations per paper. [http://www.nature.com/nature/journal/v444/n7122/fig_tab/4441003a_F1.html]
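The grouping test above can be illustrated with a toy simulation. This is not the authors' actual SPIRES analysis: the author sample, the noise model, and the `deciles` helper are invented for illustration. The point it shows is that a measure correlated with some underlying quality places authors in the same decile far more often than the 10% expected by chance, while an uninformative ordering stays near chance.

```python
import random

random.seed(1)

def deciles(scores):
    """Assign each author a decile (0-9) by rank under the given scores."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    dec = [0] * len(scores)
    for rank, i in enumerate(order):
        dec[i] = rank * 10 // len(scores)
    return dec

# Toy "authors": an underlying quality, plus two candidate measures.
n = 1000
quality = [random.random() for _ in range(n)]
noisy = [q + random.gauss(0, 0.3) for q in quality]  # correlated with quality
alphabetical = list(range(n))                         # carries no quality information
random.shuffle(alphabetical)

d_true = deciles(quality)
agree = lambda scores: sum(a == b for a, b in zip(d_true, deciles(scores))) / n
print(agree(noisy))         # well above the 0.1 chance level
print(agree(alphabetical))  # close to 0.1
```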
Surprisingly, papers per year fares about as well as alphabetical order, i.e. it is no good measure of quality. The h-index and the mean number of citations per paper both do better, with the mean being slightly superior.
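The two better-performing measures are simple to compute from an author's per-paper citation counts. A minimal sketch (the example citation list is made up):

```python
def h_index(citations):
    """Largest h such that the author has h papers with >= h citations each."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

def mean_citations(citations):
    """Mean number of citations per paper."""
    return sum(citations) / len(citations) if citations else 0.0

papers = [25, 8, 5, 3, 3, 1, 0]  # hypothetical citation counts
print(h_index(papers))           # 3 (three papers with >= 3 citations each)
print(mean_citations(papers))    # 45/7, about 6.4
```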
The authors state that many institutions rely on doubtful measures of quality. The impact factor, for instance, does not reflect the impact of a single publication, since it only describes the journal overall: the citation rate of an individual paper is largely uncorrelated with the journal's impact factor. They also comment that one of the most widely used measures of scientific quality, the average number of papers published by an author per year, is at best a measure of industry rather than ability.
== comments ==
[[Image:3stars.png]] This is a great paper and an overdue critical look at the usefulness of some indicators that our institutions and funding agencies are using/abusing as measures of quality. A must read. [[User:Jasu|Jasu]] 10:11, 5 April 2007 (EDT)
== links ==

* [http://en.wikipedia.org/wiki/Bibliometrics Bibliometrics] Wikipedia
* [http://en.wikipedia.org/wiki/Hirsch_index h-index] Wikipedia
* [http://en.wikipedia.org/wiki/Impact_factor Impact factor] Wikipedia
