Wednesday, May 2, 2018

ACTIVE USERS OF THE LIRC FOR THE MONTH OF APRIL 2018


ACTIVE USERS OF THE LIRC
(1st April to 30th April 2018)
All the Active Users listed below are eligible for one extra library card for the month of May 2018.


Sr. No. | Member                                   | Total Transactions
1       | KOSSAMBE GAURI DATTARAM MINAL            | 27
2       | HRISHIKESH MAHESH TELANG                 | 27
3       | ANTONY ALEX LEENA                        | 27
4       | JAYBHAY SHEETAL SHANTILAL MANISHA        | 26
5       | POOJARY ROSHANI SHIVRAM GUNAVATI         | 25
6       | DIVYA PANDIT                             | 25
7       | BHATNAGAR RISHABH SUSHIL RITU BHATNAGAR  | 24
8       | JOSHI JIGNASA SANTOSH HEMLATA            | 24
9       | SHRUTI SURESHAN RAJANI                   | 24
10      | SANTOSH RAMSHARA SHARMA                  | 24
11      | KALYANI JADHAV                           | 23


Monday, April 23, 2018

Impact Factors Fail in Evaluating Scientists. Why Does the UGC Still Use Them?



The journal impact factor has numerous flaws, which makes it highly irresponsible for the UGC to rely on it to evaluate a teacher’s research performance and decide whether she gets a job or not.
[Image: University Grants Commission. Credit: PTI]
18/Apr/2018

Since 2012, nearly 12,000 individuals and 500 organisations have signed the San Francisco Declaration on Research Assessment (DORA). This includes India’s Department of Biotechnology (DBT). In fact, in their joint Open Access Policy, the Department of Science and Technology (DST) and the DBT quote the central recommendation of the declaration verbatim, which is that journal-based metrics like the journal impact factor (JIF) should not be used as “a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions.”
And yet this enlightened attitude has only partially filtered down to the ground.
The JIF is a simple metric, originally designed to help librarians decide what journals to buy for their libraries. It is the total number of citations received by a journal in the preceding two years divided by the total number of citable items published in those years.
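Spelled out (the notation here is mine, not the article's), the JIF of a journal in year y is:

```latex
\mathrm{JIF}_{y} = \frac{C_{y}(y-1) + C_{y}(y-2)}{N_{y-1} + N_{y-2}}
```

where C_y(x) is the number of citations received in year y by items the journal published in year x, and N_x is the number of citable items it published in year x.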
In their book chapter preprint, Vincent Larivière and Cassidy R. Sugimoto lay out six major critiques of the JIF. The first is the inclusion of citations for “front matter” such as editorials, news reports, obituaries, letters to the editor, etc. in the numerator while not actually counting these items in the denominator as they aren’t ‘citable items’. The authors show how large journals like Nature and Science have used front matter to boost their impact factors. The second critique is the inclusion of self-citations (when a paper from a journal cites other papers from the same journal), which has led to documented cases of manipulation by unscrupulous editors.
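A made-up example shows how the front-matter asymmetry works: a journal whose 100 citable items from the two preceding years drew 200 citations has a JIF of 200/100 = 2.0. If its editorials and news stories attract another 50 citations, those enter the numerator but add nothing to the denominator, and the JIF rises to 250/100 = 2.5.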
The third critique is the arbitrariness of the two-year window for citations, which favours certain disciplines over others. Larivière and Sugimoto write that, while “physics papers generate more citations than psychology papers within the first five years, the reverse is true for the following 25 years. … [U]sing a 30-year citation window, we find that the first two years captures only 16% of citations for physics papers, 15% for biomedical research, 8% for social science papers, and 7% in psychology.”
Another related critique is that the JIF does not take into account the differences between fields and disciplines. Because of this, the indicator cannot be used to compare across disciplines. The difference in publication and reference practices means that “medical researchers are much more likely to publish in journals with high JIFs than mathematicians or social scientists.”
The fifth critique is the skewness of citation distributions: a small percentage of papers accounts for the majority of citations. Their analysis shows that for the large majority of journals indexed in the Journal Citation Reports 2016, only 20-40% of papers receive as many citations as the JIF suggests.
The last critique is the systematic inflation of average JIFs, caused by a number of factors, including the rise in number of papers and references per paper. Some 56% of journals increased their JIFs between 2014 and 2015. Larivière and Sugimoto write, “As there is no established mechanism for acknowledging inflation in reporting, editors and publishers continue to valorise marginal increases in JIFs which have little relation to the performance of the journal.”
The JIF in India
In a recent paper, titled ‘Evaluation of research in India – are we doing it right?’, Muthu Madhan, Subbiah Gunasekaran and Subbiah Arunachalam discuss how “the answer to the question in the title cannot be anything but ‘no.’”
(Muthu Madhan and Subbiah Arunachalam are affiliated with the DST Centre for Policy Research, Indian Institute of Science, Bengaluru, and Subbiah Gunasekaran with the CSIR-Central Electrochemical Research Institute, Karaikudi.)
They systematically go through the evaluation and promotion frameworks of a number of different regulatory agencies in India and critique their use of JIF and other metrics. The DBT and DST still include cumulative impact factor as a criterion for awards like the Ramalingaswami Reentry Fellowship, the Tata Innovation Fellowship, the Innovative Young Biotechnologist Award and the National Bioscience Awards for Career Development.
The Indian Council of Medical Research “routinely uses average IF as a measure of performance of its laboratories.” Laboratories of the Council of Scientific and Industrial Research rely on the impact factor and the number of papers published to assess scientists.
The National Assessment and Accreditation Council uses various bibliometrics, including impact factors, in its accreditation process. It also asks for the “h-index of each paper”, which the authors describe as “patently absurd” because it betrays a fundamental misunderstanding of what the h-index means: the index is defined over a researcher’s whole body of papers, so a single paper cannot have one.
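To see why, here is a minimal sketch of how an h-index is computed (the data are hypothetical). The input is an author's full list of citation counts, one per paper, which is why the index cannot attach to a single paper:

```python
def h_index(citations):
    """Largest h such that the author has at least h papers
    with at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the top `rank` papers all have >= `rank` citations
        else:
            break
    return h

# Hypothetical author with six papers:
print(h_index([25, 8, 5, 4, 3, 1]))  # 4: the top four papers each have >= 4 citations
```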
Business schools have instituted monetary incentives for publishing in high impact-factor journals. Dinesh Kumar, the chairperson of research and publications at IIM-Bangalore, told the Wall Street Journal in 2011 that the institute had been giving a cash award since 2006 to any faculty member whose paper was published in an ‘A-grade journal’.
The National Academy of Agricultural Sciences assigns impact factors of its own to journals and uses these scores to select fellows. This impact factor is calculated in an opaque and seemingly arbitrary manner. The authors note: “The Annual Review of Plant Biology had an IF of 18.712 in 2007, which rose to 28.415 in 2010. Yet, the NAAS rating of this journal recorded a decrease of four points between the two years.”
But the most problematic deployment of the JIF is its use in the appointment and promotion of teachers by the University Grants Commission (UGC) and the All India Council for Technical Education. The UGC calculates an Academic Performance Indicator (API) score which includes points for research. According to UGC policy, teachers earn more points for papers published in journals with a higher JIF. The authors of the ‘Evaluation of research’ article summarise the rules thus:
The API score for papers in refereed journals would be augmented as follows: (i) indexed journals – by 5 points; (ii) papers with IF between 1 and 2 – by 10 points; (iii) papers with IF between 2 and 5 – by 15 points; (iv) papers with IF between 5 and 10 – by 25 points.
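A sketch of what this scoring amounts to (illustrative only; the quoted rules do not say how an IF exactly on a boundary, below 1, or at 10 and above is treated, so the handling here is an assumption):

```python
def api_points(impact_factor=None):
    """Illustrative mapping from a journal's impact factor (IF) to UGC API
    points for a paper in a refereed journal, per the rules quoted above.
    Boundary handling and the treatment of IFs outside the quoted ranges
    are assumptions."""
    if impact_factor is not None:
        if 1 <= impact_factor < 2:
            return 10
        if 2 <= impact_factor < 5:
            return 15
        if 5 <= impact_factor < 10:
            return 25
    return 5  # indexed journal without a qualifying IF

# The same paper, in journals on either side of an IF threshold:
print(api_points(1.999))  # 10 points
print(api_points(2.001))  # 15 points
```

A gap of 0.002 in the IF moves the score by five points, which is precisely the threshold problem discussed below.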
As discussed above, the JIF has a number of flaws. It fluctuates erratically from year to year because of the two-year window. It favours certain journals and disciplines. It applies no field normalisation. It does not predict how often an individual paper will be cited. And it suffers from creeping inflation. This makes it highly irresponsible for the UGC to rely on JIFs to evaluate a teacher’s research performance and decide whether she gets a job or not.
The UGC has further compounded the arbitrariness of its policy by awarding points based on ranges of impact factors. As Madhan et al write,
Take the hypothetical case of a journal whose IF is around 2.000, say 1.999 or 2.001. No single paper or author is responsible for these numbers. If a couple of papers receive a few more citations than the average, the IF will be 2.001 or more and the candidate will get a higher rating; if a couple of papers receive less than the average number of citations the IF will fall below 2.000 for the same paper reporting the same work.
In 2010, Anthony van Raan, director of the Centre for Science and Technology Studies at Leiden University, the Netherlands, told Nature, “If there is one thing every bibliometrician agrees, it is that you should never use the journal impact factor to evaluate research performance for an article or for an individual – that is a mortal sin.”
Metric-based assessment discourages risk-taking and long-term thinking among young scientists. It tells them that they cannot afford to work on something that won’t immediately lead to papers and citations. Institutions need to follow the lead of the DBT and consider signing the DORA. It would be the first step in signalling to young researchers, as the declaration states, “that the scientific content of a paper is much more important than publication metrics or the identity of the journal.”
Thomas Manuel is the winner of The Hindu Playwright Award 2016.

Source: The Wire / The Sciences, dated 18 April 2018
(accessed on April 23, 2018 at 2.45 pm)
