Beyond impact factor, h-Index and university rankings: Evaluate science in more meaningful ways
The conference aims to:
• Highlight the limitations of metrics in capturing scientific quality and the resulting pressure on the quality of scientific output;
• Present assessment approaches that challenge conventional metrics, such as the San Francisco Declaration on Research Assessment (DORA) and the Leiden Manifesto for Research Metrics;
• Consider whether steps are necessary to maintain the high quality of the Swiss science landscape in the long term.
The last few decades have seen an unprecedented growth in the number of scientists and scientific institutions competing for limited resources, both employment opportunities and research funding. The ambition to allocate the available means to the best scientists and the best science favoured the establishment of quantitative metrics for assessing scientific merit across the sheer volume of research output. Impact and citation factors for journals and publications, as well as rankings of institutions, are the best-known such tools. Inadvertently, however, these measurements can undermine the quality of science, because they incite violations of globally accepted research integrity principles, with adverse effects: scientific progress is hampered, the value of science to society and policy-making as a trusted and authoritative source is jeopardised, and public research funding is not used effectively.