
Metrics

About SciVal Metrics

SciVal uses a broad range of metrics: 

  • Citation impact metrics - to measure the impact of citations
  • Collaboration metrics - to measure the benefits of collaboration
  • Disciplinary metrics - to measure multidisciplinary research
  • Productivity metrics - to measure research productivity
  • Usage metrics - to measure viewing activity.

Common metrics to consider

  • Citations per publication: the average number of citations received per publication
  • Collaboration impact: the average number of citations received by publications that have international, national or institutional co-authorship
  • Field-weighted citation impact: the ratio of citations received to the citations expected for the world average in the same subject field, publication type and publication year
  • Outputs in the top citation percentiles: publications whose citation counts place them above a particular percentile threshold
  • Outputs in the top journal percentiles: publications that have appeared in the world's top-ranked journals
  • Scholarly output: the number of publications.
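The arithmetic behind the count-based metrics above can be sketched in a few lines of Python. This is an illustrative sketch only: the publication records and expected-citation values below are invented, whereas SciVal derives its expected world averages from the full Scopus database.

```python
# Invented publication records: each has an actual citation count and an
# assumed "expected" citation count (the world average for publications of
# the same field, type and year -- values here are made up for illustration).
pubs = [
    {"citations": 12, "expected_citations": 4.0},
    {"citations": 0,  "expected_citations": 4.0},
    {"citations": 6,  "expected_citations": 3.0},
]

# Scholarly output: simply the number of publications in the set.
scholarly_output = len(pubs)

# Citations per publication: average citations received across the set.
citations_per_publication = sum(p["citations"] for p in pubs) / scholarly_output

# Field-weighted citation impact (simplified): the average, over the set, of
# each publication's ratio of actual to expected citations. A value above 1
# means the set is cited more than the world average for comparable work.
fwci = sum(p["citations"] / p["expected_citations"] for p in pubs) / scholarly_output

print(scholarly_output, citations_per_publication, round(fwci, 2))
```

With these invented numbers, the set has 3 outputs, 6 citations per publication, and an FWCI of about 1.67, i.e. cited roughly two-thirds more than comparable publications worldwide.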

Metrics to consider in SciVal

Scopus author profiles are available for SciVal users to view. An author can view their own profile or define groups of researchers to analyse. From the selected researchers, publication sets can be built: these might be the work of a single author, or a selection of publications, for example those being considered for REF submission.

About Metrics

Metrics form part of an evolving and increasingly digital research environment, where data and analysis are important. However, the current description, production, and use of these metrics are experimental and open to misunderstanding. They can lead to negative effects and behaviours as well as positive ones.

Responsible metrics can be defined by the following key principles (outlined in The Metric Tide):

  • Robustness – basing metrics on the best possible data in terms of accuracy and scope
  • Humility – recognising that quantitative evaluation should support, but not supplant, qualitative, expert assessment
  • Transparency – keeping data collection and analytical processes open, so that those being evaluated can test and verify the results
  • Diversity – accounting for variation by research field, and using a range of indicators to reflect and support a plurality of research and researcher career paths across the system
  • Reflexivity – recognising and anticipating the systemic and potential effects of indicators, and updating them in response.