On 27 & 28 March 2019 the final workshop of the *metrics project “Metrics In Transition” took place at the Göttingen State and University Library (SUB Göttingen). About 40 participants gathered in the historic building for the presentation of the project results and for further talks on research evaluation and alternative metrics.
After the opening by library director Prof. Dr. Wolfram Horstmann, keynote speaker Joe Wass from Crossref talked about the role of open science infrastructures in metrics. Wass presented Crossref’s latest tool, Event Data, which collects online mentions from specific sources (e.g. Twitter, Wikipedia, Reddit) based on links to DOI-identified publications. At this stage, the community is invited to evaluate the data for its own purposes and contribute ideas for further services.
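To make the idea concrete, the sketch below queries the Crossref Event Data API for mentions of a single DOI and tallies them per source. The helper names, the example DOI, and the mocked response are illustrative assumptions; only the API base URL and the general response shape come from Crossref’s public documentation.

```python
from urllib.parse import urlencode

# Base URL of the Crossref Event Data query API (v1).
EVENTDATA_API = "https://api.eventdata.crossref.org/v1/events"

def build_events_query(doi, mailto, rows=100):
    """Build a query URL for all events that mention a given DOI.
    `mailto` identifies the caller, as Crossref asks for polite usage."""
    params = {
        "obj-id": f"https://doi.org/{doi}",  # the mentioned publication
        "mailto": mailto,
        "rows": rows,
    }
    return f"{EVENTDATA_API}?{urlencode(params)}"

def count_events_by_source(response):
    """Tally events per source (e.g. twitter, wikipedia, reddit)
    from a decoded Event Data API response."""
    counts = {}
    for event in response.get("message", {}).get("events", []):
        source = event.get("source_id", "unknown")
        counts[source] = counts.get(source, 0) + 1
    return counts

# Mocked response in the API's documented shape (10.5555/12345678
# is a well-known example DOI, used here purely for illustration):
sample = {
    "message": {
        "events": [
            {"source_id": "twitter", "obj_id": "https://doi.org/10.5555/12345678"},
            {"source_id": "wikipedia", "obj_id": "https://doi.org/10.5555/12345678"},
            {"source_id": "twitter", "obj_id": "https://doi.org/10.5555/12345678"},
        ]
    }
}
print(count_events_by_source(sample))  # {'twitter': 2, 'wikipedia': 1}
```

In a real client one would fetch `build_events_query(...)` over HTTP and page through results with the API’s cursor mechanism; the counting step stays the same.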
The *metrics project partners then presented the results of the project. In addition to reports on the user studies carried out and evaluations of data from selected social media platforms, the crawler developed in the project, the “Metrician”, was also presented.
After the poster session, in which the OpenAIRE Usage Metrics Service and the Counter-Bike tracking and reporting service were presented, the first workshop day ended with three breakout groups. The teams discussed the idea of a national provider of usage statistics and alternative metrics, the producers and users of alternative metrics, and evaluated the technical services developed in the project.
The second day started with a presentation by Prof. Dr. Isabella Peters from the project partner ZBW on approaches for measuring openness, which raised the question of whether evaluation in the open science field is meaningful at all, and whether there is a desirable end state.
Afterwards, exciting projects on alternative metrics were presented, including the ROSI project, which will ideally continue the efforts from the *metrics project at the TIB Hannover.
Next, various representatives of publishers, library networks and repositories presented the implementation of metrics in their services. Further approaches emerged in the discussions following the presentations. For example, the question arose who benefits from metrics and how their usability can be improved. Challenges include the variety of persistent identifier systems used for the same work, and different versions of the same work. Another criticism of social media metrics: do we really want to evaluate our science system with easily manipulable systems that are opaque and privately owned? Jasmin Schmitz of ZB MED said: “We have failed if altmetrics only become a new impact factor. It is better to learn to read ‘traces of use’.”
Finally, the results of the group discussions were presented, which allow interesting conclusions but also raise new questions. The group on the development of a national statistical service put forward the thesis that such a service would achieve comparability, among other things, by creating an incentive system for Open Access publishing. Transparency, metadata quality, sustainability, scalability and comparability were mentioned as prerequisites. The discussion leaders will take the topic to the next meeting of the DINI AG “Electronic Publishing”, but will also consider whether and how it could be tackled in a European or international context. Infrastructures like OpenAIRE, with support from COAR, would be suitable for this.
On the basis of the workshop contents, the group “Users and Producers of Alternative Metrics” drew conclusions for the adequate use of metrics as performance indicators and summarised them in four areas. As long-term recommendations, they identified: interdisciplinary information for young academics on metrics and their limitations, new incentive systems that reward more diverse forms of scientific output, greater transparency about selection criteria in recruitment procedures, and the creation of a central, neutral and thus credible measurement body. Under the heading “Ideas for More Useful Metrics”, it was discussed whether it would not make more sense to strive for a precise understanding of a multitude of simpler metrics than to seek one “ultimate” indicator that unites all forms of scientific achievement in a single value. As points to consider directly when using metrics, the consistent use of open databases and careful attention to survey dates and their implications were suggested, among others. Open questions about alternative metrics that still need to be addressed urgently include problems inherent to social media platforms, such as gaming, filter bubbles, and the correct handling of particularly controversial or emotional topics. It is also unclear how decision-makers in science can be reached effectively and how the frequently demanded new incentive systems for scientists (see above) could be established. In addition, alternative metrics were found to lack a theoretical foundation comparable to citation theory.
The group that examined the technical services emphasised several points: the priority use of DOIs as identifiers and a mapping between DOIs and landing pages would help include as many publications as possible; a prerequisite would be comprehensively informing researchers about identifiers. A supplementary service to Crossref’s Event Data would also be desirable: an intermediate layer could make the typical decisions for similar applications (e.g. in libraries or repositories) and pre-process the raw data accordingly. Other users could then benefit from the documented, transparent pre-processing and get started more quickly.
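One way to picture such an intermediate layer is a small pre-processing step over raw Event Data records: filter to the sources an institution cares about, normalize the different DOI URL forms to a canonical key, and drop duplicate events. The function and field names below are hypothetical; they only illustrate the kind of documented decisions the group envisioned, not an existing service.

```python
from urllib.parse import urlparse

def normalize_doi(obj_id):
    """Map an event's obj_id URL (e.g. 'https://doi.org/10.1234/ABC')
    to a bare, lower-cased DOI so different URL forms compare equal."""
    return urlparse(obj_id).path.lstrip("/").lower()

def preprocess_events(raw_events, allowed_sources):
    """Filter raw events to the chosen sources, normalize DOIs,
    and drop duplicate event ids -- typical decisions a shared
    pre-processing layer could document and make once for everyone."""
    seen = set()
    cleaned = []
    for ev in raw_events:
        if ev.get("source_id") not in allowed_sources:
            continue
        eid = ev.get("id")
        if eid in seen:  # duplicate delivery of the same event
            continue
        seen.add(eid)
        cleaned.append({
            "id": eid,
            "doi": normalize_doi(ev["obj_id"]),
            "source": ev["source_id"],
            "occurred_at": ev.get("occurred_at"),
        })
    return cleaned

# Illustrative raw records (one duplicate, one filtered-out source):
raw = [
    {"id": "e1", "source_id": "twitter",
     "obj_id": "https://doi.org/10.5555/Example", "occurred_at": "2019-03-27"},
    {"id": "e1", "source_id": "twitter",
     "obj_id": "https://doi.org/10.5555/Example", "occurred_at": "2019-03-27"},
    {"id": "e2", "source_id": "someblog",
     "obj_id": "https://doi.org/10.5555/Example"},
]
print(preprocess_events(raw, {"twitter", "wikipedia"}))
```

Downstream users of such a layer would see only the cleaned records, with the filtering and deduplication rules documented once rather than re-implemented per repository.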
Astrid Orth summarized the results of the workshop as follows:
Behaviour on social media platforms is complex and diverse, making simple aggregation impossible. Since alternative metrics are not yet widely known, they should be used with great caution. They need more context and more openness to build trust and be accepted by researchers.
Young researchers in particular should become aware of the wide range of metric types, their data sources and fields of application, and their strengths and limits. Become “metric-wise”!
She recommended that participants examine the applicability of metrics in their specific context, because applicability depends strongly on the underlying data sources (heterogeneity and dynamics of platforms, APIs, functionalities) and the results can vary considerably. The current state of the art does not suggest that simple one-dimensional metrics correctly reflect scientific communication on web-based platforms.