Why would someone want to find out if the research is relevant?
Nowadays we have more information online than ever before. For example, there are more news articles than ever, and there is also more everyday information, like what people are eating for breakfast.
The same goes for research and the sciences. There are a lot of peer-reviewed papers being published in the world – now more than ever before – and therefore it can be harder to find out what research is relevant to you.
For example, if you are a researcher, you might want to know what other people in your field are working on. Or, if you are a member of the public trying to find out about a certain disease, you need access to the most up-to-date, high-quality information available.
It is really important nowadays to be able to pick out what is relevant from all the information that is out there online.
How do you traditionally measure if a specific scientific output is any good?
Traditionally, we have used citation data. That’s because when researchers write a paper, they describe other studies that are happening in their area of research, and they do so by linking to (or citing) peer-reviewed papers. People count those citations – the assumption being that the more citations a paper receives, the higher the quality of the work.
Academia used to be a small world. Most researchers simply knew what was happening in their fields. But for reasons we just described, that is not necessarily possible anymore–there is too much research out there. So people have to use signals like citations to figure out what is the most important, most influential research in a subject area.
How many different signals are there?
There are a lot, and the number seems to be growing every day. Beyond citations, there are signals left online – traces of interactions with research articles – and we call these traces altmetrics. Altmetrics accrue when somebody tweets about an article, when an article is cited in Wikipedia, or when an article is mentioned in the news or in a public policy document – these are all ways to understand whether members of the public are talking about research. Altmetrics can help us answer questions like, “Does this research have cultural relevance? Is it important to a patient advisory group in biomedicine?”
How can one measure altmetrics? Is there a standard yet? Is there a website that makes sense of all the noise?
There is not a standard for altmetrics, and I don’t necessarily think that is a bad thing. I think people get kind of nervous when there is not an authoritative way to measure something like altmetrics, but I would point out that there are not necessarily standards in citation counting either.
You have different citation databases – Google Scholar, Web of Science, Scopus – and each of these databases tracks the same basic thing: when research is cited in other research articles and books. But they don’t always report the same number of citations for the same works, because they each approach mining citations in a different way. Google Scholar, for example, picks up a lot more citations because it has less stringent requirements on what a citation should look like in order to be counted, whereas other databases like Web of Science might only count citations from certain journals. So you are going to get a smaller number of citations from some databases as compared to others, but in some ways you could argue they are higher quality citations.
Similarly, there are a few altmetrics aggregators out there right now that each collect altmetrics in their own way. I work for a company called Altmetric. We track mentions of research across 17 different types of sources: social media, blogs, public policy, patents, and others. The way we track altmetrics is slightly different from other aggregators out there. Each of the current aggregators has its own way of collecting data; there is not necessarily a standard.
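To make the idea of per-source mention counts concrete, here is a minimal Python sketch. It assumes a record shaped like the JSON returned by Altmetric’s free public details API (api.altmetric.com/v1/doi/&lt;DOI&gt;); the field names are drawn from that public API but should be treated as assumptions, and this is illustrative code, not Altmetric’s actual implementation.

```python
def summarize_mentions(record: dict) -> dict:
    """Pull a few per-source mention counts out of an Altmetric-style
    record, defaulting to 0 when a source has no mentions."""
    # Assumed field names, based on Altmetric's public details API.
    sources = {
        "twitter": "cited_by_tweeters_count",
        "news": "cited_by_msm_count",
        "blogs": "cited_by_feeds_count",
        "policy": "cited_by_policies_count",
    }
    return {name: record.get(field, 0) for name, field in sources.items()}

# A made-up sample record, for illustration only:
sample = {"cited_by_tweeters_count": 42, "cited_by_msm_count": 3}
print(summarize_mentions(sample))
# → {'twitter': 42, 'news': 3, 'blogs': 0, 'policy': 0}
```

The point of the sketch is the aggregation pattern itself: each source type is counted separately, so a tweet and a policy citation are never conflated into one undifferentiated number.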
There are best practices, however, and Altmetric does its best to follow them. An example of a best practice is only tracking things that are somewhat auditable. If I tweet about a piece of research, most altmetrics aggregators will only count that tweet if its existence can be verified in some way: either Twitter has to confirm, “We know that a user called @skonkiel exists and she tweeted about this link”, or we ourselves have to be able to look back and read the tweet to verify that it existed. This practice is necessary to keep people from gaming metrics, because every time you introduce any kind of metric, there can be an incentive for some people to game it.
Some people seem to be really skeptical about altmetrics. Can you understand this criticism, and how do you counter it?
I think I can understand it to a certain extent. Social media in general has only been around for ten or fifteen years. We are still trying to figure out what it means when somebody tweets about something, when somebody blogs about something, and so on. We need to know: what are their motivations for doing so?
I think it makes sense that there is some skepticism from scientists in particular, who for many, many years have had a particular way of understanding the influence of research. Typically, this has been by peer-reviewing other people’s research, which is still the gold standard, and I think it should remain that way. Informed human judgement is always going to be the best way to understand research impact.
But on the other hand, I would encourage those who are skeptical about altmetrics to broaden their understanding of what constitutes “influence”. It is not just about “What is high quality?” or “What is likely to make an impact in a field?” Research impact should also be about how people’s lives are touched: for example, whether someone becomes a new patron of the arts because a particular piece of writing has really moved them, so they decide to talk about it with their friends and get other people reading it.
Research touches people’s lives on a daily basis. Altmetrics, as it stands right now, is the best data that we have available to understand those types of connections that are made between the general public and researchers.
Research impact is not a zero-sum game: it is not a choice between peer review, citations, and altmetrics, to the exclusion of the others. We should look at all of the information we now have available in order to bring greater texture to the stories that people tell about the impact of research.