The impact of published research is still routinely measured by traditional metrics, with article citation counts often carrying the most weight. Over the past decade, the influence of this measure has also expanded to cover conference proceedings, book series, and patent applications. At the same time, the range of metrics used by the scientific community has started to grow in recent years, with downloads, reviews, mentions and shares on social media emerging as legitimate indicators of impact. But a whole host of other outcomes matter when assessing the impact of published research. From societal consequences, policy changes and improved diagnostic measures, to better career prospects, new research collaborations, greater exposure of ideas, and new funding opportunities, the impact of published work can be profound and far-reaching.
But if these more qualitative indicators of impact are difficult to track and measure, why should we invest time and effort in capturing them? We know that younger researchers have limited opportunity to gain a fair level of recognition for their work. Altmetrics go some way to addressing this, but citations are still considered the primary indicator of calibre, and it can take decades to build a profile that way. The other problem is 'the large volume of inconsistently used numerical data' (The Conversation, 2018). There is a tendency to think that numbers add more weight and validity, especially when it comes to proving success, but we shouldn't automatically assume that statistics tell the full story. The world of research, as with many other aspects of our lives, has become somewhat 'metrics-fixated', which brings its own set of challenges.
High-profile research assessment initiatives such as the UK's Research Excellence Framework (REF) have gone some way to shifting the focus of evaluation from more traditional metrics (such as citations) to wider economic and societal impact, by analysing thousands of case studies:
"… case studies provide a rich resource, demonstrating the breadth and depth of research impact … in a way that has not been revealed before. Universities claim changes and benefits to the economy, society, culture, public policy and services, health, the environment and quality of life." (The Conversation, 2018)
In this article we'll look at some of the broader outcomes resulting from two very different papers published in Nature-branded journals. We talked to the corresponding authors of each paper to find out what kind of impact the work has had on their careers, their field of research, and the wider world. We also asked them about their thoughts on the continued value of traditional research metrics; how they see impact measures changing over time; and what additional measures they would like to see introduced.
David Swinney is at DCSwinney consulting and author of the 2011 Nature Reviews Drug Discovery paper How were new medicines discovered?. Previously CEO of the Institute for Rare and Neglected Diseases Drug Discovery, and Director of Biochemical Pharmacology at Roche, Swinney has a background in pre-clinical drug discovery, specialising in the inflammation and virology disease areas.
His work at Roche investigated molecules that caused desired biological responses, known as phenotypes, when tested in cellular and animal models of particular diseases. Historically, most drugs were discovered through such 'phenotypic screens', with their mechanism of action, such as the proteins they targeted, only being identified through further study. However, from the 1990s onwards, an alternative target-based approach became the dominant method for screening new drugs, spurred by the rapid growth in knowledge of potential drug targets emerging from genomics. This approach starts with a 'reductionist' hypothesis that a particular target is important in a disease, and then seeks to identify molecules that specifically affect its activity to treat the disease. Importantly, target-based screens have much higher throughput than classical phenotypic screens, and so it was widely thought they could help identify new drugs more quickly. Of particular interest were new drugs that also had a new mechanism, called first-in-class drugs, as these promise to provide greater medical advances than those that just improve incrementally on an established drug.
However, by 2009, it was becoming apparent that target-based approaches were not delivering as many first-in-class drugs as hoped. This prompted Swinney to ask the question: 'where do new drugs come from?' He began an in-depth analysis of the extensive research that led to 259 drugs discovered in the previous decade, spread across hundreds of research papers and organisations. Intriguingly, even though the target-based approach had been dominant for the whole period studied, Swinney and his co-author Jason Anthony found that the majority of first-in-class drugs were still coming from classical phenotypic screens. And part of the reason seemed to be that phenotypic screens enabled identification of molecules that achieved the desired effects at particular protein targets in ways that typical target-based screens might miss.
So, with the productivity of target-based screening coming under scrutiny, drug discoverers were intrigued by the possibility that the classical approach might be more effective in finding first-in-class drugs. Furthermore, as new tools were emerging to enable phenotypic screening assays that were both higher-throughput and better models of challenging diseases, the stage was set for a revitalisation of interest in phenotypic screening. Since 2011, the approach has been taken up widely by major pharma companies and has also been the basis for academic initiatives such as the UK's National Phenotypic Screening Centre.
Can you describe some of the ways your paper published in Nature Reviews Drug Discovery impacted you professionally and personally?
"The paper How were new medicines discovered? has opened lots of doors for me in the years since publication. It has given me the opportunity to travel the world and talk to hundreds of people in the field, which has been important in moving my ideas forward. It has also led to interactions with many pharma companies worldwide, as well as numerous societies, and I've been able to combine and assimilate these broad perspectives to inform and advance my own thinking.
In terms of influence, I think the article encouraged researchers to re-think how drug discovery can be conducted, applying a more empirical and less reductive approach as a result. Over time, we saw the article build awareness and confidence in the idea that when it comes to drug discovery, 'there's more than one way to skin a cat'. Its publication in Nature Reviews Drug Discovery was a catalyst for the subject attracting broader attention through other outlets, and we've seen a resurgence in the phenotypic approach to drug discovery since 2011."
How important are citations to you personally and to the wider research community?
"Citations have never been the primary driver for me, but it's nice as a scientist to see your work having an impact. It's also beneficial to be cited from an organisational standpoint, to provide a metric of success. That said, I had to search through many papers during the study of drug-discovery methods, and many of those I used for my work only had a small number of citations. For me, citation counts weren't really the determining factor in deciding which research to read.
It's true that citations are the easiest thing to follow in evaluating research, but their focus is narrow when it comes to measuring the impact of the overall discovery process. We can break this process down into three stages: creating basic knowledge; invention; and delivering a product to market. The pharma company (stage 3) gets the financial reward from the product, whilst scientists (stage 1) can get recognition from creating new knowledge and build reputation through citations (although this isn't universally the case). But there's less recognition and remuneration for those involved at the invention stage (stage 2), the entrepreneurs or academics with a focus on applied science, and this is something that I think needs to change."
Jinming Gao is a Professor of Oncology and Pharmacology, Co-Leader of the Cell Stress and Nanomedicine Program at UT Southwestern Medical Center, and author of the 2016 Nature Biomedical Engineering paper A transistor-like pH nanoprobe for tumour detection and image-guided surgery. Gao moved his Biomedical Engineering lab from Case Western Reserve University to UT Southwestern in 2005, with the mission of applying engineering insights to simplify complex pathophysiology, so that medical practitioners can directly benefit from the findings for a wide range of diseases and provide better care to their patients.
With his nanoprobe paper, Gao wanted to make biology simple for physicians. The work applied established concepts from electrical engineering (building on the invention of the transistor, recognised with a Nobel Prize in the 1950s) as a way of amplifying biological signals. The goal of Gao's lab was to design a transistor-like proton sensor that can digitise acidic signals. When cancer cells metabolise glucose molecules, they produce acids. The acidosis signal in tumours changes with time and location, which makes it difficult to separate from similar signal fluctuations arising from normal processes going on in the body. Gao and his team designed a nanoparticle sensor that stays silent in the blood but switches on, fluorescing, once inside a malignant tumour. This helps surgeons stage disease more accurately and conduct surgery more precisely.
What effect did the publication of A transistor-like pH nanoprobe for tumour detection and image-guided surgery have on you both professionally and personally?
"The publication of this paper in Nature Biomedical Engineering resulted in a lot of different and positive outcomes, some that we had anticipated and others that were surprising. The publication was an important milestone for our team and it was satisfying to see the professional impact it had on the paper's lead author, Tian Zhao, and my clinical colleague, Baran Sumer. Seeing the effect this paper had on bringing teams of surgeons and scientists together was also very rewarding: the team as a whole accomplished more than its members working alone.
The paper also helped with the commercial and clinical translation of the technology we had created. It crystallised the 'research story' for investors and funding agencies, which was crucial in raising money to develop the technology and overcome the 'Valley of Death'. Getting any medical technology into clinical practice needs a huge amount of resource. Being able to simplify our story helped us to make a compelling case and build investment as a result.
The paper's publication in Nature Biomedical Engineering also helped us to communicate our ideas to physicians around the world. It built on a niche body of work and generated a lot of buzz in the medical community. A press release was issued after the paper was published, and that alone resulted in over 20 news networks contacting us, which was instrumental in getting the word out and raising awareness. Ultimately, the paper was a catalyst for fundraising to support preclinical and clinical development. The company OncoNano Medicine is now starting a Phase II clinical trial; there is still a long road ahead before clinical use, but it is an accomplishment whose impact already goes beyond anything citations alone could capture."
How important are citations to you personally and to the wider research community?
"I still see the value of citations: they make you more self-critical and push scientists to produce the best work they can. But I don't think of citations as an absolute, just part of a bigger whole along with altmetrics and other commercial and societal impacts. For me, the really fulfilling result of the paper published in Nature Biomedical Engineering has been the opportunity to translate our research discoveries into clinical products, create job opportunities for the students and local community, and eventually help patients by making technology in this area more robust and easier for doctors to use."
Whilst still playing an important role in building institutional and individual reputations in the scientific community and providing a clear, widely understood indicator of research impact, citation counts didn't define the research stories told by David Swinney and Jinming Gao. The spread of new ideas, new collaborations, job opportunities, improved disease diagnosis, and more effective patient care are just some of the long-term effects of the work described in their papers.
In a world driven by big data and an increased reliance on metrics to show progress, it's more important than ever to acknowledge the real-world impact of scholarly research: to stand back and look at what has been achieved in terms of innovation, improved knowledge, global health and a host of broader societal, environmental and cultural impacts. There's also the risk that over-fixation on quantitative data can "discourage initiative, innovation, and risk-taking" (Impact of Social Sciences, 2018), the very elements underpinning scientific progress.
And as multidisciplinary research grows at unprecedented rates, traditional metrics, which are most effective at assessing impact through a narrow lens (restricted by discipline and format), need to be supplemented by broader, more qualitative data. Organisations such as the UK Forum for Responsible Research Metrics are working with initiatives such as REF2021 to look beyond citations well into the future.
"Metrics are of too low value to be useful in some panels so should not be universally adopted. In these panels the risk of misuse is greater than the small added value in some fields." (Universitiesuk.ac.uk, 2018)
Bibliography:
Impact of Social Sciences. (2018). Against metrics: how measuring performance by numbers backfires. [online] Available at: http://blogs.lse.ac.uk/impactofsocialsciences/2018/05/24/against-metrics-how-measuring-performance-by-numbers-backfires/ [Accessed 16 Oct. 2018].
The Conversation. (2018). What REF case studies reveal on measuring research impact. [online] Available at: https://theconversation.com/what-ref-case-studies-reveal-on-measuring-research-impact-39349 [Accessed 16 Oct. 2018].
Universitiesuk.ac.uk. (2018). [online] Available at: https://www.universitiesuk.ac.uk/policy-and-analysis/Documents/forum-for-responsible-research-metrics-response-to-REFconsultation2017.pdf [Accessed 29 Oct. 2018].
This article was written by Emma Warren-Jones, Director of Edible Content, from interviews with Jinming Gao and David Swinney in 2018.