New Metrics for Assessing Scientists: Collaboration Networks


Two of the most common complaints heard over coffee in medical science labs, and the source of much mental anguish, are 1) “Is there any hope of getting a Cell, Science or Nature paper?” followed by 2) “Does my career hang in the balance?”

Currently, it seems that the sole determinant of one’s first faculty appointment in the medical sciences is publication record.  This whets the appetite of the junior science trainee for the coveted article in one of the big three, but such pressure can (more often than we care to admit) produce some rather unwanted consequences.  This week’s retraction of a Nature article by a supervisor, which the lead-author postdoctoral fellow refuses to sign, is one such possibility – though I suspect we’ll never get the full story of what happened in this case.  In any event, when the metric for hiring is so singularly focused, those who want the job will do almost anything to achieve it.  It is akin to the vast swathes of medical school hopefuls with MCAT fever who, once there were too many students with “good enough” scores, scrambled almost overnight to volunteer their energies for everything under the sun.

So, what can be done to spread out the enormous weight placed on a publication record without sacrificing good judgment when deciding which fraction of young scientists will run a good lab in the future?  Over the course of the next several months we will pitch some ideas for new metrics, present pros and cons for each, and try to draw out our readers’ thoughts on the merits of such proposals. It is our contention that such diversity in candidate assessment tools will help faculties and employers make choices that best fit their desired hiring criteria (be it “good undergraduate teacher”, “world-class researcher”, “good team player”, or “quiet worker bee”, amongst many others).

This blog entry will touch on something discussed by Daniel Cressy in his article “Counting Collaboration”, published online earlier this week in Nature News.

I do wish to preface any talk of collaboration metrics with a brief reminder that while collaboration (and more specifically “open” science) is highly desirable, it is not always possible or required for the success of a project, and too strong a push to collaborate may cost a research group its focus – a sort of social butterfly syndrome in which everybody becomes a generalist and nobody really gets into the nitty-gritty of a scientific problem.  Collaboration, however, is certainly a powerful driver of new advances, especially in fields that have become dependent on rapidly developing technologies.  In my own field of stem cell biology, the number of engineers, chemists, mathematicians, and others who have brought their expertise to the table and made enormous contributions (e.g. next-generation sequencing) is quite telling.

The collaboration network analysis profiled in Cressy’s piece, pioneered by U Penn’s Institute for Translational Medicine and Therapeutics, looks at co-authorship on grants and publications, as well as the position held by these co-authors (departmental colleague, university colleague, external collaborator, etc.), in an attempt to assess the productivity of research institutions. The assumption is that medical science is growing so much in complexity, and in its requirements for multiple areas of expertise, that success in translational research critically relies on teams of researchers from different fields working together.
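As a rough illustration of the bookkeeping such a network analysis involves, the sketch below counts an author’s distinct collaborators in each of the three tiers mentioned above. The record format and author names are entirely hypothetical – the Penn tool’s actual data model is not described in this post – so treat this as a minimal, assumed representation rather than the real method:

```python
from collections import Counter

# Hypothetical co-authorship records: (author, co-author, tier of the tie).
# The tiers mirror the positions described above: same department,
# same university, or an external institution.
papers = [
    ("smith", "jones", "department"),
    ("smith", "lee", "university"),
    ("smith", "garcia", "external"),
    ("smith", "garcia", "external"),  # repeat collaborations counted once
]

def collaboration_profile(author, records):
    """Count an author's distinct collaborators in each tier."""
    seen = set()
    tiers = Counter()
    for a, coauthor, tier in records:
        if a == author and (coauthor, tier) not in seen:
            seen.add((coauthor, tier))
            tiers[tier] += 1
    return dict(tiers)

profile = collaboration_profile("smith", papers)
# -> {'department': 1, 'university': 1, 'external': 1}
```

A real assessment would weight these tiers differently (an external tie arguably signals more reach than a departmental one), which is exactly where the policy questions discussed below begin.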

Working with experts from other fields can certainly catalyze developments in technology and theory that were close to impossible for those rigidly grounded within a single field and all of its attendant assumptions.  New insights, new techniques, and new possibilities emerge where the leading edges of multiple fields intersect.  In my own limited experience, the prescient questions that advance my research seem to come as often from people outside my field as from those within it, and for this reason collaborative networks and interdisciplinary research teams are a great boon to the research community.

With this in mind, Cressy suggests making non-departmental collaborations a requirement for translational research awards as a possible way to improve research programs.  Such hard-line requirements, however, are unnecessarily restrictive; my own suggestion would be to consider such collaborations when assessing the proposal as a whole, but not to discard proposals simply because they sit within a single department – the clearest example being a refusal of an application from an already interdisciplinary department.

Alternatively, while there does need to be some measure of a scientist’s ability to work with others and of the respect they hold amongst their peers, I fear that an intense push to increase collaborations simply for the sake of collaboration will over-extend some labs’ resources and distract from the focus of their research.  A slippery-slope argument could be made that such required interactions will lead us to a falsely constructed consensus, where the requirement for team research dilutes independent thought and experimental approaches.

A final concern surrounding collaboration metrics involves the “rich get richer” problem, whereby creating and/or expanding a network is substantially easier at the big research universities.  So, if utilized, this particular collaboration metric must be used in tandem with additional consideration of the applicant’s location and current resource availability.


About Dave

David grew up in St. John's, Newfoundland, completed a Bachelor's degree in Genetics and English Literature (UWO, London, ON) and doctoral studies in stem cell biology at the Terry Fox Lab (UBC, Vancouver, BC). He coordinated the UBC Let's Talk Science Partnership Program from 2004 to 2007. David is currently completing postdoctoral research at the University of Cambridge, UK, and also writes for the Canadian Stem Cell Network Blog.
This entry was posted in Education and Training, General, Policy. Bookmark the permalink.

Responses to New Metrics for Assessing Scientists: Collaboration Networks

  1. Pingback: Top 5 stem cell retractions | Stem Cell Assays

  2. Pingback: New Metrics for Assessing Scientists: Let’s Accessorize | The Black Hole: Science in Canada, Issues affecting trainees

  3. Pingback: The Black Hole | UBC tops in Canada? Rimouski 7th in sciences? New metrics for measuring research | University Affairs


  4. Svarstykles says:

    Medical science can sometimes be a real source of anguish, but I hope it is still worth the study and effort. The chance to become someone who can save lives is probably reward enough.

