
Transcript | Viral Validation: How the New Journal RAPID REVIEWS: COVID-19 Accelerates Peer Review and Publishing

Editor-in-chief Stefano Bertozzi speaks with the Orion Open Science podcast about the journal and its mission to validate and accelerate research related to the COVID-19 pandemic.

Published on Aug 05, 2020

Rapid Reviews: COVID-19 (RR:C19) aims to publish expert peer reviews of new COVID-19 research which will help validate and accelerate the discovery of high impact, useful studies. Editor-in-chief Stefano Bertozzi joined Dr. Luiza Bengtsson and Dr. Emma A. Harris on the ORION Open Science Project to discuss how the journal will work, the role human and AI input will play, and the importance of pan-disciplinary content.

A stream and edited transcript of the episode—“Viral Validation: How the New Journal Rapid Reviews: COVID-19 Accelerates Peer Review and Publishing”—can be found below:

Luiza Bengtsson (LB) Hello and welcome to the ORION Open Science podcast.

Emma Harris (EH) I'm Emma Harris.

LB I'm Luiza Bengtsson.

EH And we're broadcasting to you from Berlin, Germany.

LB Today's guest is Stefano Bertozzi. He is dean emeritus and professor of health policy and management at the UC Berkeley School of Public Health. He has previously directed the HIV and tuberculosis programs of the Bill and Melinda Gates Foundation and, amongst other roles, he was also the last director of the WHO global program on AIDS.

EH Wow, that's a lot.

LB Yeah. Quite an accomplished scientific career, I must say. And the reason why we're talking to him is because his newest role is editor-in-chief of a new journal from the MIT Press, Rapid Reviews, and what it is about and what it has to do with open science—well, let him tell us.

Stefano Bertozzi (SB) So, my name is Stefano Bertozzi, and I'm a professor at the School of Public Health at the University of California, Berkeley, in Berkeley, California. And I'm very pleased because I've been asked by the MIT Press to lead an effort to develop this new journal, Rapid Reviews: COVID-19, which is a collaboration between the MIT Press and Berkeley.

LB OK. And we are very excited to hear about this journal because it has a lot to do with open access and preprints and rapid information processing. So can you please tell us a bit more about the journal?

SB Well, it's something called an overlay journal, which I had not heard of before engaging in this process. And really what it was designed to do was to address the fact that we have lots and lots of information being put up on preprint servers by authors in advance of publication. And normally, when manuscripts are submitted to journals for publication, they go through a peer review process that helps to validate or debunk the information in that manuscript. But of course, by putting things up on preprint servers before peer review, it means that people don't know how much confidence to have in what's in that manuscript. And because COVID is so urgent, people are starting to act on those results without waiting for peer review. And that's normal, logical and, in fact, expected.

So, the question the MIT Press asked was: What could we do to evaluate those manuscripts more quickly than the traditional peer review system? And when you think about the peer review system, it's not just that you send a manuscript to a journal and they send it out for peer review. If that journal decides not to publish it, then you start over again with a new journal. It can take a very long time before a manuscript actually is published with peer review. What we've agreed to do with MIT was to say, let's flip this on its head. Let's do the peer reviews before a manuscript is submitted for publication. And so, what we're going to do is identify those manuscripts that we think are potentially important and send them out for immediate peer review and post those peer reviews so that everybody can see them. Then, the authors can decide what journal they want to submit it to. And that journal can, in fact, use the peer reviews to accelerate the process of deciding whether they want to publish the article or not. In addition, if we like it, if we get good peer reviews and we like the manuscript, we'll also give the authors the option of publishing it with us. But they're under absolutely no obligation to do that. They can publish it wherever they want. And the peer reviews will be open and public and they can take them wherever they want.

LB So the journal Rapid Reviews—that's basically just the peer reviews you’re publishing, right?

SB It's both. It will be [peer reviews] initially. We're certainly focusing on the peer reviews. And the other thing that's different is that each of those peer reviews will be a published object. So the authors will get credit for it, you can cite the review, and we'll also give authors the opportunity to publish a response to the reviews as another published object. And, of course, the authors can also revise their manuscript and post a revised manuscript that takes into account the comments that they've gotten from reviewers.

LB Okay: this sounds like an amazing solution to everything that the open science movement has been dreaming of, right? You combine the power of preprints—that immediate access, no costs, the copyright issues—with peer review. And you also give credit for the peer review, and it's not anonymous. And yeah, I mean, this kind of sounds like an amazing idea. First of all, what's the caveat? Is there any? And if there isn't any, why hasn't this been done before? But then also, how do you choose the articles to review? And what's the AI part of that? Because I read in the press release that there is an AI-aided review process.

SB Well, I have to say that this isn't my field. You probably know better than I do what else is happening. But I can tell you that I was quite familiar with the Gates Foundation effort on open research, because I've done reviewing for them and because I worked there for four years. So, it's a similar concept. It's different in a couple of important ways. One is that it's a publishing platform for their grantees—not for anybody—and it's not topic-specific. It's really anything that the Gates Foundation funds. The second thing that's different about it is that it's also a preprint server. So it basically provides their grantees with a preprint server that they can upload manuscripts to. What's similar to our effort is that they then seek peer reviews for those manuscripts and publish the peer reviews. And so I was familiar with that effort. I've been a peer reviewer for them. And I thought that this concept was one that was very compatible with what MIT was trying to do.

Then in the discussions with MIT, I also said, listen, it's going to be easier to get people to serve on an editorial board, it's going to be easier to get people to peer review, if they think there's also a direct line to actually publishing the paper. And so, we agreed that in addition to publishing the peer reviews, we would identify papers among the ones that we peer review that we would also like to offer publication to. Now, that's kind of a weird thing in publishing because, as you're well aware, authors are forbidden from submitting a manuscript to more than one journal at the same time, right? So, they're not violating that with us because they've never submitted their paper to us. But they could take their manuscript to The Lancet knowing that if The Lancet says no, they've got a guaranteed "yes" from us because we've already told them we would publish it. So, it puts authors in a different position than they have been historically.

And let me answer the other question that you asked, which is, why isn't this appropriate for everything? Well, for one, I've got to acknowledge our gratitude to the Patrick McGovern Foundation because they've given us a three-hundred-fifty-thousand-dollar grant to make this possible. This isn't free, right? In order to do this for free, you've got to have somebody who's willing to pay for it. And the second thing is that I think there are areas where the urgency that we're trying to address with COVID is more important than others, right? So, that'll be one of our criteria in terms of which papers we select . . . which manuscripts we select to peer review.

LB Now the AI part, that's also interesting.

SB Okay, let me start with your broader question of how we're going to choose what we're going to review, and then AI is part of that. AI can do a couple of things for us. One is that this journal is all about COVID, but it's not discipline-specific, right? Most journals are discipline-specific and sometimes they're also, you know, topic-specific. Like AIDS, for example. In this case, we're topic-specific but completely discipline agnostic. So we're interested in everything from anthropology to engineering, public health, clinical medicine, sociology, you name it. If it is about COVID and it appears on a preprint server somewhere—by the way, we're being very, very inclusive of what we mean by a preprint server. It can be anywhere where something is published or shared in advance of a peer-reviewed publication. For example, I don't think that the National Bureau of Economic Research, which posts working papers, would have characterised themselves ten years ago as a preprint server. But we're considering places like that the equivalent of a preprint server as well. And so, which of the papers of the hundreds that come out every day will we select for review? Well, number one, we're going to select the ones that people are already paying attention to; so if clinicians are changing their practice, if policymakers are changing their decisions, if social media is lighting up, or the mainstream press is publishing about an article that's on a preprint server, then we think it's important to either validate those results or debunk them, okay? That's one thing that will sort of push things up in priority.

The second thing we have is that we're working with folks in AI at the Lawrence Berkeley National Lab who, first of all, are helping us automatically categorise manuscripts by discipline and domain, so we know which peer reviewers and which of our editorial teams should be focusing on that manuscript. Secondly, they're identifying potential peer reviewers for us, because they can look and see what similar articles, even if they're not about COVID, are in the literature, and who the well-cited, well-respected authors of those closely related articles are. So that helps us focus on a shorter list of potential peer reviewers.
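To make that reviewer-matching idea concrete, here is a minimal, purely illustrative sketch: compare a preprint's abstract with already-published articles and rank their authors by citation-weighted similarity. The function and field names are assumptions for illustration, not the Lawrence Berkeley National Lab's actual tooling.

```python
# Illustrative sketch only: rank candidate peer reviewers for a preprint by
# finding similar published articles and weighting their authors' citations.
# Field names ('abstract', 'authors', 'citations') are hypothetical.
from collections import defaultdict

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def suggest_reviewers(preprint_abstract, published_articles, top_k=20):
    """published_articles: list of dicts with 'abstract', 'authors', 'citations'."""
    corpus = [preprint_abstract] + [a["abstract"] for a in published_articles]
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(corpus)
    similarities = cosine_similarity(tfidf[0:1], tfidf[1:]).ravel()

    # Weight each author's citation count by how similar their article
    # is to the preprint, then rank the authors.
    scores = defaultdict(float)
    for sim, article in zip(similarities, published_articles):
        for author in article["authors"]:
            scores[author] += sim * article["citations"]
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```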

And thirdly, and this is the clever and new part, and we're not sure how well it'll work: they are able, with some of their tools, to give us a score of how innovative they think the article is. And this is based on how much it's putting together concepts which previously have not been associated with each other in the COVID response. Now, we all can have a healthy dose of scepticism about how well that will work. But if it does work, it'll be yet another factor that can help push things up in our priority list. And two other things that we're doing in terms of selecting articles: One is that there are lots of other folks out there who are doing daily digests; the University of Washington, for example, does one on specific disciplines, covering what is of interest to their readers. So, we might as well take advantage of the fact that they've already prioritised, from several hundred biomedical and medical articles, which ones they think are the most important. And finally, we are developing a team of doctoral students, graduate students, and undergraduates who are going to be scouring the literature that comes out every day and essentially voting for which of the manuscripts that they're reading are most important or most interesting. They will be pushing up the pyramid, if you will, those articles or manuscripts that they think are most worthy. And then the editorial team will pick from those which ones to send to a peer reviewer.
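The selection signals described here (press and social-media attention, the experimental innovation score, external daily digests, and student votes) can be pictured as a simple weighted ranking. The sketch below is purely hypothetical; the weights and field names are assumptions, not the journal's actual triage algorithm.

```python
# Hypothetical triage score combining the signals mentioned in the interview.
# Weights and field names are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class PreprintSignals:
    attention: float      # normalised press / social-media attention, 0..1
    innovation: float     # model-estimated novelty of concept pairings, 0..1
    digest_mentions: int  # times flagged by external daily digests
    student_votes: int    # up-votes from the student screening team

def priority_score(s: PreprintSignals) -> float:
    # Attention-heavy items get validated or debunked quickly, while the
    # innovation score and human votes keep undiscovered "gems" in the running.
    return (0.4 * s.attention
            + 0.3 * s.innovation
            + 0.1 * min(s.digest_mentions, 3) / 3
            + 0.2 * min(s.student_votes, 5) / 5)

# Example: rank a day's batch of preprints for editorial triage.
batch = [PreprintSignals(0.9, 0.2, 1, 0), PreprintSignals(0.1, 0.8, 0, 4)]
ranked = sorted(batch, key=priority_score, reverse=True)
```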

LB This makes it even more impressive to me, actually. It is amazing to me that you're involving the doctoral students and the postdocs who are actually reading those papers and forming their own opinions. I mean, basically, how do you peer review? Well, you read an article, you understand it, and then you make up your mind. You have an opinion about it, right? So it's amazing to use that in the process. I'm impressed, I must say. And it really sounds like what everybody in the open science movement is talking about. That's how we should be doing the peer review.

SB I'm excited about this sort of two-stage thing because it's a great learning experience for students, right? So, you know, you're faced with an article or a manuscript. And the question is, is this good enough to warrant rapid peer review? And what we're doing is basically saying that that filter works like a gold star: if two people say yes, it jumps up in priority and goes from the initial-stage review to a higher-level review. And I know, in theory, that's happening on a daily basis because you have a team of people who are picking the things that our algorithms are putting at the top of the list, like things that are being paid attention to on social media or things that have been preselected by somebody else's search mechanism. And then they're reviewing those. And then we, ideally, very quickly get those out to a peer reviewer.

Now, the other thing that we have started to work on, and that some of our editorial board members have encouraged us to do, is try to streamline the review process. One of the cases that people have made is that sometimes people either take a long time to review or are reluctant to accept a review because it's too big a lift. You know, they're in the habit of doing pages and pages of detailed review. So we're trying to figure out what the right balance is between giving people sort of open-text flexibility to respond how they'd like and trying to structure the review more, so that it's clear that what we're really looking for is a high-quality one-page review. We're not looking for that six-page review that is going to take you a couple of weeks to get through. So getting that balance right is something that we're struggling with, and I think we'll just learn as we go.
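As a purely hypothetical illustration of what "structuring the review more" could look like, the sketch below lists the kinds of fields a one-page structured review form might capture. The field names and the evidence scale are assumptions, not the journal's actual template.

```python
# Hypothetical fields for a structured, one-page review form.
# Names and the evidence scale are illustrative assumptions, not the
# journal's actual template.
STRUCTURED_REVIEW_FIELDS = {
    "summary_of_claims": "2-3 sentences on what the manuscript claims to show",
    "strength_of_evidence": "one rating, e.g. strong / reliable / potentially informative / not informative / misleading",
    "major_concerns": "short bullet points on issues that affect the conclusions",
    "relevance": "who might act on this result, and how urgently it needs validation or debunking",
    "open_comments": "optional free text, kept to roughly one page overall",
}
```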

LB What happens when the money runs out? Is this going to continue as a paid journal, or . . . ?

SB Well, I could imagine that the publishing of manuscripts part could be self-funded just the same way other online journals are funded. But I don't see that that mechanism can work for the rapid review part. I think that will require either government or philanthropic support. I don't see a revenue model for that.

EH I read that your reviews go on to PubPub, and I hadn't really heard of that before. Could you just explain a little bit what that is and how it works?

SB PubPub is an open source publishing platform that grew out of the MIT Press, and it supports collaboratively editing and publishing journals, monographs, and all kinds of open access scholarly content. I think it's a very appropriate platform for us to use for publishing both the reviews and the articles. And the idea is, and they are actively working on this, linking to the preprint servers, right? So, if you go to medRxiv or SocArXiv and you see an article that's been published on the preprint server, you want to see right next to it the link to the peer reviews. And our initial discussions have been very positive, and I expect that to be a close collaboration with almost all of the preprint servers.

LB There is still information being published on COVID in the traditional closed journals, the peer-reviewed journals, right? I assume you don't have any possibility or means to text mine those articles. It's not so easy to get access to those.

SB Oh, well, we absolutely are. I mean, the articles that we're mining for peer reviewers are the published articles. We can also mine the preprint articles. But in terms of looking for quality reviewers, we're starting with the published articles. And because we're in the university with access to all of the journals, we can certainly mine them.

LB Yeah. So you're going to the published articles for the peer reviewers, and to the preprint servers for the new content . . .

SB Exactly.

LB Because that's where the new information is. I was wondering, do you think—there's always this fear when you start using an AI mechanism. Now, we have this elegant solution with the students basically at different levels, looking at and validating the AI approaches; but can you think of anything else that would make this approach . . . like any biases that you can think of?

SB Well, absolutely. I mean, the fact that we are going to differentially pay attention to things that are already getting attention makes me worry that we miss something that should be getting attention but isn't, right? Every time we do more to validate or debunk the things that are getting attention, we spend less of our peer review energy on things that we think are important but other people haven't decided are important yet. So, we have to get that balance right because we want to discover the gem and validate the gem that hasn't yet been discovered by the press or the social media sphere. But at the same time, we want to make sure that if people are paying attention to something and changing policy based on it, that we validate or debunk it as quickly as possible. So getting that balance right and, of course, that's a bias, right?

LB But this system can be gamed, right? I mean, basically, you can create a lot of social media attention around something. It doesn't have to be interesting or actually true in any way, it just has to be kind of sensational, right?

SB Exactly. And now, in addition to that, something I've had trouble expressing is what the right word is for this journal. Because people say, “well, this is multidisciplinary, right?” And I thought, well, most people when they say multidisciplinary are referring to something where multiple disciplines are collaborating on a particular study or article. In this case, it's multidisciplinary, but it's also monodisciplinary [research related to COVID-19]. And so whatever system you use to divide up peer review, I think that things that are truly multidisciplinary or interdisciplinary tend to suffer because the people who review them tend to be monodisciplinary and the manuscript doesn't conform to their traditions and academic standards if it actually bridges disciplines. I think this is obviously not a problem just for COVID, but I think it's a problem across the scientific enterprise. How do we appropriately value interdisciplinary/multidisciplinary research? I do worry about that. And I think that we need to make sure that we look for editorial board members and peer reviewers that bridge disciplines.

EH Unfortunately, we didn't have a huge amount of time to talk to Stefano. But one of the thoughts that was in my head was, and I think you touched on this when you were talking to him as well, this idea of AI and peer review. I feel there's a possibility of expanding this and using it, because one of the problems that we've talked about with other guests is that the number of papers versus the number of peer reviewers versus the amount of time and credit that they get for it, it's just not sustainable.

LB No, there's no match there whatsoever.

EH Exactly. So, possibly AI could be one of many other solutions to this.

LB But I mean, as we said, there is this bias problem.

EH Of course.

LB I think it's a really nice twist with using or . . .

EH Employing?

LB Employing, working with students, to validate the AI decisions. I think that's really cool. I think you always need the human eye, at least for now.

EH Trouble with that is that they can only validate what the AI is giving to them in the first place. So it's still, I mean, you have to give some parameters to an AI algorithm. And it's difficult. That, inherently, has a value judgement on it: the most media attention, or the most prestigious, or the most, I don't know, the most authors, or whatever. You know? There inherently has to be a value judgement because that's how algorithms work. So it's difficult.

LB But I thought that there was another very nice twist about what he said—that one of the criteria will be the innovation potential, as in combining fields that have not been combined before. So, having an innovation score. And we have this episode on Blockchain coming up and that's kind of another way to solve this problem of judgement. And I think this innovation potential, it's a really nice idea. I think as long as it's clear how the judgement was made, so how those articles are selected for the review, and nobody's claiming that this is the absolute best and only way to do this, then I think it's OK. If it's fully transparent what is actually happening there.

EH I think transparency is the key. I mean, I think you have to be careful with quantitative measures of research. And I think this is something that researchers have complained about. But we can't keep doing what we're doing, especially when it comes to peer review and academic publishing. So we have to start innovating. I think this question of how we open up research but maintain the ability to judge quality and to assess importance is one of the key challenges that a lot of our guests have talked about, and that we ourselves have talked about, you know, in the course of the project and with researchers.

LB Yeah. I think this is a very nice new way of doing altmetrics, basically. And this is exactly what the Blockchain episode will be about, the altmetrics on Blockchain. Really, we have to be much more inclusive in how we assess abilities. How do you choose who gets tenure and so on? It cannot be just the impact factor. It has to be more. Here, giving people credit for the peer review, because you publish the peer review itself as a publication, that's really good.

EH That's excellent. I think that's really good. And it's something that has always flummoxed me from when I started to understand the research system as, you know, a graduate and a doctoral researcher. Okay, you do a peer review and then you don't get anything for it? It seemed very odd to me. So, I think anything that combats that is great. Going back to the AI, my only other thought about that is that it can only work within our current system; so you can try and program it, for instance, to search through all the articles or highly cited articles. But, for instance, there are a number of studies which have found that men cite themselves at a much higher rate than women cite themselves. So if, for instance, you built an algorithm to find the most highly cited articles or the most tweeted articles, you might immediately have a gender bias. Just because that's how people behave and the algorithm could only reflect how people behave.

LB And then if you only have male students reviewing the data, say, then you just reconfirmed the bias.

EH Exactly.

LB I was thinking on social media, maybe in the worst-case scenario, they might spend a lot of time debunking some stupid pseudoscience that just gets a lot of attention instead of, as he said, actually going for the really relevant but not-so-visible stuff.

EH Yeah, I mean, I noticed that the editorial board is very diverse geographically, and they make a note of that in their initial press release—that they're aiming to try and be pandisciplinary (I guess would be the best term) but also to be more globally diverse. I think that's something that really needs to be dealt with. Because, for instance, I can only imagine that the medical research in, say, certain African countries, which have dealt with the Ebola outbreaks, would be very, very relevant to COVID. But if you've got this idea that, oh, research in Africa is not as high-quality as in the Western world or the developed world, whatever you want to call it, you might miss that. So, I think it's important to be pandisciplinary and more diverse globally as well, and this Rapid Reviews: COVID-19 journal seems to be making a good-faith effort to be both, which is great.

LB This is a really urgent problem, actually, this overload of information. I mean, it's not really overload, because all research is good; it helps us understand the problem better. But a lot of people are already addressing this, and there have been all these hackathons. There was one from the German government. There was one from eLife, which was not COVID-related. There were already, last year, all kinds of, you know, "let's find these solutions for science" initiatives, basically. And in many cases, they have produced new ideas: how to deal with the quality assessment of peer-reviewed journals and using AI approaches. It's not a completely new idea—I've seen it before—but I haven't seen it like, I mean . . . there are results coming out of these hackathons, but I haven't seen them going like . . . you know, basically everyone knows about it, everyone is using it. This journal has the potential to be more visible.

EH Yeah. You've got to think about where it's coming from. It's coming from MIT, from Berkeley. These are the kind of institutions that really have the resources and the human resources and, you know, obviously they got this funding to help them as well. They have the scale, which I think is important, to implement this. This is making me think of another interview we've recorded, actually—the one with Dylan. And looking at institutions that are embracing open science principles at their structural level. And I feel like this journal, this Rapid Reviews: COVID-19 journal, is being built on open access and open science principles from the ground up. So that's kind of at its core. And then everything organically comes out from that. Whereas I see a lot of this stuff has all the best of intentions, but it's trying to patch open access and open science principles onto an existing program or an existing system in a sort of piecemeal way, you know. It's a bit like trying to reboot a movie franchise: it doesn't always work. And sometimes you just have got to start again.

LB I mean, I think this system, this idea as a principle, would work for other areas in science; it doesn't have to be this one topic. So, let's say immunotherapy in cancer, for example. There are a lot of publications about that as well—also on preprint servers—and I don't see why this kind of principle could not be applicable to that topic or any other topic. In the end, if you have different organizational units in a cell, for example, you can come up with categories for scientific research which are more narrow than just cell biology or chemistry. But for COVID research or immunotherapy in cancer or anything, you could use the same principle. But yeah, I mean, it still does cost money. In the end, someone has to pay.

EH I mean, there is a lot of research funding around, governmental, private, and philanthropic. And a lot of it, I think, is diverted towards paying for subscriptions for journals, for instance, or paying to publish and so forth, or paying gold open access charges. Surely some of that money could be really diverted to a system like this, which I think is a much more effective and innovative way of dealing with things.

LB OK, so let's recap. I mean, because I'm really quite impressed. There are so many elements in this idea that are just so open science cool. First of all, it's a no-cost open access journal. The reviews—the peer reviews—are not anonymous and they're also being published. And the authors get credit for it. They are involving altmetrics in the judgement of which articles are getting reviewed, right? And this is done by algorithms. And the novelty there is that they are also judging the innovation potential of an article. And then this also gets validated by undergrad students, doctoral students, and postdocs. So the results of their AI judgement are being voted on by the students, by actual people, early career researchers. And what else?

EH They're available on this PubPub platform, which allows the author to then annotate the peer review so that they can answer if they want to. So, it's even more transparent. And also, they will accept some of the ones that they peer review. They'll say, we would like to publish this as well. And the authors are then in the position that they can send it to another journal, say The Lancet (I think that was the example Stefano gave); and then if they don't get the answer they want from them, they already have a guaranteed yes. Or they don't have to send it anywhere else. They have an immediate publication. So some of the very best papers that they peer review will also be published in the journal. Which gives a wider diversity of publishing options.

LB And this is possible because this comes from preprint servers and people do have the copyright for their article.

EH I've just had a thought that this essentially solves the problem that you've brought up a couple of times, Luiza; which is it's all very well to have preprints, but then who are the gatekeepers for journalists and science communicators who aren’t necessarily able to make the judgement as to the reliability of the research? So, I think this is a solution for that as well.

LB Yeah, actually, yes, totally. Although they're also looking at things that science journalists or journals or newspapers have already picked up. So, that might be a bit too late.

EH Yeah. But still, I guess, you could at least then go back and say we published this, but actually this review raises these points. It's not going to do anything about the tabloids, but the more responsible science journalists could definitely use it to do—maybe rather than breaking news—more features and breakdowns. So, this seems like a real step forward, and I'm really glad we had a chance to talk to Stefano about this. I think we'll both follow it quite closely and maybe we can do a follow-up interview in a year and see if they've managed to maintain the momentum that the COVID-19 crisis has provided. And in fact, that's something I'm kind of wondering about a lot of things we've been talking about recently—that kind of “will this last when everything goes back to normal,” whatever that means.

EH So that was it for today. Thank you very much for joining us. If you would like to get in touch, please follow us on Twitter @OOSP_Orionpod. You can follow us, message us, retweet us. You can also email us directly at [email protected]. We'd love to hear from you. Suggest yourself or somebody else as a guest, or just ask us a question. The music was composed and produced by Fabio de Miguel. The sound mixing was done by Paulo Oliveira. And the ORION Open Science podcast is brought to you by the ORION Open Science Project, an EU-funded project that promotes open science. We hope you enjoyed the show and we will see you next time.

LB Thank you for listening.

CC-BY Luiza Bengtsson, Emma Harris, Zoe Ingram from the ORION Open Science Project
