CSCCE Open-Source Tools Trial 3 Recap: OpenReview

The third Tools Trial in our open-source series focused on OpenReview – an open-source platform that supports open peer review, primarily for conference abstracts but with the ability to be customized and applied to other situations. OpenReview PI Andrew McCallum and Senior Software Engineer Melisa Bok joined us to share some history about the platform, along with a demo of some of its key features. 

We’re working on a series of tip sheets to consolidate the technical lessons from the entire series of Tools Trials, but in the meantime, if you missed the call, you can watch the recordings and read a brief recap below.

You can also read/watch recaps of Tools Trial 1, which highlighted various ways of using GitHub to support community activities, and Tools Trial 2, which focused on tools to support events. 

Our next Tools Trial in this series will take place on Wednesday, 11 October at 10am EDT / 2pm UTC. We will be returning to GitHub, with presentations about how the Zarr community uses it to collaborate on technical documentation, how Rosetta uses GitHub teams to manage contributors, and how the Observational Health Data Sciences and Informatics team uses Bitergia to track contributor analytics. More information | Add to calendar

An introduction to OpenReview

In his presentation, Andrew took us on a journey through the history of OpenReview’s inception, which began with a conversation about the value of separating the peer review process from the publication industry. Over the years, that idea evolved into OpenReview, a platform that is extremely popular in the computer science community for curating conference proceedings and is becoming more widely known in other fields, too. You can watch Andrew’s full presentation below.

Explore OpenReview’s core features

Melisa Bok then took us on a tour of OpenReview in action, walking us through the process of creating an account, setting up a profile, and requesting to host a “venue” (e.g., an upcoming conference). Melisa focused much of her time on OpenReview’s matching capabilities, which use machine learning to examine people’s expertise and recommend them as expert reviewers for conference papers. The matching system takes into account not only a scientist’s publications but also their relationships with institutions and other researchers, allowing it to identify potential conflicts of interest. If you’re interested in learning more, check out Melisa’s full presentation!
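To make the idea of paper-reviewer matching concrete, here is a minimal, hypothetical sketch that scores reviewers against a submission using simple TF-IDF text similarity. OpenReview’s production matcher relies on more sophisticated learned models of expertise (and on conflict-of-interest data), so everything below, including the reviewer names and texts, is invented purely for illustration.

```python
# A toy sketch of paper-reviewer affinity scoring. OpenReview's real matcher
# uses learned expertise models; this TF-IDF version only illustrates the idea
# of scoring each reviewer against a submission by text similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical reviewers, each summarized by text from their past publications.
reviewer_profiles = {
    "reviewer_a": "graph neural networks for molecular property prediction",
    "reviewer_b": "multilingual transformer language models and translation",
}
submission = "pretraining multilingual transformers for low-resource translation"

# Embed all texts in the same TF-IDF space.
vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(list(reviewer_profiles.values()) + [submission])

# Affinity of each reviewer to the submission = cosine similarity of the texts.
scores = cosine_similarity(matrix[:-1], matrix[-1]).ravel()
for name, score in zip(reviewer_profiles, scores):
    print(f"{name}: {score:.2f}")
```

Running this prints a higher score for reviewer_b, whose (hypothetical) publication history overlaps with the submission’s topic, which is the same basic signal a matching system turns into reviewer recommendations.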

Not just a conference tool…

While open peer review of conference abstracts is the dominant way people currently use OpenReview, that’s not its only application! In her introduction to Tools Trial 3, Emily Lescak highlighted two examples from within the CSCCE Community of Practice (at the 10-minute mark in the video below): OpenReview being used to support the open review of proposal submissions to the Code for Science & Society’s Event Fund and the IOI Open Infrastructure Fund. Her intro also includes a brief recap of the previous OS Tools Trials in this series.

Discussion

After the presentations, we explored some interesting questions regarding both the technical use of OpenReview and its implications for community management.

How open should you be? 

Like other conference submission systems, OpenReview allows a program chair to decide whether author identities should be anonymous to reviewers and vice versa. In addition, at the program chair’s discretion, submissions and their accompanying reviews can be made public, and public commenting can be enabled. In nearly all venues, reviewer identities are anonymized. We discussed some of the pros and cons of various levels of openness and transparency, including the potential for trolling when everyone involved is anonymous.
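For illustration only, the sketch below summarizes the visibility decisions described above as a set of flags. The field names are ours, not OpenReview’s; in practice, venues are configured through OpenReview’s venue request workflow rather than anything like this.

```python
# Hypothetical summary of the visibility choices a program chair faces.
# These field names are illustrative only; OpenReview configures them through
# its venue request workflow, not through a dict like this.
venue_visibility = {
    "authors_anonymous_to_reviewers": True,   # e.g., double-blind review
    "reviewers_anonymous_to_authors": True,   # the norm in nearly all venues
    "submissions_public": False,              # at the program chair's discretion
    "reviews_public": False,                  # at the program chair's discretion
    "public_commenting_enabled": False,       # opens discussion to anyone
}
```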

What can OpenReview tell us about the peer review process? 

OpenReview’s open API means that anyone can mine the data contained in the platform, providing a rich corpus for natural language processing researchers. A number of research groups have started comparing reviewer comments to the parts of the paper they reference and categorizing them by type (e.g., critiques, complaints, and recommendations).
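As a minimal sketch of what that looks like in practice, the snippet below uses OpenReview’s own Python client, openreview-py (pip install openreview-py), to fetch public submissions from a venue. The invitation ID shown is just one example; every venue defines its own.

```python
import openreview

# Connect to the public API; no credentials are needed to read public notes.
client = openreview.Client(baseurl="https://api.openreview.net")

# Fetch a few public submissions. The invitation ID below is one example;
# each venue defines its own invitation IDs.
notes = client.get_notes(
    invitation="ICLR.cc/2019/Conference/-/Blind_Submission", limit=5
)

for note in notes:
    # note.content is a dict of the submission's fields (title, abstract, ...).
    print(note.content.get("title"))
```

Reviews and comments are exposed as notes in much the same way (under their own invitations), which is what makes the kind of large-scale review-mining described above possible.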

What’s next for OpenReview? 

One participant on the call asked about OpenReview’s capacity for accepting submissions in multiple languages, which led to an interesting discussion about what’s coming next for the platform. Currently, venue owners have to take on the responsibility of translation themselves; however, OpenReview does support Markdown, and the team is working on the ability for reviewers to communicate about a paper in real time (rather than leaving a comment that may or may not get a response). We also talked about how this compares with other peer review platforms, and the challenges of asking people to review in the open when they might prefer to address their critiques to the authors directly.

Thank you – and see you next time!

A big thank you to Andrew and Melisa, and to everyone who came and contributed to Tools Trial 3! Next time, Sanket Verma, Community Manager at Zarr, will share how his community uses GitHub to collaborate on technical documentation; Julia Koehler, a senior developer of Rosetta, will talk about how she uses GitHub Teams; and Paul Nagy will share how he uses Bitergia to manage the OHDSI (Observational Health Data Sciences and Informatics) project, with Bitergia’s Georg Link on hand to showcase more of the platform’s capabilities. We hope you’ll join us on Wednesday, 11 October at 10am EDT / 2pm UTC. More information | Add to calendar

Resources

More about OpenReview

Example use cases

Alternative tools for supporting open peer review