A Vision For Open Science
Does rewarding that which is easily measured lead to better science?
The transition to a more open model of doing science will involve numerous technical challenges in terms of how we can most effectively make code, data and published material cheaply and efficiently available. Beyond these technical challenges, however, we will also need to reflect on the optimal culture for facilitating good research. I will argue that the current culture is problematic, because researchers’ energy and time are so consumed by the short-term ‘publishing papers, to get grants, to publish papers’ cycle that we don’t have time to pursue solutions that could make our research more useful and important in the long term. One of the primary causes of this ‘publish or perish’ culture is a shift in higher education to rewarding output that is easy to quantify. Informally many academics agree that this reward model, and the culture that it promotes, are sub-optimal; the question, of course, is how we can change it. This talk will speculate on a number of options for broadening the way in which scientists are rewarded for their contributions to science (in particular for peer review), and actions we can take as individual researchers to challenge this culture and reward model.
Why have so many academics decided to boycott Elsevier?
On 1 February 2012, I posted a message to CVNet expressing doubts about whether I should be reviewing for journals which weren't open access. My message was prompted by the coincidence of a request to review a paper for Vision Research, and an increasing flurry of negative media coverage about Elsevier, its publisher. There were around 60 replies to my original post, some of which came back to me (rather than CVNet) with a request for anonymity. In the wake of the discussion on CVNet, I signed the online petition at thecostofknowledge.com, which allows individuals to state that they will refrain from publishing in and/or refereeing and/or carrying out editorial work for Elsevier journals. I will explain why I decided to do this, and also hypothesise as to why almost 10,000 other researchers (as of April 2012) have done the same thing.
Open Access and Author-Owned Copyright
Amye Kenall, Tim Meese and Peter Thompson
What are the barriers to starting an open-access journal? Much has been discussed about cost, and there are now more than a few successful production models one can point to. But what are the other barriers, the barriers to starting any new journal, such as securing financing and building a journal's reputation? We offer some "notes from the field" from our experience with launching the open-access journal i-Perception. The second half of our talk focuses on author-owned copyright. We argue that the natural place of copyright is with the author and explain some of the reasoning behind various publishers' positions on copyright and permissions. We also ask how these policies might be affected by various developments in the public funding of research.
Publication bias, the File Drawer Problem, and how innovative publication models can help
One of the topics that has come up frequently in the discussions on open science has been the “file-drawer problem”, otherwise known as publication bias (Rosenthal, 1979, Psychological Bulletin, 86(3), 638-641). Traditional publishing practices have tended to favour positive results that reject the null hypothesis, leading some researchers to suggest that, in the extreme case, “most published research findings are false” (Ioannidis, 2005, PLoS Medicine, 2(8), e124). What does this mean for vision science, and how can an open science framework help address this problem? I will suggest that innovative publishing initiatives such as PsychFileDrawer.org and the Reproducibility Project can harness the new technologies available to researchers to encourage replication of important published research. In addition, new publication models could use methods similar to the registration of all clinical trials in medicine (e.g. initial peer review of only the Introduction and Methods) to help lessen or abolish publication bias.
Open experiments and open source
Have you ever tried to replicate someone’s study and found that they didn’t include sufficient detail for it to be possible? Or wanted to extend someone’s study, but avoided it because it was too much effort to generate their stimuli? Have you learned a new software language and wanted some working scripts to get started? Open science isn’t only about providing people with access to our findings. In the interests of both replicability and education, we should also be striving to provide full access to our actual experiments. This talk will focus on how we might encourage the sharing of experiment code as well as looking at the related movement of open-source software development for science.
Exploiting modern technology in making experiments: the academic app store
Ian M. Thornton
During the last decade, the commercial model for distributing software has undergone a complete revolution. Inspired by the success of music and video download sites, many companies now focus on volume sales of small, stand-alone applications or “apps” rather than on expensive software suites. Important factors behind this shift have been the rapid increase in processing power available on mobile devices, such as smartphones and tablets, and the consequent changes in how users prefer to interact with software. In this talk, I want to explore what these changes might mean for scientists in terms of the development and distribution of experimental ideas. In short, there are numerous open source environments that make it relatively easy to take existing experimental code and to produce cross-platform apps that can be freely downloaded both by academic colleagues and potential participants. Whether such ‘experimental apps’ are designed to run on standard desktop hardware or are specifically focused on the novel interface and data capture potential of mobile devices, there could be a number of advantages to adopting such a model. Here I will specifically focus on rapid development, quick and easy distribution, and the potential for mass, remote data collection.