Written by Max Korbmacher, master's student in psychology, specializing in behaviour and neuroscience.
Since developing into a global pandemic this spring, the coronavirus has become the world's number one topic. While the pandemic's negative consequences have received prominent coverage over the last months, its potential has received less attention. It is perhaps too early, or pretentious, to talk about a silver lining. From a research perspective, however, this crisis looks like a great learning opportunity. But is it really that easy? And will the research around the crisis be of sufficient quality to further our knowledge?
Due to the urgency of finding solutions, a range of peer-reviewed journals are calling for papers on Covid-19 with accelerated review processes, leading to a rush for publication. Additionally, many preprints, papers uploaded prior to acceptance in a peer-reviewed journal, have already been published on the topic. Although this acceleration may seem like an intuitively positive change for scientific output, it may degrade the quality of the research.
The challenge of fast research
Rushed research projects and review processes can lead to a variety of quality problems, which will have to be accounted for in the crisis' aftermath. First, reviewers might have felt bad about blocking important findings, or they may have been pressured to review faster than usual and let through flaws they would normally not accept. Among other effects, this might lead to less critical inspection of decisions made during the research process, and hence of how the research is influenced by questionable research practices (QRPs) such as selective reporting, p-hacking, hypothesizing after the results are known, and data fabrication (1, 2). Moreover, many reviewers who struggle to make time for reviews, for example because their children are at home, are simply excluded from reviewing the literature during this period.
Second, preprints exist to improve final manuscripts via informal peer feedback. Publishing them, however, can be problematic for several reasons. When preprints are of low quality and not read by experts on the matter (and updated accordingly), they might be mis-cited as peer-reviewed publications. This can, in the worst case, lead to the spread of misinformation.
Third, the Covid-19 literature so far might paint a distorted picture. Many publications are non-empirical articles, preprints, or studies with substantial methodological weaknesses (3). Certain research projects might even have adverse effects on participants, showing how an increased focus on fast research output can seriously endanger research quality.
On the other hand, speeding up the review process might simply have decreased the time needed to find reviewers and to move through the different review stages. Instead of collecting dust, manuscripts might have been read, resulting in the publication of interesting, and potentially urgently important, exploratory research. And yes, there are projects with robust and transparent methods currently being conducted, some of which make use of preregistration (4) and registered reports (5). These initiatives may decrease the likelihood of research being influenced by QRPs. For example, in the Covid-19 registered reports from the Psychological Science Accelerator (PSA), hundreds of psychologists first voted on a selection of projects to be executed. They are currently involved in a collective effort to produce interesting, open, intervention-directed data while trying to minimize bias.
As the waves settle, critical reassessment and replication of Covid-19-related research and claims will be necessary – a continuation of the work the field began after detecting replication problems (6). When erroneous information has already reached the general public, updates and corrections will be required. This is obviously not the desired route for communicating science, as trust might thereby decrease and errors tend to live on.
Besides replicability, the findings' generalisability must also be assessed (7), and follow-up research, interventions, and plans to better inform policymakers before, during, and after the next crisis need to be designed. Maybe it will then be easier to answer questions such as 'what are the best methods for reacting to future pandemics?', 'what psychological, social, and physical harm can be expected, and how do we minimize it?', and 'what helps people cope with similar situations in the future?'
In the end, we are in a crisis that demands the fast generation of research. If the research community reacts appropriately, with a focus on quality and reproducibility, we can learn a lot.
References
1. Kerr, N. L. (1998). HARKing: Hypothesizing after the results are known. Personality and Social Psychology Review. doi: 10.1207/s15327957pspr0203_4
2. Wicherts, J. M., Veldkamp, C. L. S., Augusteijn, H. E. M., Bakker, M., van Aert, R. C. M., & van Assen, M. A. L. M. (2016). Degrees of freedom in planning, running, analyzing, and reporting psychological studies: A checklist to avoid p-hacking. Frontiers in Psychology. doi: 10.3389/fpsyg.2016.01832
3. Scheel, A. (2020). Crisis research, fast and slow. Retrieved April 9, 2020, from The 100% CI website: http://www.the100.ci/2020/03/26/crisis-research-fast-and-slow/
4. Gonzales, J., & Cunningham, C. (2015). The promise of pre-registration in psychological research. Psychological Science Agenda, 29(8). Retrieved from https://osf.io/7pd5u/download
5. Nosek, B. A., & Lakens, D. (2014). Registered Reports. Social Psychology. doi: 10.1027/1864-9335/a000192
6. Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716. doi: 10.1126/science.aac4716
7. Yarkoni, T. (2019). The Generalizability Crisis. doi: 10.31234/OSF.IO/JQW35