Adding yet another layer of oversight to scientific research, on top of the many sources of scrutiny that already exist, carries dangers of its own, argues Andrew Barnas. The idea of a perfect research project is an unattainable fantasy – and instilling it in graduate students sets them and their supervisors up for failure.
By Andrew Barnas
It almost goes without saying that in a resource-hungry world, waste is bad. And in a scientific environment in which the number of project proposals vastly outnumbers the amount of money available to fund them, any mechanism to minimise wasteful use of what funding there is must surely be a good thing. Or must it?
In a recent article published in Nature, Daniël Lakens argues that universities need dedicated methodological review boards to prevent fatal flaws in research design and data collection. According to the associate professor in industrial engineering and innovation sciences at Eindhoven University of Technology, such boards would identify flaws in the proposed methodologies of researchers at their institution that could not be corrected after data collection, thereby reducing wasted researcher effort.
Yet while this sounds like a good idea, such boards would be largely redundant. Standard research practices already encompass many checks and balances, both before and after data collection. Granting agencies review detailed methods proposals, since it is in their best interests to fund research that will produce informative results. Moreover, researchers obtaining grants rarely work in isolation; they consult with colleagues in similar fields on project ideas, willingly accepting critical feedback before research begins.
Once funded, individual components of research programmes are typically structured as graduate theses or dissertation projects, for which the students have to submit a written research proposal to a graduate committee for approval. Additionally, if the research involves animal or human subjects, the proposal must pass an ethics or animal care board, whose members evaluate its likelihood of achieving its objectives. Graduate students are also required to attend university courses in methodological design and statistical analysis, and data collection and analysis are meant to be guided by a supervising professor. Then, of course, there is journals’ peer review process, through which all published papers must pass.
Despite all this, poorly conducted science does admittedly slip through the net. Peer review is not perfect, and individual experts all have biases. But the same would be true of methodological review board members; scientists are only human. Moreover, post-publication methods exist to correct significant errors in the literature.
Once a paper is published, the entire scientific community is free to evaluate the merits of the work for themselves and to weigh in where necessary. When serious flaws are found, rebuttal papers are published, generating further discussion of the topic. Corrections can be issued by the original authors, and in cases of serious error (or unethical practices), papers are retracted. This open vetting by the scientific community fortifies the scientific method.
None of this is to deny that we, as researchers, should carefully consider our study designs and analysis plans prior to data collection. And, of course, we should follow local regulations, institutional guidelines and ethical considerations designed to help us conduct good science. But while I empathise with Lakens’ sentiment that the goal of methodological review boards would be “not to gatekeep, but improve”, I suspect the improvement would be marginal given all the other layers of scrutiny – and would come at the cost of burdening already overloaded scientists with yet more bureaucracy.
After all, it seems unlikely that universities would form boards for specific disciplines; it is far more likely that they would create a more generalised board containing members from unrelated fields. Is it reasonable to think that a board of such diverse expertise would even be qualified to assess the specific data requirements and analytical techniques for a specific research project in a particular subfield?
Most importantly, the idea that flawed research is entirely useless is itself flawed. Science is a messy endeavour, and a considerable amount of trial and error is associated with experiments and data collection. Often, we as scientists are doing the best we can amid failed experiments and uncooperative data collection. Embedding in students the idea that we need a perfect road map from project start to project finish is setting the next generation of trained professionals up for failure, because the idea of a perfect research project is an unattainable fantasy.
As Edward O. Wilson writes in his 2013 book Letters to a Young Scientist: “I know that the popular image of science is one of uncompromising precision, with each step carefully recorded in a notebook, along with periodic statistical tests on data made at regular intervals. Such is indeed absolutely necessary when the experiment is very expensive or time-consuming. It is equally demanded when a preliminary result is to be replicated and confirmed by you and others in order to bring a study to conclusion. But otherwise it is certainly all right and potentially very productive just to mess around. Quick uncontrolled experiments are very productive. They are performed just to see if you can make something interesting happen. Disturb Nature and see if she reveals a secret.”
Difficult problems of science require creative approaches. Yet additional methodological review boards are likely to discourage creativity at project conception, in favour of more standardised approaches with higher rates of success but much lower chances of making significant contributions to knowledge.
Andrew Barnas is a postdoctoral fellow in ecology at the University of Victoria, British Columbia. Twitter: @AndrewBarnas