Updated: Oct 18
Most of the exams and schooling we undertook in the nineties and early noughties were about testing knowledge. We went to school to learn things and memorise them. Proof of our efforts could be found in the coffee cups and candy wrappers, but proof of our ability to retain that knowledge was in the exam results we’d get at the end of the semester or year. With the advent of the internet during this same period, however, people stopped being the best repositories of knowledge. Instead, “Google it” became common parlance: now if we have a question, we ask cyberspace for answers.
The internet also changed the way research was conducted. Until as recently as the early years of the 21st century, one would have to go to the science library, find the journal article of interest, go to the compactus, unroll the shelves at the right level, pull out the appropriate volume and issue of the actual journal, and flip through the pages to the article. Jules was in second year when the science librarian told her there was a new way to find research: online databases.
The pressure to produce journal articles has increased concurrently with the amount of information and data generated, leading to what is termed “publish or perish” in the academic world. That is, one’s potential career as a research scientist is highly correlated to the number of journal articles one gets published. This has sped up the pace of research output substantially, to the point that now we have the opposite problem society had when we were educated in the latter years of the 20th century and early years of the 21st: we are overwhelmed by information and data.
All of these problems have converged to create a new one, and it is the crux of this blog: peer-reviewed or otherwise, many are concerned that the quality of research has been dropping to concerningly low levels in recent years. Summarising a research topic in a literature review therefore risks using spurious data to identify gaps or trends in research.
Thankfully, medical science has a process that allows for both reproducible searching of the literature and an assessment of data quality. This is what is termed the systematic review and metadata analysis.
Much more thorough than the literature reviews of our undergraduate years, a systematic review is a process where the method by which literature was sourced is outlined in detail. Boolean search terms, the years within which searches were restricted, database types, and accessibility are all recorded as part of the research methods, in order to maximise the reproducibility of the literature search.
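To make the idea concrete, here is a minimal sketch of what recording a search protocol might look like in code. The field names, the example Boolean query, and the database names are illustrative assumptions, not a real protocol from our work:

```python
# Minimal sketch: a record of a literature search, kept so the search
# can be re-run and reproduced later. All values here are invented.
from dataclasses import dataclass


@dataclass
class SearchProtocol:
    query: str            # the Boolean search string used
    databases: list       # which online databases were searched
    year_from: int        # earliest publication year included
    year_to: int          # latest publication year included
    date_run: str         # when the search was actually executed

    def summary(self) -> str:
        """One-line description of the search, suitable for a methods section."""
        return (f"{self.query} | {', '.join(self.databases)} | "
                f"{self.year_from}-{self.year_to} | run {self.date_run}")


protocol = SearchProtocol(
    query='("compost" OR "biosolids") AND "contaminat*"',
    databases=["Scopus", "Web of Science"],
    year_from=2000,
    year_to=2023,
    date_run="2023-10-18",
)
print(protocol.summary())
```

Keeping the search as a structured record rather than free text means the same query, databases, and year limits can be reported verbatim in the methods and re-run by anyone checking the review.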
The metadata analysis component is the recording and analysis of information about the data in the articles collected during the systematic review. It sounds a bit meta (excuse the pun), but the metadata are simply the answers to the questions about quality one would normally ask of research. Examples of metadata we have collected in recent works include: the statistical design of the experiment; the number of replicates used; what controls were used; what method was used to analyse the data; how mortality was assessed; the length of the assessment; and the instrument used to measure the chemical of concern, among many more.
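As an illustration of how such metadata might be tabulated and screened, the sketch below flags studies that are missing key quality fields. The study names, field names, and values are all invented for the example:

```python
# Illustrative only: a tiny table of per-study metadata, with a screen
# for studies that did not report key quality information.
study_metadata = [
    {"study": "A", "replicates": 3, "controls": "negative", "stat_design": "ANOVA"},
    {"study": "B", "replicates": None, "controls": None, "stat_design": "t-test"},
    {"study": "C", "replicates": 5, "controls": "solvent", "stat_design": "ANOVA"},
]

# Fields a study must report to be considered assessable.
required = ["replicates", "controls", "stat_design"]

# Collect the studies where any required field is missing.
flagged = [s["study"] for s in study_metadata
           if any(s.get(field) is None for field in required)]

print(flagged)  # → ['B']
```

Even a simple screen like this makes gaps visible at a glance: study B cannot support strong conclusions because its replication and controls were never reported.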
Once the metadata are collected, the analysis begins and one focuses on assessing those age-old tenets of science: accuracy, reproducibility, precision, transparency, and standards of the research.
We at Murrang have been doing more systematic reviews with metadata analyses to understand the reliability of the available data. Such understanding is important for clarifying what is known and unknown when making decisions. Examples include efficacy assessments, where we considered the reliability of data presented in a range of sources to support the efficacy of a chemical for its proposed use, and assessments where we considered the reliability of data to support decisions related to the contamination of organic substances such as composts and biosolids. These metadata analyses allowed a genuine examination of how robust and reliable the data were for decision making. Gaps and limitations in knowledge and conclusions, which are not always apparent from a standard literature review, became clear. The gaps and the information available could then be evaluated to make decisions about environmental and human health protection.
If you are interested in finding out more about systematic review and metadata analysis, the medical sector has produced a number of useful guides, such as those at the following links:
Of course we also love talking science, and you can get hold of us via Murrang’s Contact page if you’d like to find out more from us.