Standardization Framework for Living Meta-Analysis on AMORE
About This Framework
What It Is
The AMORE Standardization Framework is a set of methodological requirements and recommended practices that all living meta-analyses hosted on the platform must follow. It defines the minimum standards for transparency, reproducibility, and scientific rigor.
Why It Exists
Oxytocin research spans diverse methodologies and populations. Without shared standards, meta-analytic evidence risks becoming fragmented and unreliable. This framework ensures that every project on AMORE meets a consistent baseline of quality that the research community can trust.
How It Evolved
The framework was developed by the expert steering committee through a consensus-driven process involving two rounds of questionnaires. The paper describing its development can be found here.
Mandatory Requirements
All requirements must be met for a living meta-analysis to be listed on AMORE. Click any item to read the rationale.
Living meta-analyses are updated over time, with authors performing a new search covering the period since the last search was completed. Authors who contribute to the AMORE platform may choose how often to update their analysis; however, they must update at least once every 24 months to ensure the evidence remains current. This maximum interval between updates was established through expert consensus to balance feasibility with the platform's commitment to current evidence synthesis.
Pre-registration separates confirmatory from exploratory analyses by documenting the research plan before results are known. Meta-analyses with health-related outcomes can be pre-registered in PROSPERO, while others can be pre-registered on the Open Science Framework. This protects against selective reporting and provides a public record that allows the community to evaluate whether the completed analysis followed its stated plan.
A key benefit of living meta-analyses is that they can provide up-to-date evidence. As peer review can take considerable time, results can quickly become out of date by the time they are accepted and published. Depositing the manuscript on a recognised pre-print server (such as the Open Science Framework) facilitates immediate reporting and ensures the research community has timely access to meta-analytic evidence.
Standardized reporting guidelines ensure that meta-analyses include all information necessary for readers to evaluate the methods and findings. PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) is the most widely used standard, but MOOSE (Meta-analyses Of Observational Studies in Epidemiology) and MARS (Meta-Analysis Reporting Standards) are accepted alternatives depending on the study design. Adherence to these guidelines makes the work interpretable, reproducible, and comparable across projects.
Meta-analyses are typically conducted using publicly available data reported in scientific publications or summary statistics privately provided by authors. Unlike individual participant data, summary data cannot be used to identify any individual, which means they can be safely shared. Openly shared meta-analytic data facilitate living updates and help others verify the analyses. Data can be deposited in recognised repositories such as the Open Science Framework or Zenodo.
One approach for facilitating transparent analyses is to share an analysis script (e.g., an R script). Another approach is to use free software packages such as JASP or jamovi, which include point-and-click meta-analysis options. These analysis files, along with underlying data, can be shared so that others can reproduce the analysis. The AMORE steering committee is available to provide advice on implementation.
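To illustrate what a shareable analysis script might contain, here is a minimal sketch of a random-effects meta-analysis (DerSimonian-Laird estimator) in Python. The effect sizes and variances are hypothetical, and the document itself refers to R scripts and point-and-click tools; this is an illustrative example only, not the platform's prescribed workflow.

```python
import math

# Hypothetical per-study effect sizes (e.g., Hedges' g) and their variances.
effects = [0.30, 0.12, 0.45, -0.05, 0.22]
variances = [0.04, 0.02, 0.09, 0.03, 0.05]

def random_effects_pool(y, v):
    """DerSimonian-Laird random-effects pooled estimate."""
    w = [1.0 / vi for vi in v]
    fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    # Cochran's Q and the method-of-moments estimate of tau^2
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, y))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)
    # Re-weight using within-study variance plus between-study variance
    w_star = [1.0 / (vi + tau2) for vi in v]
    pooled = sum(wi * yi for wi, yi in zip(w_star, y)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, se, tau2

pooled, se, tau2 = random_effects_pool(effects, variances)
print(f"pooled = {pooled:.3f}, "
      f"95% CI = [{pooled - 1.96 * se:.3f}, {pooled + 1.96 * se:.3f}]")
```

Sharing a script like this alongside the underlying data lets others re-run the analysis line by line, which is the reproducibility goal this requirement serves.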
A separate supplementary deviation report attached to the meta-analysis manuscript ensures that any changes from the pre-registered protocol are transparently documented.
Recommended Practices
Strongly encouraged practices that strengthen the quality and interpretability of your analysis.
A living meta-analysis introduces the issue of multiple testing: each time new evidence becomes available, a new test is run that adds the new data to the existing data. Running sequential tests without an appropriate alpha correction inflates the Type I error rate, a problem related to "accumulation bias." There are various approaches to address this, including adjusting alpha levels according to the number of planned tests and robust Bayesian meta-analysis. Pre-specifying the chosen strategy in the protocol ensures the approach is principled rather than post hoc.
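One way to adjust alpha levels according to the number of planned tests is a simple Bonferroni-style split of the overall error rate, sketched below. The function name and numbers are illustrative; other alpha-spending schemes exist and may be preferable for a given protocol.

```python
def adjusted_alpha(overall_alpha, planned_tests):
    """Split the overall Type I error rate evenly across the planned
    number of sequential updates (Bonferroni-style correction)."""
    return overall_alpha / planned_tests

# With 5 planned updates and an overall alpha of .05, each update is
# tested at the stricter threshold of .01.
alpha_per_update = adjusted_alpha(0.05, 5)
print(f"alpha per update: {alpha_per_update:.3f}")
```

Because the number of planned tests determines the per-update threshold, this choice must be fixed in the pre-registered protocol rather than decided as updates accumulate.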
Robust Bayesian meta-analyses yield Bayes factors, which quantify the relative evidence for an alternative hypothesis over a null hypothesis (or vice versa). A Bayes factor of 10 indicates there is 10 times more evidence for one hypothesis than for the other, representing strong evidence. One commonly used set of thresholds for interpreting Bayes factors is 3 (moderate evidence), 10 (strong evidence), and 30 (very strong evidence). The threshold of 10 was established through expert consensus as a satisfactory level for concluding the presence or absence of an effect.
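The thresholds above can be expressed as a simple interpretation rule. The sketch below maps a Bayes factor onto those evidence categories; the "anecdotal" label for values below 3 follows a common convention and is an assumption, not part of the framework itself.

```python
def interpret_bf(bf):
    """Map a Bayes factor onto the 3 / 10 / 30 evidence thresholds.
    Values below 1 favour the null, so we interpret the larger ratio."""
    direction = "H1 over H0" if bf >= 1 else "H0 over H1"
    strength = max(bf, 1.0 / bf)
    if strength >= 30:
        label = "very strong"
    elif strength >= 10:
        label = "strong"
    elif strength >= 3:
        label = "moderate"
    else:
        label = "anecdotal"
    return f"{label} evidence for {direction}"

print(interpret_bf(12.0))  # strong evidence for H1 over H0
print(interpret_bf(0.05))  # BF of 1/20 -> strong evidence for H0 over H1
```

Note that the symmetry (a Bayes factor of 1/10 is as informative about the null as 10 is about the alternative) is what lets the threshold of 10 support conclusions about both the presence and the absence of an effect.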
Traditional frequentist meta-analyses generate p-values, which are used to decide whether to reject the null hypothesis. However, this approach cannot evaluate if there is evidence to support the null hypothesis, no matter the size of the p-value. Evaluating support for a null hypothesis is critical for falsifying hypotheses. Two alternative approaches are equivalence testing and Bayesian hypothesis testing. Authors are encouraged to propose a plan for evaluating evidence for a null hypothesis using either of these approaches.
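As a sketch of the first approach, equivalence testing via two one-sided tests (TOST) asks whether the pooled effect lies within pre-specified equivalence bounds. The bounds (here ±0.1) and the input values are hypothetical and would need to be justified and pre-registered; a normal sampling distribution is assumed.

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def tost_p(estimate, se, bound):
    """TOST p-value: the larger of the two one-sided p-values.
    Rejecting both one-sided nulls implies the effect lies within
    (-bound, +bound), i.e., it is statistically equivalent to zero."""
    p_lower = 1.0 - normal_cdf((estimate + bound) / se)  # H0: effect <= -bound
    p_upper = normal_cdf((estimate - bound) / se)        # H0: effect >= +bound
    return max(p_lower, p_upper)

# A pooled effect of 0.02 with SE 0.03 against bounds of +/-0.1:
p = tost_p(0.02, 0.03, 0.10)
print(f"TOST p = {p:.4f}")  # p < .05 here, supporting equivalence to zero
```

Unlike a large p-value from a standard test, a significant TOST result is positive evidence that the effect is too small to matter, which is what falsifying a hypothesis requires.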
Forest plots display individual study effects alongside the pooled estimate, making heterogeneity and outliers immediately visible. Funnel plots help assess potential publication bias by plotting effect sizes against precision. Together, these visualizations provide a transparent summary that complements the numerical results and helps readers form an independent judgment about the strength of the evidence. They are standard practice in well-conducted meta-analyses.
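To show what a forest plot conveys, here is a minimal text-based sketch: each row marks a study's estimate ("o") and its 95% confidence interval ("-") on a shared axis, with the pooled estimate on the last row. The studies, effects, and axis range are hypothetical; real projects would use plotting facilities in their analysis software.

```python
# Hypothetical (effect, standard error) pairs plus the pooled estimate.
rows = [("Study A", 0.30, 0.10),
        ("Study B", 0.12, 0.08),
        ("Pooled", 0.21, 0.05)]

def forest_row(name, effect, se, lo=-0.5, hi=0.8, width=40):
    """Render one forest-plot row as text on the axis [lo, hi]."""
    pos = lambda x: round((x - lo) / (hi - lo) * (width - 1))
    ci_l, ci_u = effect - 1.96 * se, effect + 1.96 * se
    cells = [" "] * width
    for i in range(max(pos(ci_l), 0), min(pos(ci_u), width - 1) + 1):
        cells[i] = "-"          # confidence interval
    cells[pos(effect)] = "o"    # point estimate
    return f"{name:>8} |{''.join(cells)}|"

for name, eff, se in rows:
    print(forest_row(name, eff, se))
```

Even in this crude form, overlap (or lack of overlap) between intervals makes heterogeneity visible at a glance, which is the judgment the plot is meant to support.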