Past and Future Dependencies in Meta-Analysis: Flexible Statistics for Reducing Health Research Waste -- Judith ter Schure

This session will be a 10-minute practice talk with feedback. On April 20th, Judith gives a 10-minute talk at the NRIN Research Conference in the parallel session '2.1 Biases and solutions' <https://www.nrin.nl/agenda/nrin-research-conference-2018/>. Because the talk is so short, a practice round might help to get the focus right. Although the abstract covers all research topics related to meta-analysis, the talk will mainly discuss a newly defined phenomenon: 'Accumulation Bias'. Accumulation Bias is the bias in meta-analysis that arises because clinical trials are, and ought to be, a dependent sequence: whether additional trials are performed, and how many, depends on previous trial results. See the title and abstract below.
  • When Apr 12, 2018 from 02:00 PM to 03:00 PM (Europe/Amsterdam / UTC+0200)
  • Where L120

In 2009, a paper estimated that 85% of our global health research investment is wasted each year. It recommended reducing this waste by basing study design and reporting on (prospective) meta-analyses, involving both decision making (which issues to research and how) and interpretation (how new trial results relate to previous research). However, conventional meta-analysis reporting (p-values and confidence intervals) is neither suitable for such decisions nor straightforwardly interpretable. As a decision procedure, it treats a sequence of trials as an independent sample, while in reality both (1) whether additional trials are performed, and how many, and (2) the timing of the meta-analysis (the 'stopping rule') often depend on previous trial results. Ignoring these dependencies introduces (1) bias and (2) optional stopping problems; their existence has been demonstrated empirically, e.g. as the 'Proteus effect' and 'citation bias'. To solve both (1) and (2), we propose 'Safe Tests' and a reporting framework. Which tests are 'Safe' (e.g. the Bayesian t-test) and which are not (both Bayesian and frequentist tests among them) is discussed intuitively in the meta-analysis context; mathematical detail is postponed to a forthcoming paper. The reporting framework also focuses on the meta-analysis context, but is based on an individual-study setup put forward by Bayarri et al. (2016) as 'Rejection Odds and Rejection Ratios'. Apart from supporting decision making, our proposal also improves meta-analysis interpretation: reported values can be related to gambling earnings and to both frequentist and Bayesian analyses, and thus, apart from reducing waste, also contribute to the recently revived p-value debate.
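
To give a feel for how such dependencies break conventional meta-analysis while a 'Safe Test' survives them, the sketch below simulates trial series under a true null hypothesis. It is a minimal toy model, not the construction from the forthcoming paper: the Gaussian per-trial z-scores, the continuation rule (a new trial is only run while pooled results look promising), the reporting rule (the meta-analysis is reported as soon as it looks significant), the effect size DELTA assumed by the safe test, and the use of per-trial likelihood-ratio e-values are all illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

ALPHA = 0.05        # nominal significance level
DELTA = 0.5         # effect size the safe test bets on under H1 (illustrative)
MAX_TRIALS = 10     # cap on the length of one trial series
N_SERIES = 100_000  # number of simulated trial series
Z_CRIT = norm.isf(ALPHA)  # one-sided critical value, ~1.645

naive_rejects = 0
safe_rejects = 0
for _ in range(N_SERIES):
    z_sum, e_product = 0.0, 1.0
    naive_hit = safe_hit = False
    for t in range(1, MAX_TRIALS + 1):
        z = rng.standard_normal()        # per-trial z-score under H0
        z_sum += z
        pooled_z = z_sum / np.sqrt(t)    # fixed-effect pooled z-score
        # (2) the meta-analysis is 'reported' as soon as it looks significant
        naive_hit = naive_hit or pooled_z > Z_CRIT
        # Safe test: running product of per-trial likelihood-ratio e-values
        # N(DELTA, 1) vs N(0, 1); reject once the product reaches 1/ALPHA.
        e_product *= np.exp(DELTA * z - 0.5 * DELTA ** 2)
        safe_hit = safe_hit or e_product >= 1 / ALPHA
        # (1) a further trial is only run while results look promising
        if pooled_z <= 0:
            break
    naive_rejects += naive_hit
    safe_rejects += safe_hit

print(f"naive meta-analysis, P(false rejection): {naive_rejects / N_SERIES:.3f}")
print(f"safe test,           P(false rejection): {safe_rejects / N_SERIES:.3f}")
```

In this toy setting the naive pooled z-test rejects the true null well above the nominal 5%, because both the series length and the reporting time depend on earlier results. The e-value product, by contrast, is a nonnegative martingale under the null, so by Ville's inequality its false rejection rate stays below ALPHA under any such data-dependent accumulation and timing; its running value can be read as the earnings of a gambler sequentially betting against the null, which is the gambling interpretation alluded to in the abstract.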