Professor Julian Higgins explains why he believes the systematic review and meta-analysis methods described in many highly cited papers are routinely misunderstood or misused. Julian Higgins is Professor of Evidence Synthesis at the Bristol Evidence Synthesis, Appraisal and Modelling (BEAM) Centre at the University of Bristol. His research has focussed on the methodology of systematic review and meta-analysis and he has been senior editor of the Cochrane Handbook for Systematic Reviews of Interventions since 2003. He is an NIHR Senior Investigator and currently co-directs the NIHR Bristol Evidence Synthesis Group.
Systematic reviews and meta-analyses have become influential and popular. Papers describing aspects of the systematic review and meta-analysis toolkit have become some of the most highly cited papers. I will review those that appear at the top of the most-cited list and explain why I believe the methods described are routinely misunderstood or misused. These include a test for asymmetry in a funnel plot, the I-squared statistic for measuring inconsistency across studies, the random-effects meta-analysis model and the PRISMA reporting guideline.
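For readers unfamiliar with the I-squared statistic mentioned above, the following is a minimal sketch of how it is commonly computed from Cochran's Q in an inverse-variance meta-analysis. The study effect estimates and variances are hypothetical, included purely for illustration; they are not from any study discussed in the talk.

```python
# Minimal sketch: I-squared from Cochran's Q in an inverse-variance meta-analysis.
# The effect estimates and within-study variances below are hypothetical.
import numpy as np

effects = np.array([0.10, 0.30, 0.25, 0.55, 0.40])    # hypothetical study effect estimates
variances = np.array([0.04, 0.02, 0.05, 0.03, 0.06])  # hypothetical within-study variances

weights = 1.0 / variances                              # inverse-variance weights
pooled = np.sum(weights * effects) / np.sum(weights)   # fixed-effect pooled estimate

# Cochran's Q: weighted sum of squared deviations from the pooled estimate
Q = np.sum(weights * (effects - pooled) ** 2)
df = len(effects) - 1

# I-squared: percentage of total variability attributed to between-study heterogeneity
I2 = max(0.0, (Q - df) / Q) * 100
print(f"Q = {Q:.2f}, I^2 = {I2:.1f}%")
```

As the talk argues, a figure like this is often over-interpreted: I-squared describes the proportion of variability due to heterogeneity, not the amount of heterogeneity itself.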
Dr Jason Oke gives a talk on Stein's work, the paradox and some of its more controversial results, and considers the implications for evidence-based medicine. Dr Jason Oke, Principal Statistician at Abbott Diabetes Care, was previously a senior statistician at the Nuffield Department of Primary Care Health Sciences, University of Oxford. He has a wealth of experience in applying statistics and data analysis across many health care domains. He is passionate about advancing evidence-based health care practice and policy through rigorous research and teaching.
Next to counting, averaging is the most basic and important practice in statistics. For over 150 years it was thought that nothing was uniformly better than the sample average for the purposes of estimation or prediction. In 1955, Charles Stein proved this wasn't true when considering three or more independent unobservable quantities. In 1961, Willard James and Charles Stein proposed an alternative estimator - the James-Stein estimator - which improved on the simple averaging approach no matter what the true values of the unobservable quantities were. Although Stein's work was initially met with resistance and was slow to be accepted among statisticians, its principal idea is now used widely across statistics and evidence-based medicine.
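To illustrate the shrinkage idea behind the James-Stein estimator, here is a minimal simulation sketch. It assumes the textbook setting of independent observations with known, equal variance and shrinkage toward zero; the simulated true values are entirely hypothetical and the talk itself may frame the estimator differently.

```python
# Minimal sketch: James-Stein shrinkage for p >= 3 independent quantities.
# Assumes x_i ~ N(theta_i, sigma^2) with known, equal variance; values are simulated.
import numpy as np

rng = np.random.default_rng(0)
p = 10                        # number of unobservable quantities (must be >= 3)
theta = rng.normal(0, 1, p)   # hypothetical true values
sigma2 = 1.0
x = rng.normal(theta, np.sqrt(sigma2))  # one noisy observation of each quantity

# Shrinkage factor: pulls every observation toward zero by an amount that
# depends on how large the observations are relative to the noise
shrink = 1 - (p - 2) * sigma2 / np.sum(x ** 2)
js = shrink * x

print("Mean squared error, raw observations:   ", np.mean((x - theta) ** 2))
print("Mean squared error, James-Stein values: ", np.mean((js - theta) ** 2))
```

In repeated simulations the shrunken estimates typically have lower total squared error than the raw observations, which is the sense in which the James-Stein estimator "improves on" simple averaging for three or more quantities.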