A summary of interesting papers on meta-analysis and modeling publication bias and p-hacking
Recently I had an itch to learn about meta-analysis methods, so I dug through some papers on the topic. In the process, I stumbled onto a pair of papers by Professor Maya Mathur at Stanford that I thought were very interesting.
In this paper
As a student with experience in algorithms research, I really liked this idea of a lower bound under minimal assumptions as a complement to any other method that might be used. Another real strength of the approach is its robustness to many typical limitations: heterogeneity, non-normal effects, a small number of studies, or dependent effects. The paper contrasts this with funnel plots and statistical models. A funnel plot places each study's point estimate on the horizontal axis and its standard error on the vertical axis. For smaller studies, you would expect greater variation in estimates, with the points forming a funnel shape. However, the spread in effects from small studies can come from real differences between those studies rather than publication bias, since some treatments may only be feasible to test at a small scale.
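To make the funnel-shape intuition concrete, here is a small simulation with made-up numbers (the study counts, sample sizes, and effect size are my own, not from the paper): when every study estimates the same true effect, small studies scatter widely simply because their standard errors are larger.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 60 studies of one common true effect,
# half small and half large (all numbers invented for illustration).
true_effect = 0.3
n = np.concatenate([rng.integers(20, 80, 30),      # small studies
                    rng.integers(800, 2000, 30)])  # large studies
se = 1.0 / np.sqrt(n)                # standard error shrinks with sqrt(n)
est = rng.normal(true_effect, se)    # observed point estimates

# A funnel plot puts `est` on the horizontal axis and `se` on the
# vertical axis: small-n studies fan out wide, large-n studies
# cluster tightly near the true effect, giving the funnel shape.
spread_small = est[:30].std()
spread_large = est[30:].std()
```

Note that this simulation builds in no publication bias at all; the funnel shape here is purely a consequence of sampling error, which is exactly why asymmetry, not spread, is what funnel-plot diagnostics look for.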
While there is no silver bullet, I really like the clean, intuitive approach the MAN offers.
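If I understand the construction correctly, the worst-case bound comes from meta-analyzing only the nonaffirmative studies, i.e. those whose estimates are not both positive and statistically significant. A minimal fixed-effect sketch with made-up study data (the numbers and the z > 1.96 affirmative cutoff are my assumptions, not taken from the paper):

```python
import numpy as np

# Made-up example data: six study point estimates and standard errors.
yi = np.array([0.80, 0.60, 0.50, 0.10, -0.20, 0.05])
sei = np.array([0.20, 0.25, 0.20, 0.30, 0.35, 0.30])

def fe_pool(y, se):
    """Fixed-effect (inverse-variance weighted) pooled estimate."""
    w = 1.0 / se**2
    return np.sum(w * y) / np.sum(w)

# An "affirmative" study is positive and significant (z > 1.96 here);
# publication bias is assumed to favor affirmative results, so pooling
# only the nonaffirmative studies gives a worst-case bound.
z = yi / sei
nonaffirmative = z <= 1.96

naive = fe_pool(yi, sei)                                     # all studies
worst_case = fe_pool(yi[nonaffirmative], sei[nonaffirmative])
```

In this toy example the naive pooled estimate is around 0.44 while the worst-case bound is essentially zero, which illustrates how stark the correction can be when the affirmative studies carry most of the signal.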
Having enjoyed the first paper, I dug into another paper from Mathur that builds out a framework for p-hacking and publication bias more generally.
In the paper, they recommend running some diagnostics, including a QQ-plot under the Jeffreys prior, and explore the topic through extensive simulation in a later paper.
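For readers unfamiliar with the idea, a normal QQ-plot checks a fitted model by comparing ordered standardized quantities against theoretical normal quantiles; points falling near a straight line support the distributional assumption. A generic sketch of that kind of diagnostic (this is not their exact procedure, and the residuals here are simulated stand-ins):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical standardized residuals from a fitted model; if the
# model's normality assumption holds, these should look standard normal.
resid = rng.standard_normal(200)

# Q-Q diagnostic: order the sample and compare against normal quantiles.
# probplot also fits a least-squares line through the Q-Q points.
(osm, osr), (slope, intercept, r) = stats.probplot(resid, dist="norm")
# A correlation r near 1 (and slope near 1, intercept near 0)
# supports approximate normality.
```

In practice one would plot `osm` against `osr` and eyeball departures in the tails, which is where misspecification from p-hacking or selection would typically show up.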