Tuesday 5 July 2016

Cluster Failure and the Future of fMRI (2016)

All science is comparative: comparing one thing to another, one group to another (for example, Treatment vs. Control), or observations to a theory. Good-quality quantitative studies require a numerical answer to the question "Compared to what?"

HERE is a study by Anders Eklund, Thomas E. Nichols and Hans Knutsson that asks whether the statistical methods routinely used in tens of thousands of functional MRI (fMRI) studies around the world are any good.

The authors provide a long, detailed and closely argued answer to this question. The short answer is no.

The authors' conclusions on the future of fMRI are as follows:

It is not feasible to redo 40,000 fMRI studies, and lamentable archiving and data-sharing practices mean most could not be reanalyzed either. Considering that it is now possible to evaluate common statistical methods using real fMRI data, the fMRI community should, in our opinion, focus on validation of existing methods.

The main drawback of a permutation test is the increase in computational complexity, as the group analysis needs to be repeated 1,000–10,000 times. However, this increased processing time is not a problem in practice, as for typical sample sizes a desktop computer can run a permutation test for neuroimaging data in less than a minute.

Although we note that meta-analysis can play an important role in teasing apart false-positive findings from consistent results, that does not mitigate the need for accurate inferential tools that give valid results for each and every study.

Finally, we point out the key role that data sharing played in this work and its impact in the future. Although our massive empirical study depended on shared data, it is disappointing that almost none of the published studies have shared their data, neither the original data nor even the 3D statistical maps. As no analysis method is perfect, and new problems and limitations will be certainly found in the future, we commend all authors to at least share their statistical results [e.g., via NeuroVault.org] and ideally the full data [e.g., via OpenfMRI.org]. Such shared data provide enormous opportunities for methodologists, but also the ability to revisit results when methods improve years later.
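To give a feel for what the authors are advocating, here is a minimal sketch of a permutation test. This is a generic two-sample version in Python, not the authors' neuroimaging pipeline: labels are shuffled many times to build an empirical null distribution, so the p-value makes no parametric assumptions about the data. The data and group sizes below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def permutation_test(group_a, group_b, n_perm=10_000):
    """Two-sample permutation test on the difference of means.

    Shuffles group labels n_perm times to build an empirical null
    distribution of the test statistic, then reports the fraction of
    shuffles with an effect at least as extreme as the observed one.
    """
    observed = group_a.mean() - group_b.mean()
    pooled = np.concatenate([group_a, group_b])
    n_a = len(group_a)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # random relabelling of subjects
        diff = pooled[:n_a].mean() - pooled[n_a:].mean()
        if abs(diff) >= abs(observed):
            count += 1
    # +1 in numerator and denominator so p is never exactly zero
    return (count + 1) / (n_perm + 1)

# Hypothetical data: two simulated groups of 20 "subjects" each
a = rng.normal(0.5, 1.0, size=20)   # group with a built-in effect
b = rng.normal(0.0, 1.0, size=20)
p = permutation_test(a, b)
```

The loop is the source of the computational cost the authors mention (the whole analysis repeated 1,000–10,000 times), and also why it remains cheap at typical sample sizes.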