The Problem with “Evidence”

Sean McClure
2 min read · Nov 24, 2023


Science is assumed to be “evidence-based,” but that term alone doesn’t mean much. What constitutes good evidence? How is evidence being used? Is it supporting or refuting a hypothesis? Were the hypothesis and experimental design predetermined, or found ex post facto?

The reality is you can find “evidence” for almost any narrative: limit the sample size, cherry-pick studies, and so on. Systematic reviews, meta-analyses, and randomized controlled trials are all susceptible to selective interpretation and the narrative fallacy.

At the heart of the problem is the over-reliance on simplistic statistical techniques that do little more than quantify 2 things moving together.

Take Pearson’s correlation, which is based on covariance. Variation can increase simultaneously across 2 variables for countless reasons, most of them spurious. Yet this simple notion of “causality” undergirds much of the scientific literature.
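To make that concrete, here is a minimal sketch (Python with NumPy and SciPy, using invented data): two independent random walks routinely produce a large Pearson correlation, while a perfectly dependent nonlinear relationship produces almost none.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Two *independent* random walks: no real relationship, yet the
# correlation coefficient frequently comes out large (spurious).
walk_a = np.cumsum(rng.normal(size=1000))
walk_b = np.cumsum(rng.normal(size=1000))
r_spurious, _ = pearsonr(walk_a, walk_b)

# A *perfectly dependent* but nonlinear relationship: y is fully
# determined by x, yet the linear correlation sits near zero.
x = rng.uniform(-1, 1, size=1000)
y = x ** 2
r_missed, _ = pearsonr(x, y)

print(f"independent random walks: r = {r_spurious:+.2f}")  # often far from 0
print(f"deterministic y = x**2:   r = {r_missed:+.2f}")     # close to 0
```

Neither number tells you anything about causality; the first flags a dependence that isn’t there, and the second misses one that is.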

Information-theoretic (entropy-based) approaches, on the other hand, can assess *general* measures of dependence. Rather than a specialized (linear) view based on concurrent variation, entropy captures the amount of information contained in and between variables.
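As a rough sketch of what “amount of information” means here (the coin-flip data below is invented for illustration), Shannon entropy scores a variable by how unpredictable its outcomes are:

```python
import numpy as np

def shannon_entropy(values):
    # H(X) = -sum p(x) * log2 p(x), estimated from observed frequencies
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

fair   = np.array([0, 1] * 500)          # fair coin: maximally unpredictable
biased = np.array([0] * 950 + [1] * 50)  # heavily biased coin: mostly predictable

print(shannon_entropy(fair))    # ~1.00 bit per flip
print(shannon_entropy(biased))  # ~0.29 bits per flip
```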

If we were genuinely interested in giving the term “evidence” an authentic and reliable meaning, then the methods used to underpin an assertion would have to be rigorous.

We wouldn’t look to conveniently simplistic methods to denote something as evidential; rather, we would look for a measure capable of assessing the expected amount of information held in a random variable. There is nothing more fundamental than information.

Consider Mutual Information (MI), which quantifies the amount of information obtained about one random variable by observing another. This observation of the relationship between variables is what measurement and evidence are all about.

MI measures how different the joint entropy is from the marginal entropies. If there is a genuine dependence between variables, we would expect the information gathered from all variables at once (the joint entropy) to be less than the sum of the information from each variable taken independently (the marginal entropies).
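A minimal sketch of that bookkeeping (Python with NumPy; the quadratic data and the 16-bin discretization are assumptions of the example, not part of any standard recipe): estimate the joint and marginal entropies from a 2-D histogram and take I(X;Y) = H(X) + H(Y) − H(X,Y).

```python
import numpy as np

def entropy_bits(p):
    # Shannon entropy in bits of a probability array (zero cells ignored)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100_000)
y = x ** 2                          # fully dependent, but not linearly

# Discretize into bins; the bin count is a modeling choice that matters.
joint, _, _ = np.histogram2d(x, y, bins=16)
p_xy = joint / joint.sum()
p_x = p_xy.sum(axis=1)              # marginal distribution of x
p_y = p_xy.sum(axis=0)              # marginal distribution of y

h_x, h_y = entropy_bits(p_x), entropy_bits(p_y)
h_xy = entropy_bits(p_xy.ravel())
mi = h_x + h_y - h_xy               # I(X;Y) = H(X) + H(Y) - H(X,Y)

print(f"H(X) + H(Y) = {h_x + h_y:.2f} bits")
print(f"H(X,Y)      = {h_xy:.2f} bits")  # smaller than the sum => dependence
print(f"I(X;Y)      = {mi:.2f} bits")    # clearly above zero despite r ≈ 0
```

The same pair of variables that Pearson’s r waved through as “no relationship” shares a couple of bits of information; that gap between the joint entropy and the sum of the marginals is exactly what MI is measuring.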

If “evidence-based” science were genuinely invested in authentic measurement, it would leverage *general* measures of dependence; that demands an approach rooted in information theory. Without entropy you’re just picking data, choosing a narrative, and calling it “evidence.”


Sean McClure

Independent Scholar; Author of Discovered, Not Designed; Ph.D. Computational Chem; Builder of things; I study and write about science, philosophy, complexity.