The Theory of Statistics as the
"Frequentist's" Theory of Inductive Inference
Deborah G. Mayo
Virginia Tech
Abstract
Given Neyman's emphasis on inductive behavior, many may be surprised to find the above section
title in his 1955 paper "The Problem of Inductive Inference". But, as Lehmann (1995) points out,
Neyman regarded the Neyman-Pearson (N-P) formulation of tests "as an important contribution not
only to statistics but also to the philosophical problem of induction". However, Lehmann observes,
philosophers of science found (and generally find) little use for N-P tests in their approaches
to induction and confirmation; with few exceptions, where they were discussed at all, it was as a
foil for mounting criticisms. I will touch only briefly on the assumptions of the logical empiricist/logical
positivist philosophy of science that gives rise to those criticisms (enough tears have been shed!). My main task is to unpack the above title. This I will do by showing the relevance of the broadly
"error statistical" methodology to current (post-logical empiricist) philosophy of science---something
that has been largely overlooked, much to the detriment of current problems in the philosophy of
evidence and inference. With key themes from Lehmann's and Neyman's papers serving as backdrop, I will
weave together several strands from:

a) Popper's falsificationism (which came up short in cashing out the notion of "severe tests" because it lacked an understanding of, or contact with, N-P methods),

b) Pearson's attempt to distinguish evidential and behavioral interpretations of N-P tests (which gave hints and examples but was never worked out),

c) C.S. Peirce's philosophy of induction as severe testing (which partially anticipates the formal machinery developed later by Neyman and Pearson).