Model-Selection Methods and the ‘Use-Novelty’ Criterion
Katie Steele (London School of Economics)

June 9, 2014, 5:15pm
London School of Economics

London
United Kingdom

Sponsor(s):

  • British Society for the Philosophy of Science

Details

Ordinary Meeting, Monday, 9 June

5.15pm, room LAK T206 in the Lakatos Building at the LSE.
http://www.lse.ac.uk/mapsAndDirections/Home.aspx

Tea will be served beforehand.

** All welcome! **

ABSTRACT:

The ‘use-novelty (UN)’ criterion (Worrall 2006, 2010) tells us when fit-with-the-evidence should bear on our confidence in a theory: only when the evidence is new in the sense that it was not accommodated in the construction of the theory. Worrall further elaborates ‘accommodation’ as a matter of using the evidence to fix the parameters of the more general theory in question; the UN criterion then states, in more precise terms, that evidence used to fix the parameters of a general theory does not confirm that general theory. This is a very powerful way of depicting why prominent examples of theories constructed in an ad hoc way to fit the evidence, such as Intelligent Design theories, receive no confirmation from said evidence. It is powerful, I would suggest, because it makes the UN criterion explicable in terms of a Bayesian model of confirmation. The problem, however, is that Worrall’s revised UN criterion deals only with a special case – the case where some given amount of evidence fully decides the parameters of a general theory, such that other possible parameter values are outright inconsistent with the evidence. In practice there are many more complicated cases where some range of parameter values cannot be ruled out by any given amount of evidence; in these cases, there is endless scope for refining/specifying the general theory in light of new evidence. These sorts of cases (or a highly qualified set of them) have given rise to so-called ‘model selection methods’ in statistics. This raises the question: Which model-selection methods make for plausible extensions of Worrall’s UN criterion to the complicated cases? In particular, some have argued that the ‘cross validation’ method is faithful to the UN criterion. Here I assess whether this is a fair appraisal, by contrasting cross validation with the more traditional Bayesian account of model selection, with reference to very simple models as well as models in climate science.
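The contrast the abstract draws between cross validation and Bayesian model selection can be illustrated on a toy case. The sketch below is not from the talk: the data, the nested polynomial models (standing in for increasingly specified general theories), and the use of BIC as a rough stand-in for the Bayesian marginal likelihood are all illustrative assumptions.

```python
# Illustrative sketch: two model-selection methods applied to nested
# polynomial models of increasing degree (i.e. increasingly flexible
# "general theories" whose parameters the data cannot fully decide).
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 30)
# Simulated data from a degree-1 "true theory" plus noise.
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.2, size=x.size)

def rss(xs, ys, degree):
    """Residual sum of squares of the least-squares fit of given degree."""
    coeffs = np.polyfit(xs, ys, degree)
    resid = ys - np.polyval(coeffs, xs)
    return float(resid @ resid)

def loo_cv_score(xs, ys, degree):
    """Leave-one-out cross validation: each datum is held out of the
    fitting ('accommodation') step and used only to test the fitted model,
    so the test evidence is 'use-novel' relative to the fit."""
    errs = []
    for i in range(xs.size):
        mask = np.arange(xs.size) != i
        coeffs = np.polyfit(xs[mask], ys[mask], degree)
        errs.append((ys[i] - np.polyval(coeffs, xs[i])) ** 2)
    return float(np.mean(errs))

def bic_score(xs, ys, degree):
    """Bayesian Information Criterion: in-sample fit penalised by the
    number of free parameters; a crude proxy for the Bayesian marginal
    likelihood of the general model (lower is better)."""
    n = xs.size
    k = degree + 1  # free parameters of a degree-d polynomial
    return n * np.log(rss(xs, ys, degree) / n) + k * np.log(n)

degrees = range(4)
cv_choice = min(degrees, key=lambda d: loo_cv_score(x, y, d))
bic_choice = min(degrees, key=lambda d: bic_score(x, y, d))
print("cross-validation picks degree", cv_choice)
print("BIC picks degree", bic_choice)
```

On clearly linear data the two criteria typically agree; they come apart in how they handle flexible models, which is where the philosophical question of fidelity to the UN criterion gets its grip.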

Registration: No
