Statistical Learning and Reliable Processing
Logic Seminar: Session 6
Statistical learning theory allows us to select a good classification rule from a (potentially infinite) set of candidate rules based only on a finite data set, where a 'good' rule is one that makes few errors when applied to phenomena outside the data set. The empirical nature of this method has opened up new discussions of old problems in the philosophy of science, as it provides promising perspectives on both the principle of parsimony and the problem of induction. Statistical learning theory can deliver what it claims because of a theorem proved by Vapnik and Chervonenkis (the VC theorem). In this seminar, Zhao starts with an examination of the rationale behind statistical learning theory and the VC theorem on which it depends. She then evaluates some of the philosophical discussions around statistical learning theory in light of this technical background. Finally, she draws attention to an underexplored connection between the VC theorem and model theory, and comments on the implications of this connection.
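For readers unfamiliar with the result, a minimal sketch of the guarantee at issue, assuming the standard textbook statement of the VC generalization bound (the notation below is introduced here, not in the abstract): for a hypothesis class $H$ of VC dimension $d$ and a sample of $n$ i.i.d. data points, with probability at least $1 - \delta$ every rule $h \in H$ satisfies

  $R(h) \le \hat{R}(h) + \sqrt{\dfrac{d\left(\ln(2n/d) + 1\right) + \ln(4/\delta)}{n}}$,

where $R(h)$ is the true error rate and $\hat{R}(h)$ the error rate observed on the sample. Because the gap shrinks as $n$ grows relative to $d$, a rule that performs well on a sufficiently large finite data set can be expected to perform well beyond it, which is what licenses selection from an infinite class of candidates.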
Chair for Session: TBA