Poster in Workshop: Your Model is Wrong: Robustness and misspecification in probabilistic modeling
Forcing a model to be correct for classification
Jiae Kim · Steven MacEachern
Scientists have long recognized deficiencies in their models, particularly in those that seek to describe the full distribution of a set of data. Statistics is replete with ways to address these deficiencies, including adjusting the data (e.g., removing outliers), expanding the class of models under consideration, and the use of robust methods. In this work, we pursue a different path, searching for a recognizable portion of a model that is approximately correct and that aligns with the goal of inference. Once such a model portion has been found, traditional statistical theory applies and suggests effective methods. We illustrate this approach with linear discriminant analysis, showing much better performance than is obtained either by ignoring the model's deficiency or by working in a space large enough to capture its main deficiency.
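For context only (this is the standard method the poster builds on, not the authors' modification of it): under the usual linear discriminant analysis model with class means \mu_k, common covariance \Sigma, and prior class probabilities \pi_k (standard notation, not taken from the abstract), an observation x is assigned to the class with the largest linear discriminant score,

\[
\hat{y}(x) = \arg\max_{k} \delta_k(x),
\qquad
\delta_k(x) = x^{\top}\Sigma^{-1}\mu_k - \tfrac{1}{2}\,\mu_k^{\top}\Sigma^{-1}\mu_k + \log \pi_k ,
\]

with \mu_k, \Sigma, and \pi_k replaced by plug-in estimates (class sample means, pooled sample covariance, and class proportions) in practice. The abstract concerns what to do when the Gaussian model underlying this rule is misspecified, but leaves the specifics of its approach at a high level.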