
Workshop on History of Statistics

Munich, 22nd and 23rd March 2016

Glenn Shafer, other researchers in the history of statistics, and anyone interested in the field will meet for a workshop on Tuesday 22 and Wednesday 23 March. There will be presentations as well as ample room for discussion in an informal atmosphere.

Anyone interested in the history of statistics is welcome to join.

Location: Room 144 on the first floor (European counting, i.e., one floor above ground level) of the Department of Statistics (Ludwigstraße 33, 80539 München)

Admission Fee: There is no fee.
However, for organisational purposes, please let us know whether you plan to join us for the full workshop or for parts of it.

Contact: If you have any questions, please feel free to contact us via the notification email address.

Tentative Program:



Introduction and Welcome


Glenn Shafer (Rutgers): The invention of random variables: concept and name

By tracing the history of the names variabile casuale, zufällige Variable, variable aléatoire, and random variable, we learn something about the concept as well. The story features Galton as well as Laplace, and Chuprov, Fréchet and Darmois as well as Markov, Cantelli, Neyman and Wald, and Doob.


Coffee Break


Wolfgang Pietsch (TU Munich): A causal approach to analogical inference

I discuss whether analogical reasoning constitutes more than a heuristic tool for hypothesis generation. In the first part, some historical accounts are reviewed, focusing in particular on Keynes, Carnap, and Hesse. In trying to identify the source of the widespread skepticism regarding analogical inferences, I concentrate on the notion of similarity and in particular on the relevance of differences between source and target. In the second part of the talk, I propose a conceptual framework for how analogical inferences could be rendered more objective. To this end, I first argue for a distinction between what will be called a predictive and a conceptual type of analogical reasoning. I then take up a common intuition according to which analogical inferences of the predictive type hold if the differences between source and target concern only irrelevant properties. I attempt to make this idea more precise by specifying a notion of irrelevance in terms of a counterfactual analysis.


Conference Dinner (on a Dutch treat basis)



Thomas Augustin (LMU Munich) and Rudolf Seising (FSU Jena): The HiStaLMU project

Two short presentations introduce the HiStaLMU project (History of Statistics at LMU Munich), which aims at describing the institutional development of the Department of Statistics at LMU and the methodological background driving it. In the first part of the talk, presented by Thomas Augustin, we try to identify and date some institutional milestones. From these, initial theses and research questions concerning methodological positions are derived and put up for discussion. In the second part, Rudolf Seising presents the design, practical implementation, and first results of the oral-history part of the project.


Coffee Break


Wolfgang Stegmüller and the philosophical roots of the Institute of Statistics in Munich

Wolfgang Stegmüller (1923-1991) was one of the most influential philosophers of science in the German-speaking world. In 1974 he and Kurt Weichselberger founded the Institute of Statistics and Philosophy of Science at LMU Munich. While his wife Margret describes their collaboration as purely practically motivated, Stegmüller's work suggests there was more to it: he focused on the philosophy of statistics in precisely the years before the institute's foundation and explicitly aimed at overcoming an 'enormous gap' between the philosophy of statistics and mathematical statistics.


Uwe Saint-Mont (Univ. Appl. Sciences Nordhausen): On the logic and history of statistical tests

Every scientific investigation consists of (at least) two components: (1) a rather abstract tier, e.g., a theory, a set of concepts, or certain relations of ideas; (2) a rather concrete tier, i.e., empirical evidence, experiments, or observations which constitute matters of fact. Statistical tests are particularly simple models for this endeavour. On the one hand, they consist of hypotheses (potential populations Hi); on the other hand, there is data, i.e., a sample from one of these populations. Given the data, one may conclude something about the hypotheses, e.g., determine the most likely population Hi. Going from simple to complex, we will present and examine four conceptions of statistical testing (Fisher, Likelihood, Bayes, Neyman & Pearson) and an elegant unified treatment. In a nutshell, due to the law of large numbers, testing is an easy discrimination problem that allows a straightforward solution. In other words, it can be treated nicely within an elegant and rather simple mathematical framework. It is therefore quite remarkable that the historical development has been so involved. In particular, the statistical mainstream has pursued a complex line of argument whose application in psychology and other fields has brought some progress but has also impaired scientific thinking.
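As an illustration of the "testing as discrimination" view sketched in the abstract (a minimal sketch, not part of the talk; the two normal populations and the sample size below are invented for illustration): given data from one of two candidate populations, the likelihood picks out the true one with near certainty once the sample is large, by the law of large numbers.

```python
import math
import random

# Two hypothetical candidate populations ("hypotheses") H1 and H2:
# normal distributions with different means and unit variance.
MU = {"H1": 0.0, "H2": 1.0}

def log_likelihood(sample, mu):
    """Log-likelihood of the sample under an N(mu, 1) population."""
    return sum(-0.5 * math.log(2 * math.pi) - 0.5 * (x - mu) ** 2
               for x in sample)

def most_likely_hypothesis(sample):
    """Pick the hypothesis under which the observed sample is most probable."""
    return max(MU, key=lambda h: log_likelihood(sample, MU[h]))

random.seed(0)
# Data actually drawn from H2; with a growing sample, the law of large
# numbers makes the discrimination essentially certain.
sample = [random.gauss(1.0, 1.0) for _ in range(200)]
print(most_likely_hypothesis(sample))  # almost surely "H2"
```

With only a handful of observations the two log-likelihoods can be close; with 200 observations the gap grows linearly in the sample size, which is the sense in which discrimination becomes "easy".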


Lunch Break


Hans Fischer (KU Eichstätt-Ingolstadt): Statistical inference via direct and inverse probabilities in Laplace's work

From the beginning of his stochastic work around 1770, Laplace pragmatically employed both direct and inverse probabilities, i.e., probabilities of future events under certain hypotheses and, conversely, probabilities of hypotheses under the assumption that certain events have happened. With respect to inverse probabilities, he went far beyond Bayes's achievements, especially in deriving adequate asymptotic methods, with which he approached problems of population statistics, for example. Laplace's discussion of sums of independent random variables, which also started in the 1770s, led to direct probabilities but, for a rather long time, lacked usable approximations to the quite intricate probability formulas that typically arise in the case of large numbers. Only around 1810 did Laplace succeed in an asymptotic treatment of sums of independent random variables which, from today's point of view, can be considered a fairly general version of the central limit theorem. This result immediately enlarged the range of application of direct probabilities and initiated a development which eventually led to the predominance of so-called “frequentist statistics” in the 20th century.
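The distinction the abstract turns on can be made concrete in a few lines (a hedged sketch in the classical binomial setting with a uniform prior, in the spirit of the Bayes–Laplace examples, not a reconstruction of Laplace's own calculations): a direct probability asks for the chance of data given a hypothesis, while an inverse probability weights the hypotheses given observed data.

```python
import math

def binom_pmf(k, n, p):
    """Direct probability: chance of k successes in n trials given hypothesis p."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

# Discretised hypotheses about the unknown success probability p,
# with a uniform prior over them.
hypotheses = [i / 100 for i in range(101)]
prior = [1 / len(hypotheses)] * len(hypotheses)

def posterior(k, n):
    """Inverse probability: weight of each hypothesis p given the observed data."""
    weights = [pr * binom_pmf(k, n, p) for p, pr in zip(hypotheses, prior)]
    total = sum(weights)
    return [w / total for w in weights]

post = posterior(7, 10)  # observed 7 successes in 10 trials
best = hypotheses[max(range(len(post)), key=post.__getitem__)]
print(best)  # posterior mode sits at the observed frequency 0.7
```

The same binomial formula serves both directions: read with p fixed it is a direct probability; renormalised over candidate values of p it becomes an inverse one.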


