Statistical assumption

Statistics, like all mathematical disciplines, does not infer valid conclusions from nothing. Inferring interesting conclusions about real statistical populations almost always requires some background assumptions. Those assumptions must be made carefully, because incorrect assumptions can generate wildly inaccurate conclusions.

Here are some examples of statistical assumptions:

  • Independence of observations from each other (casually assuming independence when it does not hold is an especially common error[1]).
  • Independence of observational error from potential confounding effects.
  • Exact or approximate normality of observations (or errors); a brief check of this assumption is sketched after this list.
  • Linearity of graded responses to quantitative stimuli, e.g., in linear regression.
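As an illustration, the following sketch (in Python with NumPy and SciPy, chosen here purely for illustration and not prescribed by any of the sources cited below) checks the approximate-normality assumption for a hypothetical sample using the Shapiro–Wilk test.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    sample = rng.normal(loc=10.0, scale=2.0, size=50)   # hypothetical observations

    # Shapiro-Wilk test of the null hypothesis that the sample was drawn
    # from a normal distribution; a small p-value casts doubt on normality.
    statistic, p_value = stats.shapiro(sample)
    print(f"Shapiro-Wilk W = {statistic:.3f}, p = {p_value:.3f}")
    if p_value < 0.05:   # a conventional, but arbitrary, cut-off
        print("Evidence against approximate normality")
    else:
        print("No strong evidence against approximate normality")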

Classes of assumptions

There are two approaches to statistical inference: model-based inference and design-based inference.[2][3][4] Both approaches rely on some statistical model to represent the data-generating process. In the model-based approach, the model is taken to be initially unknown, and one of the goals is to select an appropriate model for inference. In the design-based approach, the model is taken to be known, and one of the goals is to ensure that the sample data are selected randomly enough for inference.

Statistical assumptions can be put into two classes, depending upon which approach to inference is used.

  • Model-based assumptions. These include the following three types:
    • Distributional assumptions. Where a statistical model involves terms relating to random errors, assumptions may be made about the probability distribution of these errors.[5] In some cases, the distributional assumption relates to the observations themselves.
    • Structural assumptions. Statistical relationships between variables are often modelled by equating one variable to a function of another (or several others), plus a random error. Models often involve making a structural assumption about the form of the functional relationship, e.g. as in linear regression. This can be generalised to models involving relationships between latent (unobserved) variables.
    • Cross-variation assumptions. These assumptions involve the joint probability distributions of either the observations themselves or the random errors in a model. Simple models may include the assumption that observations or errors are statistically independent.
  • Design-based assumptions. These relate to the way observations have been gathered, and often involve an assumption of randomization during sampling.[6][7]

The model-based approach is the more commonly used of the two in statistical inference; the design-based approach is used mainly with survey sampling. With the model-based approach, all the assumptions are effectively encoded in the model.
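As a concrete illustration of how assumptions are encoded in a model, the sketch below (in Python with NumPy and statsmodels; the simulated data and all specific numbers are hypothetical) fits a simple linear regression in which the structural assumption is the straight-line form, the distributional assumption is normality of the errors, and the cross-variation assumption is that the errors are independent with constant variance.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 100
    x = rng.uniform(0, 10, size=n)

    # Data-generating process matching the model's assumptions:
    #   structural       - y is a linear function of x plus an error term
    #   distributional   - errors are normal with mean zero
    #   cross-variation  - errors are independent with constant variance
    errors = rng.normal(0.0, 1.0, size=n)
    y = 2.0 + 0.5 * x + errors

    X = sm.add_constant(x)     # design matrix with an intercept column
    fit = sm.OLS(y, X).fit()   # ordinary least squares under these assumptions
    print(fit.params)          # estimated intercept and slope
    print(fit.bse)             # standard errors, valid if the assumptions hold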


Checking assumptions

Given that the validity of any conclusion drawn from a statistical inference depends on the validity of the assumptions made, those assumptions should clearly be reviewed at some stage. Some instances, for example where data are lacking, may require that researchers judge whether an assumption is reasonable; researchers can extend this judgement by considering what effect a departure from the assumption would have on the conclusions. Where more extensive data are available, various procedures for statistical model validation can be applied, e.g. for regression model validation.
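For example, after fitting a regression model, the residuals can be examined. The sketch below (Python with SciPy and statsmodels; the data are simulated and the particular diagnostics are only one possible choice) checks the residuals for approximate normality and for first-order autocorrelation.

    import numpy as np
    import statsmodels.api as sm
    from scipy import stats
    from statsmodels.stats.stattools import durbin_watson

    rng = np.random.default_rng(2)
    x = rng.uniform(0, 10, size=80)
    y = 1.0 + 0.3 * x + rng.normal(0.0, 0.5, size=80)   # hypothetical data

    fit = sm.OLS(y, sm.add_constant(x)).fit()
    residuals = fit.resid

    # Shapiro-Wilk: a small p-value casts doubt on the normality assumption.
    _, p_normal = stats.shapiro(residuals)

    # Durbin-Watson: values near 2 are consistent with uncorrelated errors;
    # values well below 2 suggest positive autocorrelation.
    dw = durbin_watson(residuals)

    print(f"Shapiro-Wilk p = {p_normal:.3f}, Durbin-Watson = {dw:.2f}")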

Example: independence of observations

Scenario: Consider a study assessing the effectiveness of a new teaching method across multiple classrooms. Students within the same classroom share teachers, peers and learning conditions, so their outcomes tend to be correlated. If each student is nevertheless treated as an independent observation, rather than the classroom being treated as the relevant unit, the assumption of independence is violated.

Consequence: Failing to account for this lack of independence understates the uncertainty in the estimates: standard errors are too small, so the apparent impact and statistical significance of the teaching method are inflated. This can result in an overestimation of the method's effectiveness and of its generalizability to diverse educational settings.
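A minimal simulation of this scenario is sketched below (Python with NumPy and statsmodels; the classroom sizes, effect sizes and the use of cluster-robust standard errors are illustrative assumptions, not details of any particular study). Treating every student as an independent observation yields noticeably smaller standard errors than an analysis that accounts for the classroom grouping.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    n_classrooms, students_per_class = 20, 25
    classroom = np.repeat(np.arange(n_classrooms), students_per_class)

    # Half of the classrooms receive the new teaching method.
    treated = (classroom < n_classrooms // 2).astype(float)

    # Outcomes share a classroom-level component, so students in the same
    # classroom are correlated; the true treatment effect here is 1.0.
    class_effect = rng.normal(0.0, 2.0, size=n_classrooms)[classroom]
    score = 50.0 + 1.0 * treated + class_effect + rng.normal(0.0, 1.0, size=classroom.size)

    X = sm.add_constant(treated)
    naive = sm.OLS(score, X).fit()   # treats all students as independent
    clustered = sm.OLS(score, X).fit(cov_type="cluster",
                                     cov_kwds={"groups": classroom})

    print("naive SE:    ", naive.bse[1])
    print("clustered SE:", clustered.bse[1])   # typically much larger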

Notes

  1. ^ Kruskal, 1988
  2. ^ Koch G. G., Gillings D. B. (2006), "Inference, design-based vs. model-based", Encyclopedia of Statistical Sciences (editor—Kotz S.), Wiley-Interscience.
  3. ^ Cox, 2006, ch.9
  4. ^ de Gruijter et al., 2006, §2.2
  5. ^ McPherson, 1990, §3.4.1
  6. ^ McPherson, 1990, §3.3
  7. ^ de Gruijter et al., 2006, §2.2.1

References

  • Cox D. R. (2006), Principles of Statistical Inference, Cambridge University Press.
  • de Gruijter J., Brus D., Bierkens M., Knotters M. (2006), Sampling for Natural Resource Monitoring, Springer-Verlag.
  • Kruskal W. (1988), "Miracles and statistics: the casual assumption of independence (ASA Presidential address)", Journal of the American Statistical Association, 83 (404), 929–940. doi:10.2307/2290117. JSTOR 2290117.
  • McPherson G. (1990), Statistics in Scientific Investigation: Its Basis, Application and Interpretation, Springer-Verlag. ISBN 0-387-97137-8.