M-estimator
M-estimators (or M-estimates) are a broad class of estimators in statistics that generalize classical maximum likelihood estimation (MLE) and are used to provide robust estimates of parameters in the presence of outliers or other small departures from model assumptions. The "M" in M-estimator stands for "maximum likelihood-type."
Overview
M-estimators were introduced by Peter J. Huber in 1964 as part of an effort to develop statistical methods that remain reliable under deviations from model assumptions. Unlike classical estimators such as the sample mean or least squares, which can be unduly influenced by outliers, M-estimators minimize an objective function chosen to limit the influence of extreme observations. This makes them particularly useful in practice, where real data often deviate from the assumed theoretical distribution.
Definition
An M-estimator of a parameter θ is defined as a solution to the following equation: \[ \sum_{i=1}^n \psi(x_i, \theta) = 0 \] where:
- \( \psi \) is a function chosen based on the specific characteristics of the data and the underlying model,
- \( x_i \) are the observed data points,
- \( \theta \) is the parameter to be estimated.
The function \( \psi \) is typically the derivative, with respect to \( \theta \), of a loss function \( \rho \) that measures the discrepancy between the observed data and the model; solving the estimating equation above then corresponds to minimizing \( \sum_{i=1}^n \rho(x_i, \theta) \).
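To make the definition concrete, the following is a minimal sketch (in Python, using NumPy and SciPy) of how such an estimating equation can be solved numerically for a one-dimensional location parameter. The function name m_estimate and the use of a bracketed root finder are illustrative assumptions, not part of any standard library; the example uses \( \psi(x, \theta) = x - \theta \), which recovers the ordinary sample mean.

```python
# A minimal sketch of M-estimation for a one-dimensional location parameter:
# find theta such that sum_i psi(x_i, theta) = 0 by bracketed root finding.
import numpy as np
from scipy.optimize import brentq


def m_estimate(x, psi):
    """Solve sum_i psi(x_i, theta) = 0 for theta.

    Assumes psi is monotone in theta, so a root lies between the
    smallest and largest observations.
    """
    x = np.asarray(x, dtype=float)
    return brentq(lambda theta: np.sum(psi(x, theta)), x.min(), x.max())


# psi(x, theta) = x - theta recovers the ordinary sample mean.
data = np.array([1.2, 0.8, 1.1, 0.9, 50.0])  # one gross outlier
print(m_estimate(data, lambda x, t: x - t))  # about 10.8, pulled by the outlier
```

Swapping in a bounded \( \psi \), such as Huber's function introduced in the next section, yields an estimate far less affected by the outlier.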
Examples
Common examples of M-estimators include:
- Huber's M-estimator, which uses a piecewise linear \( \psi \) that is linear near zero and bounded (constant) for large residuals,
- Hampel's M-estimator, which uses a three-part redescending \( \psi \) that returns to zero for very large residuals, so that extreme outliers receive no influence at all (sketches of both functions follow this list).
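The two \( \psi \) functions just named can be written down directly. The sketch below works with residuals \( r = x - \theta \); the tuning constants (k for Huber; a, b, c for Hampel) are illustrative choices rather than canonical values.

```python
# Sketches of Huber's and Hampel's psi functions for residuals r = x - theta.
# Tuning constants are illustrative defaults, not canonical values.
import numpy as np


def huber_psi(r, k=1.345):
    """Huber's psi: the identity near zero, constant (bounded) for |r| > k."""
    return np.clip(r, -k, k)


def hampel_psi(r, a=2.0, b=4.0, c=8.0):
    """Hampel's three-part redescending psi: it decreases back to zero for
    |r| > c, so extreme outliers receive no influence at all."""
    r = np.asarray(r, dtype=float)
    abs_r, s = np.abs(r), np.sign(r)
    return np.where(abs_r <= a, r,
           np.where(abs_r <= b, a * s,
           np.where(abs_r <= c, a * s * (c - abs_r) / (c - b), 0.0)))
```

Either function can be combined with the root-finding sketch above, e.g. via psi = lambda x, theta: huber_psi(x - theta); on the small example data this gives an estimate of roughly 1.3 instead of 10.8. Note that a redescending \( \psi \) such as Hampel's can produce an estimating equation with multiple roots, so in practice a robust starting value (for example the sample median) is used to select among them.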
Properties
M-estimators have several desirable properties:
- Robustness: They provide stable estimates even when the data include outliers or are from heavy-tailed distributions.
- Asymptotic normality: Under suitable regularity conditions, M-estimators are asymptotically normal, which allows confidence intervals and hypothesis tests to be constructed (a standard form of the limiting distribution is given after this list).
- Efficiency: When the model is correctly specified and there are no outliers, M-estimators can achieve efficiency close to that of the maximum likelihood estimator.
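As a concrete statement of the asymptotic normality property (given here informally under standard regularity conditions, such as smoothness of \( \psi \) and finite moments, rather than as a precise theorem): for a one-dimensional parameter and i.i.d. observations, the limiting distribution has the sandwich form \[ \sqrt{n}\,\bigl(\hat{\theta}_n - \theta_0\bigr) \xrightarrow{d} \mathcal{N}\!\left(0,\ \frac{\operatorname{E}\bigl[\psi(X,\theta_0)^2\bigr]}{\bigl(\operatorname{E}\bigl[\partial_\theta \psi(X,\theta_0)\bigr]\bigr)^2}\right). \] Replacing the two expectations by sample averages evaluated at \( \hat{\theta}_n \) yields a plug-in standard error, which is how confidence intervals for M-estimates are typically constructed.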
Applications
M-estimators are widely used in various fields such as econometrics, biostatistics, and engineering, where robustness to outliers and model misspecifications is crucial.
Further reading
Readers seeking a deeper treatment of the theory and application of M-estimators are referred to advanced texts on statistical theory and robust statistics.
