Parametric model

In statistics, a parametric model or parametric family or finite-dimensional model is a family of distributions that can be described using a finite number of parameters. These parameters are usually collected together to form a single k-dimensional parameter vector θ = (θ1, θ2, …, θk).
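
For example, the family of all normal distributions on the real line is a parametric model with k = 2 and parameter vector θ = (μ, σ):

    \mathcal{P} = \big\{ \mathcal{N}(\mu,\sigma^2)\ \big|\ \mu\in\mathbb{R},\ \sigma>0 \big\}.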

Parametric models are contrasted with semi-parametric, semi-nonparametric, and non-parametric models, all of which require an infinite-dimensional set of “parameters” for their description. The distinction between these four classes is as follows (brief examples are given after the list):

  • in a “parametric” model all the parameters are in finite-dimensional parameter spaces;
  • a model is “non-parametric” if all the parameters are in infinite-dimensional parameter spaces;
  • a “semi-parametric” model contains finite-dimensional parameters of interest and infinite-dimensional nuisance parameters;
  • a “semi-nonparametric” model has both finite-dimensional and infinite-dimensional unknown parameters of interest.
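
For example (standard illustrations of the first three classes):

  • the Gaussian family given above is “parametric”;
  • the set of all probability distributions on R that possess a density is “non-parametric”;
  • the Cox proportional hazards model, whose unknowns are a finite-dimensional vector of regression coefficients (the parameters of interest) together with an unspecified baseline hazard function (a nuisance parameter), is “semi-parametric”.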

Some statisticians believe that the concepts “parametric”, “non-parametric”, and “semi-parametric” are ambiguous.[1] It can also be noted that the set of all probability measures has the cardinality of the continuum, and therefore it is possible to parametrize any model at all by a single number in the interval (0, 1).[2] This difficulty can be avoided by considering only “smooth” parametric models.

Definition

A parametric model is a collection of probability distributions such that each member of this collection, Pθ, is described by a finite-dimensional parameter θ. The set of all allowable values for the parameter is denoted Θ ⊆ Rk, and the model itself is written as


    \mathcal{P} = \big\{ P_\theta\ \big|\ \theta\in\Theta \big\}.

When the model consists of absolutely continuous distributions, it is often specified in terms of corresponding probability density functions:


    \mathcal{P} = \big\{ f_\theta\ \big|\ \theta\in\Theta \big\}.
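
As a minimal computational sketch (not from the article), the following Python fragment represents the normal family as a map θ ↦ fθ; it assumes SciPy is available, and the function name f is purely illustrative:

    # A parametric family of densities as a map theta -> f_theta(x),
    # here the normal family with theta = (mu, sigma).
    from scipy.stats import norm

    def f(theta, x):
        # Evaluate f_theta(x); theta = (mu, sigma) with sigma > 0.
        mu, sigma = theta
        return norm.pdf(x, loc=mu, scale=sigma)

    # Two members of P = { f_theta | theta in Theta }:
    print(f((0.0, 1.0), 0.0))   # standard normal density at 0 (~0.3989)
    print(f((1.0, 2.0), 0.0))   # N(1, 4) density at 0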

The parametric model is called identifiable if the mapping θ ↦ Pθ is one-to-one, that is, if there are no two distinct parameter values θ1 ≠ θ2 such that Pθ1 = Pθ2.
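
A standard example of a model that fails this condition: take θ = (a, b) ∈ R² and

    P_{(a,b)} = \mathcal{N}(a+b,\,1);

any two parameter values with the same sum a + b index the same distribution, so the mapping θ ↦ Pθ is not one-to-one. Identifiability is restored by reparametrizing in terms of a + b alone.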

Examples

Regular parametric model

Let μ be a fixed σ-finite measure on a measurable space (Ω, ℱ), and \mathcal{M}_\mu the collection of all probability measures dominated by μ. Then we will call \mathcal{P} = \big\{ P_\theta\ \big|\ \theta\in\Theta \big\} \subseteq \mathcal{M}_\mu a regular parametric model if the following requirements are met (a worked example follows the list):[3]

  1. Θ is an open subset of Rk.
  2. The map
    \theta\mapsto s(\theta)=\sqrt{dP_\theta/d\mu}
    from Θ to L2(μ) is Fréchet differentiable: there exists a vector \dot{s}(\theta) = (\dot{s}_1(\theta),\,\ldots,\,\dot{s}_k(\theta)) such that
    
    \lVert s(\theta+h) - s(\theta) - \dot{s}(\theta)'h \rVert = o(|h|)\ \ \text{as }h \to 0,
    where ′ denotes matrix transpose.
  3. The map \theta\mapsto\dot{s}(\theta) (defined above) is continuous on Θ.
  4. The k×k Fisher information matrix
    I(\theta) = 4\int \dot{s}(\theta)\dot{s}(\theta)'d\mu
    is non-singular.
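
As a worked example (standard, and not taken from the cited source), consider the normal location model {N(θ, 1) | θ ∈ R} with μ taken to be Lebesgue measure on R. Writing φ for the standard normal density,

    s(\theta)(x) = \sqrt{\varphi(x-\theta)}, \qquad \dot{s}(\theta)(x) = \tfrac{1}{2}(x-\theta)\sqrt{\varphi(x-\theta)},

so requirements 1–3 can be verified directly, and

    I(\theta) = 4\int \dot{s}(\theta)^2\,d\mu = \int (x-\theta)^2\,\varphi(x-\theta)\,dx = 1,

which is non-singular, so requirement 4 holds and the model is regular.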

Properties

  • Sufficient conditions for regularity of a parametric model, in terms of ordinary differentiability of the density function fθ, are as follows:[4]
    1. The density function fθ(x) is continuously differentiable in θ for μ-almost all x, with gradient ∇fθ.
    2. The score function
      
    z_\theta = \frac{\nabla f_\theta}{f_\theta} \cdot \mathbf{1}_{\{f_\theta>0\}}
      belongs to the space L²(Pθ) of square-integrable functions with respect to the measure Pθ.
    3. The Fisher information matrix I(θ), defined as
      
    I(\theta) = \int \!z_\theta z_\theta' \,dP_\theta
      is nonsingular and continuous in θ.

    If conditions (1)–(3) hold, then the parametric model is regular.

  • Local asymptotic normality: a regular parametric model is locally asymptotically normal at every θ ∈ Θ.
  • If the regular parametric model is identifiable then there exists a uniformly √n-consistent and efficient estimator of its parameter θ.[5]
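
As a numerical illustration of the last property (a minimal sketch, assuming the normal location model N(θ, 1), in which the maximum likelihood estimator is the sample mean and I(θ) = 1; only NumPy is required):

    # sqrt(n) * (MLE - theta) should behave like a draw from
    # N(0, 1/I(theta)) = N(0, 1), i.e. stay O(1) as n grows.
    import numpy as np

    rng = np.random.default_rng(0)
    theta = 2.0                               # true parameter value
    for n in (100, 10_000, 1_000_000):
        x = rng.normal(loc=theta, scale=1.0, size=n)
        mle = x.mean()                        # MLE of theta: the sample mean
        print(n, np.sqrt(n) * (mle - theta))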

Notes

  1. Le Cam & Yang 2000, ch. 7.4
  2. Bickel et al. 1998, p. 2
  3. Bickel et al. 1998, p. 12
  4. Bickel et al. 1998, p. 13, Proposition 2.1.1
  5. Bickel et al. 1998, Theorems 2.5.1 and 2.5.2

References

  • Bickel, Peter J.; Doksum, Kjell A. (2001). Mathematical Statistics: Basic and Selected Topics, Volume 1 (2nd ed., updated printing 2007). Pearson Prentice-Hall.
  • Bickel, Peter J.; Klaassen, Chris A. J.; Ritov, Ya’acov; Wellner, Jon A. (1998). Efficient and Adaptive Estimation for Semiparametric Models. New York: Springer. ISBN 0-387-98473-9.
  • Davison, A. C. (2003). Statistical Models. Cambridge University Press.
  • Freedman, David A. (2009). Statistical Models: Theory and Practice (2nd ed.). Cambridge University Press. ISBN 978-0-521-67105-7.
  • Le Cam, Lucien; Yang, Grace Lo (2000). Asymptotics in Statistics: Some Basic Concepts. Springer. ISBN 0-387-95036-2.
  • Lehmann, Erich (1983). Theory of Point Estimation.
  • Lehmann, Erich (1959). Testing Statistical Hypotheses.
  • Liese, Friedrich; Miescke, Klaus-J. (2008). Statistical Decision Theory: Estimation, Testing, and Selection. Springer.
  • Pfanzagl, Johann; with the assistance of R. Hamböker (1994). Parametric Statistical Theory. Walter de Gruyter. ISBN 3-11-013863-8. MR 1291393.






