### Abstract

In a class of semiparametric mixture models, the score function (and consequently the effective information) for a finite-dimensional parameter can be made arbitrarily small depending upon the direction taken in the parameter space. This result holds for a broad range of semiparametric mixtures over exponential families and includes examples such as the gamma semiparametric mixture, the normal mean mixture, the Weibull semiparametric mixture and the negative binomial mixture. The near-zero information rules out the usual parametric √n rate for the finite-dimensional parameter; more surprisingly, the rate remains unattainable even when the mixing distribution is constrained to be countably discrete. Two key conditions leading to the loss of information are the smoothness of the underlying density and the invertibility of a sufficient statistic.
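The confounding behind the loss of information can be seen in an extreme toy case of the normal mean mixture: if the density is p(x; θ, G) = ∫ N(x; θ + z, 1) dG(z), a shift in θ is exactly offset by a location shift of the mixing distribution G, so that direction in the nuisance space carries away all information about θ. The sketch below (a minimal numerical illustration, not the paper's construction — the paper treats identifiable models where the information is near zero rather than exactly zero) checks this identity for an arbitrary discrete G; the atoms and weights are hypothetical.

```python
import math

def normal_pdf(x, mean, sd=1.0):
    """Standard normal density with the given mean and scale."""
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def mixture_density(x, theta, atoms, weights):
    """p(x; theta, G) = sum_k w_k * N(x; theta + z_k, 1) for discrete G."""
    return sum(w * normal_pdf(x, theta + z) for z, w in zip(atoms, weights))

# Hypothetical discrete mixing distribution G (atoms/weights chosen for illustration).
atoms = [-1.0, 0.0, 2.0]
weights = [0.2, 0.5, 0.3]

delta = 0.7
for x in [t / 10 for t in range(-50, 51)]:
    p_shift_theta = mixture_density(x, 0.3 + delta, atoms, weights)
    # Shifting every atom of G by +delta instead reproduces the same density,
    # so the data cannot distinguish the two parameterizations.
    p_shift_G = mixture_density(x, 0.3, [z + delta for z in atoms], weights)
    assert abs(p_shift_theta - p_shift_G) < 1e-12
```

Because the score for θ lies entirely in the tangent space generated by perturbations of G, the efficient information for θ in this toy model is zero; the paper's contribution is that nearly the same degeneracy arises in identifiable exponential-family mixtures.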

| Original language | English (US) |
|---|---|
| Pages (from-to) | 159-177 |
| Number of pages | 19 |
| Journal | Annals of Statistics |
| Volume | 27 |
| Issue number | 1 |
| State | Published - Feb 1 1999 |
| Externally published | Yes |

### Keywords

- Information
- Mixture model
- Semiparametric mixture
- Structural parameter

### ASJC Scopus subject areas

- Mathematics(all)
- Statistics and Probability

### Cite this

*Annals of Statistics*, *27*(1), 159-177.