ABSTRACT

Even in the simplest situation of a two-component mixture where only the mixing proportion η₁ is unknown, the likelihood equation of the mixture model has no closed-form solution. In such a simple situation, the maximum likelihood estimate (MLE) of the mixture parameters can of course still be derived easily with standard optimization algorithms such as Newton–Raphson. But the dimension of the parameter vector θ of a mixture model grows rapidly with the dimension d of the variables and with the number G of components. As a consequence, evaluating the observed information matrix of θ, which the Newton–Raphson algorithm requires, becomes expensive both analytically and computationally. Moreover, this algorithm does not guarantee that the likelihood being maximized increases at each iteration. It is thus no surprise that Newton–Raphson is far from being the algorithm of choice for deriving the MLE of a finite mixture model.
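As a minimal sketch of the first point, consider a two-component mixture with known component densities f_1 and f_2 and an observed sample x_1, …, x_n (this notation is introduced here for illustration only and is not taken from the abstract). The log-likelihood in η₁ is
\[
\ell(\eta_1) = \sum_{i=1}^{n} \log\bigl\{\eta_1 f_1(x_i) + (1-\eta_1)\, f_2(x_i)\bigr\},
\]
and the corresponding likelihood equation
\[
\ell'(\eta_1) = \sum_{i=1}^{n} \frac{f_1(x_i) - f_2(x_i)}{\eta_1 f_1(x_i) + (1-\eta_1)\, f_2(x_i)} = 0
\]
has no closed-form root in η₁ in general, so a numerical scheme such as the Newton–Raphson update
\[
\eta_1^{(t+1)} = \eta_1^{(t)} - \frac{\ell'\bigl(\eta_1^{(t)}\bigr)}{\ell''\bigl(\eta_1^{(t)}\bigr)}
\]
is needed even in this one-parameter case.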