MCQ IN COMPUTER SCIENCE & ENGINEERING

COMPUTER SCIENCE AND ENGINEERING

MACHINE LEARNING

Question
In Gaussian mixture model clustering, the number of Gaussian distribution functions used is equal to
A. Number of clusters
B. Number of attributes
C. Number of instances
D. Number of iterations
Explanation: Correct answer is (A) Number of clusters.

Detailed explanation-1: -Gaussian mixture model clustering is unsupervised. The number of Gaussian distribution functions used is equal to the number of clusters. A Gaussian mixture model with a separate covariance matrix per component is closely related to Quadratic Discriminant Analysis, and each component's covariance can be decomposed similarly to PCA.
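To make that concrete, here is a minimal sketch (assuming scikit-learn's GaussianMixture; the data and the cluster count are illustrative) in which the number of Gaussian distribution functions, the n_components argument, is set to the intended number of clusters:

```python
# Minimal sketch: one Gaussian component per cluster (assumes scikit-learn).
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture

n_clusters = 3  # hypothetical number of clusters expected in the data
X, _ = make_blobs(n_samples=300, centers=n_clusters, random_state=0)

# The number of Gaussian distribution functions equals the number of clusters.
gmm = GaussianMixture(n_components=n_clusters, random_state=0)
labels = gmm.fit_predict(X)

print(labels[:10])       # cluster index (0..n_clusters-1) for the first 10 points
print(gmm.means_.shape)  # one mean vector per Gaussian component
```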

Detailed explanation-2: -Gaussian mixture models (GMMs) are often used for data clustering. You can use GMMs to perform either hard clustering or soft clustering on query data. To perform hard clustering, the GMM assigns query data points to the multivariate normal components that maximize the component posterior probability, given the data.
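As a rough illustration of the hard/soft distinction (a sketch assuming scikit-learn and synthetic data): predict gives the hard assignment to the component with the highest posterior probability, while predict_proba returns the posterior probabilities themselves.

```python
# Hard vs. soft clustering with a GMM (sketch; synthetic data, scikit-learn assumed).
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)
gmm = GaussianMixture(n_components=3, random_state=0).fit(X)

hard = gmm.predict(X)        # hard clustering: component with maximum posterior probability
soft = gmm.predict_proba(X)  # soft clustering: posterior probability of each component

print(hard[:5])              # first five hard assignments
print(soft[:5].round(3))     # each row sums to 1 across the 3 components
```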

Detailed explanation-3: -By changing the value of K you can plot the GMM likelihood for the training and validation sets. In the example this explanation refers to, the optimal number of components is around 20.
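A sketch of that model-selection procedure (hypothetical data; scikit-learn assumed) fits a GMM for each candidate K and compares the average log-likelihood on a held-out validation set:

```python
# Choosing the number of components K by held-out log-likelihood (sketch).
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import train_test_split

X, _ = make_blobs(n_samples=1000, centers=5, random_state=0)
X_train, X_val = train_test_split(X, test_size=0.3, random_state=0)

for k in range(1, 11):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(X_train)
    # score() returns the mean per-sample log-likelihood under the fitted model.
    print(k, round(gmm.score(X_train), 3), round(gmm.score(X_val), 3))

# Pick the K where the validation log-likelihood stops improving.
```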

Detailed explanation-4: -That is to say, the result of a GMM fit to some data is technically not a clustering model, but a generative probabilistic model describing the distribution of the data. Here the mixture of 16 Gaussians serves not to find separated clusters of data, but rather to model the overall distribution of the input data.
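In that density-estimation view (a sketch below, again assuming scikit-learn; the moon-shaped data are illustrative), the fitted mixture can score the log-density of any point and generate new samples from the learned distribution:

```python
# A fitted GMM is a generative probabilistic model of the data distribution (sketch).
from sklearn.datasets import make_moons
from sklearn.mixture import GaussianMixture

X, _ = make_moons(n_samples=500, noise=0.05, random_state=0)

# 16 components model the overall (non-Gaussian) density rather than 16 "true" clusters.
density_model = GaussianMixture(n_components=16, covariance_type="full",
                                random_state=0).fit(X)

log_density = density_model.score_samples(X[:5])  # log p(x) under the mixture
X_new, _ = density_model.sample(100)              # draw new points from the model

print(log_density.round(2), X_new.shape)
```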

Detailed explanation-5: -Gaussian Mixture Models are useful in situations where clusters have an “elliptical” shape. While K-Means uses only means (centroids) to find clusters, GMMs also include variance/covariance. This is exactly what gives GMMs an advantage over K-Means when identifying non-circular clusters.
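To see that advantage, here is a sketch (synthetic data; scikit-learn assumed) in which blob clusters are linearly stretched so they become elliptical; K-Means relies only on centroids, while a full-covariance GMM also estimates each cluster's covariance:

```python
# K-Means vs. GMM on elongated ("elliptical") clusters (sketch; synthetic data).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score
from sklearn.mixture import GaussianMixture

X, y_true = make_blobs(n_samples=600, centers=3, random_state=0)
X = X @ np.array([[0.6, -0.6], [-0.4, 0.8]])  # linear transform -> elliptical clusters

kmeans_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
gmm_labels = GaussianMixture(n_components=3, covariance_type="full",
                             random_state=0).fit_predict(X)

# The GMM's per-component covariances typically recover the stretched clusters better.
print("K-Means ARI:", round(adjusted_rand_score(y_true, kmeans_labels), 3))
print("GMM ARI:    ", round(adjusted_rand_score(y_true, gmm_labels), 3))
```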
