I'm studying Gaussian mixture models and came across this code, which draws a number of samples from two bivariate Gaussian distributions. What I don't understand is the technique used in the code:
import numpy as np
# Number of samples per component
n_samples = 500
# Generate random sample, two components
np.random.seed(0)
C = np.array([[0., -0.1], [1.7, .4]])
X = np.r_[np.dot(np.random.randn(n_samples, 2), C),
          .7 * np.random.randn(n_samples, 2) + np.array([-6, 3])]
(Original link: http://scikit-learn.org/stable/auto_examples/mixture/plot_gmm_selection.html#sphx-glr-auto-examples-mixture-plot-gmm-selection-py)
According to this Wikipedia link, we can generate multivariate Gaussian samples by Cholesky-decomposing the covariance matrix and then multiplying the resulting factor by a vector whose components are drawn from the standard normal distribution.
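For reference, here is a minimal sketch of that Cholesky recipe as I understand it (my own illustration, not from the scikit-learn example; `Sigma` and `mu` are arbitrary values I chose):

```python
import numpy as np

# Cholesky-based sampling sketch: draw standard-normal vectors and
# transform them with the lower-triangular factor L of Sigma.
np.random.seed(0)
Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])  # target covariance (assumed)
mu = np.array([1.0, -2.0])                  # target mean (assumed)

L = np.linalg.cholesky(Sigma)               # Sigma == L @ L.T, L lower-triangular
Z = np.random.randn(100000, 2)              # rows ~ N(0, I)
X = Z @ L.T + mu                            # rows ~ N(mu, Sigma)

print(np.cov(X, rowvar=False))              # close to Sigma
```

Each row `z` becomes `L @ z + mu`, so the covariance of the result is `L @ L.T == Sigma`.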
My question is: the C variable in the code is not a lower-triangular matrix, so how does it make sense in multivariate Gaussian random generation?
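To make the question concrete, here is a numerical check I put together (not part of the original example): if each row `z` of `Z` is standard normal, then the rows of `Z @ C` have covariance `C.T @ C`, regardless of whether `C` is triangular.

```python
import numpy as np

# Empirically compare the sample covariance of np.dot(randn, C)
# against C.T @ C for the (non-triangular) C from the example.
np.random.seed(0)
C = np.array([[0., -0.1], [1.7, .4]])
Z = np.random.randn(200000, 2)   # many samples so the estimate is tight
X = Z @ C                        # same as np.dot(Z, C) in the example

sample_cov = np.cov(X, rowvar=False)
print(sample_cov)                # close to C.T @ C
print(C.T @ C)                   # ~ [[2.89, 0.68], [0.68, 0.17]]
```

So the samples are genuinely Gaussian with covariance `C.T @ C`; my confusion is how this squares with the Cholesky description, which seems to require a triangular factor.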

Comments:

- numpy.random.multivariate_normal is for generating samples from the multivariate normal distribution. - Warren Weckesser
- I know about numpy.random.multivariate_normal. What I want here is to understand the theory behind the code. - Anh Tuan