2015-11-20 262 views

I'm running into a problem with sklearn.mixture.DPGMM. The main issue is that it does not return the correct covariances for synthetic data (two well-separated 2D Gaussians), which it really should have no trouble with. In particular, when I call dpgmm._get_covars(), the diagonal elements of the covariance matrices are always exactly 1.0 too large, regardless of the input data distribution. This looks like a bug, since GMM works perfectly (when restricted to the known, exact number of components).

Another problem is that dpgmm.weights_ makes no sense: the weights sum to one, but the values themselves look meaningless.

Does anyone have a fix for this, or see an obvious mistake in my example?

Here is the exact script I'm running:

import itertools 
import numpy as np 
from scipy import linalg 
import matplotlib.pyplot as plt 
import matplotlib as mpl 
import pdb 

from sklearn import mixture 

# Generate 2D random sample, two gaussians each with 10000 points 
rsamp1 =  np.random.multivariate_normal(np.array([5.0,5.0]),np.array([[1.0,-0.2],[-0.2,1.0]]),10000) 
rsamp2 = np.random.multivariate_normal(np.array([0.0,0.0]),np.array([[0.2,-0.0],[-0.0,3.0]]),10000) 
X = np.concatenate((rsamp1,rsamp2),axis=0) 

# Fit a mixture of Gaussians with EM using 2 
gmm = mixture.GMM(n_components=2, covariance_type='full',n_iter=10000) 
gmm.fit(X) 

# Fit a Dirichlet process mixture of Gaussians using 10 components 
dpgmm = mixture.DPGMM(n_components=10, covariance_type='full',min_covar=0.5,tol=0.00001,n_iter = 1000000) 
dpgmm.fit(X) 

print("Groups With data in them") 
print(np.unique(dpgmm.predict(X))) 

# Print the input and output covars as an example; they should be very similar 
correct_c0 = np.array([[1.0,-0.2],[-0.2,1.0]]) 
print("Input covar") 
print(correct_c0) 

covars = dpgmm._get_covars() 
c0 = np.round(covars[0],decimals=1) 
print("Output Covar") 
print(c0) 

print("Output Variances Too Big by 1.0") 
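As a sanity check, here is a minimal sketch (assuming scikit-learn 0.18 or later, where GMM was replaced by GaussianMixture) showing that a plain EM fit on the same synthetic data recovers the input covariances without any offset; the seed and sample sizes are chosen here just for illustration:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Same two synthetic Gaussians as above, with a fixed seed for reproducibility
rng = np.random.RandomState(1)
X = np.concatenate((
    rng.multivariate_normal([5.0, 5.0], [[1.0, -0.2], [-0.2, 1.0]], 5000),
    rng.multivariate_normal([0.0, 0.0], [[0.2, 0.0], [0.0, 3.0]], 5000),
))

gm = GaussianMixture(n_components=2, covariance_type='full',
                     random_state=0).fit(X)

# Match fitted components to the inputs by their means: the component
# with the larger x-mean corresponds to the (5, 5) Gaussian.
order = np.argsort(gm.means_[:, 0])
print(np.round(gm.covariances_[order[1]], 1))  # should be close to [[1.0, -0.2], [-0.2, 1.0]]
```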

Answers


According to the dpgmm docs, this class was deprecated in version 0.18 and will be removed in version 0.20.

You should use the BayesianGaussianMixture class instead, with the parameter weight_concentration_prior_type set to "dirichlet_process".
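A minimal sketch of that replacement (assuming scikit-learn 0.18 or later; the iteration count and seed are illustrative choices, not required values):

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Two well-separated synthetic 2D Gaussians, as in the question
rng = np.random.RandomState(0)
X = np.concatenate((
    rng.multivariate_normal([5.0, 5.0], [[1.0, -0.2], [-0.2, 1.0]], 1000),
    rng.multivariate_normal([0.0, 0.0], [[0.2, 0.0], [0.0, 3.0]], 1000),
))

# Dirichlet-process prior over the 10 candidate components: surplus
# components are driven toward zero weight during fitting.
bgm = BayesianGaussianMixture(
    n_components=10,
    covariance_type='full',
    weight_concentration_prior_type='dirichlet_process',
    max_iter=1000,
    random_state=0,
)
bgm.fit(X)

# Covariance of the heaviest component, without the +1.0 offset
print(np.round(bgm.covariances_[np.argmax(bgm.weights_)], 1))
```

Note that the fitted covariances live in the public attribute covariances_, so no private helper like _get_covars() is needed.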

Hope it helps.


Instead of writing

from sklearn.mixture import GMM 
gmm = GMM(2, covariance_type='full', random_state=0) 

you should write:

from sklearn.mixture import BayesianGaussianMixture 
gmm = BayesianGaussianMixture(n_components=2, covariance_type='full', random_state=0)