2016-11-14 55 views

Answers

0

Use the code below. Instead of concatenating X, use numpy.vstack to stack all the word embeddings vertically into a matrix X, then call fit_transform on it:

import numpy as np 
from sklearn.manifold import TSNE 
X = np.array([[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 1]]) 
model = TSNE(n_components=2, random_state=0) 
np.set_printoptions(suppress=True) 
model.fit_transform(X) 

The output of fit_transform has shape vocab_size x 2, so you can visualize it. Just install scikit-learn the usual way, via pip or conda:

import numpy 
vocab = sorted(word2vec_model.wv.vocab)  # vocabulary dict in gensim's pre-4.0 API 
emb_tuple = tuple([word2vec_model[v] for v in vocab]) 
X = numpy.vstack(emb_tuple) 
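Putting the two snippets together, here is a minimal end-to-end sketch. The dictionary stand-in for a trained word2vec model and the words in it are hypothetical; a real gensim model would be indexed the same way. Note that scikit-learn requires perplexity to be smaller than the number of samples, so it is lowered here for the tiny toy vocabulary.

```python
import numpy as np
from sklearn.manifold import TSNE

# hypothetical stand-in for a trained word2vec model: word -> embedding vector
word2vec_model = {'king':  np.array([0.1, 0.3, 0.5]),
                  'queen': np.array([0.2, 0.3, 0.4]),
                  'apple': np.array([0.9, 0.1, 0.0]),
                  'pear':  np.array([0.8, 0.2, 0.1])}

# stack all word vectors vertically into one matrix, one row per word
vocab = sorted(word2vec_model)
X = np.vstack([word2vec_model[v] for v in vocab])

# perplexity must be < number of samples (only 4 words here)
tsne = TSNE(n_components=2, random_state=0, perplexity=2)
X_2d = tsne.fit_transform(X)
print(X_2d.shape)  # (4, 2): one 2-D point per vocabulary word
```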
13

You don't need a developer version of scikit-learn.

To access the word vectors the word2vec model created, simply index into the model with the words as dictionary keys:

X = model[model.wv.vocab] 

Below is a simple but complete code example that loads some newsgroup data, applies very basic data preparation (cleaning and splitting into sentences), trains a word2vec model, reduces the dimensionality with t-SNE, and visualizes the output.

from gensim.models.word2vec import Word2Vec 
from sklearn.manifold import TSNE 
from sklearn.datasets import fetch_20newsgroups 
import re 
import matplotlib.pyplot as plt 

# download example data (may take a while) 
train = fetch_20newsgroups() 

def clean(text): 
    """Remove posting header, split by sentences and words, keep only letters""" 
    lines = re.split(r'[?!.:]\s', re.sub(r'^.*Lines: \d+', '', re.sub('\n', ' ', text))) 
    return [re.sub('[^a-zA-Z]', ' ', line).lower().split() for line in lines] 

sentences = [line for text in train.data for line in clean(text)] 

model = Word2Vec(sentences, workers=4, size=100, min_count=50, window=10, sample=1e-3)  # 'size' became 'vector_size' in gensim 4.0 

print(model.most_similar('memory')) 

X = model[model.wv.vocab]  # gensim < 4.0; in 4.0+ use model.wv[model.wv.index_to_key] 

tsne = TSNE(n_components=2) 
X_tsne = tsne.fit_transform(X) 

plt.scatter(X_tsne[:, 0], X_tsne[:, 1]) 
plt.show()
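A bare scatter plot doesn't tell you which point is which word. One common refinement is to label each point with its word via plt.annotate. The sketch below uses random vectors and a made-up word list as a stand-in for the trained model, so it runs on its own; with a real model you would take the words and vectors from the vocabulary instead.

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # non-interactive backend, safe on headless machines
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

# hypothetical stand-in for (vocab, embedding matrix) from a trained model
rng = np.random.RandomState(0)
vocab = ['cat', 'dog', 'car', 'bus', 'apple', 'pear']
X = rng.rand(len(vocab), 100)

# perplexity must be < number of samples (only 6 words here)
tsne = TSNE(n_components=2, random_state=0, perplexity=2)
X_tsne = tsne.fit_transform(X)

fig, ax = plt.subplots()
ax.scatter(X_tsne[:, 0], X_tsne[:, 1])
for word, (x, y) in zip(vocab, X_tsne):
    ax.annotate(word, (x, y))  # write the word next to its 2-D point
fig.savefig('tsne_words.png')
```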