I want to use an sklearn classifier with n-gram features. In addition, I want to cross-validate to find the best n-gram order. However, I am a bit stuck on how to put everything together.
Right now, I have the following code:
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
text = ... # This is the input text. A list of strings
labels = ... # These are the labels of each sentence
# Find the optimal order of the ngrams by cross-validation
scores = pd.Series(index=range(1,6), dtype=float)
folds = KFold(n_splits=3)
for n in range(1, 6):
    count_vect = CountVectorizer(ngram_range=(n, n), stop_words='english')
    X = count_vect.fit_transform(text)
    X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.33, random_state=42)
    clf = MultinomialNB()
    score = cross_val_score(clf, X_train, y_train, cv=folds, n_jobs=-1)
    scores.loc[n] = np.mean(score)
# Evaluate the classifier using the best order found
order = scores.idxmax()
count_vect = CountVectorizer(ngram_range=(order,order), stop_words='english')
X = count_vect.fit_transform(text)
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.33, random_state=42)
clf = MultinomialNB()
clf = clf.fit(X_train, y_train)
acc = clf.score(X_test, y_test)
print('Accuracy is {}'.format(acc))
However, I feel this is the wrong way to do it, because I create a train-test split inside every loop iteration. But if I do the train-test split up front and apply the CountVectorizer to the two parts separately, those parts end up with different shapes, which causes problems when calling clf.fit and clf.score.
How can I solve this?
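One common way around this (not from the original post, but standard sklearn practice) is to split the raw text first and wrap the vectorizer and classifier in a Pipeline, so that during cross-validation the vocabulary is learned only from the training folds. The toy `text`/`labels` data below is invented purely for illustration:

```python
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Toy data, purely illustrative.
text = ["good movie", "bad movie", "great film", "terrible film",
        "good film", "bad acting"]
labels = [1, 0, 1, 0, 1, 0]

# Split the *raw text* once, before any vectorization.
X_train, X_test, y_train, y_test = train_test_split(
    text, labels, test_size=0.33, random_state=42, stratify=labels)

pipe = Pipeline([
    ('vect', CountVectorizer()),
    ('clf', MultinomialNB()),
])

# Search over the n-gram order; the vectorizer is refit
# from scratch inside each cross-validation fold.
grid = GridSearchCV(
    pipe,
    {'vect__ngram_range': [(n, n) for n in range(1, 3)]},
    cv=2)
grid.fit(X_train, y_train)
print(grid.best_params_)
print(grid.score(X_test, y_test))
```

Because the Pipeline transforms the held-out text with the vocabulary learned from the training data, the train and test matrices always have the same number of columns, so the shape mismatch never arises.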
EDIT: If I try to build a vocabulary first, I still have to build several vocabularies, since the vocabulary for unigrams differs from the vocabulary for bigrams, and so on.
For example:
import nltk  # needed for nltk.ngrams below

# unigram vocab (assuming each sentence is a list of tokens)
vocab = set()
for sentence in text:
    for word in sentence:
        if word not in vocab:
            vocab.add(word)
len(vocab)  # 47291

# bigram vocab
vocab = set()
for sentence in text:
    bigrams = nltk.ngrams(sentence, 2)
    for bigram in bigrams:
        if bigram not in vocab:
            vocab.add(bigram)
len(vocab)  # 326044
This leads again to the same problem of needing to apply a separate CountVectorizer for every n-gram size.
Build the vocabulary first, from the training set. Nothing stops you from putting both unigrams and bigrams (and more) into the same dictionary. – alexis