Please look at the documentation of cross-validation at scikit-learn to understand it further.
Also, you are using cross_val_predict incorrectly. What it does internally is call the cv you supplied (cv=10) to split the supplied data (i.e. X_train, t_train in your case) again into train and test parts, fit the estimator on the train part, and predict on the part held out for testing.
Now for your X_test, t_test: you should first fit your estimator on the training data (cross_val_predict will not fit it), then use it to predict on the test data, and then calculate the accuracy.
Simple code snippet to describe the above (borrowing from your code) (please read the comments, and ask if anything is unclear):
# item feature matrix in X
X = data[features[:-1]].values  # .as_matrix() is deprecated/removed in newer pandas
# remove first column because it is not necessary in the analysis
X = np.delete(X,0,axis=1)
# divide in training and test set
X_train, X_test, t_train, t_test = train_test_split(X, t, test_size=0.2, random_state=0)
# Until here everything is good
# You keep away 20% of data for testing (test_size=0.2)
# This test data should be unseen by any of the below methods
# define method
logreg=LogisticRegression()
# Ideally what you are doing here is correct, unless you did something wrong in the dataframe operations (which apparently has been solved)
# cross-validation prediction
# This cross-validation prediction will produce the predicted values of 't_train'
predicted = model_selection.cross_val_predict(logreg, X_train, t_train, cv=10)
# internal working of cross_val_predict:
#1. Get the data and estimator (logreg, X_train, t_train)
#2. From here on, we will use X_train as X_cv and t_train as t_cv (because cross_val_predict doesn't know that it's our training data). Ask if in doubt.
#3. Split X_cv, t_cv into X_cv_train, X_cv_test, t_cv_train, t_cv_test by using its internal cv
#4. Use X_cv_train, t_cv_train for fitting 'logreg'
#5. Predict on X_cv_test (No use of t_cv_test)
#6. Repeat steps 3 to 5 for cv=10 iterations, each time using different data for training and different data for testing.
# So here you are correctly comparing 'predicted' and 't_train'
print(metrics.accuracy_score(t_train, predicted))
# The above metrics will show you how our estimator 'logreg' works on 'X_train' data. If the accuracies are very high it may be because of overfitting.
# Now what to do about the X_test and t_test above.
# Actually the correct pair for this metric is X_test and t_test
# If you are satisfied by the accuracies on the training data then you should fit the entire training data to the estimator and then predict on X_test
logreg.fit(X_train, t_train)
t_pred = logreg.predict(X_test)
# Here is the final accuracy
print(metrics.accuracy_score(t_test, t_pred))
# If this accuracy is good, then your model is good.
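The internal working of cross_val_predict described in steps 1 to 6 above can be sketched by hand with KFold. This is only an illustrative sketch: make_classification stands in for your X_train, t_train.

```python
# Hand-rolled sketch of cross_val_predict's internals (steps 1-6 above).
# make_classification is a stand-in for your X_train, t_train.
import numpy as np
from sklearn import metrics
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

X_cv, t_cv = make_classification(n_samples=200, random_state=0)
logreg = LogisticRegression(max_iter=1000)

predicted = np.empty_like(t_cv)
for train_idx, test_idx in KFold(n_splits=10).split(X_cv):
    # 4. fit on the 9 training folds ...
    logreg.fit(X_cv[train_idx], t_cv[train_idx])
    # 5. ... and predict the held-out fold (t_cv[test_idx] is never used)
    predicted[test_idx] = logreg.predict(X_cv[test_idx])

# every sample was predicted exactly once, by a model that never saw it
print(metrics.accuracy_score(t_cv, predicted))
```

Note that 'predicted' here plays the same role as the output of cross_val_predict above: one out-of-fold prediction per training sample.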
If you have little data, or don't want to split the data into training and test sets, then you should use the approach suggested by @fuzzyhedge:
# Use cross_val_score on your all data
scores = model_selection.cross_val_score(logreg, X, t, cv=10)
# 'cross_val_score' works almost the same as steps 1 to 4 above
#5. t_cv_pred = logreg.predict(X_cv_test) and calculate accuracy with t_cv_test.
#6. Repeat steps 1 to 5 for cv_iterations = 10
#7. Return array of accuracies calculated in step 5.
# Find out average of returned accuracies to see the model performance
scores = scores.mean()
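Steps 1 to 7 can be checked by hand: a manual KFold loop using logreg.score should reproduce cross_val_score's per-fold accuracies. The synthetic data below is again just a stand-in for your X, t.

```python
# Verify steps 1-7: a manual KFold loop reproduces cross_val_score.
# make_classification is a stand-in for your X, t.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

X, t = make_classification(n_samples=200, random_state=0)
logreg = LogisticRegression(max_iter=1000)
cv = KFold(n_splits=10)

scores = cross_val_score(logreg, X, t, cv=cv)  # one accuracy per fold

manual = []
for train_idx, test_idx in cv.split(X):
    logreg.fit(X[train_idx], t[train_idx])            # steps 1-4
    manual.append(logreg.score(X[test_idx], t[test_idx]))  # step 5

print(scores.mean())  # average of the 10 fold accuracies (step 7)
```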
A note of advice: cross-validation is also best used together with grid search, to find the estimator parameters that perform best for the given data. For example, LogisticRegression defines many parameters. But if you use
logreg = LogisticRegression()
the model will be initialized with the default parameters only. Maybe a different set of parameter values, e.g.
logreg = LogisticRegression(penalty='l1', solver='liblinear')
may perform better on your data. This search for better parameters is grid search.
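A minimal GridSearchCV sketch of that idea, on synthetic data; the parameter grid values are illustrative examples, not recommendations for your data.

```python
# Minimal grid search sketch; grid values are illustrative only.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, t = make_classification(n_samples=200, random_state=0)

param_grid = {
    'C': [0.01, 0.1, 1, 10],
    'penalty': ['l1', 'l2'],
}
# liblinear supports both the l1 and l2 penalties
grid = GridSearchCV(LogisticRegression(solver='liblinear'), param_grid, cv=10)
grid.fit(X, t)  # runs cross-validation for every parameter combination

print(grid.best_params_, grid.best_score_)
```

After fitting, 'grid.best_estimator_' is the model refit on all the data with the best parameter combination.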
Now, as for the second part of your question about scaling, dimension reduction etc.: use a pipeline. You can refer to the documentation of pipeline and its examples.
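A minimal pipeline sketch, assuming StandardScaler for the scaling and PCA for the dimension reduction; the step names and n_components value are illustrative choices, not requirements.

```python
# Minimal pipeline sketch: scaling -> dimension reduction -> classifier.
# Step names and n_components=5 are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, t = make_classification(n_samples=200, random_state=0)

pipe = Pipeline([
    ('scale', StandardScaler()),
    ('pca', PCA(n_components=5)),
    ('clf', LogisticRegression(max_iter=1000)),
])
# the whole pipeline is refit inside each fold, so the scaler and PCA
# never see the held-out fold's data
scores = cross_val_score(pipe, X, t, cv=10)
print(scores.mean())
```

The point of the pipeline is exactly this: the preprocessing steps are learned only from each fold's training part, which avoids leaking test data into the scaling or PCA.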
Feel free to contact me if you need any help.
Thank you so much, man! I fixed the code and now it works. The target being in the features wasn't really a problem, because the -1 in my code took it away, since it is the last column. So the real problem was in fact that the target wasn't an np.array, as you pointed out (though I admit I really don't understand what mysterious relation that has with the size error the machine returned). Do you have any insight into how to complete the process, i.e. how to do the final test? I'm a bit confused about what I should do now. – Harnak
I edited my answer to include a complete process using 'model_selection.cross_val_score'. As for the size errors, working between pd.dataframes and np.ndarrays can be painful. You can print 'x.shape' for each ambiguous object to troubleshoot. The best way to learn these things is to dig through the sklearn docs and tutorials. – 2017-02-17 20:45:04
I'm not sure I understand correctly. So, using cross_val_score makes the earlier split unnecessary? I mean: shouldn't cross-validation be done only on the training set, rather than on the whole set? Or maybe I'm missing the point of cross-validation. – Harnak