Logistic regression and cross-validation in Python (with sklearn)

I want to solve a classification problem on a given dataset through logistic regression (and this is not the problem). To avoid overfitting, I am trying to implement it through cross-validation (and here is the problem): there is something I am missing to complete the procedure. My purpose is to determine the accuracy.

But let me be more specific. This is what I have done:

  1. I split my dataset into a training set and a test set
  2. I define the logistic regression prediction model to be used
  3. I use the cross_val_predict method (in sklearn.cross_validation) to make predictions
  4. Lastly, I measure the accuracy

Here is the code:

import pandas as pd 
import numpy as np 
import seaborn as sns 
from sklearn.cross_validation import train_test_split 
from sklearn import metrics, cross_validation 
from sklearn.linear_model import LogisticRegression 

# read training data in pandas dataframe 
data = pd.read_csv("./dataset.csv", delimiter=';') 
# last column is target, store in array t 
t = data['TARGET'] 
# list of features, including target 
features = data.columns 
# item feature matrix in X 
X = data[features[:-1]].as_matrix() 
# remove first column because it is not necessary in the analysis 
X = np.delete(X,0,axis=1) 
# divide in training and test set 
X_train, X_test, t_train, t_test = train_test_split(X, t, test_size=0.2, random_state=0) 

# define method 
logreg=LogisticRegression() 

# cross validation prediction 
predicted = cross_validation.cross_val_predict(logreg, X_train, t_train, cv=10) 
print(metrics.accuracy_score(t_train, predicted)) 

My questions

  • From my understanding, the test set should not be touched until the very end, and the cross-validation should be performed on the training set. That is why I passed X_train and t_train to the cross_val_predict method. Though, I get an error saying:

    ValueError: Found input variables with inconsistent numbers of samples: [6016, 4812]

    where 6016 is the number of samples in the whole dataset, while 4812 is the number of samples in the training set after the dataset has been split.

  • After that, I do not know what to do. I mean: when do X_test and t_test come into play? I do not understand how I should use them after the cross-validation, and how to obtain the final accuracy.

Bonus question: I would also like to perform scaling and dimensionality reduction (via feature selection or PCA) within each step of the cross-validation. How can I do that? I have seen that defining a pipeline can help with scaling, but I do not know how to apply it to the second problem.

I would really appreciate any help :-)

Answers


This is working code, tested on a sample dataframe. The first problem in your code is that the target array is not an np.array. You also should not have the target data in your features. Below I illustrate how to split the training and testing data manually with train_test_split, and I also show how to use the wrapper cross_val_score to automatically split, fit, and score.

import string

import numpy as np
import pandas as pd
from sklearn import linear_model, model_selection

np.random.seed(42)  # seed numpy's RNG so the random integers below are reproducible
# Create example df with alphabetic col names. 
alphabet_cols = list(string.ascii_uppercase)[:26] 
df = pd.DataFrame(np.random.randint(1000, size=(1000, 26)), 
        columns=alphabet_cols) 
df['Target'] = df['A'] 
df.drop(['A'], axis=1, inplace=True) 
print(df.head()) 
y = df.Target.values  # df['Target'] is a pandas Series, not an np.array. 
feature_cols = [i for i in list(df.columns) if i != 'Target'] 
X = df[feature_cols].values  # feature matrix as an np.array 
# Illustrated here for manual splitting of training and testing data. 
X_train, X_test, y_train, y_test = \ 
    model_selection.train_test_split(X, y, test_size=0.2, random_state=0) 

# Initialize model. 
logreg = linear_model.LinearRegression() 

# Use cross_val_score to automatically split, fit, and score. 
scores = model_selection.cross_val_score(logreg, X, y, cv=10) 
print(scores) 
print('average score: {}'.format(scores.mean())) 

Output

 B C D E F G H I J K ... Target 
0 20 33 451 0 420 657 954 156 200 935 ... 253 
1 427 533 801 183 894 822 303 623 455 668 ... 421 
2 148 681 339 450 376 482 834 90 82 684 ... 903 
3 289 612 472 105 515 845 752 389 532 306 ... 639 
4 556 103 132 823 149 974 161 632 153 782 ... 347 

[5 rows x 26 columns] 
[-0.0367 -0.0874 -0.0094 -0.0469 -0.0279 -0.0694 -0.1002 -0.0399 0.0328 
-0.0409] 
average score: -0.04258093018969249 
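Note that the snippet above fits a LinearRegression, so the values printed by cross_val_score are R² scores rather than classification accuracies. For the classification problem in the question, a LogisticRegression can be dropped into the same wrapper; a minimal sketch, assuming the X_train and t_train built in the question's code (the liblinear solver is just an illustrative choice):

from sklearn import model_selection
from sklearn.linear_model import LogisticRegression

# Same cross_val_score wrapper, but with a classifier:
# the returned scores are now classification accuracies.
clf = LogisticRegression(solver='liblinear')
acc_scores = model_selection.cross_val_score(clf, X_train, t_train, cv=10)
print(acc_scores)
print('average accuracy: {}'.format(acc_scores.mean()))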



Thanks a lot, man! I fixed the code and now it works. The target in the features was not really a problem, because the -1 in my code takes it out, since it is the last column. So the real problem was indeed that the target was not an np.array, as you pointed out (though I must say I do not really understand what mysterious relation that has with the size error the machine returned). Do you have any idea about how to complete the procedure, i.e. how to perform the final test? I am a bit confused about what I should do now. – Harnak


I edited my answer to include a complete procedure using 'model_selection.cross_val_score'. As for the size error, working between pd.dataframes and np.ndarrays can be a pain. You can print the 'x.shape' of each ambiguous object to troubleshoot. The best way to learn this stuff is to dig into the sklearn documentation and tutorials. – 2017-02-17 20:45:04
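For instance, a minimal version of that shape check, assuming the X, t, X_train and t_train names from the question's code:

# Each pair should agree on the number of samples (first dimension).
print(X.shape, t.shape)              # whole dataset, e.g. (6016, n_features) and (6016,)
print(X_train.shape, t_train.shape)  # training split, e.g. (4812, n_features) and (4812,)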


I am not sure I understand correctly. So, does using cross_val_score make the earlier split unnecessary? I mean: shouldn't the cross-validation be performed only on the training set, rather than on the whole set? Or maybe I am missing the point of cross-validation. – Harnak


Please have a look at the documentation of cross-validation at scikit to understand it more.

Also, you are using cross_val_predict incorrectly. What it will do internally is call the cv you supplied (cv=10) to split the supplied data (i.e. X_train, t_train in your case) again into training and test parts, fit the estimator on the training part, and predict on the data that is left in the test part.

Now for the accuracy on your X_test and t_test, you should first fit your estimator on the training data (cross_val_predict will not fit it), then use it to predict on the test data, and then calculate the accuracy.

Here is a simple code snippet to describe the above, borrowing from your code (please read the comments and ask if you do not understand anything):

# item feature matrix in X 
X = data[features[:-1]].as_matrix() 
# remove first column because it is not necessary in the analysis 
X = np.delete(X,0,axis=1) 
# divide in training and test set 
X_train, X_test, t_train, t_test = train_test_split(X, t, test_size=0.2, random_state=0) 

# Until here everything is good 
# You keep away 20% of data for testing (test_size=0.2) 
# This test data should be unseen by any of the below methods 

# define method 
logreg=LogisticRegression() 

# Ideally what you are doing here should be correct, unless you did something wrong in the dataframe operations (which apparently has been solved) 
# cross validation prediction 
# This cross validation prediction will print the predicted values of 't_train' 
predicted = cross_validation.cross_val_predict(logreg, X_train, t_train, cv=10) 
# internal working of cross_val_predict: 
    #1. Get the data and estimator (logreg, X_train, t_train) 
    #2. From here on, we will use X_train as X_cv and t_train as t_cv (because cross_val_predict does not know that this is our training data) 
    #3. Split X_cv, t_cv into X_cv_train, X_cv_test, t_cv_train, t_cv_test by using its internal cv 
    #4. Use X_cv_train, t_cv_train for fitting 'logreg' 
    #5. Predict on X_cv_test (No use of t_cv_test) 
    #6. Repeat steps 3 to 5 repeatedly for cv=10 iterations, each time using different data for training and different data for testing. 

# So here you are correctly comparing 'predicted' and 't_train' 
print(metrics.accuracy_score(t_train, predicted)) 

# The above metrics will show you how our estimator 'logreg' works on 'X_train' data. If the accuracies are very high it may be because of overfitting. 

# Now what to do about the X_test and t_test above. 
# Actually the final metrics should be computed on this held-out X_test and t_test 
# If you are satisfied by the accuracies on the training data then you should fit the entire training data to the estimator and then predict on X_test 

logreg.fit(X_train, t_train) 
t_pred = logreg.predict(X_test) 

# Here is the final accuracy 
print(metrics.accuracy_score(t_test, t_pred)) 
# If this accuracy is good, then your model is good. 

If you have less data, or if you do not want to split the data into training and testing sets, then you should use the approach suggested by @fuzzyhedge:

# Use cross_val_score on your all data 
scores = model_selection.cross_val_score(logreg, X, y, cv=10) 

# 'cross_val_score' will almost work same from steps 1 to 4 
    #5. t_cv_pred = logreg.predict(X_cv_test) and calculate accuracy with t_cv_test. 
    #6. Repeat steps 1 to 5 for cv_iterations = 10 
    #7. Return array of accuracies calculated in step 5. 

# Find out average of returned accuracies to see the model performance 
scores = scores.mean() 

A note of advice: cross-validation is also best used together with grid search to find the parameters of the estimator that work best for the given data. For example, LogisticRegression defines many parameters. But if you use

logreg = LogisticRegression() 

it will only initialize the model with the default parameters. A different choice of parameter values, such as

logreg = LogisticRegression(penalty='l1', solver='liblinear') 

may perform better on your data. This search for better parameters is grid search.
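A minimal sketch of such a grid search, assuming the X_train, t_train, X_test and t_test from the code above (the grid values are only examples):

from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Candidate parameter values to try; each combination is evaluated
# with its own cross-validation on the training data.
param_grid = {
    'C': [0.01, 0.1, 1, 10],
    'penalty': ['l1', 'l2'],
}

grid = GridSearchCV(LogisticRegression(solver='liblinear'),
                    param_grid, cv=10, scoring='accuracy')
grid.fit(X_train, t_train)

print(grid.best_params_)   # best parameter combination found
print(grid.best_score_)    # mean cross-validated accuracy of that combination

# The refitted best estimator can then be evaluated once on the held-out test set.
print(grid.score(X_test, t_test))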

Now, as for the second part of your question about scaling, dimensionality reduction, etc.: use a pipeline. You can refer to the documentation of pipeline and the example below:
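For example, a minimal sketch of such a pipeline, assuming the X_train and t_train from the code above (the choice of StandardScaler, PCA and the number of components is only illustrative):

from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# The scaler and the PCA are re-fitted on the training part of every fold,
# so no information from the held-out part of a fold leaks into them.
pipe = Pipeline([
    ('scale', StandardScaler()),
    ('pca', PCA(n_components=5)),
    ('logreg', LogisticRegression(solver='liblinear')),
])

scores = cross_val_score(pipe, X_train, t_train, cv=10)
print(scores.mean())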

Feel free to reach out if you need any help.


Good point about the grid search. – 2017-02-18 04:54:27


Thanks. A very complete and useful answer! Yes, I was trying to figure some of this out from the sklearn documentation, but I was still confused about how to combine the earlier split with cross-validation. Now it is much clearer. – Harnak