Logistic regression sklearn - train and apply model

I am new to machine learning and trying scikit-learn for the first time. I have two dataframes: one with the data for training a logistic regression model (with 10-fold cross-validation), and another with data whose classes ('0', '1') I want to predict using that model. Here is my code so far, pieced together from the sklearn documentation and bits of tutorials I found on the web:
import pandas as pd
import numpy as np
import sklearn
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold
from sklearn.preprocessing import normalize
from sklearn.preprocessing import scale
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import cross_val_predict
from sklearn import metrics
# Import dataframe with training data
df = pd.read_csv('summary_44.csv')
cols = df.columns.drop('num_class') # Data to use (num_class is the column with the classes)
# Import dataframe with data to predict
df_pred = pd.read_csv('new_predictions.csv')
# Scores
df_data = df.iloc[:, :-1].values
# Target
df_target = df.iloc[:, -1].values
# Values to predict
df_test = df_pred.iloc[:, :-1].values
# Scores' names
df_data_names = cols.values
# Scaling
X, X_pred, y = scale(df_data), scale(df_test), df_target
# Define number of folds
kf = KFold(n_splits=10)
kf.get_n_splits(X) # returns the number of splitting iterations in the cross-validator
# Logistic regression normalizing variables
LogReg = LogisticRegression()
# 10-fold cross-validation
scores = [LogReg.fit(X[train], y[train]).score(X[test], y[test]) for train, test in kf.split(X)]
print(scores)
# Predict new
novel = LogReg.predict(X_pred)
Is this the correct way to implement logistic regression? I know that after cross-validation I should use the fit() method to train the model and then use it for predictions. However, since I call fit() inside a list comprehension, I don't really know whether my model is actually "fitted" and usable for making predictions.
Post some data. Print out df and df_data – skrubber
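For reference, a minimal sketch of the usual idiom: score with cross-validation first, then fit a single model on all the training data before predicting. Here `make_classification` stands in for the CSV files, and `X_new` is a placeholder for the new observations; putting the scaler inside a pipeline also means each fold is scaled using statistics from its own training split only, rather than calling `scale()` on the full arrays up front.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-ins for the training data and the data to predict
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
X_new, _ = make_classification(n_samples=10, n_features=5, random_state=1)

# Scaler + classifier in one estimator, so CV handles scaling per fold
model = make_pipeline(StandardScaler(), LogisticRegression())

# cross_val_score clones the estimator for every fold, so `model`
# itself is left unfitted; the scores only estimate generalization.
scores = cross_val_score(model, X, y, cv=10)
print(scores.mean())

# Fit once on all the training data, then predict the new samples
model.fit(X, y)
predictions = model.predict(X_new)
print(predictions)
```

The same caveat applies to the list comprehension in the question: each `LogReg.fit(...)` there overwrites the previous fold's fit, so after the loop the model happens to be fitted on the last fold only, not on the full training set.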