I have tuned a RandomForest with GridSearchCV inside a nested cross-validation. After that, I understand that with the best parameters I must retrain on the whole dataset and then predict on out-of-sample data. How should I handle the best_score_ obtained from the grid search in the nested cross-validation?
Do I need to fit the model twice? Once via nested cross-validation to get the accuracy estimate, and then again before scoring the out-of-sample data?
Please review my code:
import numpy as np
import scipy.io as sio
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, StratifiedShuffleSplit, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import RobustScaler

#Load data
for name in ["AWA"]:
    for el in ['Fp1']:
        X = sio.loadmat('/home/TrainVal/{}_{}.mat'.format(name, el))['x']
        s_y = sio.loadmat('/home/TrainVal/{}_{}.mat'.format(name, el))['y']
        y = np.ravel(s_y)
        print(name, el, X.shape, y.shape)
        print("")
#Pipeline
clf = Pipeline([('rcl', RobustScaler()),
                ('clf', RandomForestClassifier())])
#Optimization
#Outer loop
sss_outer = StratifiedShuffleSplit(n_splits=2, test_size=0.1, random_state=1)
#Inner loop
sss_inner = StratifiedShuffleSplit(n_splits=2, test_size=0.1, random_state=1)
# Use a full grid over all parameters
param_grid = {'clf__n_estimators': [10, 12, 15],
              'clf__max_features': [3, 5, 10],
              }
# Run grid search
grid_search = GridSearchCV(clf, param_grid=param_grid, cv=sss_inner, n_jobs=-1)
#FIRST FIT!!!!!
grid_search.fit(X, y)
scores=cross_val_score(grid_search, X, y, cv=sss_outer)
#Show best parameter in inner loop
print(grid_search.best_params_)
#Show Accuracy average of all the outer loops
print(scores.mean())
#SECOND FIT!!!
y_score = grid_search.fit(X, y).score(X_out_of_sample, y_out_of_sample)  # placeholders for held-out data
print(y_score)
Just calling 'grid_search.score()' or 'grid_search.predict()' has the same effect, because it automatically delegates to 'best_estimator_' internally.
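To make the two roles of the grid search explicit, here is a minimal sketch of the workflow discussed above, using synthetic data from make_classification as a stand-in for the real .mat features (the data shapes and grid values are illustrative assumptions, not the original experiment):

```python
# Two-step workflow: (1) nested CV for an unbiased accuracy estimate,
# (2) one final fit on all data to obtain the deployable model.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, StratifiedShuffleSplit, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import RobustScaler

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

clf = Pipeline([('rcl', RobustScaler()),
                ('clf', RandomForestClassifier(random_state=0))])
param_grid = {'clf__n_estimators': [10, 15], 'clf__max_features': [3, 5]}

inner = StratifiedShuffleSplit(n_splits=2, test_size=0.1, random_state=1)
outer = StratifiedShuffleSplit(n_splits=2, test_size=0.1, random_state=1)

grid = GridSearchCV(clf, param_grid=param_grid, cv=inner)

# Step 1: nested CV. cross_val_score clones and refits `grid` inside each
# outer split, so the mean score estimates generalization performance only;
# it does not leave a fitted model behind.
nested_scores = cross_val_score(grid, X, y, cv=outer)
print("Nested CV accuracy: %.3f" % nested_scores.mean())

# Step 2: a single fit on ALL the data. This refits the grid search once;
# grid.predict(...) / grid.score(...) then delegate to best_estimator_.
grid.fit(X, y)
print(grid.best_params_)
```

Note that the fit in step 2 is the only fit you keep: the fits performed inside cross_val_score exist only to produce the score estimate, which is why the two steps are independent and the first fit in the question's code is not needed before calling cross_val_score.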