
I'm trying to figure out how to build a workflow for sklearn.neighbors.KNeighborsRegressor that puts together an sklearn pipeline + nested cross-validation, and that includes:

  • Normalizing the features
  • Feature selection (the best subset of the 20 numeric features, with no specific number required)
  • Cross-validating the hyperparameter K over the range 1 to 20
  • Cross-validating the model
  • Using RMSE as the error metric

There are so many different options in scikit-learn that I'm a bit overwhelmed trying to decide which classes I need.

Besides sklearn.neighbors.KNeighborsRegressor, I think I need:

sklearn.pipeline.Pipeline 
sklearn.preprocessing.Normalizer 
sklearn.model_selection.GridSearchCV 
sklearn.model_selection.cross_val_score 

sklearn.feature_selection.SelectKBest 
OR 
sklearn.feature_selection.SelectFromModel 

Would someone please show me what a definition of this pipeline/workflow could look like? I think it should be something like this:

import numpy as np 
from sklearn.pipeline import Pipeline 
from sklearn.preprocessing import Normalizer 
from sklearn.feature_selection import SelectKBest, f_classif 
from sklearn.neighbors import KNeighborsRegressor 
from sklearn.model_selection import cross_val_score, GridSearchCV 

# build regression pipeline 
pipeline = Pipeline([('normalize', Normalizer()), 
        ('kbest', SelectKBest(f_classif)), 
        ('regressor', KNeighborsRegressor())]) 

# try regressor__n_neighbors from 1 to 20, and feature count from 1 to the total number of features 
parameters = {'kbest__k': list(range(1, X.shape[1]+1)), 
       'regressor__n_neighbors': list(range(1,21))} 

# outer cross-validation on model, inner cross-validation on hyperparameters 
scores = cross_val_score(GridSearchCV(pipeline, parameters, scoring="neg_mean_squared_error", cv=10), 
         X, y, cv=10, scoring="neg_mean_squared_error", verbose=2) 

# scores are negative MSE, so take the absolute value before the square root 
rmses = np.sqrt(np.abs(scores)) 
avg_rmse = np.mean(rmses) 
print(avg_rmse) 

It doesn't seem to throw any errors, but a few things I'm concerned about are:

  • Have I implemented the nested cross-validation correctly, so that my RMSE is unbiased?
  • If I want the final model to be selected by best RMSE, should I use scoring="neg_mean_squared_error" for both cross_val_score and GridSearchCV?
  • Is SelectKBest with f_classif the best option for selecting features for a KNeighborsRegressor model?
  • How can I see:
    • which subset of features was selected as best
    • which K was selected as best

Any help is greatly appreciated!


Your code looks fine to me. Also, this approach seems correct. Are you getting any errors or unexpected results? – sera


Hey, thanks for the comment. I've updated my post with more information about my concerns. – Austin

Answer


Your code looks fine.

As for using scoring="neg_mean_squared_error" for both cross_val_score and GridSearchCV, I would do the same to make sure things run properly; the only way to test this is to remove one of the two and see if the results change.
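As a rough sketch, that check could look like this (reusing pipeline, parameters, X, and y from the question's code; the "r2" alternative is just an assumed comparison point, not something from the original post):

# swap the inner scoring and see whether the outer RMSE changes 
for inner_scoring in ["neg_mean_squared_error", "r2"]: 
    inner_cv = GridSearchCV(pipeline, parameters, cv=10, scoring=inner_scoring) 
    outer_scores = cross_val_score(inner_cv, X, y, cv=10, 
                                   scoring="neg_mean_squared_error") 
    print(inner_scoring, np.mean(np.sqrt(np.abs(outer_scores)))) 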

SelectKBest is a good approach, but you could also use SelectFromModel or even other methods that you can find here.
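For example, a minimal sketch of the SelectFromModel variant (the Lasso selector and its alpha value are assumptions for illustration, not part of the original post):

from sklearn.feature_selection import SelectFromModel 
from sklearn.linear_model import Lasso 

# let an L1-penalized linear model pick the features instead of a univariate test 
pipeline_sfm = Pipeline([('normalize', Normalizer()), 
                         ('select', SelectFromModel(Lasso(alpha=0.01))), 
                         ('regressor', KNeighborsRegressor())]) 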

Finally, to get the best parameters and the feature scores, I modified your code a bit, as follows:

# imports needed by the code below; X and y are assumed to be defined as in the question 
from sklearn.pipeline import Pipeline 
from sklearn.preprocessing import Normalizer 
from sklearn.feature_selection import SelectKBest, f_classif 
from sklearn.neighbors import KNeighborsRegressor 
from sklearn.model_selection import GridSearchCV 


pipeline = Pipeline([('normalize', Normalizer()), 
        ('kbest', SelectKBest(f_classif)), 
        ('regressor', KNeighborsRegressor())]) 

# try regressor__n_neighbors from 1 to 20, and feature count from 1 to the total number of features 
parameters = {'kbest__k': list(range(1, X.shape[1]+1)), 
       'regressor__n_neighbors': list(range(1,21))} 

# changes here 

grid = GridSearchCV(pipeline, parameters, cv=10, scoring="neg_mean_squared_error") 

grid.fit(X, y) 

# get the best parameters and the best estimator 
print("the best estimator is \n {} ".format(grid.best_estimator_)) 
print("the best parameters are \n {}".format(grid.best_params_)) 

# get the feature scores, rounded to 2 decimals 
pip_steps = grid.best_estimator_.named_steps['kbest'] 

features_scores = ['%.2f' % elem for elem in pip_steps.scores_] 
print("the features scores are \n {}".format(features_scores)) 

feature_scores_pvalues = ['%.3f' % elem for elem in pip_steps.pvalues_] 
print("the feature_pvalues is \n {} ".format(feature_scores_pvalues)) 

# create a list of (feature name, score, p-value) tuples, named "features_selected_tuple" 

featurelist = ['age', 'weight'] 

features_selected_tuple = [(featurelist[i], features_scores[i], feature_scores_pvalues[i]) 
                           for i in pip_steps.get_support(indices=True)] 

# Sort the tuples by score, in reverse order 

features_selected_tuple = sorted(features_selected_tuple, 
                                 key=lambda feature: float(feature[1]), reverse=True) 

# Print 
print('Selected Features, Scores, P-Values') 
print(features_selected_tuple) 

Results using my data:

the best estimator is 
Pipeline(steps=[('normalize', Normalizer(copy=True, norm='l2')), ('kbest', SelectKBest(k=2, score_func=<function f_classif at 0x0000000004ABC898>)), ('regressor', KNeighborsRegressor(algorithm='auto', leaf_size=30, metric='minkowski', 
     metric_params=None, n_jobs=1, n_neighbors=18, p=2, 
     weights='uniform'))]) 

the best parameters are 
{'kbest__k': 2, 'regressor__n_neighbors': 18} 

the features scores are 
['8.98', '8.80'] 

the feature_pvalues is 
['0.000', '0.000'] 

Selected Features, Scores, P-Values 
[('correlation', '8.98', '0.000'), ('gene', '8.80', '0.000')] 

Thank you! I see it shows the number of features used for the 'kbest__k' parameter, but is there a way to see which columns specifically were used? Does SelectKBest just try the first column, then the first and second columns, and so on, or does it try every permutation of # of features in the selection range? – Austin


@Jake I edited my post. I added code for the feature p-values and scores. I think it's based on permutations, as you mentioned in your comment – sera


@Jake I updated my answer a second time. Now you can get the selected features – sera
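Note: SelectKBest scores each feature independently with the score function and keeps the k highest-scoring features; it is GridSearchCV that tries the different values of k, rather than a permutation search. A minimal sketch of reading the selected column names back out of the fitted grid (featurelist stands in for the real column names, as in the answer's code):

# map the indices of the selected features back to column names 
kbest = grid.best_estimator_.named_steps['kbest'] 
selected_columns = [featurelist[i] for i in kbest.get_support(indices=True)] 
print("selected columns:", selected_columns) 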