I am currently running a very aggressive grid search. I have n=135 samples and I am running 23 folds using a custom list of cross-validation train/test splits, with verbose=2. How can I estimate the progress of GridSearchCV from the verbose output in scikit-learn?

Here is what I'm running:
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

param_test = {"loss": ["deviance"],
              "learning_rate": [0.01, 0.025, 0.05, 0.075, 0.1, 0.15, 0.2],
              "min_samples_split": np.linspace(0.1, 0.5, 12),
              "min_samples_leaf": np.linspace(0.1, 0.5, 12),
              "max_depth": [3, 5, 8],
              "max_features": ["log2", "sqrt"],
              "min_impurity_split": [5e-6, 1e-7, 5e-7],
              "criterion": ["friedman_mse", "mae"],
              "subsample": [0.5, 0.618, 0.8, 0.85, 0.9, 0.95, 1.0],
              "n_estimators": [10]}

Mod_gsearch = GridSearchCV(estimator=GradientBoostingClassifier(),
                           param_grid=param_test, scoring="accuracy",
                           n_jobs=32, iid=False, cv=cv_indices, verbose=2)
I captured stdout to look at the verbose output:
$head gridsearch.o8475533
Fitting 23 folds for each of 254016 candidates, totalling 5842368 fits
Based on this, it looks like there are 5842368 cross-validation fits in total across all the permutations of my grid parameters.
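The 254016 candidates reported in the log can be reproduced directly from the grid itself; a minimal sketch (reusing the `param_test` dictionary from above):

```python
import numpy as np
from sklearn.model_selection import ParameterGrid

param_test = {"loss": ["deviance"],
              "learning_rate": [0.01, 0.025, 0.05, 0.075, 0.1, 0.15, 0.2],
              "min_samples_split": np.linspace(0.1, 0.5, 12),
              "min_samples_leaf": np.linspace(0.1, 0.5, 12),
              "max_depth": [3, 5, 8],
              "max_features": ["log2", "sqrt"],
              "min_impurity_split": [5e-6, 1e-7, 5e-7],
              "criterion": ["friedman_mse", "mae"],
              "subsample": [0.5, 0.618, 0.8, 0.85, 0.9, 0.95, 1.0],
              "n_estimators": [10]}

# Number of candidates = product of the number of values for each parameter
n_candidates = len(ParameterGrid(param_test))
n_folds = 23
print(n_candidates, n_candidates * n_folds)  # 254016 5842368
```

This matches the "Fitting 23 folds for each of 254016 candidates, totalling 5842368 fits" line exactly: total fits = candidates × folds.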
$ grep -c "[CV]" gridsearch.o8475533
7047332
It looks like roughly 7 million [CV] lines have been printed so far, yet that is more than the 5842368 total fits...

7047332/5842368 = 1.2062458236
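One plausible explanation (my own reading of scikit-learn's `_fit_and_score`, not something stated in the logs): with verbose=2, each fit prints two "[CV]" lines, one when the fit starts and one when it finishes, so the grep count is approximately (fits started) + (fits completed) rather than the number of completed fits. A quick arithmetic check against the numbers above:

```python
# Assumption: each fit emits a start line and a completion line with verbose=2,
# so: [CV] line count ~= fits_started + fits_completed
cv_lines = 7047332    # from: grep -c "[CV]" gridsearch.o8475533
tasks_done = 3523550  # last "Done ... tasks" line in stderr

fits_started = cv_lines - tasks_done
print(fits_started)   # close to tasks_done, i.e. a few hundred fits in flight
```

Under that assumption the numbers are consistent: the grep count is a little over 2x the completed-task count, with the difference being fits that have started but not yet finished.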
Then, when I look at the stderr file:

$ cat ./gridsearch.e8475533
[Parallel(n_jobs=32)]: Done 132 tasks | elapsed: 1.2s
[Parallel(n_jobs=32)]: Done 538 tasks | elapsed: 2.8s
[Parallel(n_jobs=32)]: Done 1104 tasks | elapsed: 4.8s
[Parallel(n_jobs=32)]: Done 1834 tasks | elapsed: 7.9s
[Parallel(n_jobs=32)]: Done 2724 tasks | elapsed: 11.6s
...
[Parallel(n_jobs=32)]: Done 3396203 tasks | elapsed: 250.2min
[Parallel(n_jobs=32)]: Done 3420769 tasks | elapsed: 276.5min
[Parallel(n_jobs=32)]: Done 3447309 tasks | elapsed: 279.3min
[Parallel(n_jobs=32)]: Done 3484240 tasks | elapsed: 282.3min
[Parallel(n_jobs=32)]: Done 3523550 tasks | elapsed: 285.3min
My goal:

How can I gauge the progress of my grid search and the total time it is likely to take?
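One rough way to get an ETA, assuming throughput stays roughly constant (the numbers are taken from the "totalling ... fits" line in stdout and the last joblib line in stderr above):

```python
tasks_done = 3523550  # from the last "[Parallel] Done ... tasks" line
total_fits = 5842368  # from "totalling 5842368 fits"
elapsed_min = 285.3   # elapsed time on that same stderr line

frac_done = tasks_done / total_fits
est_total_min = elapsed_min / frac_done        # naive linear extrapolation
remaining_min = est_total_min - elapsed_min
print("%.1f%% done, ~%.0f min remaining" % (100 * frac_done, remaining_min))
```

This is only a linear extrapolation; in practice the per-fit time varies with the parameters (e.g. deeper trees fit more slowly), so the estimate drifts as the search moves through the grid.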
What I'm confused about:

How do the [CV] lines in stdout relate to the total number of fits reported in stdout and to the task counts in stderr?