Stratified Train/Test-split in scikit-learn

I need to split my data into a training set (75%) and test set (25%). I currently do that with the code below:
X, Xt, userInfo, userInfo_train = sklearn.cross_validation.train_test_split(X, userInfo)
However, I'd like to stratify my training dataset. How do I do that? I've been looking into the StratifiedKFold method, but it doesn't let me specify the 75%/25% split, or stratify only the training dataset.
[update for 0.17]

See the documentation of sklearn.model_selection.train_test_split:
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y,
stratify=y,
test_size=0.25)
[/update for 0.17]

There is a pull request here, but if you want you can simply do train, test = next(iter(StratifiedKFold(...))) and use the train and test indices.
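As an illustration of that one-liner with the current (0.18+) model_selection API, where the labels are passed to split() instead of the constructor, here is a minimal sketch (the data shapes are made up for the example):

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

X = np.arange(16).reshape(8, 2)
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

# With n_splits=4, each fold holds 25% of the data, so taking only the
# first fold gives a single stratified 75%/25% train/test split.
skf = StratifiedKFold(n_splits=4, shuffle=True, random_state=0)
train_index, test_index = next(skf.split(X, y))

X_train, X_test = X[train_index], X[test_index]
y_train, y_test = y[train_index], y[test_index]
```

Because the split is stratified, the 25% test fold keeps the same class proportions as the full dataset.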
TL;DR: Use StratifiedShuffleSplit with test_size=0.25

Scikit-learn provides two modules for stratified splitting:

1. StratifiedKFold: This module sets up n_folds training/testing sets such that the classes are equally balanced in both. Here's some code (directly from the documentation above):
>>> from sklearn import cross_validation
>>> skf = cross_validation.StratifiedKFold(y, n_folds=2)  # 2-fold cross-validation
>>> len(skf)
2
>>> for train_index, test_index in skf:
... print("TRAIN:", train_index, "TEST:", test_index)
... X_train, X_test = X[train_index], X[test_index]
... y_train, y_test = y[train_index], y[test_index]
... #fit and predict with X_train/test. Use accuracy metrics to check validation performance
2. StratifiedShuffleSplit: This module creates a single training/testing set with equally balanced (stratified) classes. Essentially, this is what you want with n_iter=1, and you can specify the test size here just as in train_test_split.

Code:
>>> from sklearn.cross_validation import StratifiedShuffleSplit
>>> sss = StratifiedShuffleSplit(y, n_iter=1, test_size=0.5, random_state=0)
>>> len(sss)
1
>>> for train_index, test_index in sss:
... print("TRAIN:", train_index, "TEST:", test_index)
... X_train, X_test = X[train_index], X[test_index]
... y_train, y_test = y[train_index], y[test_index]
>>> # fit and predict with your classifier using the above X/y train/test
Note that as of 0.18.x, n_iter should be n_splits for StratifiedShuffleSplit, and there's a slightly different API for it: http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.StratifiedShuffleSplit.html – lollercoaster 2016-10-31 23:27:49
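To show what that renamed 0.18+ API looks like in practice, here is a hedged sketch (the toy data is made up for illustration):

```python
import numpy as np
from sklearn.model_selection import StratifiedShuffleSplit

X = np.arange(16).reshape(8, 2)
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

# n_iter became n_splits, and the labels moved from the constructor
# to the split() method.
sss = StratifiedShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
train_index, test_index = next(sss.split(X, y))

X_train, X_test = X[train_index], X[test_index]
y_train, y_test = y[train_index], y[test_index]
```

With test_size=0.25 this yields the single stratified 75%/25% split the question asks for.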
Here's an example for continuous/regression data (until this issue on GitHub is resolved):
import numpy as np
from sklearn.model_selection import train_test_split

# Your bins need to be appropriate for your output values
# e.g. 0 to 50 with 25 bins
bins = np.linspace(0, 50, 25)
y_binned = np.digitize(y, bins)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y_binned)
In addition to the accepted answer by @Andreas Mueller, I just want to add that, as @tangy mentioned above:

StratifiedShuffleSplit most closely resembles train_test_split(stratify=y), with added features:
import pandas as pd
from sklearn.model_selection import train_test_split

# train_size is 1 - tst_size - vld_size; df is assumed to hold the
# features plus a target column named y
tst_size = 0.15
vld_size = 0.15
X_train_test, X_valid, y_train_test, y_valid = train_test_split(
    df.drop('y', axis=1), df.y, test_size=vld_size, random_state=13903)
X_train_test_V = pd.DataFrame(X_train_test)
X_valid = pd.DataFrame(X_valid)
X_train, X_test, y_train, y_test = train_test_split(
    X_train_test, y_train_test, test_size=tst_size, random_state=13903)
IMO, this should be the accepted answer. – Proghero 2016-01-16 04:12:36

@Proghero: I edited my answer for 0.17 after the other answer was accepted ;) – 2016-01-19 16:19:40

@AndreasMueller Is there a simple way to stratify regression data? – Jordan 2016-09-14 09:53:51