Since you didn't provide a dataset, I generated simulated data with make_blobs. Your question doesn't make clear how many test samples there should be, so I defined test_samples = 50000, but you can change this value to suit your needs.
from sklearn import datasets

train_samples = 5000
test_samples = 50000
total_samples = train_samples + test_samples
X, y = datasets.make_blobs(n_samples=total_samples, centers=2, random_state=0)
The following snippet splits the data into training and test sets with balanced classes:
from sklearn.model_selection import StratifiedShuffleSplit

sss = StratifiedShuffleSplit(n_splits=1, train_size=train_samples,
                             test_size=test_samples, random_state=0)
for train_index, test_index in sss.split(X, y):
    X_train, X_test = X[train_index], X[test_index]
    y_train, y_test = y[train_index], y[test_index]
Demo:
In [54]: from scipy import stats
In [55]: stats.itemfreq(y_train)
Out[55]:
array([[ 0, 2500],
[ 1, 2500]], dtype=int64)
In [56]: stats.itemfreq(y_test)
Out[56]:
array([[ 0, 25000],
[ 1, 25000]], dtype=int64)
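As an aside, scipy.stats.itemfreq was deprecated and later removed from SciPy, so on recent versions the same per-class counts can be obtained with np.unique. A minimal sketch, using a stand-in label array with the same composition as y_train above:

```python
import numpy as np

# Stand-in for y_train above: 2500 samples of each class
y_train = np.array([0] * 2500 + [1] * 2500)

labels, counts = np.unique(y_train, return_counts=True)
print(dict(zip(labels.tolist(), counts.tolist())))  # {0: 2500, 1: 2500}
```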
EDIT

As @geompalik correctly pointed out, StratifiedShuffleSplit will not produce balanced splits if the dataset itself is imbalanced. In that case, you may find this function useful:
import numpy as np

def stratified_split(y, train_ratio):
    """Split indices so each class contributes the same train/test ratio."""
    def split_class(y, label, train_ratio):
        indices = np.flatnonzero(y == label)       # indices of samples with this label
        n_train = int(indices.size * train_ratio)  # per-class training size
        train_index = indices[:n_train]
        test_index = indices[n_train:]
        return (train_index, test_index)
    idx = [split_class(y, label, train_ratio) for label in np.unique(y)]
    train_index = np.concatenate([train for train, _ in idx])
    test_index = np.concatenate([test for _, test in idx])
    return train_index, test_index
Demo:
I previously generated the simulated data (code not shown here) with the number of samples per class you indicated.
In [153]: y
Out[153]: array([1, 0, 1, ..., 0, 0, 1])
In [154]: y.size
Out[154]: 55000
In [155]: train_ratio = float(train_samples)/(train_samples + test_samples)
In [156]: train_ratio
Out[156]: 0.09090909090909091
In [157]: train_index, test_index = stratified_split(y, train_ratio)
In [158]: y_train = y[train_index]
In [159]: y_test = y[test_index]
In [160]: y_train.size
Out[160]: 5000
In [161]: y_test.size
Out[161]: 50000
In [162]: stats.itemfreq(y_train)
Out[162]:
array([[ 0, 2438],
[ 1, 2562]], dtype=int64)
In [163]: stats.itemfreq(y_test)
Out[163]:
array([[ 0, 24380],
[ 1, 25620]], dtype=int64)
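Putting the pieces together, here is a self-contained sketch of the whole workflow on an imbalanced dataset. Since the original data-generation code is not shown, I simulate the imbalance with make_classification's weights parameter, which is an assumption on my part:

```python
import numpy as np
from sklearn.datasets import make_classification

def stratified_split(y, train_ratio):
    # Per-class split: the first train_ratio fraction of each class's
    # indices goes to the training set, the remainder to the test set.
    def split_class(y, label, train_ratio):
        indices = np.flatnonzero(y == label)
        n_train = int(indices.size * train_ratio)
        return indices[:n_train], indices[n_train:]
    idx = [split_class(y, label, train_ratio) for label in np.unique(y)]
    train_index = np.concatenate([train for train, _ in idx])
    test_index = np.concatenate([test for _, test in idx])
    return train_index, test_index

# Imbalanced labels: roughly 90% class 0, 10% class 1 (assumed proportions)
X, y = make_classification(n_samples=10000, weights=[0.9, 0.1], random_state=0)

train_index, test_index = stratified_split(y, train_ratio=0.8)
y_train, y_test = y[train_index], y[test_index]

# Each class keeps (approximately) the same proportion in both splits
print(np.bincount(y_train), np.bincount(y_test))
```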