Scikit-learn MLPRegressor - how to avoid predicting negative results?

2017-08-22

I am trying to use MLPRegressor to train and test on my dataset. I have two datasets (a training set and a test set), both with exactly the same feature columns and label. Here is a sample of my data:

Full,Id,Id & PPDB,Id & Words Sequence,Id & Synonyms,Id & Hypernyms,Id & Hyponyms,Gold Standard 
1.667,0.476,0.952,0.476,1.429,0.952,0.476,2.345 
3.056,1.111,1.667,1.111,3.056,1.389,1.111,1.9 
1.765,1.176,1.176,1.176,1.765,1.176,1.176,2.2 
0.714,0.714,0.714,0.714,0.714,0.714,0.714,0.0 
................ 

And here is my code:

import pandas as pd 
import numpy as np 

from sklearn.neural_network import MLPRegressor 

# np.random.seed() returns None, so keep the seed as a plain integer
# and pass it to random_state below
randomseed = 0
np.random.seed(randomseed)

datatraining = pd.read_csv("datatrain.csv") 

datatesting = pd.read_csv("datatest.csv") 

columns = ["Full","Id","Id & PPDB","Id & Words Sequence","Id & Synonyms","Id & Hypernyms","Id & Hyponyms"] 

labeltrain = datatraining["Gold Standard"].values 
featurestrain = datatraining[list(columns)].values 


labeltest = datatesting["Gold Standard"].values 
featurestest = datatesting[list(columns)].values 

X_train = featurestrain 
y_train = labeltrain 

X_test = featurestest 
y_test = labeltest 

mlp = MLPRegressor(solver='lbfgs', hidden_layer_sizes=50, max_iter=1000, learning_rate='constant', random_state=randomseed) 

mlp.fit(X_train, y_train) 

# note: for a regressor, .score() returns R^2, not classification accuracy
print('Training R^2 : {:.3f}'.format(mlp.score(X_train, y_train)))
print()

predicting = mlp.predict(X_test)
print(predicting)
print()

And here are the predicted results:

[ 1.97553444 3.43401776 3.04097607 2.7015464 2.03777686 3.63274593 
    3.37826962 -0.60260337 0.41626517 3.5374289 3.66114929 3.244683 
    2.6313756 2.14243075 3.20841434 2.105238 4.9805092 4.00868273 
    2.45508505 4.53332828 3.41862096 3.35721078 3.23069344 3.72149434 
    4.9805092 2.61705563 1.55052494 -0.14135979 2.65875196 3.05328206 
    3.51127424 0.51076396 2.39947967 1.95916595 3.71520651 2.1526807 
    2.26438616 0.73249057 2.46888695 3.56976227 1.03109988 2.15894353 
    2.06396103 0.66133707 4.72861602 2.4592647 2.84176811 2.3157664 
    1.68426416 2.56022955 -0.00518545 1.67213609 0.6998739 3.25940136 
    3.25369266 3.88888542 1.9168694 2.26036302 3.97917769 2.00322903 
    3.03121106 3.29083723 0.6998739 4.33375678 0.6998739 2.71141538 
-4.23755447 3.958574 2.67765274 2.68715423 2.32714117 2.6500056 
    ........] 

As the output shows, some of the predictions are negative. How can I keep the model from predicting negative values? Moreover, my dataset contains only positive values.


You need to impose a positivity constraint on the predicted values one way or another. So you may want to change your question from *why are negative results shown* to *how to avoid predicting negative results*, or, more generally, *how to constrain the prediction domain*. – Kanak
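A minimal sketch of two common ways to impose such a constraint, reusing the variable names from the question and assuming the Gold Standard scores are non-negative: either clip the raw predictions at zero, or fit on log1p-transformed targets and invert the transform afterwards.

import numpy as np

# Option 1: clip the raw predictions at zero after the fact.
predicting = np.clip(mlp.predict(X_test), 0, None)

# Option 2: regress on log1p(y) so the back-transformed predictions
# stay above -1; clip once more to remove any tiny negative remainder.
mlp.fit(X_train, np.log1p(y_train))
predicting = np.clip(np.expm1(mlp.predict(X_test)), 0, None)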

Answer


This assumes you have no categorical variables. Also, you mentioned in the question that all of your values are positive. Try standardizing your data with StandardScaler(). Fit it on your X_train to obtain the standardized data.

from sklearn import preprocessing as pre 
... 
scaler = pre.StandardScaler() 
X_train_scaled = scaler.fit_transform(X_train) 
# use transform (not fit_transform) so the test set is scaled with the
# statistics learned from the training set
X_test_scaled = scaler.transform(X_test)

Initialize the model with the parameters that work best for your case, then fit it on the scaled data:

mlp.fit(X_train_scaled, y_train) 
... 
predicting = mlp.predict(X_test_scaled) 

That should do it. Let me know how it goes.
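For reference, the same scale-then-fit sequence can also be expressed as a scikit-learn Pipeline, which fits the scaler on the training data only and reuses its statistics on the test data (a sketch using the variable names from the question):

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor

# The pipeline fits the scaler on X_train and applies the same
# transformation to X_test inside predict().
model = make_pipeline(StandardScaler(),
                      MLPRegressor(solver='lbfgs', hidden_layer_sizes=(50,),
                                   max_iter=1000, random_state=0))
model.fit(X_train, y_train)
predicting = model.predict(X_test)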

Also, here is some good further reading:

https://stats.stackexchange.com/questions/189652/is-it-a-good-practice-to-always-scale-normalize-data-for-machine-learning
https://stats.stackexchange.com/questions/7757/data-normalization-and-standardization-in-neural-networks