
Data structure difference / TFLearn

I have two datasets, which look like this:

input: 
array([[[ 0.99309823], 
      ... 
     [ 0.  ]]]) 

shape : (1, 2501) 

output: 
array([[0, 0, 0, ..., 0, 0, 1], 
     ..., 
     [0, 0, 0, ..., 0, 0, 0]]) 
shape : (2501, 9) 

I am processing them with TFLearn like this:

import tflearn

input_layer = tflearn.input_data(shape=[None,2501]) 
hidden1 = tflearn.fully_connected(input_layer,1205,activation='ReLU', regularizer='L2', weight_decay=0.001) 
dropout1 = tflearn.dropout(hidden1,0.8) 

hidden2 = tflearn.fully_connected(dropout1,1205,activation='ReLU', regularizer='L2', weight_decay=0.001) 
dropout2 = tflearn.dropout(hidden2,0.8) 
softmax = tflearn.fully_connected(dropout2,9,activation='softmax') 

# Regression with SGD 
sgd = tflearn.SGD(learning_rate=0.1,lr_decay=0.96, decay_step=1000) 
top_k=tflearn.metrics.Top_k(3) 
net = tflearn.regression(softmax,optimizer=sgd,metric=top_k,loss='categorical_crossentropy') 

model = tflearn.DNN(net) 
model.fit(input,output,n_epoch=10,show_metric=True, run_id='dense_model') 

It works, but not the way I want. This is a DNN model. What I want is that when I feed in 0.95, the model gives the corresponding prediction, for example [0,0,0,0,0,0,0,0,1]. However, when I try to feed in 0.95, it says:

ValueError: Cannot feed value of shape (1,) for Tensor 'InputData/X:0', which has shape '(?, 2501)' 

When I tried to understand the error, I realized that the model I built requires data of shape (1, 2501) for prediction, which is where I went wrong.
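
The failing call is not shown above, but it was presumably something along these lines (a hypothetical reconstruction; it assumes numpy is imported as np and model is the DNN from the code above):

# Hypothetical failing call: a single scalar becomes a (1,) array, 
# while the input layer expects 2501 features per sample. 
sample = np.array([0.95])    # shape (1,) 
model.predict(sample)        # ValueError: cannot feed shape (1,) into (?, 2501) 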

What I want is: for each element of the input, predict the corresponding element of the output. As you can see in the example dataset, for [0.99309823] the corresponding output is [0,0,0,0,0,0,0,0,1]. I want to train the model that way.

I may have structured the data or the model incorrectly (probably the dataset). I have explained everything; I need help, because I really don't know.

Answers


Your input data should be N x 1 dimensional (N = number of samples) to achieve the mapping [0.99309823] -> [0,0,0,0,0,0,0,0,1]. Judging by the shape of your input data, it looks more like it contains a single sample with 2501 dimensions.

  • ValueError: Cannot feed value of shape (1,) for Tensor 'InputData/X:0', which has shape '(?, 2501)'. This error means that TensorFlow expects you to feed a vector of shape (?, 2501), but you are feeding the network a vector of shape (1,).

  • Example of the modified code with dummy data:

import numpy as np 
import tflearn 

#creating dummy data 
input_data = np.random.rand(1, 2501) 
input_data = np.transpose(input_data) # now shape is (2501,1) 
output_data = np.random.randint(8, size=2501) 
n_values = 9 
output_data = np.eye(n_values)[output_data] 

# checking the shapes 
print(input_data.shape)  # (2501, 1) 
print(output_data.shape)  # (2501, 9) 

input_layer = tflearn.input_data(shape=[None,1]) # now network is expecting (Nx1) 
hidden1 = tflearn.fully_connected(input_layer,1205,activation='ReLU', regularizer='L2', weight_decay=0.001) 
dropout1 = tflearn.dropout(hidden1,0.8) 

hidden2 = tflearn.fully_connected(dropout1,1205,activation='ReLU', regularizer='L2', weight_decay=0.001) 
dropout2 = tflearn.dropout(hidden2,0.8) 
softmax = tflearn.fully_connected(dropout2,9,activation='softmax') 

# Regression with SGD 
sgd = tflearn.SGD(learning_rate=0.1,lr_decay=0.96, decay_step=1000) 
top_k=tflearn.metrics.Top_k(3) 
net = tflearn.regression(softmax,optimizer=sgd,metric=top_k,loss='categorical_crossentropy') 
model = tflearn.DNN(net) 
model.fit(input_data, output_data, n_epoch=10,show_metric=True, run_id='dense_model') 
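
Once the data is laid out as (N, 1) samples, a single value can be predicted by wrapping it as a one-sample batch. A minimal sketch, assuming the model trained above:

# Predict the class for one input value by passing it as a (1, 1) batch. 
sample = np.array([[0.95]])           # shape (1, 1): one sample, one feature 
prediction = model.predict(sample)    # shape (1, 9): softmax probabilities 
print(np.argmax(prediction, axis=1))  # index of the most likely class 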

You are right, thank you rcmalli.


A friend of mine also warned me about the same thing rcmalli did. He said to reshape:

input = tf.reshape(input, (2501,1)) 

and change

input_layer = tflearn.input_data(shape=[None,2501]) 

to

input_layer = tflearn.input_data(shape=[None, 1]) 

The variable-size dimension must be None. In your incorrect setup, 2501 was the number of samples (or instances; I translated this from another language, but you get the idea) in your dataset. 1 is the fixed size of each input.
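
For the arrays in the question, the reshape described above simply turns the (1, 2501) input into 2501 rows of one feature each, so that it lines up with the (2501, 9) output. A rough sketch with NumPy (the variable names here are placeholders, not from the original code):

import numpy as np 

input_arr = np.random.rand(1, 2501)                       # stand-in for the (1, 2501) input 
output_arr = np.eye(9)[np.random.randint(9, size=2501)]   # stand-in for the (2501, 9) one-hot output 

input_arr = input_arr.reshape(2501, 1)                    # one row per sample, one feature per row 

print(input_arr.shape)   # (2501, 1) 
print(output_arr.shape)  # (2501, 9) 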