
"Cannot feed value of shape (128,)": I would like to ask for your help solving this error. I want to train a deep autoencoder network using a local csv file, which I convert into a numpy array (with the csv and numpy libraries), but this data never fits into my placeholder's tensor. Tensorflow - ValueError: Cannot feed value of shape (128,) for Tensor 'Placeholder_142:0', which has shape '(?, 3433)'.

Here is an abstract of the deep autoencoder:

class Deep_Autoencoder:
    def __init__(self, input_dim, n_nodes_hl = (32, 16, 1), epochs = 400, batch_size = 128, learning_rate = 0.02, n_examples = 10):

        # Hyperparameters
        self.input_dim = input_dim
        self.epochs = epochs
        self.batch_size = batch_size
        self.learning_rate = learning_rate
        self.n_examples = n_examples

        # Input and target placeholders
        X = tf.placeholder('float', [None, self.input_dim])
        Y = tf.placeholder('float', [None, self.input_dim])
        ...

        self.X = X
        print("self.X : ", self.X)
        self.Y = Y
        print("self.Y : ", self.Y)
        ...

    def train_neural_network(self, data, targets):

        with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())
            for epoch in range(self.epochs):
                epoch_loss = 0
                i = 0

                # Let's train it in batch-mode
                while i < len(data):
                    start = i
                    end = i + self.batch_size

                    batch_x = np.array(data[start:end])
                    print("type batch_x :", type(batch_x))
                    print("len batch_x :", len(batch_x))
                    batch_y = np.array(targets[start:end])
                    print("type batch_y :", type(batch_y))
                    print("len batch_y :", len(batch_y))

                    hidden, _, c = sess.run([self.encoded, self.optimizer, self.cost], feed_dict={self.X: batch_x, self.Y: batch_y})
                    epoch_loss += c
                    i += self.batch_size

            self.saver.save(sess, 'selfautoencoder.ckpt')
            print('Accuracy', self.accuracy.eval({self.X: data, self.Y: targets}))

Here is where I create the input data, and below you can see how I print it out in the main function for your information (please note that I am really only interested in column 3):

features_DeepAE = create_feature_sets(filename) 

Train_x = np.array(features_DeepAE[0]) 
Train_y = np.array(features_DeepAE[1]) 

print("type Train_x : ", type(Train_x)) 
print("type Train_x.T[3] : ", type(Train_x.T[3])) 
print("len Train_x : ", len(Train_x)) 
print("len Train_x.T[3] : ", len(Train_x.T[3])) 
print("shape Train_x : ", Train_x.shape) 
print("type Train_y : ", type(Train_y)) 
print("type Train_y.T[3] : ", type(Train_y.T[3])) 
print("len Train_y : ", len(Train_y)) 
print("len Train_y.T[3] : ", len(Train_y.T[3])) 
print("shape Train_y : ", Train_y.shape) 

Here is how I run the code:

DAE = Deep_Autoencoder(input_dim = len(Train_x)) 
DAE.train_neural_network(Train_x.T[3], Train_y.T[3]) 
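
To make the shapes concrete, here is a minimal numpy sketch of what ends up in the placeholder and in the first batch (the zeros array is only a stand-in for the real csv data; the shapes match the print-out below):

import numpy as np

# Stand-in for the real data: same shape (3433, 5) as Train_x above
Train_x = np.zeros((3433, 5), dtype=np.float32)

column = Train_x.T[3]              # 4th column as a 1-D vector, shape (3433,)
batch_x = np.array(column[0:128])  # first batch, shape (128,)

print(column.shape)   # (3433,)
print(batch_x.shape)  # (128,) -- fed into a placeholder of shape (?, 3433)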

These are the printed outputs, for reference:

type Train_x : <class 'numpy.ndarray'> 
type Train_x.T[3] : <class 'numpy.ndarray'> 
len Train_x : 3433 
len Train_x.T[3] : 3433 
shape Train_x : (3433, 5) 
type Train_y : <class 'numpy.ndarray'> 
type Train_y.T[3] : <class 'numpy.ndarray'> 
len Train_y : 3433 
len Train_y.T[3] : 3433 
shape Train_y : (3433, 5) 
self.X : Tensor("Placeholder_142:0", shape=(?, 3433), dtype=float32) 
self.Y : Tensor("Placeholder_143:0", shape=(?, 3433), dtype=float32) 
type batch_x : <class 'numpy.ndarray'> 
len batch_x : 128 
type batch_y : <class 'numpy.ndarray'> 
len batch_y : 128 

And finally, the error:

ValueError: Cannot feed value of shape (128,) for Tensor 'Placeholder_142:0', which has shape '(?, 3433)'

And yes... I am already up to placeholder #143... which gives you a measure of how many attempts have failed (reshaping the batches and/or the tensors, transposing one and/or the other, searching the internet for workarounds...)! Feel free to ask for more information if needed.


Could you print one of the batches to see its shape? –


Could you upload your code? If you do, I will try to run it. – finbarr


Maybe print 'batch_x.shape' –

Answer


Solved, thanks to Avishkar Bhoopchand and amirbar:

Set your input_dim to 1 and add a "dummy" dimension to your batch_x and batch_y so that they fit into the placeholders of shape [?, 1], like this: batch_x = np.array(data[start:end])[:, None] and batch_y = np.array(targets[start:end])[:, None]. None adds an empty dimension to the numpy array.
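
For completeness, a minimal numpy sketch of what the [:, None] indexing does (the arange array is only a hypothetical stand-in for the 1-D column passed to train_neural_network):

import numpy as np

# Hypothetical 1-D column, playing the role of Train_x.T[3]
data = np.arange(3433, dtype=np.float32)

batch = np.array(data[0:128])           # shape (128,)  -> rejected by the placeholder
batch = np.array(data[0:128])[:, None]  # shape (128, 1) -> matches a [None, 1] placeholder

print(batch.shape)  # (128, 1)

With input_dim = 1 the placeholders are built with shape (?, 1), so every (128, 1) batch is fed without a shape error.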
