2017-08-02

Cost stays constant during training. I am a total newbie trying to use TensorFlow for a multi-input, multi-output problem. During training, however, the weights and the cost of the network never change. The main part of the code is below; any suggestions would be appreciated!

import numpy as np
import tensorflow as tf

learning_rate = 0.01
training_epoch = 2000
batch_size = 100
display_step = 1

# placeholders for graph input
x = tf.placeholder(tf.float64, [None, 14])
y = tf.placeholder(tf.float64, [None, 8])

# model weights
w_1 = tf.Variable(tf.zeros([14, 11], dtype=tf.float64))
w_2 = tf.Variable(tf.zeros([11, 8], dtype=tf.float64))

# construct the model: 14 -> 11 -> 8, ReLU on both layers
h_in = tf.matmul(x, w_1)
h_out = tf.nn.relu(h_in)
o_in = tf.matmul(h_out, w_2)
o_out = tf.nn.relu(o_in)

# cost: sum of squared errors
cost = tf.reduce_sum(tf.pow(o_out - y, 2))

# optimizer
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

# initializer
init = tf.global_variables_initializer()

# launch the graph (train_input_array / train_output_array are defined elsewhere)
with tf.Session() as sess:
    sess.run(init)

    for epoch in range(training_epoch):
        pos = 0
        # loop over all batches, training on each one
        while pos + batch_size <= train_input_array.shape[0]:
            # get the next batch
            batch_i = np.array(train_input_array[pos:pos + batch_size])
            batch_o = np.array(train_output_array[pos:pos + batch_size])
            pos += batch_size
            sess.run(optimizer, feed_dict={x: batch_i, y: batch_o})
            print(sess.run(w_2[0]))

        if (epoch + 1) % display_step == 0:
            c = sess.run(cost, feed_dict={x: batch_i, y: batch_o})
            print("Epoch:", "%04d" % (epoch + 1), "cost:", "{:.9f}".format(c))

Answer

I think you need to change your cost function to reduce_mean:

# reduce_sum makes the loss (and the gradient) scale with batch size
cost = tf.reduce_sum(tf.pow(o_out - y, 2))
# use the mean instead
cost = tf.reduce_mean(tf.pow(o_out - y, 2))
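
With reduce_sum the magnitude of the loss, and hence the size of each gradient step, grows with the batch size, so a learning rate that is stable for the mean can overshoot for the sum; reduce_mean keeps the step size independent of the batch size.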

I changed it to reduce_mean, but the loss is still constant. – Dennis


You could try using the momentum optimizer with a larger number of parameters. –
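
For reference, a minimal sketch of that suggestion against the question's graph (the momentum value 0.9 is a conventional default, not something given in the thread):

# swap plain gradient descent for momentum; momentum=0.9 is an assumed value
optimizer = tf.train.MomentumOptimizer(learning_rate, momentum=0.9).minimize(cost)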


Thanks for the suggestions. I changed the weight initialization from tf.zeros to tf.random_normal, and the loss finally started to decrease. – Dennis
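
A minimal sketch of that fix, using the shapes from the question (the stddev of 0.1 is an assumed value, not from the thread):

# random initialization breaks the symmetry: with all-zero weights every
# ReLU activation and every gradient is exactly zero, so nothing ever updates
w_1 = tf.Variable(tf.random_normal([14, 11], stddev=0.1, dtype=tf.float64))
w_2 = tf.Variable(tf.random_normal([11, 8], stddev=0.1, dtype=tf.float64))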