
TensorFlow model evaluation based on batch size

I have a graph in TensorFlow that I trained for hundreds of epochs with a batch size of 32 observations. I now want to predict some new data with the trained graph, so I saved and reloaded it, but I am forced to always pass in the same number of observations as my batch size, because the placeholders I declared in the graph are sized to that batch size. How can I get my graph to accept any number of observations?

How should I set this up so that I can train on an arbitrary number of observations and then later run the graph on a different number?

Below are excerpts of the important parts of the code. Building the graph:

graph = tf.Graph()
with graph.as_default():
    # Placeholders sized to the training batch; this is what locks the
    # graph to a fixed batch size.
    x = tf.placeholder(tf.float32, shape=[batch_size, self.image_height, self.image_width, 1], name="data")

    y_ = tf.placeholder(tf.float32, shape=[batch_size, num_labels], name="labels")

    # Layer 1
    W_conv1 = weight_variable([patch_size, patch_size, 1, depth], name="weight_1")
    b_conv1 = bias_variable([depth], name="bias_1")

    h_conv1 = tf.nn.relu(conv2d(x, W_conv1, name="conv_1") + b_conv1, name="relu_1")
    h_pool1 = max_pool_2x2(h_conv1, name="pool_1")

    # Layer 2
    #W_conv2 = weight_variable([patch_size, patch_size, depth, depth*2])
    #b_conv2 = bias_variable([depth*2])

    #h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
    #h_pool2 = max_pool_2x2(h_conv2)

    # Densely connected layer. After one 2x2 max pool the spatial
    # dimensions are halved, so the flattened size is
    # (height/2) * (width/2) * depth, and the weight matrix must match.
    W_fc1 = weight_variable([self.image_height // 2 * self.image_width // 2 * depth, depth], name="weight_2")
    b_fc1 = bias_variable([depth], name="bias_2")

    h_pool2_flat = tf.reshape(h_pool1, [-1, self.image_height // 2 * self.image_width // 2 * depth], name="reshape_1")
    h_fc1 = tf.nn.relu(tf.nn.xw_plus_b(h_pool2_flat, W_fc1, b_fc1), name="relu_2")

    keep_prob = tf.placeholder(tf.float32, name="keep_prob")
    h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob, name="drop_1")

    W_fc2 = weight_variable([depth, num_labels], name="dense_weight")
    b_fc2 = bias_variable([num_labels], name="dense_bias")

    logits = tf.nn.xw_plus_b(h_fc1_drop, W_fc2, b_fc2)
    tf.add_to_collection("logits", logits)
    y_conv = tf.nn.softmax(logits, name="softmax_1")
    tf.add_to_collection("y_conv", y_conv)

    with tf.name_scope("cross-entropy") as scope:
        # softmax_cross_entropy_with_logits expects the unscaled logits,
        # not the softmax output.
        cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits, y_, name="cross_entropy_1"))
        ce_summ = tf.scalar_summary("cross entropy", cross_entropy, name="cross_entropy")

    optimizer = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy, name="min_adam_1")

    with tf.name_scope("prediction") as scope:
        correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
        accuracy_summary = tf.scalar_summary("accuracy", accuracy, name="accuracy_summary")

    merged = tf.merge_all_summaries()
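(For completeness: the excerpt calls helpers that are not shown, namely weight_variable, bias_variable, conv2d and max_pool_2x2. A plausible sketch of them, modeled on the TensorFlow MNIST tutorial of that era; the name parameters are an assumption to match the calls above:

def weight_variable(shape, name=None):
    # Small truncated-normal initialization for conv/dense weights.
    return tf.Variable(tf.truncated_normal(shape, stddev=0.1), name=name)

def bias_variable(shape, name=None):
    # Slightly positive bias keeps ReLU units active at the start.
    return tf.Variable(tf.constant(0.1, shape=shape), name=name)

def conv2d(x, W, name=None):
    # Stride-1 convolution with SAME padding keeps the spatial size.
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME', name=name)

def max_pool_2x2(x, name=None):
    # 2x2 max pooling halves the height and width.
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', name=name)
)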

Loading and running on new data:

with tf.Session() as sess:
    new_saver = tf.train.import_meta_graph('./simple_model/one-layer-50.meta')
    new_saver.restore(sess, './simple_model/one-layer-50')
    logger.info("Model restored")
    image, _ = tf_nn.reformat(images, None, 3)

    x_image = tf.placeholder(tf.float32, shape=[image.shape[0], 28, 28, 1],
                             name="data")
    keep_prob = tf.placeholder(tf.float32, name="keep_prob")

    feed_dict = {x_image: image, keep_prob: .01}
    y_ = tf.get_collection("y_")
    prediction = sess.run(y_, feed_dict=feed_dict)

Hey, referring to your second code block: you are fetching 'y_', the placeholder for the labels, from the graph. How is that going to help evaluate the model? – Rusty

Answers


You can give one dimension of a placeholder a flexible size by using None instead of a concrete number when defining it, like this:

x = tf.placeholder(tf.float32, shape=[None, self.image_height, self.image_width, 1], name="data") 

y_ = tf.placeholder(tf.float32, shape=[None, num_labels], name="labels") 

Edit: There is a section in the TensorFlow FAQ about this.
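For illustration, a minimal sketch of how this plays out end to end, assuming the same pre-1.0 TensorFlow API as the question; train_batch, train_labels and new_images stand in for real data and are not from the original post:

x = tf.placeholder(tf.float32, shape=[None, image_height, image_width, 1], name="data")
y_ = tf.placeholder(tf.float32, shape=[None, num_labels], name="labels")
# ... build the rest of the graph exactly as in the question ...

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    # Train with batches of 32 observations.
    sess.run(optimizer, feed_dict={x: train_batch, y_: train_labels, keep_prob: 0.5})
    # Predict on any number of observations, e.g. a single image or 500.
    predictions = sess.run(y_conv, feed_dict={x: new_images, keep_prob: 1.0})

Note that keep_prob should be 1.0 at prediction time so that dropout is effectively disabled.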


Awesome! Thanks for the quick, simple answer. –


My approach has been to define batch_size as a tf.Variable and then supply the batch size to use as its value when running the session. This has worked fine for me in the past, but I imagine Styrke's solution is more elegant.
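A minimal sketch of that idea, using a scalar placeholder in place of the answerer's tf.Variable, since both are just feedable scalar tensors that shape-sensitive ops can read; the 28x28 image size and all names below are illustrative:

import numpy as np
import tensorflow as tf

batch_size = tf.placeholder(tf.int32, shape=[], name="batch_size")
x = tf.placeholder(tf.float32, shape=[None, 28 * 28], name="data")
# Shape-sensitive ops read the batch size from the tensor rather than
# from a hard-coded Python integer.
x_image = tf.reshape(x, tf.pack([batch_size, 28, 28, 1]), name="x_image")

with tf.Session() as sess:
    # Feed 32 rows during training, then any other number afterwards.
    out = sess.run(x_image, feed_dict={x: np.zeros((7, 28 * 28), np.float32),
                                       batch_size: 7})
    print(out.shape)  # (7, 28, 28, 1)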


Thanks! I marked @Styrke's answer as correct because it works exactly the way I stated the problem, but this would work too. –