
I am trying to implement a softmax regression model in TensorFlow to benchmark it against other mainstream deep learning frameworks. The official tutorial code is slow because of the feed_dict issue in TensorFlow. I have tried to serve the data "the TensorFlow way", but I don't know the most efficient approach. For now I just use a single batch as a constant and train on that batch. What would be an efficient minibatched version of this code? Here is my code:

from tensorflow.examples.tutorials.mnist import input_data 

import tensorflow as tf 
import numpy as np 

mnist = input_data.read_data_sets(FLAGS.data_dir, one_hot=True)  # assumes FLAGS.data_dir is defined elsewhere
batch_xs, batch_ys = mnist.train.next_batch(100) 

x = tf.constant(batch_xs, name="x") 
W = tf.Variable(0.1*tf.random_normal([784, 10])) 
b = tf.Variable(tf.zeros([10])) 
logits = tf.matmul(x, W) + b 

batch_y = batch_ys.astype(np.float32) 
y_ = tf.constant(batch_y, name="y_") 

loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=logits))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(loss) 
sess = tf.Session()
sess.run(tf.global_variables_initializer())
# The minibatch is never updated during this for loop
for i in range(5500): 
    sess.run(train_step) 

Answer


Like this:

from tensorflow.examples.tutorials.mnist import input_data 

import tensorflow as tf 
import numpy as np 

batch_size = 32  # any batch size you want

mnist = input_data.read_data_sets(FLAGS.data_dir, one_hot=True)  # assumes FLAGS.data_dir is defined elsewhere


x = tf.placeholder(tf.float32, shape = [None, 784]) 
y = tf.placeholder(tf.float32, shape = [None, 10]) 

W = tf.Variable(0.1*tf.random_normal([784, 10])) 
b = tf.Variable(tf.zeros([10])) 

logits = tf.matmul(x, W) + b 

loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=logits))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(loss) 
sess = tf.Session()
sess.run(tf.global_variables_initializer())
# A fresh minibatch is drawn and fed on every iteration of this loop
for i in range(1000): 
    batch_xs, batch_ys = mnist.train.next_batch(batch_size) 
    l, _ = sess.run([loss, train_step], feed_dict={x: batch_xs, y: batch_ys})
    print(l)  # loss for every minibatch

A shape like [None, 784] lets you feed in a value of any shape [?, 784], i.e. a batch of any size.
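For instance, this quick check (my own sketch, not part of the original answer) runs the same logits node with several batch sizes through the one placeholder:

for bs in (1, 32, 128):
    xs, _ = mnist.train.next_batch(bs)
    # the same graph accepts every batch size through the one placeholder
    print(sess.run(logits, feed_dict={x: xs}).shape)  # (bs, 10) each time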

I haven't tested this code, but I hope it works.
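On the original question of serving data "the TensorFlow way" without feed_dict: in TF 1.x this was typically done with input queues. Below is a minimal, untested sketch, assuming the tf.train queue API (tf.train.slice_input_producer / tf.train.batch) is available in your TensorFlow version; the data path "MNIST_data/" is hypothetical:

import numpy as np
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)  # hypothetical path

# Keep the whole training set in the graph as constants and let a queue
# slice shuffled minibatches from it, instead of feeding via feed_dict.
images = tf.constant(mnist.train.images, dtype=tf.float32)
labels = tf.constant(mnist.train.labels.astype(np.float32))
image, label = tf.train.slice_input_producer([images, labels], shuffle=True)
batch_x, batch_y = tf.train.batch([image, label], batch_size=32)

W = tf.Variable(0.1 * tf.random_normal([784, 10]))
b = tf.Variable(tf.zeros([10]))
logits = tf.matmul(batch_x, W) + b
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=batch_y, logits=logits))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

sess = tf.Session()
sess.run(tf.global_variables_initializer())
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess=sess, coord=coord)
for i in range(1000):
    l, _ = sess.run([loss, train_step])  # no feed_dict needed
coord.request_stop()
coord.join(threads)

Because the training set lives in the graph as constants, each sess.run pulls a fresh shuffled minibatch from the queue with no Python-side copying.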