
I have a question similar to this one. How can I accumulate gradients in TensorFlow?

Because my resources are limited and I am using a deep model (VGG-16) to train a triplet network, I want to accumulate the gradients for a batch size of 128, one training example at a time, and then backpropagate the error and update the weights.

It's not clear to me how to do this. I work with TensorFlow, but any implementation/pseudocode is welcome.


Why don't you use the answers from the question you linked? – Pop


@Pop Because I don't understand them. I'm looking for something more detailed (beginner level) –

Answer


Let's walk through the code proposed in one of the answers you linked:

import tensorflow as tf

## Optimizer definition - nothing different from any classical example
opt = tf.train.AdamOptimizer()

## Retrieve all trainable variables you defined in your graph
tvs = tf.trainable_variables()
## Create a list of non-trainable variables with the same shapes as the
## trainable ones, initialized with zeros; these hold the accumulated gradients
accum_vars = [tf.Variable(tf.zeros_like(tv.initialized_value()), trainable=False) for tv in tvs]
## Ops that reset every accumulator to zero (run them before each accumulation cycle)
zero_ops = [tv.assign(tf.zeros_like(tv)) for tv in accum_vars]

## Call the optimizer's compute_gradients function to obtain the list of
## (gradient, variable) pairs; `rmse` is your loss tensor
gvs = opt.compute_gradients(rmse, tvs)

## Add each gradient to the corresponding accumulator (works because
## accum_vars and gvs are in the same order)
accum_ops = [accum_vars[i].assign_add(gv[0]) for i, gv in enumerate(gvs)]

## Define the training step (the part that updates the variable values)
train_step = opt.apply_gradients([(accum_vars[i], gv[1]) for i, gv in enumerate(gvs)])

This first part basically adds new variables and ops to your graph, which will let you:

  1. accumulate the gradients in the variables accum_vars with the ops accum_ops
  2. update the model weights with train_step
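
One caveat the linked answer does not mention (my addition): compute_gradients returns None as the gradient for any trainable variable that does not influence the loss, and assign_add(None) would then raise an error. If that can happen in your graph, a defensive variant of accum_ops might skip those pairs:

## Hypothetical safeguard, not part of the original answer:
## skip (gradient, variable) pairs whose gradient is None
accum_ops = [accum_vars[i].assign_add(gv[0])
             for i, gv in enumerate(gvs) if gv[0] is not None]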

Then, when training with it, you have to follow these steps (still from the answer you linked):

## The while loop for training
while ...:
    # Run zero_ops to reset the gradient accumulators
    sess.run(zero_ops)
    # Accumulate the gradients 'n_minibatches' times in accum_vars by running accum_ops
    for i in range(n_minibatches):
        sess.run(accum_ops, feed_dict={X: Xs[i], y: ys[i]})
    # Run the train_step op to update the weights based on the accumulated gradients
    sess.run(train_step)
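
One detail worth spelling out (my addition, not part of the linked answer): accum_ops sums the gradients, so the update applied by train_step is roughly n_minibatches times larger than a single-minibatch gradient. If you want the step to behave like one large batch of 128 examples, you can average the accumulated sums before applying them. A minimal sketch, reusing accum_vars, gvs, and n_minibatches from above:

## Average the accumulated gradients instead of applying their raw sum,
## so the effective step size matches a single large-batch update
train_step = opt.apply_gradients(
    [(accum_vars[i] / float(n_minibatches), gv[1])
     for i, gv in enumerate(gvs)])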