I am trying to implement the skip-thought model with TensorFlow (the current version is hosted here), and I keep running into ResourceExhaustedError: OOM when allocating tensor with shape ...

Currently I am running it on one GPU of my machine (which has 2 GPUs in total), and the GPU info is:

2017-09-06 11:29:32.657299: I tensorflow/core/common_runtime/gpu/gpu_device.cc:940] Found device 0 with properties: 
name: GeForce GTX 1080 Ti 
major: 6 minor: 1 memoryClockRate (GHz) 1.683 
pciBusID 0000:02:00.0 
Total memory: 10.91GiB 
Free memory: 10.75GiB 

However, I get an OOM as soon as I try to feed data into the model. I tried to debug it as follows:

I run the following snippet right after sess.run(tf.global_variables_initializer()):

# Count the total number of trainable parameters in the graph.
logger.info('Total: {} params'.format(
    np.sum([
        np.prod(v.get_shape().as_list())
        for v in tf.trainable_variables()
    ])))

and got 2017-09-06 11:29:51,333 INFO main main.py:127 - Total: 62968629 params, which is roughly 240 MB if every parameter is tf.float32 (a quick sanity check of that figure follows the variable list below). The output of tf.global_variables is:

[<tf.Variable 'embedding/embedding_matrix:0' shape=(155229, 200) dtype=float32_ref>, 
<tf.Variable 'encoder/rnn/gru_cell/gates/kernel:0' shape=(400, 400) dtype=float32_ref>, 
<tf.Variable 'encoder/rnn/gru_cell/gates/bias:0' shape=(400,) dtype=float32_ref>, 
<tf.Variable 'encoder/rnn/gru_cell/candidate/kernel:0' shape=(400, 200) dtype=float32_ref>, 
<tf.Variable 'encoder/rnn/gru_cell/candidate/bias:0' shape=(200,) dtype=float32_ref>, 
<tf.Variable 'decoder/weights:0' shape=(200, 155229) dtype=float32_ref>, 
<tf.Variable 'decoder/biases:0' shape=(155229,) dtype=float32_ref>, 
<tf.Variable 'decoder/previous_decoder/rnn/gru_cell/gates/kernel:0' shape=(400, 400) dtype=float32_ref>, 
<tf.Variable 'decoder/previous_decoder/rnn/gru_cell/gates/bias:0' shape=(400,) dtype=float32_ref>, 
<tf.Variable 'decoder/previous_decoder/rnn/gru_cell/candidate/kernel:0' shape=(400, 200) dtype=float32_ref>, 
<tf.Variable 'decoder/previous_decoder/rnn/gru_cell/candidate/bias:0' shape=(200,) dtype=float32_ref>, 
<tf.Variable 'decoder/next_decoder/rnn/gru_cell/gates/kernel:0' shape=(400, 400) dtype=float32_ref>, 
<tf.Variable 'decoder/next_decoder/rnn/gru_cell/gates/bias:0' shape=(400,) dtype=float32_ref>, 
<tf.Variable 'decoder/next_decoder/rnn/gru_cell/candidate/kernel:0' shape=(400, 200) dtype=float32_ref>, 
<tf.Variable 'decoder/next_decoder/rnn/gru_cell/candidate/bias:0' shape=(200,) dtype=float32_ref>, 
<tf.Variable 'global_step:0' shape=() dtype=int32_ref>] 
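
As a quick sanity check of the ~240 MB figure (assuming 4 bytes per float32 parameter and using the parameter count from the log above):

# Rough memory footprint of the trainable parameters alone (float32 = 4 bytes each).
num_params = 62968629
print('{:.1f} MB'.format(num_params * 4 / 1024 ** 2))  # ~240.2 MB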

For training I have a data array of shape (164652, 3, 30), i.e. sample_size x 3 x time_steps, where the 3 refers to the previous sentence, the current sentence and the next sentence. This training data is about 57 MB and is stored in a loader. I then wrote a generator function to get the sentences, which looks like:

def iter_batches(self, batch_size=128, time_major=True, shuffle=True):
    num_samples = len(self._sentences)
    if shuffle:
        samples = self._sentences[np.random.permutation(num_samples)]
    else:
        samples = self._sentences

    batch_start = 0
    while batch_start < num_samples:
        batch = samples[batch_start:batch_start + batch_size]

        # Sequence lengths: count the non-padding tokens along the time axis.
        lens = (batch != self._vocab[self._vocab.pad_token]).sum(axis=2)
        # y = previous sentence, x = current sentence, z = next sentence.
        y, x, z = batch[:, 0, :], batch[:, 1, :], batch[:, 2, :]
        if time_major:
            yield (y.T, lens[:, 0]), (x.T, lens[:, 1]), (z.T, lens[:, 2])
        else:
            yield (y, lens[:, 0]), (x, lens[:, 1]), (z, lens[:, 2])
        batch_start += batch_size
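
For reference, a single full batch from this generator has the following shapes (an illustrative check, assuming batch_size=128 and the default time_major=True):

# Peek at one batch to confirm the shapes fed into the model.
(y, y_lens), (x, x_lens), (z, z_lens) = next(loader.iter_batches(batch_size=128))
print(x.shape, x_lens.shape)  # (30, 128) and (128,), i.e. time_steps x batch_size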

The training loop looks like:

for epoch in range(num_epochs):
    batches = loader.iter_batches(batch_size=args.batch_size)
    try:
        (y, y_lens), (x, x_lens), (z, z_lens) = next(batches)
        _, summaries, loss_val = sess.run(
            [train_op, train_summary_op, st.loss],
            feed_dict={
                st.inputs: x,
                st.sequence_length: x_lens,
                st.previous_targets: y,
                st.previous_target_lengths: y_lens,
                st.next_targets: z,
                st.next_target_lengths: z_lens
            })
    except StopIteration:
        ...

Then I get an OOM. If I comment out the whole try body (i.e. feed no data), the script runs just fine.

I have no idea why I get an OOM with such a small amount of data. With nvidia-smi I always get:

Wed Sep 6 12:03:37 2017 
+-----------------------------------------------------------------------------+ 
| NVIDIA-SMI 384.59     Driver Version: 384.59     | 
|-------------------------------+----------------------+----------------------+ 
| GPU Name  Persistence-M| Bus-Id  Disp.A | Volatile Uncorr. ECC | 
| Fan Temp Perf Pwr:Usage/Cap|   Memory-Usage | GPU-Util Compute M. | 
|===============================+======================+======================| 
| 0 GeForce GTX 108... Off | 00000000:02:00.0 Off |     N/A | 
| 0% 44C P2 60W/275W | 10623MiB/11172MiB |  0%  Default | 
+-------------------------------+----------------------+----------------------+ 
| 1 GeForce GTX 108... Off | 00000000:03:00.0 Off |     N/A | 
| 0% 43C P2 62W/275W | 10621MiB/11171MiB |  0%  Default | 
+-------------------------------+----------------------+----------------------+ 

+-----------------------------------------------------------------------------+ 
| Processes:              GPU Memory | 
| GPU  PID Type Process name        Usage  | 
|=============================================================================| 
| 0  32748 C python3          10613MiB | 
| 1  32748 C python3          10611MiB | 
+-----------------------------------------------------------------------------+ 

so I cannot see the actual GPU memory usage of my script, because TensorFlow always grabs all of the memory at the beginning. The real problem here is that I do not know how to debug this.

I have read some posts about OOM on StackOverflow. Most of them occur when a large test set is fed to the model, and the problem can be avoided by feeding the data in small batches. But I do not understand why my 11 GB 1080Ti runs out of memory with such a small combination of data and parameters, since the error shows it was merely trying to allocate a matrix of size [3840 x 155229] (the decoder's output matrix, where 3840 = 30 (time_steps) x 128 (batch_size) and 155229 is the vocab_size).

2017-09-06 12:14:45.787566: W tensorflow/core/common_runtime/bfc_allocator.cc:277] ********************************************************************************************xxxxxxxx 
2017-09-06 12:14:45.787597: W tensorflow/core/framework/op_kernel.cc:1158] Resource exhausted: OOM when allocating tensor with shape[3840,155229] 
2017-09-06 12:14:45.788735: W tensorflow/core/framework/op_kernel.cc:1158] Resource exhausted: OOM when allocating tensor with shape[3840,155229] 
    [[Node: decoder/previous_decoder/Add = Add[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"](decoder/previous_decoder/MatMul, decoder/biases/read)]] 
2017-09-06 12:14:45.790453: I tensorflow/core/common_runtime/gpu/pool_allocator.cc:247] PoolAllocator: After 2857 get requests, put_count=2078 evicted_count=1000 eviction_rate=0.481232 and unsatisfied allocation rate=0.657683 
2017-09-06 12:14:45.790482: I tensorflow/core/common_runtime/gpu/pool_allocator.cc:259] Raising pool_size_limit_ from 100 to 110 
Traceback (most recent call last): 
    File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 1139, in _do_call 
    return fn(*args) 
    File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 1121, in _run_fn 
    status, run_metadata) 
    File "/usr/lib/python3.6/contextlib.py", line 88, in __exit__ 
    next(self.gen) 
    File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/errors_impl.py", line 466, in raise_exception_on_not_ok_status 
    pywrap_tensorflow.TF_GetCode(status)) 
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[3840,155229] 
    [[Node: decoder/previous_decoder/Add = Add[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"](decoder/previous_decoder/MatMul, decoder/biases/read)]] 
    [[Node: GradientDescent/update/_146 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_2166_GradientDescent/update", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]] 

During handling of the above exception, another exception occurred: 

Any help would be appreciated. Thanks in advance.

Answer


Let's split the problem up and tackle the issues one by one:

Regarding TensorFlow allocating all of the memory, you can use the following snippet to let TensorFlow allocate memory only when it is needed. That way it is much easier to see what is actually going on.

# Allocate GPU memory on demand instead of grabbing it all up front.
gpu_options = tf.GPUOptions(allow_growth=True)
session = tf.InteractiveSession(config=tf.ConfigProto(gpu_options=gpu_options))
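
If you are using a plain tf.Session rather than an InteractiveSession, the equivalent configuration (a minimal sketch of the same allow_growth idea) looks like:

config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # grow GPU memory usage on demand
sess = tf.Session(config=config)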

The second thing is the sizes. Since there is no information about the network size, we cannot estimate exactly what is going wrong. However, you can debug the network step by step: build only one layer, get its output, create a session, feed values once, and check how much memory you consume. Iterate this debugging session until you see where you run out of memory.
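
One way to measure that consumption per step is TensorFlow's run-metadata/timeline tracing (a sketch; fetches and feed stand in for whatever ops and data you are testing):

from tensorflow.python.client import timeline

run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
run_metadata = tf.RunMetadata()

# Profile a single step; `fetches` and `feed` are placeholders for your own ops/data.
sess.run(fetches, feed_dict=feed, options=run_options, run_metadata=run_metadata)

# Write a Chrome trace (open via chrome://tracing) that includes per-op memory usage.
tl = timeline.Timeline(run_metadata.step_stats)
with open('timeline.json', 'w') as f:
    f.write(tl.generate_chrome_trace_format(show_memory=True))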

Please note that a 3840 x 155229 output is a really, really big output. It means ~600M neurons, roughly 2.22 GB for that one layer alone. If you have any layers of a similar size, they add up and fill your GPU memory very quickly.

Also, this is only for the forward direction; if you are using this layer for training, the backpropagation and the layers added by the optimizer will multiply this size by 2. So for training, you consume ~5 GB just for the output layer.
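
For concreteness, a rough back-of-the-envelope check of those numbers (assuming float32, i.e. 4 bytes per element):

elements = 3840 * 155229               # ~596M elements in one [3840, 155229] tensor
forward_gb = elements * 4 / 1024 ** 3  # ~2.22 GB for the forward activation alone
training_gb = forward_gb * 2           # ~4.4 GB once a gradient of the same shape is kept as well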

I suggest you revise your network and try to reduce the batch size / parameter count so that the model fits on the GPU.


Thanks! I will try 'gpu_options' soon. Regarding the network size, doesn't the snippet 'np.sum([np.prod(v.get_shape().as_list()) for v in tf.trainable_variables()])' give the number of parameters of the network, namely 62968629? Adding the gradients, that makes '2 * 62968629 * 4 / 1024 / 1024 / 1024 -> 0.47G' in total. Also, my encoder has only '1' layer, and each of my '2' decoders has only '1' layer. The '3840 x 155229' is the decoder output, not parameters, so I think it does not get doubled during backpropagation? – Edityouprofile


That calculation is correct for inference; I assumed you had a fully connected layer there, my bad. For training, however, you need to look at tf.global_variables() rather than trainable_variables(), because the optimizer and any other additions you implement introduce extra variables that are not immediately visible. –


Thanks again. I printed the results of 'tf.global_variables()' and 'tf.trainable_variables()' and updated the question. In my case the latter only lacks the 'global_step' tensor compared to the former. – Edityouprofile
