
This is the TensorFlow ValueError I'm getting: GraphDef cannot be larger than 2GB

Traceback (most recent call last):
  File "fully_connected_feed.py", line 387, in <module>
    tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
  File "/home/-/.local/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 44, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "fully_connected_feed.py", line 289, in main
    run_training()
  File "fully_connected_feed.py", line 256, in run_training
    saver.save(sess, checkpoint_file, global_step=step)
  File "/home/-/.local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 1386, in save
    self.export_meta_graph(meta_graph_filename)
  File "/home/-/.local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 1414, in export_meta_graph
    graph_def=ops.get_default_graph().as_graph_def(add_shapes=True),
  File "/home/-/.local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2257, in as_graph_def
    result, _ = self._as_graph_def(from_version, add_shapes)
  File "/home/-/.local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2220, in _as_graph_def
    raise ValueError("GraphDef cannot be larger than 2GB.")
ValueError: GraphDef cannot be larger than 2GB.

I believe the error is the result of this code:

weights = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope="hidden1")[0]
weights = tf.scatter_nd_update(weights, indices, updates)
weights = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope="hidden2")[0]
weights = tf.scatter_nd_update(weights, indices, updates)

I don't understand why my model is getting so large (15k steps and 240MB). Any ideas? Thanks!
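
One way I could check whether the graph itself keeps growing (a rough sketch; `num_steps` is just a placeholder for my real training loop) would be to print the op count at every step:

import tensorflow as tf

# Rough check: if this count keeps climbing from step to step, new nodes
# (for example from repeated tf.scatter_nd_update calls) are being added
# to the graph on every iteration.
num_steps = 1000  # placeholder for the real number of training steps

with tf.Session() as sess:
    for step in xrange(num_steps):
        # ... the usual training step would run here ...
        print(step, len(tf.get_default_graph().get_operations()))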

Answer

It's hard to say what's going on without seeing the code, but in general the size of a TensorFlow model does not grow with the number of steps; it should stay fixed.

If the model size is growing with the number of steps, that suggests new nodes are being added to the computation graph on every step. For example, something like this:

import tensorflow as tf

with tf.Session() as sess:
    for i in xrange(1000):
        sess.run(tf.add(1, 2))
        # or perhaps sess.run(tf.scatter_nd_update(...)) in your case

will create 3000 nodes in the graph (one for the add, one for the constant "1", and one for the constant "2" on every iteration). Instead, you want to define the computation graph once and run it repeatedly, something like:

import tensorflow as tf

x = tf.add(1, 2)
# or perhaps x = tf.scatter_nd_update(...) in your case
with tf.Session() as sess:
    for i in xrange(1000):
        sess.run(x)

This keeps the graph fixed at 3 nodes across all 1000 (and more) iterations. Hope that helps.

Thanks! I added your point to my model, but I nested 'tf.scatter_nd_update(...)' because I need to update my weights at every step (manual convolution). Maybe that's the wrong way to do it? –

Maybe I'm misunderstanding, but doesn't the same thing apply? Instead of calling 'tf.scatter_nd_update' inside the loop, save the op it returns and run that op in the loop. From the docs for [tf.scatter_nd_update](https://www.tensorflow.org/api_docs/python/tf/scatter_nd_update), it applies the updates and returns the same value as its first argument only for convenience. So you could do: update = tf.scatter_nd_update(weights, indices, updates), then for i in xrange(num_steps): sess.run(update) – ash
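
A minimal sketch of that pattern, assuming `weights`, `indices`, `updates`, and `num_steps` are already defined as in the question:

import tensorflow as tf

# Build the update op once, outside the training loop, so the graph stays fixed.
# `indices`, `updates`, and `num_steps` are assumed to be defined elsewhere,
# as in the question.
weights = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope="hidden1")[0]
update = tf.scatter_nd_update(weights, indices, updates)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in xrange(num_steps):
        # Re-running the same op re-applies the update without adding new nodes.
        sess.run(update)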
