I am having trouble sharing variables with the Python API in TensorFlow.

I have read the official documentation (https://www.tensorflow.org/versions/r0.10/how_tos/variable_scope/index.html), but I still cannot figure out what is going on.

I wrote a minimal working example below to illustrate the problem.

Broadly speaking, I want the code below to do the following:

1) Initialize a variable "fc1/w" immediately after creating the session,

2) Create a NumPy array "x_npy" and feed it into the placeholder "x",

3) Run an op "y", which should recognize that the variable "fc1/w" has already been created and use that variable's value (rather than initializing a new one) to compute its output.

4) Note that I set the flag reuse=True in the variable scope of the function "linear", but that does not seem to help, since I keep getting the error:

ValueError: Variable fc1/w does not exist, or was not created with tf.get_variable(). Did you mean to set reuse=None in VarScope? 

This is quite confusing, because if I remove the reuse=True flag, TensorFlow instead tells me that the variable does exist (the create-then-reuse contract behind these two errors is sketched just after this list):

ValueError: Variable fc1/w already exists, disallowed. Did you mean to set reuse=True in VarScope? 

5) Note that I am working with a larger codebase, and I would really like to use the shared-variable feature rather than work around it; avoiding shared variables might fix the specific example code below, but would probably not generalize well.

6) Finally, note that I really want to keep graph construction separate from evaluation. In particular, I do not want to use "tf.InteractiveSession()" or create "y" inside the session scope, i.e. under "with tf.Session() as sess:".
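For reference, the behavior behind the two errors above can be reproduced in isolation. Here is a minimal sketch of the create-then-reuse contract of tf.get_variable (the scope name "demo" and the shapes are arbitrary, not taken from the code below): without reuse it creates the variable and raises if it already exists; with reuse=True it fetches the existing variable and raises if it does not exist.

import tensorflow as tf

# First call: reuse is off, so get_variable CREATES "demo/w".
with tf.variable_scope("demo"):
    w1 = tf.get_variable("w", [3, 3], tf.float32,
                         tf.random_normal_initializer())

# Second call: reuse=True, so get_variable FETCHES the existing "demo/w".
with tf.variable_scope("demo", reuse=True):
    w2 = tf.get_variable("w", [3, 3])

# Both handles refer to the same underlying variable.
assert w1 is w2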

This is my first post on Stack Overflow and I am fairly new to TensorFlow, so please accept my apologies if the question is not entirely clear. In any case, I am happy to provide more details or to clarify any aspect further.

Thanks in advance.

import tensorflow as tf 
import numpy as np 


def linear(x_, output_size, non_linearity, name): 
    with tf.variable_scope(name, reuse=True): 
        input_size = x_.get_shape().as_list()[1] 
        # If it doesn't exist, initialize "name/w" randomly: 
        w = tf.get_variable("w", [input_size, output_size], tf.float32, 
                            tf.random_normal_initializer()) 
        z = tf.matmul(x_, w) 
        return non_linearity(z) 


def init_w(name, w_initializer): 
    with tf.variable_scope(name): 
        w = tf.get_variable("w", initializer=w_initializer) 
        return tf.initialize_variables([w]) 


batch_size = 1 
fc1_input_size = 7 
fc1_output_size = 5 

# Initialize with zeros 
fc1_w_initializer = tf.zeros([fc1_input_size, fc1_output_size]) 

# Placeholder for a batch of inputs. 
x = tf.placeholder(tf.float32, [None, fc1_input_size]) 

# Build the graph: y = softmax(matmul(x, fc1/w)). 
y = linear(x, fc1_output_size, tf.nn.softmax, "fc1") 

with tf.Session() as sess: 

    # Initialize "fc1/w" with zeros. 
    sess.run(init_w("fc1", fc1_w_initializer)) 

    # Create npy array to feed into placeholder x 
    x_npy = np.arange(batch_size * fc1_input_size, dtype=np.float32).reshape((batch_size, fc1_input_size)) 

    # Run y, and print result. 
    print(sess.run(y, feed_dict={x: x_npy})) 

Answer

It looks like the call to tf.variable_scope() does find the variable scope/w, even when you run it in an empty session. I cleaned up your code to demonstrate:

import tensorflow as tf 
import numpy as np 


def shared_layer(x, output_size, non_linearity, name): 
    with tf.variable_scope(name): 
        input_size = x.get_shape().as_list()[1] 
        # First call creates "name/w" (randomly initialized): 
        w = tf.get_variable("w", [input_size, output_size], tf.float32, 
                            tf.random_normal_initializer()) 
        z = tf.matmul(x, w) 
        return non_linearity(z) 

def shared_init(sess, scope, var, value): 
    # reuse=True: fetch the existing variable, then overwrite it with `value`. 
    # (Running tf.initialize_variables([w]) here would re-run the variable's 
    # creation-time random initializer rather than apply `value`.) 
    with tf.variable_scope(scope, reuse=True): 
        w = tf.get_variable(var) 
        sess.run(w.assign(value)) 

layer_input_size = 2 
layer_output_size = 2 

w_init = tf.zeros([layer_input_size, layer_output_size]) 

x = tf.placeholder(tf.float32, [None, layer_input_size]) 
y = shared_layer(x, layer_output_size, tf.nn.softmax, "scope") 

with tf.Session() as sess: 
    shared_init(sess, "scope", "w", w_init) 
    with tf.variable_scope("scope", reuse=True): 
        print(sess.run(tf.get_variable("w", [2, 2])))
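Applied back to the structure of the original question, the same idea looks roughly like this (a sketch under the same assumptions as the cleaned-up code above, not a tested drop-in): linear() creates "fc1/w" at graph-construction time without a reuse flag, and init_w() opens the scope with reuse=True only to fetch the existing variable and overwrite it.

import tensorflow as tf
import numpy as np

def linear(x_, output_size, non_linearity, name):
    # No reuse flag: this call CREATES "name/w" at graph-build time.
    with tf.variable_scope(name):
        input_size = x_.get_shape().as_list()[1]
        w = tf.get_variable("w", [input_size, output_size], tf.float32,
                            tf.random_normal_initializer())
        return non_linearity(tf.matmul(x_, w))

def init_w(name, value):
    # reuse=True: "name/w" already exists, so only fetch it and build
    # an op that overwrites it with the given value.
    with tf.variable_scope(name, reuse=True):
        w = tf.get_variable("w")
        return w.assign(value)

x = tf.placeholder(tf.float32, [None, 7])
y = linear(x, 5, tf.nn.softmax, "fc1")

with tf.Session() as sess:
    # Set "fc1/w" to zeros, then evaluate y on a toy batch.
    sess.run(init_w("fc1", tf.zeros([7, 5])))
    x_npy = np.arange(7, dtype=np.float32).reshape((1, 7))
    print(sess.run(y, feed_dict={x: x_npy}))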