I am currently trying to modify my TensorFlow classifier, which labels a sequence of words as positive or negative, so that it can handle longer sequences without retraining. My model is an RNN with a maximum sequence length of 210. One input is one word (300-dim); I vectorize the words with Google's word2vec, so I can feed in sequences of up to 210 words. My question is: how can I change the maximum sequence length to, for example, 3000, in order to classify movie reviews? How do I change the maximum sequence length in a TensorFlow RNN model?
My working model with a fixed maximum sequence length of 210 (tf_version: 1.1.0):
n_chunks = 210    # maximum sequence length
chunk_size = 300  # word2vec embedding dimension

x = tf.placeholder("float", [None, n_chunks, chunk_size])
y = tf.placeholder("float", None)
seq_length = tf.placeholder("int64", None)

with tf.variable_scope("rnn1"):
    lstm_cell = tf.contrib.rnn.LSTMCell(rnn_size,
                                        state_is_tuple=True)
    lstm_cell = tf.contrib.rnn.DropoutWrapper(lstm_cell,
                                              input_keep_prob=0.8)
    outputs, _ = tf.nn.dynamic_rnn(lstm_cell, x, dtype=tf.float32,
                                   sequence_length=seq_length)

fc = tf.contrib.layers.fully_connected(outputs, 1000,
                                       activation_fn=tf.nn.relu)
output = tf.contrib.layers.flatten(fc)
#*1
logits = tf.contrib.layers.fully_connected(output, n_classes,
                                           activation_fn=None)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
    logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer(learning_rate=0.01).minimize(cost)
...
# train
# train_x padded to fit (batch_size * n_chunks * chunk_size)
# train_seq_lengths holds the true (unpadded) length of each example
sess.run([optimizer, cost], feed_dict={x: train_x, y: train_y,
                                       seq_length: train_seq_lengths})
# predict:
...
pred = tf.nn.softmax(logits)
pred = sess.run(pred, feed_dict={x: word_vecs, seq_length: sq_l})
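The padding mentioned in the training comment (train_x padded to batch_size * n_chunks * chunk_size) can be sketched in plain NumPy; `pad_batch` and `seqs` are hypothetical names, not part of the model above:

```python
import numpy as np

def pad_batch(seqs, n_chunks=210, chunk_size=300):
    """Zero-pad variable-length word-vector sequences to a fixed
    (batch_size, n_chunks, chunk_size) array and return the true
    lengths, which would be fed into the seq_length placeholder."""
    batch = np.zeros((len(seqs), n_chunks, chunk_size), dtype=np.float32)
    lengths = np.zeros(len(seqs), dtype=np.int64)
    for i, s in enumerate(seqs):
        batch[i, :len(s)] = s
        lengths[i] = len(s)
    return batch, lengths

# two toy "reviews" of length 2 and 5
seqs = [np.ones((2, 300)), np.ones((5, 300))]
train_x, train_seq_lengths = pad_batch(seqs)
print(train_x.shape)      # (2, 210, 300)
print(train_seq_lengths)  # [2 5]
```

With sequence_length passed to dynamic_rnn, the RNN stops updating state past each true length, so the zero padding does not affect the recurrent computation itself.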
What I have already tried:
1. Replacing n_chunks with None and simply feeding the data:
x = tf.placeholder(tf.float32, [None, None, 300])
#model fails to build
#ValueError: The last dimension of the inputs to `Dense` should be defined.
#Found `None`.
# at *1
...
#all entries in word_vecs still have the same length, for example
#3000 (batch_size * 3000 (!= n_chunks) * 300)
pred = tf.nn.softmax(logits)
pred = sess.run(pred,feed_dict={x:word_vecs, seq_length:sq_l})
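The ValueError at *1 is expected with this architecture: when the time dimension is None, flatten produces a tensor whose last dimension (time steps * 1000) is unknown at graph-build time, so the final fully_connected layer cannot allocate its weight matrix. One common length-independent readout (a sketch of an alternative, not the model above) is to select only the last relevant timestep per sequence instead of flattening everything; in NumPy terms:

```python
import numpy as np

def last_relevant(outputs, lengths):
    """Select outputs[i, lengths[i]-1, :] for each sequence i.
    This mirrors what a gather over dynamic_rnn outputs would do,
    yielding a fixed-size (batch, rnn_size) readout regardless of
    how long the padded time dimension is."""
    batch_index = np.arange(outputs.shape[0])
    return outputs[batch_index, lengths - 1, :]

rnn_size = 4
outputs = np.arange(2 * 5 * rnn_size, dtype=float).reshape(2, 5, rnn_size)
lengths = np.array([2, 5])      # true sequence lengths
readout = last_relevant(outputs, lengths)
print(readout.shape)            # (2, 4)
```

Because the readout size depends only on rnn_size, the dense layers after it no longer depend on the maximum sequence length.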
2. Changing x and then restoring the old model:
x = tf.placeholder(tf.float32, [None, n_chunks*10, chunk_size])
...
saver = tf.train.Saver(tf.all_variables(), reshape=True)
saver.restore(sess,"...")
#fails as well:
#InvalidArgumentError (see above for traceback): Input to reshape is a
#tensor with 420000 values, but the requested shape has 840000
#[[Node: save/Reshape_5 = Reshape[T=DT_FLOAT, Tshape=DT_INT32,
#_device="/job:localhost/replica:0/task:0/cpu:0"](save/RestoreV2_5,
#save/Reshape_5/shape)]]
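The InvalidArgumentError is consistent with the architecture: after flatten, the weight matrix of the logits layer has n_chunks * 1000 * n_classes entries, so its total element count scales with n_chunks, and Saver(reshape=True) can only reshape a tensor, never change how many values it holds. Assuming n_classes = 2 and the 1000-unit fc layer from the model above, the arithmetic reproduces the 420000-vs-840000 mismatch (the factor of two suggests the restored graph ended up with 420 time steps):

```python
# Parameter-count sketch, assuming n_classes = 2 and fc_units = 1000
# as in the model shown earlier in the question.
n_classes = 2
fc_units = 1000

def logits_weight_count(n_chunks):
    # flatten(fc) has n_chunks * fc_units features per example,
    # so the logits layer needs that many rows of weights
    return n_chunks * fc_units * n_classes

print(logits_weight_count(210))  # 420000 -> values stored in the checkpoint
print(logits_weight_count(420))  # 840000 -> shape requested on restore
```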
# run prediction
If possible, could you provide a working example, or explain why this is not possible?
Thanks for your answer. I did not set n_chunks to 3000 because it was not needed for training, since the maximum seq length was 210. If I set n_chunks to 3000, I would have to pad all inputs with zero vectors to make them fit, which would make training very expensive, and if I ever got a sequence longer than n_chunks I would have to start over. In my second attempt I did change n_chunks everywhere else it appears; I just forgot to mention it. – Tobi