I have written my own code based on this wonderful tutorial, but I cannot get correct results when using attention with beam search. The AttentionModel's `_build_decoder_cell` function creates a separate decoder cell and attention wrapper for inference mode, under assumptions that I believe are incorrect, and I cannot find a way around this when implementing attention-based beam search in TensorFlow.
with tf.name_scope("Decoder"):
    mem_units = 2 * dim
    dec_cell = tf.contrib.rnn.BasicLSTMCell(2 * dim)
    beam_cel = tf.contrib.rnn.BasicLSTMCell(2 * dim)  # separate cell for inference
    beam_width = 3
    out_layer = Dense(output_vocab_size)

    with tf.name_scope("Training"):
        attn_mech = tf.contrib.seq2seq.BahdanauAttention(num_units=mem_units, memory=enc_rnn_out, normalize=True)
        attn_cell = tf.contrib.seq2seq.AttentionWrapper(cell=dec_cell, attention_mechanism=attn_mech)

        batch_size = tf.shape(enc_rnn_out)[0]
        initial_state = attn_cell.zero_state(batch_size=batch_size, dtype=tf.float32)
        initial_state = initial_state.clone(cell_state=enc_rnn_state)

        helper = tf.contrib.seq2seq.TrainingHelper(inputs=emb_x_y, sequence_length=seq_len)
        decoder = tf.contrib.seq2seq.BasicDecoder(cell=attn_cell, helper=helper, initial_state=initial_state, output_layer=out_layer)
        outputs, final_state, final_sequence_lengths = tf.contrib.seq2seq.dynamic_decode(decoder=decoder, impute_finished=True)

        training_logits = tf.identity(outputs.rnn_output)
        training_pred = tf.identity(outputs.sample_id)

    with tf.name_scope("Inference"):
        # tile encoder outputs, lengths and state so each batch entry appears beam_width times
        enc_rnn_out_beam = tf.contrib.seq2seq.tile_batch(enc_rnn_out, beam_width)
        seq_len_beam = tf.contrib.seq2seq.tile_batch(seq_len, beam_width)
        enc_rnn_state_beam = tf.contrib.seq2seq.tile_batch(enc_rnn_state, beam_width)

        batch_size_beam = tf.shape(enc_rnn_out_beam)[0]  # now beam_width times the batch size
        # start_tokens must match the ORIGINAL batch size, so divide by beam_width
        start_tokens = tf.tile(tf.constant([27], dtype=tf.int32), [batch_size_beam // beam_width])
        end_token = 0

        attn_mech_beam = tf.contrib.seq2seq.BahdanauAttention(num_units=mem_units, memory=enc_rnn_out_beam, normalize=True)
        cell_beam = tf.contrib.seq2seq.AttentionWrapper(cell=beam_cel, attention_mechanism=attn_mech_beam, attention_layer_size=mem_units)
        initial_state_beam = cell_beam.zero_state(batch_size=batch_size_beam, dtype=tf.float32).clone(cell_state=enc_rnn_state_beam)

        my_decoder = tf.contrib.seq2seq.BeamSearchDecoder(cell=cell_beam,
                                                          embedding=emb_out,
                                                          start_tokens=start_tokens,
                                                          end_token=end_token,
                                                          initial_state=initial_state_beam,
                                                          beam_width=beam_width,
                                                          output_layer=out_layer)
        beam_output, t1, t2 = tf.contrib.seq2seq.dynamic_decode(my_decoder, maximum_iterations=maxlen)
        beam_logits = tf.no_op()
        beam_sample_id = beam_output.predicted_ids
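For reference, `tile_batch` repeats each batch entry `beam_width` times consecutively along the batch axis, which is why dividing `batch_size_beam` by `beam_width` recovers the original batch size for `start_tokens`. A minimal NumPy sketch of that semantics (the array values here are made up for illustration, not taken from the model above):

```python
import numpy as np

def tile_batch(t, multiplier):
    """Mimic tf.contrib.seq2seq.tile_batch on a NumPy array:
    row i becomes `multiplier` consecutive copies of itself."""
    return np.repeat(t, multiplier, axis=0)

beam_width = 3
enc_out = np.array([[1.0, 2.0],    # batch entry 0
                    [3.0, 4.0]])   # batch entry 1

tiled = tile_batch(enc_out, beam_width)
print(tiled.shape)   # (6, 2): beam_width * original batch size
print(tiled[:3])     # three copies of entry 0, then three of entry 1

# start_tokens must be sized for the ORIGINAL batch, hence the division
batch_size_beam = tiled.shape[0]
start_tokens = np.tile(np.array([27], dtype=np.int32), batch_size_beam // beam_width)
print(start_tokens)  # [27 27]
```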
When I evaluate `beam_sample_id` after training finishes, I do not get correct results.
My guess is that we should be using the same attention wrapper for both modes, but that does not seem possible, because we have to use `tile_batch` for beam search.
Any insights/suggestions would be much appreciated.
I have also opened this as an issue on their main repository: Issue-93.
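As background on what `beam_width = 3` actually selects, one decoding step of beam search keeps the 3 highest-scoring continuations across all current hypotheses. A toy NumPy sketch of that single step, with made-up probabilities unrelated to the model above:

```python
import numpy as np

beam_width = 3
vocab_size = 5

# current cumulative log-probabilities of the 3 live hypotheses (made up)
beam_scores = np.log(np.array([0.5, 0.3, 0.2]))

# made-up per-hypothesis next-token distributions, shape (beam_width, vocab_size)
step_logprobs = np.log(np.array([
    [0.4, 0.3, 0.1, 0.1, 0.1],
    [0.6, 0.1, 0.1, 0.1, 0.1],
    [0.2, 0.2, 0.2, 0.2, 0.2],
]))

# score every (hypothesis, token) continuation, then keep the top beam_width
total = beam_scores[:, None] + step_logprobs   # shape (3, 5)
flat = total.reshape(-1)                       # shape (15,)
top = np.argsort(flat)[::-1][:beam_width]      # indices of the 3 best continuations
parent_beam = top // vocab_size                # which hypothesis each one extends
next_token = top % vocab_size                  # which token extends it

print(parent_beam)  # [0 1 0]
print(next_token)   # [0 0 1]
```

This is why the encoder memory must be tiled: each of the `beam_width` hypotheses per batch entry attends over its own copy of the encoder outputs.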
Yes, I was not able to use the weights learned during training with my approach. `tf.name_scope()` has no "reuse" argument in version 1.3; you have to use `tf.variable_scope()`. I got around this by creating two separate graphs for training and inference, as @dnnavn pointed out in the [github issue](https://github.com/tensorflow/nmt/issues/93); he claims it can only be done with separate graphs, and I still need to verify that. In the meantime, if you have tried it successfully, please comment. Thanks –
Yes, `tf.variable_scope` instead of `tf.name_scope` –
Hi, same here. I see you have marked this answer as correct; did you get a chance to test it on your data? –