Multilayer encoder output state to multilayer decoder in a Seq2Seq model (TensorFlow 1.0)
My question is: what shape does the encoder_state argument of tf.contrib.seq2seq.attention_decoder_fn_train expect?
Can it take the state output of a multilayer encoder?
Context:
I want to build a multilayer bidirectional attention-based seq2seq model in TensorFlow 1.0.
My encoder:
cell = tf.contrib.rnn.LSTMCell(n)
cell = tf.contrib.rnn.MultiRNNCell([cell] * 4)
((encoder_fw_outputs, encoder_bw_outputs),
 (encoder_fw_state, encoder_bw_state)) = tf.nn.bidirectional_dynamic_rnn(cell_fw=cell, cell_bw=cell, ...)
Now, the multilayered bidirectional encoder returns cell states (c) and hidden states (h) for each layer, and for both the forward and backward passes. I concatenate the forward and backward states and pass the result as encoder_state:
self.encoder_state = tf.concat((encoder_fw_state, encoder_bw_state), -1)
and I pass this to my decoder:
decoder_fn_train = seq2seq.simple_decoder_fn_train(encoder_state=self.encoder_state)
(self.decoder_outputs_train,
self.decoder_state_train,
self.decoder_context_state_train) = seq2seq.dynamic_rnn_decoder(cell=decoder_cell,...)
But this gives the following error:
ValueError: The two structures don't have the same number of elements. First structure: Tensor("BidirectionalEncoder/transpose:0", shape=(?, 2, 2, 20), dtype=float32), second structure: (LSTMStateTuple(c=20, h=20), LSTMStateTuple(c=20, h=20)).
My decoder_cell is also multilayered.
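To illustrate what I think is going wrong: a multilayer cell's state is a tuple of one LSTMStateTuple per layer, and tf.concat over that whole nested structure collapses it into a single tensor, losing the per-layer structure the decoder expects. Below is a minimal sketch of this (plain NumPy with a namedtuple standing in for LSTMStateTuple, not actual TensorFlow; the layer/batch/unit sizes are made up for illustration), including a per-layer concatenation that preserves the structure:

```python
from collections import namedtuple
import numpy as np

# Stand-in for tf.contrib.rnn.LSTMStateTuple
LSTMStateTuple = namedtuple("LSTMStateTuple", ["c", "h"])

n, batch, layers = 20, 3, 2  # hypothetical sizes

# What a multilayer MultiRNNCell returns for one direction:
# a tuple with one LSTMStateTuple per layer.
fw_state = tuple(LSTMStateTuple(c=np.zeros((batch, n)), h=np.zeros((batch, n)))
                 for _ in range(layers))
bw_state = tuple(LSTMStateTuple(c=np.zeros((batch, n)), h=np.zeros((batch, n)))
                 for _ in range(layers))

# Concatenating the whole nested tuples at once (what my tf.concat call does)
# yields ONE flat array -- the per-layer LSTMStateTuple structure is gone.
naive = np.concatenate((np.asarray(fw_state), np.asarray(bw_state)), axis=-1)
print(naive.shape)  # (2, 2, 3, 40): a single tensor, not a tuple of state tuples

# Preserving the structure instead: concatenate fw/bw per layer, per c and h.
encoder_state = tuple(
    LSTMStateTuple(c=np.concatenate((fw.c, bw.c), axis=-1),
                   h=np.concatenate((fw.h, bw.h), axis=-1))
    for fw, bw in zip(fw_state, bw_state))
print(len(encoder_state), encoder_state[0].c.shape)  # 2 (3, 40)
```

If this structural reading is right, the decoder cell would also need 2n units per layer to accept the concatenated states, but I am not sure this is what simple_decoder_fn_train expects.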