In TensorFlow 1.3 seq2seq, how to extract the attention mask?
I am using contrib.seq2seq in TensorFlow r1.3. How can I fetch the attention matrix from the APIs below? It seems the attention matrix (the alignments) is a return value of AttentionWrapper.call(); how can I get at that value?
    from tensorflow.python.layers.core import Dense

    def single_cell():
        cell = tf.contrib.rnn.BasicLSTMCell(rnn_size)
        return tf.contrib.rnn.DropoutWrapper(cell, input_keep_prob=keep_prob)

    dec_cell = tf.contrib.rnn.MultiRNNCell([single_cell() for _ in range(num_layers)])
    attn_mech = tf.contrib.seq2seq.BahdanauAttention(rnn_size, enc_output, text_length,
                                                     normalize=False,
                                                     name='BahdanauAttention')
    dec_cell = tf.contrib.seq2seq.AttentionWrapper(dec_cell, attn_mech, rnn_size)
    initial_state = dec_cell.zero_state(batch_size, dtype=tf.float32).clone(cell_state=enc_state)
    output_layer = Dense(vocab_size,
                         kernel_initializer=tf.truncated_normal_initializer(mean=0.0, stddev=0.1))
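For reference, a minimal sketch of one way to expose the alignments in TF r1.3, assuming the decoder setup above: AttentionWrapper takes an alignment_history flag, and when it is True the final AttentionWrapperState returned by dynamic_decode carries a TensorArray of per-step alignments that can be stacked into a single tensor. The decoder construction here is only sketched; any BasicDecoder built on dec_cell would do.

    import tensorflow as tf

    # Same wrapper as above, but record the alignments at every decode step.
    dec_cell = tf.contrib.seq2seq.AttentionWrapper(dec_cell, attn_mech, rnn_size,
                                                   alignment_history=True)

    # ... build a decoder (e.g. tf.contrib.seq2seq.BasicDecoder) on dec_cell ...
    outputs, final_state, _ = tf.contrib.seq2seq.dynamic_decode(decoder)

    # final_state.alignment_history is a TensorArray; stack() turns it into a
    # [max_decoder_time, batch_size, max_encoder_time] tensor of attention weights.
    attention_matrix = final_state.alignment_history.stack()

Fetching attention_matrix in a session run then yields the full attention mask, one row of encoder weights per decoder step.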
I still have the same problem; I want to do text summarization.