Question about Luong Attention Implementation #130
Comments
@kyquang97 In Luong attention, the previous context vector is concatenated with the previous output vector and used as the input to the RNN. The RNN output is then passed to the attention layer to calculate the context vector for the current time step. That current context vector is combined with the current RNN output to produce the output for this time step. Please note that the current context vector is also passed on to the next time step.
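For anyone skimming, here is a minimal runnable sketch of the step described above, assuming a single-layer GRU decoder with dot-product scoring and "input feeding" (the previous context concatenated into the RNN input). All names, sizes, and layers are illustrative, not the tutorial's actual code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative sizes
hidden_size, vocab_size, batch, src_len = 256, 10000, 4, 7

embedding = nn.Embedding(vocab_size, hidden_size)
gru = nn.GRU(hidden_size * 2, hidden_size)         # input = [embedded ; prev_context]
concat = nn.Linear(hidden_size * 2, hidden_size)   # combines rnn_output and context
out = nn.Linear(hidden_size, vocab_size)

encoder_outputs = torch.randn(src_len, batch, hidden_size)  # (S, B, H)
prev_context = torch.zeros(1, batch, hidden_size)           # context from previous step
prev_hidden = torch.zeros(1, batch, hidden_size)
prev_token = torch.zeros(batch, dtype=torch.long)           # previous output token

# 1. Concatenate the previous context with the previous output's embedding.
embedded = embedding(prev_token).unsqueeze(0)               # (1, B, H)
rnn_input = torch.cat([embedded, prev_context], dim=2)      # (1, B, 2H)

# 2. Run one RNN step; rnn_output is what gets scored against the encoder outputs.
rnn_output, hidden = gru(rnn_input, prev_hidden)            # both (1, B, H)

# 3. Dot-product ("dot") attention over the encoder outputs.
scores = torch.sum(rnn_output * encoder_outputs, dim=2)     # (S, B)
attn_weights = F.softmax(scores, dim=0).permute(1, 0).unsqueeze(1)  # (B, 1, S)
context = attn_weights.bmm(encoder_outputs.permute(1, 0, 2))        # (B, 1, H)
context = context.permute(1, 0, 2)                                  # (1, B, H)

# 4. Combine the current context with the current RNN output for this step's output;
#    `context` is then carried forward to the next time step.
combined = torch.tanh(concat(torch.cat([rnn_output, context], dim=2)))
output = out(combined.squeeze(0))                            # (B, vocab_size)
```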
@beebrain Please correct me if I'm wrong, but you are using the LSTM layer instead of the LSTM cell, so each forward pass processes a different sample, not a different time step of a single sample. You have no control over the individual time steps here; what you get out of the RNN in this configuration is a translation/sequence that has already gone through all time steps!
I think he just meant that in this implementation, the …
I think the rnn_output and hidden outputs of self.gru have the same values, so you can use either hidden or rnn_output.
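That equality holds when the decoder is a single-layer, unidirectional GRU run one time step at a time, since the output at the only time step is exactly the final hidden state of the only layer. A quick check, with illustrative sizes:

```python
import torch
import torch.nn as nn

gru = nn.GRU(input_size=256, hidden_size=256, num_layers=1)
x = torch.randn(1, 4, 256)         # one time step, batch of 4
h0 = torch.zeros(1, 4, 256)

rnn_output, hidden = gru(x, h0)    # rnn_output: (1, 4, 256), hidden: (1, 4, 256)
print(torch.equal(rnn_output, hidden))  # True for this single-layer, single-step case
```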
That reminds me! I was also confused at first by the use of outputs vs. hidden states in some attention implementations, and they do in fact share the same values. By the way, what about the LSTM? From the PyTorch docs, the LSTM returns hidden states as well as cell states. Are cell states used in attention, or can I treat the outputs and the last hidden states as interchangeable?
In my opinion, you can use the hidden state output just like with the GRU.
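For what it's worth, a quick sketch of the same single-step check with an LSTM (illustrative sizes): the hidden state h plays the same role as the GRU's hidden, while the cell state c is carried along internally rather than scored for attention.

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=256, hidden_size=256, num_layers=1)
x = torch.randn(1, 4, 256)                        # one time step, batch of 4
h0, c0 = torch.zeros(1, 4, 256), torch.zeros(1, 4, 256)

rnn_output, (h, c) = lstm(x, (h0, c0))
print(torch.equal(rnn_output, h))                 # True: output matches the hidden state
# The cell state c is typically not scored for attention; score against
# rnn_output (or h), and just pass (h, c) on to the next time step.
```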
I am also confused about why we can calculate all the attention scores for the source sentence using the previous hidden state and current input embedding. |
Hi @spro, I've read your implementation of Luong attention in the PyTorch seq2seq translation tutorial. In the context-calculation step, you use rnn_output as the input when calculating attn_weights, but I think we should use hidden at the current decoder time step instead. Please check it, and could you provide an explanation if I'm wrong?