Masked Multi-Head Attention
Learn about the masked multi-head attention mechanism and how it works.
In our English-to-French translation task, say our training dataset looks like the one shown here:
A sample training set:

| Source sentence | Target sentence |
| --- | --- |
| I am good | Je vais bien |
| Good morning | Bonjour |
| Thank you very much | Merci beaucoup |
The preceding dataset contains pairs of source and target sentences. We saw earlier how the decoder predicts the target sentence word by word, one time step at a time; note that this happens only during testing.
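The test-time, word-by-word prediction described above can be sketched as a simple greedy decoding loop. This is a minimal illustration, not an implementation from the text: the `decoder` callable and the token IDs are assumptions introduced here for clarity.

```python
def greedy_decode(decoder, source_encoding, sos_id, eos_id, max_len=50):
    """Predict the target sentence one token per time step.

    `decoder` is an assumed callable that, given the encoded source
    sentence and the tokens generated so far, returns the ID of the
    most likely next token.
    """
    tokens = [sos_id]                            # decoding starts from <sos>
    for _ in range(max_len):
        next_id = decoder(source_encoding, tokens)
        tokens.append(next_id)                   # feed prediction back in
        if next_id == eos_id:                    # stop once <eos> is produced
            break
    return tokens
```

Each iteration appends the newly predicted word and feeds the growing prefix back into the decoder, which is exactly why test-time decoding takes one step per target word.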
During training, since we have the right target sentence, we can just feed the whole target sentence as input to the decoder, but with a small modification. We learned that the decoder takes `<sos>` as its first input token, and at each time step combines it with the words predicted so far to generate the target sentence until `<eos>` is reached.
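Since this section is about masked attention, it helps to see the look-ahead (causal) mask that makes this training scheme work: the decoder receives the whole target sentence at once, but each position is blocked from attending to later positions. Below is a minimal NumPy sketch; the function name and the convention of using `-inf` for masked scores (added to attention logits before the softmax) are illustrative assumptions.

```python
import numpy as np

def causal_mask(seq_len: int) -> np.ndarray:
    """Return a (seq_len, seq_len) mask where position i may attend
    only to positions <= i. Masked entries are -inf so that, when the
    mask is added to the attention scores, softmax gives them weight 0."""
    upper = np.triu(np.ones((seq_len, seq_len)), k=1)  # 1s strictly above the diagonal
    return np.where(upper == 1, -np.inf, 0.0)

# For a 3-token target such as "<sos> Je vais", row i shows what
# position i is allowed to see: 0.0 = visible, -inf = hidden.
print(causal_mask(3))
```

Row 0 can see only itself, row 1 can see positions 0 and 1, and so on, which reproduces the word-by-word visibility of test-time decoding while processing the full sentence in one pass.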
Say we are converting the English sentence 'I am good' to the French sentence 'Je vais bien'. We can just add the