5 changes: 4 additions & 1 deletion modules.py
@@ -247,7 +247,10 @@ def multihead_attention(queries,

# Restore shape
outputs = tf.concat(tf.split(outputs, num_heads, axis=0), axis=2)  # (N, T_q, C)


# Linear projections
outputs = tf.layers.dense(outputs, num_units, activation=tf.nn.relu) # (N, T_q, C)


I think you are right about the extra projection, but may I ask about the activation function here? It seems that in the original paper there is no bias and no activation, only a plain Concat(head_1, ..., head_h) W^O.
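
For reference, a minimal sketch of what the paper's formulation would look like with the TF 1.x API this repo already uses; dropping the bias and the ReLU here is an assumption based on the paper's Concat(head_1, ..., head_h) W^O, not something this PR implements:

# Sketch only (assumes TF 1.x tf.layers, as elsewhere in modules.py).
# The paper applies a single plain linear map W^O after concatenating the heads,
# so activation stays at its default of None and the bias is disabled.
outputs = tf.layers.dense(outputs, num_units, use_bias=False)  # (N, T_q, C)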


# Residual connection
outputs += queries
