extremely small gradients⁴. To counteract this effect, we scale the dot products by 1/√dk.
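As a quick numerical check of this motivation (a minimal NumPy sketch, not taken from the paper; the choice dk = 64 and the sample count are illustrative assumptions), dot products of vectors with unit-variance components have variance close to dk, and dividing by √dk brings the variance back to roughly 1:

import numpy as np

rng = np.random.default_rng(0)
d_k, n_samples = 64, 100_000

# Components of q and k: independent, mean 0, variance 1, as in footnote 4.
q = rng.standard_normal((n_samples, d_k))
k = rng.standard_normal((n_samples, d_k))

dots = np.sum(q * k, axis=-1)   # raw dot products q · k
scaled = dots / np.sqrt(d_k)    # scaled dot products, as used in the attention logits

print(np.var(dots))    # close to d_k = 64: large-magnitude softmax inputs
print(np.var(scaled))  # close to 1: keeps the softmax in a region with useful gradients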
3.2.2 Multi-Head Attention
Instead of performing a single attention function with dmodel-dimensional keys, values and queries, we found it beneficial to linearly project the queries, keys and values h times with different, learned linear projections to dk, dk and dv dimensions, respectively. On each of these projected versions of queries, keys and values we then perform the attention function in parallel, yielding dv-dimensional output values. These are concatenated and once again projected, resulting in the final values, as depicted in Figure 2.
⁴To illustrate why the dot products get large, assume that the components of q and k are independent random variables with mean 0 and variance 1. Then their dot product, q · k = Σ_{i=1}^{dk} q_i k_i, has mean 0 and variance dk.
Multi-head attention allows the model to jointly attend to information from different representation subspaces at different positions. With a single attention head, averaging inhibits this.
MultiHead(Q, K, V) = Concat(head_1, ..., head_h) W^O
    where head_i = Attention(Q W_i^Q, K W_i^K, V W_i^V)
Where the projections are parameter matrices W_i^Q ∈ R^(dmodel×dk), W_i^K ∈ R^(dmodel×dk), W_i^V ∈ R^(dmodel×dv) and W^O ∈ R^(hdv×dmodel).
In this work we employ h = 8 parallel attention layers, or heads. For each of these we use dk = dv = dmodel/h = 64. Due to the reduced dimension of each head, the total computational cost is similar to that of single-head attention with full dimensionality.
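To make the shapes concrete, here is a minimal NumPy sketch of multi-head attention with these dimensions (an illustration only, not the paper's implementation; the random weights, the tiny sequence length and the absence of a batch dimension are simplifying assumptions):

import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    return softmax(scores) @ V

def multi_head_attention(Q, K, V, W_q, W_k, W_v, W_o):
    # W_q, W_k, W_v: one (d_model x d_k or d_v) projection per head; W_o: (h*d_v x d_model).
    heads = [attention(Q @ wq, K @ wk, V @ wv) for wq, wk, wv in zip(W_q, W_k, W_v)]
    return np.concatenate(heads, axis=-1) @ W_o

d_model, h = 512, 8
d_k = d_v = d_model // h
n = 10  # illustrative sequence length
rng = np.random.default_rng(0)

x = rng.standard_normal((n, d_model))                 # stand-in for a layer's input
W_q = rng.standard_normal((h, d_model, d_k)) * 0.01   # random placeholders for learned projections
W_k = rng.standard_normal((h, d_model, d_k)) * 0.01
W_v = rng.standard_normal((h, d_model, d_v)) * 0.01
W_o = rng.standard_normal((h * d_v, d_model)) * 0.01

out = multi_head_attention(x, x, x, W_q, W_k, W_v, W_o)  # self-attention case: Q = K = V
print(out.shape)  # (10, 512)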
3.2.3 Applications of Attention in our Model
The Transformer uses multi-head attention in three different ways:

• In "encoder-decoder attention" layers, the queries come from the previous decoder layer, and the memory keys and values come from the output of the encoder. This allows every position in the decoder to attend over all positions in the input sequence. This mimics the typical encoder-decoder attention mechanisms in sequence-to-sequence models such as [38, 2, 9].

• The encoder contains self-attention layers. In a self-attention layer all of the keys, values and queries come from the same place, in this case, the output of the previous layer in the encoder. Each position in the encoder can attend to all positions in the previous layer of the encoder.
• Similarly, self-attention layers in the decoder allow each position in the decoder to attend to all positions in the decoder up to and including that position. We need to prevent leftward information flow in the decoder to preserve the auto-regressive property. We implement this inside of scaled dot-product attention by masking out (setting to −∞) all values in the input of the softmax which correspond to illegal connections. See Figure 2 and the sketch following this list.
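The masking in the last bullet can be sketched as follows (a minimal NumPy illustration of the idea, not the paper's code): key positions j > i are illegal for query position i, so their logits are set to −∞ and receive zero weight after the softmax.

import numpy as np

def masked_attention(Q, K, V):
    # Scaled dot-product attention with the decoder's look-ahead mask.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                       # (n, n) attention logits
    n = scores.shape[0]
    illegal = np.triu(np.ones((n, n), dtype=bool), k=1)   # True where key position j > query position i
    scores = np.where(illegal, -np.inf, scores)           # mask out illegal (future) connections
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)        # softmax; masked entries get weight 0
    return weights @ V

rng = np.random.default_rng(0)
x = rng.standard_normal((5, 64))
out = masked_attention(x, x, x)
print(np.allclose(out[0], x[0]))  # True: position 0 can only attend to itself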
3.3 Position-wise Feed-Forward Networks
In addition to attention sub-layers, each of the layers in our encoder and decoder contains a fully connected feed-forward network, which is applied to each position separately and identically. This consists of two linear transformations with a ReLU activation in between.

FFN(x) = max(0, xW_1 + b_1)W_2 + b_2    (2)

While the linear transformations are the same across different positions, they use different parameters from layer to layer. Another way of describing this is as two convolutions with kernel size 1. The dimensionality of input and output is dmodel = 512, and the inner-layer has dimensionality dff = 2048.
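A minimal NumPy sketch of Equation (2) with these dimensions (the randomly initialized parameters stand in for learned weights and are purely illustrative):

import numpy as np

d_model, d_ff = 512, 2048
rng = np.random.default_rng(0)

# One FFN's parameters; a different set is learned for each layer.
W1, b1 = rng.standard_normal((d_model, d_ff)) * 0.01, np.zeros(d_ff)
W2, b2 = rng.standard_normal((d_ff, d_model)) * 0.01, np.zeros(d_model)

def ffn(x):
    # Applied to each position separately and identically: x is (n, d_model).
    return np.maximum(0.0, x @ W1 + b1) @ W2 + b2

x = rng.standard_normal((10, d_model))
print(ffn(x).shape)  # (10, 512): same shape in and out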
3.4 Embeddings and Softmax
Similarly to other sequence transduction models, we use learned embeddings to convert the input tokens and output tokens to vectors of dimension dmodel. We also use the usual learned linear transformation and softmax function to convert the decoder output to predicted next-token probabilities. In our model, we share the same weight matrix between the two embedding layers and the pre-softmax linear transformation, similar to [30]. In the embedding layers, we multiply those weights by √dmodel.
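A small sketch of this weight sharing (illustrative only; the vocabulary size and the helper names embed and output_logits are assumptions, not the paper's code): a single matrix acts as both embedding tables and, transposed, as the pre-softmax linear transformation, and embedding lookups are multiplied by √dmodel.

import numpy as np

vocab_size, d_model = 1000, 512  # vocabulary size is an arbitrary illustrative value
rng = np.random.default_rng(0)
E = rng.standard_normal((vocab_size, d_model)) * 0.01  # shared weight matrix

def embed(token_ids):
    # Embedding lookup, scaled by sqrt(d_model) as described above.
    return E[token_ids] * np.sqrt(d_model)

def output_logits(decoder_states):
    # Pre-softmax linear transformation reuses the same matrix (tied weights).
    return decoder_states @ E.T

tokens = np.array([3, 17, 42])
print(embed(tokens).shape)                  # (3, 512)
print(output_logits(embed(tokens)).shape)   # (3, 1000): softmax over the vocabulary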
3.5 Positional Encoding
Since our model contains no recurrence and no convolution, in order for the model to make use of the order of the sequence, we must inject some information about the relative or absolute position of the tokens in the sequence. To this end, we add "positional encodings" to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension dmodel as the embeddings, so that the two can be summed. There are many choices of positional encodings, learned and fixed [9].

Table 1: Maximum path lengths, per-layer complexity and minimum number of sequential operations for different layer types. n is the sequence length, d is the representation dimension, k is the kernel size of convolutions and r the size of the neighborhood in restricted self-attention.

Layer Type                   Complexity per Layer   Sequential Operations   Maximum Path Length
Self-Attention               O(n² · d)              O(1)                    O(1)
Recurrent                    O(n · d²)              O(n)                    O(n)
Convolutional                O(k · n · d²)          O(1)                    O(log_k(n))
Self-Attention (restricted)  O(r · n · d)           O(1)                    O(n/r)
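As an illustrative reading of Table 1 (the sizes n = 100, d = 512, k = 3 and r = 10 below are arbitrary choices, not values from the paper), the leading per-layer terms can be compared directly:

# Leading-order operation counts from Table 1 for illustrative sizes.
n, d, k, r = 100, 512, 3, 10
print("self-attention      ", n * n * d)      # 5,120,000
print("recurrent           ", n * d * d)      # 26,214,400
print("convolutional       ", k * n * d * d)  # 78,643,200
print("restricted self-attn", r * n * d)      # 512,000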
In this work, we use sine and cosine functions of different frequencies:

PE(pos, 2i) = sin(pos / 10000^(2i/dmodel))
PE(pos, 2i+1) = cos(pos / 10000^(2i/dmodel))
where pos is the position and i is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from 2π to 10000 · 2π. We chose this function because we hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset k, PE(pos+k) can be represented as a linear function of PE(pos).
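A minimal NumPy sketch of these encodings (an illustration of the formulas above; max_len is an arbitrary choice):

import numpy as np

def positional_encoding(max_len, d_model):
    # PE[pos, 2i]   = sin(pos / 10000^(2i / d_model))
    # PE[pos, 2i+1] = cos(pos / 10000^(2i / d_model))
    pos = np.arange(max_len)[:, None]                   # (max_len, 1)
    i = np.arange(d_model // 2)[None, :]                # (1, d_model/2)
    angles = pos / np.power(10000.0, 2 * i / d_model)   # one frequency per pair of dimensions
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

pe = positional_encoding(max_len=50, d_model=512)
print(pe.shape)  # (50, 512): added position-wise to the (scaled) input embeddings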