transduction problems such as language modeling and machine translation [35, 2, 5]. Numerous efforts have since continued to push the boundaries of recurrent language models and encoder-decoder architectures [38, 24, 15].
Recurrent models typically factor computation along the symbol positions of the input and output sequences. Aligning the positions to steps in computation time, they generate a sequence of hidden states h_t, as a function of the previous hidden state h_{t−1} and the input for position t. This inherently sequential nature precludes parallelization within training examples, which becomes critical at longer sequence lengths, as memory constraints limit batching across examples. Recent work has achieved significant improvements in computational efficiency through factorization tricks [21] and conditional computation [32], while also improving model performance in the case of the latter. The fundamental constraint of sequential computation, however, remains.
Attention mechanisms have become an integral part of compelling sequence modeling and transduction models in various tasks, allowing modeling of dependencies without regard to their distance in the input or output sequences [2, 19]. In all but a few cases [27], however, such attention mechanisms are used in conjunction with a recurrent network.
In this work we propose the Transformer, a model architecture eschewing recurrence and instead relying entirely on an attention mechanism to draw global dependencies between input and output. The Transformer allows for significantly more parallelization and can reach a new state of the art in translation quality after being trained for as little as twelve hours on eight P100 GPUs.
2 Background

The goal of reducing sequential computation also forms the foundation of the Extended Neural GPU [16], ByteNet [18] and ConvS2S [9], all of which use convolutional neural networks as their basic building block, computing hidden representations in parallel for all input and output positions. In these models, the number of operations required to relate signals from two arbitrary input or output positions grows with the distance between positions, linearly for ConvS2S and logarithmically for ByteNet. This makes it more difficult to learn dependencies between distant positions [12]. In the Transformer this is reduced to a constant number of operations, albeit at the cost of reduced effective resolution due to averaging attention-weighted positions, an effect we counteract with Multi-Head Attention as described in section 3.2.
Self-attention, sometimes called intra-attention, is an attention mechanism relating different positions of a single sequence in order to compute a representation of the sequence. Self-attention has been used successfully in a variety of tasks including reading comprehension, abstractive summarization, textual entailment and learning task-independent sentence representations [4, 27, 28, 22].
End-to-end memory networks are based on a recurrent attention mechanism instead of sequence-aligned recurrence and have been shown to perform well on simple-language question answering and language modeling tasks [34].
To the best of our knowledge, however, the Transformer is the first transduction model relying entirely on self-attention to compute representations of its input and output without using sequence-aligned RNNs or convolution. In the following sections, we will describe the Transformer, motivate self-attention and discuss its advantages over models such as [17, 18] and [9].
3 Model Architecture

Most competitive neural sequence transduction models have an encoder-decoder structure [5, 2, 35]. Here, the encoder maps an input sequence of symbol representations (x_1, ..., x_n) to a sequence of continuous representations z = (z_1, ..., z_n). Given z, the decoder then generates an output sequence (y_1, ..., y_m) of symbols one element at a time. At each step the model is auto-regressive [10], consuming the previously generated symbols as additional input when generating the next.
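As a purely illustrative sketch (not the authors' code), the auto-regressive generation described above can be written as a loop in which each step consumes the previously generated symbols; encode, decode_step, bos_id, and eos_id are hypothetical stand-ins for the model's components.

```python
def generate(encode, decode_step, x, bos_id, eos_id, max_len=100):
    """Greedy auto-regressive generation, one symbol at a time.

    encode maps the input symbols x to continuous representations z;
    decode_step returns the most probable next symbol id given z and
    the symbols generated so far. Both are hypothetical stand-ins.
    """
    z = encode(x)                      # z = (z_1, ..., z_n)
    y = [bos_id]                       # start-of-sequence symbol
    while len(y) < max_len:
        next_id = decode_step(z, y)    # conditioned on all previous outputs
        y.append(next_id)
        if next_id == eos_id:
            break
    return y[1:]                       # generated sequence (y_1, ..., y_m)
```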
The Transformer follows this overall architecture using stacked self-attention and point-wise, fully connected layers for both the encoder and decoder, shown in the left and right halves of Figure 1, respectively.
Figure 1: The Transformer - model architecture.
3.1 Encoder and Decoder Stacks

Encoder: The encoder is composed of a stack of N = 6 identical layers. Each layer has two sub-layers. The first is a multi-head self-attention mechanism, and the second is a simple, position-wise fully connected feed-forward network. We employ a residual connection [11] around each of the two sub-layers, followed by layer normalization [1]. That is, the output of each sub-layer is LayerNorm(x + Sublayer(x)), where Sublayer(x) is the function implemented by the sub-layer itself. To facilitate these residual connections, all sub-layers in the model, as well as the embedding layers, produce outputs of dimension d_model = 512.
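To illustrate the sub-layer wiring LayerNorm(x + Sublayer(x)), here is a minimal NumPy sketch of one encoder layer. It is an assumption-laden illustration, not the paper's implementation: self_attention and feed_forward stand in for the multi-head attention and feed-forward sub-layers, and the learned scale and bias of layer normalization are omitted.

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    # Normalize each position's d_model-dimensional vector to zero mean and
    # unit variance (learned gain/bias parameters omitted for brevity).
    mean = x.mean(axis=-1, keepdims=True)
    std = x.std(axis=-1, keepdims=True)
    return (x - mean) / (std + eps)

def encoder_layer(x, self_attention, feed_forward):
    """One encoder layer: two sub-layers, each wrapped as LayerNorm(x + Sublayer(x)).

    x has shape (seq_len, d_model); self_attention and feed_forward are
    callables mapping (seq_len, d_model) -> (seq_len, d_model).
    """
    x = layer_norm(x + self_attention(x))  # sub-layer 1: multi-head self-attention
    x = layer_norm(x + feed_forward(x))    # sub-layer 2: position-wise feed-forward
    return x
```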
Decoder: The decoder is also composed of a stack of N = 6 identical layers. In addition to the two sub-layers in each encoder layer, the decoder inserts a third sub-layer, which performs multi-head attention over the output of the encoder stack. Similar to the encoder, we employ residual connections around each of the sub-layers, followed by layer normalization. We also modify the self-attention sub-layer in the decoder stack to prevent positions from attending to subsequent positions. This masking, combined with the fact that the output embeddings are offset by one position, ensures that the predictions for position i can depend only on the known outputs at positions less than i.
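One common way to realize this masking, shown here only as a sketch: add a matrix with -inf entries above the diagonal to the attention logits before the softmax, so that position i receives zero weight on all positions greater than i.

```python
import numpy as np

def causal_mask(seq_len):
    # -inf above the diagonal: after the softmax, position i has zero
    # attention weight on positions j > i.
    mask = np.zeros((seq_len, seq_len))
    mask[np.triu_indices(seq_len, k=1)] = -np.inf
    return mask  # added to the (seq_len, seq_len) attention logits

# causal_mask(4):
# [[  0. -inf -inf -inf]
#  [  0.   0. -inf -inf]
#  [  0.   0.   0. -inf]
#  [  0.   0.   0.   0.]]
```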
3.2 Attention

An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key.
Figure 2: (left) Scaled Dot-Product Attention. (right) Multi-Head Attention consists of several attention layers running in parallel.
3.2.1 Scaled Dot-Product Attention

We call our particular attention "Scaled Dot-Product Attention" (Figure 2). The input consists of queries and keys of dimension d_k, and values of dimension d_v. We compute the dot products of the query with all keys, divide each by √d_k, and apply a softmax function to obtain the weights on the values.
In practice, we compute the attention function on a set of queries simultaneously, packed together into a matrix Q. The keys and values are also packed together into matrices K and V. We compute the matrix of outputs as:

    Attention(Q, K, V) = softmax(QK^T / √d_k) V        (1)
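To make Eq. (1) concrete, here is a minimal NumPy sketch of scaled dot-product attention for a single set of queries, keys, and values (an illustration under the definitions above, not the authors' code); the optional additive mask is the kind used in the decoder's self-attention.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row-wise max for numerical stability.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V, mask=None):
    """Eq. (1): softmax(Q K^T / sqrt(d_k)) V.

    Q: (n_q, d_k), K: (n_k, d_k), V: (n_k, d_v); mask, if given, is added
    to the logits (e.g. -inf entries to block attention to some positions).
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # (n_q, n_k) compatibility scores
    if mask is not None:
        scores = scores + mask
    weights = softmax(scores, axis=-1)  # attention weights over the keys
    return weights @ V                  # weighted sum of the values

# Example: 5 queries attending over 7 key-value pairs with d_k = 64, d_v = 32.
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(5, 64)), rng.normal(size=(7, 64)), rng.normal(size=(7, 32))
out = scaled_dot_product_attention(Q, K, V)  # shape (5, 32)
```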
The two most commonly used attention functions are additive attention [2], and dot-product (multiplicative) attention. Dot-product attention is identical to our algorithm, except for the scaling factor of 1/√d_k. Additive attention computes the compatibility function using a feed-forward network with a single hidden layer. While the two are similar in theoretical complexity, dot-product attention is much faster and more space-efficient in practice, since it can be implemented using highly optimized matrix multiplication code.
While for small values of d_k the two mechanisms perform similarly, additive attention outperforms dot-product attention without scaling for larger values of d_k [3]. We suspect that for large values of d_k, the dot products grow large in magnitude, pushing the softmax function into regions where it has extremely small gradients. To counteract this effect, we scale the dot products by 1/√d_k.
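A quick numerical illustration of this intuition (ours, not from the paper): if the components of q and k are independent with zero mean and unit variance, the dot product q·k has variance d_k, so its typical magnitude grows like √d_k unless the logits are rescaled.

```python
import numpy as np

rng = np.random.default_rng(0)
for d_k in (4, 64, 512):
    q = rng.normal(size=(100_000, d_k))
    k = rng.normal(size=(100_000, d_k))
    dots = (q * k).sum(axis=-1)              # one dot product per (q, k) pair
    print(d_k, round(dots.std(), 2), round((dots / np.sqrt(d_k)).std(), 2))
# Unscaled std grows like sqrt(d_k) (about 2, 8, 22.6); the scaled version stays near 1.
```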