We chose this function because we hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset k, PE_{pos+k} can be represented as a linear function of PE_{pos}.

We also experimented with using learned positional embeddings [9] instead, and found that the two versions produced nearly identical results (see Table 3 row (E)). We chose the sinusoidal version because it may allow the model to extrapolate to sequence lengths longer than the ones encountered during training.
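For concreteness, below is a minimal NumPy sketch of the sinusoidal encodings defined in Section 3.5, PE(pos, 2i) = sin(pos / 10000^(2i/d_model)) and PE(pos, 2i+1) = cos(pos / 10000^(2i/d_model)); the function name and array shapes are illustrative rather than taken from our implementation.

```python
import numpy as np

def sinusoidal_positional_encoding(max_len: int, d_model: int) -> np.ndarray:
    """Sinusoidal positional encodings of Section 3.5 (d_model assumed even):
    PE(pos, 2i)   = sin(pos / 10000^(2i / d_model))
    PE(pos, 2i+1) = cos(pos / 10000^(2i / d_model))
    """
    positions = np.arange(max_len)[:, None]                 # (max_len, 1)
    dims = np.arange(0, d_model, 2)[None, :]                # (1, d_model / 2)
    angle_rates = 1.0 / np.power(10000.0, dims / d_model)   # wavelengths form a geometric progression
    angles = positions * angle_rates                        # (max_len, d_model / 2)

    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angles)   # even dimensions: sine
    pe[:, 1::2] = np.cos(angles)   # odd dimensions: cosine
    return pe

# Example: encodings for a 128-token sequence with d_model = 512.
pe = sinusoidal_positional_encoding(128, 512)
```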
4 Why Self-Attention

In this section we compare various aspects of self-attention layers to the recurrent and convolutional layers commonly used for mapping one variable-length sequence of symbol representations (x_1, ..., x_n) to another sequence of equal length (z_1, ..., z_n), with x_i, z_i ∈ R^d, such as a hidden layer in a typical sequence transduction encoder or decoder. Motivating our use of self-attention we consider three desiderata.
One is the total computational complexity per layer. Another is the amount of computation that can be parallelized, as measured by the minimum number of sequential operations required.

The third is the path length between long-range dependencies in the network. Learning long-range dependencies is a key challenge in many sequence transduction tasks. One key factor affecting the ability to learn such dependencies is the length of the paths forward and backward signals have to traverse in the network. The shorter these paths between any combination of positions in the input and output sequences, the easier it is to learn long-range dependencies [12]. Hence we also compare the maximum path length between any two input and output positions in networks composed of the different layer types.
As noted in Table 1, a self-attention layer connects all positions with a constant number of sequentially executed operations, whereas a recurrent layer requires O(n) sequential operations. In terms of computational complexity, self-attention layers are faster than recurrent layers when the sequence length n is smaller than the representation dimensionality d, which is most often the case with sentence representations used by state-of-the-art models in machine translation, such as word-piece [38] and byte-pair [31] representations. To improve computational performance for tasks involving very long sequences, self-attention could be restricted to considering only a neighborhood of size r in the input sequence centered around the respective output position. This would increase the maximum path length to O(n/r). We plan to investigate this approach further in future work.
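As an illustration of such a restricted variant (which we do not use in our model), the neighborhood of size r can be expressed as an attention mask; the function below is a hypothetical sketch, not part of our architecture.

```python
import numpy as np

def neighborhood_attention_mask(n: int, r: int) -> np.ndarray:
    """Boolean mask for a restricted self-attention layer in which each output
    position attends only to a neighborhood of size r centered on it.
    Illustrative sketch only; the radius interpretation (r // 2 on each side)
    is an assumption."""
    half = r // 2
    idx = np.arange(n)
    return np.abs(idx[:, None] - idx[None, :]) <= half

# With n = 16 and r = 3, information from one end of the sequence reaches the
# other only after passing through roughly n / r stacked layers, which is why
# the maximum path length grows to O(n / r).
mask = neighborhood_attention_mask(16, 3)
```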
A single convolutional layer with kernel width k < n does not connect all pairs of input and output positions. Doing so requires a stack of O(n/k) convolutional layers in the case of contiguous kernels, or O(log_k(n)) in the case of dilated convolutions [18], increasing the length of the longest paths between any two positions in the network. Convolutional layers are generally more expensive than recurrent layers, by a factor of k. Separable convolutions [6], however, decrease the complexity considerably, to O(k · n · d + n · d^2). Even with k = n, however, the complexity of a separable convolution is equal to the combination of a self-attention layer and a point-wise feed-forward layer, the approach we take in our model.
As a side benefit, self-attention could yield more interpretable models. We inspect attention distributions from our models and present and discuss examples in the appendix. Not only do individual attention heads clearly learn to perform different tasks, many appear to exhibit behavior related to the syntactic and semantic structure of the sentences.
5 Training

This section describes the training regime for our models.

5.1 Training Data and Batching
We trained on the standard WMT 2014 English-German dataset consisting of about 4.5 million sentence pairs. Sentences were encoded using byte-pair encoding [3], which has a shared source-target vocabulary of about 37000 tokens. For English-French, we used the significantly larger WMT 2014 English-French dataset consisting of 36M sentences and split tokens into a 32000 word-piece vocabulary [38]. Sentence pairs were batched together by approximate sequence length. Each training batch contained a set of sentence pairs containing approximately 25000 source tokens and 25000 target tokens.
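A simplified sketch of this kind of token-count batching is shown below; the function and its bucketing heuristic are illustrative and not the exact pipeline we used.

```python
def batch_by_approximate_length(pairs, max_tokens=25000):
    """Group (source, target) token-list pairs so that each batch holds roughly
    `max_tokens` source tokens and `max_tokens` target tokens.
    Simplified sketch; the actual training pipeline may bucket differently."""
    # Sorting by length keeps pairs of similar length together, reducing padding.
    pairs = sorted(pairs, key=lambda p: (len(p[0]), len(p[1])))
    batches, batch, src_count, tgt_count = [], [], 0, 0
    for src, tgt in pairs:
        if batch and (src_count + len(src) > max_tokens or tgt_count + len(tgt) > max_tokens):
            batches.append(batch)
            batch, src_count, tgt_count = [], 0, 0
        batch.append((src, tgt))
        src_count += len(src)
        tgt_count += len(tgt)
    return batches + ([batch] if batch else [])
```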
5.2 Hardware and Schedule

We trained our models on one machine with 8 NVIDIA P100 GPUs. For our base models using the hyperparameters described throughout the paper, each training step took about 0.4 seconds. We trained the base models for a total of 100,000 steps or 12 hours. For our big models (described on the bottom line of Table 3), step time was 1.0 seconds. The big models were trained for 300,000 steps (3.5 days).
5.3 Optimizer

We used the Adam optimizer [20] with β1 = 0.9, β2 = 0.98 and ϵ = 10^-9. We varied the learning rate over the course of training, according to the formula:

    lrate = d_model^(-0.5) · min(step_num^(-0.5), step_num · warmup_steps^(-1.5))        (3)
This corresponds to increasing the learning rate linearly for the first warmup_steps training steps, and decreasing it thereafter proportionally to the inverse square root of the step number. We used warmup_steps = 4000.
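Equation (3) can be written directly as a small helper; the function name and the guard against step 0 are ours, and the default constants follow the base configuration (d_model = 512, warmup_steps = 4000).

```python
def transformer_lrate(step_num: int, d_model: int = 512, warmup_steps: int = 4000) -> float:
    """Learning rate from Equation (3): linear warmup for `warmup_steps` steps,
    then decay proportional to the inverse square root of the step number."""
    step_num = max(step_num, 1)  # guard against step 0 (our addition)
    return d_model ** -0.5 * min(step_num ** -0.5, step_num * warmup_steps ** -1.5)

# The rate rises until step 4000 and then decays, e.g. approximately:
#   transformer_lrate(100)     ~ 1.7e-5
#   transformer_lrate(4000)    ~ 7.0e-4   (the peak)
#   transformer_lrate(100000)  ~ 1.4e-4
```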
5.4 Regularization

We employ three types of regularization during training:

Residual Dropout   We apply dropout [33] to the output of each sub-layer, before it is added to the sub-layer input and normalized. In addition, we apply dropout to the sums of the embeddings and the positional encodings in both the encoder and decoder stacks. For the base model, we use a rate of P_drop = 0.1.
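The NumPy sketch below illustrates where this dropout sits relative to the residual connection and layer normalization; the helper functions are simplified stand-ins (for example, the layer-norm gain and bias are omitted), not our actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x: np.ndarray, p: float, training: bool = True) -> np.ndarray:
    """Inverted dropout: zero each element with probability p and rescale."""
    if not training or p == 0.0:
        return x
    keep = rng.random(x.shape) >= p
    return x * keep / (1.0 - p)

def layer_norm(x: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Normalize over the feature dimension (gain and bias omitted for brevity)."""
    mean = x.mean(-1, keepdims=True)
    std = x.std(-1, keepdims=True)
    return (x - mean) / (std + eps)

def residual_sublayer(x: np.ndarray, sublayer, p_drop: float = 0.1) -> np.ndarray:
    """LayerNorm(x + Dropout(Sublayer(x))): dropout is applied to the sub-layer
    output before the residual addition and normalization."""
    return layer_norm(x + dropout(sublayer(x), p_drop))

# Example with a hypothetical feed-forward sub-layer.
x = rng.standard_normal((10, 512))                              # (sequence length, d_model)
ffn = lambda h: np.maximum(0.0, h @ rng.standard_normal((512, 512)) * 0.01)
y = residual_sublayer(x, ffn, p_drop=0.1)

# Dropout is likewise applied to the sums of the embeddings and positional
# encodings before they enter the encoder and decoder stacks:
#   x = dropout(token_embeddings + positional_encoding, p=0.1)
```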
Table 2: The Transformer achieves better BLEU scores than previous state-of-the-art models on the English-to-German and English-to-French newstest2014 tests at a fraction of the training cost.

                                          BLEU             Training Cost (FLOPs)
  Model                              EN-DE    EN-FR        EN-DE         EN-FR
  ByteNet [18]                       23.75
  Deep-Att + PosUnk [39]                      39.2                       1.0 · 10^20
  GNMT + RL [38]                     24.6     39.92        2.3 · 10^19   1.4 · 10^20
  ConvS2S [9]                        25.16    40.46        9.6 · 10^18   1.5 · 10^20
  MoE [32]                           26.03    40.56        2.0 · 10^19   1.2 · 10^20
  Deep-Att + PosUnk Ensemble [39]             40.4                       8.0 · 10^20
  GNMT + RL Ensemble [38]            26.30    41.16        1.8 · 10^20   1.1 · 10^21
  ConvS2S Ensemble [9]               26.36    41.29        7.7 · 10^19   1.2 · 10^21
  Transformer (base model)           27.3     38.1               3.3 · 10^18
  Transformer (big)                  28.4     41.8               2.3 · 10^19
Label Smoothing   During training, we employed label smoothing of value ϵ_ls = 0.1 [36]. This hurts perplexity, as the model learns to be more unsure, but improves accuracy and BLEU score.
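A common formulation of label smoothing along these lines is sketched below; the exact treatment of padding tokens and the choice of smoothing distribution are implementation details not specified here.

```python
import numpy as np

def label_smoothing_targets(labels: np.ndarray, vocab_size: int, eps_ls: float = 0.1) -> np.ndarray:
    """Replace one-hot targets with smoothed targets: the correct class gets
    probability 1 - eps_ls and the remaining eps_ls mass is spread uniformly
    over the other classes (one common formulation of [36])."""
    smoothed = np.full((len(labels), vocab_size), eps_ls / (vocab_size - 1))
    smoothed[np.arange(len(labels)), labels] = 1.0 - eps_ls
    return smoothed

# Cross-entropy against these soft targets penalizes over-confident predictions,
# which hurts perplexity but improves accuracy and BLEU.
targets = label_smoothing_targets(np.array([2, 0, 5]), vocab_size=8, eps_ls=0.1)
```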
6 Results