Neural Machine Translation in Linear Time
Abstract
ByteNet, a one-dimensional convolutional neural network with dilation, achieves state-of-the-art performance in character-level language modeling and machine translation by dynamically unfolding the decoder and preserving temporal resolution.
We present a novel neural network for processing sequences. The ByteNet is a one-dimensional convolutional neural network that is composed of two parts, one to encode the source sequence and the other to decode the target sequence. The two network parts are connected by stacking the decoder on top of the encoder and preserving the temporal resolution of the sequences. To address the differing lengths of the source and the target, we introduce an efficient mechanism by which the decoder is dynamically unfolded over the representation of the encoder. The ByteNet uses dilation in the convolutional layers to increase its receptive field. The resulting network has two core properties: it runs in time that is linear in the length of the sequences and it sidesteps the need for excessive memorization. The ByteNet decoder attains state-of-the-art performance on character-level language modelling and outperforms the previous best results obtained with recurrent networks. The ByteNet also achieves state-of-the-art performance on character-to-character machine translation on the English-to-German WMT translation task, surpassing comparable neural translation models that are based on recurrent networks with attentional pooling and run in quadratic time. We find that the latent alignment structure contained in the representations reflects the expected alignment between the tokens.
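To illustrate the dilated convolutions and preserved temporal resolution described in the abstract, here is a minimal sketch, assuming PyTorch. The class names, channel sizes, and layer counts are illustrative choices, not the authors' implementation; the point is only that doubling the dilation rate per layer grows the receptive field exponentially with depth while each layer still runs in time linear in the sequence length and keeps the output as long as the input.

```python
# Minimal sketch (not the authors' code) of a ByteNet-style dilated,
# causal 1D convolutional stack. Dilation doubles per layer, so the
# receptive field grows exponentially while temporal resolution and
# per-layer cost stay linear in the sequence length.
import torch
import torch.nn as nn


class CausalDilatedConv1d(nn.Module):
    """1D convolution left-padded so the output at position t depends
    only on inputs at positions <= t (the masking needed in the decoder;
    an encoder variant could pad symmetrically instead)."""

    def __init__(self, channels, kernel_size, dilation):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)

    def forward(self, x):  # x: (batch, channels, time)
        x = nn.functional.pad(x, (self.pad, 0))  # pad on the left only
        return self.conv(x)  # output length equals input length


class DilatedStack(nn.Module):
    """Residual stack of causal convolutions with dilations 1, 2, 4, ..."""

    def __init__(self, channels=128, kernel_size=3, num_layers=5):
        super().__init__()
        self.layers = nn.ModuleList(
            CausalDilatedConv1d(channels, kernel_size, dilation=2 ** i)
            for i in range(num_layers)
        )

    def forward(self, x):
        for layer in self.layers:
            x = x + torch.relu(layer(x))  # residual connection
        return x


if __name__ == "__main__":
    batch, channels, length = 2, 128, 100
    x = torch.randn(batch, channels, length)
    y = DilatedStack(channels)(x)
    print(y.shape)  # (2, 128, 100): temporal resolution preserved
```

Because the target representation keeps the same resolution as the source, stacking a decoder of this kind on top of an encoder (and unfolding it past the end of the source representation until an end-of-sequence token is produced) is what lets the source and target have different lengths without attentional pooling.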
Community
The following papers, recommended by the Semantic Scholar API, are similar to this paper:
- Transformer-Encoder Trees for Efficient Multilingual Machine Translation and Speech Translation (2025)
- ResFormer: All-Time Reservoir Memory for Long Sequence Classification (2025)
- Gated Associative Memory: A Parallel O(N) Architecture for Efficient Sequence Modeling (2025)
- Efficient Transformer-Based Piano Transcription With Sparse Attention Mechanisms (2025)
- DTW-Align: Bridging the Modality Gap in End-to-End Speech Translation with Dynamic Time Warping Alignment (2025)
- Pre-Trained CNN Architecture for Transformer-Based Image Caption Generation Model (2025)
- End-to-end Sequence Labeling via Bi-directional LSTM-CNNs-CRF: A Reproducibility Study (2025)