DAT LM Inference
🐠
Generate text based on your input
This is a collection of models and Spaces associated with the paper "Disentangling and Integrating Relational and Sensory Information in Transformer Architectures".
Note Generate text with Dual Attention Transformer language models. Inference can be slow because this Space runs on Hugging Face's free CPU hardware; for faster inference, run the app locally.
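To make the "dual attention" idea concrete, here is a toy NumPy sketch, not the authors' exact implementation: one head attends over token values (the sensory stream), while a second head computes the same kind of attention pattern but routes input-independent learned symbol vectors (the relational stream). All names, shapes, and the way the streams are combined are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
n, d = 5, 8                      # sequence length, model dimension (illustrative)
X = rng.normal(size=(n, d))      # token embeddings ("sensory" input)

# Sensory head: standard self-attention; the values carry token content.
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
A_sens = softmax((X @ Wq) @ (X @ Wk).T / np.sqrt(d))
sensory = A_sens @ (X @ Wv)

# Relational head: the attention pattern is still computed from the inputs,
# but the values are input-independent learned "symbol" vectors, so this
# head transmits relational rather than sensory information.
Wq_r, Wk_r = rng.normal(size=(d, d)), rng.normal(size=(d, d))
S = rng.normal(size=(n, d))      # learned symbols, one per position here
A_rel = softmax((X @ Wq_r) @ (X @ Wk_r).T / np.sqrt(d))
relational = A_rel @ S

# Combine the two streams (here, a simple projection of the concatenation).
Wo = rng.normal(size=(2 * d, d))
out = np.concatenate([sensory, relational], axis=-1) @ Wo
```

In a full model, heads like these would be stacked inside transformer blocks and followed by a language-modeling head for text generation.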
Visualize language model attention
Note Visualize the internal representations of Dual Attention Transformer language models, including the relational representations computed by relational attention.
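As a rough illustration of what an attention visualization shows, here is a self-contained NumPy sketch that renders an attention matrix as a text heatmap. The matrix below is randomly initialized and purely illustrative; the Space itself visualizes attention from trained models, and all names here are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def ascii_heatmap(A, tokens):
    """Render an attention matrix as a text heatmap (denser glyph = higher weight)."""
    shades = " .:-=+*#%@"
    lines = []
    for tok, row in zip(tokens, A):
        cells = "".join(shades[min(int(w * len(shades)), len(shades) - 1)] for w in row)
        lines.append(f"{tok:>8} |{cells}|")
    return "\n".join(lines)

rng = np.random.default_rng(1)
tokens = ["The", "cat", "sat", "on", "mats"]
n, d = len(tokens), 8
X = rng.normal(size=(n, d))                  # stand-in token embeddings
Wq, Wk = rng.normal(size=(d, d)), rng.normal(size=(d, d))
A = softmax((X @ Wq) @ (X @ Wk).T / np.sqrt(d))  # one attention head's weights
print(ascii_heatmap(A, tokens))
```

Each row shows how strongly one query token attends to every other token; for a relational attention head, this pattern indicates which pairwise relations the model is routing.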