DiffReaper-6
DiffReaper-6 is a large-scale diffusion-based large language model (Diffusion-LLM) developed by DifferenceLabs.
It represents a significant architectural leap over the previous 5L version, transitioning to a more robust denoiser and a deeper transformer backbone to achieve genuine conversational coherence.
Model Details
- Architecture: Diffusion Transformer (DiT) with adaptive layer norm (adaLN-Single) modulation; see the sketch after this list.
- Backbone: 24 layers, 24 attention heads, hidden dimension 1536.
- Tokenizer: BERT-base-uncased.
- Training objective: MSE denoising loss on latents (predicting the original embeddings from the noisy input).
- Conditioning: prompt-concatenated latents with a timestep embedding.
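To make the architecture bullets concrete, here is a minimal PyTorch sketch of one DiT block with adaLN-Single modulation at the listed width (hidden 1536, 24 heads). All class and variable names are illustrative assumptions, not the DiffReaper-6 source. The defining trait of adaLN-Single is that a single shared projection maps the timestep embedding to the modulation vectors once per forward pass, and each block only adds a small learnable table on top:

```python
# Sketch of a DiT block with adaLN-Single modulation (assumed names,
# dimensions taken from the model card above).
import torch
import torch.nn as nn

HIDDEN, HEADS = 1536, 24  # per the model card

class DiTBlock(nn.Module):
    def __init__(self, dim=HIDDEN, heads=HEADS):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim, elementwise_affine=False)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim, elementwise_affine=False)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        # adaLN-Single: per-block learnable offsets added to a shared
        # timestep modulation (shift/scale/gate for attention and MLP).
        self.scale_shift_table = nn.Parameter(torch.zeros(6, dim))

    def forward(self, x, t_mod):
        # x: (batch, seq, dim); t_mod: (batch, 6, dim) from the shared time MLP
        shift_a, scale_a, gate_a, shift_m, scale_m, gate_m = \
            (self.scale_shift_table[None] + t_mod).unbind(dim=1)
        h = self.norm1(x) * (1 + scale_a.unsqueeze(1)) + shift_a.unsqueeze(1)
        x = x + gate_a.unsqueeze(1) * self.attn(h, h, h, need_weights=False)[0]
        h = self.norm2(x) * (1 + scale_m.unsqueeze(1)) + shift_m.unsqueeze(1)
        x = x + gate_m.unsqueeze(1) * self.mlp(h)
        return x

# Shared conditioning, computed once and reused by every block:
t_emb = torch.randn(2, HIDDEN)                     # stand-in timestep embedding
shared_mlp = nn.Linear(HIDDEN, 6 * HIDDEN)         # the "single" in adaLN-Single
t_mod = shared_mlp(t_emb).view(2, 6, HIDDEN)
x = DiTBlock()(torch.randn(2, 10, HIDDEN), t_mod)  # (batch=2, seq=10, dim=1536)
```

Because the shared projection is computed once per forward pass rather than per block, adaLN-Single keeps the conditioning cost roughly constant as the 24-layer stack deepens.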
Training
The model is being trained on an RTX 5090 using the ultrachat-10k dataset, with training focused on conversational flow and instruction following.
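A hedged sketch of what one training step under the stated objective might look like: noise the response embeddings, concatenate the clean prompt latents in front as conditioning, and regress the original embeddings with MSE. The linear noise schedule, the `model(x, t)` signature, and all helper names are assumptions for illustration, not the actual training code:

```python
# One illustrative training step for the MSE denoising objective.
import torch
import torch.nn.functional as F

def train_step(model, embed, prompt_ids, target_ids, optimizer, num_steps=1000):
    prompt = embed(prompt_ids)                      # clean prompt latents (B, Tp, D)
    target = embed(target_ids)                      # original response embeddings (B, Tr, D)
    t = torch.randint(0, num_steps, (target.size(0),), device=target.device)
    sigma = (t.float() / num_steps).view(-1, 1, 1)  # toy linear noise schedule (assumption)
    noisy = (1 - sigma) * target + sigma * torch.randn_like(target)
    x = torch.cat([prompt, noisy], dim=1)           # prompt-concatenated latents
    pred = model(x, t)[:, prompt.size(1):]          # predictions for the response slots
    loss = F.mse_loss(pred, target)                 # MSE against the original embeddings
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```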
Goal
To prove that diffusion models can reach (and eventually exceed) the coherence of autoregressive models while maintaining the creative "soul" and parallel-generation benefits of diffusion.
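For intuition on the parallel-generation claim: a diffusion LM refines every response position simultaneously over a handful of denoising steps, instead of emitting tokens one at a time. The sampler below is a rough illustration consistent with the training sketch above (same toy noise schedule); the re-noising update and the nearest-embedding decoding are assumptions, not DiffReaper-6's documented inference procedure:

```python
# Rough sketch of parallel generation by iterative denoising.
import torch

@torch.no_grad()
def sample(model, embed_table, prompt_latents, resp_len, num_steps=50):
    B, _, D = prompt_latents.shape
    x = torch.randn(B, resp_len, D, device=prompt_latents.device)  # start from pure noise
    for step in reversed(range(num_steps)):
        t = torch.full((B,), step, device=x.device)
        inp = torch.cat([prompt_latents, x], dim=1)          # prompt-concatenated latents
        x0 = model(inp, t)[:, prompt_latents.size(1):]       # predicted clean embeddings
        sigma = step / num_steps
        x = (1 - sigma) * x0 + sigma * torch.randn_like(x0)  # re-noise to level t
    # Decode each position to its nearest token embedding (illustrative choice,
    # e.g. embed_table = embed.weight from the BERT tokenizer's embedding matrix).
    dists = torch.cdist(x.reshape(-1, D), embed_table)       # (B * resp_len, vocab)
    return dists.argmin(dim=-1).view(B, resp_len)
```

The key contrast with autoregressive decoding is that the loop runs over denoising steps, not sequence positions, so the cost of a response scales with the step count rather than its length.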
Model tree for DifferenceLabs/DiffReaper-6
- Base model: darwinkernelpanic/DiffReaper-3
- Finetuned: darwinkernelpanic/DiffReaper-Talk
- Finetuned: darwinkernelpanic/DiffReaper-5
- Finetuned: darwinkernelpanic/DiffReaper-5L