Dataset Viewer: Trelis/llm-lingo (auto-converted to Parquet)

Columns:
- audio: audio clip, duration 5.5–25.6 s
- text: string, 7 values
- start_time: string, 7 values
- end_time: string, 7 values

| start_time | end_time | text |
| --- | --- | --- |
| 00:00:00.000 | 00:00:19.380 | Spin: self-play fine-tuning that improves LLMs. Tricksy: a form of fast inference involving sparsity. Phi-2 is a model from Microsoft. Lightning Attention 2 is an alternative to Flash Attention. Mixtral 8x7B, |
| 00:00:19.480 | 00:00:45.000 | this is a mixture-of-experts model. Solar 10.7B is a Mistral model with some extra layers added in. OpenChat is a fine-tune of the Mistral model. Notux 8x7B v1 is a fine-tuned version of the Mixtral model. |
| 00:00:45.800 | 00:01:05.190 | Gemini Pro is Google's best model, or perhaps not as good as Gemini Ultra. Microsoft Phi-2, I've already mentioned. DeciLM 7B is a high-speed 7B model. That's DeciLM 7B. Arena Elo is a means of comparing LLMs. |
| 00:01:05.190 | 00:01:24.260 | MT-Bench is another metric, and MMLU is also. GPT-4-Turbo is a fast GPT-4 model by OpenAI. Mistral Medium is a mixture of experts, but with larger experts than Mixtral 8x7B. |
| 00:01:25.640 | 00:01:45.380 | Claude 1, Claude 2, or Claude 2.0 are the latest Claude models. Mixtral 8x7B Instruct v1, or rather v0.1 (that's Mixtral 8x7B Instruct v0.1), is the latest mixture of experts. |
| 00:01:46.320 | 00:02:06.160 | Yi 34B Chat is a very strong fine-tune of Llama. Claude Instant 1 is one of the Claude models. Tulu 2 DPO 70B is a DPO fine-tuned model by Allen AI. It's a fine-tune of the Llama 2 model, the 70B. |
| 00:02:06.920 | 00:02:12.380 | WizardLM 70B is also a fine-tune of the Llama 2 70B model. |
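The start_time and end_time values use an HH:MM:SS.mmm format. A minimal sketch of converting them to seconds, e.g. to compute a segment's duration (the helper name `to_seconds` is my own, not part of the dataset):

```python
def to_seconds(ts: str) -> float:
    """Convert an HH:MM:SS.mmm timestamp (as in start_time/end_time) to seconds."""
    hours, minutes, seconds = ts.split(":")
    return int(hours) * 3600 + int(minutes) * 60 + float(seconds)

# Duration of the first segment in the table above: 19.38 s
duration = to_seconds("00:00:19.380") - to_seconds("00:00:00.000")
```

This assumes every timestamp carries all three fields; values past one hour (e.g. 01:02:03.000) are handled the same way.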
Downloads last month: 134

Models trained or fine-tuned on Trelis/llm-lingo