MetaAI's CodeLlama - Coding Assistant LLM
Fast, small, and capable coding model you can run locally on your computer! Requires 8GB+ of RAM.
Code Llama: Open Foundation Models for Code • Paper • 2308.12950 • Published Aug 24, 2023 • 27
TheBloke/CodeLlama-7B-Instruct-GGUF • Text Generation • 7B • Updated Sep 27, 2023 • 15.5k • 136
TheBloke/CodeLlama-34B-Instruct-GGUF • Text Generation • 34B • Updated Sep 27, 2023 • 3.47k • 102
codellama/CodeLlama-7b-Instruct-hf • Text Generation • 7B • Updated Apr 12, 2024 • 198k • 241
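A minimal sketch of running one of these GGUF quants locally, assuming the huggingface_hub and llama-cpp-python packages; the exact quant filename is an assumption about the repo's file list, so check the repo for the variant you want.

```python
# Sketch: download one quantized file and run it locally with llama-cpp-python.
# The quant filename below is an assumption; pick any file listed in the repo.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="TheBloke/CodeLlama-7B-Instruct-GGUF",
    filename="codellama-7b-instruct.Q4_K_M.gguf",  # ~4 GB quant, fits in 8 GB of RAM
)

llm = Llama(model_path=model_path, n_ctx=4096)

# CodeLlama-Instruct expects the Llama-2-style [INST] ... [/INST] prompt format.
out = llm(
    "[INST] Write a Python function that reverses a linked list. [/INST]",
    max_tokens=512,
    temperature=0.2,
)
print(out["choices"][0]["text"])
```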
Qwen1.5 GGUF
GGUF quants for the new Qwen1.5 model (https://qwenlm.github.io/blog/qwen1.5/)
Qwen/Qwen1.5-0.5B-Chat-GGUF • Text Generation • 0.6B • Updated Apr 9, 2024 • 4.11k • 31
Qwen/Qwen1.5-1.8B-Chat-GGUF • Text Generation • 2B • Updated Apr 9, 2024 • 2.24k • 18
Qwen/Qwen1.5-4B-Chat-GGUF • Text Generation • 4B • Updated Apr 9, 2024 • 864 • 13
Qwen/Qwen1.5-7B-Chat-GGUF • Text Generation • 8B • Updated Apr 9, 2024 • 4.54k • 68
Coding
Models trained and/or fine-tuned for coding tasks
lmstudio-community/DeepSeek-Coder-V2-Lite-Instruct-GGUF • Text Generation • 16B • Updated Jun 22, 2024 • 5.86k • 54
lmstudio-community/Codestral-22B-v0.1-GGUF • Text Generation • 22B • Updated Jun 5, 2024 • 8.99k • 26
lmstudio-community/codegemma-1.1-7b-it-GGUF • Text Generation • 9B • Updated May 14, 2024 • 582 • 5
lmstudio-community/starcoder2-15b-instruct-v0.1-GGUF • Text Generation • 16B • Updated Apr 30, 2024 • 468 • 3
Multilingual
Models trained to perform well in more than one language
lmstudio-community/aya-23-8B-GGUF • Text Generation • 8B • Updated May 23, 2024 • 193 • 7
lmstudio-community/aya-23-35B-GGUF • Text Generation • 35B • Updated May 23, 2024 • 108 • 14
Vision Models (GGUF)
How to use: download a "mmproj" model file plus one or more of the primary model files.
nisten/obsidian-3b-multimodal-q6-gguf • 3B • Updated Dec 9, 2023 • 364 • 70
PsiPi/liuhaotian_llava-v1.5-13b-GGUF • Image-Text-to-Text • 13B • Updated Mar 11, 2024 • 683 • 36
abetlen/BakLLaVA-1-GGUF • 7B • Updated Nov 9, 2023 • 255 • 7
Mozilla/llava-v1.5-7b-llamafile • 7B • Updated Apr 1 • 5.39k • 181
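A minimal sketch of the mmproj + primary-model pairing using llama-cpp-python (an assumption; LM Studio loads the same two files through its UI). The file names are placeholders for whichever GGUF and mmproj pair you download from one of the repos above.

```python
# Sketch: pair a LLaVA-style GGUF with its mmproj (CLIP projector) file.
import base64
from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava15ChatHandler

def image_to_data_uri(path: str) -> str:
    """Encode a local image as a base64 data URI, as the chat handler expects."""
    with open(path, "rb") as f:
        return "data:image/png;base64," + base64.b64encode(f.read()).decode()

# Placeholder paths: both files come from the same repo, e.g. PsiPi/liuhaotian_llava-v1.5-13b-GGUF.
chat_handler = Llava15ChatHandler(clip_model_path="mmproj-model-f16.gguf")
llm = Llama(model_path="llava-v1.5-13b.Q4_K_M.gguf", chat_handler=chat_handler, n_ctx=4096)

resp = llm.create_chat_completion(messages=[
    {"role": "user", "content": [
        {"type": "image_url", "image_url": {"url": image_to_data_uri("photo.png")}},
        {"type": "text", "text": "Describe this image in one sentence."},
    ]}
])
print(resp["choices"][0]["message"]["content"])
```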
General Use
General purpose chatbot-like LLMs you can run on your computer
lmstudio-community/Meta-Llama-3-8B-Instruct-BPE-fix-GGUF • Text Generation • 8B • Updated May 3, 2024 • 120 • 11
lmstudio-community/gemma-2-9b-it-GGUF • Text Generation • 9B • Updated Jul 16, 2024 • 4.3k • 27
lmstudio-community/Phi-3.1-mini-4k-instruct-GGUF • Text Generation • 4B • Updated Aug 1, 2024 • 1.02k • 23
bartowski/dolphin-2.9.3-mistral-7B-32k-GGUF • Text Generation • 7B • Updated Jun 25, 2024 • 229 • 8
Tool Use (RAG, Function Calling)
Models specifically fine-tuned for function calling, tool use, or RAG
lmstudio-community/Llama3-ChatQA-1.5-8B-GGUF • Text Generation • 8B • Updated May 4, 2024 • 116 • 6
bartowski/firefunction-v2-GGUF • Text Generation • 71B • Updated Jun 22, 2024 • 53 • 6
bartowski/Phi-3-Context-Obedient-RAG-GGUF • Text Generation • 4B • Updated May 11, 2024 • 104 • 3
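A minimal sketch of OpenAI-style tool calling against a local OpenAI-compatible endpoint (LM Studio serves one at http://localhost:1234/v1 by default). The model identifier and the get_weather tool schema are illustrative assumptions, not part of the collection.

```python
# Sketch: OpenAI-style tool calling against a locally hosted model.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool for illustration
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="firefunction-v2",  # whatever identifier your local server assigns to the loaded model
    messages=[{"role": "user", "content": "What's the weather in Lisbon?"}],
    tools=tools,
)

# A tool-use fine-tune should respond with a structured tool call instead of prose.
print(resp.choices[0].message.tool_calls)
```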
Smol Models
Very small Large Language Models. Run fast, might be quirky
lmstudio-community/Qwen2-500M-Instruct-GGUF • Text Generation • 0.5B • Updated Jun 24, 2024 • 254 • 6