#30 · Request: DOI · opened 2 months ago by ngine-htx
#29 · large-instruct02407 · opened 4 months ago by Jarelan
#27 · Optimizing model settings · 1 comment · opened 11 months ago by svitv
#26 · Zico · opened about 1 year ago by testakkfff123
#25 · Help! Hope to get an inference configuration that can run on multiple GPUs. · 1 comment · opened about 1 year ago by Lokis
#23 · Praise and Criticism · 9 comments · opened over 1 year ago by ChuckMcSneed
#22 · License for tokenizer · opened over 1 year ago by marksverdhei
#20 · Does the "Average Generation Length" in the press release mean the average number of output tokens? · opened over 1 year ago by yumemio
#19 · Miqu / Mistral Medium f16 / bf16 weights · opened over 1 year ago by Nexesenex
#18 · 不知道下载哪些内容 (Don't know which files to download) · 1 comment · opened over 1 year ago by qcnace
#16 · Change rope scaling to match max embedding size · ❤️👍 2 · opened over 1 year ago by Blackroot
#15 · [AUTOMATED] Model Memory Requirements · opened over 1 year ago by model-sizer-bot
#14 · Model load error · 2 comments · opened over 1 year ago by caisarl76
#13 · Old Mistral Large Not Released + No Base Model present. · opened over 1 year ago by User8213
#12 · No chat template · 6 comments · opened over 1 year ago by zyddnys
#9 · consolidated vs model safetensors - what's the difference? · 🚀👍 4 · 15 comments · opened over 1 year ago by jukofyork
#8 · Are we gonna get the base model for finetuning? · 👀➕ 11 · 1 comment · opened over 1 year ago by rombodawg
#5 · GGUF quants pl0x · 🤝 4 · 1 comment · opened over 1 year ago by AIGUYCONTENT
#4 · Is this "large" or "large2"? · 👍👀 3 · 6 comments · opened over 1 year ago by ZeroWw
#2 · Le baiser du chef ("The chef's kiss") · 👍🔥 14 · opened over 1 year ago by nanowell