Active filters: fp8
FP8-quantized text-generation checkpoints matching the filter are listed below (a minimal loading sketch follows the list).
RedHatAI/Meta-Llama-3-8B-Instruct-FP8-KV • Text Generation • 8B • Updated • 5.01k • 8
comaniac/Meta-Llama-3-8B-Instruct-FP8-v1 • Text Generation • 8B • Updated • 2
comaniac/Mixtral-8x22B-Instruct-v0.1-FP8-v1 • Text Generation • 141B • Updated • 5
RedHatAI/Meta-Llama-3-70B-Instruct-FP8 • Text Generation • 71B • Updated • 1.74k • 13
comaniac/Meta-Llama-3-70B-Instruct-FP8-v1 • Text Generation • 71B • Updated • 4
comaniac/Mixtral-8x7B-Instruct-v0.1-FP8-v1 • Text Generation • 47B • Updated • 4
comaniac/Mixtral-8x7B-Instruct-v0.1-FP8-v2 • Text Generation • 47B • Updated • 4
Skywork/Skywork-MoE-Base-FP8 • Text Generation • 146B • Updated • 11 • 7
RedHatAI/Qwen2-72B-Instruct-FP8 • Text Generation • 73B • Updated • 1.41k • 15
comaniac/Meta-Llama-3-70B-Instruct-FP8-v2 • Text Generation • 71B • Updated • 4
comaniac/Mixtral-8x7B-Instruct-v0.1-FP8-v3 • Text Generation • 47B • Updated • 6
comaniac/Mixtral-8x22B-Instruct-v0.1-FP8-v2 • Text Generation • 141B • Updated • 8
RedHatAI/Mixtral-8x22B-Instruct-v0.1-AutoFP8 • Text Generation • 141B • Updated • 6 • 3
nm-testing/granite-20b-code-base-FP8 • Text Generation • 20B • Updated • 4
nm-testing/granite-3b-code-base-FP8 • Text Generation • 3B • Updated • 6
fr00000/dolp-fp8 • Text Generation • 8B • Updated • 3
RedHatAI/Qwen2-0.5B-Instruct-FP8 • Text Generation • 0.5B • Updated • 1.32k • 3
nm-testing/opt-125m-fp8-static-kv • Text Generation • 0.1B • Updated • 3
RedHatAI/Qwen2-1.5B-Instruct-FP8 • Text Generation • 2B • Updated • 5.41k
RedHatAI/Qwen2-7B-Instruct-FP8 • Text Generation • 8B • Updated • 14.1k • 2
anyisalin/L3-70B-Euryale-v2.1-FP8 • Text Generation • 71B • Updated • 5
nm-testing/Qwen2-0.5B-Instruct-FP8-KV • Text Generation • 0.5B • Updated • 2
yentinglin/Llama-3-Taiwan-70B-Instruct-FP8 • Text Generation • 71B • Updated • 8
kuotient/llama3-instrucTrans-enko-8b-FP8 • Text Generation • 8B • Updated • 4 • 2
nm-testing/SparseLlama-3-8B-pruned_50.2of4-FP8 • Text Generation • 8B • Updated • 2
FlorianJc/Hermes-2-Pro-Mistral-7B-vllm-fp8 • Text Generation • 7B • Updated • 2
FlorianJc/openchat-3.6-8b-20240522-vllm-fp8 • Text Generation • 8B • Updated • 4
FlorianJc/Llama3-ChatQA-1.5-8B-vllm-fp8 • Text Generation • 8B • Updated • 4
TechxGenus/Codestral-22B-v0.1-FP8 • Text Generation • 22B • Updated • 4
Model-SafeTensors/Meta-Llama-3-70B-FP8-Dynamic • Text Generation • 71B • Updated • 4
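Checkpoints like these are typically served with an FP8-aware runtime such as vLLM. The sketch below is a minimal illustration only, not taken from this listing: it assumes vLLM is installed on a GPU with FP8 support (e.g. Hopper-class hardware) and that the chosen repository ships its own FP8 quantization config, which vLLM detects from the checkpoint; the model ID, prompt, and sampling settings are arbitrary examples.

```python
# Minimal sketch: serving one of the FP8 checkpoints above with vLLM.
# Assumptions: vLLM is installed, the GPU supports FP8, and the checkpoint
# carries its own FP8 quantization config (detected automatically).
from vllm import LLM, SamplingParams

llm = LLM(
    model="RedHatAI/Meta-Llama-3-8B-Instruct-FP8-KV",
    kv_cache_dtype="fp8",  # the "-KV" variants also quantize the KV cache
)

sampling = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["Explain FP8 quantization in one paragraph."], sampling)
for out in outputs:
    print(out.outputs[0].text)
```

For unquantized BF16/FP16 checkpoints, vLLM can also quantize weights on the fly via `quantization="fp8"`; the pre-quantized repositories in this listing skip that step and ship FP8 scales with the weights.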