AMD (amd)
Verified company organization
http://www.amd.com/
Followers: 1,883

AI & ML interests: None defined yet.
Recent Activity

bowenbaoamd published a model 1 day ago:
amd/Llama-2-70b-chat-hf-WMXFP4-AMXFP4-KVFP8-Scale-UINT8-MLPerf-GPTQ

hecui102 updated a model 2 days ago:
amd/AMD-Hummingbird-I2V

Prakamya authored a paper about 1 month ago:
SAND-Math: Using LLMs to Generate Novel, Difficult and Useful Mathematics Questions and Answers
Articles

Creating custom kernels for the AMD MI300 (Jul 9) • 44
Hugging Face on AMD Instinct MI300 GPU (May 21, 2024) • 15
Team members: 282
amd's models: 173 total (sorted by recently updated; the first page of 6 is shown below)
Each entry: model id • pipeline tag / parameter size where shown • last updated • downloads • likes
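The same listing can be retrieved programmatically with the huggingface_hub client library. Below is a minimal sketch, assuming huggingface_hub is installed; the exact sort-key spelling and the ModelInfo attributes available can vary between library versions.

from huggingface_hub import list_models

# Enumerate the amd organization's models, most recently updated first,
# mirroring the "recently updated" ordering of the listing below.
for model in list_models(author="amd", sort="lastModified", direction=-1, limit=30):
    print(model.id)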
amd/AMD-OLMo-1B-SFT-DPO-awq-asym-uint4-g128-lmhead-onnx-hybrid • Updated Jun 23 • 3
amd/gemma-2-2b-awq-uint4-asym-g128-lmhead-g32-fp16-onnx-hybrid • Text Generation • Updated Jun 23 • 5
amd/Qwen2-7B-awq-uint4-asym-g128-lmhead-fp16-onnx-hybrid • Updated Jun 23 • 4
amd/Qwen2-1.5B-awq-uint4-asym-global-g128-lmhead-g32-fp16-onnx-hybrid • Updated Jun 23 • 3
amd/Mistral-7B-v0.3-awq-asym-uint4-g128-lmhead-onnx-hybrid • Updated Jun 23 • 5
amd/Mistral-7B-Instruct-v0.2-awq-asym-uint4-g128-lmhead-onnx-hybrid • Updated Jun 23 • 4
amd/Mistral-7B-Instruct-v0.1-awq-asym-uint4-g128-lmhead-onnx-hybrid • Updated Jun 23 • 7 • 1
amd/Llama-3.1-8B-Instruct-awq-asym-uint4-g128-lmhead-onnx-hybrid • Updated Jun 23 • 67
amd/CodeLlama-7b-instruct-awq-asym-uint4-g128-lmhead-onnx-hybrid • Updated Jun 23 • 8
amd/DeepSeek-R1-Distill-Qwen-7B-awq-asym-uint4-g128-lmhead-onnx-hybrid • Updated Jun 23 • 172 • 2
amd/DeepSeek-R1-Distill-Qwen-1.5B-awq-asym-uint4-g128-lmhead-onnx-hybrid • Updated Jun 23 • 9 • 1
amd/DeepSeek-R1-Distill-Llama-8B-awq-asym-uint4-g128-lmhead-onnx-hybrid • Updated Jun 23 • 284 • 1
amd/Llama-3.2-3B-Instruct-awq-g128-int4-asym-fp16-onnx-hybrid • Updated Jun 23 • 491
amd/Llama-3.2-1B-Instruct-awq-g128-int4-asym-fp16-onnx-hybrid • Updated Jun 23 • 842
amd/Llama-3.1-8B-awq-g128-int4-asym-fp16-onnx-hybrid • Text Generation • Updated Jun 23 • 9
amd/Llama-3-8B-awq-g128-int4-asym-fp16-onnx-hybrid • Text Generation • Updated Jun 23 • 5
amd/Llama-2-7b-chat-hf-awq-g128-int4-asym-fp16-onnx-hybrid • Text Generation • Updated Jun 23 • 8
amd/Llama-2-7b-hf-awq-g128-int4-asym-fp16-onnx-hybrid • Text Generation • Updated Jun 23 • 4
amd/chatglm3-6b-awq-g128-int4-asym-fp16-onnx-hybrid • Updated Jun 23 • 7
amd/Qwen1.5-7B-Chat-awq-g128-int4-asym-fp16-onnx-hybrid • Text Generation • Updated Jun 23 • 128
amd/Mistral-7B-Instruct-v0.3-awq-g128-int4-asym-fp16-onnx-hybrid • Updated Jun 23 • 95
amd/Phi-3.5-mini-instruct-awq-g128-int4-asym-fp16-onnx-hybrid • Text Generation • Updated Jun 23 • 39
amd/Llama-3.2-90B-Vision-Instruct-FP8-KV • 89B • Updated Jun 10 • 11
amd/Instella-3B-Long-Instruct • Text Generation • 3B • Updated Jun 8 • 19 • 1
amd/PARD-Qwen2.5-0.5B • Text Generation • 0.6B • Updated May 19 • 1.21k
amd/PARD-DeepSeek-R1-Distill-Qwen-1.5B • Text Generation • 2B • Updated May 19 • 49 • 2
amd/PARD-Llama-3.2-1B • Text Generation • 1B • Updated May 19 • 1.84k • 2
amd/DeepSeek-R1-Distill-Llama-70B-dml-int4-awq-block-128 • Updated May 7 • 7 • 4
amd/DeepSeek-R1-Distill-Llama-8B-dml-int4-awq-block-128 • Updated May 7 • 3
amd/Llama-3.2-3B-Instruct-awq-uint4-float16-cpu-onnx • Updated Apr 28
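Any repository in the listing above can be pulled locally with the same library. A minimal sketch follows, using one of the listed ONNX "hybrid" checkpoints as an example; it only downloads the repository files into the local cache, and actually running the model afterwards requires a compatible ONNX runtime, which is not shown here.

from huggingface_hub import snapshot_download

# Download the full repository (model card, quantized ONNX files, tokenizer
# assets) into the local Hugging Face cache and print the local path.
local_path = snapshot_download(
    repo_id="amd/Llama-3.2-1B-Instruct-awq-g128-int4-asym-fp16-onnx-hybrid"
)
print(local_path)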