config.json
Uploaded by AKASH2393, commit "Upload my fine-tuned model" (ced5ffb, verified)
{
  "architectures": ["GemmaForCausalLM"],
  "model_type": "gemma",
  "hidden_size": 1024,
  "num_attention_heads": 16,
  "num_hidden_layers": 24,
  "vocab_size": 262144
}
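A quick sanity check on the numbers above: in a standard multi-head attention layout, hidden_size is split evenly across the attention heads, so these values imply a per-head dimension of 1024 / 16 = 64. A minimal sketch using only the standard library (the dict below just mirrors the config fragment; no Hugging Face libraries are assumed):

```python
import json

# The config fragment above, reproduced as a JSON string.
config_json = """
{
  "architectures": ["GemmaForCausalLM"],
  "model_type": "gemma",
  "hidden_size": 1024,
  "num_attention_heads": 16,
  "num_hidden_layers": 24,
  "vocab_size": 262144
}
"""

config = json.loads(config_json)

# hidden_size must divide evenly across the attention heads.
assert config["hidden_size"] % config["num_attention_heads"] == 0

# Implied per-head dimension.
head_dim = config["hidden_size"] // config["num_attention_heads"]
print(head_dim)  # 64
```

Loading the file through `transformers.AutoConfig.from_pretrained` would populate the remaining Gemma defaults (activation, rope settings, etc.) that this fragment omits.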