---
language:
  - en
tags:
  - llama
  - fine-tuned
  - causal-lm
license: apache-2.0
base_model: YongganFu/Llama-400M-12L
---

# data4elm_full_finetuned_no_lora

Fine-tuned Llama-400M model

## Model Details

This model is a fully fine-tuned version of [YongganFu/Llama-400M-12L](https://huggingface.co/YongganFu/Llama-400M-12L).

## Model Files

The model directory contains:

- `config.json` - Model architecture configuration
- `generation_config.json` - Default generation settings
- `model.safetensors` - Model weights in safetensors format
- `special_tokens_map.json` - Special token mapping
- `tokenizer.json` - Fast tokenizer (serialized vocabulary and rules)
- `tokenizer.model` - SentencePiece tokenizer model
- `trainer_state.json` - Training state information
- `training_args.bin` - Training arguments
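
The sketch below shows one way to inspect the configuration files programmatically; the printed fields are examples and depend on the actual contents of `config.json` and `generation_config.json`.

```python
from transformers import AutoConfig, GenerationConfig

# Architecture settings from config.json
config = AutoConfig.from_pretrained("lxaw/data4elm_full_finetuned_no_lora")
print(config.model_type)         # e.g. "llama"
print(config.num_hidden_layers)  # e.g. 12, per the base model's name

# Default generation settings from generation_config.json
gen_config = GenerationConfig.from_pretrained("lxaw/data4elm_full_finetuned_no_lora")
print(gen_config)
```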

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and tokenizer
model = AutoModelForCausalLM.from_pretrained("lxaw/data4elm_full_finetuned_no_lora")
tokenizer = AutoTokenizer.from_pretrained("lxaw/data4elm_full_finetuned_no_lora")

# Example usage
input_text = "What is the capital of France?"
inputs = tokenizer(input_text, return_tensors="pt")
# Pass the attention mask along with the input IDs to avoid padding warnings
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
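
For less deterministic outputs, sampling parameters can be passed to `generate`. This continues the snippet above; the parameter values are illustrative, not tuned for this model.

```python
# Sampling-based generation (illustrative parameter values)
outputs = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```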

## Training Details

This model was trained with standard full-parameter fine-tuning: all weights of the base model were updated, rather than using a parameter-efficient method such as LoRA (as the `_no_lora` suffix indicates).
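
The exact hyperparameters are stored in `training_args.bin` and `trainer_state.json` rather than documented here. For orientation only, full fine-tuning with the Hugging Face `Trainer` has the following general shape; the dataset and every hyperparameter below are placeholders, not the values used to train this checkpoint.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments

# Start from the base model; full fine-tuning updates all of its parameters
model = AutoModelForCausalLM.from_pretrained("YongganFu/Llama-400M-12L")
tokenizer = AutoTokenizer.from_pretrained("YongganFu/Llama-400M-12L")

# Placeholder hyperparameters -- not those used for this model
training_args = TrainingArguments(
    output_dir="./full_finetune_out",
    per_device_train_batch_size=8,
    num_train_epochs=1,
    learning_rate=2e-5,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,  # placeholder: a tokenized causal-LM dataset
)
trainer.train()
```

Unlike LoRA, which trains small adapter matrices on top of frozen weights, this approach produces a complete new set of weights, which is why the repository ships a full `model.safetensors` rather than an adapter file.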