This page shows how to use opencode/CodeLlama-7B-quotes with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
  - PEFT

How to use opencode/CodeLlama-7B-quotes with PEFT:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the base CodeLlama model, then attach the fine-tuned adapter weights on top
base_model = AutoModelForCausalLM.from_pretrained("codellama/CodeLlama-7b-hf")
model = PeftModel.from_pretrained(base_model, "opencode/CodeLlama-7B-quotes")
```

- Notebooks
  - Google Colab
  - Kaggle
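
Once the adapter is attached, the model can be used like any causal LM. Below is a minimal generation sketch; the prompt string and the `max_new_tokens` value are illustrative assumptions, not part of the model card:

```python
from transformers import AutoTokenizer

# The tokenizer comes from the base model; the adapter does not change the vocabulary
tokenizer = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")

inputs = tokenizer("Give me a quote about code:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)  # illustrative generation budget
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```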
## Training procedure

The following `bitsandbytes` quantization config was used during training:

- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
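
For reference, these settings correspond to a `transformers` `BitsAndBytesConfig`. The sketch below shows how to load the base model with the same 4-bit quantization at inference time; the `llm_int8_*` values listed above are library defaults, so only the 4-bit fields need to be set explicitly:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirror the training-time quantization settings listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "codellama/CodeLlama-7b-hf",
    quantization_config=bnb_config,
)
```

This combination (NF4, double quantization, bfloat16 compute) is the standard QLoRA recipe for 4-bit fine-tuning.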
### Framework versions

- PEFT 0.6.0.dev0