---
license: other
datasets:
  - https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM
---

This repo contains a low-rank adapter (LoRA) for LLaMA-30b, fine-tuned on the English and Chinese instruction data from the GPT-4-LLM dataset.

This version of the weights was trained with the default hyperparameters of alpaca-lora (see the configuration sketch after this list):

- Epochs: 3
- Batch size: 128
- Cutoff length: 256
- Learning rate: 3e-4
- LoRA r: 8
- LoRA target modules: q_proj, v_proj
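
For reference, a minimal sketch of the equivalent PEFT `LoraConfig` for the hyperparameters above. Note that `lora_alpha` and `lora_dropout` are not listed above; the values used here are alpaca-lora's defaults and should be treated as assumptions.

```python
from peft import LoraConfig

# LoRA settings matching the hyperparameters listed above.
lora_config = LoraConfig(
    r=8,                                  # LoRA r
    target_modules=["q_proj", "v_proj"],  # LoRA target modules
    lora_alpha=16,                        # assumption: alpaca-lora default
    lora_dropout=0.05,                    # assumption: alpaca-lora default
    bias="none",
    task_type="CAUSAL_LM",
)
```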

Instructions for running it can be found at https://github.com/tloen/alpaca-lora.
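
As a quick start, the adapter can also be loaded directly with Transformers and PEFT. The snippet below is a sketch, not the repo's official usage: the base-model checkpoint ID and the adapter repo ID are placeholders you will need to substitute, and the prompt follows the Alpaca template used by alpaca-lora.

```python
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

BASE_MODEL = "path/or/hub-id-of-llama-30b"  # placeholder: a LLaMA-30b HF checkpoint
ADAPTER = "path/or/hub-id-of-this-adapter"  # placeholder: this adapter repo

tokenizer = LlamaTokenizer.from_pretrained(BASE_MODEL)
model = LlamaForCausalLM.from_pretrained(
    BASE_MODEL, torch_dtype=torch.float16, device_map="auto"
)
# Attach the low-rank adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(model, ADAPTER, torch_dtype=torch.float16)
model.eval()

# Alpaca-style prompt, as used by alpaca-lora.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nList three uses of low-rank adapters.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```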

## Usage and License Notices

The license of these LoRA weights is inherited from LLaMA's license and permits research use only. In addition, because the dataset was generated with the OpenAI API, the weights must not be used to compete with OpenAI, per OpenAI's terms of use.