---
license: other
datasets:
- https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM
---

This repo contains a low-rank adapter for LLaMA-30b,
fitted on the [GPT-4-LLM](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM) English and Chinese data.

This version of the weights was trained with the default hyperparameters of [alpaca-lora](https://github.com/tloen/alpaca-lora), listed below (see the configuration sketch after the list):

- Epochs: 3
- Batch size: 128
- Cutoff length: 256
- Learning rate: 3e-4
- LoRA _r_: 8
- LoRA target modules: q_proj, v_proj

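As a rough illustration, the LoRA-specific entries above correspond to a PEFT `LoraConfig` along these lines. This is a sketch, not the exact training code: `lora_alpha` and `lora_dropout` are assumed from alpaca-lora's defaults rather than stated in this card, and the epochs, batch size, cutoff length, and learning rate are training arguments handled outside this config.

```python
from peft import LoraConfig

# Sketch of the LoRA configuration implied by the hyperparameter list above.
# lora_alpha and lora_dropout are assumed alpaca-lora defaults, not stated in this card.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)
```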
Instructions for running it can be found at https://github.com/tloen/alpaca-lora.
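For reference, a minimal loading sketch with 🤗 Transformers and PEFT might look like the following. The base-model checkpoint and adapter identifiers are placeholders, and the Alpaca-style prompt template is assumed rather than taken from this repo.

```python
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

# Placeholders: substitute the actual LLaMA-30b base checkpoint and the
# local path / hub ID of this adapter repo.
base_model_id = "path/to/llama-30b-hf"
adapter_id = "path/to/this-adapter"

tokenizer = LlamaTokenizer.from_pretrained(base_model_id)
model = LlamaForCausalLM.from_pretrained(
    base_model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)
# Attach the LoRA weights from this repo on top of the frozen base model.
model = PeftModel.from_pretrained(model, adapter_id)
model.eval()

# Alpaca-style prompt format (assumed).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nList three uses of LoRA adapters.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```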

### Usage and License Notices

The license of these LoRA weights is inherited from LLaMA's license and permits research use only. In addition, because the dataset was generated with the OpenAI API, the weights must not be used to compete with OpenAI, per OpenAI's [terms of use](https://openai.com/policies/terms-of-use).