---
base_model:
- TinyLlama/TinyLlama_v1.1
datasets:
- benchang1110/Taiwan-pretrain-9B
- benchang1110/Taiwan-book-1B
language:
- zh
library_name: transformers
license: apache-2.0
---

# Model Card for Taiwan-tinyllama-v1.1-base

![image](image.png)
This is a continually pretrained version of [TinyLlama v1.1](https://huggingface.co/TinyLlama/TinyLlama_v1.1) tailored for Traditional Chinese. The continued-pretraining corpus contains over 10B tokens. With bfloat16 weights, inference requires only around 3 GB of VRAM.
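
As a rough check on the memory figure, the snippet below (a minimal sketch, not part of the original card) loads the weights in bfloat16 and prints their footprint; actual VRAM use during inference is somewhat higher because of activations and the KV cache.

```python
import torch
from transformers import AutoModelForCausalLM

# Load the ~1.1B-parameter model in bfloat16 and report the size of the weights.
model = AutoModelForCausalLM.from_pretrained(
    "benchang1110/Taiwan-tinyllama-v1.1-base",
    torch_dtype=torch.bfloat16,
)
print(f"Weight footprint: {model.get_memory_footprint() / 1e9:.2f} GB")  # roughly 2.2 GB
```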

# Usage
**This is a causal language model, not a chat model!** It is not designed to generate human-like conversational responses; it simply continues text from the given prompt.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer


def generate_response(prompt):
    '''
    Simple interactive test for the model: tokenize the prompt and
    stream the continuation to stdout.
    '''
    # Tokenize the input prompt
    tokenized_input = tokenizer(prompt, return_tensors='pt').to(device)
    print(tokenized_input['input_ids'])
    # Generate and stream the response
    _ = model.generate(
        input_ids=tokenized_input['input_ids'],
        attention_mask=tokenized_input['attention_mask'],
        pad_token_id=tokenizer.pad_token_id,
        do_sample=True,
        repetition_penalty=1.0,
        max_length=2048,
        streamer=streamer,
    )


if __name__ == '__main__':
    device = 'cuda' if torch.cuda.is_available() else 'cpu'
    model = AutoModelForCausalLM.from_pretrained(
        "benchang1110/Taiwan-tinyllama-v1.1-base",
        attn_implementation="flash_attention_2",
        device_map=device,
        torch_dtype=torch.bfloat16,
    )
    tokenizer = AutoTokenizer.from_pretrained("benchang1110/Taiwan-tinyllama-v1.1-base", use_fast=True)
    streamer = TextStreamer(tokenizer)
    while True:
        text = input("Input a prompt: ")
        generate_response(text)
```
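
If you would rather capture the generated text instead of streaming it to stdout, a minimal variant of the helper above (same `model`, `tokenizer`, and `device`; illustrative only):

```python
def generate_text(prompt, max_new_tokens=256):
    # Tokenize the prompt, generate a continuation, and return it as a string.
    inputs = tokenizer(prompt, return_tensors='pt').to(device)
    output_ids = model.generate(
        **inputs,
        do_sample=True,
        max_new_tokens=max_new_tokens,
        pad_token_id=tokenizer.pad_token_id,
    )
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```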

### Training Procedure

The following training hyperparameters were used:

| Data size (tokens) | Global Batch Size | Learning Rate | Epochs | Max Length (tokens) | Weight Decay |
|--------------------|-------------------|---------------|--------|---------------------|--------------|
| 10B                | 32                | 5e-5          | 1      | 2048                | 1e-4         |
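
The training code itself is not published in this card; purely as an illustration, the table above maps onto Hugging Face `TrainingArguments` roughly as sketched below. The split of the global batch size into per-device batch size and gradient accumulation, and the bf16 flag, are assumptions.

```python
from transformers import TrainingArguments

# Illustrative mapping of the hyperparameter table; not the authors' actual script.
training_args = TrainingArguments(
    output_dir="taiwan-tinyllama-cpt",   # hypothetical output directory
    per_device_train_batch_size=8,       # 8 x 4 accumulation steps = global batch size 32 (assumed split)
    gradient_accumulation_steps=4,
    learning_rate=5e-5,
    num_train_epochs=1,
    weight_decay=1e-4,
    bf16=True,                           # assumed, given the bfloat16 inference note above
)
# The 2048-token max length is applied when tokenizing/packing the dataset, not here.
```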

![loss](loss.png)
### Compute Infrastructure
Training ran on 1x NVIDIA A100 (80 GB) and took approximately 200 GPU hours.