davanstrien (HF Staff) committed
Commit e993985 · verified · 1 Parent(s): da2d36e

Update README.md

Files changed (1)
  1. README.md +121 -33
README.md CHANGED
@@ -1,58 +1,146 @@
  ---
  base_model: HuggingFaceTB/SmolLM2-360M
  library_name: transformers
- model_name: SmolLM2-360M-tldr-sft-2025-05-28_17-48
  tags:
  - generated_from_trainer
  - trl
  - sft
- licence: license
  ---

- # Model Card for SmolLM2-360M-tldr-sft-2025-05-28_17-48
- This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-360M](https://huggingface.co/HuggingFaceTB/SmolLM2-360M).
- It has been trained using [TRL](https://github.com/huggingface/trl).

- ## Quick start

  ```python
- from transformers import pipeline

- question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
- generator = pipeline("text-generation", model="davanstrien/SmolLM2-360M-tldr-sft-2025-05-28_17-48", device="cuda")
- output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
- print(output["generated_text"])
  ```

- ## Training procedure

- [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/davanstrien/huggingface/runs/qf0k0s0e)

- This model was trained with SFT.

- ### Framework versions

- - TRL: 0.19.0
- - Transformers: 4.52.3
- - Pytorch: 2.7.0
- - Datasets: 3.6.0
- - Tokenizers: 0.21.1

- ## Citations

- Cite TRL as:

- ```bibtex
- @misc{vonwerra2022trl,
-     title = {{TRL: Transformer Reinforcement Learning}},
-     author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
-     year = 2020,
-     journal = {GitHub repository},
-     publisher = {GitHub},
-     howpublished = {\url{https://github.com/huggingface/trl}}
- }
- ```
  ---
  base_model: HuggingFaceTB/SmolLM2-360M
  library_name: transformers
+ model_name: SmolLM2-360M-tldr-sft-2025-02-12_15-13
  tags:
  - generated_from_trainer
  - trl
  - sft
+ license: mit
+ datasets:
+ - davanstrien/hub-tldr-dataset-summaries-llama
+ - davanstrien/hub-tldr-model-summaries-llama
  ---

+ # Smol-Hub-tldr
+ 
+ <div style="float: right; margin-left: 1em;">
+ <img src="https://cdn-uploads.huggingface.co/production/uploads/60107b385ac3e86b3ea4fc34/dD9vx3VOPB0Tf6C_ZjJT2.png" alt="Model visualization" width="200"/>
+ </div>
+ 
+ This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-360M](https://huggingface.co/HuggingFaceTB/SmolLM2-360M), focused on generating concise, one-sentence summaries of model and dataset cards from the Hugging Face Hub. These summaries are intended to be used for:
+ 
+ - creating useful tl;dr descriptions that give you a quick sense of what a dataset or model is for
+ - serving as input text for creating embeddings for semantic search (see the sketch below). You can see a demo of this in [librarian-bots/huggingface-datasets-semantic-search](https://huggingface.co/spaces/librarian-bots/huggingface-datasets-semantic-search).
+ 
+ The model was trained using supervised fine-tuning (SFT) with [TRL](https://github.com/huggingface/trl).
+ 
+ A meta example of a summary generated for this card:
+ 
+ > This model is a fine-tuned version of SmolLM2-360M for generating concise, one-sentence summaries of model and dataset cards from the Hugging Face Hub.
+ 
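+ As a rough sketch of the embedding use case, summaries generated as described in the Usage section below can be embedded and searched. The embedding model chosen here is an arbitrary assumption for illustration, not the one used by the demo Space:
+ 
+ ```python
+ # Minimal sketch: embed generated tl;dr summaries for semantic search.
+ # Assumes `summaries` holds strings produced by Smol-Hub-tldr as shown in Usage.
+ from sentence_transformers import SentenceTransformer, util
+ 
+ summaries = [
+     "A dataset of one-sentence summaries of Hugging Face dataset cards.",
+     "A 360M parameter model for summarising Hub model cards.",
+ ]
+ 
+ embedder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
+ corpus_embeddings = embedder.encode(summaries, convert_to_tensor=True)
+ 
+ # Search the summaries with a free-text query
+ query_embedding = embedder.encode("model card summarisation", convert_to_tensor=True)
+ hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
+ print([summaries[hit["corpus_id"]] for hit in hits])
+ ```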
+ ## Intended Use
+ 
+ The model is designed to generate brief, informative summaries of:
+ - Model cards: focusing on key capabilities and characteristics
+ - Dataset cards: capturing essential dataset characteristics and purposes
+ 
+ ## Training Data
+ 
+ The model was trained on:
+ - Model card summaries generated by Llama 3.3 70B
+ - Dataset card summaries generated by Llama 3.3 70B
+ 
+ Context length: the model was trained on cards up to 2048 tokens long.
+ 
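+ Since the training cards were capped at 2048 tokens, longer card bodies may fall outside the training distribution. A minimal sketch of one way to truncate a card body before summarising it (assuming simple token-level truncation is acceptable; this is not part of the original training setup):
+ 
+ ```python
+ # Sketch: keep only the card body (the part after the YAML) and truncate it to
+ # roughly the 2048-token length seen during training.
+ from huggingface_hub import ModelCard
+ from transformers import AutoTokenizer
+ 
+ tokenizer = AutoTokenizer.from_pretrained("davanstrien/Smol-Hub-tldr")
+ card = ModelCard.load("HuggingFaceTB/SmolLM2-360M")  # any Hub model card
+ 
+ token_ids = tokenizer(card.text, truncation=True, max_length=2048)["input_ids"]
+ truncated_body = tokenizer.decode(token_ids, skip_special_tokens=True)
+ ```
+ 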
+ ## Usage
+ 
+ Using the chat template is recommended when running inference. Additionally, you should prepend either `<MODEL_CARD>` or `<DATASET_CARD>` to the start of the card you want to summarize. The training data used the body of the model or dataset card (i.e., the part after the YAML front matter), so you will likely get better results by passing only this part of the card.
+ 
+ I have so far found that a low temperature of `0.4` generates better results.
+ 
+ Example:

  ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ from huggingface_hub import ModelCard
+ 
+ card = ModelCard.load("davanstrien/Smol-Hub-tldr")
+ 
+ # Load tokenizer and model
+ tokenizer = AutoTokenizer.from_pretrained("davanstrien/Smol-Hub-tldr")
+ model = AutoModelForCausalLM.from_pretrained("davanstrien/Smol-Hub-tldr")
+ 
+ # Format input according to the chat template
+ messages = [{"role": "user", "content": f"<MODEL_CARD>{card.text}"}]
+ # Encode with the chat template
+ inputs = tokenizer.apply_chat_template(
+     messages, add_generation_prompt=True, return_tensors="pt"
+ )
+ 
+ # Generate with stop tokens
+ outputs = model.generate(
+     inputs,
+     max_new_tokens=60,
+     pad_token_id=tokenizer.pad_token_id,
+     eos_token_id=tokenizer.eos_token_id,
+     temperature=0.4,
+     do_sample=True,
+ )
+ 
+ input_length = inputs.shape[1]
+ response = tokenizer.decode(outputs[0][input_length:], skip_special_tokens=False)
+ 
+ # Extract just the summary part
+ summary = response.split("<CARD_SUMMARY>")[-1].split("</CARD_SUMMARY>")[0]
+ print(summary)
+ >>> "The Smol-Hub-tldr model is a fine-tuned version of SmolLM2-360M designed to generate concise, one-sentence summaries of model and dataset cards from the Hugging Face Hub."
  ```
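+ 
+ Dataset cards work the same way, just with the `<DATASET_CARD>` prefix. A minimal sketch reusing the `model` and `tokenizer` loaded above (the dataset chosen here is only an example):
+ 
+ ```python
+ from huggingface_hub import DatasetCard
+ 
+ # Load a dataset card and keep only its body (the part after the YAML)
+ dataset_card = DatasetCard.load("davanstrien/hub-tldr-dataset-summaries-llama")
+ messages = [{"role": "user", "content": f"<DATASET_CARD>{dataset_card.text}"}]
+ 
+ inputs = tokenizer.apply_chat_template(
+     messages, add_generation_prompt=True, return_tensors="pt"
+ )
+ outputs = model.generate(
+     inputs,
+     max_new_tokens=60,
+     eos_token_id=tokenizer.eos_token_id,
+     temperature=0.4,
+     do_sample=True,
+ )
+ summary = (
+     tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=False)
+     .split("<CARD_SUMMARY>")[-1]
+     .split("</CARD_SUMMARY>")[0]
+ )
+ print(summary)
+ ```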

+ The model should currently close its summary with a `</CARD_SUMMARY>` token (still experimenting with this), so you can also use this as a stopping criterion when using `pipeline` inference.
+ 
+ ```python
+ import torch
+ from huggingface_hub import ModelCard
+ from transformers import StoppingCriteria, StoppingCriteriaList, pipeline
+ 
+ 
+ class StopOnTokens(StoppingCriteria):
+     def __init__(self, tokenizer, stop_token_ids):
+         self.stop_token_ids = stop_token_ids
+         self.tokenizer = tokenizer
+ 
+     def __call__(
+         self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs
+     ) -> bool:
+         for stop_id in self.stop_token_ids:
+             if input_ids[0][-1] == stop_id:
+                 return True
+         return False
+ 
+ 
+ # Initialize pipeline
+ pipe = pipeline("text-generation", "davanstrien/Smol-Hub-tldr")
+ tokenizer = pipe.tokenizer
+ 
+ # Build the input as in the first example
+ card = ModelCard.load("davanstrien/Smol-Hub-tldr")
+ messages = [{"role": "user", "content": f"<MODEL_CARD>{card.text}"}]
+ 
+ # Get the token IDs for stopping
+ stop_token_ids = [
+     tokenizer.encode("</CARD_SUMMARY>", add_special_tokens=True)[-1],
+     tokenizer.eos_token_id,
+ ]
+ 
+ # Create stopping criteria
+ stopping_criteria = StoppingCriteriaList([StopOnTokens(tokenizer, stop_token_ids)])
+ 
+ # Generate with stopping criteria
+ response = pipe(
+     messages,
+     max_new_tokens=50,
+     do_sample=True,
+     temperature=0.7,
+     stopping_criteria=stopping_criteria,
+     return_full_text=False,
+ )
+ 
+ # Clean up the response
+ summary = response[0]["generated_text"]
+ print(summary)
+ >>> "This model is a fine-tuned version of SmolLM2-360M for generating concise, one-sentence summaries of model and dataset cards from the Hugging Face Hub."
+ ```

+ ## Framework Versions
+ 
+ - TRL 0.14.0
+ - Transformers 4.48.3
+ - PyTorch 2.6.0
+ - Datasets 3.2.0
+ - Tokenizers 0.21.0