Update README.md
README.md (CHANGED)
````diff
@@ -20,9 +20,9 @@ tags:
 - lxcorp
 ---
 
-#
+# lambda-1v-1b — Lightweight Math & Logic Reasoning Model
 
-**
+**lambda-1v-1b** is a compact, fine-tuned language model built on top of `TinyLlama-1.1B-Chat-v1.0`, designed for educational reasoning tasks in both Portuguese and English. It focuses on logic, number theory, and mathematics, delivering fast performance with minimal computational requirements.
 
 ---
 
@@ -43,8 +43,8 @@ tags:
 ```python
 from transformers import AutoTokenizer, AutoModelForCausalLM
 
-model = AutoModelForCausalLM.from_pretrained("lxcorp/
-tokenizer = AutoTokenizer.from_pretrained("lxcorp/
+model = AutoModelForCausalLM.from_pretrained("lxcorp/lambda-1v-1b")
+tokenizer = AutoTokenizer.from_pretrained("lxcorp/lambda-1v-1b")
 
 input_text = "Problema: Prove que 17 é um número primo."
 inputs = tokenizer(input_text, return_tensors="pt")
@@ -52,7 +52,7 @@ inputs = tokenizer(input_text, return_tensors="pt")
 output = model.generate(**inputs, max_new_tokens=100)
 print(tokenizer.decode(output[0], skip_special_tokens=True))
 
-
+```
 ---
 
 About λχ Corp.
````
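For reference, the usage snippet this commit completes can be run end to end as sketched below. The `torch_dtype` and `device_map` arguments and the chat-template variant are illustrative assumptions layered on top of the committed README, which loads the model with defaults and passes the prompt as a plain string; since the base model is `TinyLlama-1.1B-Chat-v1.0`, formatting the prompt with its chat template may yield better-structured answers.

```python
# Self-contained version of the usage example added by this commit.
# The dtype/device handling and the chat-template variant are
# illustrative assumptions, not part of the committed README.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "lxcorp/lambda-1v-1b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # assumption: fp16 to cut memory; omit on CPU
    device_map="auto",          # assumption: requires the `accelerate` package
)

# Prompt exactly as in the README; Portuguese for
# "Problem: Prove that 17 is a prime number."
input_text = "Problema: Prove que 17 é um número primo."
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(output[0], skip_special_tokens=True))

# Optional variant (assumption): the TinyLlama-1.1B-Chat-v1.0 base ships a
# chat template, so wrapping the prompt as a chat turn may help.
messages = [{"role": "user", "content": input_text}]
chat_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(chat_ids, max_new_tokens=100)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

As committed, `max_new_tokens=100` keeps generation short; a longer proof-style answer may need a higher limit.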