---
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
library_name: transformers
license: mit
language:
- en
- pt
metrics:
- accuracy
pipeline_tag: text-generation
tags:
- hf-inference
- education
- logic
- math
- low-resource
- transformers
- open-source
- causal-lm
- lxcorp
---
# lambda-1v-1b — Lightweight Math & Logic Reasoning Model
**lambda-1v-1b** is a compact language model fine-tuned from `TinyLlama-1.1B-Chat-v1.0` for educational reasoning tasks in both Portuguese and English. It focuses on logic, number theory, and mathematics, delivering fast inference with minimal compute requirements.
---
## Model Architecture
- **Base Model**: TinyLlama-1.1B-Chat
- **Fine-Tuning Strategy**: LoRA (applied to `q_proj` and `v_proj`)
- **Quantization**: 4-bit NF4 (via `bitsandbytes`; see the configuration sketch after this list)
- **Dataset**: [`HuggingFaceH4/MATH`](https://huggingface.co/datasets/HuggingFaceH4/MATH) — subset: `number_theory`
- **Max Tokens per Sample**: 512
- **Batch Size**: 20 per device
- **Epochs**: 3
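The snippet below is a minimal sketch of this setup using the standard `transformers`, `peft`, and `bitsandbytes` APIs. The LoRA rank and alpha, the learning rate, and the dataset config name are assumptions, since the card does not state them.

```python
# Minimal sketch of the fine-tuning setup described above.
# Values marked "assumed" are not stated on this card.
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    BitsAndBytesConfig,
    TrainingArguments,
)

# NF4 is a 4-bit data type, configured through bitsandbytes
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

base = AutoModelForCausalLM.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA adapters on the attention query and value projections
lora_config = LoraConfig(
    r=8,                                  # assumed rank
    lora_alpha=16,                        # assumed scaling factor
    target_modules=["q_proj", "v_proj"],  # as stated above
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)

# Dataset cited above; the config name is assumed to match the subset name
train_ds = load_dataset("HuggingFaceH4/MATH", "number_theory", split="train")

# Hyperparameters from the card: 3 epochs, batch size 20 per device
training_args = TrainingArguments(
    output_dir="lambda-1v-1b",
    per_device_train_batch_size=20,
    num_train_epochs=3,
    learning_rate=2e-4,                   # assumed
)
```

Tokenized samples would be truncated to the 512-token limit listed above before being passed to a `Trainer`.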
---
## Example Usage (Python)
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the fine-tuned model and tokenizer from the Hugging Face Hub
model = AutoModelForCausalLM.from_pretrained("lxcorp/lambda-1v-1b")
tokenizer = AutoTokenizer.from_pretrained("lxcorp/lambda-1v-1b")

# Portuguese prompt: "Problem: Prove that 17 is a prime number."
input_text = "Problema: Prove que 17 é um número primo."
inputs = tokenizer(input_text, return_tensors="pt")

# Generate up to 100 new tokens and decode the result to text
output = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
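Because the base model is a chat-tuned TinyLlama, wrapping the prompt in the tokenizer's chat template may yield better-formatted answers. This sketch assumes the fine-tuned tokenizer kept the base model's template:

```python
# Optional: apply the chat template inherited from TinyLlama-Chat
# (assumes the fine-tuned tokenizer still carries that template).
messages = [{"role": "user", "content": input_text}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```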
---
## About λχ Corp.
λχ Corp. is an indie tech corporation founded by Marius Jabami in Angola, focused on AI-driven educational tools, robotics, and lightweight software solutions. **lambda-1v-1b** is the first release in a planned series of educational LLMs optimized for reasoning, logic, and low-resource deployment.

Stay updated on the project at lxcorp.ai and huggingface.co/lxcorp.
---
*Developed with care by Marius Jabami. Powered by ambition, faith, and open source.*