---
pipeline_tag: sentence-similarity
language: es
license: apache-2.0
tags:
- passage-retrieval
- sentence-similarity
- pruned
library_name: sentence-transformers
base_model: cnmoro/snowflake-arctic-embed-m-v2.0-cpu
base_model_relation: quantized
---
# 🇪🇸 spanish-snowflake-arctic-embed-m-v2.0-cpu
This model is a 50.7% smaller version of [cnmoro/snowflake-arctic-embed-m-v2.0-cpu](https://huggingface.co/cnmoro/snowflake-arctic-embed-m-v2.0-cpu)
for the Spanish language, created using the [mtem-pruner](https://huggingface.co/spaces/antoinelouis/mtem-pruner) space.
This pruned model should perform similarly to the original for Spanish-language tasks while having a much smaller
memory footprint. However, it may not perform well for the other languages covered by the original multilingual model,
since tokens rarely used in Spanish were removed from its vocabulary.
## Usage
You can use this model with the Transformers library:
```python
from transformers import AutoModel, AutoTokenizer

model_name = "CarlosRCDev/spanish-snowflake-arctic-embed-m-v2.0-cpu"

# trust_remote_code=True mirrors the base model, which relies on custom modeling code
model = AutoModel.from_pretrained(model_name, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True, use_fast=True)
```
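Building on the `model` and `tokenizer` loaded above, you can turn the token-level outputs into sentence embeddings. The snippet below is a minimal sketch: the example sentences are hypothetical, and the CLS-token pooling with L2 normalization follows the convention of the snowflake-arctic-embed family, so double-check the base model card if it uses a different pooling strategy.
```python
import torch

# Hypothetical example sentences
sentences = ["Hola mundo", "Buenos días"]

inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Assumed pooling: take the [CLS] token embedding and L2-normalize it
embeddings = outputs.last_hidden_state[:, 0]
embeddings = torch.nn.functional.normalize(embeddings, p=2, dim=1)

# Cosine similarity between the two sentences
similarity = embeddings[0] @ embeddings[1]
print(similarity.item())
```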
Or with the sentence-transformers library:
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("CarlosRCDev/spanish-snowflake-arctic-embed-m-v2.0-cpu")
```
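Reusing the `model` loaded above, a quick end-to-end check could look like this (the Spanish sentences are just illustrative examples):
```python
from sentence_transformers import util

# Illustrative Spanish sentences
sentences = [
    "El gato duerme en el sofá.",
    "Un felino descansa en el sillón.",
    "Mañana lloverá en Madrid.",
]

# Encode and compare: semantically similar sentences should score higher
embeddings = model.encode(sentences)
similarities = util.cos_sim(embeddings, embeddings)
print(similarities)
```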
**Credits**: cc [@antoinelouis](https://huggingface.co/antoinelouis)