| modelId (string, len 5..139) | author (string, len 2..42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 .. 2025-09-08 19:17:42) | downloads (int64, 0..223M) | likes (int64, 0..11.7k) | library_name (string, 549 classes) | tags (list, len 1..4.05k) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 .. 2025-09-08 18:30:19) | card (string, len 11..1.01M) |
|---|---|---|---|---|---|---|---|---|---|
Prikshit7766/en_pipeline
|
Prikshit7766
| 2024-03-25T07:14:39Z | 3 | 0 |
spacy
|
[
"spacy",
"text-classification",
"en",
"dataset:imdb",
"region:us"
] |
text-classification
| 2023-08-17T11:53:22Z |
---
tags:
- spacy
- text-classification
language:
- en
model-index:
- name: en_pipeline
results: []
datasets:
- imdb
---
| Feature | Description |
| --- | --- |
| **Name** | `en_pipeline` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.5.1,<3.6.0` |
| **Default Pipeline** | `textcat` |
| **Components** | `textcat` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | n/a |
### Label Scheme
<details>
<summary>View label scheme (2 labels for 1 component)</summary>
| Component | Labels |
| --- | --- |
| **`textcat`** | `POS`, `NEG` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `CATS_SCORE` | 87.23 |
| `CATS_MICRO_P` | 87.24 |
| `CATS_MICRO_R` | 87.24 |
| `CATS_MICRO_F` | 87.24 |
| `CATS_MACRO_P` | 87.29 |
| `CATS_MACRO_R` | 87.24 |
| `CATS_MACRO_F` | 87.23 |
| `CATS_MACRO_AUC` | 93.90 |
| `TEXTCAT_LOSS` | 1499.40 |
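For orientation, the pipeline can be used like any packaged spaCy model; the snippet below is a minimal sketch that assumes the `en_pipeline` package has been installed locally (for example from its released wheel), and the example sentence is illustrative.
```python
# Minimal sketch: run the textcat component, assuming the en_pipeline
# package has been installed locally (e.g. from its released wheel).
import spacy

nlp = spacy.load("en_pipeline")
doc = nlp("This movie was an absolute delight from start to finish.")
print(doc.cats)  # scores for the POS / NEG labels
```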
|
LarryAIDraw/noa_bluearchive
|
LarryAIDraw
| 2024-03-25T07:14:10Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-11-26T08:06:18Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/122000?modelVersionId=156935
|
JunWorks/whisperBase_LoRA_Taigi
|
JunWorks
| 2024-03-25T07:14:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"endpoints_compatible",
"region:us"
] | null | 2024-03-22T11:36:38Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
Trained on Common Voice 16.1 Taigi (nan-tw), about 4 validated hours. <br>
Raw training logs are in screenlog.0. <br>
CER before fine-tuning: 100.45 (practically gibberish) <br>
CER after fine-tuning: 75.41 (understandable, since it was trained on only ~4 hours of data for a very niche language)<br>
<b>DEMO</b>: https://79ed50f9b823872678.gradio.live
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Model type:** peft lora whisper-base
- **Language(s) (NLP):** Taigi (nan-tw)
- **Finetuned from model [optional]:** openai/whisper-base
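Since this is a PEFT LoRA adapter on top of openai/whisper-base, loading it could look roughly like the sketch below, assuming the repo hosts a standard PEFT adapter; the transcription step is only outlined in comments.
```python
# Minimal sketch, assuming this repo hosts a standard PEFT (LoRA) adapter
# for openai/whisper-base; the repo id is taken from this card, everything else is illustrative.
import torch
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-base")
model = PeftModel.from_pretrained(base, "JunWorks/whisperBase_LoRA_Taigi")
processor = WhisperProcessor.from_pretrained("openai/whisper-base")

# audio: a 16 kHz mono waveform as a 1-D float array
# features = processor(audio, sampling_rate=16_000, return_tensors="pt").input_features
# with torch.no_grad():
#     ids = model.generate(features)
# print(processor.batch_decode(ids, skip_special_tokens=True))
```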
### Training results
| Training Loss | Epoch | Validation Loss |
|:-------------:|:-----:|:---------------:|
| 2.0245 | 1.0 | 2.0332 |
| 1.7279 | 2.0 | 1.8450 |
| 1.6811 | 3.0 | 1.7509 |
| 1.6153 | 4.0 | 1.6890 |
| 1.5804 | 5.0 | 1.6443 |
| 1.5382 | 6.0 | 1.6113 |
| 1.5547 | 7.0 | 1.5843 |
| 1.5229 | 8.0 | 1.5626 |
| 1.4683 | 9.0 | 1.5455 |
| 1.4458 | 10.0 | 1.5330 |
| 1.4628 | 11.0 | 1.5210 |
| 1.4278 | 12.0 | 1.5136 |
| 1.4231 | 13.0 | 1.5046 |
| 1.3957 | 14.0 | 1.4997 |
| 1.4187 | 15.0 | 1.4950 |
| 1.4219 | 16.0 | 1.4936 |
| 1.5940 | 17.0 | 1.49281 |
|
flammenai/flammen13-mistral-7B
|
flammenai
| 2024-03-25T07:06:22Z | 8 | 2 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:automerger/OgnoExperiment27-7B",
"base_model:merge:automerger/OgnoExperiment27-7B",
"base_model:flammenai/flammen12-mistral-7B",
"base_model:merge:flammenai/flammen12-mistral-7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-25T00:28:19Z |
---
license: apache-2.0
base_model:
- nbeerbower/flammen12-mistral-7B
- automerger/OgnoExperiment27-7B
library_name: transformers
tags:
- mergekit
- merge
---
# flammen13-mistral-7B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [nbeerbower/flammen12-mistral-7B](https://huggingface.co/nbeerbower/flammen12-mistral-7B)
* [automerger/OgnoExperiment27-7B](https://huggingface.co/automerger/OgnoExperiment27-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: nbeerbower/flammen12-mistral-7B
    layer_range: [0, 32]
  - model: automerger/OgnoExperiment27-7B
    layer_range: [0, 32]
merge_method: slerp
base_model: nbeerbower/flammen12-mistral-7B
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
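For reference, the merged checkpoint can be loaded with plain 🤗 Transformers; the snippet below is a minimal sketch, and the prompt and sampling settings are illustrative assumptions rather than values from this card.
```python
# Minimal sketch: load the merged model and generate text.
# Sampling parameters are illustrative assumptions, not taken from this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "flammenai/flammen13-mistral-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

inputs = tokenizer("SLERP merging blends two models by", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```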
|
hiwei/bert-finetuned-sst2
|
hiwei
| 2024-03-25T07:03:28Z | 113 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-25T06:09:22Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-finetuned-sst2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-sst2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4130
- Accuracy: 0.9071
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2799 | 1.0 | 8419 | 0.4068 | 0.8888 |
| 0.2054 | 2.0 | 16838 | 0.4117 | 0.8991 |
| 0.1146 | 3.0 | 25257 | 0.4130 | 0.9071 |
### Framework versions
- Transformers 4.39.1
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
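For reference, the fine-tuned checkpoint can be used with the standard text-classification pipeline; the snippet below is a minimal sketch, and the example sentence is an illustrative assumption.
```python
# Minimal sketch: run the fine-tuned checkpoint with the text-classification pipeline.
# The example input is an assumption; label names depend on how the model was exported.
from transformers import pipeline

classifier = pipeline("text-classification", model="hiwei/bert-finetuned-sst2")
print(classifier("A thoughtful, beautifully acted film."))
```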
|
mahiatlinux/ShadowDolph-7B-v1
|
mahiatlinux
| 2024-03-25T06:59:55Z | 151 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"mahiatlinux/merged1and2-and-dolphin",
"automerger/YamShadow-7B",
"conversational",
"en",
"base_model:automerger/YamShadow-7B",
"base_model:merge:automerger/YamShadow-7B",
"base_model:mahiatlinux/merged1and2-and-dolphin",
"base_model:merge:mahiatlinux/merged1and2-and-dolphin",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-18T06:33:43Z |
---
tags:
- merge
- mergekit
- lazymergekit
- mahiatlinux/merged1and2-and-dolphin
- automerger/YamShadow-7B
base_model:
- mahiatlinux/merged1and2-and-dolphin
- automerger/YamShadow-7B
license: apache-2.0
language:
- en
---
# ShadowDolph 7B v1
merged1and2-and-dolphin-and-yamshadow is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mahiatlinux/merged1and2-and-dolphin](https://huggingface.co/mahiatlinux/merged1and2-and-dolphin)
* [automerger/YamShadow-7B](https://huggingface.co/automerger/YamShadow-7B)
## 🧩 Configuration
```yaml
slices:
  - sources:
      - model: mahiatlinux/merged1and2-and-dolphin
        layer_range: [0, 32]
      - model: automerger/YamShadow-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: mahiatlinux/merged1and2-and-dolphin
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "mahiatlinux/merged1and2-and-dolphin-and-yamshadow"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
simpragma/whisper-tiny-kannada-collection-sales-stt_logs
|
simpragma
| 2024-03-25T06:52:30Z | 116 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"kn",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-03-25T06:42:58Z |
---
language:
- kn
metrics:
- wer
---
|
harsh290198/stable-diffusion-xl-for-female-models
|
harsh290198
| 2024-03-25T06:52:19Z | 4 | 0 |
diffusers
|
[
"diffusers",
"autotrain",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-03-24T21:33:52Z |
---
tags:
- autotrain
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of a girl model
license: openrail++
---
# AutoTrain SDXL LoRA DreamBooth - harsh290198/stable-diffusion-xl-for-female-models
<Gallery />
## Model description
These are harsh290198/stable-diffusion-xl-for-female-models LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: None.
## Trigger words
You should use `photo of a girl model` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](harsh290198/stable-diffusion-xl-for-female-models/tree/main) them in the Files & versions tab.
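As a rough guide, the LoRA weights can be applied on top of the SDXL base model with diffusers; the sketch below is a minimal outline, and the inference settings are illustrative assumptions rather than values from this card.
```python
# Minimal sketch: apply these LoRA weights to the SDXL base model with diffusers.
# The number of inference steps is an illustrative assumption, not taken from this card.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("harsh290198/stable-diffusion-xl-for-female-models")

image = pipe("photo of a girl model", num_inference_steps=25).images[0]
image.save("sample.png")
```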
|
apexmin/poop_emoji
|
apexmin
| 2024-03-25T06:52:13Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-03-13T01:56:31Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of sks toy
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - apexmin/poop_emoji
This is a DreamBooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks toy using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.




DreamBooth for the text encoder was enabled: False.
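For reference, the checkpoint can be loaded directly as a StableDiffusionPipeline; the snippet below is a minimal sketch with illustrative settings.
```python
# Minimal sketch: generate with this DreamBooth fine-tune via diffusers.
# The inference settings are illustrative assumptions, not taken from this card.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("apexmin/poop_emoji", torch_dtype=torch.float16).to("cuda")
image = pipe("a photo of sks toy", num_inference_steps=30).images[0]
image.save("sks_toy.png")
```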
|
xumeng/banaba
|
xumeng
| 2024-03-25T06:41:31Z | 0 | 0 |
allennlp
|
[
"allennlp",
"biology",
"chemistry",
"code",
"text-to-speech",
"ab",
"dataset:storytracer/US-PD-Books",
"arxiv:1910.09700",
"license:mit",
"region:us"
] |
text-to-speech
| 2024-03-25T03:50:05Z |
---
license: mit
datasets:
- storytracer/US-PD-Books
language:
- ab
metrics:
- bleu
library_name: allennlp
pipeline_tag: text-to-speech
tags:
- biology
- chemistry
- code
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
KasaiDanto/GVI
|
KasaiDanto
| 2024-03-25T06:34:35Z | 0 | 0 |
transformers
|
[
"transformers",
"text-generation",
"vi",
"en",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-24T19:14:40Z |
---
language:
- vi
- en
library_name: transformers
pipeline_tag: text-generation
---
|
mfidabel/Modelo_4_Whisper_Tiny
|
mfidabel
| 2024-03-25T06:34:25Z | 4 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:openai/whisper-tiny",
"base_model:adapter:openai/whisper-tiny",
"license:apache-2.0",
"region:us"
] | null | 2024-03-25T04:02:53Z |
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: openai/whisper-tiny
model-index:
- name: Modelo_4_Whisper_Tiny
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Modelo_4_Whisper_Tiny
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2120
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.8758 | 1.0 | 1174 | 1.2410 |
| 0.773 | 2.0 | 2348 | 1.1992 |
| 0.7368 | 3.0 | 3522 | 1.2029 |
| 0.6759 | 4.0 | 4696 | 1.1813 |
| 0.588 | 5.0 | 5870 | 1.2120 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.0+cu118
- Datasets 2.16.1
- Tokenizers 0.15.2
|
KaQyn/peft-lora-CodeLlama-13b-flutter-copilot
|
KaQyn
| 2024-03-25T06:16:57Z | 4 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:codellama/CodeLlama-13b-Instruct-hf",
"base_model:adapter:codellama/CodeLlama-13b-Instruct-hf",
"region:us"
] | null | 2024-03-23T08:50:53Z |
---
library_name: peft
tags:
- generated_from_trainer
base_model: codellama/CodeLlama-13b-Instruct-hf
model-index:
- name: peft-lora-CodeLlama-13b-flutter-copilot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# peft-lora-CodeLlama-13b-flutter-copilot
This model is a fine-tuned version of [codellama/CodeLlama-13b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3620
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 2000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.7824 | 0.05 | 100 | 0.4186 |
| 0.3055 | 0.1 | 200 | 0.4164 |
| 0.4455 | 0.15 | 300 | 0.4134 |
| 0.3148 | 0.2 | 400 | 0.3762 |
| 0.2942 | 0.25 | 500 | 0.3780 |
| 0.8817 | 0.3 | 600 | 0.3760 |
| 0.4958 | 0.35 | 700 | 0.3738 |
| 0.4388 | 0.4 | 800 | 0.3710 |
| 0.3605 | 0.45 | 900 | 0.3698 |
| 0.2862 | 0.5 | 1000 | 0.3673 |
| 3.4798 | 0.55 | 1100 | 0.3687 |
| 3.3077 | 0.6 | 1200 | 0.3703 |
| 0.4847 | 0.65 | 1300 | 0.3666 |
| 0.3593 | 0.7 | 1400 | 0.3662 |
| 0.5983 | 0.75 | 1500 | 0.3654 |
| 0.6138 | 0.8 | 1600 | 0.3638 |
| 0.403 | 0.85 | 1700 | 0.3635 |
| 0.4199 | 0.9 | 1800 | 0.3632 |
| 0.3526 | 0.95 | 1900 | 0.3621 |
| 0.492 | 1.0 | 2000 | 0.3620 |
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.40.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
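Since this is a PEFT LoRA adapter for codellama/CodeLlama-13b-Instruct-hf, loading it could look roughly like the sketch below, assuming the repo hosts a standard PEFT adapter; the prompt and generation settings are illustrative.
```python
# Minimal sketch, assuming this repo hosts a standard PEFT (LoRA) adapter
# for codellama/CodeLlama-13b-Instruct-hf; the prompt is illustrative.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "codellama/CodeLlama-13b-Instruct-hf"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, "KaQyn/peft-lora-CodeLlama-13b-flutter-copilot")

inputs = tokenizer("// Flutter widget that shows a centered loading spinner\n", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```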
|
lcz529959/zhun02
|
lcz529959
| 2024-03-25T06:16:13Z | 138 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"base_model:Sumail/zhun03",
"base_model:merge:Sumail/zhun03",
"base_model:lcz529959/CopyLucia",
"base_model:merge:lcz529959/CopyLucia",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-25T06:14:45Z |
---
base_model:
- Sumail/zhun03
- lcz529959/CopyLucia
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [Sumail/zhun03](https://huggingface.co/Sumail/zhun03)
* [lcz529959/CopyLucia](https://huggingface.co/lcz529959/CopyLucia)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: Sumail/zhun03
        layer_range: [0, 12]
      - model: lcz529959/CopyLucia
        layer_range: [0, 12]
merge_method: slerp
base_model: lcz529959/CopyLucia
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.1
dtype: float32
```
|
N0de/ppo-Huggy
|
N0de
| 2024-03-25T06:05:54Z | 4 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2024-03-25T06:05:32Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: N0de/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
varunril/odia_transcript_generation
|
varunril
| 2024-03-25T05:55:19Z | 1 | 0 |
transformers
|
[
"transformers",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"or",
"dataset:OpenSLR",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-03-22T09:37:22Z |
---
language: or
datasets:
- OpenSLR
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Odia by Shyam Sunder Kumar
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: OpenSLR
type: OpenSLR
args: or
metrics:
- name: Test WER
type: wer
value: 68.75
---
# Wav2Vec2-Large-XLSR-53-Odia
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Odia using data from the [Multilingual and code-switching ASR challenges for low-resource Indian languages](https://navana-tech.github.io/IS21SS-indicASRchallenge/data.html).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "or", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("theainerd/wav2vec2-large-xlsr-53-odia")
model = Wav2Vec2ForCTC.from_pretrained("theainerd/wav2vec2-large-xlsr-53-odia")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocess the dataset: read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Odia test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "or", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("theainerd/wav2vec2-large-xlsr-53-odia")
model = Wav2Vec2ForCTC.from_pretrained("theainerd/wav2vec2-large-xlsr-53-odia")
model.to("cuda")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocess the dataset: normalize the transcripts and read the audio files as arrays
# NOTE: chars_to_ignore_regex is not defined in the original card; the pattern below is an assumed, typical choice
chars_to_ignore_regex = r'[,?.!;:"-]'
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run batched inference on the test set and decode the predictions
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 68.75 %
## Training
The script used for training can be found at [Odia ASR Fine Tuning Wav2Vec2](https://colab.research.google.com/drive/1aHpFRTxaBeNblRHAtYOy0hBeXbbMWtot?usp=sharing).
|
apexmin/monster_toy
|
apexmin
| 2024-03-25T05:50:55Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-03-13T01:35:29Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of sks toy
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - apexmin/monster_toy
This is a DreamBooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks toy using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.




DreamBooth for the text encoder was enabled: False.
|
fanJ666/sd-class-butterflies-32
|
fanJ666
| 2024-03-25T05:37:29Z | 46 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2024-03-25T05:36:40Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('fanJ666/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
OpenBuddy/openbuddy-qwen1.5-14b-v20.1-32k
|
OpenBuddy
| 2024-03-25T05:33:22Z | 50 | 1 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"zh",
"en",
"fr",
"de",
"ja",
"ko",
"it",
"ru",
"fi",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-03-21T09:18:25Z |
---
license: other
license_name: tongyi-qianwen-license-agreement
license_link: >-
https://huggingface.co/Qwen/Qwen1.5-14B/blob/39b74a78357df4d2296e838d87565967d663a67a/LICENSE
language:
- zh
- en
- fr
- de
- ja
- ko
- it
- ru
- fi
pipeline_tag: text-generation
inference: false
library_name: transformers
---
# OpenBuddy - Open Multilingual Chatbot
GitHub and Usage Guide: [https://github.com/OpenBuddy/OpenBuddy](https://github.com/OpenBuddy/OpenBuddy)
Website and Demo: [https://openbuddy.ai](https://openbuddy.ai)
Evaluation result of this model: [Evaluation.txt](Evaluation.txt)

# Copyright Notice
Base model: https://huggingface.co/Qwen/Qwen1.5-14B
License: Qwen: https://huggingface.co/Qwen/Qwen1.5-14B/blob/39b74a78357df4d2296e838d87565967d663a67a/LICENSE
## Disclaimer
All OpenBuddy models have inherent limitations and may potentially produce outputs that are erroneous, harmful, offensive, or otherwise undesirable. Users should not use these models in critical or high-stakes situations that may lead to personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, controlling software and hardware systems that may cause harm, and making important financial or legal decisions.
OpenBuddy is provided "as-is" without any warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liabilities, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software.
By using OpenBuddy, you agree to these terms and conditions, and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy.
## Disclaimer
All OpenBuddy models have inherent limitations and may produce erroneous, harmful, offensive, or otherwise undesirable outputs. Users should act with caution and should not use these models in critical or high-stakes scenarios that could lead to personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, the control of software and hardware systems that may cause harm, and making important financial or legal decisions.
OpenBuddy is provided "as is" without warranty of any kind, either express or implied, including but not limited to the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liability, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use of or other dealings in the software.
By using OpenBuddy, you agree to these terms and conditions and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy.
|
arcdev/SFR-Embedding-Mistral
|
arcdev
| 2024-03-25T05:25:04Z | 9 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"mistral",
"feature-extraction",
"mteb",
"transformers",
"en",
"arxiv:2210.07316",
"arxiv:2310.06825",
"arxiv:2401.00368",
"arxiv:2104.08663",
"license:cc-by-nc-4.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2024-03-25T05:25:04Z |
---
tags:
- mteb
- sentence-transformers
- transformers
model-index:
- name: SFR-Embedding-Mistral
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 77.92537313432834
- type: ap
value: 40.86767661556651
- type: f1
value: 71.65758897929837
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 95.967
- type: ap
value: 94.46300829592593
- type: f1
value: 95.96507173189292
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 54.352000000000004
- type: f1
value: 53.636682615380174
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: None
metrics:
- type: ndcg_at_1
value: 43.314
- type: ndcg_at_2
value: 54.757
- type: ndcg_at_3
value: 58.84700000000001
- type: ndcg_at_5
value: 63.634
- type: ndcg_at_7
value: 65.741
- type: ndcg_at_10
value: 67.171
- type: ndcg_at_20
value: 68.585
- type: ndcg_at_30
value: 68.81
- type: ndcg_at_50
value: 68.932
- type: ndcg_at_70
value: 68.992
- type: ndcg_at_100
value: 69.014
- type: ndcg_at_200
value: 69.014
- type: ndcg_at_300
value: 69.014
- type: ndcg_at_500
value: 69.014
- type: ndcg_at_700
value: 69.014
- type: ndcg_at_1000
value: 69.014
- type: map_at_1
value: 43.314
- type: map_at_2
value: 52.383
- type: map_at_3
value: 55.108999999999995
- type: map_at_5
value: 57.772999999999996
- type: map_at_7
value: 58.718
- type: map_at_10
value: 59.256
- type: map_at_20
value: 59.668
- type: map_at_30
value: 59.709999999999994
- type: map_at_50
value: 59.727
- type: map_at_70
value: 59.733999999999995
- type: map_at_100
value: 59.73500000000001
- type: map_at_200
value: 59.73500000000001
- type: map_at_300
value: 59.73500000000001
- type: map_at_500
value: 59.73500000000001
- type: map_at_700
value: 59.73500000000001
- type: map_at_1000
value: 59.73500000000001
- type: recall_at_1
value: 43.314
- type: recall_at_2
value: 61.451
- type: recall_at_3
value: 69.63000000000001
- type: recall_at_5
value: 81.223
- type: recall_at_7
value: 87.33999999999999
- type: recall_at_10
value: 92.034
- type: recall_at_20
value: 97.44
- type: recall_at_30
value: 98.506
- type: recall_at_50
value: 99.14699999999999
- type: recall_at_70
value: 99.502
- type: recall_at_100
value: 99.644
- type: recall_at_200
value: 99.644
- type: recall_at_300
value: 99.644
- type: recall_at_500
value: 99.644
- type: recall_at_700
value: 99.644
- type: recall_at_1000
value: 99.644
- type: precision_at_1
value: 43.314
- type: precision_at_2
value: 30.725
- type: precision_at_3
value: 23.21
- type: precision_at_5
value: 16.245
- type: precision_at_7
value: 12.477
- type: precision_at_10
value: 9.203
- type: precision_at_20
value: 4.872
- type: precision_at_30
value: 3.2840000000000003
- type: precision_at_50
value: 1.983
- type: precision_at_70
value: 1.421
- type: precision_at_100
value: 0.996
- type: precision_at_200
value: 0.498
- type: precision_at_300
value: 0.332
- type: precision_at_500
value: 0.199
- type: precision_at_700
value: 0.14200000000000002
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 44.666
- type: mrr_at_2
value: 52.418
- type: mrr_at_3
value: 55.595000000000006
- type: mrr_at_5
value: 58.205
- type: mrr_at_7
value: 59.202999999999996
- type: mrr_at_10
value: 59.727
- type: mrr_at_20
value: 60.133
- type: mrr_at_30
value: 60.178
- type: mrr_at_50
value: 60.192
- type: mrr_at_70
value: 60.19799999999999
- type: mrr_at_100
value: 60.199999999999996
- type: mrr_at_200
value: 60.199999999999996
- type: mrr_at_300
value: 60.199999999999996
- type: mrr_at_500
value: 60.199999999999996
- type: mrr_at_700
value: 60.199999999999996
- type: mrr_at_1000
value: 60.199999999999996
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 52.07508593014336
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 47.381339333240675
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 67.58376647859171
- type: mrr
value: 80.56885635140483
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 88.40107280274783
- type: cos_sim_spearman
value: 86.07003345325681
- type: euclidean_pearson
value: 87.1726034325395
- type: euclidean_spearman
value: 86.07003345325681
- type: manhattan_pearson
value: 87.25660625029772
- type: manhattan_spearman
value: 86.3808839096893
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 88.81168831168831
- type: f1
value: 88.76514496560141
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 43.9382520874344
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 41.14351847240913
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: None
metrics:
- type: ndcg_at_1
value: 34.51166666666667
- type: ndcg_at_2
value: 38.51591666666667
- type: ndcg_at_3
value: 40.95083333333333
- type: ndcg_at_5
value: 43.580666666666666
- type: ndcg_at_7
value: 45.0625
- type: ndcg_at_10
value: 46.49083333333333
- type: ndcg_at_20
value: 48.731333333333325
- type: ndcg_at_30
value: 49.78666666666667
- type: ndcg_at_50
value: 50.84049999999999
- type: ndcg_at_70
value: 51.393750000000004
- type: ndcg_at_100
value: 51.883333333333326
- type: ndcg_at_200
value: 52.65225
- type: ndcg_at_300
value: 52.98241666666669
- type: ndcg_at_500
value: 53.28541666666668
- type: ndcg_at_700
value: 53.49241666666668
- type: ndcg_at_1000
value: 53.63758333333334
- type: map_at_1
value: 29.10075
- type: map_at_2
value: 34.636500000000005
- type: map_at_3
value: 36.92033333333333
- type: map_at_5
value: 38.81641666666666
- type: map_at_7
value: 39.635416666666664
- type: map_at_10
value: 40.294583333333335
- type: map_at_20
value: 41.07574999999999
- type: map_at_30
value: 41.333
- type: map_at_50
value: 41.529333333333334
- type: map_at_70
value: 41.606833333333334
- type: map_at_100
value: 41.66224999999999
- type: map_at_200
value: 41.72691666666666
- type: map_at_300
value: 41.746583333333334
- type: map_at_500
value: 41.75983333333333
- type: map_at_700
value: 41.76558333333333
- type: map_at_1000
value: 41.769000000000005
- type: recall_at_1
value: 29.10075
- type: recall_at_2
value: 39.07658333333333
- type: recall_at_3
value: 44.93591666666667
- type: recall_at_5
value: 51.66883333333333
- type: recall_at_7
value: 55.881000000000014
- type: recall_at_10
value: 60.34691666666667
- type: recall_at_20
value: 68.44016666666667
- type: recall_at_30
value: 72.90766666666667
- type: recall_at_50
value: 77.843
- type: recall_at_70
value: 80.70366666666668
- type: recall_at_100
value: 83.42866666666667
- type: recall_at_200
value: 88.06816666666668
- type: recall_at_300
value: 90.249
- type: recall_at_500
value: 92.37616666666668
- type: recall_at_700
value: 93.978
- type: recall_at_1000
value: 95.12791666666666
- type: precision_at_1
value: 34.51166666666667
- type: precision_at_2
value: 24.326333333333327
- type: precision_at_3
value: 19.099249999999998
- type: precision_at_5
value: 13.672666666666666
- type: precision_at_7
value: 10.772
- type: precision_at_10
value: 8.302166666666668
- type: precision_at_20
value: 4.8960833333333325
- type: precision_at_30
value: 3.551083333333333
- type: precision_at_50
value: 2.3386666666666662
- type: precision_at_70
value: 1.7605833333333334
- type: precision_at_100
value: 1.2965
- type: precision_at_200
value: 0.7106666666666668
- type: precision_at_300
value: 0.4955
- type: precision_at_500
value: 0.3106666666666667
- type: precision_at_700
value: 0.22791666666666668
- type: precision_at_1000
value: 0.1635833333333333
- type: mrr_at_1
value: 34.51166666666667
- type: mrr_at_2
value: 39.954249999999995
- type: mrr_at_3
value: 41.93741666666668
- type: mrr_at_5
value: 43.487166666666674
- type: mrr_at_7
value: 44.14983333333333
- type: mrr_at_10
value: 44.62766666666666
- type: mrr_at_20
value: 45.15291666666668
- type: mrr_at_30
value: 45.317
- type: mrr_at_50
value: 45.42875
- type: mrr_at_70
value: 45.46966666666667
- type: mrr_at_100
value: 45.49716666666667
- type: mrr_at_200
value: 45.525166666666664
- type: mrr_at_300
value: 45.53233333333335
- type: mrr_at_500
value: 45.5365
- type: mrr_at_700
value: 45.538583333333335
- type: mrr_at_1000
value: 45.539583333333326
- task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: None
metrics:
- type: ndcg_at_1
value: 35.179
- type: ndcg_at_2
value: 31.243
- type: ndcg_at_3
value: 30.562
- type: ndcg_at_5
value: 32.409
- type: ndcg_at_7
value: 34.525
- type: ndcg_at_10
value: 36.415
- type: ndcg_at_20
value: 39.443
- type: ndcg_at_30
value: 40.796
- type: ndcg_at_50
value: 42.16
- type: ndcg_at_70
value: 42.971
- type: ndcg_at_100
value: 43.691
- type: ndcg_at_200
value: 45.004
- type: ndcg_at_300
value: 45.527
- type: ndcg_at_500
value: 46.072
- type: ndcg_at_700
value: 46.387
- type: ndcg_at_1000
value: 46.663
- type: map_at_1
value: 15.692
- type: map_at_2
value: 20.116
- type: map_at_3
value: 22.6
- type: map_at_5
value: 24.701
- type: map_at_7
value: 25.934
- type: map_at_10
value: 26.843
- type: map_at_20
value: 27.975
- type: map_at_30
value: 28.372000000000003
- type: map_at_50
value: 28.671000000000003
- type: map_at_70
value: 28.803
- type: map_at_100
value: 28.895
- type: map_at_200
value: 29.011
- type: map_at_300
value: 29.042
- type: map_at_500
value: 29.065
- type: map_at_700
value: 29.075
- type: map_at_1000
value: 29.081000000000003
- type: recall_at_1
value: 15.692
- type: recall_at_2
value: 22.602
- type: recall_at_3
value: 27.814
- type: recall_at_5
value: 33.756
- type: recall_at_7
value: 38.073
- type: recall_at_10
value: 42.553000000000004
- type: recall_at_20
value: 51.121
- type: recall_at_30
value: 55.523999999999994
- type: recall_at_50
value: 60.586
- type: recall_at_70
value: 63.94
- type: recall_at_100
value: 67.134
- type: recall_at_200
value: 73.543
- type: recall_at_300
value: 76.372
- type: recall_at_500
value: 79.60199999999999
- type: recall_at_700
value: 81.536
- type: recall_at_1000
value: 83.37400000000001
- type: precision_at_1
value: 35.179
- type: precision_at_2
value: 27.199
- type: precision_at_3
value: 22.953000000000003
- type: precision_at_5
value: 17.224999999999998
- type: precision_at_7
value: 14.238999999999999
- type: precision_at_10
value: 11.303
- type: precision_at_20
value: 6.954000000000001
- type: precision_at_30
value: 5.116
- type: precision_at_50
value: 3.395
- type: precision_at_70
value: 2.579
- type: precision_at_100
value: 1.9109999999999998
- type: precision_at_200
value: 1.065
- type: precision_at_300
value: 0.743
- type: precision_at_500
value: 0.46699999999999997
- type: precision_at_700
value: 0.344
- type: precision_at_1000
value: 0.247
- type: mrr_at_1
value: 35.179
- type: mrr_at_2
value: 41.792
- type: mrr_at_3
value: 44.484
- type: mrr_at_5
value: 46.39
- type: mrr_at_7
value: 47.125
- type: mrr_at_10
value: 47.711999999999996
- type: mrr_at_20
value: 48.214
- type: mrr_at_30
value: 48.325
- type: mrr_at_50
value: 48.392
- type: mrr_at_70
value: 48.418
- type: mrr_at_100
value: 48.44
- type: mrr_at_200
value: 48.46
- type: mrr_at_300
value: 48.461999999999996
- type: mrr_at_500
value: 48.466
- type: mrr_at_700
value: 48.466
- type: mrr_at_1000
value: 48.467
- task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: None
metrics:
- type: ndcg_at_1
value: 62.375
- type: ndcg_at_2
value: 56.286
- type: ndcg_at_3
value: 53.665
- type: ndcg_at_5
value: 51.139
- type: ndcg_at_7
value: 49.873
- type: ndcg_at_10
value: 49.056
- type: ndcg_at_20
value: 48.783
- type: ndcg_at_30
value: 49.166
- type: ndcg_at_50
value: 51.141999999999996
- type: ndcg_at_70
value: 52.774
- type: ndcg_at_100
value: 54.403
- type: ndcg_at_200
value: 57.419
- type: ndcg_at_300
value: 58.778
- type: ndcg_at_500
value: 60.228
- type: ndcg_at_700
value: 61.07599999999999
- type: ndcg_at_1000
value: 61.846000000000004
- type: map_at_1
value: 10.359
- type: map_at_2
value: 14.446
- type: map_at_3
value: 16.689
- type: map_at_5
value: 20.096
- type: map_at_7
value: 22.247
- type: map_at_10
value: 24.468999999999998
- type: map_at_20
value: 28.938000000000002
- type: map_at_30
value: 31.134
- type: map_at_50
value: 33.403
- type: map_at_70
value: 34.486
- type: map_at_100
value: 35.337
- type: map_at_200
value: 36.364999999999995
- type: map_at_300
value: 36.735
- type: map_at_500
value: 37.057
- type: map_at_700
value: 37.225
- type: map_at_1000
value: 37.379
- type: recall_at_1
value: 10.359
- type: recall_at_2
value: 14.945
- type: recall_at_3
value: 17.694
- type: recall_at_5
value: 22.677
- type: recall_at_7
value: 26.131
- type: recall_at_10
value: 30.053
- type: recall_at_20
value: 39.518
- type: recall_at_30
value: 44.925
- type: recall_at_50
value: 52.154
- type: recall_at_70
value: 56.729
- type: recall_at_100
value: 61.18900000000001
- type: recall_at_200
value: 70.407
- type: recall_at_300
value: 74.412
- type: recall_at_500
value: 78.891
- type: recall_at_700
value: 81.74
- type: recall_at_1000
value: 84.253
- type: precision_at_1
value: 75
- type: precision_at_2
value: 64.125
- type: precision_at_3
value: 57.833
- type: precision_at_5
value: 50.24999999999999
- type: precision_at_7
value: 44.75
- type: precision_at_10
value: 39.75
- type: precision_at_20
value: 30.412
- type: precision_at_30
value: 25.141999999999996
- type: precision_at_50
value: 19.2
- type: precision_at_70
value: 15.729000000000001
- type: precision_at_100
value: 12.552
- type: precision_at_200
value: 7.866
- type: precision_at_300
value: 5.9270000000000005
- type: precision_at_500
value: 4.1129999999999995
- type: precision_at_700
value: 3.2460000000000004
- type: precision_at_1000
value: 2.5260000000000002
- type: mrr_at_1
value: 75
- type: mrr_at_2
value: 78.625
- type: mrr_at_3
value: 79.708
- type: mrr_at_5
value: 80.446
- type: mrr_at_7
value: 80.862
- type: mrr_at_10
value: 81.161
- type: mrr_at_20
value: 81.3
- type: mrr_at_30
value: 81.348
- type: mrr_at_50
value: 81.361
- type: mrr_at_70
value: 81.361
- type: mrr_at_100
value: 81.361
- type: mrr_at_200
value: 81.367
- type: mrr_at_300
value: 81.367
- type: mrr_at_500
value: 81.368
- type: mrr_at_700
value: 81.368
- type: mrr_at_1000
value: 81.368
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 50.239999999999995
- type: f1
value: 46.42361822342044
- task:
type: Retrieval
dataset:
type: fever
name: MTEB FEVER
config: default
split: test
revision: None
metrics:
- type: ndcg_at_1
value: 83.723
- type: ndcg_at_2
value: 86.777
- type: ndcg_at_3
value: 87.997
- type: ndcg_at_5
value: 88.864
- type: ndcg_at_7
value: 89.143
- type: ndcg_at_10
value: 89.349
- type: ndcg_at_20
value: 89.709
- type: ndcg_at_30
value: 89.82900000000001
- type: ndcg_at_50
value: 89.923
- type: ndcg_at_70
value: 89.982
- type: ndcg_at_100
value: 90.026
- type: ndcg_at_200
value: 90.10000000000001
- type: ndcg_at_300
value: 90.12599999999999
- type: ndcg_at_500
value: 90.17399999999999
- type: ndcg_at_700
value: 90.19
- type: ndcg_at_1000
value: 90.208
- type: map_at_1
value: 77.64999999999999
- type: map_at_2
value: 83.769
- type: map_at_3
value: 85.041
- type: map_at_5
value: 85.736
- type: map_at_7
value: 85.924
- type: map_at_10
value: 86.032
- type: map_at_20
value: 86.177
- type: map_at_30
value: 86.213
- type: map_at_50
value: 86.233
- type: map_at_70
value: 86.24300000000001
- type: map_at_100
value: 86.249
- type: map_at_200
value: 86.256
- type: map_at_300
value: 86.258
- type: map_at_500
value: 86.26
- type: map_at_700
value: 86.26
- type: map_at_1000
value: 86.261
- type: recall_at_1
value: 77.64999999999999
- type: recall_at_2
value: 88.53999999999999
- type: recall_at_3
value: 91.696
- type: recall_at_5
value: 93.916
- type: recall_at_7
value: 94.731
- type: recall_at_10
value: 95.318
- type: recall_at_20
value: 96.507
- type: recall_at_30
value: 96.956
- type: recall_at_50
value: 97.34899999999999
- type: recall_at_70
value: 97.61
- type: recall_at_100
value: 97.83
- type: recall_at_200
value: 98.223
- type: recall_at_300
value: 98.374
- type: recall_at_500
value: 98.67899999999999
- type: recall_at_700
value: 98.787
- type: recall_at_1000
value: 98.919
- type: precision_at_1
value: 83.723
- type: precision_at_2
value: 48.425000000000004
- type: precision_at_3
value: 33.638
- type: precision_at_5
value: 20.843
- type: precision_at_7
value: 15.079
- type: precision_at_10
value: 10.674999999999999
- type: precision_at_20
value: 5.457999999999999
- type: precision_at_30
value: 3.6740000000000004
- type: precision_at_50
value: 2.2239999999999998
- type: precision_at_70
value: 1.599
- type: precision_at_100
value: 1.125
- type: precision_at_200
value: 0.5680000000000001
- type: precision_at_300
value: 0.38
- type: precision_at_500
value: 0.22999999999999998
- type: precision_at_700
value: 0.165
- type: precision_at_1000
value: 0.116
- type: mrr_at_1
value: 83.723
- type: mrr_at_2
value: 88.794
- type: mrr_at_3
value: 89.679
- type: mrr_at_5
value: 90.049
- type: mrr_at_7
value: 90.129
- type: mrr_at_10
value: 90.167
- type: mrr_at_20
value: 90.208
- type: mrr_at_30
value: 90.214
- type: mrr_at_50
value: 90.217
- type: mrr_at_70
value: 90.218
- type: mrr_at_100
value: 90.21900000000001
- type: mrr_at_200
value: 90.21900000000001
- type: mrr_at_300
value: 90.21900000000001
- type: mrr_at_500
value: 90.21900000000001
- type: mrr_at_700
value: 90.21900000000001
- type: mrr_at_1000
value: 90.21900000000001
- task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: None
metrics:
- type: ndcg_at_1
value: 59.721999999999994
- type: ndcg_at_2
value: 56.85
- type: ndcg_at_3
value: 56.462999999999994
- type: ndcg_at_5
value: 57.75599999999999
- type: ndcg_at_7
value: 59.109
- type: ndcg_at_10
value: 60.402
- type: ndcg_at_20
value: 63.071999999999996
- type: ndcg_at_30
value: 64.302
- type: ndcg_at_50
value: 65.619
- type: ndcg_at_70
value: 66.161
- type: ndcg_at_100
value: 66.645
- type: ndcg_at_200
value: 67.353
- type: ndcg_at_300
value: 67.646
- type: ndcg_at_500
value: 67.852
- type: ndcg_at_700
value: 67.974
- type: ndcg_at_1000
value: 68.084
- type: map_at_1
value: 31.56
- type: map_at_2
value: 42.093
- type: map_at_3
value: 46.177
- type: map_at_5
value: 49.78
- type: map_at_7
value: 51.410999999999994
- type: map_at_10
value: 52.524
- type: map_at_20
value: 53.815000000000005
- type: map_at_30
value: 54.201
- type: map_at_50
value: 54.531
- type: map_at_70
value: 54.625
- type: map_at_100
value: 54.686
- type: map_at_200
value: 54.757999999999996
- type: map_at_300
value: 54.776
- type: map_at_500
value: 54.786
- type: map_at_700
value: 54.790000000000006
- type: map_at_1000
value: 54.793000000000006
- type: recall_at_1
value: 31.56
- type: recall_at_2
value: 44.858
- type: recall_at_3
value: 51.11
- type: recall_at_5
value: 58.394
- type: recall_at_7
value: 63.001
- type: recall_at_10
value: 66.81200000000001
- type: recall_at_20
value: 74.901
- type: recall_at_30
value: 79.218
- type: recall_at_50
value: 84.49
- type: recall_at_70
value: 87.003
- type: recall_at_100
value: 89.345
- type: recall_at_200
value: 93.173
- type: recall_at_300
value: 94.906
- type: recall_at_500
value: 96.223
- type: recall_at_700
value: 97.043
- type: recall_at_1000
value: 97.785
- type: precision_at_1
value: 59.721999999999994
- type: precision_at_2
value: 46.682
- type: precision_at_3
value: 37.602999999999994
- type: precision_at_5
value: 27.500000000000004
- type: precision_at_7
value: 21.847
- type: precision_at_10
value: 16.667
- type: precision_at_20
value: 9.545
- type: precision_at_30
value: 6.795
- type: precision_at_50
value: 4.38
- type: precision_at_70
value: 3.221
- type: precision_at_100
value: 2.319
- type: precision_at_200
value: 1.2149999999999999
- type: precision_at_300
value: 0.827
- type: precision_at_500
value: 0.504
- type: precision_at_700
value: 0.364
- type: precision_at_1000
value: 0.257
- type: mrr_at_1
value: 59.721999999999994
- type: mrr_at_2
value: 64.506
- type: mrr_at_3
value: 65.792
- type: mrr_at_5
value: 66.965
- type: mrr_at_7
value: 67.34700000000001
- type: mrr_at_10
value: 67.57
- type: mrr_at_20
value: 67.896
- type: mrr_at_30
value: 68.008
- type: mrr_at_50
value: 68.083
- type: mrr_at_70
value: 68.105
- type: mrr_at_100
value: 68.116
- type: mrr_at_200
value: 68.12700000000001
- type: mrr_at_300
value: 68.13
- type: mrr_at_500
value: 68.132
- type: mrr_at_700
value: 68.133
- type: mrr_at_1000
value: 68.133
- task:
type: Retrieval
dataset:
type: hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: None
metrics:
- type: ndcg_at_1
value: 81.796
- type: ndcg_at_2
value: 67.999
- type: ndcg_at_3
value: 72.15599999999999
- type: ndcg_at_5
value: 74.99900000000001
- type: ndcg_at_7
value: 76.179
- type: ndcg_at_10
value: 77.022
- type: ndcg_at_20
value: 78.173
- type: ndcg_at_30
value: 78.648
- type: ndcg_at_50
value: 79.104
- type: ndcg_at_70
value: 79.335
- type: ndcg_at_100
value: 79.56
- type: ndcg_at_200
value: 79.911
- type: ndcg_at_300
value: 80.045
- type: ndcg_at_500
value: 80.19500000000001
- type: ndcg_at_700
value: 80.281
- type: ndcg_at_1000
value: 80.35
- type: map_at_1
value: 40.898
- type: map_at_2
value: 62.016000000000005
- type: map_at_3
value: 66.121
- type: map_at_5
value: 68.471
- type: map_at_7
value: 69.261
- type: map_at_10
value: 69.738
- type: map_at_20
value: 70.208
- type: map_at_30
value: 70.343
- type: map_at_50
value: 70.43700000000001
- type: map_at_70
value: 70.47099999999999
- type: map_at_100
value: 70.498
- type: map_at_200
value: 70.526
- type: map_at_300
value: 70.533
- type: map_at_500
value: 70.538
- type: map_at_700
value: 70.541
- type: map_at_1000
value: 70.542
- type: recall_at_1
value: 40.898
- type: recall_at_2
value: 63.964
- type: recall_at_3
value: 70.743
- type: recall_at_5
value: 76.36699999999999
- type: recall_at_7
value: 79.142
- type: recall_at_10
value: 81.404
- type: recall_at_20
value: 85.111
- type: recall_at_30
value: 86.92800000000001
- type: recall_at_50
value: 88.899
- type: recall_at_70
value: 90.01400000000001
- type: recall_at_100
value: 91.19500000000001
- type: recall_at_200
value: 93.234
- type: recall_at_300
value: 94.105
- type: recall_at_500
value: 95.159
- type: recall_at_700
value: 95.8
- type: recall_at_1000
value: 96.34700000000001
- type: precision_at_1
value: 81.796
- type: precision_at_2
value: 63.964
- type: precision_at_3
value: 47.162
- type: precision_at_5
value: 30.547
- type: precision_at_7
value: 22.612
- type: precision_at_10
value: 16.281000000000002
- type: precision_at_20
value: 8.511000000000001
- type: precision_at_30
value: 5.795
- type: precision_at_50
value: 3.556
- type: precision_at_70
value: 2.572
- type: precision_at_100
value: 1.8239999999999998
- type: precision_at_200
value: 0.932
- type: precision_at_300
value: 0.627
- type: precision_at_500
value: 0.381
- type: precision_at_700
value: 0.27399999999999997
- type: precision_at_1000
value: 0.193
- type: mrr_at_1
value: 81.796
- type: mrr_at_2
value: 85.69200000000001
- type: mrr_at_3
value: 86.52
- type: mrr_at_5
value: 86.973
- type: mrr_at_7
value: 87.13300000000001
- type: mrr_at_10
value: 87.208
- type: mrr_at_20
value: 87.303
- type: mrr_at_30
value: 87.32799999999999
- type: mrr_at_50
value: 87.347
- type: mrr_at_70
value: 87.35199999999999
- type: mrr_at_100
value: 87.355
- type: mrr_at_200
value: 87.357
- type: mrr_at_300
value: 87.357
- type: mrr_at_500
value: 87.358
- type: mrr_at_700
value: 87.358
- type: mrr_at_1000
value: 87.358
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 94.79200000000002
- type: ap
value: 92.54484356773553
- type: f1
value: 94.78965313682525
- task:
type: Retrieval
dataset:
type: msmarco
name: MTEB MSMARCO
config: default
split: dev
revision: None
metrics:
- type: ndcg_at_1
value: 24.398
- type: ndcg_at_2
value: 31.336000000000002
- type: ndcg_at_3
value: 35.266999999999996
- type: ndcg_at_5
value: 39.356
- type: ndcg_at_7
value: 41.562
- type: ndcg_at_10
value: 43.408
- type: ndcg_at_20
value: 46.107
- type: ndcg_at_30
value: 47.164
- type: ndcg_at_50
value: 48.126000000000005
- type: ndcg_at_70
value: 48.626999999999995
- type: ndcg_at_100
value: 49.043
- type: ndcg_at_200
value: 49.575
- type: ndcg_at_300
value: 49.794
- type: ndcg_at_500
value: 49.942
- type: ndcg_at_700
value: 50.014
- type: ndcg_at_1000
value: 50.077000000000005
- type: map_at_1
value: 23.723
- type: map_at_2
value: 29.593000000000004
- type: map_at_3
value: 32.273
- type: map_at_5
value: 34.587
- type: map_at_7
value: 35.589999999999996
- type: map_at_10
value: 36.296
- type: map_at_20
value: 37.059999999999995
- type: map_at_30
value: 37.265
- type: map_at_50
value: 37.402
- type: map_at_70
value: 37.454
- type: map_at_100
value: 37.486999999999995
- type: map_at_200
value: 37.516
- type: map_at_300
value: 37.524
- type: map_at_500
value: 37.528
- type: map_at_700
value: 37.529
- type: map_at_1000
value: 37.53
- type: recall_at_1
value: 23.723
- type: recall_at_2
value: 35.355
- type: recall_at_3
value: 43.22
- type: recall_at_5
value: 53.025
- type: recall_at_7
value: 59.327
- type: recall_at_10
value: 65.302
- type: recall_at_20
value: 75.765
- type: recall_at_30
value: 80.632
- type: recall_at_50
value: 85.63499999999999
- type: recall_at_70
value: 88.554
- type: recall_at_100
value: 91.16300000000001
- type: recall_at_200
value: 94.85
- type: recall_at_300
value: 96.532
- type: recall_at_500
value: 97.751
- type: recall_at_700
value: 98.383
- type: recall_at_1000
value: 98.97
- type: precision_at_1
value: 24.398
- type: precision_at_2
value: 18.274
- type: precision_at_3
value: 14.951999999999998
- type: precision_at_5
value: 11.052
- type: precision_at_7
value: 8.84
- type: precision_at_10
value: 6.8309999999999995
- type: precision_at_20
value: 3.978
- type: precision_at_30
value: 2.827
- type: precision_at_50
value: 1.807
- type: precision_at_70
value: 1.336
- type: precision_at_100
value: 0.964
- type: precision_at_200
value: 0.502
- type: precision_at_300
value: 0.34099999999999997
- type: precision_at_500
value: 0.208
- type: precision_at_700
value: 0.15
- type: precision_at_1000
value: 0.105
- type: mrr_at_1
value: 24.398
- type: mrr_at_2
value: 30.351
- type: mrr_at_3
value: 33.001000000000005
- type: mrr_at_5
value: 35.228
- type: mrr_at_7
value: 36.223
- type: mrr_at_10
value: 36.903999999999996
- type: mrr_at_20
value: 37.631
- type: mrr_at_30
value: 37.830000000000005
- type: mrr_at_50
value: 37.955
- type: mrr_at_70
value: 38.003
- type: mrr_at_100
value: 38.033
- type: mrr_at_200
value: 38.059
- type: mrr_at_300
value: 38.066
- type: mrr_at_500
value: 38.068999999999996
- type: mrr_at_700
value: 38.07
- type: mrr_at_1000
value: 38.07
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 96.35658914728683
- type: f1
value: 96.15039630903114
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 86.29730962152303
- type: f1
value: 71.12166316567485
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 79.98991257565568
- type: f1
value: 77.41680115095276
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 82.1990585070612
- type: f1
value: 82.23719179179362
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 40.03019554933584
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 38.999760551497815
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 32.72383151953079
- type: mrr
value: 33.93989699030721
- task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: None
metrics:
- type: ndcg_at_1
value: 51.858000000000004
- type: ndcg_at_2
value: 49.675999999999995
- type: ndcg_at_3
value: 47.519
- type: ndcg_at_5
value: 45.198
- type: ndcg_at_7
value: 43.504
- type: ndcg_at_10
value: 41.88
- type: ndcg_at_20
value: 39.122
- type: ndcg_at_30
value: 37.95
- type: ndcg_at_50
value: 37.602999999999994
- type: ndcg_at_70
value: 37.836
- type: ndcg_at_100
value: 38.493
- type: ndcg_at_200
value: 40.187
- type: ndcg_at_300
value: 41.524
- type: ndcg_at_500
value: 43.657000000000004
- type: ndcg_at_700
value: 45.234
- type: ndcg_at_1000
value: 47.047
- type: map_at_1
value: 6.392
- type: map_at_2
value: 10.113
- type: map_at_3
value: 11.543000000000001
- type: map_at_5
value: 13.729
- type: map_at_7
value: 14.985000000000001
- type: map_at_10
value: 16.217000000000002
- type: map_at_20
value: 18.106
- type: map_at_30
value: 18.878
- type: map_at_50
value: 19.822
- type: map_at_70
value: 20.352999999999998
- type: map_at_100
value: 20.827
- type: map_at_200
value: 21.512
- type: map_at_300
value: 21.826
- type: map_at_500
value: 22.155
- type: map_at_700
value: 22.349
- type: map_at_1000
value: 22.531000000000002
- type: recall_at_1
value: 6.392
- type: recall_at_2
value: 11.215
- type: recall_at_3
value: 13.231000000000002
- type: recall_at_5
value: 16.66
- type: recall_at_7
value: 18.802
- type: recall_at_10
value: 21.185000000000002
- type: recall_at_20
value: 25.35
- type: recall_at_30
value: 27.91
- type: recall_at_50
value: 32.845
- type: recall_at_70
value: 35.789
- type: recall_at_100
value: 39.247
- type: recall_at_200
value: 46.655
- type: recall_at_300
value: 51.43299999999999
- type: recall_at_500
value: 59.472
- type: recall_at_700
value: 64.742
- type: recall_at_1000
value: 70.97099999999999
- type: precision_at_1
value: 53.559999999999995
- type: precision_at_2
value: 48.762
- type: precision_at_3
value: 44.169000000000004
- type: precision_at_5
value: 39.071
- type: precision_at_7
value: 35.161
- type: precision_at_10
value: 31.238
- type: precision_at_20
value: 23.064999999999998
- type: precision_at_30
value: 18.844
- type: precision_at_50
value: 14.601
- type: precision_at_70
value: 12.088000000000001
- type: precision_at_100
value: 9.844999999999999
- type: precision_at_200
value: 6.358
- type: precision_at_300
value: 4.915
- type: precision_at_500
value: 3.531
- type: precision_at_700
value: 2.8649999999999998
- type: precision_at_1000
value: 2.289
- type: mrr_at_1
value: 54.17999999999999
- type: mrr_at_2
value: 59.288
- type: mrr_at_3
value: 60.836
- type: mrr_at_5
value: 62.275999999999996
- type: mrr_at_7
value: 62.688
- type: mrr_at_10
value: 62.865
- type: mrr_at_20
value: 63.11
- type: mrr_at_30
value: 63.193999999999996
- type: mrr_at_50
value: 63.258
- type: mrr_at_70
value: 63.278
- type: mrr_at_100
value: 63.297000000000004
- type: mrr_at_200
value: 63.315999999999995
- type: mrr_at_300
value: 63.318
- type: mrr_at_500
value: 63.32299999999999
- type: mrr_at_700
value: 63.324000000000005
- type: mrr_at_1000
value: 63.324999999999996
- task:
type: Retrieval
dataset:
type: nq
name: MTEB NQ
config: default
split: test
revision: None
metrics:
- type: ndcg_at_1
value: 50.897999999999996
- type: ndcg_at_2
value: 59.126
- type: ndcg_at_3
value: 63.093999999999994
- type: ndcg_at_5
value: 67.197
- type: ndcg_at_7
value: 68.719
- type: ndcg_at_10
value: 69.915
- type: ndcg_at_20
value: 71.229
- type: ndcg_at_30
value: 71.667
- type: ndcg_at_50
value: 71.98
- type: ndcg_at_70
value: 72.127
- type: ndcg_at_100
value: 72.217
- type: ndcg_at_200
value: 72.319
- type: ndcg_at_300
value: 72.347
- type: ndcg_at_500
value: 72.37
- type: ndcg_at_700
value: 72.379
- type: ndcg_at_1000
value: 72.381
- type: map_at_1
value: 45.297
- type: map_at_2
value: 55.596000000000004
- type: map_at_3
value: 58.724
- type: map_at_5
value: 61.387
- type: map_at_7
value: 62.173
- type: map_at_10
value: 62.69
- type: map_at_20
value: 63.125
- type: map_at_30
value: 63.223
- type: map_at_50
value: 63.27700000000001
- type: map_at_70
value: 63.295
- type: map_at_100
value: 63.303
- type: map_at_200
value: 63.31
- type: map_at_300
value: 63.31099999999999
- type: map_at_500
value: 63.312000000000005
- type: map_at_700
value: 63.312000000000005
- type: map_at_1000
value: 63.312000000000005
- type: recall_at_1
value: 45.297
- type: recall_at_2
value: 63.866
- type: recall_at_3
value: 71.898
- type: recall_at_5
value: 81.16600000000001
- type: recall_at_7
value: 85.301
- type: recall_at_10
value: 88.94800000000001
- type: recall_at_20
value: 93.719
- type: recall_at_30
value: 95.628
- type: recall_at_50
value: 97.14699999999999
- type: recall_at_70
value: 97.955
- type: recall_at_100
value: 98.48599999999999
- type: recall_at_200
value: 99.157
- type: recall_at_300
value: 99.355
- type: recall_at_500
value: 99.53699999999999
- type: recall_at_700
value: 99.62299999999999
- type: recall_at_1000
value: 99.638
- type: precision_at_1
value: 50.897999999999996
- type: precision_at_2
value: 36.703
- type: precision_at_3
value: 27.926000000000002
- type: precision_at_5
value: 19.276
- type: precision_at_7
value: 14.533999999999999
- type: precision_at_10
value: 10.678
- type: precision_at_20
value: 5.663
- type: precision_at_30
value: 3.8600000000000003
- type: precision_at_50
value: 2.358
- type: precision_at_70
value: 1.7000000000000002
- type: precision_at_100
value: 1.198
- type: precision_at_200
value: 0.603
- type: precision_at_300
value: 0.40299999999999997
- type: precision_at_500
value: 0.242
- type: precision_at_700
value: 0.173
- type: precision_at_1000
value: 0.121
- type: mrr_at_1
value: 50.897999999999996
- type: mrr_at_2
value: 59.994
- type: mrr_at_3
value: 62.553000000000004
- type: mrr_at_5
value: 64.307
- type: mrr_at_7
value: 64.864
- type: mrr_at_10
value: 65.22200000000001
- type: mrr_at_20
value: 65.499
- type: mrr_at_30
value: 65.561
- type: mrr_at_50
value: 65.592
- type: mrr_at_70
value: 65.602
- type: mrr_at_100
value: 65.607
- type: mrr_at_200
value: 65.61099999999999
- type: mrr_at_300
value: 65.61200000000001
- type: mrr_at_500
value: 65.61200000000001
- type: mrr_at_700
value: 65.61200000000001
- type: mrr_at_1000
value: 65.61200000000001
- task:
type: Retrieval
dataset:
type: quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: None
metrics:
- type: ndcg_at_1
value: 82.96
- type: ndcg_at_2
value: 85.614
- type: ndcg_at_3
value: 87.19
- type: ndcg_at_5
value: 88.654
- type: ndcg_at_7
value: 89.287
- type: ndcg_at_10
value: 89.785
- type: ndcg_at_20
value: 90.384
- type: ndcg_at_30
value: 90.589
- type: ndcg_at_50
value: 90.738
- type: ndcg_at_70
value: 90.789
- type: ndcg_at_100
value: 90.824
- type: ndcg_at_200
value: 90.869
- type: ndcg_at_300
value: 90.881
- type: ndcg_at_500
value: 90.886
- type: ndcg_at_700
value: 90.889
- type: ndcg_at_1000
value: 90.889
- type: map_at_1
value: 72.152
- type: map_at_2
value: 80.818
- type: map_at_3
value: 83.462
- type: map_at_5
value: 85.286
- type: map_at_7
value: 85.921
- type: map_at_10
value: 86.334
- type: map_at_20
value: 86.737
- type: map_at_30
value: 86.847
- type: map_at_50
value: 86.911
- type: map_at_70
value: 86.932
- type: map_at_100
value: 86.943
- type: map_at_200
value: 86.953
- type: map_at_300
value: 86.955
- type: map_at_500
value: 86.956
- type: map_at_700
value: 86.956
- type: map_at_1000
value: 86.956
- type: recall_at_1
value: 72.152
- type: recall_at_2
value: 84.129
- type: recall_at_3
value: 88.87
- type: recall_at_5
value: 93.067
- type: recall_at_7
value: 94.882
- type: recall_at_10
value: 96.353
- type: recall_at_20
value: 98.26700000000001
- type: recall_at_30
value: 98.92999999999999
- type: recall_at_50
value: 99.441
- type: recall_at_70
value: 99.619
- type: recall_at_100
value: 99.748
- type: recall_at_200
value: 99.911
- type: recall_at_300
value: 99.956
- type: recall_at_500
value: 99.98
- type: recall_at_700
value: 99.991
- type: recall_at_1000
value: 99.996
- type: precision_at_1
value: 82.96
- type: precision_at_2
value: 52.175000000000004
- type: precision_at_3
value: 38.223
- type: precision_at_5
value: 25.056
- type: precision_at_7
value: 18.717
- type: precision_at_10
value: 13.614999999999998
- type: precision_at_20
value: 7.208
- type: precision_at_30
value: 4.928
- type: precision_at_50
value: 3.024
- type: precision_at_70
value: 2.183
- type: precision_at_100
value: 1.54
- type: precision_at_200
value: 0.779
- type: precision_at_300
value: 0.521
- type: precision_at_500
value: 0.313
- type: precision_at_700
value: 0.22399999999999998
- type: precision_at_1000
value: 0.157
- type: mrr_at_1
value: 82.96
- type: mrr_at_2
value: 87.005
- type: mrr_at_3
value: 88.07199999999999
- type: mrr_at_5
value: 88.634
- type: mrr_at_7
value: 88.793
- type: mrr_at_10
value: 88.87899999999999
- type: mrr_at_20
value: 88.94999999999999
- type: mrr_at_30
value: 88.96
- type: mrr_at_50
value: 88.965
- type: mrr_at_70
value: 88.966
- type: mrr_at_100
value: 88.967
- type: mrr_at_200
value: 88.967
- type: mrr_at_300
value: 88.967
- type: mrr_at_500
value: 88.967
- type: mrr_at_700
value: 88.967
- type: mrr_at_1000
value: 88.967
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 59.90388554491155
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 67.64232539036783
- task:
type: Retrieval
dataset:
type: scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: None
metrics:
- type: ndcg_at_1
value: 22.6
- type: ndcg_at_2
value: 20.355999999999998
- type: ndcg_at_3
value: 18.536
- type: ndcg_at_5
value: 16.523
- type: ndcg_at_7
value: 17.979
- type: ndcg_at_10
value: 19.908
- type: ndcg_at_20
value: 22.887
- type: ndcg_at_30
value: 24.43
- type: ndcg_at_50
value: 25.959
- type: ndcg_at_70
value: 26.989
- type: ndcg_at_100
value: 27.977
- type: ndcg_at_200
value: 29.831000000000003
- type: ndcg_at_300
value: 30.787
- type: ndcg_at_500
value: 31.974999999999998
- type: ndcg_at_700
value: 32.554
- type: ndcg_at_1000
value: 33.277
- type: map_at_1
value: 4.593
- type: map_at_2
value: 6.923
- type: map_at_3
value: 8.3
- type: map_at_5
value: 10.072000000000001
- type: map_at_7
value: 10.782
- type: map_at_10
value: 11.72
- type: map_at_20
value: 12.838
- type: map_at_30
value: 13.257
- type: map_at_50
value: 13.569
- type: map_at_70
value: 13.733
- type: map_at_100
value: 13.858999999999998
- type: map_at_200
value: 14.018
- type: map_at_300
value: 14.072999999999999
- type: map_at_500
value: 14.126
- type: map_at_700
value: 14.145
- type: map_at_1000
value: 14.161999999999999
- type: recall_at_1
value: 4.593
- type: recall_at_2
value: 7.997999999999999
- type: recall_at_3
value: 10.563
- type: recall_at_5
value: 14.907
- type: recall_at_7
value: 17.4
- type: recall_at_10
value: 21.18
- type: recall_at_20
value: 28.144999999999996
- type: recall_at_30
value: 32.462
- type: recall_at_50
value: 37.267
- type: recall_at_70
value: 40.875
- type: recall_at_100
value: 44.641999999999996
- type: recall_at_200
value: 52.573
- type: recall_at_300
value: 57.089999999999996
- type: recall_at_500
value: 63.14300000000001
- type: recall_at_700
value: 66.313
- type: recall_at_1000
value: 70.458
- type: precision_at_1
value: 22.6
- type: precision_at_2
value: 19.7
- type: precision_at_3
value: 17.333000000000002
- type: precision_at_5
value: 14.680000000000001
- type: precision_at_7
value: 12.243
- type: precision_at_10
value: 10.440000000000001
- type: precision_at_20
value: 6.944999999999999
- type: precision_at_30
value: 5.333
- type: precision_at_50
value: 3.678
- type: precision_at_70
value: 2.881
- type: precision_at_100
value: 2.2030000000000003
- type: precision_at_200
value: 1.295
- type: precision_at_300
value: 0.9369999999999999
- type: precision_at_500
value: 0.622
- type: precision_at_700
value: 0.466
- type: precision_at_1000
value: 0.347
- type: mrr_at_1
value: 22.6
- type: mrr_at_2
value: 27.900000000000002
- type: mrr_at_3
value: 30.067
- type: mrr_at_5
value: 32.207
- type: mrr_at_7
value: 33.004
- type: mrr_at_10
value: 33.596
- type: mrr_at_20
value: 34.268
- type: mrr_at_30
value: 34.492
- type: mrr_at_50
value: 34.628
- type: mrr_at_70
value: 34.681
- type: mrr_at_100
value: 34.717
- type: mrr_at_200
value: 34.757
- type: mrr_at_300
value: 34.768
- type: mrr_at_500
value: 34.772
- type: mrr_at_700
value: 34.774
- type: mrr_at_1000
value: 34.775
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 86.90122745229677
- type: cos_sim_spearman
value: 82.92294737327579
- type: euclidean_pearson
value: 84.08979655773187
- type: euclidean_spearman
value: 82.92294657285412
- type: manhattan_pearson
value: 84.09347480531832
- type: manhattan_spearman
value: 82.91564613948087
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 87.01218713698583
- type: cos_sim_spearman
value: 79.46865215168464
- type: euclidean_pearson
value: 83.22621889891909
- type: euclidean_spearman
value: 79.46853821709514
- type: manhattan_pearson
value: 83.69962580788805
- type: manhattan_spearman
value: 79.9561593356932
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 88.98438696342964
- type: cos_sim_spearman
value: 89.15419511870839
- type: euclidean_pearson
value: 88.49646141802894
- type: euclidean_spearman
value: 89.15419503946019
- type: manhattan_pearson
value: 88.6420585616327
- type: manhattan_spearman
value: 89.42648950757743
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 87.30772547759544
- type: cos_sim_spearman
value: 84.93199878424691
- type: euclidean_pearson
value: 86.16266630395455
- type: euclidean_spearman
value: 84.93198798543634
- type: manhattan_pearson
value: 86.14285723189803
- type: manhattan_spearman
value: 85.0361672522687
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 90.21342071197127
- type: cos_sim_spearman
value: 90.7407512744838
- type: euclidean_pearson
value: 90.1517933113061
- type: euclidean_spearman
value: 90.74075125431919
- type: manhattan_pearson
value: 90.17963034676193
- type: manhattan_spearman
value: 90.88999275865135
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 86.82518054100498
- type: cos_sim_spearman
value: 87.81570533154735
- type: euclidean_pearson
value: 86.91684561573618
- type: euclidean_spearman
value: 87.81570533154735
- type: manhattan_pearson
value: 86.98311935744032
- type: manhattan_spearman
value: 87.9594667151966
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 92.09578436612053
- type: cos_sim_spearman
value: 92.01519349090438
- type: euclidean_pearson
value: 92.07113635890894
- type: euclidean_spearman
value: 92.01519349090438
- type: manhattan_pearson
value: 91.89343820765625
- type: manhattan_spearman
value: 91.7443476810177
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 69.29997751464549
- type: cos_sim_spearman
value: 68.36425436812782
- type: euclidean_pearson
value: 69.81381677661783
- type: euclidean_spearman
value: 68.36425436812782
- type: manhattan_pearson
value: 69.92823397008026
- type: manhattan_spearman
value: 68.35770640039254
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 88.39126315452359
- type: cos_sim_spearman
value: 88.99708463265337
- type: euclidean_pearson
value: 88.60793820038607
- type: euclidean_spearman
value: 88.99708463265337
- type: manhattan_pearson
value: 88.69860633571047
- type: manhattan_spearman
value: 89.20094593888012
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 86.58028062818582
- type: mrr
value: 96.53586790841693
- task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: None
metrics:
- type: ndcg_at_1
value: 66.333
- type: ndcg_at_2
value: 70.655
- type: ndcg_at_3
value: 72.801
- type: ndcg_at_5
value: 75.793
- type: ndcg_at_7
value: 76.946
- type: ndcg_at_10
value: 77.66199999999999
- type: ndcg_at_20
value: 78.786
- type: ndcg_at_30
value: 79.066
- type: ndcg_at_50
value: 79.255
- type: ndcg_at_70
value: 79.423
- type: ndcg_at_100
value: 79.476
- type: ndcg_at_200
value: 79.65299999999999
- type: ndcg_at_300
value: 79.696
- type: ndcg_at_500
value: 79.73599999999999
- type: ndcg_at_700
value: 79.77199999999999
- type: ndcg_at_1000
value: 79.77199999999999
- type: map_at_1
value: 63.383
- type: map_at_2
value: 68.144
- type: map_at_3
value: 70.19800000000001
- type: map_at_5
value: 72.38
- type: map_at_7
value: 72.955
- type: map_at_10
value: 73.312
- type: map_at_20
value: 73.678
- type: map_at_30
value: 73.72800000000001
- type: map_at_50
value: 73.75500000000001
- type: map_at_70
value: 73.771
- type: map_at_100
value: 73.776
- type: map_at_200
value: 73.783
- type: map_at_300
value: 73.784
- type: map_at_500
value: 73.785
- type: map_at_700
value: 73.786
- type: map_at_1000
value: 73.786
- type: recall_at_1
value: 63.383
- type: recall_at_2
value: 72.283
- type: recall_at_3
value: 77.183
- type: recall_at_5
value: 84.56099999999999
- type: recall_at_7
value: 87.67200000000001
- type: recall_at_10
value: 89.822
- type: recall_at_20
value: 94
- type: recall_at_30
value: 95.333
- type: recall_at_50
value: 96.333
- type: recall_at_70
value: 97.333
- type: recall_at_100
value: 97.667
- type: recall_at_200
value: 99
- type: recall_at_300
value: 99.333
- type: recall_at_500
value: 99.667
- type: recall_at_700
value: 100
- type: recall_at_1000
value: 100
- type: precision_at_1
value: 66.333
- type: precision_at_2
value: 38.667
- type: precision_at_3
value: 28.111000000000004
- type: precision_at_5
value: 18.933
- type: precision_at_7
value: 14.094999999999999
- type: precision_at_10
value: 10.167
- type: precision_at_20
value: 5.35
- type: precision_at_30
value: 3.611
- type: precision_at_50
value: 2.1870000000000003
- type: precision_at_70
value: 1.576
- type: precision_at_100
value: 1.107
- type: precision_at_200
value: 0.5599999999999999
- type: precision_at_300
value: 0.374
- type: precision_at_500
value: 0.22499999999999998
- type: precision_at_700
value: 0.161
- type: precision_at_1000
value: 0.11299999999999999
- type: mrr_at_1
value: 66.333
- type: mrr_at_2
value: 70.833
- type: mrr_at_3
value: 72.167
- type: mrr_at_5
value: 73.6
- type: mrr_at_7
value: 74.084
- type: mrr_at_10
value: 74.283
- type: mrr_at_20
value: 74.54499999999999
- type: mrr_at_30
value: 74.59599999999999
- type: mrr_at_50
value: 74.622
- type: mrr_at_70
value: 74.639
- type: mrr_at_100
value: 74.643
- type: mrr_at_200
value: 74.65
- type: mrr_at_300
value: 74.652
- type: mrr_at_500
value: 74.653
- type: mrr_at_700
value: 74.653
- type: mrr_at_1000
value: 74.653
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.84554455445544
- type: cos_sim_ap
value: 96.31178339136798
- type: cos_sim_f1
value: 92.1921921921922
- type: cos_sim_precision
value: 92.28456913827655
- type: cos_sim_recall
value: 92.10000000000001
- type: dot_accuracy
value: 99.84554455445544
- type: dot_ap
value: 96.31178339136797
- type: dot_f1
value: 92.1921921921922
- type: dot_precision
value: 92.28456913827655
- type: dot_recall
value: 92.10000000000001
- type: euclidean_accuracy
value: 99.84554455445544
- type: euclidean_ap
value: 96.31178339136798
- type: euclidean_f1
value: 92.1921921921922
- type: euclidean_precision
value: 92.28456913827655
- type: euclidean_recall
value: 92.10000000000001
- type: manhattan_accuracy
value: 99.84752475247525
- type: manhattan_ap
value: 96.4591954606088
- type: manhattan_f1
value: 92.25352112676056
- type: manhattan_precision
value: 92.81376518218623
- type: manhattan_recall
value: 91.7
- type: max_accuracy
value: 99.84752475247525
- type: max_ap
value: 96.4591954606088
- type: max_f1
value: 92.25352112676056
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 74.24659759283294
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 46.77690051260451
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 55.68436757803185
- type: mrr
value: 56.82157711569475
- task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 31.652482405629843
- type: cos_sim_spearman
value: 31.16341822347735
- type: dot_pearson
value: 31.652479892699837
- type: dot_spearman
value: 31.16341822347735
- task:
type: Retrieval
dataset:
type: trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: None
metrics:
- type: ndcg_at_1
value: 92
- type: ndcg_at_2
value: 90.839
- type: ndcg_at_3
value: 90.642
- type: ndcg_at_5
value: 90.348
- type: ndcg_at_7
value: 89.015
- type: ndcg_at_10
value: 87.599
- type: ndcg_at_20
value: 84.434
- type: ndcg_at_30
value: 81.655
- type: ndcg_at_50
value: 77.278
- type: ndcg_at_70
value: 73.957
- type: ndcg_at_100
value: 69.56
- type: ndcg_at_200
value: 60.724000000000004
- type: ndcg_at_300
value: 57.245000000000005
- type: ndcg_at_500
value: 56.316
- type: ndcg_at_700
value: 58.399
- type: ndcg_at_1000
value: 62.21600000000001
- type: map_at_1
value: 0.247
- type: map_at_2
value: 0.488
- type: map_at_3
value: 0.7230000000000001
- type: map_at_5
value: 1.204
- type: map_at_7
value: 1.6500000000000001
- type: map_at_10
value: 2.292
- type: map_at_20
value: 4.274
- type: map_at_30
value: 6.027
- type: map_at_50
value: 9.083
- type: map_at_70
value: 11.751000000000001
- type: map_at_100
value: 14.912
- type: map_at_200
value: 22.213
- type: map_at_300
value: 26.667999999999996
- type: map_at_500
value: 31.556
- type: map_at_700
value: 34.221000000000004
- type: map_at_1000
value: 36.443999999999996
- type: recall_at_1
value: 0.247
- type: recall_at_2
value: 0.49899999999999994
- type: recall_at_3
value: 0.742
- type: recall_at_5
value: 1.247
- type: recall_at_7
value: 1.722
- type: recall_at_10
value: 2.405
- type: recall_at_20
value: 4.583
- type: recall_at_30
value: 6.587999999999999
- type: recall_at_50
value: 10.188
- type: recall_at_70
value: 13.496
- type: recall_at_100
value: 17.578
- type: recall_at_200
value: 28.158
- type: recall_at_300
value: 35.532000000000004
- type: recall_at_500
value: 45.31
- type: recall_at_700
value: 51.822
- type: recall_at_1000
value: 58.53
- type: precision_at_1
value: 96
- type: precision_at_2
value: 96
- type: precision_at_3
value: 95.333
- type: precision_at_5
value: 94.8
- type: precision_at_7
value: 93.429
- type: precision_at_10
value: 91.4
- type: precision_at_20
value: 87.7
- type: precision_at_30
value: 84.867
- type: precision_at_50
value: 80.24
- type: precision_at_70
value: 76.371
- type: precision_at_100
value: 71.08
- type: precision_at_200
value: 59.4
- type: precision_at_300
value: 51.459999999999994
- type: precision_at_500
value: 40.644000000000005
- type: precision_at_700
value: 33.889
- type: precision_at_1000
value: 27.250000000000004
- type: mrr_at_1
value: 96
- type: mrr_at_2
value: 98
- type: mrr_at_3
value: 98
- type: mrr_at_5
value: 98
- type: mrr_at_7
value: 98
- type: mrr_at_10
value: 98
- type: mrr_at_20
value: 98
- type: mrr_at_30
value: 98
- type: mrr_at_50
value: 98
- type: mrr_at_70
value: 98
- type: mrr_at_100
value: 98
- type: mrr_at_200
value: 98
- type: mrr_at_300
value: 98
- type: mrr_at_500
value: 98
- type: mrr_at_700
value: 98
- type: mrr_at_1000
value: 98
- task:
type: Retrieval
dataset:
type: webis-touche2020
name: MTEB Touche2020
config: default
split: test
revision: None
metrics:
- type: ndcg_at_1
value: 43.878
- type: ndcg_at_2
value: 37.956
- type: ndcg_at_3
value: 35.053
- type: ndcg_at_5
value: 32.59
- type: ndcg_at_7
value: 30.226
- type: ndcg_at_10
value: 29.005
- type: ndcg_at_20
value: 30.11
- type: ndcg_at_30
value: 32.019999999999996
- type: ndcg_at_50
value: 34.354
- type: ndcg_at_70
value: 36.665
- type: ndcg_at_100
value: 38.888
- type: ndcg_at_200
value: 43.435
- type: ndcg_at_300
value: 45.795
- type: ndcg_at_500
value: 48.699999999999996
- type: ndcg_at_700
value: 50.242
- type: ndcg_at_1000
value: 51.529
- type: map_at_1
value: 3.521
- type: map_at_2
value: 5.309
- type: map_at_3
value: 6.576
- type: map_at_5
value: 8.97
- type: map_at_7
value: 10.194
- type: map_at_10
value: 11.949
- type: map_at_20
value: 14.686
- type: map_at_30
value: 15.8
- type: map_at_50
value: 16.59
- type: map_at_70
value: 17.2
- type: map_at_100
value: 17.765
- type: map_at_200
value: 18.636
- type: map_at_300
value: 18.972
- type: map_at_500
value: 19.301
- type: map_at_700
value: 19.445
- type: map_at_1000
value: 19.546
- type: recall_at_1
value: 3.521
- type: recall_at_2
value: 5.848
- type: recall_at_3
value: 7.657
- type: recall_at_5
value: 11.368
- type: recall_at_7
value: 13.748
- type: recall_at_10
value: 18.061
- type: recall_at_20
value: 26.844
- type: recall_at_30
value: 31.186000000000003
- type: recall_at_50
value: 35.951
- type: recall_at_70
value: 40.961999999999996
- type: recall_at_100
value: 46.743
- type: recall_at_200
value: 58.483
- type: recall_at_300
value: 65.973
- type: recall_at_500
value: 75.233
- type: recall_at_700
value: 80.472
- type: recall_at_1000
value: 85.02
- type: precision_at_1
value: 46.939
- type: precision_at_2
value: 38.775999999999996
- type: precision_at_3
value: 34.694
- type: precision_at_5
value: 31.429000000000002
- type: precision_at_7
value: 27.697
- type: precision_at_10
value: 24.490000000000002
- type: precision_at_20
value: 18.776
- type: precision_at_30
value: 15.034
- type: precision_at_50
value: 10.857
- type: precision_at_70
value: 9.096
- type: precision_at_100
value: 7.51
- type: precision_at_200
value: 4.929
- type: precision_at_300
value: 3.7760000000000002
- type: precision_at_500
value: 2.6780000000000004
- type: precision_at_700
value: 2.085
- type: precision_at_1000
value: 1.5709999999999997
- type: mrr_at_1
value: 46.939
- type: mrr_at_2
value: 55.102
- type: mrr_at_3
value: 57.823
- type: mrr_at_5
value: 60.68
- type: mrr_at_7
value: 60.972
- type: mrr_at_10
value: 61.199000000000005
- type: mrr_at_20
value: 61.831
- type: mrr_at_30
value: 61.831
- type: mrr_at_50
value: 61.873
- type: mrr_at_70
value: 61.873
- type: mrr_at_100
value: 61.873
- type: mrr_at_200
value: 61.873
- type: mrr_at_300
value: 61.873
- type: mrr_at_500
value: 61.873
- type: mrr_at_700
value: 61.873
- type: mrr_at_1000
value: 61.873
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 69.3294
- type: ap
value: 14.561333393364736
- type: f1
value: 53.992309820496466
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 63.63893604980192
- type: f1
value: 63.92959380489434
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 56.270879258659775
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 88.71073493473207
- type: cos_sim_ap
value: 81.52392540284202
- type: cos_sim_f1
value: 74.71162377994676
- type: cos_sim_precision
value: 71.89558428885094
- type: cos_sim_recall
value: 77.75725593667546
- type: dot_accuracy
value: 88.71073493473207
- type: dot_ap
value: 81.52394754041109
- type: dot_f1
value: 74.71162377994676
- type: dot_precision
value: 71.89558428885094
- type: dot_recall
value: 77.75725593667546
- type: euclidean_accuracy
value: 88.71073493473207
- type: euclidean_ap
value: 81.52392035435321
- type: euclidean_f1
value: 74.71162377994676
- type: euclidean_precision
value: 71.89558428885094
- type: euclidean_recall
value: 77.75725593667546
- type: manhattan_accuracy
value: 88.47231328604637
- type: manhattan_ap
value: 81.22907439267321
- type: manhattan_f1
value: 74.3351571446749
- type: manhattan_precision
value: 71.78667977390022
- type: manhattan_recall
value: 77.0712401055409
- type: max_accuracy
value: 88.71073493473207
- type: max_ap
value: 81.52394754041109
- type: max_f1
value: 74.71162377994676
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.85136026700819
- type: cos_sim_ap
value: 87.7768002924216
- type: cos_sim_f1
value: 80.358908624794
- type: cos_sim_precision
value: 76.62918209122023
- type: cos_sim_recall
value: 84.47028025870034
- type: dot_accuracy
value: 89.85136026700819
- type: dot_ap
value: 87.77680027889778
- type: dot_f1
value: 80.358908624794
- type: dot_precision
value: 76.62918209122023
- type: dot_recall
value: 84.47028025870034
- type: euclidean_accuracy
value: 89.85136026700819
- type: euclidean_ap
value: 87.77680174697751
- type: euclidean_f1
value: 80.358908624794
- type: euclidean_precision
value: 76.62918209122023
- type: euclidean_recall
value: 84.47028025870034
- type: manhattan_accuracy
value: 89.86300306593705
- type: manhattan_ap
value: 87.78613271895861
- type: manhattan_f1
value: 80.31831016905645
- type: manhattan_precision
value: 76.68230516070304
- type: manhattan_recall
value: 84.3162919618109
- type: max_accuracy
value: 89.86300306593705
- type: max_ap
value: 87.78613271895861
- type: max_f1
value: 80.358908624794
language:
- en
license: cc-by-nc-4.0
---
<h1 align="center">Salesforce/SFR-Embedding-Mistral</h1>
**SFR-Embedding by Salesforce Research.**
The model is trained on top of [E5-mistral-7b-instruct](https://huggingface.co/intfloat/e5-mistral-7b-instruct) and [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1).
This project is for research purposes only. Third-party datasets may be subject to additional terms and conditions under their associated licenses. Please refer to specific papers for more details:
- [MTEB benchmark](https://arxiv.org/abs/2210.07316)
- [Mistral](https://arxiv.org/abs/2310.06825)
- [E5-mistral-7b-instruct](https://arxiv.org/pdf/2401.00368.pdf)
More technical details will be updated later.
## How to run
### Transformers
The model can be used as follows:
```python
import torch
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel
def last_token_pool(last_hidden_states: Tensor,
                    attention_mask: Tensor) -> Tensor:
    left_padding = (attention_mask[:, -1].sum() == attention_mask.shape[0])
    if left_padding:
        return last_hidden_states[:, -1]
    else:
        sequence_lengths = attention_mask.sum(dim=1) - 1
        batch_size = last_hidden_states.shape[0]
        return last_hidden_states[torch.arange(batch_size, device=last_hidden_states.device), sequence_lengths]

def get_detailed_instruct(task_description: str, query: str) -> str:
    return f'Instruct: {task_description}\nQuery: {query}'
# Each query must come with a one-sentence instruction that describes the task
task = 'Given a web search query, retrieve relevant passages that answer the query'
queries = [
get_detailed_instruct(task, 'How to bake a chocolate cake'),
get_detailed_instruct(task, 'Symptoms of the flu')
]
# No need to add instruction for retrieval documents
passages = [
"To bake a delicious chocolate cake, you'll need the following ingredients: all-purpose flour, sugar, cocoa powder, baking powder, baking soda, salt, eggs, milk, vegetable oil, and vanilla extract. Start by preheating your oven to 350°F (175°C). In a mixing bowl, combine the dry ingredients (flour, sugar, cocoa powder, baking powder, baking soda, and salt). In a separate bowl, whisk together the wet ingredients (eggs, milk, vegetable oil, and vanilla extract). Gradually add the wet mixture to the dry ingredients, stirring until well combined. Pour the batter into a greased cake pan and bake for 30-35 minutes. Let it cool before frosting with your favorite chocolate frosting. Enjoy your homemade chocolate cake!",
"The flu, or influenza, is an illness caused by influenza viruses. Common symptoms of the flu include a high fever, chills, cough, sore throat, runny or stuffy nose, body aches, headache, fatigue, and sometimes nausea and vomiting. These symptoms can come on suddenly and are usually more severe than the common cold. It's important to get plenty of rest, stay hydrated, and consult a healthcare professional if you suspect you have the flu. In some cases, antiviral medications can help alleviate symptoms and reduce the duration of the illness."
]
# load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained('Salesforce/SFR-Embedding-Mistral')
model = AutoModel.from_pretrained('Salesforce/SFR-Embedding-Mistral')
# get the embeddings
max_length = 4096
input_texts = queries + passages
batch_dict = tokenizer(input_texts, max_length=max_length, padding=True, truncation=True, return_tensors="pt")
outputs = model(**batch_dict)
embeddings = last_token_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
# normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
# [[86.7153549194336, 36.64569091796875], [35.00493621826172, 82.0738525390625]]
```
### Sentence Transformers
```python
from sentence_transformers import SentenceTransformer, util
model = SentenceTransformer("Salesforce/SFR-Embedding-Mistral")
def get_detailed_instruct(task_description: str, query: str) -> str:
    return f'Instruct: {task_description}\nQuery: {query}'
# Each query must come with a one-sentence instruction that describes the task
task = 'Given a web search query, retrieve relevant passages that answer the query'
queries = [
get_detailed_instruct(task, 'How to bake a chocolate cake'),
get_detailed_instruct(task, 'Symptoms of the flu')
]
# No need to add instruction for retrieval documents
passages = [
"To bake a delicious chocolate cake, you'll need the following ingredients: all-purpose flour, sugar, cocoa powder, baking powder, baking soda, salt, eggs, milk, vegetable oil, and vanilla extract. Start by preheating your oven to 350°F (175°C). In a mixing bowl, combine the dry ingredients (flour, sugar, cocoa powder, baking powder, baking soda, and salt). In a separate bowl, whisk together the wet ingredients (eggs, milk, vegetable oil, and vanilla extract). Gradually add the wet mixture to the dry ingredients, stirring until well combined. Pour the batter into a greased cake pan and bake for 30-35 minutes. Let it cool before frosting with your favorite chocolate frosting. Enjoy your homemade chocolate cake!",
"The flu, or influenza, is an illness caused by influenza viruses. Common symptoms of the flu include a high fever, chills, cough, sore throat, runny or stuffy nose, body aches, headache, fatigue, and sometimes nausea and vomiting. These symptoms can come on suddenly and are usually more severe than the common cold. It's important to get plenty of rest, stay hydrated, and consult a healthcare professional if you suspect you have the flu. In some cases, antiviral medications can help alleviate symptoms and reduce the duration of the illness."
]
embeddings = model.encode(queries + passages)
scores = util.cos_sim(embeddings[:2], embeddings[2:]) * 100
print(scores.tolist())
# [[86.71537780761719, 36.645721435546875], [35.00497055053711, 82.07388305664062]]
```
### MTEB Benchmark Evaluation
Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB](https://arxiv.org/abs/2210.07316) benchmarks.
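For a lighter-weight local check, the `mteb` Python package can also run individual tasks directly. The sketch below is illustrative only: the task choice and output folder are arbitrary, and it does not add the query-side instructions that the retrieval tasks expect, so it is not the official evaluation setup.
```python
# Illustrative sketch: run a single MTEB task with the `mteb` package.
# Not the official evaluation setup; retrieval tasks additionally require
# the query instruction formatting shown above, which this omits.
from mteb import MTEB
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Salesforce/SFR-Embedding-Mistral")
evaluation = MTEB(tasks=["Banking77Classification"])  # any MTEB task name
evaluation.run(model, output_folder="results/sfr-embedding-mistral")
```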
SFR-Embedding Team (∗ indicates lead contributors).
* Rui Meng*
* Ye Liu*
* Shafiq Rayhan Joty
* Caiming Xiong
* Yingbo Zhou
* Semih Yavuz
### Citation
```bibtex
@misc{SFRAIResearch2024,
  title={SFR-Embedding-Mistral: Enhance Text Retrieval with Transfer Learning},
  author={Rui Meng and Ye Liu and Shafiq Rayhan Joty and Caiming Xiong and Yingbo Zhou and Semih Yavuz},
howpublished={Salesforce AI Research Blog},
year={2024},
url={https://blog.salesforceairesearch.com/sfr-embedded-mistral/}
}
```
|
hossay/stool-condition-classification
|
hossay
| 2024-03-25T05:21:27Z | 205 | 1 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:generator",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-01-03T07:26:33Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- image-classification
- generated_from_trainer
datasets:
- generator
metrics:
- accuracy
- f1
model-index:
- name: stool-condition-classification
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: stool-image
type: generator
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.941747572815534
- name: F1
type: f1
value: 0.9302325581395349
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# stool-condition-classification
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the stool-image dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4237
- AUROC: 0.9418
- Accuracy: 0.9417
- Sensitivity: 0.9091
- Specificity: 0.9661
- PPV: 0.9524
- NPV: 0.9344
- F1: 0.9302
- Model Selection: 0.9215
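A minimal inference sketch is shown below (it assumes the standard `transformers` image-classification pipeline; the image path is a placeholder):
```python
# Minimal inference sketch; the image path is a placeholder.
from transformers import pipeline
from PIL import Image

classifier = pipeline("image-classification", model="hossay/stool-condition-classification")
image = Image.open("path/to/stool_image.jpg")
print(classifier(image))  # list of {label, score} dicts
```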
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
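These settings correspond roughly to the following `TrainingArguments` (a sketch only; the output directory is a placeholder, and model/dataset wiring is omitted):
```python
# Rough TrainingArguments equivalent of the settings above (sketch only).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="stool-condition-classification",  # placeholder
    learning_rate=2e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```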
### Training results
| Training Loss | Epoch | Step | Validation Loss | AUROC | Accuracy | Sensitivity | Specificity | PPV | NPV | F1 | Model Selection |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|:-----------:|:----------:|:------:|:------:|:------:|:---------------:|
| 0.5076 | 0.98 | 100 | 0.5361 | 0.8538 | 0.7731 | 0.5393 | 0.9801 | 0.96 | 0.7061 | 0.6906 | 0.5592 |
| 0.4086 | 1.96 | 200 | 0.4857 | 0.8728 | 0.7836 | 0.6011 | 0.9453 | 0.9068 | 0.7280 | 0.7230 | 0.6558 |
| 0.5208 | 2.94 | 300 | 0.5109 | 0.8059 | 0.7599 | 0.6124 | 0.8905 | 0.8321 | 0.7218 | 0.7055 | 0.7218 |
| 0.474 | 3.92 | 400 | 0.5212 | 0.8601 | 0.7995 | 0.6180 | 0.9602 | 0.9322 | 0.7395 | 0.7432 | 0.6578 |
| 0.4285 | 4.9 | 500 | 0.4511 | 0.8728 | 0.7757 | 0.7472 | 0.8010 | 0.7688 | 0.7816 | 0.7578 | 0.9462 |
| 0.3506 | 5.88 | 600 | 0.4716 | 0.8691 | 0.8047 | 0.6798 | 0.9154 | 0.8768 | 0.7635 | 0.7658 | 0.7644 |
| 0.4239 | 6.86 | 700 | 0.5043 | 0.8517 | 0.8100 | 0.6685 | 0.9353 | 0.9015 | 0.7611 | 0.7677 | 0.7332 |
| 0.2447 | 7.84 | 800 | 0.5804 | 0.8592 | 0.8074 | 0.6910 | 0.9104 | 0.8723 | 0.7689 | 0.7712 | 0.7806 |
| 0.1739 | 8.82 | 900 | 0.6225 | 0.8562 | 0.8074 | 0.7135 | 0.8905 | 0.8523 | 0.7783 | 0.7768 | 0.8229 |
| 0.2888 | 9.8 | 1000 | 0.5807 | 0.8570 | 0.8047 | 0.7528 | 0.8507 | 0.8171 | 0.7953 | 0.7836 | 0.9021 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.0.1
- Datasets 2.14.7
- Tokenizers 0.15.2
|
Smuggling1710/TurdusWestLakev2-IreneRP-Neural-7B-slerp
|
Smuggling1710
| 2024-03-25T05:18:03Z | 5 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Smuggling1710/WestLakev2-IreneRP-Neural-7B-slerp",
"udkai/Turdus",
"base_model:Smuggling1710/WestLakev2-IreneRP-Neural-7B-slerp",
"base_model:merge:Smuggling1710/WestLakev2-IreneRP-Neural-7B-slerp",
"base_model:udkai/Turdus",
"base_model:merge:udkai/Turdus",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-25T05:13:05Z |
---
tags:
- merge
- mergekit
- lazymergekit
- Smuggling1710/WestLakev2-IreneRP-Neural-7B-slerp
- udkai/Turdus
base_model:
- Smuggling1710/WestLakev2-IreneRP-Neural-7B-slerp
- udkai/Turdus
---
# TurdusWestLakev2-IreneRP-Neural-7B-slerp
TurdusWestLakev2-IreneRP-Neural-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Smuggling1710/WestLakev2-IreneRP-Neural-7B-slerp](https://huggingface.co/Smuggling1710/WestLakev2-IreneRP-Neural-7B-slerp)
* [udkai/Turdus](https://huggingface.co/udkai/Turdus)
## 🧩 Configuration
```yaml
slices:
  - sources:
      - model: Smuggling1710/WestLakev2-IreneRP-Neural-7B-slerp
        layer_range: [0, 32]
      - model: udkai/Turdus
        layer_range: [0, 32]
merge_method: slerp
base_model: Smuggling1710/WestLakev2-IreneRP-Neural-7B-slerp
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Smuggling1710/TurdusWestLakev2-IreneRP-Neural-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
chahyunmook/42dot_label
|
chahyunmook
| 2024-03-25T05:11:45Z | 174 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"license:cc-by-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-25T04:57:36Z |
---
library_name: transformers
license: cc-by-4.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tung491/ppo-SnowballTarget
|
tung491
| 2024-03-25T05:09:22Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2024-03-25T05:08:43Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: tung491/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
aillm456/finetuned-falcon-rw-1b-instruct-openorca
|
aillm456
| 2024-03-25T05:08:12Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"falcon",
"region:us"
] | null | 2024-03-25T05:01:21Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
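The values above correspond to a `BitsAndBytesConfig` along the lines of the sketch below (this is not the original training code; the base model id is read from the adapter config rather than assumed):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftConfig, PeftModel

# 4-bit NF4 quantization with double quantization, matching the values listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

adapter_id = "aillm456/finetuned-falcon-rw-1b-instruct-openorca"
# Read the base model id from the adapter config instead of hard-coding it.
base_id = PeftConfig.from_pretrained(adapter_id).base_model_name_or_path
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)
```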
### Framework versions
- PEFT 0.4.0
|
ytzi/multipls-gpt2-medium
|
ytzi
| 2024-03-25T04:58:07Z | 53 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-25T04:57:22Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
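Until the card is completed, a minimal sketch assuming the standard `transformers` text-generation API (the repository tags indicate a GPT-2-style causal LM):

```python
from transformers import pipeline

# Assumes a standard GPT-2-style causal LM, as the repository tags suggest.
generator = pipeline("text-generation", model="ytzi/multipls-gpt2-medium")

# The prompt is only an illustration; the intended prompt format is not documented here.
print(generator("def add(a, b):", max_new_tokens=40)[0]["generated_text"])
```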
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MY11111111/q-FrozenLake-v1-4x4-noSlippery
|
MY11111111
| 2024-03-25T04:55:56Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-03-25T04:55:53Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # newer course notebooks use gymnasium; use `import gym` on older setups

# load_from_hub is the helper defined in the Deep RL course notebook;
# it downloads and unpickles the saved Q-table dictionary from the Hub.
model = load_from_hub(repo_id="MY11111111/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
fazeelzafar/codellama-finetuned-Java-FINAL2
|
fazeelzafar
| 2024-03-25T04:51:24Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-25T04:45:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
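Until the card is completed, a minimal sketch assuming the standard `transformers` causal-LM API (the repository tags indicate a Llama-family text-generation model):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "fazeelzafar/codellama-finetuned-Java-FINAL2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# The prompt below is only an illustration; the expected prompt format is not documented here.
prompt = "// Write a Java method that reverses a string\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```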
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
andrian-kr/mistral-7b-ua-gec
|
andrian-kr
| 2024-03-25T04:35:47Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2024-03-21T21:45:18Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.2
model-index:
- name: mistral-7b-ua-gec
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-7b-ua-gec
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset.
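Because this repository ships a PEFT adapter rather than full model weights, one way to load it is to attach it to the base model named above (a sketch, assuming the standard `peft` API):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")

# Attach the fine-tuned adapter on top of the base instruct model.
model = PeftModel.from_pretrained(base, "andrian-kr/mistral-7b-ua-gec")
```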
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 5
### Training results
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.40.0.dev0
- Pytorch 2.1.2
- Datasets 2.16.0
- Tokenizers 0.15.0
|
chahyunmook/42dot-test-upload
|
chahyunmook
| 2024-03-25T04:32:45Z | 172 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"ko",
"arxiv:1910.09700",
"license:cc-by-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-25T04:18:59Z |
---
library_name: transformers
license: cc-by-4.0
language:
- ko
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
dewifaj/alzheimer_mri_classification
|
dewifaj
| 2024-03-25T04:27:55Z | 17 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-03-24T14:48:35Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: alzheimer_mri_classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# alzheimer_mri_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3404
- Accuracy: 0.8770
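For inference, a minimal sketch using the standard `transformers` image-classification pipeline (the label names depend on the training dataset, which is not documented here):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="dewifaj/alzheimer_mri_classification")

# The file path is only an example; pass any MRI slice image.
print(classifier("example_mri_slice.png"))
```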
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 128 | 0.8345 | 0.5996 |
| No log | 2.0 | 256 | 0.8245 | 0.6309 |
| No log | 3.0 | 384 | 0.7492 | 0.6543 |
| 0.8188 | 4.0 | 512 | 0.7173 | 0.6777 |
| 0.8188 | 5.0 | 640 | 0.6625 | 0.7168 |
| 0.8188 | 6.0 | 768 | 0.6182 | 0.7373 |
| 0.8188 | 7.0 | 896 | 0.5058 | 0.8027 |
| 0.5344 | 8.0 | 1024 | 0.5567 | 0.7764 |
| 0.5344 | 9.0 | 1152 | 0.4702 | 0.8193 |
| 0.5344 | 10.0 | 1280 | 0.4502 | 0.8242 |
| 0.5344 | 11.0 | 1408 | 0.4024 | 0.8408 |
| 0.3356 | 12.0 | 1536 | 0.4263 | 0.8516 |
| 0.3356 | 13.0 | 1664 | 0.3782 | 0.8535 |
| 0.3356 | 14.0 | 1792 | 0.3378 | 0.8604 |
| 0.3356 | 15.0 | 1920 | 0.3570 | 0.8701 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
apexmin/duck_toy
|
apexmin
| 2024-03-25T04:19:04Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-03-13T01:01:07Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of sks toy
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - apexmin/duck_toy
This is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks toy using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.




DreamBooth for the text encoder was enabled: False.
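A minimal inference sketch, assuming the standard `diffusers` text-to-image API and the instance prompt listed above:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("apexmin/duck_toy", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# "sks" is the rare-token identifier this DreamBooth run was trained with.
image = pipe("a photo of sks toy on a sandy beach", num_inference_steps=50).images[0]
image.save("duck_toy.png")
```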
|
KeyonZeng/philion-2
|
KeyonZeng
| 2024-03-25T04:13:36Z | 134 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi",
"text-generation",
"custom_code",
"en",
"dataset:argilla/distilabel-intel-orca-dpo-pairs",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-17T05:19:44Z |
---
library_name: transformers
license: apache-2.0
datasets:
- argilla/distilabel-intel-orca-dpo-pairs
language:
- en
metrics:
- accuracy
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tung491/Reinforce-PixelCoper
|
tung491
| 2024-03-25T04:06:41Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-03-25T04:05:45Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCoper
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 21.20 +/- 15.12
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
beethovenlab/vit-model-jorge-depaz
|
beethovenlab
| 2024-03-25T03:56:59Z | 196 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-03-25T03:41:28Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-model-jorge-depaz
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-model-jorge-depaz
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0435
- Accuracy: 0.9925
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0491 | 3.85 | 500 | 0.0435 | 0.9925 |
### Framework versions
- Transformers 4.39.1
- Pytorch 2.1.1
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Severian/Nexus-IKM-Hermes-2-Pro-Mistral-7B
|
Severian
| 2024-03-25T03:50:09Z | 60 | 3 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"dataset:Severian/Internal-Knowledge-Map",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-14T10:53:56Z |
---
license: mit
datasets:
- Severian/Internal-Knowledge-Map
pipeline_tag: text-generation
---
## This model has been trained for 2 epochs using Unsloth on the Internal Knowledge Map dataset.
```
==((====))== Unsloth - 2x faster free finetuning | Num GPUs = 1
\\ /| Num examples = 3,555 | Num Epochs = 2
O^O/ \_/ \ Batch size per device = 4 | Gradient Accumulation steps = 4
\ / Total batch size = 16 | Total steps = 444
"-____-" Number of trainable parameters = 83,886,080
[444/444 25:17, Epoch 1/2]
Step Training Loss
1 3.133100
2 3.086100
3 3.045000
4 3.075100
5 3.086000
6 3.042100
7 3.018100
8 3.036100
9 2.986900
10 2.990600
11 2.949400
12 2.933200
13 2.899800
14 2.885900
15 2.928400
16 2.855700
17 2.805000
18 2.787100
19 2.807400
20 2.765600
21 2.794500
22 2.758400
23 2.753700
24 2.757400
25 2.669900
26 2.653900
27 2.708400
28 2.705100
29 2.695900
30 2.590100
31 2.615900
32 2.577500
33 2.571700
34 2.596400
35 2.570700
36 2.558600
37 2.524600
38 2.640500
39 2.506400
40 2.521900
41 2.519800
42 2.459700
43 2.388900
44 2.425400
45 2.387800
46 2.360600
47 2.376000
48 2.391600
49 2.321100
50 2.357600
51 2.325800
52 2.311800
53 2.255600
54 2.313900
55 2.200900
56 2.250800
57 2.242500
58 2.173000
59 2.261000
60 2.150500
61 2.162500
62 2.086800
63 2.178500
64 2.085600
65 2.068800
66 2.146500
67 2.001800
68 2.037600
69 2.009000
70 1.983300
71 1.931400
72 1.990400
73 1.944700
74 1.972700
75 2.002400
76 2.022400
77 1.900500
78 1.843100
79 1.887400
80 1.970700
81 1.820800
82 1.853900
83 1.744200
84 1.831400
85 1.768900
86 2.006100
87 1.681900
88 1.750000
89 1.628100
90 1.586900
91 1.567900
92 1.554500
93 1.830800
94 1.512500
95 1.592400
96 1.518600
97 1.593700
98 1.454100
99 1.497200
100 1.319700
101 1.363300
102 1.414300
103 1.343900
104 1.363500
105 1.449000
106 1.510100
107 1.268600
108 1.156600
109 1.075100
110 1.137200
111 1.020700
112 0.993600
113 1.195200
114 0.993300
115 1.072100
116 1.116900
117 1.184100
118 1.102600
119 1.083800
120 0.852100
121 1.023600
122 1.051200
123 1.270500
124 0.856200
125 1.089500
126 0.686800
127 0.800300
128 0.662400
129 0.688000
130 0.554400
131 0.737200
132 0.802900
133 0.538200
134 0.562000
135 0.516800
136 0.497200
137 0.611100
138 0.581200
139 0.442000
140 0.355200
141 0.473200
142 0.559600
143 0.683700
144 0.355300
145 0.343000
146 0.525300
147 0.442100
148 0.452900
149 0.478800
150 0.311300
151 0.535500
152 0.552600
153 0.252800
154 0.479200
155 0.539500
156 0.477200
157 0.283000
158 0.265100
159 0.352000
160 0.268500
161 0.711900
162 0.411300
163 0.377100
164 0.360500
165 0.311000
166 0.490800
167 0.269300
168 0.409600
169 0.147800
170 0.144600
171 0.223600
172 0.615300
173 0.218900
174 0.136400
175 0.133200
176 0.263200
177 0.363600
178 0.127700
179 0.238900
180 0.276200
181 0.306400
182 0.122000
183 0.302400
184 0.049500
185 0.406500
186 0.246400
187 0.429900
188 0.216900
189 0.320700
190 0.472800
191 0.159900
192 0.287500
193 0.334400
194 0.136100
195 0.233400
196 0.164100
197 0.196100
198 0.153300
199 0.251000
200 0.087500
201 0.083000
202 0.104900
203 0.157700
204 0.080300
205 0.280500
206 0.372100
207 0.150400
208 0.112900
209 0.265400
210 0.075800
211 0.082700
212 0.343000
213 0.081900
214 0.360400
215 0.261200
216 0.072000
217 0.249400
218 0.211600
219 0.304500
220 0.289300
221 0.209400
222 0.067800
223 0.144500
224 0.078600
225 0.143500
226 0.377800
227 0.222300
228 0.279800
229 0.063400
230 0.120400
231 0.214000
232 0.121600
233 0.360400
234 0.168600
235 0.206300
236 0.075800
237 0.033800
238 0.059700
239 0.227500
240 0.212800
241 0.186600
242 0.223400
243 0.033600
244 0.204600
245 0.033600
246 0.600600
247 0.105800
248 0.198400
249 0.255100
250 0.226500
251 0.104700
252 0.128700
253 0.088300
254 0.158600
255 0.033200
256 0.261900
257 0.320500
258 0.140100
259 0.266200
260 0.087300
261 0.085400
262 0.240300
263 0.308800
264 0.033000
265 0.120300
266 0.156400
267 0.083200
268 0.199200
269 0.052000
270 0.116600
271 0.144000
272 0.237700
273 0.214700
274 0.180600
275 0.334200
276 0.032800
277 0.101700
278 0.078800
279 0.163300
280 0.032700
281 0.098000
282 0.126500
283 0.032600
284 0.110000
285 0.063500
286 0.382900
287 0.193200
288 0.264400
289 0.119000
290 0.189500
291 0.274900
292 0.102100
293 0.101000
294 0.197300
295 0.083300
296 0.153000
297 0.057500
298 0.335000
299 0.150400
300 0.044300
301 0.317200
302 0.073700
303 0.217200
304 0.043100
305 0.061800
306 0.100500
307 0.088800
308 0.153700
309 0.157200
310 0.086700
311 0.114000
312 0.077200
313 0.092000
314 0.167700
315 0.237000
316 0.215800
317 0.058100
318 0.077200
319 0.162900
320 0.122400
321 0.171100
322 0.142000
323 0.032100
324 0.098500
325 0.059400
326 0.038500
327 0.089000
328 0.123200
329 0.190200
330 0.051700
331 0.087400
332 0.198400
333 0.073500
334 0.073100
335 0.176600
336 0.186100
337 0.183000
338 0.106100
339 0.064700
340 0.136500
341 0.085600
342 0.115400
343 0.106000
344 0.065800
345 0.143100
346 0.137300
347 0.251000
348 0.067200
349 0.181600
350 0.084600
351 0.108800
352 0.114600
353 0.043200
354 0.241500
355 0.031800
356 0.150500
357 0.063700
358 0.036100
359 0.158100
360 0.045700
361 0.120200
362 0.035800
363 0.050200
364 0.031700
365 0.044000
366 0.035400
367 0.035300
368 0.162500
369 0.044400
370 0.132700
371 0.054300
372 0.049100
373 0.031500
374 0.038000
375 0.084900
376 0.059000
377 0.034500
378 0.049200
379 0.058100
380 0.122700
381 0.096400
382 0.034300
383 0.071700
384 0.059300
385 0.048500
386 0.051000
387 0.063000
388 0.131400
389 0.031100
390 0.076700
391 0.072200
392 0.146300
393 0.031000
394 0.031000
395 0.099200
396 0.049000
397 0.104100
398 0.087400
399 0.097100
400 0.069800
401 0.034900
402 0.035300
403 0.057400
404 0.058000
405 0.041100
406 0.083400
407 0.090000
408 0.098600
409 0.106100
410 0.052600
411 0.057800
412 0.085500
413 0.061600
414 0.034000
415 0.079700
416 0.036800
417 0.034600
418 0.073800
419 0.047900
420 0.041100
421 0.046300
422 0.030600
423 0.064200
424 0.045900
425 0.045600
426 0.032900
427 0.048800
428 0.041700
429 0.048200
430 0.035800
431 0.058200
432 0.044100
433 0.033400
434 0.046100
435 0.042800
436 0.034900
437 0.045800
438 0.055800
439 0.030300
440 0.059600
441 0.030200
442 0.052700
443 0.030200
444 0.035600
```
|
apexmin/colorful_sneaker
|
apexmin
| 2024-03-25T03:49:53Z | 1 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-03-13T00:46:02Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of sks sneaker
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - apexmin/colorful_sneaker
This is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks sneaker using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.




DreamBooth for the text encoder was enabled: False.
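As with the other DreamBooth checkpoints from this author, a short `diffusers` sketch (assumes a CUDA device and that fp16 weights are acceptable):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("apexmin/colorful_sneaker", torch_dtype=torch.float16).to("cuda")

# "a photo of sks sneaker" is the instance prompt this checkpoint was trained on.
images = pipe("a photo of sks sneaker, studio lighting", guidance_scale=7.5, num_images_per_prompt=2).images
for i, img in enumerate(images):
    img.save(f"sneaker_{i}.png")
```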
|
0x0son0/nr_m16
|
0x0son0
| 2024-03-25T03:48:04Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-25T02:56:10Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
feecha/ObjLlama-7b-hf
|
feecha
| 2024-03-25T03:43:32Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-2",
"code",
"arxiv:2308.12950",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-25T03:38:41Z |
---
language:
- code
pipeline_tag: text-generation
tags:
- llama-2
license: llama2
---
# **Code Llama**
Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 34 billion parameters. This is the repository for the 7B instruct-tuned version in the Hugging Face Transformers format. This model is designed for general code synthesis and understanding. Links to other models can be found in the index at the bottom.
| | Base Model | Python | Instruct |
| --- | ----------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------- |
| 7B | [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) | [codellama/CodeLlama-7b-Python-hf](https://huggingface.co/codellama/CodeLlama-7b-Python-hf) | [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) |
| 13B | [codellama/CodeLlama-13b-hf](https://huggingface.co/codellama/CodeLlama-13b-hf) | [codellama/CodeLlama-13b-Python-hf](https://huggingface.co/codellama/CodeLlama-13b-Python-hf) | [codellama/CodeLlama-13b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) |
| 34B | [codellama/CodeLlama-34b-hf](https://huggingface.co/codellama/CodeLlama-34b-hf) | [codellama/CodeLlama-34b-Python-hf](https://huggingface.co/codellama/CodeLlama-34b-Python-hf) | [codellama/CodeLlama-34b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-34b-Instruct-hf) |
## Model Use
To use this model, please make sure to install transformers from `main` until the next version is released:
```bash
pip install git+https://github.com/huggingface/transformers.git@main accelerate
```
Model capabilities:
- [x] Code completion.
- [x] Infilling.
- [x] Instructions / chat.
- [ ] Python specialist.
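A minimal completion sketch, assuming the standard `transformers` API (it mirrors the upstream Code Llama examples and is not specific to this particular upload):

```python
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="feecha/ObjLlama-7b-hf",
    torch_dtype=torch.float16,
    device_map="auto",
)

# Plain code-completion prompt; for chat-style use, wrap the request in [INST] ... [/INST].
result = generator("def fibonacci(", max_new_tokens=64, do_sample=True, temperature=0.1)
print(result[0]["generated_text"])
```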
## Model Details
*Note: Use of this model is governed by the Meta license.* Meta developed and publicly released the Code Llama family of large language models (LLMs).
**Model Developers** Meta
**Variations** Code Llama comes in three model sizes, and three variants:
* Code Llama: base models designed for general code synthesis and understanding
* Code Llama - Python: designed specifically for Python
* Code Llama - Instruct: for instruction following and safer deployment
All variants are available in sizes of 7B, 13B and 34B parameters.
**This repository contains the Instruct version of the 7B parameters model.**
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Code Llama is an auto-regressive language model that uses an optimized transformer architecture.
**Model Dates** Code Llama and its variants have been trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of Code Llama - Instruct will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
**Research Paper** More information can be found in the paper "[Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/)" or its [arXiv page](https://arxiv.org/abs/2308.12950).
## Intended Use
**Intended Use Cases** Code Llama and its variants are intended for commercial and research use in English and relevant programming languages. The base model Code Llama can be adapted for a variety of code synthesis and understanding tasks, Code Llama - Python is designed specifically to handle the Python programming language, and Code Llama - Instruct is intended to be safer to use for code assistant and generation applications.
**Out-of-Scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Code Llama and its variants.
## Hardware and Software
**Training Factors** We used custom training libraries. The training and fine-tuning of the released models have been performed on Meta’s Research Super Cluster.
**Carbon Footprint** In aggregate, training all 9 Code Llama models required 400K GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 65.3 tCO2eq, 100% of which were offset by Meta’s sustainability program.
## Training Data
All experiments reported here and the released models have been trained and fine-tuned using the same data as Llama 2 with different weights (see Section 2 and Table 1 in the [research paper](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) for details).
## Evaluation Results
See evaluations for the main models and detailed ablations in Section 3 and safety evaluations in Section 4 of the research paper.
## Ethical Considerations and Limitations
Code Llama and its variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Code Llama’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of Code Llama, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-user-guide](https://ai.meta.com/llama/responsible-user-guide).
|
sphynxlee/q-FrozenLake-v1-4x4-noSlippery
|
sphynxlee
| 2024-03-25T03:40:15Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-03-25T03:39:30Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
|
JasperGrant/ASTBERT-gb-5k-methods-multipath
|
JasperGrant
| 2024-03-25T03:33:51Z | 80 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"fill-mask",
"generated_from_keras_callback",
"base_model:microsoft/graphcodebert-base",
"base_model:finetune:microsoft/graphcodebert-base",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-03-25T01:27:54Z |
---
base_model: microsoft/graphcodebert-base
tags:
- generated_from_keras_callback
model-index:
- name: ASTBERT-gb-5k-methods-multipath
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ASTBERT-gb-5k-methods-multipath
This model is a fine-tuned version of [microsoft/graphcodebert-base](https://huggingface.co/microsoft/graphcodebert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1757
- Train Accuracy: 0.9637
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 1e-04, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Epoch |
|:----------:|:--------------:|:-----:|
| 1.0763 | 0.8934 | 0 |
| 0.5565 | 0.9292 | 1 |
| 0.5056 | 0.9298 | 2 |
| 0.4761 | 0.9317 | 3 |
| 0.4387 | 0.9346 | 4 |
| 0.3948 | 0.9375 | 5 |
| 0.3434 | 0.9418 | 6 |
| 0.2872 | 0.9474 | 7 |
| 0.2297 | 0.9550 | 8 |
| 0.1757 | 0.9637 | 9 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.10.0
- Datasets 2.18.0
- Tokenizers 0.13.3
|
0x9/matrix-large-0.7B-v3
|
0x9
| 2024-03-25T03:22:11Z | 195 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-25T03:21:04Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
PGKChaitanya/corgy_dog_LoRA1
|
PGKChaitanya
| 2024-03-25T03:20:22Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"diffusers-training",
"dora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-03-25T02:31:07Z |
---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- diffusers-training
- diffusers
- dora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of TOK dog
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - PGKChaitanya/corgy_dog_LoRA1
<Gallery />
## Model description
These are PGKChaitanya/corgy_dog_LoRA1 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of TOK dog` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](PGKChaitanya/corgy_dog_LoRA1/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
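Since the snippet above is left as a TODO in the card, here is a minimal, illustrative sketch of how LoRA weights in this format are typically loaded with diffusers; the repo id, base model, and trigger phrase come from this card, while the dtype, device, prompt, and step count are assumptions:
```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the SDXL base model the LoRA was trained against.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Attach the LoRA adaptation weights from this repository.
pipe.load_lora_weights("PGKChaitanya/corgy_dog_LoRA1")

# Use the trigger phrase from the card to activate the learned subject.
image = pipe("a photo of TOK dog sitting in a park", num_inference_steps=30).images[0]
image.save("tok_dog.png")
```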
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
kennethge123/superglue_rte-bert-base-uncased
|
kennethge123
| 2024-03-25T03:18:53Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-23T19:45:08Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: superglue_rte-bert-base-uncased
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: super_glue
type: super_glue
config: rte
split: validation
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.6739130434782609
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# superglue_rte-bert-base-uncased
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5070
- Accuracy: 0.6739
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.704 | 1.0 | 623 | 0.6653 | 0.6159 |
| 0.6848 | 2.0 | 1246 | 0.7144 | 0.4203 |
| 0.7083 | 3.0 | 1869 | 0.6922 | 0.5797 |
| 0.7014 | 4.0 | 2492 | 0.7327 | 0.6232 |
| 0.6528 | 5.0 | 3115 | 0.6727 | 0.6522 |
| 0.6471 | 6.0 | 3738 | 0.8413 | 0.6159 |
| 0.5872 | 7.0 | 4361 | 0.8780 | 0.5507 |
| 0.5954 | 8.0 | 4984 | 0.7604 | 0.6377 |
| 0.5566 | 9.0 | 5607 | 0.8578 | 0.6812 |
| 0.5576 | 10.0 | 6230 | 2.0498 | 0.5362 |
| 0.4923 | 11.0 | 6853 | 1.4097 | 0.6304 |
| 0.5688 | 12.0 | 7476 | 1.4146 | 0.6667 |
| 0.433 | 13.0 | 8099 | 1.3354 | 0.6594 |
| 0.4259 | 14.0 | 8722 | 1.3271 | 0.6957 |
| 0.3869 | 15.0 | 9345 | 1.2881 | 0.6812 |
| 0.3641 | 16.0 | 9968 | 1.4485 | 0.6739 |
| 0.3292 | 17.0 | 10591 | 1.3445 | 0.6739 |
| 0.3734 | 18.0 | 11214 | 1.4917 | 0.6739 |
| 0.3227 | 19.0 | 11837 | 1.5281 | 0.6739 |
| 0.3133 | 20.0 | 12460 | 1.5070 | 0.6739 |
### Framework versions
- Transformers 4.32.1
- Pytorch 1.13.0+cu117
- Datasets 2.15.0
- Tokenizers 0.13.3
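Not part of the original card: a short sketch of querying the classifier on a premise/hypothesis pair via the transformers pipeline. The repo id comes from this card; the label names returned depend on how the RTE head was configured during fine-tuning.
```python
from transformers import pipeline

# Load the fine-tuned RTE model from this repository.
classifier = pipeline("text-classification", model="kennethge123/superglue_rte-bert-base-uncased")

# RTE is a sentence-pair task, so pass the premise and hypothesis together.
result = classifier({"text": "A man is playing a guitar on stage.",
                     "text_pair": "Someone is performing music."})
print(result)
```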
|
apexmin/clock
|
apexmin
| 2024-03-25T03:17:57Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-03-13T00:35:12Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of sks clock
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - apexmin/clock
This is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks clock using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.




DreamBooth for the text encoder was enabled: False.
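As a usage sketch (not from the original card), the tuned pipeline can be loaded directly with diffusers and prompted with the instance phrase; the repo id and instance prompt come from this card, and the dtype, device, and sampler settings are assumptions:
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the DreamBooth-tuned pipeline pushed to this repository.
pipe = StableDiffusionPipeline.from_pretrained(
    "apexmin/clock", torch_dtype=torch.float16
).to("cuda")

# Prompt with the instance phrase the model was trained on ("a photo of sks clock").
image = pipe("a photo of sks clock on a wooden desk", num_inference_steps=50).images[0]
image.save("sks_clock.png")
```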
|
JasperGrant/ASTBERT-cb-5k-methods-multipath
|
JasperGrant
| 2024-03-25T03:08:17Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"fill-mask",
"generated_from_keras_callback",
"base_model:microsoft/codebert-base-mlm",
"base_model:finetune:microsoft/codebert-base-mlm",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-03-25T01:03:39Z |
---
base_model: microsoft/codebert-base-mlm
tags:
- generated_from_keras_callback
model-index:
- name: ASTBERT-cb-5k-methods-multipath
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ASTBERT-cb-5k-methods-multipath
This model is a fine-tuned version of [microsoft/codebert-base-mlm](https://huggingface.co/microsoft/codebert-base-mlm) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1751
- Train Accuracy: 0.9639
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 1e-04, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Epoch |
|:----------:|:--------------:|:-----:|
| 1.0543 | 0.8955 | 0 |
| 0.5551 | 0.9292 | 1 |
| 0.5051 | 0.9299 | 2 |
| 0.4762 | 0.9317 | 3 |
| 0.4408 | 0.9340 | 4 |
| 0.3965 | 0.9372 | 5 |
| 0.3451 | 0.9416 | 6 |
| 0.2883 | 0.9473 | 7 |
| 0.2303 | 0.9548 | 8 |
| 0.1751 | 0.9639 | 9 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.10.0
- Datasets 2.18.0
- Tokenizers 0.13.3
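Not in the original card: a rough sketch of querying the model as a masked-language model. It assumes the TensorFlow checkpoint loads through the transformers pipeline and uses the tokenizer's own mask token rather than hard-coding one; the example input is illustrative.
```python
from transformers import pipeline

# Load the fine-tuned masked-language model from this repository (TF weights assumed).
fill = pipeline("fill-mask", model="JasperGrant/ASTBERT-cb-5k-methods-multipath", framework="tf")

# Mask one identifier in a small code snippet and ask for completions.
masked = f"public int getSize() {{ return this.{fill.tokenizer.mask_token}; }}"
print(fill(masked))
```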
|
wdavies/extract-question-from-text
|
wdavies
| 2024-03-25T03:08:12Z | 113 | 0 |
transformers
|
[
"transformers",
"onnx",
"safetensors",
"distilbert",
"question-answering",
"license:other",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-02-24T22:54:24Z |
---
license: other
license_name: none
license_link: LICENSE
---
|
wdavies/is-answer-in-text
|
wdavies
| 2024-03-25T03:07:38Z | 120 | 0 |
transformers
|
[
"transformers",
"onnx",
"safetensors",
"distilbert",
"text-classification",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-26T01:11:01Z |
---
license: other
license_name: none
license_link: LICENSE
---
|
wdavies/is-question-in-text
|
wdavies
| 2024-03-25T03:07:06Z | 108 | 0 |
transformers
|
[
"transformers",
"onnx",
"safetensors",
"distilbert",
"text-classification",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-01T01:11:51Z |
---
license: other
license_name: none
license_link: LICENSE
---
|
Sumail/zhun03
|
Sumail
| 2024-03-25T03:04:23Z | 139 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"base_model:Sumail/copy_sarak7_v1",
"base_model:merge:Sumail/copy_sarak7_v1",
"base_model:Sumail/copy_sarak7_v4",
"base_model:merge:Sumail/copy_sarak7_v4",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-25T02:40:38Z |
---
base_model:
- Sumail/copy_sarak7_v1
- Sumail/copy_sarak7_v4
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [Sumail/copy_sarak7_v1](https://huggingface.co/Sumail/copy_sarak7_v1)
* [Sumail/copy_sarak7_v4](https://huggingface.co/Sumail/copy_sarak7_v4)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Sumail/copy_sarak7_v1
layer_range: [0, 12]
- model: Sumail/copy_sarak7_v4
layer_range: [0, 12]
merge_method: slerp
base_model: Sumail/copy_sarak7_v1
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: float32
```
|
kennethge123/superglue-rte-gpt2-kd
|
kennethge123
| 2024-03-25T03:00:22Z | 4 | 0 |
pytorch
|
[
"pytorch",
"tensorboard",
"safetensors",
"gpt2",
"en",
"license:mit",
"region:us"
] | null | 2024-03-24T21:38:49Z |
---
language: en
license: mit
library_name: pytorch
---
# Plainly Optimized Network
Dataset: SUPERGLUE
Trainer Hyperparameters:
- `lr` = 5e-05
- `per_device_batch_size` = 8
- `gradient_accumulation_steps` = 2
- `weight_decay` = 1e-09
- `seed` = 42
|eval_loss|eval_accuracy|epoch|
|--|--|--|
|19.092|0.667|1.0|
|18.211|0.667|2.0|
|17.359|0.739|3.0|
|17.168|0.732|4.0|
|18.647|0.681|5.0|
|18.081|0.681|6.0|
|18.325|0.688|7.0|
|18.660|0.688|8.0|
|18.464|0.688|9.0|
|18.622|0.696|10.0|
|17.838|0.710|11.0|
|17.792|0.703|12.0|
|18.009|0.696|13.0|
|19.033|0.674|14.0|
|17.430|0.717|15.0|
|18.218|0.696|16.0|
|17.915|0.710|17.0|
|17.956|0.717|18.0|
|18.078|0.725|19.0|
|
zhendongw/prompt-diffusion-diffusers
|
zhendongw
| 2024-03-25T02:58:16Z | 50 | 1 |
diffusers
|
[
"diffusers",
"image-to-text",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
image-to-text
| 2024-03-25T01:41:13Z |
---
library_name: diffusers
base_models:
- runwayml/stable-diffusion-v1-5
- lllyasviel/ControlNet
pipeline_tag: image-to-text
---
[Prompt diffusion](https://huggingface.co/zhendongw/prompt-diffusion) converted to Diffusers.
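Not from the original card: since the checkpoint is stored in Diffusers format, a generic load could look like the sketch below. Prompt Diffusion is normally conditioned on paired example images, so the exact inference call depends on the converted pipeline class; treat this as illustrative only.
```python
from diffusers import DiffusionPipeline

# Generic load of the converted checkpoint; the concrete pipeline class is read from model_index.json.
pipe = DiffusionPipeline.from_pretrained("zhendongw/prompt-diffusion-diffusers")
print(type(pipe).__name__)
```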
|
rockyclh/llama-2-7b-chat-Glossary-financial-ratio
|
rockyclh
| 2024-03-25T02:54:39Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"autotrain",
"text-generation",
"peft",
"conversational",
"license:other",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-25T01:54:01Z |
---
tags:
- autotrain
- text-generation
- peft
library_name: transformers
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
pawkanarek/gemmatron6
|
pawkanarek
| 2024-03-25T02:42:29Z | 138 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"base_model:google/gemma-2b-it",
"base_model:finetune:google/gemma-2b-it",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-25T02:40:15Z |
---
license: other
base_model: google/gemma-2b-it
model-index:
- name: gemmatron6
results: []
---
This model is a fine-tuned version of [google/gemma-2b-it](https://huggingface.co/google/gemma-2b-it).
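Since the card stops here, a minimal generation sketch is given below. It assumes the repo keeps the gemma-2b-it chat template; the prompt and generation settings are illustrative only.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "pawkanarek/gemmatron6"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a chat-style prompt with the (assumed) gemma-2b-it template and generate a reply.
messages = [{"role": "user", "content": "Write one sentence about the Moon."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True))
```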
|
kennethge123/superglue-rte-bert-base-uncased-kd
|
kennethge123
| 2024-03-25T02:31:38Z | 4 | 0 |
pytorch
|
[
"pytorch",
"tensorboard",
"safetensors",
"bert",
"en",
"license:mit",
"region:us"
] | null | 2024-03-24T23:28:40Z |
---
language: en
license: mit
library_name: pytorch
---
# Plainly Optimized Network
Dataset: SUPERGLUE
Trainer Hyperparameters:
- `lr` = 5e-05
- `per_device_batch_size` = 8
- `gradient_accumulation_steps` = 2
- `weight_decay` = 1e-09
- `seed` = 42
|eval_loss|eval_accuracy|epoch|
|--|--|--|
|22.014|0.471|1.0|
|19.411|0.659|2.0|
|18.711|0.696|3.0|
|19.141|0.652|4.0|
|19.924|0.638|5.0|
|19.229|0.652|6.0|
|20.306|0.623|7.0|
|19.739|0.645|8.0|
|20.873|0.623|9.0|
|20.285|0.638|10.0|
|18.900|0.696|11.0|
|18.971|0.681|12.0|
|19.230|0.667|13.0|
|19.039|0.674|14.0|
|19.080|0.667|15.0|
|18.997|0.681|16.0|
|18.619|0.681|17.0|
|18.754|0.681|18.0|
|18.911|0.674|19.0|
|
HusseinEid/dqn-SpaceInvadersNoFrameskip-v4
|
HusseinEid
| 2024-03-25T02:28:35Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-03-24T15:55:09Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 743.50 +/- 220.47
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga HusseinEid -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga HusseinEid -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga HusseinEid
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
LameloBally/llama2-Merged32
|
LameloBally
| 2024-03-25T02:23:35Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-25T02:11:29Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
dyumat/rl4llm_uofm_ppo_unsuper_t5_arxiv
|
dyumat
| 2024-03-25T02:10:10Z | 107 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-03-25T02:09:40Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Sumail/zhun02
|
Sumail
| 2024-03-25T02:08:53Z | 138 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"base_model:Sumail/copy_sarak7_v1",
"base_model:merge:Sumail/copy_sarak7_v1",
"base_model:Sumail/copy_sarak7_v4",
"base_model:merge:Sumail/copy_sarak7_v4",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-25T02:08:03Z |
---
base_model:
- Sumail/copy_sarak7_v1
- Sumail/copy_sarak7_v4
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [Sumail/copy_sarak7_v1](https://huggingface.co/Sumail/copy_sarak7_v1)
* [Sumail/copy_sarak7_v4](https://huggingface.co/Sumail/copy_sarak7_v4)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Sumail/copy_sarak7_v1
layer_range: [0, 12]
- model: Sumail/copy_sarak7_v4
layer_range: [0, 12]
merge_method: slerp
base_model: Sumail/copy_sarak7_v1
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
|
MrPrjnce/q-FrozenLake-v1-4x4-noSlippery
|
MrPrjnce
| 2024-03-25T02:03:37Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-03-25T02:03:35Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
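# Note: `load_from_hub` is the small helper defined in the Deep RL course notebook
# (it downloads and unpickles the saved Q-table), and `gym` is expected to be
# imported beforehand, e.g. `import gymnasium as gym`.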
model = load_from_hub(repo_id="MrPrjnce/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
inoutro/phi2-ko-instruction-tune
|
inoutro
| 2024-03-25T01:57:57Z | 61 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi",
"text-generation",
"custom_code",
"ko",
"arxiv:1910.09700",
"license:cc-by-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-24T15:15:34Z |
---
language:
- ko
license: cc-by-3.0
---
# Model Card for Model ID
This model is a fine-tuned version of daekeun-ml/phi-2-ko-v0.1 with DeepSpeed.
Model size: 2.8B
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** inoutro
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** korean
- **License:** cc-by-3.0
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** daekeun-ml/phi-2-ko-v0.1
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AwesomeEmerald/BibleGPT
|
AwesomeEmerald
| 2024-03-25T01:57:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-03-25T01:56:54Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/mistral-7b-bnb-4bit
---
# Uploaded model
- **Developed by:** AwesomeEmerald
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
pillIdentifierAI/pillIdentifier
|
pillIdentifierAI
| 2024-03-25T01:56:09Z | 89 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"medical",
"en",
"license:agpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-02-03T16:30:30Z |
---
license: agpl-3.0
language:
- en
metrics:
- accuracy
library_name: transformers
tags:
- medical
pipeline_tag: image-classification
---
This model is part of a school project.
Built on the google/vit-base-patch16-224 vision transformer for image classification,
this pre-trained model is further fine-tuned on images of pills and tablets.
Since pills and tablets have three main identifying features (color, shape, and imprint),
the model aims to identify pills and tablets in images by extracting these features automatically.
The dataset utilized is from the U.S. Department of Health's Computational Photography Project for Pill Identification (C3PI).
DISCLAIMER: The accuracy of this model is currently low (<20%). Further training is ongoing to improve it.
Version 2: This version trains the pretrained model on only the 20 most common pills.
Unfortunately, its accuracy is still low (<30%).
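Not part of the original card: a minimal sketch of querying the classifier with the transformers image-classification pipeline. The repo id comes from this card; the image path and the returned labels depend on the pills the checkpoint was trained on.
```python
from transformers import pipeline

# Load the ViT-based pill classifier from this repository.
classifier = pipeline("image-classification", model="pillIdentifierAI/pillIdentifier")

# Classify a photo of a single pill (local path or URL).
predictions = classifier("pill_photo.jpg")
for p in predictions:
    print(f"{p['label']}: {p['score']:.3f}")
```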
|
BryanBradfo/vit-base-patch16-224-in21k-finetuned-lora-food101
|
BryanBradfo
| 2024-03-25T01:54:59Z | 9 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:adapter:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"region:us"
] | null | 2024-03-22T07:57:22Z |
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: google/vit-base-patch16-224-in21k
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-in21k-finetuned-lora-food101
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k-finetuned-lora-food101
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2034
- Accuracy: 0.94
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 9 | 0.5701 | 0.866 |
| 2.1862 | 2.0 | 18 | 0.2383 | 0.936 |
| 0.3244 | 3.0 | 27 | 0.2034 | 0.94 |
| 0.1904 | 4.0 | 36 | 0.2018 | 0.932 |
| 0.1786 | 5.0 | 45 | 0.1818 | 0.94 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.0
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
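Not in the original card: a rough inference sketch following the standard PEFT image-classification recipe. It assumes the adapter was trained on Food-101 (101 labels) and that the fine-tuned classifier head is stored with the adapter, so treat it as illustrative rather than exact.
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification
from peft import PeftModel

base_id = "google/vit-base-patch16-224-in21k"
adapter_id = "BryanBradfo/vit-base-patch16-224-in21k-finetuned-lora-food101"

processor = AutoImageProcessor.from_pretrained(base_id)
# Re-create a 101-way head (Food-101 assumed); the adapter is expected to carry the trained head weights.
base_model = AutoModelForImageClassification.from_pretrained(
    base_id, num_labels=101, ignore_mismatched_sizes=True
)
model = PeftModel.from_pretrained(base_model, adapter_id)

image = Image.open("dish.jpg")
inputs = processor(image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(int(logits.argmax(-1)))
```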
|
apexmin/berry_bowl
|
apexmin
| 2024-03-25T01:45:37Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-03-12T23:59:15Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of sks bowl
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - apexmin/berry_bowl
This is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks bowl using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.




DreamBooth for the text encoder was enabled: False.
|
bala3040/bala_sriram_gpt
|
bala3040
| 2024-03-25T01:38:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-25T01:38:26Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
haith1307/Reinforce-Cartpole
|
haith1307
| 2024-03-25T01:37:20Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-03-25T01:37:11Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Cartpole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 390.70 +/- 167.09
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Sumail/Barista08
|
Sumail
| 2024-03-25T01:32:12Z | 138 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:coffiee/g10",
"base_model:merge:coffiee/g10",
"base_model:coffiee/g9",
"base_model:merge:coffiee/g9",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-25T01:29:40Z |
---
base_model:
- coffiee/g9
- coffiee/g10
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [coffiee/g9](https://huggingface.co/coffiee/g9)
* [coffiee/g10](https://huggingface.co/coffiee/g10)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: coffiee/g9
layer_range: [0, 18]
- model: coffiee/g10
layer_range: [0, 18]
merge_method: slerp
base_model: coffiee/g10
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
|
4n3mone/Llama-2-7b-hf_DTS_FFT
|
4n3mone
| 2024-03-25T01:29:56Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:finetune:meta-llama/Llama-2-7b-hf",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-25T01:21:28Z |
---
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf_DTS
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft_outputs
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 5678
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.38.1
- Pytorch 2.2.1+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
ijwatson98/rlaif-gpt2-xsum-2403
|
ijwatson98
| 2024-03-25T01:19:39Z | 196 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-25T01:18:58Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
blockblockblock/Dolphin-2.8-slerp-bpw6
|
blockblockblock
| 2024-03-25T00:57:36Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"chatml",
"text-generation-inference",
"slerp",
"mergekit",
"merge",
"en",
"base_model:cognitivecomputations/dolphin-2.8-experiment26-7b",
"base_model:merge:cognitivecomputations/dolphin-2.8-experiment26-7b",
"base_model:yam-peleg/Experiment26-7B",
"base_model:merge:yam-peleg/Experiment26-7B",
"license:apache-2.0",
"6-bit",
"exl2",
"region:us"
] |
text-generation
| 2024-03-25T00:55:38Z |
---
tags:
- text-generation
- autotrain_compatible
- endpoints_compatible
- chatml
- text-generation-inference
- transformers
- slerp
- mistral
- mergekit
- merge
base_model:
- yam-peleg/Experiment26-7B
- cognitivecomputations/dolphin-2.8-experiment26-7b
library_name: transformers
license: apache-2.0
language:
- en
pipeline_tag: text-generation
thumbnail: "https://huggingface.co/pabloce/Dolphin-2.8-slerp/resolve/main/Dolphin-28-slerp.jpeg"
---
# Dolphin-2.8-slerp - merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [yam-peleg/Experiment26-7B](https://huggingface.co/yam-peleg/Experiment26-7B)
* [cognitivecomputations/dolphin-2.8-experiment26-7b](https://huggingface.co/cognitivecomputations/dolphin-2.8-experiment26-7b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: yam-peleg/Experiment26-7B
layer_range: [0, 32]
- model: cognitivecomputations/dolphin-2.8-experiment26-7b
layer_range: [0, 32]
merge_method: slerp
base_model: yam-peleg/Experiment26-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
|
Yuma42/KangalKhan-Beta-Sapphire-7B
|
Yuma42
| 2024-03-25T00:49:29Z | 51 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"kaist-ai/mistral-orpo-capybara-7k",
"argilla/distilabeled-OpenHermes-2.5-Mistral-7B",
"conversational",
"en",
"base_model:argilla/distilabeled-OpenHermes-2.5-Mistral-7B",
"base_model:merge:argilla/distilabeled-OpenHermes-2.5-Mistral-7B",
"base_model:kaist-ai/mistral-orpo-capybara-7k",
"base_model:merge:kaist-ai/mistral-orpo-capybara-7k",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-25T00:44:15Z |
---
tags:
- merge
- mergekit
- lazymergekit
- kaist-ai/mistral-orpo-capybara-7k
- argilla/distilabeled-OpenHermes-2.5-Mistral-7B
base_model:
- kaist-ai/mistral-orpo-capybara-7k
- argilla/distilabeled-OpenHermes-2.5-Mistral-7B
license: apache-2.0
language:
- en
---
# KangalKhan-Beta-Sapphire-7B
KangalKhan-Beta-Sapphire-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [kaist-ai/mistral-orpo-capybara-7k](https://huggingface.co/kaist-ai/mistral-orpo-capybara-7k)
* [argilla/distilabeled-OpenHermes-2.5-Mistral-7B](https://huggingface.co/argilla/distilabeled-OpenHermes-2.5-Mistral-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: kaist-ai/mistral-orpo-capybara-7k
layer_range: [0, 32]
- model: argilla/distilabeled-OpenHermes-2.5-Mistral-7B
layer_range: [0, 32]
merge_method: slerp
base_model: kaist-ai/mistral-orpo-capybara-7k
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Yuma42/KangalKhan-Beta-Sapphire-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
yoonyamm/ppo-Huggy
|
yoonyamm
| 2024-03-25T00:45:27Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2024-03-25T00:40:46Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to help you learn how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: yoonyamm/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
blockblockblock/Dolphin-2.8-slerp-bpw5.5
|
blockblockblock
| 2024-03-25T00:44:54Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"chatml",
"text-generation-inference",
"slerp",
"mergekit",
"merge",
"en",
"base_model:cognitivecomputations/dolphin-2.8-experiment26-7b",
"base_model:merge:cognitivecomputations/dolphin-2.8-experiment26-7b",
"base_model:yam-peleg/Experiment26-7B",
"base_model:merge:yam-peleg/Experiment26-7B",
"license:apache-2.0",
"exl2",
"region:us"
] |
text-generation
| 2024-03-25T00:43:02Z |
---
tags:
- text-generation
- autotrain_compatible
- endpoints_compatible
- chatml
- text-generation-inference
- transformers
- slerp
- mistral
- mergekit
- merge
base_model:
- yam-peleg/Experiment26-7B
- cognitivecomputations/dolphin-2.8-experiment26-7b
library_name: transformers
license: apache-2.0
language:
- en
pipeline_tag: text-generation
thumbnail: "https://huggingface.co/pabloce/Dolphin-2.8-slerp/resolve/main/Dolphin-28-slerp.jpeg"
---
# Dolphin-2.8-slerp - merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [yam-peleg/Experiment26-7B](https://huggingface.co/yam-peleg/Experiment26-7B)
* [cognitivecomputations/dolphin-2.8-experiment26-7b](https://huggingface.co/cognitivecomputations/dolphin-2.8-experiment26-7b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: yam-peleg/Experiment26-7B
layer_range: [0, 32]
- model: cognitivecomputations/dolphin-2.8-experiment26-7b
layer_range: [0, 32]
merge_method: slerp
base_model: yam-peleg/Experiment26-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
|
apexmin/backpack_dog
|
apexmin
| 2024-03-25T00:43:37Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-03-12T23:36:55Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of sks backpack
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - apexmin/backpack_dog
This is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks backpack using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.




DreamBooth for the text encoder was enabled: False.
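The card stops short of showing inference code. As a minimal, hedged sketch (assuming this repo loads as a standard `StableDiffusionPipeline` checkpoint, which is what DreamBooth training normally produces), generation with the instance prompt could look like:
```python
from diffusers import StableDiffusionPipeline
import torch

# Load the full DreamBooth checkpoint from this repository.
pipe = StableDiffusionPipeline.from_pretrained(
    "apexmin/backpack_dog", torch_dtype=torch.float16
).to("cuda")

# "a photo of sks backpack" is the instance prompt the weights were trained on.
image = pipe("a photo of sks backpack", num_inference_steps=50).images[0]
image.save("sks_backpack.png")
```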
|
jam15/bert-finetuned-p5
|
jam15
| 2024-03-25T00:40:05Z | 106 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-25T00:38:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
base_model: bert-base-uncased
model-index:
- name: bert-finetuned-p5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-p5
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
jamking/ppo-Huggy
|
jamking
| 2024-03-25T00:38:24Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2024-03-25T00:37:29Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to help you learn how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: jamking/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Mughees11/blue_jacket_1_LoRA_1000e
|
Mughees11
| 2024-03-25T00:34:55Z | 0 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-03-25T00:34:27Z |
---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of GCJ jacket
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - Mughees11/blue_jacket_1_LoRA_1000e
<Gallery />
## Model description
These are Mughees11/blue_jacket_1_LoRA_1000e LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of GCJ jacket to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](Mughees11/blue_jacket_1_LoRA_1000e/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
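# The TODO above is left open by the card author. Below is a minimal, hedged
# sketch (not the author's code): it assumes standard diffusers LoRA loading
# on top of the SDXL base model named in this card.
from diffusers import DiffusionPipeline
import torch

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Mughees11/blue_jacket_1_LoRA_1000e")

# "a photo of GCJ jacket" is the documented trigger phrase for these weights.
image = pipe("a photo of GCJ jacket", num_inference_steps=30).images[0]
image.save("gcj_jacket.png")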
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
blockblockblock/Dolphin-2.8-slerp-bpw5
|
blockblockblock
| 2024-03-25T00:32:18Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"chatml",
"text-generation-inference",
"slerp",
"mergekit",
"merge",
"en",
"base_model:cognitivecomputations/dolphin-2.8-experiment26-7b",
"base_model:merge:cognitivecomputations/dolphin-2.8-experiment26-7b",
"base_model:yam-peleg/Experiment26-7B",
"base_model:merge:yam-peleg/Experiment26-7B",
"license:apache-2.0",
"5-bit",
"exl2",
"region:us"
] |
text-generation
| 2024-03-25T00:30:29Z |
---
tags:
- text-generation
- autotrain_compatible
- endpoints_compatible
- chatml
- text-generation-inference
- transformers
- slerp
- mistral
- mergekit
- merge
base_model:
- yam-peleg/Experiment26-7B
- cognitivecomputations/dolphin-2.8-experiment26-7b
library_name: transformers
license: apache-2.0
language:
- en
pipeline_tag: text-generation
thumbnail: "https://huggingface.co/pabloce/Dolphin-2.8-slerp/resolve/main/Dolphin-28-slerp.jpeg"
---
# Dolphin-2.8-slerp - merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [yam-peleg/Experiment26-7B](https://huggingface.co/yam-peleg/Experiment26-7B)
* [cognitivecomputations/dolphin-2.8-experiment26-7b](https://huggingface.co/cognitivecomputations/dolphin-2.8-experiment26-7b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: yam-peleg/Experiment26-7B
layer_range: [0, 32]
- model: cognitivecomputations/dolphin-2.8-experiment26-7b
layer_range: [0, 32]
merge_method: slerp
base_model: yam-peleg/Experiment26-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
|
lunarsylph/gemmacell_v14
|
lunarsylph
| 2024-03-25T00:31:53Z | 138 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-25T00:22:43Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
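Until the author fills this in, a minimal, hedged sketch (assuming the checkpoint loads like a standard Gemma chat model, as the tags suggest) might be:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "lunarsylph/gemmacell_v14"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Build a chat-formatted prompt and generate a short reply.
messages = [{"role": "user", "content": "What is a large language model?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```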
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Smuggling1710/IreneRP-Neural-7B-slerp
|
Smuggling1710
| 2024-03-25T00:29:14Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Virt-io/Irene-RP-v3-7B",
"NurtureAI/neural-chat-7b-v3-16k",
"base_model:Virt-io/Irene-RP-v3-7B",
"base_model:finetune:Virt-io/Irene-RP-v3-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-25T00:24:04Z |
---
tags:
- merge
- mergekit
- lazymergekit
- Virt-io/Irene-RP-v3-7B
- NurtureAI/neural-chat-7b-v3-16k
base_model:
- Virt-io/Irene-RP-v3-7B
- NurtureAI/neural-chat-7b-v3-16k
---
# IreneRP-Neural-7B-slerp
IreneRP-Neural-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Virt-io/Irene-RP-v3-7B](https://huggingface.co/Virt-io/Irene-RP-v3-7B)
* [NurtureAI/neural-chat-7b-v3-16k](https://huggingface.co/NurtureAI/neural-chat-7b-v3-16k)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: Virt-io/Irene-RP-v3-7B
layer_range: [0, 32]
- model: NurtureAI/neural-chat-7b-v3-16k
layer_range: [0, 32]
merge_method: slerp
base_model: Virt-io/Irene-RP-v3-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Smuggling1710/IreneRP-Neural-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
wywang/reinforce-CartPole-v1-tweaked-params
|
wywang
| 2024-03-25T00:23:43Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-03-25T00:23:35Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: reinforce-CartPole-v1-tweaked-params
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
flapi514/marcelDiscord
|
flapi514
| 2024-03-25T00:06:02Z | 140 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-25T00:05:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
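Until the author fills this in, a minimal, hedged sketch (assuming the checkpoint works as an ordinary GPT-2-style text-generation model, as the tags suggest) might be:
```python
from transformers import pipeline

# Load the checkpoint as a plain text-generation pipeline.
generator = pipeline("text-generation", model="flapi514/marcelDiscord")

# The prompt below is only an illustration, not taken from the model card.
print(generator("Hello, how was your day?", max_new_tokens=50, do_sample=True)[0]["generated_text"])
```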
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Azazelle/Bianca-7b
|
Azazelle
| 2024-03-25T00:05:00Z | 0 | 0 |
transformers
|
[
"transformers",
"mergekit",
"merge",
"mistral",
"text-generation",
"base_model:Endevor/InfinityRP-v1-7B",
"base_model:merge:Endevor/InfinityRP-v1-7B",
"base_model:NeverSleep/Noromaid-7B-0.4-DPO",
"base_model:merge:NeverSleep/Noromaid-7B-0.4-DPO",
"base_model:Nexusflow/Starling-LM-7B-beta",
"base_model:merge:Nexusflow/Starling-LM-7B-beta",
"base_model:jan-hq/supermario-slerp-v3",
"base_model:merge:jan-hq/supermario-slerp-v3",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:merge:mistralai/Mistral-7B-v0.1",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-24T22:43:53Z |
---
pipeline_tag: text-generation
base_model:
- mistralai/Mistral-7B-v0.1
- jan-hq/supermario-slerp-v3
- Endevor/InfinityRP-v1-7B
- Nexusflow/Starling-LM-7B-beta
- NeverSleep/Noromaid-7B-0.4-DPO
library_name: transformers
tags:
- mergekit
- merge
- mistral
license: cc-by-4.0
---
# Basic-Sanity
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the rescaled_sample merge method using [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) as a base.
### Models Merged
The following models were included in the merge:
* [jan-hq/supermario-slerp-v3](https://huggingface.co/jan-hq/supermario-slerp-v3)
* [Endevor/InfinityRP-v1-7B](https://huggingface.co/Endevor/InfinityRP-v1-7B)
* [Nexusflow/Starling-LM-7B-beta](https://huggingface.co/Nexusflow/Starling-LM-7B-beta)
* [NeverSleep/Noromaid-7B-0.4-DPO](https://huggingface.co/NeverSleep/Noromaid-7B-0.4-DPO)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Nexusflow/Starling-LM-7B-beta # Reasoning | OpenChat
parameters:
weight: 0.6
density: 0.7
- model: jan-hq/supermario-slerp-v3 # Reasoning | ChatML
parameters:
weight: 0.3
density: 0.5
- model: Endevor/InfinityRP-v1-7B # Roleplay | Alpaca
parameters:
weight: 0.3
density: 0.5
- model: NeverSleep/Noromaid-7B-0.4-DPO # Roleplay | ChatML
parameters:
weight: 0.2
density: 0.4
merge_method: rescaled_sample
base_model: mistralai/Mistral-7B-v0.1
parameters:
int8_mask: true
dtype: bfloat16
```
|
beccacohen/distilbert-base-uncased-finetuned-imdb
|
beccacohen
| 2024-03-25T00:03:12Z | 106 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-03-19T02:31:58Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1220
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.4306 | 1.0 | 157 | 3.2104 |
| 3.2857 | 2.0 | 314 | 3.1220 |
| 3.2307 | 3.0 | 471 | 3.1649 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Roombreak/git-base-pokemon
|
Roombreak
| 2024-03-24T23:58:29Z | 62 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"git",
"image-text-to-text",
"generated_from_trainer",
"base_model:microsoft/git-base",
"base_model:finetune:microsoft/git-base",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2024-03-24T23:20:43Z |
---
license: mit
base_model: microsoft/git-base
tags:
- generated_from_trainer
model-index:
- name: git-base-pokemon
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# git-base-pokemon
This model is a fine-tuned version of [microsoft/git-base](https://huggingface.co/microsoft/git-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 6.5368
- Wer Score: 1.1538
## Model description
More information needed
## Intended uses & limitations
More information needed
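Usage code is not provided; a minimal, hedged sketch (assuming the checkpoint follows the standard GIT captioning interface of its base model, and using a placeholder image path) could be:
```python
from transformers import AutoProcessor, AutoModelForCausalLM
from PIL import Image

processor = AutoProcessor.from_pretrained("Roombreak/git-base-pokemon")
model = AutoModelForCausalLM.from_pretrained("Roombreak/git-base-pokemon")

# "pokemon.png" is a placeholder path; use any image you want to caption.
image = Image.open("pokemon.png").convert("RGB")
pixel_values = processor(images=image, return_tensors="pt").pixel_values

generated_ids = model.generate(pixel_values=pixel_values, max_length=50)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```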
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Score |
|:-------------:|:-----:|:----:|:---------------:|:---------:|
| 3.8114 | 50.0 | 50 | 6.5368 | 1.1538 |
### Framework versions
- Transformers 4.39.1
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
blockblockblock/Dolphin-2.8-slerp-bpw4.4
|
blockblockblock
| 2024-03-24T23:54:03Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"chatml",
"text-generation-inference",
"slerp",
"mergekit",
"merge",
"en",
"base_model:cognitivecomputations/dolphin-2.8-experiment26-7b",
"base_model:merge:cognitivecomputations/dolphin-2.8-experiment26-7b",
"base_model:yam-peleg/Experiment26-7B",
"base_model:merge:yam-peleg/Experiment26-7B",
"license:apache-2.0",
"exl2",
"region:us"
] |
text-generation
| 2024-03-24T23:52:34Z |
---
tags:
- text-generation
- autotrain_compatible
- endpoints_compatible
- chatml
- text-generation-inference
- transformers
- slerp
- mistral
- mergekit
- merge
base_model:
- yam-peleg/Experiment26-7B
- cognitivecomputations/dolphin-2.8-experiment26-7b
library_name: transformers
license: apache-2.0
language:
- en
pipeline_tag: text-generation
thumbnail: "https://huggingface.co/pabloce/Dolphin-2.8-slerp/resolve/main/Dolphin-28-slerp.jpeg"
---
# Dolphin-2.8-slerp - merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [yam-peleg/Experiment26-7B](https://huggingface.co/yam-peleg/Experiment26-7B)
* [cognitivecomputations/dolphin-2.8-experiment26-7b](https://huggingface.co/cognitivecomputations/dolphin-2.8-experiment26-7b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: yam-peleg/Experiment26-7B
layer_range: [0, 32]
- model: cognitivecomputations/dolphin-2.8-experiment26-7b
layer_range: [0, 32]
merge_method: slerp
base_model: yam-peleg/Experiment26-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
|
ZySec-AI/ZySec-7B-GGUF
|
ZySec-AI
| 2024-03-24T23:50:07Z | 86 | 1 | null |
[
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-03-24T23:23:58Z |
---
license: apache-2.0
---
## Ensure the settings below
1. Set the system prompt: "You are ZySec, an AI Assistant specialisted in CyberSecurity."
2. Select the Zephyr preset

|
Gabe-Thomp/ju-path-to-save-model
|
Gabe-Thomp
| 2024-03-24T23:48:53Z | 0 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"text-to-image",
"dreambooth",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-03-24T20:22:14Z |
---
license: creativeml-openrail-m
library_name: diffusers
tags:
- text-to-image
- dreambooth
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- dreambooth
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
base_model: CompVis/stable-diffusion-v1-4
inference: true
instance_prompt: a photo of a human name ju
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# DreamBooth - Gabe-Thomp/ju-path-to-save-model
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of a human name ju using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
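# The TODO above is left open by the card author. Below is a minimal, hedged
# sketch (not the author's code): it assumes the repo loads as a standard
# StableDiffusionPipeline checkpoint, as DreamBooth training normally produces.
from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained(
    "Gabe-Thomp/ju-path-to-save-model", torch_dtype=torch.float16
).to("cuda")

# "a photo of a human name ju" is the instance prompt documented above.
image = pipe("a photo of a human name ju", num_inference_steps=50).images[0]
image.save("ju.png")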
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
blockblockblock/Dolphin-2.8-slerp-bpw4.2
|
blockblockblock
| 2024-03-24T23:41:51Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"chatml",
"text-generation-inference",
"slerp",
"mergekit",
"merge",
"en",
"base_model:cognitivecomputations/dolphin-2.8-experiment26-7b",
"base_model:merge:cognitivecomputations/dolphin-2.8-experiment26-7b",
"base_model:yam-peleg/Experiment26-7B",
"base_model:merge:yam-peleg/Experiment26-7B",
"license:apache-2.0",
"exl2",
"region:us"
] |
text-generation
| 2024-03-24T23:40:27Z |
---
tags:
- text-generation
- autotrain_compatible
- endpoints_compatible
- chatml
- text-generation-inference
- transformers
- slerp
- mistral
- mergekit
- merge
base_model:
- yam-peleg/Experiment26-7B
- cognitivecomputations/dolphin-2.8-experiment26-7b
library_name: transformers
license: apache-2.0
language:
- en
pipeline_tag: text-generation
thumbnail: "https://huggingface.co/pabloce/Dolphin-2.8-slerp/resolve/main/Dolphin-28-slerp.jpeg"
---
# Dolphin-2.8-slerp - merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [yam-peleg/Experiment26-7B](https://huggingface.co/yam-peleg/Experiment26-7B)
* [cognitivecomputations/dolphin-2.8-experiment26-7b](https://huggingface.co/cognitivecomputations/dolphin-2.8-experiment26-7b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: yam-peleg/Experiment26-7B
layer_range: [0, 32]
- model: cognitivecomputations/dolphin-2.8-experiment26-7b
layer_range: [0, 32]
merge_method: slerp
base_model: yam-peleg/Experiment26-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
|
emayaml/vit-transferlearningCV
|
emayaml
| 2024-03-24T23:39:46Z | 221 | 1 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"compute-vision",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-03-19T22:59:00Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- compute-vision
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-transferlearningCV
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-transferlearningCV
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the datasetX dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0159
- Accuracy: 0.9925
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1433 | 3.85 | 500 | 0.0159 | 0.9925 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
e22vvb/ALL_mt5-base_15_wikiSQL_no_sch
|
e22vvb
| 2024-03-24T23:32:16Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/mt5-base",
"base_model:finetune:google/mt5-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-03-23T16:18:46Z |
---
license: apache-2.0
base_model: google/mt5-base
tags:
- generated_from_trainer
model-index:
- name: ALL_mt5-base_15_wikiSQL_no_sch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ALL_mt5-base_15_wikiSQL_no_sch
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1014
- Rouge2 Precision: 0.774
- Rouge2 Recall: 0.7029
- Rouge2 Fmeasure: 0.731
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 15
- eval_batch_size: 15
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:------:|:---------------:|:----------------:|:-------------:|:---------------:|
| 0.1507 | 1.0 | 8637 | 0.1284 | 0.7172 | 0.6444 | 0.673 |
| 0.1214 | 2.0 | 17274 | 0.1125 | 0.7391 | 0.6666 | 0.6954 |
| 0.1049 | 3.0 | 25911 | 0.1070 | 0.7514 | 0.6775 | 0.7069 |
| 0.0951 | 4.0 | 34548 | 0.1035 | 0.7558 | 0.6832 | 0.712 |
| 0.0893 | 5.0 | 43185 | 0.1019 | 0.7627 | 0.6903 | 0.7189 |
| 0.0854 | 6.0 | 51822 | 0.1010 | 0.766 | 0.6933 | 0.7222 |
| 0.0794 | 7.0 | 60459 | 0.1001 | 0.7672 | 0.6951 | 0.7237 |
| 0.0719 | 8.0 | 69096 | 0.0999 | 0.7703 | 0.698 | 0.7267 |
| 0.0713 | 9.0 | 77733 | 0.1002 | 0.77 | 0.6983 | 0.7268 |
| 0.067 | 10.0 | 86370 | 0.1004 | 0.7726 | 0.7006 | 0.7291 |
| 0.0649 | 11.0 | 95007 | 0.1005 | 0.773 | 0.7017 | 0.7299 |
| 0.0636 | 12.0 | 103644 | 0.1009 | 0.7733 | 0.7018 | 0.7301 |
| 0.0614 | 13.0 | 112281 | 0.1009 | 0.7735 | 0.7021 | 0.7303 |
| 0.0608 | 14.0 | 120918 | 0.1012 | 0.7737 | 0.7028 | 0.7308 |
| 0.06 | 15.0 | 129555 | 0.1014 | 0.774 | 0.7029 | 0.731 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.1
|
furrutiav/beto_edu_task_iad_nllf_plus_ef_it_37
|
furrutiav
| 2024-03-24T23:25:20Z | 104 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2024-03-24T22:48:29Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
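Until the author fills this in, a minimal, hedged sketch (assuming the checkpoint is used for feature extraction with a plain BERT encoder, as the tags suggest) might be:
```python
from transformers import AutoTokenizer, AutoModel
import torch

model_id = "furrutiav/beto_edu_task_iad_nllf_plus_ef_it_37"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

# The sentence below is only an illustration, not from the model card.
inputs = tokenizer("Texto de ejemplo para extraer características.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Use the [CLS] token's hidden state as a sentence-level embedding.
embedding = outputs.last_hidden_state[:, 0, :]
print(embedding.shape)
```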
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
emayaml/transferlearningNLP-textcomparation
|
emayaml
| 2024-03-24T23:20:10Z | 106 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-24T22:50:25Z |
---
license: apache-2.0
base_model: distilroberta-base
tags:
- text-classification
- generated_from_trainer
metrics:
- accuracy
- f1
widget:
- text: "Yucaipa owned Dominick's before selling the chain to Safeway in 1998 for $ 2.5 billion. Yucaipa bought Dominick's in 1995 for $ 693 million and sold it to Safeway for $ 1.8 billion in 1998."
example_title: Not Equivalent
- text: "Revenue in the first quarter of the year dropped 15 percent from the same period a year earlier. With the scandal hanging over Stewart's company revenue the first quarter of the year dropped 15 percent from the same period a year earlier."
example_title: Equivalent
model-index:
- name: distilroberta-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the datasetX dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5524
- Accuracy: 0.8505
- F1: 0.8943
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5182 | 1.09 | 500 | 0.5524 | 0.8505 | 0.8943 |
| 0.3291 | 2.18 | 1000 | 0.7097 | 0.8407 | 0.8845 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Yuma42/KangalKhan-PressurizedRuby-7B
|
Yuma42
| 2024-03-24T23:12:07Z | 49 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Yuma42/KangalKhan-RawRuby-7B",
"Yuma42/KangalKhan-Ruby-7B-Fixed",
"conversational",
"en",
"base_model:Yuma42/KangalKhan-RawRuby-7B",
"base_model:merge:Yuma42/KangalKhan-RawRuby-7B",
"base_model:Yuma42/KangalKhan-Ruby-7B-Fixed",
"base_model:merge:Yuma42/KangalKhan-Ruby-7B-Fixed",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-24T22:57:30Z |
---
tags:
- merge
- mergekit
- lazymergekit
- Yuma42/KangalKhan-RawRuby-7B
- Yuma42/KangalKhan-Ruby-7B-Fixed
base_model:
- Yuma42/KangalKhan-RawRuby-7B
- Yuma42/KangalKhan-Ruby-7B-Fixed
license: apache-2.0
language:
- en
---
# KangalKhan-PressurizedRuby-7B
KangalKhan-PressurizedRuby-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Yuma42/KangalKhan-RawRuby-7B](https://huggingface.co/Yuma42/KangalKhan-RawRuby-7B)
* [Yuma42/KangalKhan-Ruby-7B-Fixed](https://huggingface.co/Yuma42/KangalKhan-Ruby-7B-Fixed)
## 🧩 Configuration
```yaml
models:
- model: teknium/OpenHermes-2.5-Mistral-7B
# no parameters necessary for base model
- model: Yuma42/KangalKhan-RawRuby-7B
parameters:
density: 0.6
weight: 0.5
- model: Yuma42/KangalKhan-Ruby-7B-Fixed
parameters:
density: 0.6
weight: 0.5
merge_method: ties
base_model: teknium/OpenHermes-2.5-Mistral-7B
parameters:
normalize: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Yuma42/KangalKhan-PressurizedRuby-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
minindu-liya99/Taxi-v3
|
minindu-liya99
| 2024-03-24T23:06:19Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-03-24T23:06:17Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.74
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym

# `load_from_hub` is the helper defined in the Deep RL course notebook (not a library import).
model = load_from_hub(repo_id="minindu-liya99/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
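Continuing from the snippet above, a greedy rollout sketch (it assumes the pickled dict also exposes a `qtable` array, as in the Deep RL course template; that key name is not stated in this card):
```python
import numpy as np

# Greedy rollout with the loaded Q-table (assumes model["qtable"] exists).
state, info = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # pick the highest-value action
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"Episode return: {total_reward}")
```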
|
blockblockblock/Dolphin-2.8-slerp-bpw3.5
|
blockblockblock
| 2024-03-24T23:05:27Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"chatml",
"text-generation-inference",
"slerp",
"mergekit",
"merge",
"en",
"base_model:cognitivecomputations/dolphin-2.8-experiment26-7b",
"base_model:merge:cognitivecomputations/dolphin-2.8-experiment26-7b",
"base_model:yam-peleg/Experiment26-7B",
"base_model:merge:yam-peleg/Experiment26-7B",
"license:apache-2.0",
"exl2",
"region:us"
] |
text-generation
| 2024-03-24T23:04:02Z |
---
tags:
- text-generation
- autotrain_compatible
- endpoints_compatible
- chatml
- text-generation-inference
- transformers
- slerp
- mistral
- mergekit
- merge
base_model:
- yam-peleg/Experiment26-7B
- cognitivecomputations/dolphin-2.8-experiment26-7b
library_name: transformers
license: apache-2.0
language:
- en
pipeline_tag: text-generation
thumbnail: "https://huggingface.co/pabloce/Dolphin-2.8-slerp/resolve/main/Dolphin-28-slerp.jpeg"
---
# Dolphin-2.8-slerp - merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [yam-peleg/Experiment26-7B](https://huggingface.co/yam-peleg/Experiment26-7B)
* [cognitivecomputations/dolphin-2.8-experiment26-7b](https://huggingface.co/cognitivecomputations/dolphin-2.8-experiment26-7b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: yam-peleg/Experiment26-7B
layer_range: [0, 32]
- model: cognitivecomputations/dolphin-2.8-experiment26-7b
layer_range: [0, 32]
merge_method: slerp
base_model: yam-peleg/Experiment26-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
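The original card ends with the merge configuration. A hedged usage sketch with the standard 🤗 `transformers` pipeline is shown below; note that this particular repository is tagged `exl2` (a 3.5-bpw quantization) and needs an ExLlamaV2-compatible loader, so the snippet targets the unquantized merge referenced by the card's thumbnail (`pabloce/Dolphin-2.8-slerp`); that repo id and the ChatML prompt formatting are assumptions based on the card's tags:
```python
# Minimal usage sketch (not part of the original card).
import torch
from transformers import AutoTokenizer, pipeline

model_id = "pabloce/Dolphin-2.8-slerp"  # assumed unquantized merge; the exl2 repo needs ExLlamaV2
tokenizer = AutoTokenizer.from_pretrained(model_id)
messages = [{"role": "user", "content": "Explain what a SLERP model merge is."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

generator = pipeline("text-generation", model=model_id, torch_dtype=torch.float16, device_map="auto")
outputs = generator(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(outputs[0]["generated_text"])
```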
|
sarak7/H4_325_29_v1
|
sarak7
| 2024-03-24T23:04:45Z | 194 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-24T23:03:16Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
thrunlab/Mistral_Sparse_refined_web_50p_cut_pre_mlp_cut_pre_attn_2024-03-24
|
thrunlab
| 2024-03-24T22:48:54Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"sparse_mistral",
"text-generation",
"generated_from_trainer",
"custom_code",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2024-03-24T14:08:47Z |
---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- generated_from_trainer
model-index:
- name: Mistral_Sparse_refined_web_50p_cut_pre_mlp_cut_pre_attn_2024-03-24
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral_Sparse_refined_web_50p_cut_pre_mlp_cut_pre_attn_2024-03-24
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1460
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (the batch-size totals are derived just after the list):
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 0
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 5000
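The batch-size totals reported above follow directly from the per-device sizes, the device count, and gradient accumulation; a quick check:
```python
# Derivation of the totals listed above.
per_device_train_batch = 1
num_devices = 4
grad_accum_steps = 8
total_train_batch_size = per_device_train_batch * num_devices * grad_accum_steps  # 32
total_eval_batch_size = 1 * num_devices  # 4
```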
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4669 | 0.01 | 25 | 2.6676 |
| 2.3645 | 0.02 | 50 | 2.6007 |
| 2.3355 | 0.02 | 75 | 2.5715 |
| 2.3828 | 0.03 | 100 | 2.5535 |
| 2.3401 | 0.04 | 125 | 2.5292 |
| 2.3527 | 0.05 | 150 | 2.5217 |
| 2.3829 | 0.06 | 175 | 2.4998 |
| 2.2761 | 0.07 | 200 | 2.4850 |
| 2.4218 | 0.07 | 225 | 2.4936 |
| 2.2971 | 0.08 | 250 | 2.4925 |
| 2.3207 | 0.09 | 275 | 2.4817 |
| 2.2992 | 0.1 | 300 | 2.4915 |
| 2.3897 | 0.11 | 325 | 2.4921 |
| 2.3127 | 0.12 | 350 | 2.4669 |
| 2.2856 | 0.12 | 375 | 2.4739 |
| 2.312 | 0.13 | 400 | 2.4699 |
| 2.2876 | 0.14 | 425 | 2.4651 |
| 2.2378 | 0.15 | 450 | 2.4591 |
| 2.2899 | 0.16 | 475 | 2.4741 |
| 2.3141 | 0.16 | 500 | 2.4618 |
| 2.2603 | 0.17 | 525 | 2.4650 |
| 2.2613 | 0.18 | 550 | 2.4635 |
| 2.3039 | 0.19 | 575 | 2.4709 |
| 2.3 | 0.2 | 600 | 2.4532 |
| 2.2806 | 0.21 | 625 | 2.4611 |
| 2.3565 | 0.21 | 650 | 2.4614 |
| 2.2878 | 0.22 | 675 | 2.4600 |
| 2.2105 | 0.23 | 700 | 2.4468 |
| 2.3047 | 0.24 | 725 | 2.4557 |
| 2.2744 | 0.25 | 750 | 2.4510 |
| 2.327 | 0.26 | 775 | 2.4459 |
| 2.3467 | 0.26 | 800 | 2.4419 |
| 2.3345 | 0.27 | 825 | 2.4455 |
| 2.227 | 0.28 | 850 | 2.4440 |
| 2.3044 | 0.29 | 875 | 2.4434 |
| 2.3411 | 0.3 | 900 | 2.4396 |
| 2.2335 | 0.3 | 925 | 2.4417 |
| 2.3237 | 0.31 | 950 | 2.4432 |
| 2.2669 | 0.32 | 975 | 2.4429 |
| 2.2561 | 0.33 | 1000 | 2.4428 |
| 2.2862 | 0.34 | 1025 | 2.4387 |
| 2.1977 | 0.35 | 1050 | 2.4380 |
| 2.2541 | 0.35 | 1075 | 2.4484 |
| 2.3078 | 0.36 | 1100 | 2.4425 |
| 2.2566 | 0.37 | 1125 | 2.4418 |
| 2.3104 | 0.38 | 1150 | 2.4454 |
| 2.296 | 0.39 | 1175 | 2.4415 |
| 2.2365 | 0.39 | 1200 | 2.4390 |
| 2.2823 | 0.4 | 1225 | 2.4484 |
| 2.3187 | 0.41 | 1250 | 2.4303 |
| 2.2503 | 0.42 | 1275 | 2.4351 |
| 2.236 | 0.43 | 1300 | 2.4436 |
| 2.2241 | 0.44 | 1325 | 2.4393 |
| 2.27 | 0.44 | 1350 | 2.4415 |
| 2.1327 | 0.45 | 1375 | 2.4449 |
| 2.2509 | 0.46 | 1400 | 2.4427 |
| 2.3235 | 0.47 | 1425 | 2.4279 |
| 2.2916 | 0.48 | 1450 | 2.4534 |
| 2.3007 | 0.49 | 1475 | 2.4388 |
| 2.2441 | 0.49 | 1500 | 2.4388 |
| 2.2449 | 0.5 | 1525 | 2.4383 |
| 2.2297 | 0.51 | 1550 | 2.4355 |
| 2.2189 | 0.52 | 1575 | 2.4314 |
| 2.2334 | 0.53 | 1600 | 2.4335 |
| 2.3038 | 0.53 | 1625 | 2.4378 |
| 2.281 | 0.54 | 1650 | 2.4230 |
| 2.3771 | 0.55 | 1675 | 2.4358 |
| 2.2954 | 0.56 | 1700 | 2.4272 |
| 2.3176 | 0.57 | 1725 | 2.4333 |
| 2.2551 | 0.58 | 1750 | 2.4320 |
| 2.2292 | 0.58 | 1775 | 2.4288 |
| 2.2678 | 0.59 | 1800 | 2.4316 |
| 2.2064 | 0.6 | 1825 | 2.4344 |
| 2.285 | 0.61 | 1850 | 2.4272 |
| 2.264 | 0.62 | 1875 | 2.4307 |
| 2.1799 | 0.63 | 1900 | 2.4237 |
| 2.2148 | 0.63 | 1925 | 2.4274 |
| 2.2222 | 0.64 | 1950 | 2.4223 |
| 2.2573 | 0.65 | 1975 | 2.4314 |
| 2.2688 | 0.66 | 2000 | 2.4256 |
| 2.1979 | 0.67 | 2025 | 2.4247 |
| 2.3255 | 0.67 | 2050 | 2.4345 |
| 2.3069 | 0.68 | 2075 | 2.4306 |
| 2.2678 | 0.69 | 2100 | 2.4222 |
| 2.2425 | 0.7 | 2125 | 2.4224 |
| 2.2997 | 0.71 | 2150 | 2.4245 |
| 2.255 | 0.72 | 2175 | 2.4259 |
| 2.3064 | 0.72 | 2200 | 2.4281 |
| 2.2634 | 0.73 | 2225 | 2.4202 |
| 2.2347 | 0.74 | 2250 | 2.4299 |
| 2.2811 | 0.75 | 2275 | 2.4240 |
| 2.309 | 0.76 | 2300 | 2.4264 |
| 2.2937 | 0.77 | 2325 | 2.4218 |
| 2.244 | 0.77 | 2350 | 2.4227 |
| 2.2088 | 0.78 | 2375 | 2.4216 |
| 2.2219 | 0.79 | 2400 | 2.4215 |
| 2.2195 | 0.8 | 2425 | 2.4149 |
| 2.3011 | 0.81 | 2450 | 2.4246 |
| 2.2774 | 0.81 | 2475 | 2.4246 |
| 2.1974 | 0.82 | 2500 | 2.4247 |
| 2.3793 | 0.83 | 2525 | 2.4267 |
| 2.3 | 0.84 | 2550 | 2.4219 |
| 2.2795 | 0.85 | 2575 | 2.4232 |
| 2.2487 | 0.86 | 2600 | 2.4230 |
| 2.3045 | 0.86 | 2625 | 2.4235 |
| 2.2968 | 0.87 | 2650 | 2.4285 |
| 2.2446 | 0.88 | 2675 | 2.4235 |
| 2.3246 | 0.89 | 2700 | 2.4223 |
| 2.3012 | 0.9 | 2725 | 2.4228 |
| 2.2852 | 0.91 | 2750 | 2.4247 |
| 2.2467 | 0.91 | 2775 | 2.4261 |
| 2.2133 | 0.92 | 2800 | 2.4202 |
| 2.1203 | 0.93 | 2825 | 2.4171 |
| 2.231 | 0.94 | 2850 | 2.4264 |
| 2.2386 | 0.95 | 2875 | 2.4249 |
| 2.2277 | 0.95 | 2900 | 2.4227 |
| 2.2708 | 0.96 | 2925 | 2.4327 |
| 2.3401 | 0.97 | 2950 | 2.4205 |
| 2.2068 | 0.98 | 2975 | 2.4287 |
| 2.3009 | 0.99 | 3000 | 2.4215 |
| 2.2744 | 1.0 | 3025 | 2.4289 |
| 2.1902 | 1.0 | 3050 | 2.4171 |
| 2.2535 | 1.01 | 3075 | 2.4273 |
| 2.3347 | 1.02 | 3100 | 2.4219 |
| 2.2299 | 1.03 | 3125 | 2.4338 |
| 2.2649 | 1.04 | 3150 | 2.4224 |
| 2.2959 | 1.04 | 3175 | 2.4262 |
| 2.3125 | 1.05 | 3200 | 2.4176 |
| 2.29 | 1.06 | 3225 | 2.4178 |
| 2.2887 | 1.07 | 3250 | 2.4214 |
| 2.2716 | 1.08 | 3275 | 2.4224 |
| 2.2285 | 1.09 | 3300 | 2.4155 |
| 2.2141 | 1.09 | 3325 | 2.4250 |
| 2.2393 | 1.1 | 3350 | 2.4221 |
| 2.2457 | 1.11 | 3375 | 2.4213 |
| 2.2702 | 1.12 | 3400 | 2.4153 |
| 2.244 | 1.13 | 3425 | 2.4178 |
| 2.2556 | 1.14 | 3450 | 2.4241 |
| 2.2327 | 1.14 | 3475 | 2.4258 |
| 2.2078 | 1.15 | 3500 | 2.4216 |
| 2.2766 | 1.16 | 3525 | 2.4258 |
| 2.2011 | 1.17 | 3550 | 2.4166 |
| 2.2338 | 1.18 | 3575 | 2.4213 |
| 2.2521 | 1.18 | 3600 | 2.4222 |
| 2.1923 | 1.19 | 3625 | 2.4221 |
| 2.1908 | 1.2 | 3650 | 2.4229 |
| 2.2142 | 1.21 | 3675 | 2.4215 |
| 2.3107 | 1.22 | 3700 | 2.4185 |
| 2.2513 | 1.23 | 3725 | 2.4188 |
| 2.1988 | 1.23 | 3750 | 2.4244 |
| 2.3081 | 1.24 | 3775 | 2.4214 |
| 2.2984 | 1.25 | 3800 | 2.4215 |
| 2.2241 | 1.26 | 3825 | 2.4165 |
| 2.2694 | 1.27 | 3850 | 2.4204 |
| 2.268 | 1.28 | 3875 | 2.4217 |
| 2.2311 | 1.28 | 3900 | 2.4223 |
| 2.2723 | 1.29 | 3925 | 2.4181 |
| 2.25 | 1.3 | 3950 | 2.4171 |
| 2.338 | 1.31 | 3975 | 2.4090 |
| 2.2806 | 1.32 | 4000 | 2.4174 |
| 2.1563 | 1.32 | 4025 | 2.4264 |
| 2.2137 | 1.33 | 4050 | 2.4270 |
| 2.2339 | 1.34 | 4075 | 2.4179 |
| 2.2593 | 1.35 | 4100 | 2.4187 |
| 2.2901 | 1.36 | 4125 | 2.4308 |
| 2.3096 | 1.37 | 4150 | 2.4230 |
| 2.3275 | 1.37 | 4175 | 2.4239 |
| 2.2729 | 1.38 | 4200 | 2.4238 |
| 2.3258 | 1.39 | 4225 | 2.4158 |
| 2.2342 | 1.4 | 4250 | 2.4250 |
| 2.2772 | 1.41 | 4275 | 2.4310 |
| 2.2495 | 1.42 | 4300 | 2.4178 |
| 2.2578 | 1.42 | 4325 | 2.4200 |
| 2.245 | 1.43 | 4350 | 2.4237 |
| 2.2206 | 1.44 | 4375 | 2.4288 |
| 2.1952 | 1.45 | 4400 | 2.4232 |
| 2.1864 | 1.46 | 4425 | 2.4265 |
| 2.221 | 1.46 | 4450 | 2.4237 |
| 2.2828 | 1.47 | 4475 | 2.4329 |
| 2.2533 | 1.48 | 4500 | 2.4143 |
| 2.2831 | 1.49 | 4525 | 2.4368 |
| 2.2538 | 1.5 | 4550 | 2.4305 |
| 2.2023 | 1.51 | 4575 | 2.4267 |
| 2.2467 | 1.51 | 4600 | 2.4217 |
| 2.2291 | 1.52 | 4625 | 2.4330 |
| 2.2284 | 1.53 | 4650 | 2.4244 |
| 2.2123 | 1.54 | 4675 | 2.4322 |
| 2.3115 | 1.55 | 4700 | 2.4216 |
| 2.2696 | 1.56 | 4725 | 2.4232 |
| 2.2189 | 1.56 | 4750 | 2.4234 |
| 2.2323 | 1.57 | 4775 | 2.4265 |
| 2.279 | 1.58 | 4800 | 2.4213 |
| 2.2401 | 1.59 | 4825 | 2.4227 |
| 2.2346 | 1.6 | 4850 | 2.4237 |
| 2.1738 | 1.6 | 4875 | 2.4226 |
| 2.2086 | 1.61 | 4900 | 2.4137 |
| 2.2422 | 1.62 | 4925 | 2.4225 |
| 2.2479 | 1.63 | 4950 | 2.4220 |
| 2.2511 | 1.64 | 4975 | 2.4221 |
| 2.2086 | 1.65 | 5000 | 2.4272 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
reciprocate/mistral-7b-gsm8k-code-rm
|
reciprocate
| 2024-03-24T22:44:41Z | 36 | 3 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-classification",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-24T22:00:02Z |
---
library_name: transformers
tags: []
---
This is a Mistral-7B Reward Model trained on [reciprocate/tinygsm_dpo](https://huggingface.co/datasets/reciprocate/tinygsm_dpo)
```python
from transformers import pipeline
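# Load the reward model as a text-classification pipeline; function_to_apply="none"
# returns the raw scalar score instead of a softmax/sigmoid probability.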
reward_fn = pipeline(
"text-classification",
model="reciprocate/mistral-7b-gsm8k-code-rm",
truncation=True,
max_length=4096,
function_to_apply="none"
)
prompt = """\
Consider the following grade-school math problem: Megan has read 32 books this year. Kelcie has read 1/4 the amount of books that Megan has read. Greg has read 9 more than twice the number of books that Kelcie has read. How many books total have Megan, Kelcie, and Greg read?
Solve this problem using code.
- Give the complete solution to solve the problem written in Python.
- The program should contain multiple lines of code and end with 'result = XXX'.
- Use markdown to format your response starting with '```python' and ending with '```'.
"""
output = """\
Let's solve this problem using Python code.
```python
books_megan = 32
books_kelcie = books_megan / 4
books_kelcie = int(books_kelcie)
books_greg = 2 * books_kelcie + 9
total_books = books_megan + books_kelcie + books_greg
result = total_books```
"""
chats = [[
{"role": "user", "content": prompt},
{"role": "assistant", "content": output}
]]
inputs = [reward_fn.tokenizer.apply_chat_template(chat, tokenize=False) for chat in chats]
output = reward_fn(inputs)
scores = [x["score"] for x in output]
print(scores)
```
|