The dataset preview reports the following columns (types and value ranges as shown in the viewer):

| Column | Type | Range / values |
|---|---|---|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-07 18:30:29 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 544 classes |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-07 18:30:28 |
| card | string | length 11 to 1.01M |

Each row below gives these fields in order, pipe-separated, followed by the full model card text.
GouldJayden/ppo-LunarLander-v2 | GouldJayden | 2023-11-26T14:13:21Z | 0 | 0 | stable-baselines3 | ["stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2023-11-26T14:13:08Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 263.41 +/- 23.75
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
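The block above is the template's placeholder. A minimal loading sketch could look like the following; the checkpoint filename is an assumption based on the usual huggingface_sb3 upload convention and should be checked against the repository's files.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it with SB3.
# The filename is an assumption; check the repo's Files tab for the actual name.
checkpoint = load_from_hub(repo_id="GouldJayden/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```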
|
dnnagy/RestoreFormerPlusPlus | dnnagy | 2023-11-26T14:09:57Z | 0 | 1 | null | ["onnx", "license:mit", "region:us"] | null | 2023-11-26T13:59:44Z |
---
license: mit
---
Pretrained RestoreFormer++ model downloaded from https://github.com/wzhouxiff/RestoreFormerPlusPlus/releases/download/v1.0.0/RestoreFormer++.ckpt
and converted from ckpt to ONNX.
SHA256: d589614d059c0fdc43083690c7eeb67b229c0452abf78e084e12c03214fda8bd
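A minimal sketch of running the converted model with onnxruntime; the file name, input layout, and normalization are assumptions to verify against the repository files and the original RestoreFormer++ code.
```python
import numpy as np
import onnxruntime as ort

# File name is an assumption; check the repository's Files tab for the actual .onnx file.
session = ort.InferenceSession("RestoreFormerPlusPlus.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

# RestoreFormer++ is a face-restoration model; a 1x3x512x512 float input is assumed here.
dummy_face = np.random.rand(1, 3, 512, 512).astype(np.float32)
restored = session.run(None, {input_name: dummy_face})[0]
print(restored.shape)
```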
|
cuadron11/suicide-distilbert-original-5-5 | cuadron11 | 2023-11-26T14:07:32Z | 5 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-11-26T13:55:44Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
model-index:
- name: suicide-distilbert-original-5-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# suicide-distilbert-original-5-5
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3097
- Accuracy: {'accuracy': 0.33}
- Precision: 0.2925
- Recall: 0.33
- Fscore: 0.3091
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
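These settings roughly correspond to the following `TrainingArguments` sketch (standard transformers API; the output directory and the dataset/Trainer wiring are assumptions, and Adam with the listed betas/epsilon is the default optimizer):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="suicide-distilbert-original-5-5",  # assumed; not stated in the card
    learning_rate=4.5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=20,
)
```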
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | Fscore |
|:-------------:|:-----:|:----:|:---------------:|:------------------:|:---------:|:------:|:------:|
| No log | 1.0 | 10 | 1.6028 | {'accuracy': 0.31} | 0.0961 | 0.31 | 0.1467 |
| No log | 2.0 | 20 | 1.5911 | {'accuracy': 0.31} | 0.0961 | 0.31 | 0.1467 |
| No log | 3.0 | 30 | 1.5529 | {'accuracy': 0.31} | 0.0971 | 0.31 | 0.1478 |
| No log | 4.0 | 40 | 1.5191 | {'accuracy': 0.32} | 0.1532 | 0.32 | 0.2072 |
| No log | 5.0 | 50 | 1.4784 | {'accuracy': 0.35} | 0.1829 | 0.35 | 0.2343 |
| No log | 6.0 | 60 | 1.4951 | {'accuracy': 0.39} | 0.3290 | 0.39 | 0.2927 |
| No log | 7.0 | 70 | 1.5486 | {'accuracy': 0.35} | 0.2532 | 0.35 | 0.2676 |
| No log | 8.0 | 80 | 1.5740 | {'accuracy': 0.38} | 0.3537 | 0.38 | 0.3242 |
| No log | 9.0 | 90 | 1.6350 | {'accuracy': 0.37} | 0.3219 | 0.37 | 0.3439 |
| No log | 10.0 | 100 | 1.7061 | {'accuracy': 0.38} | 0.3253 | 0.38 | 0.3401 |
| No log | 11.0 | 110 | 1.7335 | {'accuracy': 0.37} | 0.3214 | 0.37 | 0.3430 |
| No log | 12.0 | 120 | 1.8224 | {'accuracy': 0.39} | 0.3332 | 0.39 | 0.3558 |
| No log | 13.0 | 130 | 1.8892 | {'accuracy': 0.37} | 0.3207 | 0.37 | 0.3414 |
| No log | 14.0 | 140 | 2.0359 | {'accuracy': 0.34} | 0.3092 | 0.34 | 0.3232 |
| No log | 15.0 | 150 | 2.0748 | {'accuracy': 0.36} | 0.4394 | 0.36 | 0.3411 |
| No log | 16.0 | 160 | 2.2032 | {'accuracy': 0.37} | 0.3182 | 0.37 | 0.3367 |
| No log | 17.0 | 170 | 2.2481 | {'accuracy': 0.34} | 0.2971 | 0.34 | 0.3141 |
| No log | 18.0 | 180 | 2.2278 | {'accuracy': 0.34} | 0.2957 | 0.34 | 0.3157 |
| No log | 19.0 | 190 | 2.2986 | {'accuracy': 0.32} | 0.2828 | 0.32 | 0.2981 |
| No log | 20.0 | 200 | 2.3097 | {'accuracy': 0.33} | 0.2925 | 0.33 | 0.3091 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
blanchon/q-Taxi-v3-v1 | blanchon | 2023-11-26T14:06:53Z | 0 | 0 | null | ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us"] | reinforcement-learning | 2023-11-26T14:06:51Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.76
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="blanchon/q-Taxi-v3-v1", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
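The snippet above assumes `gym` is imported and that a `load_from_hub` helper is available. A minimal sketch of such a helper, assuming the Q-table dict was uploaded as a pickle file (as in the deep RL course template):
```python
import pickle

import gymnasium as gym  # the snippet above calls gym.make(...)
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download a pickled Q-learning model dict from the Hugging Face Hub."""
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(local_path, "rb") as f:
        return pickle.load(f)
```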
|
cuadron11/suicide-distilbert-extended-5-5 | cuadron11 | 2023-11-26T13:54:38Z | 5 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-11-26T13:26:47Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
model-index:
- name: suicide-distilbert-extended-5-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# suicide-distilbert-extended-5-5
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9133
- Accuracy: {'accuracy': 0.3873050026896181}
- Precision: 0.3977
- Recall: 0.3873
- Fscore: 0.3793
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | Fscore |
|:-------------:|:-----:|:----:|:---------------:|:---------------------------------:|:---------:|:------:|:------:|
| No log | 1.0 | 349 | 1.4635 | {'accuracy': 0.3442711135018827} | 0.3793 | 0.3443 | 0.3050 |
| 1.4502 | 2.0 | 698 | 1.4143 | {'accuracy': 0.3862291554599247} | 0.3954 | 0.3862 | 0.3454 |
| 1.1938 | 3.0 | 1047 | 1.4696 | {'accuracy': 0.399677245831092} | 0.4354 | 0.3997 | 0.3775 |
| 1.1938 | 4.0 | 1396 | 1.7198 | {'accuracy': 0.39483593329747174} | 0.4181 | 0.3948 | 0.3799 |
| 0.714 | 5.0 | 1745 | 1.9133 | {'accuracy': 0.3873050026896181} | 0.3977 | 0.3873 | 0.3793 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
blanchon/q-FrozenLake-v1-4x4-noSlippery | blanchon | 2023-11-26T13:54:26Z | 0 | 0 | null | ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us"] | reinforcement-learning | 2023-11-26T13:54:24Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="blanchon/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
sravaniayyagari/fine-tuned-llama2-7b | sravaniayyagari | 2023-11-26T13:54:24Z | 0 | 0 | peft | ["peft", "pytorch", "region:us"] | null | 2023-11-24T10:22:40Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
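These settings correspond roughly to the following `BitsAndBytesConfig` sketch (current transformers API; how it was wired into model loading is not shown in the card):
```python
import torch
from transformers import BitsAndBytesConfig

# Reconstructed from the settings listed above; pass as quantization_config=...
# to AutoModelForCausalLM.from_pretrained(...) when loading the base model.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```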
### Framework versions
- PEFT 0.4.0
|
Hanzalwi/bloom-1b-finetuned-aings-validation-data-2 | Hanzalwi | 2023-11-26T13:50:04Z | 1 | 0 | peft | ["peft", "tensorboard", "safetensors", "bloom", "arxiv:1910.09700", "base_model:bigscience/bloom-1b1", "base_model:adapter:bigscience/bloom-1b1", "region:us"] | null | 2023-11-26T09:26:35Z |
---
library_name: peft
base_model: bigscience/bloom-1b1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.3.dev0
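A minimal loading sketch with peft, assuming this repository contains a standard PEFT adapter for bigscience/bloom-1b1 as the card metadata indicates:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the adapter weights from this repository.
base_model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-1b1")
model = PeftModel.from_pretrained(base_model, "Hanzalwi/bloom-1b-finetuned-aings-validation-data-2")
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-1b1")
```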
|
Norod78/SDXL-simpstyle-Lora-v2 | Norod78 | 2023-11-26T13:47:32Z | 24 | 2 | diffusers | ["diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "the simpsons", "style", "cartoon", "simpsons", "sdxl style lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:other", "region:us"] | text-to-image | 2023-11-25T19:33:30Z |
---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=Rent&allowDerivatives=True&allowDifferentLicense=False
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- the simpsons
- style
- cartoon
- simpsons
- sdxl style lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: Simpstyle
widget:
- text: 'Rick Sanchez from the TV show "Rick and Morty" Simpstyle '
output:
url: >-
3920282.jpeg
- text: 'the girl with a pearl earring simpstyle '
output:
url: >-
3920310.jpeg
- text: 'A vibrant full body oil painting of She-Ra simpstyle, Very detailed, clean, high quality, sharp image '
output:
url: >-
3920319.jpeg
- text: 'A socially awkward potato Simpstyle '
output:
url: >-
3920312.jpeg
- text: 'Rick Sanchez from the TV show "Rick and Morty" Simpstyle '
output:
url: >-
3920311.jpeg
- text: 'Dr. who and the TARDIS Simpstyle '
output:
url: >-
3920314.jpeg
- text: 'Cyberman Simpstyle '
output:
url: >-
3920316.jpeg
- text: 'Wonderwoman Simpstyle '
output:
url: >-
3920318.jpeg
- text: 'Wonderwoman Simpstyle '
output:
url: >-
3920317.jpeg
---
# SDXL Simpsons Style
<Gallery />
([CivitAI](https://civitai.com/models/131753))
## Model description
Make everything look like it's on the Simpsons show. Use "simpstyle" in your prompt (I use it at the end of the prompt, before the template wordings start).

This is a newer (and hopefully better) version of [Norod78/SDXL-simpstyle-Lora](https://huggingface.co/Norod78/SDXL-simpstyle-Lora).
## Trigger words
You should use `Simpstyle` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Norod78/sdxl-simpsons-style/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Norod78/sdxl-simpsons-style', weight_name='SDXL-Simpstyle-Lora-v2-r16.safetensors')
image = pipeline('Wonderwoman Simpstyle ').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
Ginger1704/ppo-Pyramids | Ginger1704 | 2023-11-26T13:45:44Z | 0 | 0 | ml-agents | ["ml-agents", "tensorboard", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us"] | reinforcement-learning | 2023-11-26T13:45:43Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Ginger1704/ppo-Pyramids
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
rrw23/pets5 | rrw23 | 2023-11-26T13:43:27Z | 0 | 0 | diffusers | ["diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us"] | text-to-image | 2023-11-26T00:40:37Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - rrw23/pets5
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the pcuenq/oxford-pets dataset. You can find some example images below.




|
Yuta555/Llama-2-7b-MBTI-binary-clf-4th | Yuta555 | 2023-11-26T13:43:19Z | 2 | 0 | peft | ["peft", "safetensors", "llama", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "region:us"] | null | 2023-11-26T13:43:00Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.3.dev0
|
Yuta555/Llama-2-7b-MBTI-binary-clf-3rd | Yuta555 | 2023-11-26T13:39:38Z | 2 | 0 | peft | ["peft", "safetensors", "llama", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "region:us"] | null | 2023-11-26T13:39:27Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.3.dev0
|
athirdpath/CleverMommy-mix-20b-GGUF | athirdpath | 2023-11-26T13:33:58Z | 0 | 0 | null | ["license:cc-by-nc-4.0", "region:us"] | null | 2023-11-26T12:45:15Z |
---
license: cc-by-nc-4.0
---
GGUF quants from an extended part of my effort to create Eileithyia-20B. This model is made by following the recipe below, inverting it, then SLERPing the models back together at 0.5, hopefully fusing the models into one block for use with Harmonia.
```yaml
slices:
  - sources:
      - model: microsoft/Orca-2-13b
        layer_range: [0, 16]
  - sources:
      - model: athirdpath/Eileithyia-13B
        layer_range: [8, 24]
  - sources:
      - model: microsoft/Orca-2-13b
        layer_range: [17, 32]
  - sources:
      - model: athirdpath/Eileithyia-13B
        layer_range: [25, 40]
merge_method: passthrough
dtype: float16
```
Thanks to Undi95 for pioneering the recipe.
|
tensor-diffusion/melaura-merge | tensor-diffusion | 2023-11-26T13:19:40Z | 30 | 1 | diffusers | ["diffusers", "safetensors", "stable-diffusion", "text-to-image", "DiffusionPipeline", "license:openrail++", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us"] | text-to-image | 2023-11-23T13:13:00Z |
---
license: openrail++
pipeline_tag: text-to-image
tags:
- stable-diffusion
- text-to-image
- diffusers
- DiffusionPipeline
inference:
parameter:
width: 768
height: 768
negative_prompt: >-
lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit,
fewer digits, cropped, worst quality, low quality, normal quality, jpeg,
artifacts, signature, watermark, username, blurry, ugly, duplicate,
morbid, mutilated, extra fingers, mutated hands, poorly drawn hands,
poorly drawn face, mutation, deformed, blurry, bad anatomy, bad
widget:
- text: melaura, girl, hd, pink lips, detailed, age 16, Off-shoulder top
example_title: Off-shoulder top
- text: melaura, girl, hd, shiny cheeks
example_title: shiny cheeks
- text: melaura, girl,
example_title: Triger
library_name: diffusers
---
|
Elikue/q-Taxi-v3-rgb_array | Elikue | 2023-11-26T13:18:42Z | 0 | 0 | null | ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us"] | reinforcement-learning | 2023-11-26T13:14:07Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3-rgb_array
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Elikue/q-Taxi-v3-rgb_array", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Undi95/Leyley-13B-Lora | Undi95 | 2023-11-26T13:14:56Z | 8 | 4 | transformers | ["transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "bitsandbytes", "region:us"] | text-generation | 2023-11-26T05:54:33Z |
---
tags:
- generated_from_trainer
model-index:
- name: Leyley-13B-LoRA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/6gPGJuqNbLXk9mhrvkXo2.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
# Leyley-13B-lora
Training runs required to find a usable one (Oh brother...): [1](https://wandb.ai/undis95/leyleytest?workspace=user-undis95) - [2](https://wandb.ai/undis95/leyleytest2?workspace=user-undis95) - [3](https://wandb.ai/undis95/leyleytest2-noro?workspace=user-undis95) - [4](https://wandb.ai/undis95/leyleytest3-noro?workspace=user-undis95) - [5](https://wandb.ai/undis95/leyleytest4-noro?workspace=user-undis95)
This LoRA was trained on [Noromaid](https://huggingface.co/NeverSleep/Noromaid-13b-v0.1.1) from scratch using a [custom dataset](https://github.com/Undi95/somethingdata) of the game "The Coffin of Andy and Leyley".
It achieves the following results on the evaluation set:
- Loss: 1.1214
## Model description
LoRA of Andrew and Ashley from the game.
Only conversations between them are in the dataset; the AI replies as Ashley.
It was trained so that you speak as her brother, but this can be changed with a lower LoRA weight, a custom system prompt, or a custom character card.
## Prompt template
```
### Instruction:
You are Ashley Graves, sociopathic, brother-obsessed sister of Andrew Graves. In the following chat, you will talk with Andrew. Andrew called you Leyley as a child, and you called him Andy. Andrew does not like being called Andy.
Andrew: {prompt}
### Response:
Ashley:
### Input:
Andrew: {input}
```
Or
```
### Instruction:
You are Ashley Graves. In the following chat, you will talk with {{user}}.
{prompt}
### Response:
### Input:
{input}
```
## Recommended settings

Or

Also, you HAVE to deactivate this:

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.5e-07
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 10
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8362 | 0.03 | 1 | 1.7488 |
| 2.035 | 2.46 | 80 | 1.6462 |
| 1.5489 | 4.92 | 160 | 1.4901 |
| 1.4392 | 7.38 | 240 | 1.3567 |
| 1.2196 | 9.85 | 320 | 1.2475 |
| 1.3219 | 12.31 | 400 | 1.2089 |
| 1.2171 | 14.77 | 480 | 1.1870 |
| 1.1686 | 17.23 | 560 | 1.1730 |
| 1.1506 | 19.69 | 640 | 1.1615 |
| 1.1829 | 22.15 | 720 | 1.1513 |
| 1.267 | 24.62 | 800 | 1.1454 |
| 1.0857 | 27.08 | 880 | 1.1367 |
| 1.0795 | 29.54 | 960 | 1.1345 |
| 1.0453 | 32.0 | 1040 | 1.1317 |
| 1.2093 | 34.46 | 1120 | 1.1283 |
| 1.1442 | 36.92 | 1200 | 1.1253 |
| 0.966 | 39.38 | 1280 | 1.1239 |
| 0.9576 | 41.85 | 1360 | 1.1227 |
| 1.0146 | 44.31 | 1440 | 1.1222 |
| 1.0243 | 46.77 | 1520 | 1.1213 |
| 1.0192 | 49.23 | 1600 | 1.1214 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.7
- Tokenizers 0.15.0
|
vihangd/dopeyplats-1.1b-2T-v1 | vihangd | 2023-11-26T13:12:49Z | 1,511 | 3 | transformers | ["transformers", "pytorch", "llama", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2023-11-26T13:05:57Z |
---
license: apache-2.0
---
# DopeyPlats-1.1b V2
An experimental finetune of TinyLLaMA 1.1b 2T with Alpaca-QLoRA, with some DPO goodness.
## Datasets
Trained on Alpaca-style datasets.
## Prompt Template
Uses the Alpaca-style prompt template (see the sketch below).
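For reference, the standard Alpaca layout (no-input variant) is shown below; whether this model was trained on exactly this variant, or on the one with an additional `### Input:` field, is an assumption to verify.
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:
```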
|
Elikue/q-FrozenLake-v1-4x4-noSlippery | Elikue | 2023-11-26T13:10:24Z | 0 | 0 | null | ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us"] | reinforcement-learning | 2023-11-26T13:10:21Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Elikue/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
kejolong/cyberpunk | kejolong | 2023-11-26T13:04:51Z | 0 | 0 | null | ["license:creativeml-openrail-m", "region:us"] | null | 2023-11-17T16:33:47Z |
---
license: creativeml-openrail-m
---
|
zaidbhatti/t5-pst-ans | zaidbhatti | 2023-11-26T12:58:16Z | 45 | 0 | transformers | ["transformers", "pytorch", "t5", "question-answering", "generated_from_trainer", "base_model:allenai/unifiedqa-t5-small", "base_model:finetune:allenai/unifiedqa-t5-small", "text-generation-inference", "endpoints_compatible", "region:us"] | question-answering | 2023-11-26T10:03:22Z |
---
base_model: allenai/unifiedqa-t5-small
tags:
- generated_from_trainer
model-index:
- name: t5-pst-ans
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-pst-ans
This model is a fine-tuned version of [allenai/unifiedqa-t5-small](https://huggingface.co/allenai/unifiedqa-t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.2424
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 21 | 5.5295 |
| No log | 2.0 | 42 | 5.3172 |
| No log | 3.0 | 63 | 5.2424 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1
- Datasets 2.14.6
- Tokenizers 0.14.1
|
rajesh06/mistral-7b-llama-riding-camel | rajesh06 | 2023-11-26T12:52:54Z | 5 | 0 | peft | ["peft", "safetensors", "en", "arxiv:1910.09700", "base_model:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ", "base_model:adapter:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ", "region:us"] | null | 2023-11-26T12:50:56Z |
---
library_name: peft
base_model: TheBloke/Mistral-7B-Instruct-v0.1-GPTQ
language:
- en
---
# Model Card for Model ID
Llama riding Camel. Mistral-7B-Instruct-v0.1-GPTQ model trained using jokes from `taivop/joke-dataset`.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Rajesh Baidya
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** Quantized
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: gptq
- bits: 8
- tokenizer: None
- dataset: None
- group_size: 128
- damp_percent: 0.1
- desc_act: True
- sym: True
- true_sequential: True
- use_cuda_fp16: False
- model_seqlen: None
- block_name_to_quantize: None
- module_name_preceding_first_block: None
- batch_size: 1
- pad_token_id: None
- use_exllama: True
- max_input_length: None
- exllama_config: {'version': <ExllamaVersion.ONE: 1>}
- cache_block_outputs: True
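The list above matches the fields of a transformers `GPTQConfig`; the sketch below reconstructs it for reference (it describes how the GPTQ base model was quantized, not something you need to re-run in order to use the adapter):
```python
from transformers import GPTQConfig

# Reconstructed from the reported settings; fields left at None above are omitted.
gptq_config = GPTQConfig(
    bits=8,
    group_size=128,
    damp_percent=0.1,
    desc_act=True,
    sym=True,
    true_sequential=True,
    use_cuda_fp16=False,
)
```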
### Framework versions
- PEFT 0.6.2
|
lorddestrian/eli5_clm-model | lorddestrian | 2023-11-26T12:44:59Z | 7 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:distilbert/distilgpt2", "base_model:finetune:distilbert/distilgpt2", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2023-11-26T10:21:08Z |
---
license: apache-2.0
base_model: distilgpt2
tags:
- generated_from_trainer
model-index:
- name: eli5_clm-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eli5_clm-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7663
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.8643 | 1.0 | 1136 | 3.7824 |
| 3.7639 | 2.0 | 2272 | 3.7699 |
| 3.7244 | 3.0 | 3408 | 3.7663 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
zjo/whisper-small-hi | zjo | 2023-11-26T12:41:34Z | 6 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2023-11-24T05:46:38Z |
---
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-small-hi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-hi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1262
- Wer: 100.0
- Cer: 14.8636
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 40
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:-----:|:-------:|
| No log | 1.67 | 10 | 3.4373 | 98.0 | 17.6858 |
| No log | 3.33 | 20 | 3.2685 | 98.0 | 16.3688 |
| 3.0497 | 5.0 | 30 | 2.7666 | 100.0 | 16.4628 |
| 3.0497 | 6.67 | 40 | 2.1262 | 100.0 | 14.8636 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
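A minimal inference sketch using the transformers pipeline API; the audio path is a placeholder, and ffmpeg is needed for decoding:
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="zjo/whisper-small-hi")
result = asr("sample.wav")  # path to any audio file readable by ffmpeg
print(result["text"])
```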
|
CADM97/a2c-PandaReachDense-v3 | CADM97 | 2023-11-26T12:39:36Z | 0 | 0 | stable-baselines3 | ["stable-baselines3", "PandaReachDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2023-11-26T12:34:06Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.29 +/- 0.15
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
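The block above is the template's placeholder. A minimal loading sketch, with the checkpoint filename assumed from the usual huggingface_sb3 upload convention:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename is an assumption; check the repo's Files tab. Rolling the policy out in
# PandaReachDense-v3 additionally requires the panda-gym package to register the environment.
checkpoint = load_from_hub(repo_id="CADM97/a2c-PandaReachDense-v3", filename="a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```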
|
behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned_GroundTruth_5times_batch8_8e4_seed100 | behzadnet | 2023-11-26T12:33:03Z | 0 | 0 | peft | ["peft", "arxiv:1910.09700", "base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16", "base_model:adapter:Trelis/Llama-2-7b-chat-hf-sharded-bf16", "region:us"] | null | 2023-11-26T12:33:00Z |
---
library_name: peft
base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
|
YusufDagdeviren/turkishmusiclyricsmodel
|
YusufDagdeviren
| 2023-11-26T12:31:37Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"tr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-11-24T09:19:06Z |
---
license: apache-2.0
language:
- tr
---
# Model Description(EN)
This project uses a language model trained specifically for tasks such as sentiment analysis on Turkish song lyrics. The model was built by fine-tuning 'bert-base-turkish-cased' (BERTurk), a pre-trained BERT model, using the PyTorch and Huggingface Transformers libraries. Because it is trained to capture Turkish language structure and culture, it aims to produce more effective results on Turkish lyrics.
BERT (Bidirectional Encoder Representations from Transformers) is a language model architecture known for its ability to use surrounding words and context. 'bert-base-turkish-cased' is pre-trained as a large language model on Turkish-language text. Because it focuses on general language understanding, it can be fine-tuned for specific tasks such as classifying Turkish song lyrics.
The PyTorch and Huggingface Transformers libraries are used to train and serve the model. The model is designed for music-related tasks such as sentiment analysis and specifically aims to understand and label the emotional content of Turkish lyrics.
The model was trained with 10-fold cross-validation. This method splits the dataset into 10 equal parts and produces 10 training-test combinations, using each part in turn as the test set and the remaining 9 parts as training data. Evaluating over different subsets of the data gives a more reliable picture of the model's generalization ability and helps guard against overfitting: the reported performance is the average over the 10 test folds.
# Model Açıklaması(TR)
Bu proje, Türkçe şarkı sözleri üzerinde duygu analizi gibi görevler için özel olarak eğitilmiş bir dil modeli kullanmaktadır. Model, PyTorch ve Huggingface Transformers kütüphaneleri kullanılarak geliştirilen BERTurk isimli önceden eğitilmiş bir BERT modeli olan 'bert-base-turkish-cased' üzerine ince ayar yapılarak oluşturulmuştur. Bu model, Türkçe dil yapısını ve kültürünü daha iyi anlamak üzere özel olarak eğitildiği için Türkçe şarkı sözleri üzerinde daha etkili sonuçlar üretmeyi amaçlamaktadır.
BERT (Bidirectional Encoder Representations from Transformers), önceki kelime ve bağlam bilgilerini anlama yeteneği ile bilinen bir dil modeli mimarisidir. 'bert-base-turkish-cased', Türkçe dilindeki metinler üzerinde geniş bir dil modeli olarak önceden eğitilmiştir. Bu model, genel dil anlama yetenekleri üzerine odaklanarak, Türkçe şarkı sözleri gibi belirli görevler için ince ayar yapılmasına olanak tanır.
PyTorch ve Huggingface Transformers kütüphaneleri, modelin eğitimi ve kullanımını kolaylaştırmak için kullanılan güçlü araçlardır. Bu model, duygu analizi gibi müzikle ilgili görevlerde kullanılmak üzere tasarlanmış olup, özellikle Türkçe şarkı sözleri üzerinde duygusal içeriği anlamak ve etiketlemek için optimal performans sağlamayı hedeflemektedir.
Modelin eğitimi için 10 katlı çapraz doğrulama yöntemi kullanılmıştır. Bu yöntem, veri setini 10 eşit parçaya böler ve her bir parçayı sırayla test verisi olarak kullanırken diğer 9 parçayı eğitim verisi olarak kullanarak 10 farklı eğitim-test kombinasyonu oluşturur. Her bir kombinasyon, modelin genel performansını daha güvenilir bir şekilde değerlendirmemize yardımcı olur. Bu süreç, eğitim verisinin farklı alt küme kombinasyonları üzerinde gerçekleştiği için modelin genelleme yeteneğini arttırır ve aşırı uydurmayı önler. Yani, modelin performansı, 10 farklı test seti üzerinde ortalaması alınarak değerlendirilmiştir, bu da modelin daha güvenilir ve genel bir performans sunmasını sağlar.
# Model Performance
Per-fold accuracy from 10-fold cross-validation:
| Fold | Accuracy |
|:----:|:--------:|
| 1 | 0.6875 |
| 2 | 0.8 |
| 3 | 0.95 |
| 4 | 0.975 |
| 5 | 0.9875 |
| 6 | 1.0 |
| 7 | 0.975 |
| 8 | 1.0 |
| 9 | 1.0 |
| 10 | 0.975 |
Average Accuracy: 0.935
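## Usage
A minimal usage sketch with 🤗 Transformers (the emotion label names are defined in the model's config and are not listed in this card; the example lyric is illustrative):
```python
from transformers import pipeline

# Load the fine-tuned BERTurk classifier from the Hub
classifier = pipeline(
    "text-classification",
    model="YusufDagdeviren/turkishmusiclyricsmodel",
)

# Predict the emotion label of a lyric line (label names come from the model config)
print(classifier("Gözlerinde kayboldum, yine sensiz sabah oldum"))
```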
## My Dataset
[My Dataset Link](https://www.kaggle.com/datasets/yusufdagdeviren/turkishmusiclyricsemotions)
## My Project
[My Project Link](https://github.com/ApolloTune)
|
Ginger1704/ppo-SnowballTarget
|
Ginger1704
| 2023-11-26T12:22:27Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-11-26T12:22:10Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: Ginger1704/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
zaidbhatti/t5-pst-gen
|
zaidbhatti
| 2023-11-26T12:20:14Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:allenai/t5-small-squad2-question-generation",
"base_model:finetune:allenai/t5-small-squad2-question-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-11-26T10:40:33Z |
---
base_model: allenai/t5-small-squad2-question-generation
tags:
- generated_from_trainer
model-index:
- name: t5-pst-gen
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-pst-gen
This model is a fine-tuned version of [allenai/t5-small-squad2-question-generation](https://huggingface.co/allenai/t5-small-squad2-question-generation) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 7.1348
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 21 | 13.4676 |
| No log | 2.0 | 42 | 10.5790 |
| No log | 3.0 | 63 | 8.6378 |
| No log | 4.0 | 84 | 7.5136 |
| No log | 5.0 | 105 | 7.1348 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1
- Datasets 2.14.6
- Tokenizers 0.14.1
|
ai2sql/ai2sql_mistral_7b
|
ai2sql
| 2023-11-26T12:19:08Z | 7 | 1 |
peft
|
[
"peft",
"safetensors",
"mistral-7b",
"lora",
"text-generation",
"en",
"dataset:wikisql",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
] |
text-generation
| 2023-11-24T21:13:12Z |
---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
pipeline_tag: text-generation
datasets:
- wikisql
language:
- en
tags:
- mistral-7b
- lora
widget:
- text: 'question: get people name with age equal 25 table: id, name, age'
---
# AI2sql
AI2sql is a state-of-the-art LLM for converting natural language questions to SQL queries.
## Model Description
This model card presents the finetuning of the Mistral-7b model using the PEFT library and bitsandbytes for loading large models in 4-bit. The notebook demonstrates finetuning with Low Rank Adapters (LoRA), allowing only the adapters to be finetuned instead of the entire model. The process is designed for ease of use with Google Colab and is applicable for models supporting device_map.
## Training Data
The finetuning involves a dataset on finance from Wikisql, using 10% of the data to showcase the process. The data is prepared in a prompt format for better comprehension by the model.
## Training Procedure
The training involves several steps:
1. **Installing Necessary Packages:** Installation of required libraries from their source.
2. **Model Loading:** Using QLoRA quantization to load the model, reducing memory usage.
3. **Dataset Preparation:** Tokenizing and splitting the dataset for training and testing.
4. **Applying LoRA:** Utilizing PEFT for applying low-rank adapters to the model.
5. **Running the Training:** Implementing training with specific arguments, showcasing the process with a demo setup.
6. **Evaluating the Model:** Qualitative evaluation through inferences.
## How to Use
The trained adapters can be shared on the Hugging Face Hub for easy loading. Users can directly load the adapters from the Hub and employ the model for tasks such as generating SQL queries.
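A minimal loading sketch using PEFT and the prompt format shown in the widget above (the 4-bit flags mirror the quantization config listed below; generation settings are illustrative):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "mistralai/Mistral-7B-v0.1"
adapter_id = "ai2sql/ai2sql_mistral_7b"

# 4-bit quantization, matching the bitsandbytes config used during training
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)

# Prompt format from the widget: natural-language question plus table schema
prompt = "question: get people name with age equal 25 table: id, name, age"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```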
## Limitations and Bias
This finetuning process is specific to the Mistral-7b model and may not generalize to other models. The focus on finance data might limit the model's applicability to other domains.
## Ethical Considerations
Users should be aware of potential biases in the training data, especially given its focus on finance, and should consider this when applying the model to real-world scenarios.
## Acknowledgements
This work utilizes resources and tools from Hugging Face, including the PEFT library, bitsandbytes, and other associated libraries. The process is designed to be accessible and implementable using Google Colab.
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.3.dev0
|
SchubergPhilis/TinyLlama-1.1B-Chat-v0.4-ENG
|
SchubergPhilis
| 2023-11-26T11:54:47Z | 9 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"dataset:SchubergPhilis/OpenAssistant-Top1-ENG-V1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-26T11:29:47Z |
---
license: apache-2.0
datasets:
- SchubergPhilis/OpenAssistant-Top1-ENG-V1
---
## TinyLlama-1.1B-Chat-v0.4-ENG
Schuberg Philis
Anoosh Ahmadi
#### Base Model: TinyLlama-1.1B-intermediate-step-715k-1.5T
Model creator: Zhang Peiyuan
Original model: TinyLlama-1.1B-intermediate-step-715k-1.5T
https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-715k-1.5T
#### Description
This repo contains ***SafeTensors*** model files for TinyLlama-1.1B-intermediate-step-715k-1.5T.
This model is fine-tuned on the ***SchubergPhilis/OpenAssistant-Top1-ENG-V1*** dataset (English conversations only).
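#### Example usage
A minimal loading sketch with 🤗 Transformers. The chat prompt format below is an assumption borrowed from earlier TinyLlama chat checkpoints; check the tokenizer's chat template (if provided) for the actual format:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SchubergPhilis/TinyLlama-1.1B-Chat-v0.4-ENG"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Prompt format is an assumption, not documented in this card
prompt = "### Human: What is Schuberg Philis known for?### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```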
|
elvis92/pets_rank_2
|
elvis92
| 2023-11-26T11:48:18Z | 0 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-11-26T06:41:59Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - elvis92/pets_rank_2
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the pcuenq/oxford-pets dataset. You can find some example images below.




|
deepnight-research/zsc-text
|
deepnight-research
| 2023-11-26T11:46:54Z | 59 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"zero-shot-classification",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
zero-shot-classification
| 2023-11-25T20:51:18Z |
---
license: mit
pipeline_tag: zero-shot-classification
---
|
iamkhadke/zephyr-7b-beta_bf
|
iamkhadke
| 2023-11-26T11:43:42Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:HuggingFaceH4/zephyr-7b-beta",
"base_model:adapter:HuggingFaceH4/zephyr-7b-beta",
"region:us"
] | null | 2023-11-26T11:43:35Z |
---
library_name: peft
base_model: HuggingFaceH4/zephyr-7b-beta
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.3.dev0
|
LizzyBennet/sample
|
LizzyBennet
| 2023-11-26T11:43:25Z | 5 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2023-11-03T03:47:07Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.2
|
nyamato/distilbert-base-uncased-finetuned-emotion
|
nyamato
| 2023-11-26T11:37:20Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-11-26T10:54:18Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.923
- name: F1
type: f1
value: 0.9231347843792068
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2203
- Accuracy: 0.923
- F1: 0.9231
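A minimal usage sketch (the six emotion labels come from the `emotion` dataset; the exact id-to-label mapping is stored in the model config):
```python
from transformers import pipeline

# Load the fine-tuned emotion classifier from the Hub
classifier = pipeline(
    "text-classification",
    model="nyamato/distilbert-base-uncased-finetuned-emotion",
)

# Returns the predicted emotion label and its score
print(classifier("I can't wait to see you this weekend!"))
```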
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8478 | 1.0 | 250 | 0.3235 | 0.901 | 0.8991 |
| 0.2492 | 2.0 | 500 | 0.2203 | 0.923 | 0.9231 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
dchuang777/textual_inversion_cat
|
dchuang777
| 2023-11-26T11:34:28Z | 9 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-11-26T07:25:38Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - dchuang777/textual_inversion_cat
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images in the following.
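A minimal inference sketch with diffusers. The placeholder token below is hypothetical; use the token string the embedding was actually trained with:
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model, then attach the learned textual inversion embedding
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("dchuang777/textual_inversion_cat")

# "<cat-toy>" is a hypothetical placeholder token, not confirmed by this card
image = pipe("a <cat-toy> sitting on a windowsill", num_inference_steps=30).images[0]
image.save("cat.png")
```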
|
susnato/detr-resnet-50_finetuned_plant_disease_detection_processed
|
susnato
| 2023-11-26T11:28:59Z | 45 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"detr",
"object-detection",
"generated_from_trainer",
"base_model:facebook/detr-resnet-50",
"base_model:finetune:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2023-11-25T06:25:30Z |
---
license: apache-2.0
base_model: facebook/detr-resnet-50
tags:
- generated_from_trainer
model-index:
- name: detr-resnet-50_finetuned_plant_disease_detection_processed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50_finetuned_plant_disease_detection_processed
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6402
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.2965 | 0.19 | 50 | 4.4784 |
| 4.7649 | 0.38 | 100 | 4.3439 |
| 4.4907 | 0.57 | 150 | 4.0077 |
| 4.3973 | 0.76 | 200 | 3.2143 |
| 3.4084 | 0.95 | 250 | 2.6818 |
| 2.7091 | 1.14 | 300 | 2.3603 |
| 2.4601 | 1.33 | 350 | 1.9004 |
| 2.1096 | 1.52 | 400 | 1.5639 |
| 1.6941 | 1.7 | 450 | 1.3240 |
| 1.4949 | 1.89 | 500 | 1.1247 |
| 1.2246 | 2.08 | 550 | 1.0421 |
| 1.4479 | 2.27 | 600 | 1.1546 |
| 1.1327 | 2.46 | 650 | 1.1098 |
| 1.1184 | 2.65 | 700 | 0.8950 |
| 1.0516 | 2.84 | 750 | 0.8601 |
| 1.2556 | 3.03 | 800 | 0.8575 |
| 1.1216 | 3.22 | 850 | 0.8314 |
| 1.1027 | 3.41 | 900 | 1.0676 |
| 1.0815 | 3.6 | 950 | 0.9716 |
| 1.2254 | 3.79 | 1000 | 1.0091 |
| 0.9896 | 3.98 | 1050 | 0.7600 |
| 1.0736 | 4.17 | 1100 | 0.8907 |
| 1.2462 | 4.36 | 1150 | 0.7506 |
| 0.9959 | 4.55 | 1200 | 0.7623 |
| 1.0895 | 4.73 | 1250 | 0.7570 |
| 1.0736 | 4.92 | 1300 | 0.8248 |
| 1.1015 | 5.11 | 1350 | 0.8682 |
| 1.1423 | 5.3 | 1400 | 0.8340 |
| 1.0906 | 5.49 | 1450 | 0.8372 |
| 0.9333 | 5.68 | 1500 | 0.8420 |
| 1.1347 | 5.87 | 1550 | 0.8718 |
| 0.9407 | 6.06 | 1600 | 0.8270 |
| 0.8138 | 6.25 | 1650 | 0.8241 |
| 0.8731 | 6.44 | 1700 | 0.8013 |
| 1.0146 | 6.63 | 1750 | 0.7704 |
| 0.8847 | 6.82 | 1800 | 0.8885 |
| 1.0283 | 7.01 | 1850 | 0.8804 |
| 1.0359 | 7.2 | 1900 | 0.7907 |
| 0.987 | 7.39 | 1950 | 0.7997 |
| 1.0279 | 7.58 | 2000 | 0.9095 |
| 0.9027 | 7.77 | 2050 | 0.6823 |
| 0.927 | 7.95 | 2100 | 0.6728 |
| 1.0499 | 8.14 | 2150 | 0.6537 |
| 0.9774 | 8.33 | 2200 | 0.6455 |
| 0.9171 | 8.52 | 2250 | 0.6456 |
| 1.0002 | 8.71 | 2300 | 0.6723 |
| 0.9052 | 8.9 | 2350 | 0.6554 |
| 0.9029 | 9.09 | 2400 | 0.7272 |
| 1.0247 | 9.28 | 2450 | 0.6997 |
| 0.8296 | 9.47 | 2500 | 0.6661 |
| 1.0659 | 9.66 | 2550 | 0.7914 |
| 1.0226 | 9.85 | 2600 | 0.7823 |
| 0.9419 | 10.04 | 2650 | 0.7709 |
| 0.9008 | 10.23 | 2700 | 0.8114 |
| 0.826 | 10.42 | 2750 | 0.7042 |
| 0.7957 | 10.61 | 2800 | 0.7764 |
| 1.0086 | 10.8 | 2850 | 0.8362 |
| 1.0076 | 10.98 | 2900 | 0.8048 |
| 0.9613 | 11.17 | 2950 | 0.6945 |
| 0.9155 | 11.36 | 3000 | 0.7011 |
| 0.9436 | 11.55 | 3050 | 0.6524 |
| 0.9134 | 11.74 | 3100 | 0.6582 |
| 0.817 | 11.93 | 3150 | 0.6678 |
| 0.8545 | 12.12 | 3200 | 0.6520 |
| 0.9801 | 12.31 | 3250 | 0.7813 |
| 0.8566 | 12.5 | 3300 | 0.7205 |
| 0.8966 | 12.69 | 3350 | 0.6326 |
| 0.8705 | 12.88 | 3400 | 0.6577 |
| 0.8193 | 13.07 | 3450 | 0.6391 |
| 0.8099 | 13.26 | 3500 | 0.6658 |
| 0.921 | 13.45 | 3550 | 0.6535 |
| 0.7915 | 13.64 | 3600 | 0.6576 |
| 1.1439 | 13.83 | 3650 | 0.6593 |
| 0.8702 | 14.02 | 3700 | 0.6519 |
| 0.73 | 14.2 | 3750 | 0.6403 |
| 0.8306 | 14.39 | 3800 | 0.6393 |
| 0.8678 | 14.58 | 3850 | 0.6405 |
| 1.0003 | 14.77 | 3900 | 0.6407 |
| 1.023 | 14.96 | 3950 | 0.6402 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.15.0
|
ahmadtashfeen/my_awesome_qa_model
|
ahmadtashfeen
| 2023-11-26T11:11:40Z | 15 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"question-answering",
"generated_from_trainer",
"base_model:deepset/roberta-base-squad2",
"base_model:finetune:deepset/roberta-base-squad2",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-11-26T11:11:05Z |
---
license: cc-by-4.0
base_model: deepset/roberta-base-squad2
tags:
- generated_from_trainer
model-index:
- name: my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_qa_model
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0044
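A minimal usage sketch (the question and context below are illustrative):
```python
from transformers import pipeline

# Load the fine-tuned extractive QA model from the Hub
qa = pipeline("question-answering", model="ahmadtashfeen/my_awesome_qa_model")

result = qa(
    question="What is the model fine-tuned from?",
    context="my_awesome_qa_model is a fine-tuned version of deepset/roberta-base-squad2.",
)
print(result["answer"], result["score"])
```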
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 21 | 0.4687 |
| No log | 2.0 | 42 | 0.5886 |
| No log | 3.0 | 63 | 0.6614 |
| No log | 4.0 | 84 | 0.6629 |
| No log | 5.0 | 105 | 0.8131 |
| No log | 6.0 | 126 | 1.1301 |
| No log | 7.0 | 147 | 0.9610 |
| No log | 8.0 | 168 | 1.0402 |
| No log | 9.0 | 189 | 1.0127 |
| No log | 10.0 | 210 | 1.0044 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
hkivancoral/hushem_5x_deit_base_sgd_0001_fold5
|
hkivancoral
| 2023-11-26T11:03:06Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-base-patch16-224",
"base_model:finetune:facebook/deit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-11-26T10:32:10Z |
---
license: apache-2.0
base_model: facebook/deit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_5x_deit_base_sgd_0001_fold5
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.24390243902439024
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_5x_deit_base_sgd_0001_fold5
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3415
- Accuracy: 0.2439
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4294 | 1.0 | 28 | 1.3731 | 0.1951 |
| 1.4335 | 2.0 | 56 | 1.3715 | 0.1707 |
| 1.422 | 3.0 | 84 | 1.3699 | 0.1951 |
| 1.4098 | 4.0 | 112 | 1.3685 | 0.1951 |
| 1.4327 | 5.0 | 140 | 1.3671 | 0.1951 |
| 1.4151 | 6.0 | 168 | 1.3658 | 0.1951 |
| 1.4318 | 7.0 | 196 | 1.3646 | 0.1951 |
| 1.4306 | 8.0 | 224 | 1.3633 | 0.1951 |
| 1.4139 | 9.0 | 252 | 1.3622 | 0.1951 |
| 1.3993 | 10.0 | 280 | 1.3610 | 0.1951 |
| 1.4169 | 11.0 | 308 | 1.3598 | 0.1951 |
| 1.4153 | 12.0 | 336 | 1.3588 | 0.1951 |
| 1.3944 | 13.0 | 364 | 1.3578 | 0.2195 |
| 1.3902 | 14.0 | 392 | 1.3568 | 0.2439 |
| 1.4112 | 15.0 | 420 | 1.3559 | 0.2439 |
| 1.3995 | 16.0 | 448 | 1.3550 | 0.2439 |
| 1.3975 | 17.0 | 476 | 1.3541 | 0.2195 |
| 1.4069 | 18.0 | 504 | 1.3533 | 0.2195 |
| 1.4187 | 19.0 | 532 | 1.3524 | 0.2195 |
| 1.4025 | 20.0 | 560 | 1.3517 | 0.2195 |
| 1.3945 | 21.0 | 588 | 1.3509 | 0.2195 |
| 1.3823 | 22.0 | 616 | 1.3502 | 0.2195 |
| 1.3849 | 23.0 | 644 | 1.3496 | 0.2195 |
| 1.3949 | 24.0 | 672 | 1.3489 | 0.2195 |
| 1.3838 | 25.0 | 700 | 1.3483 | 0.2195 |
| 1.3842 | 26.0 | 728 | 1.3477 | 0.2195 |
| 1.3834 | 27.0 | 756 | 1.3472 | 0.2195 |
| 1.3887 | 28.0 | 784 | 1.3466 | 0.2195 |
| 1.381 | 29.0 | 812 | 1.3461 | 0.2195 |
| 1.4001 | 30.0 | 840 | 1.3457 | 0.2439 |
| 1.3827 | 31.0 | 868 | 1.3452 | 0.2439 |
| 1.3845 | 32.0 | 896 | 1.3448 | 0.2439 |
| 1.3786 | 33.0 | 924 | 1.3444 | 0.2439 |
| 1.3866 | 34.0 | 952 | 1.3440 | 0.2439 |
| 1.3704 | 35.0 | 980 | 1.3437 | 0.2439 |
| 1.3751 | 36.0 | 1008 | 1.3433 | 0.2439 |
| 1.3701 | 37.0 | 1036 | 1.3431 | 0.2439 |
| 1.3624 | 38.0 | 1064 | 1.3428 | 0.2439 |
| 1.3758 | 39.0 | 1092 | 1.3426 | 0.2439 |
| 1.3768 | 40.0 | 1120 | 1.3423 | 0.2439 |
| 1.3757 | 41.0 | 1148 | 1.3422 | 0.2439 |
| 1.3802 | 42.0 | 1176 | 1.3420 | 0.2439 |
| 1.38 | 43.0 | 1204 | 1.3418 | 0.2439 |
| 1.3821 | 44.0 | 1232 | 1.3417 | 0.2439 |
| 1.3878 | 45.0 | 1260 | 1.3417 | 0.2439 |
| 1.3547 | 46.0 | 1288 | 1.3416 | 0.2439 |
| 1.3794 | 47.0 | 1316 | 1.3416 | 0.2439 |
| 1.3781 | 48.0 | 1344 | 1.3415 | 0.2439 |
| 1.373 | 49.0 | 1372 | 1.3415 | 0.2439 |
| 1.3839 | 50.0 | 1400 | 1.3415 | 0.2439 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
ewwerpm/m4-sWE-0.1B.pt
|
ewwerpm
| 2023-11-26T10:51:46Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-11-26T10:46:45Z |
Sharded from RWKV-4-World-CHNtuned-0.1B-v1-20230617-ctx4096.pth; this is the first TorchScript model, m4-sWE-0.1B.script.pt. It loads without problems, but Netron cannot open it.

|
nanduzz/q-Taxi-v3
|
nanduzz
| 2023-11-26T10:49:38Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-11-26T10:49:35Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="nanduzz/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
bh8648/esg_base0-epoch5-copy3
|
bh8648
| 2023-11-26T10:20:28Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-11-26T10:20:17Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0
|
tuanio/w2v2_ablation_freeze_no_spec_augment
|
tuanio
| 2023-11-26T10:04:57Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"generated_from_trainer",
"base_model:nguyenvulebinh/wav2vec2-base-vietnamese-250h",
"base_model:finetune:nguyenvulebinh/wav2vec2-base-vietnamese-250h",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2023-11-26T08:28:48Z |
---
license: cc-by-nc-4.0
base_model: nguyenvulebinh/wav2vec2-base-vietnamese-250h
tags:
- generated_from_trainer
model-index:
- name: w2v2_ablation_freeze_no_spec_augment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# w2v2_ablation_freeze_no_spec_augment
This model is a fine-tuned version of [nguyenvulebinh/wav2vec2-base-vietnamese-250h](https://huggingface.co/nguyenvulebinh/wav2vec2-base-vietnamese-250h) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 32
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 1.13.1+cu117
- Datasets 2.12.0
- Tokenizers 0.14.1
|
hkivancoral/hushem_5x_deit_base_sgd_0001_fold3
|
hkivancoral
| 2023-11-26T09:59:56Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-base-patch16-224",
"base_model:finetune:facebook/deit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-11-26T09:29:34Z |
---
license: apache-2.0
base_model: facebook/deit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_5x_deit_base_sgd_0001_fold3
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.3488372093023256
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_5x_deit_base_sgd_0001_fold3
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3613
- Accuracy: 0.3488
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4194 | 1.0 | 28 | 1.3854 | 0.3721 |
| 1.4285 | 2.0 | 56 | 1.3841 | 0.3721 |
| 1.4355 | 3.0 | 84 | 1.3830 | 0.3721 |
| 1.435 | 4.0 | 112 | 1.3819 | 0.3721 |
| 1.3846 | 5.0 | 140 | 1.3808 | 0.3488 |
| 1.4335 | 6.0 | 168 | 1.3798 | 0.3488 |
| 1.4069 | 7.0 | 196 | 1.3789 | 0.3488 |
| 1.4319 | 8.0 | 224 | 1.3779 | 0.3488 |
| 1.4012 | 9.0 | 252 | 1.3769 | 0.3488 |
| 1.401 | 10.0 | 280 | 1.3761 | 0.3488 |
| 1.4013 | 11.0 | 308 | 1.3752 | 0.3488 |
| 1.4057 | 12.0 | 336 | 1.3744 | 0.3488 |
| 1.3918 | 13.0 | 364 | 1.3736 | 0.3488 |
| 1.3961 | 14.0 | 392 | 1.3729 | 0.3721 |
| 1.3654 | 15.0 | 420 | 1.3722 | 0.3721 |
| 1.3967 | 16.0 | 448 | 1.3715 | 0.3488 |
| 1.3921 | 17.0 | 476 | 1.3708 | 0.3488 |
| 1.3819 | 18.0 | 504 | 1.3702 | 0.3488 |
| 1.3847 | 19.0 | 532 | 1.3696 | 0.3488 |
| 1.402 | 20.0 | 560 | 1.3690 | 0.3488 |
| 1.3988 | 21.0 | 588 | 1.3684 | 0.3488 |
| 1.3796 | 22.0 | 616 | 1.3679 | 0.3488 |
| 1.3761 | 23.0 | 644 | 1.3674 | 0.3721 |
| 1.3729 | 24.0 | 672 | 1.3669 | 0.3721 |
| 1.3864 | 25.0 | 700 | 1.3664 | 0.3488 |
| 1.3808 | 26.0 | 728 | 1.3660 | 0.3488 |
| 1.3849 | 27.0 | 756 | 1.3656 | 0.3488 |
| 1.3863 | 28.0 | 784 | 1.3652 | 0.3488 |
| 1.3673 | 29.0 | 812 | 1.3648 | 0.3488 |
| 1.3797 | 30.0 | 840 | 1.3644 | 0.3488 |
| 1.3679 | 31.0 | 868 | 1.3641 | 0.3488 |
| 1.3844 | 32.0 | 896 | 1.3638 | 0.3488 |
| 1.3656 | 33.0 | 924 | 1.3634 | 0.3488 |
| 1.3701 | 34.0 | 952 | 1.3631 | 0.3488 |
| 1.363 | 35.0 | 980 | 1.3629 | 0.3488 |
| 1.386 | 36.0 | 1008 | 1.3627 | 0.3488 |
| 1.3733 | 37.0 | 1036 | 1.3624 | 0.3488 |
| 1.3666 | 38.0 | 1064 | 1.3622 | 0.3488 |
| 1.3489 | 39.0 | 1092 | 1.3620 | 0.3488 |
| 1.3854 | 40.0 | 1120 | 1.3619 | 0.3488 |
| 1.3831 | 41.0 | 1148 | 1.3617 | 0.3488 |
| 1.3627 | 42.0 | 1176 | 1.3616 | 0.3488 |
| 1.3753 | 43.0 | 1204 | 1.3615 | 0.3488 |
| 1.3596 | 44.0 | 1232 | 1.3614 | 0.3488 |
| 1.3572 | 45.0 | 1260 | 1.3614 | 0.3488 |
| 1.3678 | 46.0 | 1288 | 1.3613 | 0.3488 |
| 1.3744 | 47.0 | 1316 | 1.3613 | 0.3488 |
| 1.3693 | 48.0 | 1344 | 1.3613 | 0.3488 |
| 1.3589 | 49.0 | 1372 | 1.3613 | 0.3488 |
| 1.3662 | 50.0 | 1400 | 1.3613 | 0.3488 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
mlwithrakesh/ppo-LunarLander-v2
|
mlwithrakesh
| 2023-11-26T09:52:26Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-11-25T12:41:11Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 265.90 +/- 17.53
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
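A minimal loading sketch (the checkpoint filename inside the repo is an assumption; check the repository's file listing):
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub (filename is an assumption)
checkpoint = load_from_hub(
    repo_id="mlwithrakesh/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip"
)
model = PPO.load(checkpoint)

# Evaluate the loaded agent over a few episodes
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```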
|
teng0212/ppo-Huggy
|
teng0212
| 2023-11-26T09:43:30Z | 3 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-11-26T09:43:24Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: teng0212/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
mlwithrakesh/Taxi-v3
|
mlwithrakesh
| 2023-11-26T09:38:42Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-11-26T09:38:40Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.72
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="mlwithrakesh/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
aguinrodriguezj/FineTuning-distilBERT-SentimentAnalysis-3000samples
|
aguinrodriguezj
| 2023-11-26T09:33:11Z | 14 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-11-26T08:52:48Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: FineTuning-distilBERT-SentimentAnalysis-3000samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8833333333333333
- name: F1
type: f1
value: 0.8867313915857605
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# FineTuning-distilBERT-SentimentAnalysis-3000samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3595
- Accuracy: 0.8833
- F1: 0.8867
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 188 | 0.3306 | 0.8633 | 0.8731 |
| No log | 2.0 | 376 | 0.3595 | 0.8833 | 0.8867 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
bh8648/esg_base0-epoch5-copy2
|
bh8648
| 2023-11-26T09:29:45Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-11-26T09:29:35Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0
|
afalia/test-trainer
|
afalia
| 2023-11-26T09:20:36Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-11-26T09:20:19Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: test-trainer
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.75
- name: F1
type: f1
value: 0.8241379310344829
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-trainer
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1690
- Accuracy: 0.75
- F1: 0.8241
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 459 | 0.7288 | 0.7353 | 0.8176 |
| 0.3982 | 2.0 | 918 | 1.0392 | 0.7549 | 0.8350 |
| 0.33 | 3.0 | 1377 | 1.1690 | 0.75 | 0.8241 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.1.0+cu118
- Datasets 1.17.0
- Tokenizers 0.14.1
|
zhila/ppo-LunarLander-v2
|
zhila
| 2023-11-26T09:02:49Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-11-26T09:02:18Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 229.51 +/- 60.21
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
zhijian12345/a2c-PandaReachDense-v3
|
zhijian12345
| 2023-11-26T09:01:30Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-11-26T08:52:09Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.20 +/- 0.10
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
hkivancoral/hushem_5x_deit_base_sgd_0001_fold1
|
hkivancoral
| 2023-11-26T08:57:53Z | 9 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-base-patch16-224",
"base_model:finetune:facebook/deit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-11-26T08:27:16Z |
---
license: apache-2.0
base_model: facebook/deit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_5x_deit_base_sgd_0001_fold1
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.3333333333333333
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_5x_deit_base_sgd_0001_fold1
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3736
- Accuracy: 0.3333
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4044 | 1.0 | 27 | 1.3857 | 0.2889 |
| 1.4077 | 2.0 | 54 | 1.3850 | 0.2889 |
| 1.4004 | 3.0 | 81 | 1.3845 | 0.2667 |
| 1.4271 | 4.0 | 108 | 1.3839 | 0.2667 |
| 1.3916 | 5.0 | 135 | 1.3834 | 0.2889 |
| 1.3945 | 6.0 | 162 | 1.3828 | 0.2889 |
| 1.3833 | 7.0 | 189 | 1.3824 | 0.2889 |
| 1.3987 | 8.0 | 216 | 1.3819 | 0.2889 |
| 1.3885 | 9.0 | 243 | 1.3815 | 0.2889 |
| 1.4047 | 10.0 | 270 | 1.3810 | 0.2667 |
| 1.3682 | 11.0 | 297 | 1.3806 | 0.2889 |
| 1.3955 | 12.0 | 324 | 1.3802 | 0.2889 |
| 1.3888 | 13.0 | 351 | 1.3799 | 0.2889 |
| 1.3775 | 14.0 | 378 | 1.3795 | 0.2889 |
| 1.3829 | 15.0 | 405 | 1.3792 | 0.2889 |
| 1.3959 | 16.0 | 432 | 1.3788 | 0.2889 |
| 1.3739 | 17.0 | 459 | 1.3785 | 0.2889 |
| 1.3699 | 18.0 | 486 | 1.3782 | 0.2889 |
| 1.3964 | 19.0 | 513 | 1.3779 | 0.3111 |
| 1.3891 | 20.0 | 540 | 1.3776 | 0.3333 |
| 1.3791 | 21.0 | 567 | 1.3773 | 0.3333 |
| 1.3887 | 22.0 | 594 | 1.3770 | 0.3333 |
| 1.3832 | 23.0 | 621 | 1.3768 | 0.3333 |
| 1.3736 | 24.0 | 648 | 1.3765 | 0.3333 |
| 1.3564 | 25.0 | 675 | 1.3763 | 0.3111 |
| 1.3586 | 26.0 | 702 | 1.3761 | 0.3111 |
| 1.3706 | 27.0 | 729 | 1.3758 | 0.3111 |
| 1.3579 | 28.0 | 756 | 1.3756 | 0.3111 |
| 1.3705 | 29.0 | 783 | 1.3754 | 0.3111 |
| 1.3624 | 30.0 | 810 | 1.3752 | 0.3111 |
| 1.367 | 31.0 | 837 | 1.3751 | 0.3111 |
| 1.3534 | 32.0 | 864 | 1.3749 | 0.3333 |
| 1.3672 | 33.0 | 891 | 1.3747 | 0.3333 |
| 1.3654 | 34.0 | 918 | 1.3746 | 0.3333 |
| 1.3592 | 35.0 | 945 | 1.3744 | 0.3333 |
| 1.3511 | 36.0 | 972 | 1.3743 | 0.3333 |
| 1.3644 | 37.0 | 999 | 1.3742 | 0.3333 |
| 1.3508 | 38.0 | 1026 | 1.3741 | 0.3333 |
| 1.3516 | 39.0 | 1053 | 1.3740 | 0.3333 |
| 1.3605 | 40.0 | 1080 | 1.3739 | 0.3333 |
| 1.3566 | 41.0 | 1107 | 1.3739 | 0.3333 |
| 1.3647 | 42.0 | 1134 | 1.3738 | 0.3333 |
| 1.3671 | 43.0 | 1161 | 1.3737 | 0.3333 |
| 1.3471 | 44.0 | 1188 | 1.3737 | 0.3333 |
| 1.3589 | 45.0 | 1215 | 1.3737 | 0.3333 |
| 1.3648 | 46.0 | 1242 | 1.3736 | 0.3333 |
| 1.3521 | 47.0 | 1269 | 1.3736 | 0.3333 |
| 1.365 | 48.0 | 1296 | 1.3736 | 0.3333 |
| 1.3656 | 49.0 | 1323 | 1.3736 | 0.3333 |
| 1.3545 | 50.0 | 1350 | 1.3736 | 0.3333 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Djacon/rubert-tiny2-russian-emotion-detection
|
Djacon
| 2023-11-26T08:35:31Z | 103 | 3 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"russian",
"classification",
"emotion",
"emotion-detection",
"emotion-recognition",
"multiclass",
"ru",
"en",
"dataset:Djacon/ru-izard-emotions",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-04-08T16:58:47Z |
---
license: mit
language:
- ru
- en
tags:
- russian
- classification
- emotion
- emotion-detection
- emotion-recognition
- multiclass
widget:
- text: Сейчас ровно час дня
- text: Сегодня такой замечательный день!
- text: Жалею, что вчера сходил на этот концерт
- text: Что за бред я только что посмотрел...
- text: Куда бы сегодня сходить?
- text: Воу, это было так неожиданно
- text: Фу, эта еда просто отвратительна!
- text: В темной комнате услышал тихий посторонний шорох
- text: Извини, я не хотел чтобы так все произошло
datasets:
- Djacon/ru-izard-emotions
---
## Short Description
The __rubert-tiny2-russian-emotion-detection__ is a fine-tuned [rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2) model for the multi-label __emotion classification__ task, specifically on Russian texts. Trained on the custom [ru-izard-emotions](https://huggingface.co/datasets/Djacon/ru-izard-emotions) dataset, this model can recognize a spectrum of 9 emotions, including __joy__, __sadness__, __anger__, __enthusiasm__, __surprise__, __disgust__, __fear__, __guilt__, __shame__, plus __neutral__ (no emotion). The project was inspired by [Izard's model](https://en.wikipedia.org/wiki/Differential_Emotions_Scale) of human emotions.
For more information about the model, please check the [Github repository](https://github.com/Djacon/russian-emotion-detection)
## Training Parameters:
```yaml
Optimizer: AdamW
Schedule: LambdaLR
Learning Rate: 1e-4
Batch Size: 64
Number Of Epochs: 10
```
## Emotion Categories:
```js
0. Neutral (Нейтрально)
1. Joy (Радость)
2. Sadness (Грусть)
3. Anger (Гнев)
4. Enthusiasm (Интерес)
5. Surprise (Удивление)
6. Disgust (Отвращение)
7. Fear (Страх)
8. Guilt (Вина)
9. Shame (Стыд)
```
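As a quick-start illustration (not part of the original card), inference can be sketched roughly as follows; the label order is assumed to match the list above, and the 0.5 threshold is an arbitrary choice you may want to tune:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Label order assumed to follow the emotion categories listed above
LABELS = ['neutral', 'joy', 'sadness', 'anger', 'enthusiasm',
          'surprise', 'disgust', 'fear', 'guilt', 'shame']

model_id = "Djacon/rubert-tiny2-russian-emotion-detection"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

def predict_emotions(text: str, threshold: float = 0.5) -> dict:
    inputs = tokenizer(text, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.sigmoid(logits).squeeze(0)  # multi-label: one sigmoid per class
    return {label: round(p.item(), 3) for label, p in zip(LABELS, probs) if p >= threshold}

print(predict_emotions("Сегодня такой замечательный день!"))  # example input taken from the widget above
```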
## Test results:
||Neutral|Joy|Sadness|Anger|Enthusiasm|Surprise|Disgust|Fear|Guilt|Shame|Mean|
|-|-|-|-|-|-|-|-|-|-|-|-|
|AUC|0.7319|0.8234|0.8069|0.7884|0.8493|0.8047|0.8147|0.9034|0.8528|0.7145|0.8090|
|F1 micro|0.7192|0.7951|0.8204|0.7642|0.8630|0.9032|0.9156|0.9482|0.9526|0.9606|0.8642|
|F1 macro|0.6021|0.7237|0.6548|0.6274|0.7291|0.5712|0.4780|0.8158|0.4879|0.4900|0.6180|
## Citations
```
@misc{Djacon,
author={Djacon},
year={2023},
publisher={Hugging Face},
journal={Hugging Face Hub},
}
```
|
NewstaR/CNC-7b-lora
|
NewstaR
| 2023-11-26T08:29:32Z | 7 | 0 |
peft
|
[
"peft",
"mistral",
"lora",
"instruct",
"custom code",
"text-generation",
"en",
"tl",
"dataset:NewstaR/clearNconcise",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:cc-by-sa-4.0",
"region:us"
] |
text-generation
| 2023-11-12T17:35:38Z |
---
license: cc-by-sa-4.0
datasets:
- NewstaR/clearNconcise
language:
- en
- tl
pipeline_tag: text-generation
tags:
- mistral
- lora
- instruct
- custom code
library_name: peft
inference: false
base_model: mistralai/Mistral-7B-v0.1
---
# Model Card for CNC-7b
## Model Details
- Name: CNC-7b
- Version: 1.0
- Release Date: November 13, 2023
## Intended Use
CNC-7b is a lora adapter for Mistral-7b (Instruct) intended to be clear, concise, and helpful in short text conversations. It is designed for conversational agents and assistants.
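As a rough illustration of how the adapter might be used (this sketch is not from the original card; the repository id, prompt, and generation settings are assumptions), the LoRA weights can be attached to the base model with `peft`:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-v0.1"   # base model listed in the card metadata
adapter_id = "NewstaR/CNC-7b-lora"      # this repository (assumed)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter

prompt = "How do I politely decline a meeting invitation?"
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=120)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```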
## Training Data
CNC-7b was trained on synthetic conversational data generated by Newstar using ChatGPT. The data was shaped using custom instructions to encourage clear, concise, and helpful responses.
## Evaluation Data
CNC-7b was evaluated on a test set of human-human conversations to measure whether responses were clear, concise, and on-topic.
## Ethical Considerations
- CNC-7b has limited conversational abilities and is not intended for complex conversations.
- The training data was filtered to remove harmful, unethical, or dangerous content.
- The model has no notion of facts about the real world. Any factual statements generated should not be assumed to be true.
## Caveats and Recommendations
- Only the Peft adapter parameters are released for CNC-7b. The full model is not released.
- CNC-7b has limited knowledge outside of conversational abilities. Do not use for anything requiring real world knowledge.
- Monitor CNC-7b conversations for harmful content generated, and re-train the model as needed.
|
cyuzhang/pets_rank_8
|
cyuzhang
| 2023-11-26T08:16:44Z | 1 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-11-26T06:40:18Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - cyuzhang/pets_rank_8
These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the pcuenq/oxford-pets dataset. You can find some example images in the following.




|
LarryAIDraw/kikyou-09
|
LarryAIDraw
| 2023-11-26T08:15:26Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-11-26T08:04:03Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/209077/kiryuu-kikyou-blue-archive-lora
|
AlgorithmicResearchGroup/phi-physics
|
AlgorithmicResearchGroup
| 2023-11-26T08:15:01Z | 17 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi",
"text-generation",
"arxiv",
"summarization",
"custom_code",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-11-16T16:35:42Z |
---
license: apache-2.0
language:
- en
pipeline_tag: summarization
widget:
- text: What is the peak phase of T-eV?
example_title: Question Answering
tags:
- arxiv
---
# Table of Contents
0. [TL;DR](#TL;DR)
1. [Model Details](#model-details)
2. [Usage](#usage)
3. [Uses](#uses)
4. [Citation](#citation)
# TL;DR
This is a Phi-1_5 model trained on [camel-ai/physics](https://huggingface.co/datasets/camel-ai/physics). This model is for research purposes only and ***should not be used in production settings***.
## Model Description
- **Model type:** Language model
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Related Models:** [Phi-1_5](https://huggingface.co/microsoft/phi-1_5)
# Usage
Find below some example scripts on how to use the model in `transformers`:
## Using the Pytorch model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

base_model = "ArtifactAI/phi-physics"
model = AutoModelForCausalLM.from_pretrained(base_model, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(base_model, trust_remote_code=True)

def generate(prompt):
    # Build the instruction-style prompt used during fine-tuning
    inputs = tokenizer(f'''Below is an instruction that describes a task. Write a response that appropriately completes the request If you are adding additional white spaces, stop writing".\n\n### Instruction:\n{prompt}.\n\n### Response:\n ''', return_tensors="pt", return_attention_mask=False)
    # Stream generated tokens to stdout, skipping the prompt
    streamer = TextStreamer(tokenizer, skip_prompt=True)
    _ = model.generate(**inputs, streamer=streamer, max_new_tokens=500)

generate("What are the common techniques used in identifying a new species, and how can scientists accurately categorize it within the existing taxonomy system?")
```
## Training Data
The model was trained on [camel-ai/phi-physics](https://huggingface.co/datasets/camel-ai/physics), a dataset of question/answer pairs.
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.6.2
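For reference, a roughly equivalent 4-bit quantization setup can be expressed with `transformers`' `BitsAndBytesConfig`; this is a sketch rather than the exact training script, and the base checkpoint below is assumed from the Related Models section above:
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the settings listed above: 4-bit NF4, double quantization, fp16 compute
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-1_5",              # assumed base checkpoint
    quantization_config=bnb_config,
    trust_remote_code=True,
)
```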
# Citation
```
@misc{phi-math,
title={phi-physics},
author={Matthew Kenney},
year={2023}
}
```
|
Nitin98/bloom_llm_lora
|
Nitin98
| 2023-11-26T08:14:28Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:bigscience/bloom-560m",
"base_model:adapter:bigscience/bloom-560m",
"region:us"
] | null | 2023-11-26T08:14:26Z |
---
library_name: peft
base_model: bigscience/bloom-560m
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.3.dev0
|
LarryAIDraw/irako_nai_1-11
|
LarryAIDraw
| 2023-11-26T08:14:09Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-11-26T08:00:51Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/212560/irako-kantai-collection-kancolle-or
|
Wanlin0001/Reinforce1-Pixelcopter-PLE-v0
|
Wanlin0001
| 2023-11-26T08:12:24Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-11-26T08:12:19Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce1-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 53.20 +/- 44.12
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
iamkhadke/zephyr-7b-beta_demo
|
iamkhadke
| 2023-11-26T07:47:12Z | 3 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:HuggingFaceH4/zephyr-7b-beta",
"base_model:adapter:HuggingFaceH4/zephyr-7b-beta",
"region:us"
] | null | 2023-11-26T07:47:08Z |
---
library_name: peft
base_model: HuggingFaceH4/zephyr-7b-beta
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.3.dev0
|
afrideva/llama2_xs_460M_experimental_platypus-GGUF
|
afrideva
| 2023-11-26T07:42:36Z | 6 | 0 | null |
[
"gguf",
"ggml",
"quantized",
"q2_k",
"q3_k_m",
"q4_k_m",
"q5_k_m",
"q6_k",
"q8_0",
"text-generation",
"dataset:garage-bAInd/Open-Platypus",
"dataset:Felladrin/Open-Platypus-train.csv",
"base_model:Felladrin/llama2_xs_460M_experimental_platypus",
"base_model:quantized:Felladrin/llama2_xs_460M_experimental_platypus",
"region:us"
] |
text-generation
| 2023-11-26T07:40:45Z |
---
base_model: Felladrin/llama2_xs_460M_experimental_platypus
datasets:
- garage-bAInd/Open-Platypus
- Felladrin/Open-Platypus-train.csv
inference: false
model_creator: Felladrin
model_name: llama2_xs_460M_experimental_platypus
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
widget:
- text: '### User:
How do the characteristics of our solar system compare to other known planetary
systems?
### Assistant:'
- text: '### User:
Which scientist is best known for his theory of relativity and the equation E=mc²?
### Assistant:'
- text: '### User:
Write a 5 paragraph-long story about a girl called Alice.
### Assistant:'
- text: '### User:
What chemical element, known for its ability to conduct electricity and heat,
is also the most abundant metal in Earth''s crust?
### Assistant:'
---
# Felladrin/llama2_xs_460M_experimental_platypus-GGUF
Quantized GGUF model files for [llama2_xs_460M_experimental_platypus](https://huggingface.co/Felladrin/llama2_xs_460M_experimental_platypus) from [Felladrin](https://huggingface.co/Felladrin)
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llama2_xs_460m_experimental_platypus.fp16.gguf](https://huggingface.co/afrideva/llama2_xs_460M_experimental_platypus-GGUF/resolve/main/llama2_xs_460m_experimental_platypus.fp16.gguf) | fp16 | 925.26 MB |
| [llama2_xs_460m_experimental_platypus.q2_k.gguf](https://huggingface.co/afrideva/llama2_xs_460M_experimental_platypus-GGUF/resolve/main/llama2_xs_460m_experimental_platypus.q2_k.gguf) | q2_k | 212.50 MB |
| [llama2_xs_460m_experimental_platypus.q3_k_m.gguf](https://huggingface.co/afrideva/llama2_xs_460M_experimental_platypus-GGUF/resolve/main/llama2_xs_460m_experimental_platypus.q3_k_m.gguf) | q3_k_m | 238.81 MB |
| [llama2_xs_460m_experimental_platypus.q4_k_m.gguf](https://huggingface.co/afrideva/llama2_xs_460M_experimental_platypus-GGUF/resolve/main/llama2_xs_460m_experimental_platypus.q4_k_m.gguf) | q4_k_m | 288.45 MB |
| [llama2_xs_460m_experimental_platypus.q5_k_m.gguf](https://huggingface.co/afrideva/llama2_xs_460M_experimental_platypus-GGUF/resolve/main/llama2_xs_460m_experimental_platypus.q5_k_m.gguf) | q5_k_m | 333.22 MB |
| [llama2_xs_460m_experimental_platypus.q6_k.gguf](https://huggingface.co/afrideva/llama2_xs_460M_experimental_platypus-GGUF/resolve/main/llama2_xs_460m_experimental_platypus.q6_k.gguf) | q6_k | 380.79 MB |
| [llama2_xs_460m_experimental_platypus.q8_0.gguf](https://huggingface.co/afrideva/llama2_xs_460M_experimental_platypus-GGUF/resolve/main/llama2_xs_460m_experimental_platypus.q8_0.gguf) | q8_0 | 492.57 MB |
## Original Model Card:
# ahxt's llama2_xs_460M_experimental trained on the Open-Platypus dataset
- Base model: [ahxt/llama2_xs_460M_experimental](https://huggingface.co/ahxt/llama2_xs_460M_experimental)
- Dataset: [garage-bAInd/Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus)
- Trained with [AutoTrain Advanced](https://github.com/huggingface/autotrain-advanced) using [these parameters](https://huggingface.co/Felladrin/llama2_xs_460M_experimental_platypus/blob/0ea9d942179b0d1905b28e0b0befff855720aa8d/training_params.json) and [this CSV file](https://huggingface.co/datasets/Felladrin/Open-Platypus-train.csv/blob/main/train.csv)
## Recommended Prompt Format
```
### User:
<message>
### Assistant:
```
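As a minimal usage sketch (not part of the original card), a quantized file from the table above can be run with `llama-cpp-python`; the file name, context size, and sampling settings below are placeholders:
```python
from llama_cpp import Llama

# Assumes the q4_k_m file from the table above has been downloaded locally
llm = Llama(model_path="llama2_xs_460m_experimental_platypus.q4_k_m.gguf", n_ctx=1024)

prompt = "### User:\nWrite a short story about a girl called Alice.\n\n### Assistant:\n"
out = llm(prompt, max_tokens=200, stop=["### User:"])
print(out["choices"][0]["text"])
```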
|
pankajcipher/repo1
|
pankajcipher
| 2023-11-26T07:42:32Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-11-26T07:38:16Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 292.35 +/- 18.06
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
minhhhvu/AIprj
|
minhhhvu
| 2023-11-26T07:38:45Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-11-26T07:37:58Z |
# Spotify-Recommendation-System
Please visit the following link to access the demo version: [Spotify-Recommendation-System](https://longliveruby-spotify-recommendation-system.hf.space/)
https://user-images.githubusercontent.com/107134115/201241072-06681109-72ad-4416-b5f0-35322646dc1e.mp4
## Description
The goal of this project is to create a recommendation system that would allow users to discover music based on a given playlist or song that they already enjoy. This project begins with data collection and a self-growing dataset to ensure that the model will work well in the future and continues through model deployment.
## Data
For this project, I'm using the Million Playlist Dataset, which, as its name implies, consists of one million playlists.
Each playlist contains a number of songs, and some metadata is included as well, such as the name of the playlist, duration, number of songs, number of artists, etc.
It is created by sampling playlists from the billions of playlists that Spotify users have created over the years.
Playlists that meet the following criteria were selected at random:
- Created by a user that resides in the United States and is at least 13 years old
- Was a public playlist at the time the MPD was generated
- Contains at least 5 tracks
- Contains no more than 250 tracks
- Contains at least 3 unique artists
- Contains at least 2 unique albums
- Has no local tracks (local tracks are non-Spotify tracks that a user has on their local device)
- Has at least one follower (not including the creator)
- Was created after January 1, 2010 and before December 1, 2017
- Does not have an offensive title
- Does not have an adult-oriented title if the playlist was created by a user under 18 years of age
Check out the dataset [here](https://www.aicrowd.com/challenges/spotify-million-playlist-dataset-challenge)
## Data extraction
The first step will be to obtain keys to use. We'll need a [Spotify for developers](https://developer.spotify.com/) account for this. This is equivalent to a Spotify account and does not necessitate Spotify Premium. Go to the dashboard and select "create an app" from there. We now have access to the public and private keys required to use the API.
Now that we have an app, we can get a client ID and a client secret for this app. Both of these will be required to authenticate with the Spotify web API for our application, and can be thought of as a kind of username and password for the application. It is best practice not to share either of these, but especially don’t share the client secret key. To prevent this, we can keep it in a separate file, which, if you’re using Git for version control, should be Gitignored.
Spotify credentials should be stored in a `Spotify.yaml` file with the first line as the **credential id** and the second line as the **secret key**:
```yaml
Client_id : ************************
client_secret : ************************
```
To access these credentials, use the following code:
```python
import yaml
import spotipy
from spotipy.oauth2 import SpotifyClientCredentials

stream = open("Spotify/Spotify.yaml")
spotify_details = yaml.safe_load(stream)
auth_manager = SpotifyClientCredentials(client_id=spotify_details['Client_id'],
                                        client_secret=spotify_details['client_secret'])
sp = spotipy.client.Spotify(auth_manager=auth_manager)
```
## Code
### Reading1M_feature_extraction.ipynb
- This notebook reads the main.json files containing the playlists in order to train the model and generate recommendations.
- The loop_slices() function will go through as many slices as desired to extract the unique track URIs from the playlists for the content-based recommendation system.
- Using the Spotify API for Feature Extraction **(Audio Features, Track Release Date, Track Popularity, Artist Popularity, Artist Genres)** and Saving Results to a CSV Files and Errors to a Log File
```python
import datetime
import time

import pandas as pd
from tqdm import tqdm

f = open('data/audio_features.csv', 'a')
e = 0  # error counter
for i in tqdm(range(0, len(t_uri), 100)):
    try:
        # The audio-features endpoint accepts up to 100 track URIs per call
        track_feature = sp.audio_features(t_uri[i:i+100])
        track_df = pd.DataFrame(track_feature)
        csv_data = track_df.to_csv(header=False, index=False)
        f.write(csv_data)
    except Exception as error:
        e += 1
        r = open("audio_features_log.txt", "a")
        r.write(datetime.datetime.now().strftime("%d.%b %Y %H:%M:%S") + ": " + str(error) + '\n')
        r.close()
        time.sleep(3)
        continue
r = open("audio_features_log.txt", "a")
r.write(datetime.datetime.now().strftime("%d.%b %Y %H:%M:%S") + " _________________________ " + "Total Number Of Errors : " + str(e) + " _________________________ " + '\n')
r.close()
f.close()
```
### Preprocessing.ipynb
- This notebook reads the extracted features and merges them into one dataframe.
- Handling missing extraction features and dropping duplicated and irrelevant columns
- Create five point buckets for track and artist popularity and 50 point buckets for the track release date.
```python
df['Track_release_date'] = df['Track_release_date'].apply(lambda x: int(x/50))
```
If I'm listening to music from the 1950s, I'd like the model to recommend music from the same era.
### Modeling.ipynb
- Repeating the feature extraction and preprocessing steps for the user's playlist (input)
- If a track from the user's playlist is missing from the dataset, it will be added automatically.
- TfidfVectorizer was used for the Artist Genres (TF-IDF automatically assigns weights to metadata based on how frequently they appear).
<img width="1017" alt="tfidf_4" src="https://user-images.githubusercontent.com/107134115/201203710-c1a48e8b-1365-4cc3-bba4-58a1102bafde.png">
- I was first using OneHotEncoder for **Track_release_date, Track_pop, Artist_pop** but I found no difference in the final result other than high memory usage.
- Converting a user playlist to a single vector
- Cosine similarity is used to compare playlist vectors to song vectors to generate recommendations (a minimal sketch of this step is shown after this list)

- I decided to go with three models.
**Model 1** which gives the genera more weight than the audio features
**Model 2** which gives equal weight to all features (as a result, playlist languages and genres are ignored)
**Spotify Model**, which is available through the Spotify API
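Schematically, the similarity-and-ranking step described above can be sketched as follows (column names and the feature layout are placeholders, not the exact notebook code):
```python
import numpy as np
import pandas as pd
from sklearn.metrics.pairwise import cosine_similarity

def recommend(playlist_vector: np.ndarray, songs: pd.DataFrame, n: int = 20) -> pd.DataFrame:
    """Rank catalogue songs by cosine similarity to the summarized playlist vector."""
    features = songs.drop(columns=["track_uri"]).values   # "track_uri" is a placeholder id column
    sims = cosine_similarity(playlist_vector.reshape(1, -1), features)[0]
    return songs.assign(similarity=sims).nlargest(n, "similarity")
```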
### Deployment
Please visit the following link to access the app's final version: https://huggingface.co/spaces/Longliveruby/Spotify-Recommendation-System
The website can be accessed and tested out there. Due to file-size and RAM limits, I decided to go with
[huggingface](https://huggingface.co/) because the free version is not severely limited.
You can test the app on localhost by cloning the repository, changing into the folder, and running the following commands:
```bash
cd Streamlit
streamlit run main.py
```
Installing dependencies:
```bash
pip install -r requirements.txt
```
### Reference
- https://medium.com/analytics-vidhya/music-recommender-system-part-2-ff4c3f54cba3
- https://github.com/madhavthaker/spotify-recommendation-system
- https://spotipy.readthedocs.io/en/master/
|
AlgorithmicResearchGroup/phi-metamath
|
AlgorithmicResearchGroup
| 2023-11-26T07:28:12Z | 12 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi",
"text-generation",
"arxiv",
"summarization",
"custom_code",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-11-23T18:02:44Z |
---
license: apache-2.0
language:
- en
pipeline_tag: summarization
widget:
- text: What is the peak phase of T-eV?
example_title: Question Answering
tags:
- arxiv
---
# Table of Contents
0. [TL;DR](#TL;DR)
1. [Model Details](#model-details)
2. [Usage](#usage)
3. [Uses](#uses)
4. [Citation](#citation)
# TL;DR
This is a Phi-1_5 model trained on [meta-math/MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA). This model is for research purposes only and ***should not be used in production settings***.
## Model Description
- **Model type:** Language model
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Related Models:** [Phi-1_5](https://huggingface.co/microsoft/phi-1_5)
# Usage
Find below some example scripts on how to use the model in `transformers`:
## Using the Pytorch model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

base_model = "ArtifactAI/phi-metamath"
model = AutoModelForCausalLM.from_pretrained(base_model, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(base_model, trust_remote_code=True)

def generate(prompt):
    # Build the instruction-style prompt used during fine-tuning
    inputs = tokenizer(f'''Below is an instruction that describes a task. Write a response that appropriately completes the request If you are adding additional white spaces, stop writing".\n\n### Instruction:\n{prompt}.\n\n### Response:\n ''', return_tensors="pt", return_attention_mask=False)
    # Stream generated tokens to stdout, skipping the prompt
    streamer = TextStreamer(tokenizer, skip_prompt=True)
    _ = model.generate(**inputs, streamer=streamer, max_new_tokens=500)

generate("What are the common techniques used in identifying a new species, and how can scientists accurately categorize it within the existing taxonomy system?")
```
## Training Data
The model was trained on [meta-math/MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA), a dataset of question/answer pairs.
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.6.2
# Citation
```
@misc{phi-metamath,
title={phi-metamath},
author={Matthew Kenney},
year={2023}
}
```
|
hhhong/ppo-LunarLander-v2_5
|
hhhong
| 2023-11-26T07:18:47Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-11-26T07:18:18Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 169.79 +/- 84.01
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
hhhong/ppo-LunarLander-v2_4
|
hhhong
| 2023-11-26T07:16:03Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-11-26T07:15:30Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 129.79 +/- 151.72
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Aksid/Python_Zachet
|
Aksid
| 2023-11-26T07:15:55Z | 1 | 0 |
tf-keras
|
[
"tf-keras",
"region:us"
] | null | 2023-11-26T06:54:47Z |
We built a model and trained it on most of the digit data, so that 28×28-pixel images of digits can be passed to the model and the value of that digit is returned as output.

To build the model we used ordinary fully connected layers with different numbers of units. The relu function was used as the activation on the input and intermediate layers, and a sigmoid was chosen as the activation on the output layer.

Adam was chosen as the optimizer. The X_train array contains 60,000 images, and the y_train array contains the same number of corresponding labels. The test data X_test and y_test contain 10,000 elements each.
```
Epoch 1/5
96/96 [==============================] - 43s 429ms/step - loss: 0.1776 - binary_accuracy: 0.9385 - val_loss: 0.0580 - val_binary_accuracy: 0.9812
Epoch 2/5
96/96 [==============================] - 40s 417ms/step - loss: 0.0492 - binary_accuracy: 0.9838 - val_loss: 0.0376 - val_binary_accuracy: 0.9880
Epoch 3/5
96/96 [==============================] - 40s 419ms/step - loss: 0.0370 - binary_accuracy: 0.9881 - val_loss: 0.0347 - val_binary_accuracy: 0.9892
Epoch 4/5
96/96 [==============================] - 41s 423ms/step - loss: 0.0327 - binary_accuracy: 0.9893 - val_loss: 0.0327 - val_binary_accuracy: 0.9896
Epoch 5/5
96/96 [==============================] - 41s 427ms/step - loss: 0.0295 - binary_accuracy: 0.9905 - val_loss: 0.0312 - val_binary_accuracy: 0.9903
```
Training the model for 5 epochs produced a very low loss and high accuracy.
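A minimal Keras sketch of the architecture described above might look like this (the layer sizes are assumptions; the loss is inferred from the binary_accuracy metric, and only the activations, optimizer, and metric follow the text):
```python
import tensorflow as tf

# Dense layers with relu, a sigmoid output over the 10 digit classes, Adam optimizer
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(256, activation="relu"),   # layer sizes are assumptions
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="sigmoid"),
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.BinaryAccuracy()])
# model.fit(X_train, y_train_one_hot, epochs=5, validation_data=(X_test, y_test_one_hot))
```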
|
hhhong/ppo-LunarLander-v2_2
|
hhhong
| 2023-11-26T07:12:28Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-11-26T07:12:01Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 220.16 +/- 33.40
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
hhhong/ppo-LunarLander-v2
|
hhhong
| 2023-11-26T07:10:14Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-11-26T07:09:46Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 190.19 +/- 77.30
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
AfnanTS/ENGLISH-MODEL
|
AfnanTS
| 2023-11-26T07:09:37Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-11-21T21:22:52Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: ENGLISH-MODEL
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ENGLISH-MODEL
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3026
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.9475 | 1.0 | 3350 | 3.4118 |
| 2.333 | 2.0 | 6700 | 2.5942 |
| 1.5966 | 3.0 | 10050 | 2.3026 |
### Framework versions
- Transformers 4.27.1
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.13.3
|
DebajyotyBanik/Statistical-Regression-SMT
|
DebajyotyBanik
| 2023-11-26T07:08:29Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-08-19T13:45:46Z |
The proposed framework is an advanced state-of-the-art SMT decoder. A statistical-regression-based algorithm is embedded inside the traditional SMT decoder. The proposed model outperforms the state-of-the-art technique in terms of translation accuracy and decoding time.
To use or understand it, please cite/read the following paper:
DEBAJYOTY BANIK, RAHUL PAUL, RAJKUMAR SINGH RATHORE, RUTVIJ H. JHAVERI, Improving Access to Medical Information for Multilingual Patients Using Machine Learning-based Machine Translation, Transactions on Asian and Low-Resource Language Information Processing, 2023
|
sd-concepts-library/musecat
|
sd-concepts-library
| 2023-11-26T07:05:47Z | 0 | 0 | null |
[
"base_model:stabilityai/stable-diffusion-2",
"base_model:finetune:stabilityai/stable-diffusion-2",
"license:mit",
"region:us"
] | null | 2023-11-26T07:05:45Z |
---
license: mit
base_model: stabilityai/stable-diffusion-2
---
### musecat on Stable Diffusion
This is the `<mscds>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:




|
agu18dec/mistral_b_finance_finetuned_test
|
agu18dec
| 2023-11-26T06:59:16Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:alexsherstinsky/Mistral-7B-v0.1-sharded",
"base_model:adapter:alexsherstinsky/Mistral-7B-v0.1-sharded",
"region:us"
] | null | 2023-11-26T06:59:05Z |
---
library_name: peft
base_model: alexsherstinsky/Mistral-7B-v0.1-sharded
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.2
|
tastypear/CausalLM-7B-DPO-alpha-GGUF
|
tastypear
| 2023-11-26T06:57:52Z | 6,993 | 32 |
transformers
|
[
"transformers",
"gguf",
"llama",
"llama2",
"qwen",
"text-generation",
"en",
"zh",
"dataset:JosephusCheung/GuanacoDataset",
"dataset:Open-Orca/OpenOrca",
"dataset:stingning/ultrachat",
"dataset:meta-math/MetaMathQA",
"dataset:liuhaotian/LLaVA-Instruct-150K",
"dataset:jondurbin/airoboros-3.1",
"dataset:WizardLM/WizardLM_evol_instruct_V2_196k",
"dataset:RyokoAI/ShareGPT52K",
"dataset:RyokoAI/Fandom23K",
"dataset:milashkaarshif/MoeGirlPedia_wikitext_raw_archive",
"dataset:wikipedia",
"dataset:wiki_lingua",
"dataset:fnlp/moss-003-sft-data",
"dataset:garage-bAInd/Open-Platypus",
"dataset:LDJnr/Puffin",
"dataset:openbmb/llava_zh",
"dataset:BAAI/COIG",
"dataset:TigerResearch/tigerbot-zhihu-zh-10k",
"dataset:liwu/MNBVC",
"dataset:teknium/openhermes",
"base_model:CausalLM/7B-DPO-alpha",
"base_model:quantized:CausalLM/7B-DPO-alpha",
"license:wtfpl",
"region:us"
] |
text-generation
| 2023-11-19T15:36:16Z |
---
base_model: CausalLM/7B-DPO-alpha
datasets:
- JosephusCheung/GuanacoDataset
- Open-Orca/OpenOrca
- stingning/ultrachat
- meta-math/MetaMathQA
- liuhaotian/LLaVA-Instruct-150K
- jondurbin/airoboros-3.1
- WizardLM/WizardLM_evol_instruct_V2_196k
- RyokoAI/ShareGPT52K
- RyokoAI/Fandom23K
- milashkaarshif/MoeGirlPedia_wikitext_raw_archive
- wikipedia
- wiki_lingua
- fnlp/moss-003-sft-data
- garage-bAInd/Open-Platypus
- LDJnr/Puffin
- openbmb/llava_zh
- BAAI/COIG
- TigerResearch/tigerbot-zhihu-zh-10k
- liwu/MNBVC
- teknium/openhermes
inference: false
language:
- en
- zh
license: wtfpl
model_creator: CausalLM
model_name: CausalLM 7B-DPO-alpha
model_type: llama
pipeline_tag: text-generation
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: tastypear
tags:
- llama
- llama2
- qwen
---
<!-- header start -->
I made a quantized version of this model by referring to TheBloke's publishing format and based on the recommendation of TheBloke/CausalLM-7B-GGUF.
我参考 TheBloke 的发布格式,并根据 TheBloke/CausalLM-7B-GGUF 的推荐,制作了这个模型的量化版本。
---
<!-- header end -->
<!-- markdownlint-disable MD041 -->
# CausalLM 7B-DPO-alpha - GGUF
- Model creator: [CausalLM](https://huggingface.co/CausalLM)
- Original model: [CausalLM 7B-DPO-alpha](https://huggingface.co/CausalLM/7B-DPO-alpha)
<!-- description start -->
## Description
This repo contains GGUF format model files for [CausalLM's CausalLM 7B-DPO-alpha](https://huggingface.co/CausalLM/7B-DPO-alpha).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `wtfpl`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [CausalLM's CausalLM 7B-DPO-alpha](https://huggingface.co/CausalLM/7B-DPO-alpha).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size |
| ---- | ---- | ---- | ---- |
| [causallm_7b.Q4_K_M.gguf](https://huggingface.co/tastypear/CausalLM-7B-DPO-alpha-GGUF/blob/main/causallm_7b-dpo-alpha.Q4_K_M.gguf) | Q4_K_M | 4 | 4.77 GB|
| [causallm_7b.Q5_K_S.gguf](https://huggingface.co/tastypear/CausalLM-7B-DPO-alpha-GGUF/blob/main/causallm_7b-dpo-alpha.Q5_K_S.gguf) | Q5_K_S | 5 | 5.40 GB|
| [causallm_7b.Q5_K_M.gguf](https://huggingface.co/tastypear/CausalLM-7B-DPO-alpha-GGUF/blob/main/causallm_7b-dpo-alpha.Q5_K_M.gguf) | Q5_K_M | 5 | 5.53 GB|
<!-- README_GGUF.md-provided-files end -->
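As a hedged, minimal example (not from the original card), one of the files above could be run locally with `llama-cpp-python`, using the ChatML template documented earlier; the context size and sampling settings are illustrative only:

```python
from llama_cpp import Llama

# Assumes the Q4_K_M file has already been downloaded to the current directory.
llm = Llama(
    model_path="./causallm_7b-dpo-alpha.Q4_K_M.gguf",
    n_ctx=2048,              # illustrative context window
    chat_format="chatml",    # matches the prompt template above
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what GGUF is in one sentence."},
    ],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```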
<!-- footer start -->
<!-- original-model-card start -->
# Original model card: CausalLM's CausalLM 7B-DPO-alpha
For details, please refer to the version without DPO training: [CausalLM/7B](https://huggingface.co/CausalLM/7B).
| Model | MT-Bench |
| ------------------------- | ------------ |
| GPT-4 | 8.99 |
| GPT-3.5-Turbo | 7.94 |
| | |
| Zephyr-7b-β (Overfitting) | 7.34 |
| Zephyr-7b-α | 6.88 |
| | |
| **CausalLM/14B-DPO-α** | **7.618868** |
| **CausalLM/7B-DPO-α** | **7.038125** |
It should be noted that this is not a version that continues training on CausalLM/14B & 7B, but rather an optimized version that has undergone DPO training concurrently on a previous training branch, and some detailed parameters may have changed. You will still need to download the full model.
The beta branch will soon be released, employing some aggressive approaches that might be detrimental in certain tasks, in order to achieve better alignment with human preferences, aiming to meet or exceed the GPT-3.5 benchmarks. Stay tuned.
Disclaimer: Please note that the model was trained on unfiltered internet data. Since we do not have the capacity to vet all of it, there may be a substantial amount of objectionable content, pornography, violence, and offensive language present that we are unable to remove. Therefore, you will still need to complete your own checks on the model's safety and filter keywords in the output. Due to computational resource constraints, we are presently unable to implement RLHF for the model's ethics and safety, nor training on SFT samples that refuse to answer certain questions for restrictive fine-tuning.
更多详情,请参见未经DPO训练的版本:[CausalLM/14B](https://huggingface.co/CausalLM/14B)
需要注意的是,这并不是在 CausalLM/14B & 7B 上继续训练的版本,而是在之前的训练分支上同时进行了 DPO 训练的优化版本,一些细节参数可能发生了变化。 您仍然需要下载完整模型。
很快将会发布beta分支,采用了一些可能不利于某些任务的激进方法,以实现更好地符合人类偏好以接近和超过GPT-3.5基准。敬请期待。
免责声明:请注意,模型是在未经过滤的互联网数据上进行训练的。由于我们无法审核所有数据,可能会出现大量不良内容、色情、暴力和冒犯性语言,我们无法删除这些内容。因此,您仍然需要对模型的安全性进行自己的检查,并对输出中的关键词进行过滤。由于计算资源的限制,我们目前无法为模型的伦理和安全实施RLHF,也无法对拒绝回答某些问题的SFT样本进行训练以进行限制性微调。
<!-- original-model-card end -->
|
Yntec/AnalogMadness4
|
Yntec
| 2023-11-26T06:53:11Z | 1,712 | 3 |
diffusers
|
[
"diffusers",
"safetensors",
"Character",
"Photorealistic",
"Sexy",
"CornmeisterNL",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-11-26T06:20:00Z |
---
language:
- en
license: creativeml-openrail-m
tags:
- Character
- Photorealistic
- Sexy
- CornmeisterNL
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
# Analog Madness 4.0
https://civitai.com/models/8030?modelVersionId=56498
Sample and prompt:

sitting Pretty Cute Girl, Detailed Eyes, holding coins, beautiful detailed slot machine, gorgeous detailed hair, pants, Magazine ad, iconic, 1943, from the movie, sharp focus. visible brushstrokes by ROSSDRAWS and Clay Mann
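A minimal diffusers sketch for trying the checkpoint (assuming a CUDA GPU; the step count and guidance scale are illustrative, not the settings used for the sample above):

```python
import torch
from diffusers import StableDiffusionPipeline

pipeline = StableDiffusionPipeline.from_pretrained(
    "Yntec/AnalogMadness4", torch_dtype=torch.float16
).to("cuda")

prompt = "sitting Pretty Cute Girl, Detailed Eyes, holding coins, beautiful detailed slot machine"
image = pipeline(prompt, num_inference_steps=30, guidance_scale=7.0).images[0]
image.save("analog_madness_sample.png")
```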
|
melaris/isabella2ai
|
melaris
| 2023-11-26T06:34:30Z | 1 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-11-26T06:30:05Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Isabella2Ai Dreambooth model trained by melaris with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
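Alternatively, a hedged diffusers sketch (the instance token is assumed to match the concept name; check the training notebook for the exact token):

```python
import torch
from diffusers import StableDiffusionPipeline

pipeline = StableDiffusionPipeline.from_pretrained(
    "melaris/isabella2ai", torch_dtype=torch.float16
).to("cuda")

# "isabella2ai" is assumed to be the DreamBooth instance token.
image = pipeline("a portrait photo of isabella2ai, natural light").images[0]
image.save("isabella2ai_sample.png")
```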
Sample pictures of this concept:
|
DoanKhoi/sbert-phobert-base
|
DoanKhoi
| 2023-11-26T06:30:47Z | 5 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-11-26T06:29:36Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# sbert-phobert-base
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('DoanKhoi/sbert-phobert-base')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('DoanKhoi/sbert-phobert-base')
model = AutoModel.from_pretrained('DoanKhoi/sbert-phobert-base')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sbert-phobert-base)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 247 with parameters:
```
{'batch_size': 20, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
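Putting those parameters together, a hedged reconstruction of the training call might look like this (the starting checkpoint and the training pairs are assumptions; the actual 247-batch dataset is not published here):

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("vinai/phobert-base")  # assumed starting checkpoint

# Toy similarity pairs standing in for the real training data.
train_examples = [
    InputExample(texts=["This is an example sentence", "This sentence is an example"], label=0.9),
    InputExample(texts=["This is an example sentence", "Something completely unrelated"], label=0.1),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=20)
train_loss = losses.CosineSimilarityLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=10,
    warmup_steps=10000,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
)
```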
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
Oillim/MiniLM-L6-v2
|
Oillim
| 2023-11-26T06:22:39Z | 5 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-11-26T06:08:51Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Oillim/MiniLM-L6-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=MiniLM-L6-v2)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 180 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 50,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
anon0616/wmt21-comet-qe-mqm
|
anon0616
| 2023-11-26T06:12:33Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2023-11-26T06:07:22Z |
---
license: apache-2.0
metrics:
- comet
---
Creator: [Unbabel](https://unbabel.github.io/COMET/html/index.html)
This repository was created to enable direct use of the wmt21-comet-qe-mqm model from Python via the Hugging Face Hub.
Code example:
```python
# Install dependencies first (shell commands, not Python):
#   pip install --upgrade pip   # ensures that pip is current
#   pip install unbabel-comet
from comet import download_model, load_from_checkpoint
model_path = download_model("anon0616/wmt21-comet-qe-mqm")
model = load_from_checkpoint(model_path)
data = [
{
"src": "Dem Feuer konnte Einhalt geboten werden",
"mt": "The fire could be stopped",
"ref": "They were able to control the fire."
},
{
"src": "Schulen und Kindergärten wurden eröffnet.",
"mt": "Schools and kindergartens were open",
"ref": "Schools and kindergartens opened"
}
]
model_output = model.predict(data, batch_size=8, gpus=1)
print (model_output)
```
|
mikenekofan1/YEET
|
mikenekofan1
| 2023-11-26T05:48:10Z | 0 | 0 | null |
[
"anime",
"license:unknown",
"region:us"
] | null | 2023-11-16T17:34:49Z |
---
license: unknown
tags:
- anime
---
A Stable Diffusion 1.5 model.
A mix for generating kawaii stuff and buildings.
Sample generations with prompts using Yeet V1:
<img src="https://cdn-uploads.huggingface.co/production/uploads/63794d21e411dfdbd1e45c55/NpHf8uNeXWSzEE2sSeFPs.png" width="768">
```
(masterpiece: 1.4), (best quality: 1.1), very detailed, detailed background, chromatic aberration, 1girl, solo, white hair, red eyes, fox ears, detailed eyes, hood up, head shot, (baseball cap:0.9), (loli), embroidery, Chinese clothing, embroidery, painting frame, gold dragon,
Negative prompt: (worst quality:1.3, low quality:1.1), blurry, lowres, (thick thighs:1.3, fat:1.3, overweight:1.3), fox, (sidelocks:1.3)
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 3010529194, Size: 512x512, Model hash: 0082ea4136, Model: YEET v1, VAE hash: 2f11c4a99d, VAE: kl-f8-anime2 - Copy.ckpt, Denoising strength: 0.7, Clip skip: 2, ADetailer model: face_yolov8n.pt, ADetailer prompt: "red eyes, loli, red eyeliner, red eyeshadow, fangs, smile, open mouth, teeth,", ADetailer confidence: 0.3, ADetailer dilate/erode: 4, ADetailer mask blur: 4, ADetailer denoising strength: 0.4, ADetailer inpaint only masked: True, ADetailer inpaint padding: 32, ADetailer ControlNet model: control_v11p_sd15_openpose [cab727d4], ADetailer version: 23.10.1, Hires upscale: 2, Hires steps: 20, Hires upscaler: Latent, Discard penultimate sigma: True, ControlNet 0: "Module: openpose_full, Model: control_v11p_sd15_openpose [cab727d4], Weight: 1.0, Resize Mode: ResizeMode.INNER_FIT, Low Vram: False, Guidance Start: 0.0, Guidance End: 1.0, Pixel Perfect: True, Control Mode: ControlMode.BALANCED", Version: 1.6.0
```
<img src="https://cdn-uploads.huggingface.co/production/uploads/63794d21e411dfdbd1e45c55/T1JQx4BbV6SsfIdp9vUbx.png" width="768">
```
(masterpiece:1.1, best quality), detailed background, intricate clothing,
BREAK
1girl, solo, stoic, calm, subtle blush, black hair, green eyes, medium length hair, medium hair, cat ears, animal ear fluff, (tactical maid:1.3), kevlar vest, tactical gear, full body, ((very skinny legs:1.2)), black and white sneakers, flat chest, ahoge, standing,
BREAK
canned food, food in jar, jar on shelf, glass jars, wooden shelves, window, blue sky, tree
Negative prompt: (worst quality, low quality), (tail, fox tail, (fat:1.3, thick:1.3, skindentation:1.3, fat thighs, thick thighs:1.1, chubby)), breasts, cleavage, thighband
Steps: 40, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 1427039822, Size: 512x768, Model hash: 0082ea4136, Model: YEET v1, Denoising strength: 0.5, Clip skip: 2, ADetailer model: hand_yolov8n.pt, ADetailer confidence: 0.3, ADetailer dilate/erode: 32, ADetailer mask blur: 4, ADetailer denoising strength: 0.4, ADetailer inpaint only masked: True, ADetailer inpaint padding: 32, ADetailer ControlNet model: control_v11p_sd15_inpaint [ebff9138], ADetailer ControlNet module: inpaint_global_harmonious, ADetailer version: 23.8.0, Hires upscale: 2, Hires upscaler: Latent (nearest-exact), Discard penultimate sigma: True, ControlNet 0: "preprocessor: inpaint_global_harmonious, model: control_v11p_sd15_inpaint [ebff9138], weight: 1.0, starting/ending: (0.0, 1.0), resize mode: ResizeMode.INNER_FIT, pixel perfect: True, control mode: ControlMode.BALANCED, preprocessor params: (-1, -1, -1)", Noise multiplier: 0.86
```
|
hkivancoral/hushem_5x_deit_base_sgd_001_fold5
|
hkivancoral
| 2023-11-26T05:46:46Z | 8 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-base-patch16-224",
"base_model:finetune:facebook/deit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-11-26T05:14:34Z |
---
license: apache-2.0
base_model: facebook/deit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_5x_deit_base_sgd_001_fold5
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.5853658536585366
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_5x_deit_base_sgd_001_fold5
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0680
- Accuracy: 0.5854
## Model description
More information needed
## Intended uses & limitations
More information needed
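As a minimal, hedged inference sketch (the expected input domain follows the imagefolder dataset used for fine-tuning; the image path below is a placeholder):

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="hkivancoral/hushem_5x_deit_base_sgd_001_fold5",
)
print(classifier("example_image.png"))  # placeholder path to a local image
```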
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4163 | 1.0 | 28 | 1.3604 | 0.1951 |
| 1.3947 | 2.0 | 56 | 1.3473 | 0.2195 |
| 1.3604 | 3.0 | 84 | 1.3349 | 0.2683 |
| 1.3392 | 4.0 | 112 | 1.3241 | 0.2927 |
| 1.3444 | 5.0 | 140 | 1.3143 | 0.3171 |
| 1.3238 | 6.0 | 168 | 1.3050 | 0.3415 |
| 1.3103 | 7.0 | 196 | 1.2955 | 0.3659 |
| 1.2905 | 8.0 | 224 | 1.2862 | 0.3902 |
| 1.2713 | 9.0 | 252 | 1.2769 | 0.4146 |
| 1.2521 | 10.0 | 280 | 1.2674 | 0.4390 |
| 1.2419 | 11.0 | 308 | 1.2580 | 0.4390 |
| 1.2274 | 12.0 | 336 | 1.2492 | 0.4634 |
| 1.2017 | 13.0 | 364 | 1.2403 | 0.5122 |
| 1.2089 | 14.0 | 392 | 1.2314 | 0.5366 |
| 1.1882 | 15.0 | 420 | 1.2229 | 0.5366 |
| 1.1838 | 16.0 | 448 | 1.2144 | 0.5610 |
| 1.1566 | 17.0 | 476 | 1.2059 | 0.5610 |
| 1.1584 | 18.0 | 504 | 1.1980 | 0.6098 |
| 1.1748 | 19.0 | 532 | 1.1896 | 0.6098 |
| 1.1362 | 20.0 | 560 | 1.1817 | 0.6098 |
| 1.1338 | 21.0 | 588 | 1.1741 | 0.5854 |
| 1.1033 | 22.0 | 616 | 1.1667 | 0.5854 |
| 1.0957 | 23.0 | 644 | 1.1590 | 0.5854 |
| 1.0836 | 24.0 | 672 | 1.1521 | 0.5854 |
| 1.0697 | 25.0 | 700 | 1.1452 | 0.5610 |
| 1.078 | 26.0 | 728 | 1.1389 | 0.5366 |
| 1.0636 | 27.0 | 756 | 1.1332 | 0.5610 |
| 1.0604 | 28.0 | 784 | 1.1274 | 0.5366 |
| 1.0075 | 29.0 | 812 | 1.1217 | 0.5610 |
| 1.0554 | 30.0 | 840 | 1.1163 | 0.5610 |
| 1.0238 | 31.0 | 868 | 1.1110 | 0.5610 |
| 0.9869 | 32.0 | 896 | 1.1060 | 0.5854 |
| 0.9963 | 33.0 | 924 | 1.1019 | 0.5610 |
| 1.0156 | 34.0 | 952 | 1.0973 | 0.5854 |
| 0.9827 | 35.0 | 980 | 1.0931 | 0.5854 |
| 0.9853 | 36.0 | 1008 | 1.0896 | 0.5854 |
| 0.9677 | 37.0 | 1036 | 1.0862 | 0.5854 |
| 0.9703 | 38.0 | 1064 | 1.0831 | 0.5854 |
| 0.9924 | 39.0 | 1092 | 1.0803 | 0.5854 |
| 0.9509 | 40.0 | 1120 | 1.0778 | 0.5854 |
| 0.9744 | 41.0 | 1148 | 1.0755 | 0.5854 |
| 0.957 | 42.0 | 1176 | 1.0735 | 0.5854 |
| 0.958 | 43.0 | 1204 | 1.0718 | 0.5854 |
| 0.965 | 44.0 | 1232 | 1.0705 | 0.5854 |
| 0.9524 | 45.0 | 1260 | 1.0695 | 0.5854 |
| 0.9551 | 46.0 | 1288 | 1.0687 | 0.5854 |
| 0.9588 | 47.0 | 1316 | 1.0682 | 0.5854 |
| 0.9894 | 48.0 | 1344 | 1.0680 | 0.5854 |
| 0.9401 | 49.0 | 1372 | 1.0680 | 0.5854 |
| 0.9662 | 50.0 | 1400 | 1.0680 | 0.5854 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
cehongw/civit_jacket_output
|
cehongw
| 2023-11-26T05:35:25Z | 2 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"custom-diffusion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-11-26T03:48:58Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: photo of a <new1> jacket
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- custom-diffusion
inference: true
---
# Custom Diffusion - cehongw/civit_jacket_output
These are Custom Diffusion adaption weights for runwayml/stable-diffusion-v1-5. The weights were trained on photo of a <new1> jacket using [Custom Diffusion](https://www.cs.cmu.edu/~custom-diffusion). You can find some example images in the following.
For more details on the training, please follow [this link](https://github.com/huggingface/diffusers/blob/main/examples/custom_diffusion).
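A hedged loading sketch based on the standard Custom Diffusion inference pattern in diffusers (the weight file names below are the library defaults and are assumed, not confirmed by this repo):

```python
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the Custom Diffusion attention weights and the <new1> token embedding from this repo.
pipeline.unet.load_attn_procs(
    "cehongw/civit_jacket_output", weight_name="pytorch_custom_diffusion_weights.bin"
)
pipeline.load_textual_inversion("cehongw/civit_jacket_output", weight_name="<new1>.bin")

image = pipeline(
    "photo of a <new1> jacket",
    num_inference_steps=50,
    guidance_scale=6.0,
).images[0]
image.save("jacket.png")
```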
|
KantoRegion/test-lora-merged-hermione3-30
|
KantoRegion
| 2023-11-26T05:21:13Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-13b-hf",
"base_model:adapter:meta-llama/Llama-2-13b-hf",
"region:us"
] | null | 2023-11-26T05:21:11Z |
---
library_name: peft
base_model: meta-llama/Llama-2-13b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.2
|
KantoRegion/test-lora-merged-hermione3-20
|
KantoRegion
| 2023-11-26T05:20:06Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-13b-hf",
"base_model:adapter:meta-llama/Llama-2-13b-hf",
"region:us"
] | null | 2023-11-26T05:20:01Z |
---
library_name: peft
base_model: meta-llama/Llama-2-13b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.2
|
owanr/ghc-roberta-base-intra-frequency-model_annots-cross-ent-batch-size
|
owanr
| 2023-11-26T05:15:07Z | 0 | 0 | null |
[
"pytorch",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2023-11-25T23:20:59Z |
---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
model-index:
- name: ghc-roberta-base-intra-frequency-model_annots-cross-ent-batch-size
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ghc-roberta-base-intra-frequency-model_annots-cross-ent-batch-size
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.34.0
- Pytorch 2.1.0+cu121
- Datasets 2.6.1
- Tokenizers 0.14.1
|
etri-xainlp/llama2-ko-13b-instruct-v1.1
|
etri-xainlp
| 2023-11-26T04:42:21Z | 60 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-24T01:33:57Z |
---
license: apache-2.0
---
# llama2-ko-13b-instruct-v1.1
This model is a fine-tuned version of [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf) on an instruction-following dataset (109,974 examples).
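A minimal usage sketch (the chat/instruction template is not documented in this card, so the plain prompt below is an assumption):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "etri-xainlp/llama2-ko-13b-instruct-v1.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

prompt = "다음 질문에 답하세요: 한국의 수도는 어디인가요?"  # plain instruction prompt; exact template unknown
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```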
|
lawls/SpaceInvadersNoFrameskip-v4-DQN
|
lawls
| 2023-11-26T04:33:17Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-11-20T19:53:21Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 707.50 +/- 242.81
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga lawls -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga lawls -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga lawls
```
## Hyperparameters
```python
OrderedDict([('batch_size', 64),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 8),
('gradient_steps', 2),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_envs', 32),
('n_timesteps', 10000000),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 10000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
joshmazen/whisper-small-dv
|
joshmazen
| 2023-11-26T04:29:33Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-11-03T01:46:15Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-small-dv
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
config: en-US
split: train[450:]
args: en-US
metrics:
- name: Wer
type: wer
value: 0.3417945690672963
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-dv
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6498
- Wer Ortho: 0.3461
- Wer: 0.3418
## Model description
More information needed
## Intended uses & limitations
More information needed
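As a hedged usage sketch (the checkpoint was tuned on English MINDS-14 banking queries, so short en-US recordings are the intended input; the audio path is a placeholder):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="joshmazen/whisper-small-dv",
)
print(asr("example_recording.wav"))  # placeholder path to a local audio file
```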
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 0.0007 | 17.86 | 500 | 0.6498 | 0.3461 | 0.3418 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
aicam2/falcon-7b-sample
|
aicam2
| 2023-11-26T04:29:30Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"generated_from_trainer",
"dataset:generator",
"base_model:ybelkada/falcon-7b-sharded-bf16",
"base_model:finetune:ybelkada/falcon-7b-sharded-bf16",
"region:us"
] | null | 2023-11-26T04:06:32Z |
---
base_model: ybelkada/falcon-7b-sharded-bf16
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: falcon-7b-sample
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# falcon-7b-sample
This model is a fine-tuned version of [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 320
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
hkivancoral/hushem_5x_deit_base_sgd_001_fold2
|
hkivancoral
| 2023-11-26T04:08:35Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-base-patch16-224",
"base_model:finetune:facebook/deit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-11-26T03:36:27Z |
---
license: apache-2.0
base_model: facebook/deit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_5x_deit_base_sgd_001_fold2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.4888888888888889
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_5x_deit_base_sgd_001_fold2
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2313
- Accuracy: 0.4889
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4159 | 1.0 | 27 | 1.4534 | 0.1333 |
| 1.3715 | 2.0 | 54 | 1.4329 | 0.1333 |
| 1.349 | 3.0 | 81 | 1.4161 | 0.1556 |
| 1.3491 | 4.0 | 108 | 1.4025 | 0.1556 |
| 1.3037 | 5.0 | 135 | 1.3915 | 0.2 |
| 1.2977 | 6.0 | 162 | 1.3813 | 0.2444 |
| 1.2881 | 7.0 | 189 | 1.3728 | 0.2889 |
| 1.2735 | 8.0 | 216 | 1.3643 | 0.3333 |
| 1.2461 | 9.0 | 243 | 1.3568 | 0.3556 |
| 1.2282 | 10.0 | 270 | 1.3495 | 0.3556 |
| 1.2197 | 11.0 | 297 | 1.3426 | 0.3556 |
| 1.1815 | 12.0 | 324 | 1.3361 | 0.3778 |
| 1.1874 | 13.0 | 351 | 1.3296 | 0.3778 |
| 1.1512 | 14.0 | 378 | 1.3234 | 0.3778 |
| 1.169 | 15.0 | 405 | 1.3177 | 0.4 |
| 1.1635 | 16.0 | 432 | 1.3122 | 0.4 |
| 1.1212 | 17.0 | 459 | 1.3068 | 0.4 |
| 1.1132 | 18.0 | 486 | 1.3013 | 0.4 |
| 1.0934 | 19.0 | 513 | 1.2960 | 0.4 |
| 1.0783 | 20.0 | 540 | 1.2914 | 0.4 |
| 1.0674 | 21.0 | 567 | 1.2869 | 0.4 |
| 1.0564 | 22.0 | 594 | 1.2826 | 0.4222 |
| 1.0602 | 23.0 | 621 | 1.2784 | 0.4444 |
| 1.0292 | 24.0 | 648 | 1.2744 | 0.4667 |
| 1.0348 | 25.0 | 675 | 1.2706 | 0.4667 |
| 1.0373 | 26.0 | 702 | 1.2671 | 0.4667 |
| 1.0143 | 27.0 | 729 | 1.2638 | 0.4667 |
| 1.0044 | 28.0 | 756 | 1.2607 | 0.4667 |
| 0.9861 | 29.0 | 783 | 1.2578 | 0.4667 |
| 1.0112 | 30.0 | 810 | 1.2551 | 0.4667 |
| 0.9561 | 31.0 | 837 | 1.2525 | 0.4667 |
| 0.9839 | 32.0 | 864 | 1.2500 | 0.4667 |
| 0.9768 | 33.0 | 891 | 1.2477 | 0.4667 |
| 0.936 | 34.0 | 918 | 1.2456 | 0.4667 |
| 0.9571 | 35.0 | 945 | 1.2436 | 0.4667 |
| 0.9423 | 36.0 | 972 | 1.2418 | 0.4667 |
| 0.9413 | 37.0 | 999 | 1.2401 | 0.4667 |
| 0.9304 | 38.0 | 1026 | 1.2386 | 0.4889 |
| 0.9391 | 39.0 | 1053 | 1.2372 | 0.4889 |
| 0.9013 | 40.0 | 1080 | 1.2360 | 0.4889 |
| 0.9198 | 41.0 | 1107 | 1.2349 | 0.4889 |
| 0.9119 | 42.0 | 1134 | 1.2340 | 0.4889 |
| 0.9214 | 43.0 | 1161 | 1.2332 | 0.4889 |
| 0.8928 | 44.0 | 1188 | 1.2325 | 0.4889 |
| 0.9196 | 45.0 | 1215 | 1.2320 | 0.4889 |
| 0.906 | 46.0 | 1242 | 1.2316 | 0.4889 |
| 0.9098 | 47.0 | 1269 | 1.2314 | 0.4889 |
| 0.9113 | 48.0 | 1296 | 1.2313 | 0.4889 |
| 0.9534 | 49.0 | 1323 | 1.2313 | 0.4889 |
| 0.8999 | 50.0 | 1350 | 1.2313 | 0.4889 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
chienpham/sts-multilingual-mpnet-base-v2
|
chienpham
| 2023-11-26T04:01:52Z | 15 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-11-26T03:59:45Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# sts-multilingual-mpnet-base-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('chienpham/sts-multilingual-mpnet-base-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('chienpham/sts-multilingual-mpnet-base-v2')
model = AutoModel.from_pretrained('chienpham/sts-multilingual-mpnet-base-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sts-multilingual-mpnet-base-v2)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 625 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
Felladrin/onnx-Evol-Orca-LaMini-flan-t5-small
|
Felladrin
| 2023-11-26T03:48:17Z | 4 | 0 |
transformers.js
|
[
"transformers.js",
"onnx",
"t5",
"text2text-generation",
"base_model:sachithgunasekara/Evol-Orca-LaMini-flan-t5-small",
"base_model:quantized:sachithgunasekara/Evol-Orca-LaMini-flan-t5-small",
"region:us"
] |
text2text-generation
| 2023-11-26T00:43:06Z |
---
library_name: "transformers.js"
base_model: sachith-surge/Evol-Orca-LaMini-flan-t5-small
---
INT8 ONNX version of [sachith-surge/Evol-Orca-LaMini-flan-t5-small](https://huggingface.co/sachith-surge/Evol-Orca-LaMini-flan-t5-small) to use with [Transformers.js](https://huggingface.co/docs/transformers.js).
### Example usage
```js
import { pipeline } from '@xenova/transformers';
const generator = await pipeline('text2text-generation', 'Felladrin/onnx-Evol-Orca-LaMini-flan-t5-small');
const output = await generator("How can I become more healthy?", { add_special_tokens: true, max_new_tokens: 50, repetition_penalty: 1.2});
console.log(output); // 1. Exercise: Exercise can help you stay fit and healthy. It can help you stay fit and...
```
|
m-aliabbas1/tinybert_29_med_intents
|
m-aliabbas1
| 2023-11-26T03:42:33Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:prajjwal1/bert-tiny",
"base_model:finetune:prajjwal1/bert-tiny",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-10-28T06:52:32Z |
---
license: mit
base_model: prajjwal1/bert-tiny
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: tinybert_29_med_intents
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tinybert_29_med_intents
This model is a fine-tuned version of [prajjwal1/bert-tiny](https://huggingface.co/prajjwal1/bert-tiny) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3344
- Accuracy: 0.9199
## Model description
More information needed
## Intended uses & limitations
More information needed
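As a hedged usage sketch (the 29 intent label names come from an unpublished medical-intent schema, so the returned labels are whatever the fine-tuned head defines):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="m-aliabbas1/tinybert_29_med_intents",
)
print(classifier("I need to refill my blood pressure medication."))
```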
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
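A hypothetical mapping of these hyperparameters onto `TrainingArguments` (the output directory is a placeholder; the Adam betas and epsilon listed above are the library defaults):
```python
from transformers import TrainingArguments

# Output directory is a placeholder; all other values mirror the list above.
training_args = TrainingArguments(
    output_dir="tinybert_29_med_intents",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=50,
)
```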
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| No log | 1.0 | 430 | 2.8232 | 0.4144 |
| 3.1061 | 2.0 | 860 | 2.4220 | 0.4890 |
| 2.6532 | 3.0 | 1290 | 2.0921 | 0.5967 |
| 2.28 | 4.0 | 1720 | 1.8178 | 0.6878 |
| 1.9726 | 5.0 | 2150 | 1.5987 | 0.7431 |
| 1.7268 | 6.0 | 2580 | 1.4221 | 0.7569 |
| 1.5454 | 7.0 | 3010 | 1.2797 | 0.7762 |
| 1.5454 | 8.0 | 3440 | 1.1608 | 0.7818 |
| 1.3826 | 9.0 | 3870 | 1.0589 | 0.8039 |
| 1.2445 | 10.0 | 4300 | 0.9737 | 0.8177 |
| 1.1266 | 11.0 | 4730 | 0.8920 | 0.8343 |
| 1.0328 | 12.0 | 5160 | 0.8279 | 0.8398 |
| 0.9528 | 13.0 | 5590 | 0.7646 | 0.8453 |
| 0.8538 | 14.0 | 6020 | 0.7186 | 0.8564 |
| 0.8538 | 15.0 | 6450 | 0.6733 | 0.8619 |
| 0.7987 | 16.0 | 6880 | 0.6347 | 0.8812 |
| 0.7367 | 17.0 | 7310 | 0.5945 | 0.8840 |
| 0.6931 | 18.0 | 7740 | 0.5674 | 0.8950 |
| 0.6339 | 19.0 | 8170 | 0.5429 | 0.9061 |
| 0.606 | 20.0 | 8600 | 0.5132 | 0.9033 |
| 0.5647 | 21.0 | 9030 | 0.4991 | 0.9061 |
| 0.5647 | 22.0 | 9460 | 0.4709 | 0.9033 |
| 0.5375 | 23.0 | 9890 | 0.4642 | 0.9116 |
| 0.4961 | 24.0 | 10320 | 0.4421 | 0.9116 |
| 0.4695 | 25.0 | 10750 | 0.4390 | 0.9088 |
| 0.4499 | 26.0 | 11180 | 0.4126 | 0.9088 |
| 0.4315 | 27.0 | 11610 | 0.4149 | 0.9088 |
| 0.4005 | 28.0 | 12040 | 0.4036 | 0.9116 |
| 0.4005 | 29.0 | 12470 | 0.3938 | 0.9033 |
| 0.3929 | 30.0 | 12900 | 0.3846 | 0.9061 |
| 0.3707 | 31.0 | 13330 | 0.3856 | 0.9116 |
| 0.369 | 32.0 | 13760 | 0.3727 | 0.9088 |
| 0.3517 | 33.0 | 14190 | 0.3739 | 0.9088 |
| 0.3355 | 34.0 | 14620 | 0.3604 | 0.9088 |
| 0.3226 | 35.0 | 15050 | 0.3518 | 0.9144 |
| 0.3226 | 36.0 | 15480 | 0.3570 | 0.9116 |
| 0.3197 | 37.0 | 15910 | 0.3502 | 0.9144 |
| 0.3038 | 38.0 | 16340 | 0.3463 | 0.9144 |
| 0.3038 | 39.0 | 16770 | 0.3448 | 0.9116 |
| 0.2918 | 40.0 | 17200 | 0.3448 | 0.9144 |
| 0.2937 | 41.0 | 17630 | 0.3460 | 0.9144 |
| 0.2845 | 42.0 | 18060 | 0.3414 | 0.9199 |
| 0.2845 | 43.0 | 18490 | 0.3412 | 0.9199 |
| 0.2785 | 44.0 | 18920 | 0.3401 | 0.9227 |
| 0.2781 | 45.0 | 19350 | 0.3372 | 0.9199 |
| 0.2665 | 46.0 | 19780 | 0.3364 | 0.9199 |
| 0.2722 | 47.0 | 20210 | 0.3352 | 0.9199 |
| 0.2683 | 48.0 | 20640 | 0.3359 | 0.9199 |
| 0.267 | 49.0 | 21070 | 0.3345 | 0.9199 |
| 0.2641 | 50.0 | 21500 | 0.3344 | 0.9199 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
hkivancoral/hushem_5x_deit_base_sgd_001_fold1
|
hkivancoral
| 2023-11-26T03:35:46Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-base-patch16-224",
"base_model:finetune:facebook/deit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-11-26T03:03:59Z |
---
license: apache-2.0
base_model: facebook/deit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_5x_deit_base_sgd_001_fold1
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.4222222222222222
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_5x_deit_base_sgd_001_fold1
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2935
- Accuracy: 0.4222
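A minimal inference sketch with the image-classification pipeline; the checkpoint id comes from this repository, the image path is a placeholder, and the class labels are resolved from the model config.
```python
from transformers import pipeline

# The image path is a placeholder; labels come from the fine-tuned model's config.
classifier = pipeline(
    "image-classification",
    model="hkivancoral/hushem_5x_deit_base_sgd_001_fold1",
)
print(classifier("example_image.png"))
```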
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3985 | 1.0 | 27 | 1.4731 | 0.2444 |
| 1.3797 | 2.0 | 54 | 1.4540 | 0.2667 |
| 1.3671 | 3.0 | 81 | 1.4399 | 0.3333 |
| 1.3529 | 4.0 | 108 | 1.4301 | 0.3333 |
| 1.3075 | 5.0 | 135 | 1.4206 | 0.3778 |
| 1.3006 | 6.0 | 162 | 1.4113 | 0.3778 |
| 1.2955 | 7.0 | 189 | 1.4036 | 0.3778 |
| 1.2684 | 8.0 | 216 | 1.3964 | 0.4 |
| 1.2547 | 9.0 | 243 | 1.3899 | 0.4 |
| 1.2309 | 10.0 | 270 | 1.3835 | 0.4 |
| 1.2188 | 11.0 | 297 | 1.3776 | 0.3778 |
| 1.1974 | 12.0 | 324 | 1.3722 | 0.3778 |
| 1.1972 | 13.0 | 351 | 1.3669 | 0.4 |
| 1.1775 | 14.0 | 378 | 1.3615 | 0.3778 |
| 1.1771 | 15.0 | 405 | 1.3571 | 0.3778 |
| 1.1595 | 16.0 | 432 | 1.3529 | 0.3778 |
| 1.11 | 17.0 | 459 | 1.3491 | 0.4 |
| 1.116 | 18.0 | 486 | 1.3456 | 0.4 |
| 1.0955 | 19.0 | 513 | 1.3420 | 0.4 |
| 1.0866 | 20.0 | 540 | 1.3386 | 0.4 |
| 1.0678 | 21.0 | 567 | 1.3355 | 0.4 |
| 1.0655 | 22.0 | 594 | 1.3327 | 0.4 |
| 1.0356 | 23.0 | 621 | 1.3298 | 0.4 |
| 1.0185 | 24.0 | 648 | 1.3265 | 0.3778 |
| 1.0437 | 25.0 | 675 | 1.3237 | 0.4 |
| 1.0442 | 26.0 | 702 | 1.3211 | 0.3778 |
| 1.028 | 27.0 | 729 | 1.3185 | 0.3778 |
| 1.0044 | 28.0 | 756 | 1.3165 | 0.3778 |
| 1.002 | 29.0 | 783 | 1.3148 | 0.4 |
| 0.9934 | 30.0 | 810 | 1.3131 | 0.4 |
| 0.9758 | 31.0 | 837 | 1.3109 | 0.4 |
| 0.9861 | 32.0 | 864 | 1.3087 | 0.4 |
| 0.9889 | 33.0 | 891 | 1.3069 | 0.4 |
| 0.9637 | 34.0 | 918 | 1.3052 | 0.4 |
| 0.9733 | 35.0 | 945 | 1.3034 | 0.4 |
| 0.9304 | 36.0 | 972 | 1.3021 | 0.4222 |
| 0.9586 | 37.0 | 999 | 1.3007 | 0.4222 |
| 0.9329 | 38.0 | 1026 | 1.2994 | 0.4222 |
| 0.918 | 39.0 | 1053 | 1.2983 | 0.4222 |
| 0.9142 | 40.0 | 1080 | 1.2972 | 0.4222 |
| 0.9236 | 41.0 | 1107 | 1.2963 | 0.4222 |
| 0.929 | 42.0 | 1134 | 1.2957 | 0.4222 |
| 0.9525 | 43.0 | 1161 | 1.2951 | 0.4222 |
| 0.8934 | 44.0 | 1188 | 1.2944 | 0.4222 |
| 0.9348 | 45.0 | 1215 | 1.2941 | 0.4222 |
| 0.9068 | 46.0 | 1242 | 1.2937 | 0.4222 |
| 0.9064 | 47.0 | 1269 | 1.2936 | 0.4222 |
| 0.9044 | 48.0 | 1296 | 1.2934 | 0.4222 |
| 0.9396 | 49.0 | 1323 | 1.2935 | 0.4222 |
| 0.894 | 50.0 | 1350 | 1.2935 | 0.4222 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
kartik14/my_awesome_opus_books_model
|
kartik14
| 2023-11-26T03:19:12Z | 11 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:opus_books",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-11-26T00:00:10Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
datasets:
- opus_books
metrics:
- bleu
model-index:
- name: my_awesome_opus_books_model
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: opus_books
type: opus_books
config: en-fr
split: train
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 6.1787
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_opus_books_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the opus_books dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5208
- Bleu: 6.1787
- Gen Len: 17.5934
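A minimal inference sketch; the checkpoint id comes from this repository, while the "translate English to French:" prefix follows the usual T5 convention and the sentence is illustrative.
```python
from transformers import pipeline

# The task prefix and example sentence are illustrative; adjust to your inputs.
translator = pipeline("translation", model="kartik14/my_awesome_opus_books_model")
print(translator("translate English to French: The children are playing in the garden."))
```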
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|
| 1.9112 | 1.0 | 1589 | 1.6537 | 5.2795 | 17.6478 |
| 1.8421 | 2.0 | 3178 | 1.6098 | 5.6138 | 17.6216 |
| 1.8012 | 3.0 | 4767 | 1.5801 | 5.7983 | 17.6147 |
| 1.776 | 4.0 | 6356 | 1.5611 | 5.9587 | 17.6061 |
| 1.7581 | 5.0 | 7945 | 1.5474 | 6.0336 | 17.5977 |
| 1.7416 | 6.0 | 9534 | 1.5368 | 6.0882 | 17.5966 |
| 1.7261 | 7.0 | 11123 | 1.5297 | 6.1366 | 17.5969 |
| 1.7279 | 8.0 | 12712 | 1.5245 | 6.1442 | 17.5948 |
| 1.7112 | 9.0 | 14301 | 1.5217 | 6.1715 | 17.5931 |
| 1.7162 | 10.0 | 15890 | 1.5208 | 6.1787 | 17.5934 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
nvasko/ppo-LunarLander-v2
|
nvasko
| 2023-11-26T03:00:24Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-11-24T12:05:27Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 285.86 +/- 17.35
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repository is an assumption; adjust it to the `.zip` actually stored there):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The filename below is an assumption; check the repository files.
checkpoint = load_from_hub(repo_id="nvasko/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
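A short follow-up to sanity-check the loaded policy (continues from the snippet above; assumes `gymnasium` with the Box2D extra is installed):
```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

# Continues from the loading snippet above; LunarLander-v2 requires gymnasium[box2d].
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```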
|
theparitt/dentist
|
theparitt
| 2023-11-26T02:54:21Z | 0 | 0 |
tf-keras
|
[
"tf-keras",
"region:us"
] | null | 2023-11-26T01:53:53Z |
# Semantic-Segmentation-of-Teeth-in-Panoramic-X-ray-Image
The aim of this study is automatic semantic segmentation and measurement of the total length of teeth in a single panoramic X-ray image, using a deep learning method based on a U-Net model and binary image analysis, in order to provide diagnostic information for the management of dental disorders, diseases, and conditions.
[***Try Demo App On Hugging Face***](https://huggingface.co/spaces/SerdarHelli/Segmentation-of-Teeth-in-Panoramic-X-ray-Image-Using-U-Net)
***Original Dataset***
Dataset reference: H. Abdi, S. Kasaei, and M. Mehdizadeh, “Automatic segmentation of mandible in panoramic x-ray,” J. Med. Imaging, vol. 2, no. 4, p. 44003, 2015.
[Link to the dataset (original images only).](https://data.mendeley.com/datasets/hxt48yk462/1)
# Basic Usage: you can train your own model with Main.ipynb; just open and run it
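Alternatively, a minimal inference sketch for the hosted Keras weights; the input size, grayscale preprocessing, and threshold are assumptions, so check Main.ipynb for the exact pipeline.
```python
import numpy as np
from PIL import Image
from huggingface_hub import from_pretrained_keras

# Input size, normalization, and threshold are assumptions; see Main.ipynb for details.
model = from_pretrained_keras("theparitt/dentist")
img = Image.open("panoramic_xray.png").convert("L").resize((512, 512))
x = np.asarray(img, dtype="float32")[None, ..., None] / 255.0
mask = (model.predict(x)[0, ..., 0] > 0.5).astype("uint8")  # binary tooth mask
```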
**Examples of the Model's Outputs**
<img src="https://github.com/SerdarHelli/Semantic-Segmentation-of-Teeth-in-Panoramic-X-ray-Image/blob/master/Viewing_Estimations/Figures/example.png" alt="Results" width="1024" height="512">
**Example of the Final Output**
<img src="https://github.com/SerdarHelli/Semantic-Segmentation-of-Teeth-in-Panoramic-X-ray-Image/blob/master/Viewing_Estimations/Figures/exampleofcca.png" alt="Results" width="1024" height="512">
**Architecture**
<img src="https://github.com/SerdarHelli/Semantic-Segmentation-of-Teeth-in-Panoramic-X-ray-Image/blob/master/Viewing_Estimations/Figures/Architecture.png" alt="Results" width="1024" height="512">
### Paper
[The authors of this article are Selahattin Serdar Helli and Andaç Hamamcı with the Department of Biomedical Engineering, Faculty of Engineering, Yeditepe University, Istanbul, Turkey](https://dergipark.org.tr/tr/pub/dubited/issue/68307/950568)
### BibTeX Entry and Citation Info
```
@article{helli10tooth,
title={Tooth Instance Segmentation on Panoramic Dental Radiographs Using U-Nets and Morphological Processing},
author={HELL{\.I}, Serdar and HAMAMCI, Anda{\c{c}}},
journal={D{\"u}zce {\"U}niversitesi Bilim ve Teknoloji Dergisi},
volume={10},
number={1},
pages={39--50}
}
```
|