| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, UTC], 2020-02-15 11:33:14 – 2025-08-30 06:27:36) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 527 classes) | tags (list, 1–4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, UTC], 2022-03-02 23:29:04 – 2025-08-30 06:27:12) | card (string, 11–1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
kr-manish/text-to-image-sdxl-lora-dreemBooth-rashmika_v2
|
kr-manish
| 2023-12-19T11:13:23Z | 1 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2023-12-19T08:31:31Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: A photo of rashmika mehta wearing casual clothes, taking a selfie, and smiling.
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
vilm/vinallama-7b
|
vilm
| 2023-12-19T11:10:40Z | 108 | 23 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"vi",
"arxiv:2312.11011",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-28T07:45:04Z |
---
license: llama2
language:
- vi
---
# VinaLLaMA - State-of-the-art Vietnamese LLMs

Read our [Paper](https://huggingface.co/papers/2312.11011)
|
ngocminhta/Llama-2-Chat-Movie-Review
|
ngocminhta
| 2023-12-19T11:03:55Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"movie",
"entertainment",
"text-classification",
"en",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-19T10:37:22Z |
---
license: apache-2.0
language:
- en
pipeline_tag: text-classification
tags:
- movie
- entertainment
---
# Model Card for Model ID
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
LogicismTV/Chronomaid-Storytelling-13b-exl2
|
LogicismTV
| 2023-12-19T11:02:15Z | 11 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"base_model:NyxKrage/Chronomaid-Storytelling-13b",
"base_model:finetune:NyxKrage/Chronomaid-Storytelling-13b",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-12-18T06:04:30Z |
---
base_model: NyxKrage/Chronomaid-Storytelling-13b
inference: false
license: llama2
model_creator: Carsten Kragelund
model_name: Chronomaid Storytelling 13B
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: LogicismTV
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/T1kcNir.jpg" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://logicism.tv/">Visit my Website</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/nStuNeZsWz">Join my Discord</a></p>
</div>
</div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
# Chronomaid Storytelling 13B - ExLlama V2
Original model: [Chronomaid Storytelling 13B](https://huggingface.co/NyxKrage/Chronomaid-Storytelling-13b)
# Description
This is an EXL2 quantization of NyxKrage's Chronomaid Storytelling 13B model.
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
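As a quick illustration, the Alpaca template above can be filled programmatically before sending text to the model. This is a minimal sketch; the `build_prompt` helper is illustrative, not part of the repository:

```python
# Minimal helper that fills the Alpaca template shown above.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n"
    "### Instruction:\n{prompt}\n### Response:\n"
)

def build_prompt(instruction: str) -> str:
    """Return the Alpaca-formatted prompt for one instruction."""
    return ALPACA_TEMPLATE.format(prompt=instruction)

print(build_prompt("Write a two-sentence ghost story."))
```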
# Quantizations
| Bits Per Weight | Size |
| --------------- | ---- |
| [main (2.4bpw)](https://huggingface.co/LogicismTV/Chronomaid-Storytelling-13b-exl2/tree/main) | 3.98 GB |
| [3bpw](https://huggingface.co/LogicismTV/Chronomaid-Storytelling-13b-exl2/tree/3bpw) | 4.86 GB |
| [3.5bpw](https://huggingface.co/LogicismTV/Chronomaid-Storytelling-13b-exl2/tree/3.5bpw) | 5.60 GB |
| [4bpw](https://huggingface.co/LogicismTV/Chronomaid-Storytelling-13b-exl2/tree/4bpw) | 6.34 GB |
| [4.5bpw](https://huggingface.co/LogicismTV/Chronomaid-Storytelling-13b-exl2/tree/4.5bpw) | 7.08 GB |
| [5bpw](https://huggingface.co/LogicismTV/Chronomaid-Storytelling-13b-exl2/tree/5bpw) | 7.82 GB |
| [6bpw](https://huggingface.co/LogicismTV/Chronomaid-Storytelling-13b-exl2/tree/6bpw) | 9.29 GB |
| [8bpw](https://huggingface.co/LogicismTV/Chronomaid-Storytelling-13b-exl2/tree/8bpw) | 12.2 GB |
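The sizes above can be sanity-checked with a back-of-the-envelope estimate: an EXL2 quant of an N-parameter model at b bits per weight occupies roughly N × b / 8 bytes. Real files deviate somewhat because EXL2 keeps some tensors (e.g. embeddings) at higher precision. A rough sketch:

```python
# Rough size estimate for a quantized model: parameters * bits-per-weight / 8.
# Actual EXL2 files differ slightly since some tensors stay at higher
# precision; this is only a sanity check against the table above.
def approx_size_gb(n_params: float, bpw: float) -> float:
    return n_params * bpw / 8 / 1e9

for bpw in (2.4, 4.0, 8.0):
    print(f"{bpw} bpw -> ~{approx_size_gb(13e9, bpw):.1f} GB")
```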
# Original model card: Carsten Kragelund's Chronomaid Storytelling 13B
# Chronomaid-Storytelling-13b
<img src="https://cdn-uploads.huggingface.co/production/uploads/65221315578e7da0d74f73d8/v2fVXhCcOdvOdjTrd9dY0.webp" alt="image of a vibrant and whimsical scene with an anime-style character as the focal point. The character is a young girl with blue eyes and short brown hair, wearing a black and white maid outfit with ruffled apron and a red ribbon at her collar. She is lying amidst a fantastical backdrop filled with an assortment of floating, colorful clocks, gears, and hourglasses. The space around her is filled with sparkling stars, glowing nebulae, and swirling galaxies." height="75%" width="75%" />
Merge including [Noromaid-13b-v0.1.1](https://huggingface.co/NeverSleep/Noromaid-13b-v0.1.1), and [Chronos-13b-v2](https://huggingface.co/elinas/chronos-13b-v2) with the [Storytelling-v1-Lora](https://huggingface.co/Undi95/Storytelling-v1-13B-lora) applied afterwards
Intended primarily for RP; it will also do ERP, narrator-character, and group chats without much trouble in my testing.
## Prompt Format
Tested with Alpaca; the Noromaid presets will probably also work (check the Noromaid model card for SillyTavern presets).
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
## Sampler Settings
Tested at
* `temp` 1.3 `min p` 0.05 and 0.15
* `temp` 1.7, `min p` 0.08 and 0.15
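For readers unfamiliar with the `min p` sampler used above: it keeps only tokens whose probability is at least `min_p` times the most likely token's probability, then renormalizes. A toy sketch of the idea in plain Python (independent of any particular inference backend):

```python
# Toy min-p filter: keep tokens with prob >= min_p * max(prob), renormalize.
def min_p_filter(probs: dict, min_p: float) -> dict:
    threshold = min_p * max(probs.values())
    kept = {tok: p for tok, p in probs.items() if p >= threshold}
    total = sum(kept.values())
    return {tok: p / total for tok, p in kept.items()}

probs = {"the": 0.5, "a": 0.3, "zebra": 0.02}
# At min_p=0.05 the threshold is 0.05 * 0.5 = 0.025, so "zebra" is dropped.
print(min_p_filter(probs, 0.05))
```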
## Quantized Models
The model has been kindly quantized to GGUF, AWQ, and GPTQ by TheBloke.
Find them in the [Chronomaid-Storytelling-13b Collection](https://huggingface.co/collections/NyxKrage/chronomaid-storytelling-13b-656115dd7065690d7f17c7c8)
## Thanks ❤️
To [Undi](https://huggingface.co/Undi95) & [Ikari](https://huggingface.co/IkariDev) for Noromaid and [Elinas](https://huggingface.co/elinas) for Chronos
Support [Undi](https://ko-fi.com/undiai) and [Elinas](https://ko-fi.com/elinas) on Ko-fi
|
npvinHnivqn/Mistral-7B-Instruct
|
npvinHnivqn
| 2023-12-19T10:52:30Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"license:mit",
"region:us"
] | null | 2023-12-18T19:31:58Z |
---
license: mit
---
### Quick start
```bash
pip install accelerate peft bitsandbytes
pip install git+https://github.com/huggingface/transformers trl py7zr auto-gptq optimum
```
```python
from peft import AutoPeftModelForCausalLM
from transformers import GenerationConfig
from transformers import AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("npvinHnivqn/Mistral-7B-Instruct")
model = AutoPeftModelForCausalLM.from_pretrained(
'npvinHnivqn/Mistral-7B-Instruct',
low_cpu_mem_usage=True,
return_dict=True,
torch_dtype=torch.float16,
device_map="cuda")
generation_config = GenerationConfig(
do_sample=True,
top_k=1,
temperature=0.1,
max_new_tokens=25,
pad_token_id=tokenizer.eos_token_id
)
inputs = tokenizer("""<|SYSTEM|> You are a very good chatbot, you can answer every question from users. <|USER|> Summarize this following dialogue: Vasanth: I'm at the railway station in Chennai Karthik: No problems so far? Vasanth: no, everything's going smoothly Karthik: good. lets meet there soon! [INPUT] <|BOT|>""", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, generation_config=generation_config)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
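The snippet above hand-assembles its prompt with `<|SYSTEM|>`, `<|USER|>`, and `<|BOT|>` markers. These are inferred from that example only and are not a standard Mistral chat template; a small helper (hypothetical, for readability) that reproduces the same format:

```python
# Reproduce the prompt format used in the snippet above.
# The <|SYSTEM|>/<|USER|>/<|BOT|> markers are assumptions taken from that
# example; they are not a standard Mistral chat template.
def format_prompt(system: str, user: str) -> str:
    return f"<|SYSTEM|> {system} <|USER|> {user} <|BOT|>"

prompt = format_prompt(
    "You are a very good chatbot, you can answer every question from users.",
    "Summarize this following dialogue: ...",
)
print(prompt)
```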
|
Kshitij2406/GPTTest
|
Kshitij2406
| 2023-12-19T10:51:27Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:vilsonrodrigues/falcon-7b-instruct-sharded",
"base_model:adapter:vilsonrodrigues/falcon-7b-instruct-sharded",
"region:us"
] | null | 2023-12-15T10:39:01Z |
---
library_name: peft
base_model: vilsonrodrigues/falcon-7b-instruct-sharded
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
hkivancoral/smids_10x_deit_small_adamax_00001_fold3
|
hkivancoral
| 2023-12-19T10:51:15Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"base_model:finetune:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-19T09:43:00Z |
---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_10x_deit_small_adamax_00001_fold3
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9183333333333333
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_small_adamax_00001_fold3
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8187
- Accuracy: 0.9183
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
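Concretely, the `linear` scheduler with `lr_scheduler_warmup_ratio: 0.1` ramps the learning rate from 0 up to the peak of 1e-05 over the first 10% of steps, then decays it linearly back to 0. A plain-Python sketch of that schedule (function name and step counts are illustrative):

```python
# Linear LR schedule with warmup, as configured above:
# ramp 0 -> peak over the first warmup_ratio of training, then decay to 0.
def lr_at(step: int, total_steps: int, peak_lr: float = 1e-5,
          warmup_ratio: float = 0.1) -> float:
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * (total_steps - step) / (total_steps - warmup_steps)

total = 37500  # 50 epochs x 750 steps/epoch, per the results table below
print(lr_at(3750, total))   # end of warmup: the peak learning rate
print(lr_at(37500, total))  # end of training: decayed to 0
```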
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2585 | 1.0 | 750 | 0.2681 | 0.8983 |
| 0.2105 | 2.0 | 1500 | 0.2490 | 0.9167 |
| 0.0786 | 3.0 | 2250 | 0.2625 | 0.9167 |
| 0.0736 | 4.0 | 3000 | 0.2826 | 0.9133 |
| 0.0688 | 5.0 | 3750 | 0.3568 | 0.91 |
| 0.0468 | 6.0 | 4500 | 0.4349 | 0.9083 |
| 0.0289 | 7.0 | 5250 | 0.4645 | 0.9183 |
| 0.0394 | 8.0 | 6000 | 0.5300 | 0.9183 |
| 0.0012 | 9.0 | 6750 | 0.5842 | 0.92 |
| 0.0139 | 10.0 | 7500 | 0.6285 | 0.915 |
| 0.0002 | 11.0 | 8250 | 0.6464 | 0.9217 |
| 0.0001 | 12.0 | 9000 | 0.6757 | 0.9133 |
| 0.0 | 13.0 | 9750 | 0.7480 | 0.9167 |
| 0.0001 | 14.0 | 10500 | 0.7033 | 0.92 |
| 0.0 | 15.0 | 11250 | 0.7525 | 0.9133 |
| 0.0 | 16.0 | 12000 | 0.7472 | 0.915 |
| 0.0 | 17.0 | 12750 | 0.7380 | 0.92 |
| 0.0 | 18.0 | 13500 | 0.7432 | 0.9183 |
| 0.0 | 19.0 | 14250 | 0.7438 | 0.9217 |
| 0.0 | 20.0 | 15000 | 0.7615 | 0.92 |
| 0.0 | 21.0 | 15750 | 0.7581 | 0.9233 |
| 0.0 | 22.0 | 16500 | 0.7753 | 0.92 |
| 0.0 | 23.0 | 17250 | 0.7758 | 0.92 |
| 0.0 | 24.0 | 18000 | 0.7745 | 0.9217 |
| 0.0 | 25.0 | 18750 | 0.7780 | 0.9233 |
| 0.0 | 26.0 | 19500 | 0.7763 | 0.9217 |
| 0.0 | 27.0 | 20250 | 0.7839 | 0.9183 |
| 0.0 | 28.0 | 21000 | 0.7914 | 0.9183 |
| 0.0 | 29.0 | 21750 | 0.7935 | 0.92 |
| 0.0 | 30.0 | 22500 | 0.8320 | 0.9117 |
| 0.0 | 31.0 | 23250 | 0.8021 | 0.9183 |
| 0.0 | 32.0 | 24000 | 0.8041 | 0.9217 |
| 0.0 | 33.0 | 24750 | 0.8030 | 0.9167 |
| 0.0 | 34.0 | 25500 | 0.8170 | 0.9133 |
| 0.0 | 35.0 | 26250 | 0.8237 | 0.915 |
| 0.0 | 36.0 | 27000 | 0.8072 | 0.9167 |
| 0.0 | 37.0 | 27750 | 0.8249 | 0.915 |
| 0.0 | 38.0 | 28500 | 0.8116 | 0.9167 |
| 0.0 | 39.0 | 29250 | 0.8160 | 0.9217 |
| 0.0 | 40.0 | 30000 | 0.8158 | 0.92 |
| 0.0 | 41.0 | 30750 | 0.8164 | 0.92 |
| 0.0 | 42.0 | 31500 | 0.8163 | 0.92 |
| 0.0 | 43.0 | 32250 | 0.8169 | 0.92 |
| 0.0 | 44.0 | 33000 | 0.8174 | 0.92 |
| 0.0 | 45.0 | 33750 | 0.8182 | 0.92 |
| 0.0 | 46.0 | 34500 | 0.8186 | 0.9183 |
| 0.0 | 47.0 | 35250 | 0.8185 | 0.92 |
| 0.0 | 48.0 | 36000 | 0.8187 | 0.92 |
| 0.0 | 49.0 | 36750 | 0.8181 | 0.9183 |
| 0.0 | 50.0 | 37500 | 0.8187 | 0.9183 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
Dhanang/aspect_model
|
Dhanang
| 2023-12-19T10:51:07Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:indobenchmark/indobert-base-p2",
"base_model:finetune:indobenchmark/indobert-base-p2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-19T09:02:25Z |
---
license: mit
base_model: indobenchmark/indobert-base-p2
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: aspect_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# aspect_model
This model is a fine-tuned version of [indobenchmark/indobert-base-p2](https://huggingface.co/indobenchmark/indobert-base-p2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3490
- Accuracy: 0.8084
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 72 | 0.6516 | 0.7735 |
| No log | 2.0 | 144 | 0.6119 | 0.7909 |
| No log | 3.0 | 216 | 0.6152 | 0.8049 |
| No log | 4.0 | 288 | 0.7480 | 0.8118 |
| No log | 5.0 | 360 | 1.0121 | 0.7770 |
| No log | 6.0 | 432 | 1.0780 | 0.7909 |
| 0.27 | 7.0 | 504 | 1.1602 | 0.7840 |
| 0.27 | 8.0 | 576 | 1.2136 | 0.8014 |
| 0.27 | 9.0 | 648 | 1.2490 | 0.8014 |
| 0.27 | 10.0 | 720 | 1.3102 | 0.7840 |
| 0.27 | 11.0 | 792 | 1.3184 | 0.8049 |
| 0.27 | 12.0 | 864 | 1.3255 | 0.8014 |
| 0.27 | 13.0 | 936 | 1.3192 | 0.8049 |
| 0.0022 | 14.0 | 1008 | 1.3229 | 0.7944 |
| 0.0022 | 15.0 | 1080 | 1.3415 | 0.8014 |
| 0.0022 | 16.0 | 1152 | 1.3515 | 0.7909 |
| 0.0022 | 17.0 | 1224 | 1.3544 | 0.7944 |
| 0.0022 | 18.0 | 1296 | 1.3529 | 0.7944 |
| 0.0022 | 19.0 | 1368 | 1.3484 | 0.8084 |
| 0.0022 | 20.0 | 1440 | 1.3490 | 0.8084 |
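The table shows validation loss bottoming out around epoch 2 and rising steadily afterwards while accuracy plateaus, a classic overfitting signature. A minimal early-stopping sketch (plain Python, illustrative only) shows how such a run could be halted much earlier:

```python
# Toy early stopping: stop once validation loss has not improved
# for `patience` consecutive epochs.
def early_stop_epoch(val_losses: list, patience: int = 3) -> int:
    """Return the 1-indexed epoch at which training stops, or the
    final epoch if the criterion never triggers."""
    best, best_epoch = float("inf"), 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:
            best, best_epoch = loss, epoch
        elif epoch - best_epoch >= patience:
            return epoch
    return len(val_losses)

# Validation losses from the table above (epochs 1-20):
losses = [0.6516, 0.6119, 0.6152, 0.7480, 1.0121, 1.0780, 1.1602, 1.2136,
          1.2490, 1.3102, 1.3184, 1.3255, 1.3192, 1.3229, 1.3415, 1.3515,
          1.3544, 1.3529, 1.3484, 1.3490]
print(early_stop_epoch(losses))  # stops at epoch 5; best loss was at epoch 2
```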
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
kghanlon/distilbert-base-uncased-finetuned-SOTUs-v1-RILE-v1
|
kghanlon
| 2023-12-19T10:50:52Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:kghanlon/distilbert-base-uncased-finetuned-SOTUs-v1",
"base_model:finetune:kghanlon/distilbert-base-uncased-finetuned-SOTUs-v1",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-19T10:05:36Z |
---
base_model: kghanlon/distilbert-base-uncased-finetuned-SOTUs-v1
tags:
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
model-index:
- name: distilbert-base-uncased-finetuned-SOTUs-v1-RILE-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-SOTUs-v1-RILE-v1
This model is a fine-tuned version of [kghanlon/distilbert-base-uncased-finetuned-SOTUs-v1](https://huggingface.co/kghanlon/distilbert-base-uncased-finetuned-SOTUs-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8575
- Accuracy: 0.7345
- Recall: 0.7345
- F1: 0.7343
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:------:|
| 0.703 | 1.0 | 15490 | 0.6829 | 0.7138 | 0.7138 | 0.7109 |
| 0.5689 | 2.0 | 30980 | 0.6758 | 0.7348 | 0.7348 | 0.7344 |
| 0.4264 | 3.0 | 46470 | 0.8575 | 0.7345 | 0.7345 | 0.7343 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
satani/phtben-5
|
satani
| 2023-12-19T10:49:23Z | 8 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-12-19T10:45:26Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### phtben_5 Dreambooth model trained by satani with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
kalinds/Llama_1.5
|
kalinds
| 2023-12-19T10:42:58Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2023-12-19T10:20:34Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
ntc-ai/SDXL-LoRA-slider.mischievious-grin
|
ntc-ai
| 2023-12-19T10:35:52Z | 101 | 2 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] |
text-to-image
| 2023-12-19T10:35:49Z |
---
language:
- en
thumbnail: "images/evaluate/mischievious grin.../mischievious grin_17_3.0.png"
widget:
- text: mischievious grin
output:
url: images/mischievious grin_17_3.0.png
- text: mischievious grin
output:
url: images/mischievious grin_19_3.0.png
- text: mischievious grin
output:
url: images/mischievious grin_20_3.0.png
- text: mischievious grin
output:
url: images/mischievious grin_21_3.0.png
- text: mischievious grin
output:
url: images/mischievious grin_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "mischievious grin"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---
# ntcai.xyz slider - mischievious grin (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/mischievious grin_17_-3.0.png" width=256 height=256 /> | <img src="images/mischievious grin_17_0.0.png" width=256 height=256 /> | <img src="images/mischievious grin_17_3.0.png" width=256 height=256 /> |
| <img src="images/mischievious grin_19_-3.0.png" width=256 height=256 /> | <img src="images/mischievious grin_19_0.0.png" width=256 height=256 /> | <img src="images/mischievious grin_19_3.0.png" width=256 height=256 /> |
| <img src="images/mischievious grin_20_-3.0.png" width=256 height=256 /> | <img src="images/mischievious grin_20_0.0.png" width=256 height=256 /> | <img src="images/mischievious grin_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
mischievious grin
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.mischievious-grin', weight_name='mischievious grin.safetensors', adapter_name="mischievious grin")
# Activate the LoRA
pipe.set_adapters(["mischievious grin"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, mischievious grin"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of 470+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
|
tresbien1/ppo-Huggy
|
tresbien1
| 2023-12-19T10:29:10Z | 5 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-12-19T10:29:00Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: tresbien1/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
bclavie/fio-base-japanese-v0.1
|
bclavie
| 2023-12-19T10:28:16Z | 36 | 6 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"ja",
"dataset:shunk031/JGLUE",
"dataset:shunk031/jsnli",
"dataset:hpprc/jsick",
"dataset:miracl/miracl",
"dataset:castorini/mr-tydi",
"dataset:unicamp-dl/mmarco",
"autotrain_compatible",
"region:us"
] |
sentence-similarity
| 2023-12-18T11:01:07Z |
---
language:
- ja
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
inference: false
datasets:
- shunk031/JGLUE
- shunk031/jsnli
- hpprc/jsick
- miracl/miracl
- castorini/mr-tydi
- unicamp-dl/mmarco
library_name: sentence-transformers
---
# fio-base-japanese-v0.1
A Japanese version of this card is coming soon (I'm still learning Japanese, so please forgive any mistakes!)
fio-base-japanese-v0.1 is a proof of concept, and the first release of the Fio family of Japanese embeddings. It is based on [cl-tohoku/bert-base-japanese-v3](https://huggingface.co/cl-tohoku/bert-base-japanese-v3) and trained on limited volumes of data on a single GPU.
For more information, please refer to [my notes on Fio](https://ben.clavie.eu/fio).
#### Datasets
Similarity/Entailment:
- JSTS (train)
- JSNLI (train)
- JNLI (train)
- JSICK (train)
Retrieval:
- MMARCO (multilingual MS MARCO) (train, 124k sentence pairs, <1% of the full data)
- Mr.TyDI (train)
- MIRACL (train, 50% sample)
- ~~JSQuAD (train, 50% sample, no LLM enhancement)~~ JSQuAD is not used in the released version, to serve as an unseen test set.
#### Results
> ⚠️ WARNING: fio-base-japanese-v0.1 has seen textual entailment tasks during its training, which is _not_ the case of the other Japanese-only models in this table. This gives Fio an unfair advantage over the previous best results, `cl-nagoya/sup-simcse-ja-[base|large]`. During mid-training evaluations, this didn't seem to greatly affect performance; however, JSICK (NLI set) was included in the training data, so it's impossible to fully remove this contamination at the moment. I intend to fix this in a future release, but please keep this in mind as you view the results (see the JSQuAD results in the associated blog post for a fully unseen comparison, although one focused on retrieval).
This is adapted and truncated (to keep only the most popular models) from [oshizo's benchmarking github repo](https://github.com/oshizo/JapaneseEmbeddingEval), please check it out for more information and give it a star as it was very useful!
Italic denotes best model for its size when a smaller model outperforms a bigger one (base/large | 768/1024), bold denotes best overall.
| Model | JSTS valid-v1.1 | JSICK test | MIRACL dev | Average |
|-------------------------------------------------|-----------------|------------|------------|---------|
| bclavie/fio-base-japanese-v0.1 | **_0.863_** | **_0.894_** | 0.718 | _0.825_ |
| cl-nagoya/sup-simcse-ja-base | 0.809 | 0.827 | 0.527 | 0.721 |
| cl-nagoya/sup-simcse-ja-large | _0.831_ | _0.831_ | 0.507 | 0.723 |
| colorfulscoop/sbert-base-ja | 0.742 | 0.657 | 0.254 | 0.551 |
| intfloat/multilingual-e5-base | 0.796 | 0.806 | __0.845__ | 0.816 |
| intfloat/multilingual-e5-large | 0.819 | 0.794 | **0.883** | **_0.832_** |
| pkshatech/GLuCoSE-base-ja | 0.818 | 0.757 | 0.692 | 0.755 |
| text-embedding-ada-002 | 0.790 | 0.789 | 0.7232 | 0.768 |
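As a sanity check, the Average column appears to be the unweighted mean of the three benchmark columns (an assumption based on the numbers, not a documented formula). For the Fio row:

```python
# Scores for bclavie/fio-base-japanese-v0.1, taken from the table above.
scores = {"JSTS valid-v1.1": 0.863, "JSICK test": 0.894, "MIRACL dev": 0.718}

# Unweighted mean, rounded to three decimals as in the table.
average = round(sum(scores.values()) / len(scores), 3)
print(average)  # 0.825
```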
## Usage
This model requires both `fugashi` and `unidic-lite`:
```
pip install -U fugashi unidic-lite
```
If using for a retrieval task, you must prefix your query with `"関連記事を取得するために使用できるこの文の表現を生成します: "`.
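For example, the prefixed query string could be built like this before being passed to `model.encode(...)` (a minimal sketch; the helper name is hypothetical, and the prefix is taken verbatim from this card):

```python
# Retrieval instruction prefix, copied verbatim from the model card.
RETRIEVAL_PREFIX = "関連記事を取得するために使用できるこの文の表現を生成します: "

def build_retrieval_query(query: str) -> str:
    """Prepend the retrieval instruction prefix to a raw query string."""
    return RETRIEVAL_PREFIX + query

# The resulting string is what you would encode as the query embedding.
print(build_retrieval_query("日本の首都はどこですか?"))
```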
### Usage (Sentence-Transformers)
This model is best used through [sentence-transformers](https://www.SBERT.net). If you don't have it, it's easy to install:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["こんにちは、世界!", "文埋め込み最高!文埋め込み最高と叫びなさい", "極度乾燥しなさい"]
model = SentenceTransformer('bclavie/fio-base-japanese-v0.1')
embeddings = model.encode(sentences)
print(embeddings)
```
### Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('bclavie/fio-base-japanese-v0.1')
model = AutoModel.from_pretrained('bclavie/fio-base-japanese-v0.1')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Citing & Authors
```
@misc{bclavie-fio-embeddings,
  author = {Benjamin Clavié},
  title = {Fio Japanese Embeddings},
  year = {2023},
  howpublished = {\url{https://ben.clavie.eu/fio}}
}
```
|
Federm1512/ppo-Huggy
|
Federm1512
| 2023-12-19T10:25:01Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-12-19T09:54:11Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Federm1512/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
XingeTong/9-testresults
|
XingeTong
| 2023-12-19T10:19:31Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:cardiffnlp/twitter-roberta-base-sentiment-latest",
"base_model:finetune:cardiffnlp/twitter-roberta-base-sentiment-latest",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-19T10:17:13Z |
---
base_model: cardiffnlp/twitter-roberta-base-sentiment-latest
tags:
- generated_from_trainer
model-index:
- name: 9-testresults
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 9-testresults
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment-latest](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment-latest) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9.359061927977144e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.34.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
VictorNGomes/pttmario5
|
VictorNGomes
| 2023-12-19T10:15:08Z | 6 | 1 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:xlsum",
"base_model:VictorNGomes/pttmario5",
"base_model:finetune:VictorNGomes/pttmario5",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-12-17T01:40:38Z |
---
license: mit
base_model: VictorNGomes/pttmario5
tags:
- generated_from_trainer
datasets:
- xlsum
model-index:
- name: pttmario5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pttmario5
This model is a fine-tuned version of [VictorNGomes/pttmario5](https://huggingface.co/VictorNGomes/pttmario5) on the xlsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2144
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 384
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 8
- mixed_precision_training: Native AMP
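The `total_train_batch_size` listed above follows directly from the per-device batch size and gradient accumulation steps (a sketch of the relationship, not code from the training run):

```python
# Effective (total) batch size = per-device batch size × accumulation steps,
# matching the hyperparameters listed above.
train_batch_size = 24
gradient_accumulation_steps = 16

total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 384
```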
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5131 | 3.34 | 500 | 2.2600 |
| 2.4594 | 6.69 | 1000 | 2.2144 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
baichuan-inc/Baichuan2-7B-Intermediate-Checkpoints
|
baichuan-inc
| 2023-12-19T10:03:18Z | 16 | 18 | null |
[
"en",
"zh",
"license:other",
"region:us"
] | null | 2023-09-05T09:35:23Z |
---
language:
- en
- zh
license: other
tasks:
- text-generation
---
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<div align="center">
<h1>
Baichuan 2
</h1>
</div>
<div align="center">
<a href="https://github.com/baichuan-inc/Baichuan2" target="_blank">🦉GitHub</a> | <a href="https://github.com/baichuan-inc/Baichuan-7B/blob/main/media/wechat.jpeg?raw=true" target="_blank">💬WeChat</a>
</div>
<div align="center">
百川API支持搜索增强和192K长窗口,新增百川搜索增强知识库、限时免费!<br>
🚀 <a href="https://www.baichuan-ai.com/" target="_blank">百川大模型在线对话平台</a> 已正式向公众开放 🎉
</div>
# 目录/Table of Contents
- [📖 模型介绍/Introduction](#Introduction)
- [⚙️ 快速开始/Quick Start](#Start)
- [📊 Benchmark评估/Benchmark Evaluation](#Benchmark)
- [📜 声明与协议/Terms and Conditions](#Terms)
# <span id="Introduction">模型介绍/Introduction</span>
Baichuan 2 是[百川智能]推出的新一代开源大语言模型,采用 **2.6 万亿** Tokens 的高质量语料训练,在权威的中文和英文 benchmark
上均取得同尺寸最好的效果。本次发布包含有 7B、13B 的 Base 和 Chat 版本,并提供了 Chat 版本的 4bits
量化,所有版本不仅对学术研究完全开放,开发者也仅需[邮件申请]并获得官方商用许可后,即可以免费商用。具体发布版本和下载见下表:
Baichuan 2 is the new generation of large-scale open-source language models launched by [Baichuan Intelligence inc.](https://www.baichuan-ai.com/).
It is trained on a high-quality corpus with 2.6 trillion tokens and has achieved the best performance in authoritative Chinese and English benchmarks of the same size.
This release includes 7B and 13B versions for both Base and Chat models, along with a 4bits quantized version for the Chat model.
All versions are fully open to academic research, and developers can also use them for free in commercial applications after obtaining an official commercial license through [email request](mailto:opensource@baichuan-inc.com).
The specific release versions and download links are listed in the table below:
| | Base Model | Chat Model | 4bits Quantized Chat Model |
|:---:|:--------------------:|:--------------------:|:--------------------------:|
| 7B | [Baichuan2-7B-Base](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base) | [Baichuan2-7B-Chat](https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat) | [Baichuan2-7B-Chat-4bits](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base-4bits) |
| 13B | [Baichuan2-13B-Base](https://huggingface.co/baichuan-inc/Baichuan2-13B-Base) | [Baichuan2-13B-Chat](https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat) | [Baichuan2-13B-Chat-4bits](https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat-4bits) |
# <span id="Start">快速开始/Quick Start</span>
在Baichuan2系列模型中,我们为了加快推理速度使用了Pytorch2.0加入的新功能F.scaled_dot_product_attention,因此模型需要在Pytorch2.0环境下运行。
In the Baichuan 2 series models, we have utilized the new feature `F.scaled_dot_product_attention` introduced in PyTorch 2.0 to accelerate inference speed. Therefore, the model needs to be run in a PyTorch 2.0 environment.
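As a minimal illustration of that PyTorch 2.0 primitive (toy tensor shapes chosen for the example; requires `torch >= 2.0`):

```python
import torch
import torch.nn.functional as F

# Toy (batch, num_heads, seq_len, head_dim) tensors.
q = torch.randn(1, 2, 4, 8)
k = torch.randn(1, 2, 4, 8)
v = torch.randn(1, 2, 4, 8)

# Fused scaled dot-product attention, introduced in PyTorch 2.0.
out = F.scaled_dot_product_attention(q, k, v)
print(out.shape)  # torch.Size([1, 2, 4, 8])
```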
**我们将训练中的Checkpoints上传到了本项目中,可以通过指定revision来加载不同step的Checkpoint。**
**We have uploaded the checkpoints during training to this project. You can load checkpoints from different steps by specifying the revision.**
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("baichuan-inc/Baichuan2-7B-Intermediate-Checkpoints", revision="train_02200B", use_fast=False, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("baichuan-inc/Baichuan2-7B-Intermediate-Checkpoints", revision="train_02200B", device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=True)
inputs = tokenizer('登鹳雀楼->王之涣\n夜雨寄北->', return_tensors='pt')
inputs = inputs.to('cuda:0')
pred = model.generate(**inputs, max_new_tokens=64, repetition_penalty=1.1)
print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True))
```
# <span id="Benchmark">Benchmark 结果/Benchmark Evaluation</span>
我们在[通用]、[法律]、[医疗]、[数学]、[代码]和[多语言翻译]六个领域的中英文权威数据集上对模型进行了广泛测试,更多详细测评结果可查看[GitHub]。
We have extensively tested the model on authoritative Chinese-English datasets across six domains: [General](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#general-domain), [Legal](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#law-and-medicine), [Medical](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#law-and-medicine), [Mathematics](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#mathematics-and-code), [Code](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#mathematics-and-code), and [Multilingual Translation](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#multilingual-translation). For more detailed evaluation results, please refer to [GitHub](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md).
### 7B Model Results
| | **C-Eval** | **MMLU** | **CMMLU** | **Gaokao** | **AGIEval** | **BBH** |
|:-----------------------:|:----------:|:--------:|:---------:|:----------:|:-----------:|:-------:|
| | 5-shot | 5-shot | 5-shot | 5-shot | 5-shot | 3-shot |
| **GPT-4** | 68.40 | 83.93 | 70.33 | 66.15 | 63.27 | 75.12 |
| **GPT-3.5 Turbo** | 51.10 | 68.54 | 54.06 | 47.07 | 46.13 | 61.59 |
| **LLaMA-7B** | 27.10 | 35.10 | 26.75 | 27.81 | 28.17 | 32.38 |
| **LLaMA2-7B** | 28.90 | 45.73 | 31.38 | 25.97 | 26.53 | 39.16 |
| **MPT-7B** | 27.15 | 27.93 | 26.00 | 26.54 | 24.83 | 35.20 |
| **Falcon-7B** | 24.23 | 26.03 | 25.66 | 24.24 | 24.10 | 28.77 |
| **ChatGLM2-6B** | 50.20 | 45.90 | 49.00 | 49.44 | 45.28 | 31.65 |
| **[Baichuan-7B]** | 42.80 | 42.30 | 44.02 | 36.34 | 34.44 | 32.48 |
| **[Baichuan2-7B-Base]** | 54.00 | 54.16 | 57.07 | 47.47 | 42.73 | 41.56 |
### 13B Model Results
| | **C-Eval** | **MMLU** | **CMMLU** | **Gaokao** | **AGIEval** | **BBH** |
|:---------------------------:|:----------:|:--------:|:---------:|:----------:|:-----------:|:-------:|
| | 5-shot | 5-shot | 5-shot | 5-shot | 5-shot | 3-shot |
| **GPT-4** | 68.40 | 83.93 | 70.33 | 66.15 | 63.27 | 75.12 |
| **GPT-3.5 Turbo** | 51.10 | 68.54 | 54.06 | 47.07 | 46.13 | 61.59 |
| **LLaMA-13B** | 28.50 | 46.30 | 31.15 | 28.23 | 28.22 | 37.89 |
| **LLaMA2-13B** | 35.80 | 55.09 | 37.99 | 30.83 | 32.29 | 46.98 |
| **Vicuna-13B** | 32.80 | 52.00 | 36.28 | 30.11 | 31.55 | 43.04 |
| **Chinese-Alpaca-Plus-13B** | 38.80 | 43.90 | 33.43 | 34.78 | 35.46 | 28.94 |
| **XVERSE-13B** | 53.70 | 55.21 | 58.44 | 44.69 | 42.54 | 38.06 |
| **[Baichuan-13B-Base]** | 52.40 | 51.60 | 55.30 | 49.69 | 43.20 | 43.01 |
| **[Baichuan2-13B-Base]** | 58.10 | 59.17 | 61.97 | 54.33 | 48.17 | 48.78 |
## 训练过程模型/Training Dynamics
除了训练了 2.6 万亿 Tokens 的 [Baichuan2-7B-Base](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base) 模型,我们还提供了在此之前的另外 11 个中间过程的模型(分别对应训练了约 0.2 ~ 2.4 万亿 Tokens)供社区研究使用
([训练过程checkpoint下载](https://huggingface.co/baichuan-inc/Baichuan2-7B-Intermediate-Checkpoints))。下图给出了这些 checkpoints 在 C-Eval、MMLU、CMMLU 三个 benchmark 上的效果变化:
In addition to the [Baichuan2-7B-Base](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base) model trained on 2.6 trillion tokens, we also offer 11 additional intermediate-stage models for community research, corresponding to training on approximately 0.2 to 2.4 trillion tokens each ([Intermediate Checkpoints Download](https://huggingface.co/baichuan-inc/Baichuan2-7B-Intermediate-Checkpoints)). The graph below shows the performance changes of these checkpoints on three benchmarks: C-Eval, MMLU, and CMMLU.

# <span id="Terms">声明与协议/Terms and Conditions</span>
## 声明
我们在此声明,我们的开发团队并未基于 Baichuan 2 模型开发任何应用,无论是在 iOS、Android、网页或任何其他平台。我们强烈呼吁所有使用者,不要利用
Baichuan 2 模型进行任何危害国家社会安全或违法的活动。另外,我们也要求使用者不要将 Baichuan 2
模型用于未经适当安全审查和备案的互联网服务。我们希望所有的使用者都能遵守这个原则,确保科技的发展能在规范和合法的环境下进行。
我们已经尽我们所能,来确保模型训练过程中使用的数据的合规性。然而,尽管我们已经做出了巨大的努力,但由于模型和数据的复杂性,仍有可能存在一些无法预见的问题。因此,如果由于使用
Baichuan 2 开源模型而导致的任何问题,包括但不限于数据安全问题、公共舆论风险,或模型被误导、滥用、传播或不当利用所带来的任何风险和问题,我们将不承担任何责任。
We hereby declare that our team has not developed any applications based on Baichuan 2 models, not on iOS, Android, the web, or any other platform. We strongly call on all users not to use Baichuan 2 models for any activities that harm national / social security or violate the law. Also, we ask users not to use Baichuan 2 models for Internet services that have not undergone appropriate security reviews and filings. We hope that all users can abide by this principle and ensure that the development of technology proceeds in a regulated and legal environment.
We have done our best to ensure the compliance of the data used in the model training process. However, despite our considerable efforts, there may still be some unforeseeable issues due to the complexity of the model and data. Therefore, if any problems arise due to the use of Baichuan 2 open-source models, including but not limited to data security issues, public opinion risks, or any risks and problems brought about by the model being misled, abused, spread or improperly exploited, we will not assume any responsibility.
## 协议
社区使用 Baichuan 2 模型需要遵循 [Apache 2.0](https://github.com/baichuan-inc/Baichuan2/blob/main/LICENSE) 和[《Baichuan 2 模型社区许可协议》](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base/resolve/main/Baichuan%202%E6%A8%A1%E5%9E%8B%E7%A4%BE%E5%8C%BA%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf)。Baichuan 2 模型支持商业用途,如果您计划将 Baichuan 2 模型或其衍生品用于商业目的,请您确认您的主体符合以下情况:
1. 您或您的关联方的服务或产品的日均用户活跃量(DAU)低于100万。
2. 您或您的关联方不是软件服务提供商、云服务提供商。
3. 您或您的关联方不存在将授予您的商用许可,未经百川许可二次授权给其他第三方的可能。
在符合以上条件的前提下,您需要通过以下联系邮箱 opensource@baichuan-inc.com ,提交《Baichuan 2 模型社区许可协议》要求的申请材料。审核通过后,百川将特此授予您一个非排他性、全球性、不可转让、不可再许可、可撤销的商用版权许可。
The community usage of Baichuan 2 model requires adherence to [Apache 2.0](https://github.com/baichuan-inc/Baichuan2/blob/main/LICENSE) and [Community License for Baichuan2 Model](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base/resolve/main/Baichuan%202%E6%A8%A1%E5%9E%8B%E7%A4%BE%E5%8C%BA%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf). The Baichuan 2 model supports commercial use. If you plan to use the Baichuan 2 model or its derivatives for commercial purposes, please ensure that your entity meets the following conditions:
1. The Daily Active Users (DAU) of your or your affiliate's service or product is less than 1 million.
2. Neither you nor your affiliates are software service providers or cloud service providers.
3. There is no possibility for you or your affiliates to grant the commercial license given to you, to reauthorize it to other third parties without Baichuan's permission.
Upon meeting the above conditions, you need to submit the application materials required by the Baichuan 2 Model Community License Agreement via the following contact email: opensource@baichuan-inc.com. Once approved, Baichuan will hereby grant you a non-exclusive, global, non-transferable, non-sublicensable, revocable commercial copyright license.
[GitHub]:https://github.com/baichuan-inc/Baichuan2
[Baichuan2]:https://github.com/baichuan-inc/Baichuan2
[Baichuan-7B]:https://huggingface.co/baichuan-inc/Baichuan-7B
[Baichuan2-7B-Base]:https://huggingface.co/baichuan-inc/Baichuan2-7B-Base
[Baichuan2-7B-Chat]:https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat
[Baichuan2-7B-Chat-4bits]:https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat-4bits
[Baichuan-13B-Base]:https://huggingface.co/baichuan-inc/Baichuan-13B-Base
[Baichuan2-13B-Base]:https://huggingface.co/baichuan-inc/Baichuan2-13B-Base
[Baichuan2-13B-Chat]:https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat
[Baichuan2-13B-Chat-4bits]:https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat-4bits
[通用]:https://github.com/baichuan-inc/Baichuan2#%E9%80%9A%E7%94%A8%E9%A2%86%E5%9F%9F
[法律]:https://github.com/baichuan-inc/Baichuan2#%E6%B3%95%E5%BE%8B%E5%8C%BB%E7%96%97
[医疗]:https://github.com/baichuan-inc/Baichuan2#%E6%B3%95%E5%BE%8B%E5%8C%BB%E7%96%97
[数学]:https://github.com/baichuan-inc/Baichuan2#%E6%95%B0%E5%AD%A6%E4%BB%A3%E7%A0%81
[代码]:https://github.com/baichuan-inc/Baichuan2#%E6%95%B0%E5%AD%A6%E4%BB%A3%E7%A0%81
[多语言翻译]:https://github.com/baichuan-inc/Baichuan2#%E5%A4%9A%E8%AF%AD%E8%A8%80%E7%BF%BB%E8%AF%91
[《Baichuan 2 模型社区许可协议》]:https://huggingface.co/baichuan-inc/Baichuan2-7B-Base/blob/main/Baichuan%202%E6%A8%A1%E5%9E%8B%E7%A4%BE%E5%8C%BA%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf
[邮件申请]: mailto:opensource@baichuan-inc.com
[Email]: mailto:opensource@baichuan-inc.com
[opensource@baichuan-inc.com]: mailto:opensource@baichuan-inc.com
[训练过程checkpoint下载]: https://huggingface.co/baichuan-inc/Baichuan2-7B-Intermediate-Checkpoints
[百川智能]: https://www.baichuan-ai.com
|
sdpkjc/Ant-v4-sac_continuous_action-seed2
|
sdpkjc
| 2023-12-19T09:57:43Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"Ant-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-19T09:57:34Z |
---
tags:
- Ant-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: SAC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Ant-v4
type: Ant-v4
metrics:
- type: mean_reward
value: 5816.91 +/- 66.05
name: mean_reward
verified: false
---
# (CleanRL) **SAC** Agent Playing **Ant-v4**
This is a trained model of a SAC agent playing Ant-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sac_continuous_action.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[sac_continuous_action]"
python -m cleanrl_utils.enjoy --exp-name sac_continuous_action --env-id Ant-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/sdpkjc/Ant-v4-sac_continuous_action-seed2/raw/main/sac_continuous_action.py
curl -OL https://huggingface.co/sdpkjc/Ant-v4-sac_continuous_action-seed2/raw/main/pyproject.toml
curl -OL https://huggingface.co/sdpkjc/Ant-v4-sac_continuous_action-seed2/raw/main/poetry.lock
poetry install --all-extras
python sac_continuous_action.py --save-model --upload-model --hf-entity sdpkjc --env-id Ant-v4 --seed 2 --track
```
# Hyperparameters
```python
{'alpha': 0.2,
'autotune': True,
'batch_size': 256,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'env_id': 'Ant-v4',
'exp_name': 'sac_continuous_action',
'gamma': 0.99,
'hf_entity': 'sdpkjc',
'learning_starts': 5000.0,
'noise_clip': 0.5,
'policy_frequency': 2,
'policy_lr': 0.0003,
'q_lr': 0.001,
'save_model': True,
'seed': 2,
'target_network_frequency': 1,
'tau': 0.005,
'torch_deterministic': True,
'total_timesteps': 1000000,
'track': True,
'upload_model': True,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
Breyten/mistral-instruct-dutch-syntax-10000
|
Breyten
| 2023-12-19T09:56:16Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2023-12-16T22:54:25Z |
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.1
model-index:
- name: mistral-instruct-dutch-syntax-10000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral-7B-Instruct-v0.1-syntax2023-12-16-21-24
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on a Lassy_small dataset curated for Dutch syntax.
10,000 samples were used, with a batch size of 2, over 2 epochs.
It achieves the following results on the evaluation set:
- Loss: 0.2522
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 0.03
- training_steps: 10000
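The `linear` scheduler above ramps the learning rate up during warmup and then decays it linearly to zero. A minimal sketch of that shape (the warmup step count of 300 is an assumption for illustration; the card lists `lr_scheduler_warmup_steps: 0.03`, which reads like a ratio, and this is not the exact Hugging Face implementation):

```python
def linear_schedule_lr(step, base_lr=2.5e-5, warmup_steps=300, total_steps=10000):
    """Sketch of a linear warmup + linear decay learning-rate schedule."""
    if step < warmup_steps:
        # ramp from 0 up to base_lr during warmup
        return base_lr * step / max(1, warmup_steps)
    # decay linearly from base_lr down to 0 by total_steps
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))
```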
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.7075 | 0.11 | 500 | 0.6710 |
| 0.3569 | 0.21 | 1000 | 0.4348 |
| 0.3458 | 0.32 | 1500 | 0.3517 |
| 0.3325 | 0.42 | 2000 | 0.3151 |
| 0.3014 | 0.53 | 2500 | 0.2928 |
| 0.2304 | 0.63 | 3000 | 0.2817 |
| 0.2984 | 0.74 | 3500 | 0.2736 |
| 0.2283 | 0.84 | 4000 | 0.2680 |
| 0.2399 | 0.95 | 4500 | 0.2640 |
| 0.24 | 1.05 | 5000 | 0.2609 |
| 0.2039 | 1.16 | 5500 | 0.2588 |
| 0.2447 | 1.26 | 6000 | 0.2558 |
| 0.2377 | 1.37 | 6500 | 0.2544 |
| 0.2399 | 1.47 | 7000 | 0.2544 |
| 0.2424 | 1.58 | 7500 | 0.2532 |
| 0.2626 | 1.68 | 8000 | 0.2527 |
| 0.2346 | 1.79 | 8500 | 0.2524 |
| 0.2194 | 1.89 | 9000 | 0.2522 |
| 0.2123 | 2.0 | 9500 | 0.2522 |
| 0.2618 | 2.11 | 10000 | 0.2522 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.1
- Pytorch 2.1.2+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
maonx/gtemodel1
|
maonx
| 2023-12-19T09:52:32Z | 5 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-11-30T07:34:24Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# gtemodel2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('gtemodel2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=gtemodel2)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 159 with parameters:
```
{'batch_size': 36, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.ContrastiveLoss.ContrastiveLoss` with parameters:
```
{'distance_metric': 'SiameseDistanceMetric.COSINE_DISTANCE', 'margin': 0.5, 'size_average': True}
```
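ContrastiveLoss with cosine distance and a 0.5 margin, as configured above, can be sketched per pair in a few lines (illustrative only; the real sentence-transformers implementation operates on batched embedding tensors):

```python
def contrastive_loss(cos_sim, label, margin=0.5):
    """SBERT-style contrastive loss on cosine distance (per-pair sketch)."""
    d = 1.0 - cos_sim  # SiameseDistanceMetric.COSINE_DISTANCE
    if label == 1:
        # similar pair: penalize any residual distance
        return 0.5 * d * d
    # dissimilar pair: penalize only if closer than the margin
    return 0.5 * max(0.0, margin - d) ** 2
```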
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 5e-06
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 79,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
sdpkjc/Walker2d-v4-sac_continuous_action-seed2
|
sdpkjc
| 2023-12-19T09:51:57Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"Walker2d-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-19T09:51:48Z |
---
tags:
- Walker2d-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: SAC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Walker2d-v4
type: Walker2d-v4
metrics:
- type: mean_reward
value: 3860.43 +/- 46.19
name: mean_reward
verified: false
---
# (CleanRL) **SAC** Agent Playing **Walker2d-v4**
This is a trained model of a SAC agent playing Walker2d-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sac_continuous_action.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[sac_continuous_action]"
python -m cleanrl_utils.enjoy --exp-name sac_continuous_action --env-id Walker2d-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/sdpkjc/Walker2d-v4-sac_continuous_action-seed2/raw/main/sac_continuous_action.py
curl -OL https://huggingface.co/sdpkjc/Walker2d-v4-sac_continuous_action-seed2/raw/main/pyproject.toml
curl -OL https://huggingface.co/sdpkjc/Walker2d-v4-sac_continuous_action-seed2/raw/main/poetry.lock
poetry install --all-extras
python sac_continuous_action.py --save-model --upload-model --hf-entity sdpkjc --env-id Walker2d-v4 --seed 2 --track
```
# Hyperparameters
```python
{'alpha': 0.2,
'autotune': True,
'batch_size': 256,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'env_id': 'Walker2d-v4',
'exp_name': 'sac_continuous_action',
'gamma': 0.99,
'hf_entity': 'sdpkjc',
'learning_starts': 5000.0,
'noise_clip': 0.5,
'policy_frequency': 2,
'policy_lr': 0.0003,
'q_lr': 0.001,
'save_model': True,
'seed': 2,
'target_network_frequency': 1,
'tau': 0.005,
'torch_deterministic': True,
'total_timesteps': 1000000,
'track': True,
'upload_model': True,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
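The `mean_reward` reported in the model-index above is the mean ± standard deviation of episodic returns collected during evaluation. A minimal sketch of that summary (the function name is assumed for illustration and is not part of CleanRL):

```python
import statistics

def summarize_returns(returns):
    """Summarize evaluation episode returns as (mean, population std)."""
    return statistics.fmean(returns), statistics.pstdev(returns)

mean, std = summarize_returns([3810.0, 3860.0, 3910.0])
```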
|
sdpkjc/HalfCheetah-v4-sac_continuous_action-seed5
|
sdpkjc
| 2023-12-19T09:51:00Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"HalfCheetah-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-19T09:50:51Z |
---
tags:
- HalfCheetah-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: SAC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: HalfCheetah-v4
type: HalfCheetah-v4
metrics:
- type: mean_reward
value: 8328.68 +/- 81.00
name: mean_reward
verified: false
---
# (CleanRL) **SAC** Agent Playing **HalfCheetah-v4**
This is a trained model of a SAC agent playing HalfCheetah-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sac_continuous_action.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[sac_continuous_action]"
python -m cleanrl_utils.enjoy --exp-name sac_continuous_action --env-id HalfCheetah-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/sdpkjc/HalfCheetah-v4-sac_continuous_action-seed5/raw/main/sac_continuous_action.py
curl -OL https://huggingface.co/sdpkjc/HalfCheetah-v4-sac_continuous_action-seed5/raw/main/pyproject.toml
curl -OL https://huggingface.co/sdpkjc/HalfCheetah-v4-sac_continuous_action-seed5/raw/main/poetry.lock
poetry install --all-extras
python sac_continuous_action.py --save-model --upload-model --hf-entity sdpkjc --env-id HalfCheetah-v4 --seed 5 --track
```
# Hyperparameters
```python
{'alpha': 0.2,
'autotune': True,
'batch_size': 256,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'env_id': 'HalfCheetah-v4',
'exp_name': 'sac_continuous_action',
'gamma': 0.99,
'hf_entity': 'sdpkjc',
'learning_starts': 5000.0,
'noise_clip': 0.5,
'policy_frequency': 2,
'policy_lr': 0.0003,
'q_lr': 0.001,
'save_model': True,
'seed': 5,
'target_network_frequency': 1,
'tau': 0.005,
'torch_deterministic': True,
'total_timesteps': 1000000,
'track': True,
'upload_model': True,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
Breyten/mistral-instruct-dutch-syntax-2000
|
Breyten
| 2023-12-19T09:50:13Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2023-12-16T20:41:32Z |
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.1
model-index:
- name: mistral-instruct-dutch-syntax-2000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-instruct-dutch-syntax-2000
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on a curated version of Lassy-Small with Dutch syntax data, using 2000 samples.
It achieves the following results on the evaluation set:
- Loss: 0.6808
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 0.03
- training_steps: 950
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1019 | 0.11 | 100 | 1.0701 |
| 0.9093 | 0.21 | 200 | 0.9592 |
| 0.8341 | 0.32 | 300 | 0.8800 |
| 0.7975 | 0.42 | 400 | 0.8150 |
| 0.7859 | 0.53 | 500 | 0.7638 |
| 0.7069 | 0.63 | 600 | 0.7254 |
| 0.6007 | 0.74 | 700 | 0.6974 |
| 0.6971 | 0.84 | 800 | 0.6832 |
| 0.6331 | 0.95 | 900 | 0.6808 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.1
- Pytorch 2.1.2+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
sdpkjc/Hopper-v4-sac_continuous_action-seed2
|
sdpkjc
| 2023-12-19T09:46:45Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"Hopper-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-19T09:46:39Z |
---
tags:
- Hopper-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: SAC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Hopper-v4
type: Hopper-v4
metrics:
- type: mean_reward
value: 1481.20 +/- 156.26
name: mean_reward
verified: false
---
# (CleanRL) **SAC** Agent Playing **Hopper-v4**
This is a trained model of a SAC agent playing Hopper-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sac_continuous_action.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[sac_continuous_action]"
python -m cleanrl_utils.enjoy --exp-name sac_continuous_action --env-id Hopper-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/sdpkjc/Hopper-v4-sac_continuous_action-seed2/raw/main/sac_continuous_action.py
curl -OL https://huggingface.co/sdpkjc/Hopper-v4-sac_continuous_action-seed2/raw/main/pyproject.toml
curl -OL https://huggingface.co/sdpkjc/Hopper-v4-sac_continuous_action-seed2/raw/main/poetry.lock
poetry install --all-extras
python sac_continuous_action.py --save-model --upload-model --hf-entity sdpkjc --env-id Hopper-v4 --seed 2 --track
```
# Hyperparameters
```python
{'alpha': 0.2,
'autotune': True,
'batch_size': 256,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'env_id': 'Hopper-v4',
'exp_name': 'sac_continuous_action',
'gamma': 0.99,
'hf_entity': 'sdpkjc',
'learning_starts': 5000.0,
'noise_clip': 0.5,
'policy_frequency': 2,
'policy_lr': 0.0003,
'q_lr': 0.001,
'save_model': True,
'seed': 2,
'target_network_frequency': 1,
'tau': 0.005,
'torch_deterministic': True,
'total_timesteps': 1000000,
'track': True,
'upload_model': True,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
SebastianSchramm/LlamaGuard-7b-GPTQ-4bit-128g-actorder_True
|
SebastianSchramm
| 2023-12-19T09:44:35Z | 8 | 2 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-2",
"4bit",
"gptq",
"conversational",
"en",
"base_model:meta-llama/LlamaGuard-7b",
"base_model:quantized:meta-llama/LlamaGuard-7b",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] |
text-generation
| 2023-12-08T17:54:13Z |
---
license: llama2
language:
- en
library_name: transformers
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
- 4bit
- gptq
base_model: meta-llama/LlamaGuard-7b
inference: false
---
# Quantized version of meta-llama/LlamaGuard-7b
## Model Description
The model [meta-llama/LlamaGuard-7b](https://huggingface.co/meta-llama/LlamaGuard-7b) was quantized to 4-bit with group_size 128 and act-order=True, using the auto-gptq integration in transformers (https://huggingface.co/blog/gptq-integration).
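With group_size 128, weights are quantized in groups that share a scale and zero-point on a 16-level (4-bit) grid. The round-to-nearest sketch below illustrates only that grid; actual GPTQ additionally compensates quantization error using second-order information about the layer activations:

```python
def quantize_group(weights, bits=4):
    """Illustrative asymmetric round-to-nearest quantization of one weight group."""
    qmax = 2 ** bits - 1  # 15: highest integer code for 4-bit
    wmin, wmax = min(weights), max(weights)
    scale = (wmax - wmin) / qmax or 1.0  # avoid division by zero for constant groups
    q = [round((w - wmin) / scale) for w in weights]   # integer codes in [0, qmax]
    dequant = [code * scale + wmin for code in q]      # reconstructed weights
    return q, dequant

codes, approx = quantize_group([0.0, 0.4, 1.5])
```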
## Evaluation
To evaluate the quantized model and compare it with the full-precision model, I performed binary classification on the "toxicity" label from the ~5k-sample test set of lmsys/toxic-chat.
📊 Full Precision Model:
Average Precision Score: 0.3625
📊 4-bit Quantized Model:
Average Precision Score: 0.3450
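Average precision can be computed from the per-sample toxicity scores and binary labels; a simplified sketch (it ignores score ties, but matches the standard ranking-based definition in that case):

```python
def average_precision(y_true, y_score):
    """Simplified average precision over a ranked list (ignores score ties)."""
    order = sorted(range(len(y_score)), key=lambda i: -y_score[i])
    total_pos = sum(y_true)
    tp, ap = 0, 0.0
    for rank, i in enumerate(order, start=1):
        if y_true[i]:
            tp += 1
            ap += tp / rank  # precision at each positive's rank
    return ap / total_pos

score = average_precision([1, 0, 1, 0], [0.9, 0.8, 0.7, 0.1])
```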
|
sdpkjc/Hopper-v4-sac_continuous_action-seed5
|
sdpkjc
| 2023-12-19T09:43:49Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"Hopper-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-19T09:43:44Z |
---
tags:
- Hopper-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: SAC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Hopper-v4
type: Hopper-v4
metrics:
- type: mean_reward
value: 1680.67 +/- 734.03
name: mean_reward
verified: false
---
# (CleanRL) **SAC** Agent Playing **Hopper-v4**
This is a trained model of a SAC agent playing Hopper-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sac_continuous_action.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[sac_continuous_action]"
python -m cleanrl_utils.enjoy --exp-name sac_continuous_action --env-id Hopper-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/sdpkjc/Hopper-v4-sac_continuous_action-seed5/raw/main/sac_continuous_action.py
curl -OL https://huggingface.co/sdpkjc/Hopper-v4-sac_continuous_action-seed5/raw/main/pyproject.toml
curl -OL https://huggingface.co/sdpkjc/Hopper-v4-sac_continuous_action-seed5/raw/main/poetry.lock
poetry install --all-extras
python sac_continuous_action.py --save-model --upload-model --hf-entity sdpkjc --env-id Hopper-v4 --seed 5 --track
```
# Hyperparameters
```python
{'alpha': 0.2,
'autotune': True,
'batch_size': 256,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'env_id': 'Hopper-v4',
'exp_name': 'sac_continuous_action',
'gamma': 0.99,
'hf_entity': 'sdpkjc',
'learning_starts': 5000.0,
'noise_clip': 0.5,
'policy_frequency': 2,
'policy_lr': 0.0003,
'q_lr': 0.001,
'save_model': True,
'seed': 5,
'target_network_frequency': 1,
'tau': 0.005,
'torch_deterministic': True,
'total_timesteps': 1000000,
'track': True,
'upload_model': True,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
nqtruong/detr-resnet-50_finetuned_cppe5
|
nqtruong
| 2023-12-19T09:42:59Z | 33 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"detr",
"object-detection",
"generated_from_trainer",
"dataset:cppe-5",
"base_model:facebook/detr-resnet-50",
"base_model:finetune:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2023-12-12T08:13:59Z |
---
license: apache-2.0
base_model: facebook/detr-resnet-50
tags:
- generated_from_trainer
datasets:
- cppe-5
model-index:
- name: detr-resnet-50_finetuned_cppe5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50_finetuned_cppe5
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cppe-5 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
sdpkjc/Humanoid-v4-sac_continuous_action-seed2
|
sdpkjc
| 2023-12-19T09:42:38Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"Humanoid-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-19T09:42:23Z |
---
tags:
- Humanoid-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: SAC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Humanoid-v4
type: Humanoid-v4
metrics:
- type: mean_reward
value: 4993.72 +/- 1028.23
name: mean_reward
verified: false
---
# (CleanRL) **SAC** Agent Playing **Humanoid-v4**
This is a trained model of a SAC agent playing Humanoid-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sac_continuous_action.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[sac_continuous_action]"
python -m cleanrl_utils.enjoy --exp-name sac_continuous_action --env-id Humanoid-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/sdpkjc/Humanoid-v4-sac_continuous_action-seed2/raw/main/sac_continuous_action.py
curl -OL https://huggingface.co/sdpkjc/Humanoid-v4-sac_continuous_action-seed2/raw/main/pyproject.toml
curl -OL https://huggingface.co/sdpkjc/Humanoid-v4-sac_continuous_action-seed2/raw/main/poetry.lock
poetry install --all-extras
python sac_continuous_action.py --save-model --upload-model --hf-entity sdpkjc --env-id Humanoid-v4 --seed 2 --track
```
# Hyperparameters
```python
{'alpha': 0.2,
'autotune': True,
'batch_size': 256,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'env_id': 'Humanoid-v4',
'exp_name': 'sac_continuous_action',
'gamma': 0.99,
'hf_entity': 'sdpkjc',
'learning_starts': 5000.0,
'noise_clip': 0.5,
'policy_frequency': 2,
'policy_lr': 0.0003,
'q_lr': 0.001,
'save_model': True,
'seed': 2,
'target_network_frequency': 1,
'tau': 0.005,
'torch_deterministic': True,
'total_timesteps': 1000000,
'track': True,
'upload_model': True,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
hkivancoral/smids_10x_deit_small_adamax_00001_fold2
|
hkivancoral
| 2023-12-19T09:42:28Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"base_model:finetune:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-19T08:33:59Z |
---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_10x_deit_small_adamax_00001_fold2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8718801996672213
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_small_adamax_00001_fold2
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1874
- Accuracy: 0.8719
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2546 | 1.0 | 750 | 0.2964 | 0.8885 |
| 0.1392 | 2.0 | 1500 | 0.2964 | 0.8935 |
| 0.1051 | 3.0 | 2250 | 0.3173 | 0.8802 |
| 0.0797 | 4.0 | 3000 | 0.3716 | 0.8802 |
| 0.0803 | 5.0 | 3750 | 0.4496 | 0.8769 |
| 0.0599 | 6.0 | 4500 | 0.5455 | 0.8769 |
| 0.0367 | 7.0 | 5250 | 0.6753 | 0.8686 |
| 0.0203 | 8.0 | 6000 | 0.7402 | 0.8752 |
| 0.0136 | 9.0 | 6750 | 0.8455 | 0.8686 |
| 0.0001 | 10.0 | 7500 | 0.8969 | 0.8686 |
| 0.0056 | 11.0 | 8250 | 0.9305 | 0.8769 |
| 0.0002 | 12.0 | 9000 | 0.9474 | 0.8752 |
| 0.0 | 13.0 | 9750 | 0.9957 | 0.8785 |
| 0.0 | 14.0 | 10500 | 1.0123 | 0.8769 |
| 0.0001 | 15.0 | 11250 | 0.9720 | 0.8835 |
| 0.0001 | 16.0 | 12000 | 1.0684 | 0.8785 |
| 0.0003 | 17.0 | 12750 | 1.1079 | 0.8752 |
| 0.0 | 18.0 | 13500 | 1.0971 | 0.8752 |
| 0.0 | 19.0 | 14250 | 1.0987 | 0.8735 |
| 0.0 | 20.0 | 15000 | 1.1190 | 0.8769 |
| 0.0 | 21.0 | 15750 | 1.1376 | 0.8686 |
| 0.0049 | 22.0 | 16500 | 1.1379 | 0.8686 |
| 0.0014 | 23.0 | 17250 | 1.1542 | 0.8752 |
| 0.0 | 24.0 | 18000 | 1.1536 | 0.8735 |
| 0.0 | 25.0 | 18750 | 1.1721 | 0.8719 |
| 0.0 | 26.0 | 19500 | 1.1498 | 0.8719 |
| 0.01 | 27.0 | 20250 | 1.1595 | 0.8719 |
| 0.0 | 28.0 | 21000 | 1.1250 | 0.8785 |
| 0.0 | 29.0 | 21750 | 1.1514 | 0.8686 |
| 0.0 | 30.0 | 22500 | 1.1182 | 0.8735 |
| 0.0 | 31.0 | 23250 | 1.1637 | 0.8752 |
| 0.0 | 32.0 | 24000 | 1.1726 | 0.8735 |
| 0.0 | 33.0 | 24750 | 1.1697 | 0.8719 |
| 0.0 | 34.0 | 25500 | 1.1588 | 0.8752 |
| 0.0 | 35.0 | 26250 | 1.1653 | 0.8702 |
| 0.0 | 36.0 | 27000 | 1.1669 | 0.8719 |
| 0.0141 | 37.0 | 27750 | 1.1767 | 0.8719 |
| 0.0 | 38.0 | 28500 | 1.1781 | 0.8719 |
| 0.0 | 39.0 | 29250 | 1.1951 | 0.8702 |
| 0.0 | 40.0 | 30000 | 1.1887 | 0.8702 |
| 0.0 | 41.0 | 30750 | 1.1872 | 0.8702 |
| 0.0 | 42.0 | 31500 | 1.1896 | 0.8702 |
| 0.0 | 43.0 | 32250 | 1.1930 | 0.8702 |
| 0.0 | 44.0 | 33000 | 1.1942 | 0.8702 |
| 0.0056 | 45.0 | 33750 | 1.1902 | 0.8702 |
| 0.0 | 46.0 | 34500 | 1.1880 | 0.8702 |
| 0.0 | 47.0 | 35250 | 1.1877 | 0.8702 |
| 0.0 | 48.0 | 36000 | 1.1882 | 0.8702 |
| 0.0 | 49.0 | 36750 | 1.1884 | 0.8702 |
| 0.0 | 50.0 | 37500 | 1.1874 | 0.8719 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
Vageesh1/Appointment_bot
|
Vageesh1
| 2023-12-19T09:42:15Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2023-12-14T17:51:41Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
sdpkjc/Swimmer-v4-sac_continuous_action-seed3
|
sdpkjc
| 2023-12-19T09:41:25Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"Swimmer-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-19T09:41:19Z |
---
tags:
- Swimmer-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: SAC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Swimmer-v4
type: Swimmer-v4
metrics:
- type: mean_reward
value: 149.90 +/- 5.08
name: mean_reward
verified: false
---
# (CleanRL) **SAC** Agent Playing **Swimmer-v4**
This is a trained model of a SAC agent playing Swimmer-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sac_continuous_action.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[sac_continuous_action]"
python -m cleanrl_utils.enjoy --exp-name sac_continuous_action --env-id Swimmer-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/sdpkjc/Swimmer-v4-sac_continuous_action-seed3/raw/main/sac_continuous_action.py
curl -OL https://huggingface.co/sdpkjc/Swimmer-v4-sac_continuous_action-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/sdpkjc/Swimmer-v4-sac_continuous_action-seed3/raw/main/poetry.lock
poetry install --all-extras
python sac_continuous_action.py --save-model --upload-model --hf-entity sdpkjc --env-id Swimmer-v4 --seed 3 --track
```
# Hyperparameters
```python
{'alpha': 0.2,
'autotune': True,
'batch_size': 256,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'env_id': 'Swimmer-v4',
'exp_name': 'sac_continuous_action',
'gamma': 0.99,
'hf_entity': 'sdpkjc',
'learning_starts': 5000.0,
'noise_clip': 0.5,
'policy_frequency': 2,
'policy_lr': 0.0003,
'q_lr': 0.001,
'save_model': True,
'seed': 3,
'target_network_frequency': 1,
'tau': 0.005,
'torch_deterministic': True,
'total_timesteps': 1000000,
'track': True,
'upload_model': True,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
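The `tau` value in the hyperparameters above controls SAC's Polyak soft update of the target networks. A minimal illustrative sketch of that update rule (pure Python, not CleanRL's actual tensor code):

```python
def soft_update(target_params, source_params, tau=0.005):
    """Polyak averaging: target <- (1 - tau) * target + tau * source."""
    return [(1.0 - tau) * t + tau * s for t, s in zip(target_params, source_params)]

# With tau=0.005 the target moves only 0.5% toward the online network each step.
new_target = soft_update([0.0, 1.0], [1.0, 0.0], tau=0.005)
print(new_target)
```

A small `tau` keeps the target networks slowly moving, which stabilizes the Q-learning bootstrap targets.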
|
sdpkjc/HalfCheetah-v4-sac_continuous_action-seed2
|
sdpkjc
| 2023-12-19T09:38:04Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"HalfCheetah-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-19T09:37:56Z |
---
tags:
- HalfCheetah-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: SAC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: HalfCheetah-v4
type: HalfCheetah-v4
metrics:
- type: mean_reward
value: 11847.43 +/- 289.60
name: mean_reward
verified: false
---
# (CleanRL) **SAC** Agent Playing **HalfCheetah-v4**
This is a trained model of a SAC agent playing HalfCheetah-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sac_continuous_action.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[sac_continuous_action]"
python -m cleanrl_utils.enjoy --exp-name sac_continuous_action --env-id HalfCheetah-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/sdpkjc/HalfCheetah-v4-sac_continuous_action-seed2/raw/main/sac_continuous_action.py
curl -OL https://huggingface.co/sdpkjc/HalfCheetah-v4-sac_continuous_action-seed2/raw/main/pyproject.toml
curl -OL https://huggingface.co/sdpkjc/HalfCheetah-v4-sac_continuous_action-seed2/raw/main/poetry.lock
poetry install --all-extras
python sac_continuous_action.py --save-model --upload-model --hf-entity sdpkjc --env-id HalfCheetah-v4 --seed 2 --track
```
# Hyperparameters
```python
{'alpha': 0.2,
'autotune': True,
'batch_size': 256,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'env_id': 'HalfCheetah-v4',
'exp_name': 'sac_continuous_action',
'gamma': 0.99,
'hf_entity': 'sdpkjc',
'learning_starts': 5000.0,
'noise_clip': 0.5,
'policy_frequency': 2,
'policy_lr': 0.0003,
'q_lr': 0.001,
'save_model': True,
'seed': 2,
'target_network_frequency': 1,
'tau': 0.005,
'torch_deterministic': True,
'total_timesteps': 1000000,
'track': True,
'upload_model': True,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
rezashr/q-Taxi-v3
|
rezashr
| 2023-12-19T09:29:52Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-19T09:29:49Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gymnasium as gym  # older notebooks use `gym` instead

# `load_from_hub` is the Deep RL course helper that downloads and unpickles the Q-table dict.
model = load_from_hub(repo_id="rezashr/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
sai-s2t/m2m100_1.2B
|
sai-s2t
| 2023-12-19T09:25:00Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-12-19T09:16:31Z |
---
license: mit
---
This is a fork of [this model](https://huggingface.co/facebook/m2m100_1.2B)
|
satani/phtben-2
|
satani
| 2023-12-19T09:18:34Z | 4 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-12-19T09:14:30Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### phtben_2 Dreambooth model trained by satani with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
Zigeng/SlimSAM
|
Zigeng
| 2023-12-19T09:16:39Z | 0 | 0 | null |
[
"arxiv:2312.05284",
"arxiv:2304.02643",
"license:apache-2.0",
"region:us"
] | null | 2023-12-19T04:55:18Z |
---
license: apache-2.0
---
# SlimSAM: 0.1% Data Makes Segment Anything Slim
<div align="center">
<img src="images/paper/intro.PNG" width="66%">
<img src="images/paper/everything.PNG" width="100%">
</div>
> **0.1% Data Makes Segment Anything Slim**
> [Zigeng Chen](https://github.com/czg1225), [Gongfan Fang](https://fangggf.github.io/), [Xinyin Ma](https://horseee.github.io/), [Xinchao Wang](https://sites.google.com/site/sitexinchaowang/)
> [Learning and Vision Lab](http://lv-nus.org/), National University of Singapore
> Paper: [[Arxiv]](https://arxiv.org/abs/2312.05284)
### Updates
* 🚀 **December 11, 2023**: Release the training code, inference code and pre-trained models for **SlimSAM**.
## Introduction
<div align="center">
<img src="images/paper/process.PNG" width="100%">
</div>
**SlimSAM** is a novel SAM compression method that efficiently reuses pre-trained SAMs without extensive retraining, through a unified pruning-distillation framework. To enhance knowledge inheritance from the original SAM, we employ an alternate slimming strategy that partitions the compression process into a progressive procedure. Diverging from prior pruning techniques, we meticulously prune and distill decoupled model structures in an alternating fashion. Furthermore, a novel label-free pruning criterion is proposed to align the pruning objective with the optimization target, boosting post-distillation performance after pruning.

SlimSAM achieves approaching performance while reducing the parameter counts to **0.9\% (5.7M)**, MACs to **0.8\% (21G)**, and requiring mere **0.1\% (10k)** of the training data when compared to the original SAM-H. Extensive experiments demonstrate that our method achieves significantly superior performance while utilizing over **10 times** less training data when compared to other SAM compression methods.
## Visualization Results
Qualitative comparison of results obtained using point prompts, box prompts, and segment everything prompts are shown in the following section.
### Segment Everything Prompts
<div align="center">
<img src="images/paper/everything2.PNG" width="100%">
</div>
### Box Prompts and Point Prompts
<div align="center">
<img src="images/paper/prompt.PNG" width="100%">
</div>
## Quantitative Results
We conducted a comprehensive comparison encompassing performance, efficiency, and training costs with other SAM compression methods and structural pruning methods.
### Comparing with other SAM compression methods.
<div align="center">
<img src="images/paper/compare_tab1.PNG" width="100%">
</div>
### Comparing with other structural pruning methods.
<div align="center">
<img src="images/paper/compare_tab2.PNG" width="50%">
</div>
## Installation
The code requires `python>=3.8`, as well as `pytorch>=1.7` and `torchvision>=0.8`. Please follow the instructions [here](https://pytorch.org/get-started/locally/) to install both PyTorch and TorchVision dependencies. Installing both PyTorch and TorchVision with CUDA support is strongly recommended.
Install with
```
pip install -e .
```
The following optional dependencies are necessary for mask post-processing and for saving masks in COCO format.
```
pip install opencv-python pycocotools matplotlib
```
## Dataset
We use the original SA-1B dataset in our code. See [here](https://ai.facebook.com/datasets/segment-anything/) for an overview of the dataset. The dataset can be downloaded [here](https://ai.facebook.com/datasets/segment-anything-downloads/).
The downloaded dataset should be saved as:
```
<train_data_root>/
sa_xxxxxxx.jpg
sa_xxxxxxx.json
......
<val_data_root>/
sa_xxxxxxx.jpg
sa_xxxxxxx.json
......
```
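A small helper (illustrative only, not part of the SlimSAM codebase) that collects the image/annotation pairs from such a directory:

```python
from pathlib import Path

def collect_pairs(data_root):
    """Return (image, annotation) path pairs for an SA-1B style directory."""
    pairs = []
    for jpg in sorted(Path(data_root).glob("sa_*.jpg")):
        ann = jpg.with_suffix(".json")
        if ann.exists():  # skip images whose annotation file is missing
            pairs.append((jpg, ann))
    return pairs
```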
To decode a mask in COCO RLE format into binary:
```
from pycocotools import mask as mask_utils
mask = mask_utils.decode(annotation["segmentation"])
```
See [here](https://github.com/cocodataset/cocoapi/blob/master/PythonAPI/pycocotools/mask.py) for more instructions to manipulate masks stored in RLE format.
## <a name="Models"></a>Model Checkpoints
The base model of our method is available. To make it compatible with our dependency detection algorithm, we have split the original image encoder's qkv layer into three distinct linear layers: q, k, and v.
<div align="center">
<img src="images/paper/split.PNG" width="70%">
</div>
Click the links below to download the checkpoints of the original SAM-B.
- `SAM-B`: [SAM-B model.](https://drive.google.com/file/d/1CtcyOm4h9bXgBF8DEVWn3N7g9-3r4Xzz/view?usp=sharing)
The checkpoints of our SlimSAM are available. We release two versions: SlimSAM-50 (pruning ratio = 50%) and SlimSAM-77 (pruning ratio = 77%).
Click the links below to download the checkpoints for the corresponding pruning ratio.
- `SlimSAM-50`: [SlimSAM-50 model.](https://drive.google.com/file/d/1iCN9IW0Su0Ud_fOFoQUnTdkC3bFveMND/view?usp=sharing)
- `SlimSAM-77`: [SlimSAM-77 model.](https://drive.google.com/file/d/1L7LB6gHDzR-3D63pH9acD9E0Ul9_wMF-/view)
These models can be instantiated by running
```
import torch
SlimSAM_model = torch.load(<model_path>)
SlimSAM_model.image_encoder = SlimSAM_model.image_encoder.module
def forward(self, x):
x = self.patch_embed(x)
if self.pos_embed is not None:
x = x + self.pos_embed
for blk in self.blocks:
x,qkv_emb,mid_emb,x_emb = blk(x)
x = self.neck(x.permute(0, 3, 1, 2))
return x
import types
funcType = types.MethodType
SlimSAM_model.image_encoder.forward = funcType(forward, SlimSAM_model.image_encoder)
```
## <a name="Inference"></a>Inference
First download the [SlimSAM-50 model](https://drive.google.com/file/d/1iCN9IW0Su0Ud_fOFoQUnTdkC3bFveMND/view?usp=sharing) or [SlimSAM-77 model](https://drive.google.com/file/d/1L7LB6gHDzR-3D63pH9acD9E0Ul9_wMF-/view) for inference.
We provide detailed instructions in 'inference.py' on how to use a range of prompts, including 'point', 'box', and 'everything', for inference.
```
CUDA_VISIBLE_DEVICES=0 python inference.py
```
## <a name="Train"></a>Train
First download a [SAM-B model](https://drive.google.com/file/d/1CtcyOm4h9bXgBF8DEVWn3N7g9-3r4Xzz/view?usp=sharing) into 'checkpoints/' as the base model.
### Step1: Embedding Pruning + Bottleneck Aligning ###
The model after step1 is saved as 'checkpoints/vit_b_slim_step1_.pth'
```
CUDA_VISIBLE_DEVICES=0 python prune_distill_step1.py --traindata_path <train_data_root> --valdata_path <val_data_root> --prune_ratio <pruning ratio> --epochs <training epochs>
```
### Step2: Bottleneck Pruning + Embedding Aligning ###
The model after step2 is saved as 'checkpoints/vit_b_slim_step2_.pth'
```
CUDA_VISIBLE_DEVICES=0 python prune_distill_step2.py --traindata_path <train_data_root> --valdata_path <val_data_root> --prune_ratio <pruning ratio> --epochs <training epochs> --model_path 'checkpoints/vit_b_slim_step1_.pth'
```
You can adjust the training settings to meet your specific requirements. While our method demonstrates impressive performance with just 10,000 training samples, incorporating additional training data will further enhance the model's effectiveness.
## BibTex of our SlimSAM
If you use SlimSAM in your research, please use the following BibTeX entry. Thank you!
```bibtex
@misc{chen202301,
title={0.1% Data Makes Segment Anything Slim},
author={Zigeng Chen and Gongfan Fang and Xinyin Ma and Xinchao Wang},
year={2023},
eprint={2312.05284},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
## Acknowledgement
<details>
<summary>
<a href="https://github.com/facebookresearch/segment-anything">SAM</a> (Segment Anything) [<b>bib</b>]
</summary>
```bibtex
@article{kirillov2023segany,
title={Segment Anything},
author={Kirillov, Alexander and Mintun, Eric and Ravi, Nikhila and Mao, Hanzi and Rolland, Chloe and Gustafson, Laura and Xiao, Tete and Whitehead, Spencer and Berg, Alexander C. and Lo, Wan-Yen and Doll{\'a}r, Piotr and Girshick, Ross},
journal={arXiv:2304.02643},
year={2023}
}
```
</details>
<details>
<summary>
<a href="https://github.com/VainF/Torch-Pruning">Torch Pruning</a> (DepGraph: Towards Any Structural Pruning) [<b>bib</b>]
</summary>
```bibtex
@inproceedings{fang2023depgraph,
title={Depgraph: Towards any structural pruning},
author={Fang, Gongfan and Ma, Xinyin and Song, Mingli and Mi, Michael Bi and Wang, Xinchao},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={16091--16101},
year={2023}
}
```
</details>
|
margati/distilbert_base_multilingual_cased_ru_action_min_chunks_works_19_12
|
margati
| 2023-12-19T09:07:00Z | 2 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-multilingual-cased",
"base_model:finetune:distilbert/distilbert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-19T09:06:41Z |
---
license: apache-2.0
base_model: distilbert-base-multilingual-cased
tags:
- generated_from_keras_callback
model-index:
- name: distilbert_base_multilingual_cased_ru_action_min_chunks_works_19_12
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilbert_base_multilingual_cased_ru_action_min_chunks_works_19_12
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0535
- Validation Loss: 1.3927
- Train Accuracy: 0.6869
- Epoch: 14
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 6660, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
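With `power: 1.0` and `cycle: False`, the `PolynomialDecay` schedule in the optimizer config above reduces to a plain linear decay. A small sketch using the constants from the config (illustrative, not the Keras implementation):

```python
def polynomial_decay(step, initial_lr=2e-05, decay_steps=6660, end_lr=0.0, power=1.0):
    """Keras-style PolynomialDecay without cycling."""
    step = min(step, decay_steps)
    frac = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * (frac ** power) + end_lr

print(polynomial_decay(0))      # starts at the initial learning rate
print(polynomial_decay(6660))   # fully decayed to end_lr
```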
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.6916 | 0.6779 | 0.5859 | 0 |
| 0.6895 | 0.6660 | 0.6162 | 1 |
| 0.6505 | 0.6476 | 0.6566 | 2 |
| 0.5595 | 0.6096 | 0.7374 | 3 |
| 0.4751 | 0.7793 | 0.5960 | 4 |
| 0.3377 | 0.8518 | 0.6768 | 5 |
| 0.2418 | 1.0199 | 0.6465 | 6 |
| 0.1604 | 1.1340 | 0.6667 | 7 |
| 0.1399 | 1.1893 | 0.6465 | 8 |
| 0.1198 | 0.9966 | 0.6465 | 9 |
| 0.0854 | 1.2855 | 0.6768 | 10 |
| 0.0747 | 1.2972 | 0.6566 | 11 |
| 0.0594 | 1.3570 | 0.6970 | 12 |
| 0.0561 | 1.4063 | 0.6566 | 13 |
| 0.0535 | 1.3927 | 0.6869 | 14 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.15.0
- Tokenizers 0.15.0
|
uchihacero/perfectdeliberate_v5
|
uchihacero
| 2023-12-19T08:55:22Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-12-19T08:10:47Z |
---
license: creativeml-openrail-m
---
|
Federm1512/ppo-LunarLander-v2
|
Federm1512
| 2023-12-19T08:43:04Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-19T08:42:39Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 282.11 +/- 15.82
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; check the repo's file list):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub("Federm1512/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Kraaven/ppo-LunarLanderV2_Test
|
Kraaven
| 2023-12-19T08:42:21Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-19T08:42:01Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 262.24 +/- 14.66
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; check the repo's file list):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub("Kraaven/ppo-LunarLanderV2_Test", "ppo-LunarLanderV2_Test.zip")
model = PPO.load(checkpoint)
```
|
hkivancoral/smids_10x_deit_small_adamax_00001_fold1
|
hkivancoral
| 2023-12-19T08:33:27Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"base_model:finetune:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-19T07:24:57Z |
---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_10x_deit_small_adamax_00001_fold1
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9065108514190318
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_small_adamax_00001_fold1
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8282
- Accuracy: 0.9065
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
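The `linear` scheduler with `lr_scheduler_warmup_ratio: 0.1` ramps the learning rate up over the first 10% of steps, then decays it linearly to zero. A sketch using the constants above (illustrative, not the Transformers implementation):

```python
def linear_with_warmup(step, total_steps, base_lr=1e-05, warmup_ratio=0.1):
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)   # linear warmup
    # linear decay from base_lr down to 0 over the remaining steps
    remaining = total_steps - warmup_steps
    return base_lr * max(0.0, (total_steps - step) / max(1, remaining))
```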
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2758 | 1.0 | 751 | 0.3168 | 0.8831 |
| 0.1955 | 2.0 | 1502 | 0.2645 | 0.9032 |
| 0.1342 | 3.0 | 2253 | 0.2464 | 0.9115 |
| 0.0581 | 4.0 | 3004 | 0.2670 | 0.9032 |
| 0.0977 | 5.0 | 3755 | 0.3303 | 0.9115 |
| 0.0404 | 6.0 | 4506 | 0.3924 | 0.9048 |
| 0.0407 | 7.0 | 5257 | 0.4392 | 0.9098 |
| 0.0229 | 8.0 | 6008 | 0.5277 | 0.9132 |
| 0.023 | 9.0 | 6759 | 0.5759 | 0.9115 |
| 0.016 | 10.0 | 7510 | 0.6280 | 0.9032 |
| 0.0002 | 11.0 | 8261 | 0.6513 | 0.9098 |
| 0.0008 | 12.0 | 9012 | 0.6409 | 0.9182 |
| 0.006 | 13.0 | 9763 | 0.6473 | 0.9199 |
| 0.0 | 14.0 | 10514 | 0.7396 | 0.9065 |
| 0.0 | 15.0 | 11265 | 0.7703 | 0.9065 |
| 0.0 | 16.0 | 12016 | 0.7534 | 0.9065 |
| 0.0001 | 17.0 | 12767 | 0.8086 | 0.9032 |
| 0.0 | 18.0 | 13518 | 0.7937 | 0.9032 |
| 0.0 | 19.0 | 14269 | 0.7606 | 0.9165 |
| 0.0 | 20.0 | 15020 | 0.8234 | 0.9065 |
| 0.0001 | 21.0 | 15771 | 0.7617 | 0.9149 |
| 0.0 | 22.0 | 16522 | 0.8024 | 0.9015 |
| 0.0 | 23.0 | 17273 | 0.8089 | 0.9065 |
| 0.0 | 24.0 | 18024 | 0.8495 | 0.9015 |
| 0.0 | 25.0 | 18775 | 0.7997 | 0.9115 |
| 0.0 | 26.0 | 19526 | 0.8566 | 0.9015 |
| 0.0 | 27.0 | 20277 | 0.8140 | 0.9065 |
| 0.0 | 28.0 | 21028 | 0.8138 | 0.9065 |
| 0.0073 | 29.0 | 21779 | 0.7958 | 0.9082 |
| 0.0 | 30.0 | 22530 | 0.8037 | 0.9115 |
| 0.0 | 31.0 | 23281 | 0.8741 | 0.9032 |
| 0.0 | 32.0 | 24032 | 0.8298 | 0.9082 |
| 0.0 | 33.0 | 24783 | 0.8730 | 0.9015 |
| 0.0 | 34.0 | 25534 | 0.8840 | 0.8982 |
| 0.0 | 35.0 | 26285 | 0.8051 | 0.9132 |
| 0.0 | 36.0 | 27036 | 0.8192 | 0.9115 |
| 0.0 | 37.0 | 27787 | 0.8059 | 0.9132 |
| 0.0 | 38.0 | 28538 | 0.8065 | 0.9149 |
| 0.0 | 39.0 | 29289 | 0.8139 | 0.9132 |
| 0.0 | 40.0 | 30040 | 0.8141 | 0.9132 |
| 0.0 | 41.0 | 30791 | 0.8317 | 0.9098 |
| 0.0 | 42.0 | 31542 | 0.8371 | 0.9048 |
| 0.0 | 43.0 | 32293 | 0.8394 | 0.9032 |
| 0.0 | 44.0 | 33044 | 0.8362 | 0.9048 |
| 0.0 | 45.0 | 33795 | 0.8367 | 0.9048 |
| 0.0 | 46.0 | 34546 | 0.8416 | 0.9032 |
| 0.0 | 47.0 | 35297 | 0.8349 | 0.9048 |
| 0.0 | 48.0 | 36048 | 0.8314 | 0.9065 |
| 0.0 | 49.0 | 36799 | 0.8317 | 0.9065 |
| 0.0 | 50.0 | 37550 | 0.8282 | 0.9065 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
hkivancoral/smids_10x_deit_small_sgd_001_fold1
|
hkivancoral
| 2023-12-19T08:28:00Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"base_model:finetune:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-19T07:25:44Z |
---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_10x_deit_small_sgd_001_fold1
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9015025041736227
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_small_sgd_001_fold1
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2862
- Accuracy: 0.9015
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.5539 | 1.0 | 751 | 0.5690 | 0.7763 |
| 0.3867 | 2.0 | 1502 | 0.4456 | 0.8314 |
| 0.3236 | 3.0 | 2253 | 0.3927 | 0.8497 |
| 0.259 | 4.0 | 3004 | 0.3726 | 0.8514 |
| 0.3099 | 5.0 | 3755 | 0.3487 | 0.8598 |
| 0.2986 | 6.0 | 4506 | 0.3416 | 0.8715 |
| 0.2728 | 7.0 | 5257 | 0.3260 | 0.8731 |
| 0.2249 | 8.0 | 6008 | 0.3188 | 0.8781 |
| 0.2673 | 9.0 | 6759 | 0.3155 | 0.8848 |
| 0.2491 | 10.0 | 7510 | 0.3089 | 0.8848 |
| 0.2349 | 11.0 | 8261 | 0.3099 | 0.8881 |
| 0.2513 | 12.0 | 9012 | 0.3016 | 0.8898 |
| 0.2098 | 13.0 | 9763 | 0.3061 | 0.8898 |
| 0.1606 | 14.0 | 10514 | 0.3022 | 0.8881 |
| 0.1914 | 15.0 | 11265 | 0.2955 | 0.8881 |
| 0.2039 | 16.0 | 12016 | 0.2953 | 0.8898 |
| 0.2821 | 17.0 | 12767 | 0.2940 | 0.8965 |
| 0.1703 | 18.0 | 13518 | 0.2962 | 0.8915 |
| 0.2178 | 19.0 | 14269 | 0.2905 | 0.8965 |
| 0.1883 | 20.0 | 15020 | 0.2902 | 0.8998 |
| 0.13 | 21.0 | 15771 | 0.2893 | 0.8948 |
| 0.1613 | 22.0 | 16522 | 0.2875 | 0.8982 |
| 0.1627 | 23.0 | 17273 | 0.2879 | 0.8948 |
| 0.2201 | 24.0 | 18024 | 0.2853 | 0.8998 |
| 0.2067 | 25.0 | 18775 | 0.2893 | 0.8965 |
| 0.1982 | 26.0 | 19526 | 0.2860 | 0.8982 |
| 0.1922 | 27.0 | 20277 | 0.2854 | 0.8998 |
| 0.2065 | 28.0 | 21028 | 0.2873 | 0.8948 |
| 0.1663 | 29.0 | 21779 | 0.2836 | 0.9032 |
| 0.1637 | 30.0 | 22530 | 0.2824 | 0.9032 |
| 0.1216 | 31.0 | 23281 | 0.2840 | 0.8998 |
| 0.2073 | 32.0 | 24032 | 0.2863 | 0.9065 |
| 0.1694 | 33.0 | 24783 | 0.2888 | 0.8965 |
| 0.1525 | 34.0 | 25534 | 0.2882 | 0.8982 |
| 0.1562 | 35.0 | 26285 | 0.2864 | 0.9032 |
| 0.1612 | 36.0 | 27036 | 0.2821 | 0.9032 |
| 0.2418 | 37.0 | 27787 | 0.2832 | 0.9015 |
| 0.138 | 38.0 | 28538 | 0.2859 | 0.9032 |
| 0.0832 | 39.0 | 29289 | 0.2853 | 0.8998 |
| 0.1792 | 40.0 | 30040 | 0.2866 | 0.9015 |
| 0.1296 | 41.0 | 30791 | 0.2848 | 0.9032 |
| 0.1436 | 42.0 | 31542 | 0.2863 | 0.9032 |
| 0.1676 | 43.0 | 32293 | 0.2864 | 0.9015 |
| 0.129 | 44.0 | 33044 | 0.2863 | 0.9015 |
| 0.1268 | 45.0 | 33795 | 0.2864 | 0.9015 |
| 0.182 | 46.0 | 34546 | 0.2870 | 0.8998 |
| 0.0802 | 47.0 | 35297 | 0.2872 | 0.9015 |
| 0.1369 | 48.0 | 36048 | 0.2866 | 0.9015 |
| 0.1294 | 49.0 | 36799 | 0.2861 | 0.9015 |
| 0.1488 | 50.0 | 37550 | 0.2862 | 0.9015 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
Mihaiii/Metis-0.3
|
Mihaiii
| 2023-12-19T08:13:23Z | 1,656 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-12-16T21:51:42Z |
---
base_model: mistralai/Mistral-7B-Instruct-v0.2
inference: false
license: apache-2.0
license_name: apache-2.0
metrics:
- accuracy
---
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
An instruct-based fine-tune of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2).
It works well with long system prompts.
It isn't a general-purpose model: it shouldn't be used for storytelling, for example, but only for reasoning and text comprehension.
This model is trained on a private dataset. The high GSM8K score is **NOT** because of the MetaMath dataset.
# Prompt Format ([see the guidelines from the base model](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2#instruction-format)):
```
<s>[INST] {system_message} . Say "Acknowledged!" if you understood. [/INST] Acknowledged! </s> [INST] {prompt} [/INST]
```
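A small helper that assembles this template (the spacing is copied verbatim from the format above; how the tokenizer handles the `<s>`/`</s>` special tokens is something you should verify against the base model's chat template):

```python
def build_metis_prompt(system_message, user_prompt):
    # Reproduces the prompt format from the card, character for character.
    return (
        f'<s>[INST] {system_message} . Say "Acknowledged!" if you understood. [/INST] '
        f"Acknowledged! </s> [INST] {user_prompt} [/INST]"
    )

print(build_metis_prompt("You are a careful reasoner", "What is 2 + 2?"))
```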
|
ongkn/emikes-classifier
|
ongkn
| 2023-12-19T08:00:34Z | 15 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-19T07:54:21Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: emikes-classifier
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emikes-classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0253
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 69
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 10
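The `cosine` scheduler with a 5% warmup ramps up linearly and then follows a half-cosine down to zero. A sketch using the constants above (illustrative, not the Transformers implementation):

```python
import math

def cosine_with_warmup(step, total_steps, base_lr=5e-05, warmup_ratio=0.05):
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)  # linear warmup
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))
```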
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3954 | 1.25 | 10 | 0.3092 | 0.8571 |
| 0.1249 | 2.5 | 20 | 0.1407 | 1.0 |
| 0.046 | 3.75 | 30 | 0.0666 | 1.0 |
| 0.034 | 5.0 | 40 | 0.1060 | 0.9286 |
| 0.0255 | 6.25 | 50 | 0.0295 | 1.0 |
| 0.0198 | 7.5 | 60 | 0.0274 | 1.0 |
| 0.0209 | 8.75 | 70 | 0.1060 | 0.9286 |
| 0.02 | 10.0 | 80 | 0.0253 | 1.0 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.1+cu117
- Datasets 2.15.0
- Tokenizers 0.15.0
|
prd-nguyenvo/olivia-7b-dpo-lora-v2
|
prd-nguyenvo
| 2023-12-19T07:56:50Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:Open-Orca/Mistral-7B-OpenOrca",
"base_model:finetune:Open-Orca/Mistral-7B-OpenOrca",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-18T11:23:35Z |
---
license: apache-2.0
base_model: Open-Orca/Mistral-7B-OpenOrca
tags:
- generated_from_trainer
model-index:
- name: olivia-7b-dpo-lora-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# olivia-7b-dpo-lora-v2
This model is a fine-tuned version of [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2452
- Rewards/chosen: -0.7312
- Rewards/rejected: -2.7785
- Rewards/accuracies: 0.9132
- Rewards/margins: 2.0473
- Logps/rejected: -92.7458
- Logps/chosen: -70.4321
- Logits/rejected: -2.6590
- Logits/chosen: -2.6728
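The reward metrics above follow directly from the DPO formulation: the implicit reward of a completion is `beta * (log pi(y|x) - log pi_ref(y|x))`, the margin is the chosen reward minus the rejected reward, and accuracy is the fraction of pairs where the chosen reward is higher. A minimal sketch with hypothetical log-prob sums (not values from this run):

```python
def dpo_rewards(policy_logps, ref_logps, beta=0.1):
    """Implicit DPO reward: beta * (log-prob under policy - log-prob under reference)."""
    return [beta * (p - r) for p, r in zip(policy_logps, ref_logps)]

# Toy per-pair sums of token log-probs (illustrative only)
chosen = dpo_rewards([-70.0, -65.0], [-68.0, -66.0])
rejected = dpo_rewards([-95.0, -90.0], [-88.0, -92.0])

margins = [c - r for c, r in zip(chosen, rejected)]
accuracy = sum(c > r for c, r in zip(chosen, rejected)) / len(chosen)
```

Note how the reported Rewards/margins (2.0473) equals Rewards/chosen minus Rewards/rejected (-0.7312 - (-2.7785)).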
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 32
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.2434 | 1.0 | 109 | 0.2452 | -0.7312 | -2.7785 | 0.9132 | 2.0473 | -92.7458 | -70.4321 | -2.6590 | -2.6728 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.1+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
intrinsic-disorder/bert-250-redo
|
intrinsic-disorder
| 2023-12-19T07:56:17Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-19T07:01:48Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-250-redo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-250-redo
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2586
- Accuracy: 0.5448
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.36.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0
|
l3utterfly/mistral-7b-v0.1-layla-v1
|
l3utterfly
| 2023-12-19T07:50:03Z | 1,530 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mistral",
"text-generation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-31T05:33:09Z |
---
license: apache-2.0
language:
- en
---
# Model Card
### Model Description
Mistral 7B fine-tuned using ShareGPT datasets for multi-turn conversations.
- **Developed by:** l3utterfly
- **Funded by:** Layla Network
- **Model type:** Mistral
- **Language(s) (NLP):** English
- **License:** Apache-2.0
- **Finetuned from model:** Mistral 7B
## Uses
Base model used by Layla - the offline personal assistant: https://www.layla-network.ai
Help & support: https://discord.gg/x546YJ6nYC
Prompt:
```
User:
Assistant:
```
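A multi-turn history can be rendered into this format before generation; the helper below is a sketch of one plausible way to do it (the exact formatting used in training is not documented here), with a trailing `Assistant:` to cue the model's reply:

```python
def build_prompt(turns):
    """Render (role, text) pairs in the User:/Assistant: format shown above."""
    lines = [f"{role}: {text}" for role, text in turns]
    lines.append("Assistant:")  # cue the model to generate the next reply
    return "\n".join(lines)

prompt = build_prompt([
    ("User", "Hello!"),
    ("Assistant", "Hi there."),
    ("User", "Tell me a joke."),
])
print(prompt)
```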
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_l3utterfly__mistral-7b-v0.1-layla-v1)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 55.05 |
| ARC (25-shot) | 60.15 |
| HellaSwag (10-shot) | 83.25 |
| MMLU (5-shot) | 60.31 |
| TruthfulQA (0-shot) | 48.9 |
| Winogrande (5-shot) | 75.93 |
| GSM8K (5-shot) | 16.83 |
| DROP (3-shot) | 40.01 |
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
|
l3utterfly/llama2-7b-layla
|
l3utterfly
| 2023-12-19T07:49:47Z | 1,508 | 2 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-07T06:37:32Z |
---
license: llama2
language:
- en
---
# Model Card
### Model Description
Llama2 7B fine-tuned using ShareGPT datasets for multi-turn conversations.
- **Developed by:** l3utterfly
- **Funded by:** Layla Network
- **Model type:** Llama2
- **Language(s) (NLP):** English
- **License:** Llama2
- **Finetuned from model:** Llama2 7B
## Uses
Base model used by Layla - the offline personal assistant: https://www.layla-network.ai
Help & support: https://discord.gg/x546YJ6nYC
Prompt:
```
User:
Assistant:
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_l3utterfly__llama2-7b-layla)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 45.56 |
| ARC (25-shot) | 54.18 |
| HellaSwag (10-shot) | 79.34 |
| MMLU (5-shot) | 49.7 |
| TruthfulQA (0-shot) | 46.5 |
| Winogrande (5-shot) | 74.11 |
| GSM8K (5-shot) | 8.49 |
| DROP (3-shot) | 6.57 |
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
|
l3utterfly/minima-3b-layla-v1
|
l3utterfly
| 2023-12-19T07:49:37Z | 1,506 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-12T01:33:02Z |
---
license: llama2
language:
- en
---
# Model Card
### Model Description
[MiniMA-3B](https://huggingface.co/GeneZC/MiniMA-3B) (from GeneZC) fine-tuned using ShareGPT datasets for multi-turn conversations.
- **Developed by:** l3utterfly
- **Funded by:** Layla Network
- **Model type:** Llama2
- **Language(s) (NLP):** English
- **License:** Llama2
- **Finetuned from model:** MiniMA-3B
## Uses
Base model used by Layla - the offline personal assistant: https://www.layla-network.ai
Help & support: https://discord.gg/x546YJ6nYC
Prompt:
```
USER:
ASSISTANT:
```
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
|
rgny/Taxi-v3
|
rgny
| 2023-12-19T07:42:44Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-19T07:06:30Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage
```python
model = load_from_hub(repo_id="rgny/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
dolo650/opt-6.7b-lora
|
dolo650
| 2023-12-19T07:42:08Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:facebook/opt-6.7b",
"base_model:adapter:facebook/opt-6.7b",
"region:us"
] | null | 2023-12-19T02:59:53Z |
---
library_name: peft
base_model: facebook/opt-6.7b
---
# Model Card for Model ID
This model has been fine-tuned using PEFT/LoRA with the following data:
https://huggingface.co/datasets/Abirate/english_quotes/
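The core idea behind a LoRA adapter is that the frozen base weight W is augmented with a low-rank delta `(alpha / r) * B @ A`, where only A and B are trained. A dependency-free toy sketch (all matrices and the rank/scaling values below are hypothetical, chosen small for illustration):

```python
def matmul(a, b):
    """Plain-Python matrix multiply, to keep the sketch dependency-free."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)] for row in a]

r, alpha = 2, 4                       # hypothetical LoRA rank and scaling
W = [[1.0, 0.0], [0.0, 1.0]]          # frozen base weight (2x2 toy)
A = [[0.5, 0.0], [0.0, 0.5]]          # r x d_in, trained
B = [[1.0, 0.0], [0.0, 1.0]]          # d_out x r, trained

delta = matmul(B, A)
scale = alpha / r
# Effective weight used at inference: W + (alpha/r) * B @ A
W_eff = [[w + scale * d for w, d in zip(wr, dr)] for wr, dr in zip(W, delta)]
```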
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
SmitShah22ce/flan-t5-small-fine-tuned-adapters
|
SmitShah22ce
| 2023-12-19T07:38:11Z | 2 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:google/flan-t5-small",
"base_model:adapter:google/flan-t5-small",
"region:us"
] | null | 2023-12-19T07:38:09Z |
---
library_name: peft
base_model: google/flan-t5-small
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
ntc-ai/SDXL-LoRA-slider.judgemental-look
|
ntc-ai
| 2023-12-19T07:35:40Z | 66 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] |
text-to-image
| 2023-12-19T07:35:36Z |
---
language:
- en
thumbnail: "images/evaluate/judgemental look.../judgemental look_17_3.0.png"
widget:
- text: judgemental look
output:
url: images/judgemental look_17_3.0.png
- text: judgemental look
output:
url: images/judgemental look_19_3.0.png
- text: judgemental look
output:
url: images/judgemental look_20_3.0.png
- text: judgemental look
output:
url: images/judgemental look_21_3.0.png
- text: judgemental look
output:
url: images/judgemental look_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "judgemental look"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---
# ntcai.xyz slider - judgemental look (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/judgemental look_17_-3.0.png" width=256 height=256 /> | <img src="images/judgemental look_17_0.0.png" width=256 height=256 /> | <img src="images/judgemental look_17_3.0.png" width=256 height=256 /> |
| <img src="images/judgemental look_19_-3.0.png" width=256 height=256 /> | <img src="images/judgemental look_19_0.0.png" width=256 height=256 /> | <img src="images/judgemental look_19_3.0.png" width=256 height=256 /> |
| <img src="images/judgemental look_20_-3.0.png" width=256 height=256 /> | <img src="images/judgemental look_20_0.0.png" width=256 height=256 /> | <img src="images/judgemental look_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
judgemental look
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.judgemental-look', weight_name='judgemental look.safetensors', adapter_name="judgemental look")
# Activate the LoRA
pipe.set_adapters(["judgemental look"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, judgemental look"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 470+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
|
Ramyashree/gte-large-with80records
|
Ramyashree
| 2023-12-19T07:32:27Z | 6 | 0 |
setfit
|
[
"setfit",
"safetensors",
"bert",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"dataset:Ramyashree/Dataset-setfit-Trainer-80records",
"arxiv:2209.11055",
"base_model:thenlper/gte-large",
"base_model:finetune:thenlper/gte-large",
"region:us"
] |
text-classification
| 2023-12-19T07:30:54Z |
---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
datasets:
- Ramyashree/Dataset-setfit-Trainer-80records
metrics:
- accuracy
widget:
- text: I want to check your money back policy, what can I do?
- text: ask an agent if i can obtain some bills
- text: my account's been hacked, what do I have to do?
- text: the event was postponed, what do i have to do to request a reimbursement?
- text: how do i close my online account?
pipeline_tag: text-classification
inference: true
base_model: thenlper/gte-large
---
# SetFit with thenlper/gte-large
This is a [SetFit](https://github.com/huggingface/setfit) model trained on the [Ramyashree/Dataset-setfit-Trainer-80records](https://huggingface.co/datasets/Ramyashree/Dataset-setfit-Trainer-80records) dataset that can be used for Text Classification. This SetFit model uses [thenlper/gte-large](https://huggingface.co/thenlper/gte-large) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [thenlper/gte-large](https://huggingface.co/thenlper/gte-large)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 10 classes
- **Training Dataset:** [Ramyashree/Dataset-setfit-Trainer-80records](https://huggingface.co/datasets/Ramyashree/Dataset-setfit-Trainer-80records)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:--------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| create_account | <ul><li>"I don't have an online account, what do I have to do to register?"</li><li>'can you tell me if i can regisger two accounts with a single email address?'</li><li>'I have no online account, open one, please'</li></ul> |
| edit_account | <ul><li>'how can I modify the information on my profile?'</li><li>'can u ask an agent how to make changes to my profile?'</li><li>'I want to update the information on my profile'</li></ul> |
| delete_account | <ul><li>'can I close my account?'</li><li>"I don't want my account, can you delete it?"</li><li>'how do i close my online account?'</li></ul> |
| switch_account | <ul><li>'I would like to use my other online account , could you switch them, please?'</li><li>'i want to use my other online account, can u change them?'</li><li>'how do i change to another account?'</li></ul> |
| get_invoice | <ul><li>'what can you tell me about getting some bills?'</li><li>'tell me where I can request a bill'</li><li>'ask an agent if i can obtain some bills'</li></ul> |
| get_refund | <ul><li>'the game was postponed, help me obtain a reimbursement'</li><li>'the game was postponed, what should I do to obtain a reimbursement?'</li><li>'the concert was postponed, what should I do to request a reimbursement?'</li></ul> |
| payment_issue | <ul><li>'i have an issue making a payment with card and i want to inform of it, please'</li><li>'I got an error message when I attempted to pay, but my card was charged anyway and I want to notify it'</li><li>'I want to notify a problem making a payment, can you help me?'</li></ul> |
| check_refund_policy | <ul><li>"I'm interested in your reimbursement polivy"</li><li>'i wanna see your refund policy, can u help me?'</li><li>'where do I see your money back policy?'</li></ul> |
| recover_password | <ul><li>'my online account was hacked and I want tyo get it back'</li><li>"I lost my password and I'd like to retrieve it, please"</li><li>'could u ask an agent how i can reset my password?'</li></ul> |
| track_refund | <ul><li>'tell me if my refund was processed'</li><li>'I need help checking the status of my refund'</li><li>'I want to see the status of my refund, can you help me?'</li></ul> |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("Ramyashree/gte-large-with80records")
# Run inference
preds = model("how do i close my online account?")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 4 | 10.325 | 22 |
| Label | Training Sample Count |
|:--------------------|:----------------------|
| check_refund_policy | 8 |
| create_account | 8 |
| delete_account | 8 |
| edit_account | 8 |
| get_invoice | 8 |
| get_refund | 8 |
| payment_issue | 8 |
| recover_password | 8 |
| switch_account | 8 |
| track_refund | 8 |
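The word-count row in the metrics table above is just min/median/max over whitespace-tokenized training examples. A minimal sketch (the toy inputs below are illustrative, not the actual training set):

```python
import statistics

def word_count_stats(texts):
    """Min / median / max word counts, as reported in the training-set metrics."""
    counts = [len(t.split()) for t in texts]
    return min(counts), statistics.median(counts), max(counts)

toy = [
    "how do i close my online account?",
    "tell me if my refund was processed",
    "I want to update the information on my profile",
]
print(word_count_stats(toy))
```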
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-----:|:----:|:-------------:|:---------------:|
| 0.005 | 1 | 0.3449 | - |
| 0.25 | 50 | 0.022 | - |
| 0.5 | 100 | 0.0039 | - |
| 0.75 | 150 | 0.0012 | - |
| 1.0 | 200 | 0.0012 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.1
- Sentence Transformers: 2.2.2
- Transformers: 4.35.2
- PyTorch: 2.1.0+cu121
- Datasets: 2.15.0
- Tokenizers: 0.15.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
abhipn/gpt2medium-finetuned-dup
|
abhipn
| 2023-12-19T07:25:39Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:openai-community/gpt2-medium",
"base_model:adapter:openai-community/gpt2-medium",
"region:us"
] | null | 2023-12-19T07:25:37Z |
---
library_name: peft
base_model: gpt2-medium
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
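
For readers who want to reproduce this setup, the settings above map naturally onto the keyword arguments of `transformers.BitsAndBytesConfig`. A minimal sketch (the argument-name mapping is an assumption based on the list above; verify against your installed `transformers` version):

```python
# Sketch: the quantization settings above collected as keyword arguments.
# Assumption: these names match transformers.BitsAndBytesConfig; in real code
# bnb_4bit_compute_dtype would be torch.bfloat16 rather than a string.
bnb_kwargs = {
    "load_in_8bit": False,
    "load_in_4bit": True,
    "llm_int8_threshold": 6.0,
    "llm_int8_skip_modules": None,
    "llm_int8_enable_fp32_cpu_offload": False,
    "llm_int8_has_fp16_weight": False,
    "bnb_4bit_quant_type": "nf4",
    "bnb_4bit_use_double_quant": True,
    "bnb_4bit_compute_dtype": "bfloat16",
}

# With transformers, torch and bitsandbytes installed, this would become:
# from transformers import BitsAndBytesConfig
# import torch
# config = BitsAndBytesConfig(**{**bnb_kwargs, "bnb_4bit_compute_dtype": torch.bfloat16})
```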
### Framework versions
- PEFT 0.7.0
|
pig4431/TextGPT4V-7B-LORA
|
pig4431
| 2023-12-19T07:20:26Z | 1 | 0 |
peft
|
[
"peft",
"llava",
"region:us"
] | null | 2023-11-28T09:40:49Z |
---
library_name: peft
---
## Training procedure
https://github.com/Etelis/TextGPT4V
https://huggingface.co/datasets/pig4431/TextGPT4V
### Framework versions
- PEFT 0.4.0
|
MasegoTheLastBlondedMan/dqn-SpaceInvadersNoFrameskip-v4Prac
|
MasegoTheLastBlondedMan
| 2023-12-19T07:19:46Z | 7 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-19T07:19:18Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 257.00 +/- 38.81
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga MasegoTheLastBlondedMan -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga MasegoTheLastBlondedMan -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga MasegoTheLastBlondedMan
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
dlby/testModel22
|
dlby
| 2023-12-19T07:17:55Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:beomi/polyglot-ko-12.8b-safetensors",
"base_model:adapter:beomi/polyglot-ko-12.8b-safetensors",
"region:us"
] | null | 2023-12-19T07:17:53Z |
---
library_name: peft
base_model: beomi/polyglot-ko-12.8b-safetensors
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
yuejunasia/rare-puppers
|
yuejunasia
| 2023-12-19T07:14:52Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"pytorch",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-19T07:14:46Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rare-puppers
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8507462739944458
---
# rare-puppers
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### corgi

#### samoyed

#### shiba inu

|
Parth49/tmp_trainer
|
Parth49
| 2023-12-19T07:14:07Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-19T06:57:30Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: tmp_trainer
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8733333333333333
- name: F1
type: f1
value: 0.879746835443038
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tmp_trainer
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5565
- Accuracy: 0.8733
- F1: 0.8797
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
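
As a rough guide to reproducing this run, the hyperparameters above correspond to fields of `transformers.TrainingArguments`. A hedged sketch (the mapping from the list to these argument names is an assumption; check it against your `transformers` version):

```python
# Sketch: the training hyperparameters above as TrainingArguments fields.
hparams = {
    "learning_rate": 5e-05,
    "per_device_train_batch_size": 8,
    "per_device_eval_batch_size": 8,
    "seed": 42,
    "lr_scheduler_type": "linear",
    "num_train_epochs": 3.0,
}

# With transformers installed:
# from transformers import TrainingArguments
# args = TrainingArguments(output_dir="tmp_trainer", **hparams)
```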
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Ramyashree/setfit-trained-model-with80records
|
Ramyashree
| 2023-12-19T07:13:18Z | 5 | 0 |
setfit
|
[
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"dataset:Ramyashree/Dataset-setfit-Trainer-80records",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2",
"region:us"
] |
text-classification
| 2023-12-19T07:12:41Z |
---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
datasets:
- Ramyashree/Dataset-setfit-Trainer-80records
metrics:
- accuracy
widget:
- text: I want to check your money back policy, what can I do?
- text: ask an agent if i can obtain some bills
- text: my account's been hacked, what do I have to do?
- text: the event was postponed, what do i have to do to request a reimbursement?
- text: how do i close my online account?
pipeline_tag: text-classification
inference: true
base_model: sentence-transformers/paraphrase-mpnet-base-v2
---
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model trained on the [Ramyashree/Dataset-setfit-Trainer-80records](https://huggingface.co/datasets/Ramyashree/Dataset-setfit-Trainer-80records) dataset that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 10 classes
- **Training Dataset:** [Ramyashree/Dataset-setfit-Trainer-80records](https://huggingface.co/datasets/Ramyashree/Dataset-setfit-Trainer-80records)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:--------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| create_account | <ul><li>"I don't have an online account, what do I have to do to register?"</li><li>'can you tell me if i can regisger two accounts with a single email address?'</li><li>'I have no online account, open one, please'</li></ul> |
| edit_account | <ul><li>'how can I modify the information on my profile?'</li><li>'can u ask an agent how to make changes to my profile?'</li><li>'I want to update the information on my profile'</li></ul> |
| delete_account | <ul><li>'can I close my account?'</li><li>"I don't want my account, can you delete it?"</li><li>'how do i close my online account?'</li></ul> |
| switch_account | <ul><li>'I would like to use my other online account , could you switch them, please?'</li><li>'i want to use my other online account, can u change them?'</li><li>'how do i change to another account?'</li></ul> |
| get_invoice | <ul><li>'what can you tell me about getting some bills?'</li><li>'tell me where I can request a bill'</li><li>'ask an agent if i can obtain some bills'</li></ul> |
| get_refund | <ul><li>'the game was postponed, help me obtain a reimbursement'</li><li>'the game was postponed, what should I do to obtain a reimbursement?'</li><li>'the concert was postponed, what should I do to request a reimbursement?'</li></ul> |
| payment_issue | <ul><li>'i have an issue making a payment with card and i want to inform of it, please'</li><li>'I got an error message when I attempted to pay, but my card was charged anyway and I want to notify it'</li><li>'I want to notify a problem making a payment, can you help me?'</li></ul> |
| check_refund_policy | <ul><li>"I'm interested in your reimbursement polivy"</li><li>'i wanna see your refund policy, can u help me?'</li><li>'where do I see your money back policy?'</li></ul> |
| recover_password | <ul><li>'my online account was hacked and I want tyo get it back'</li><li>"I lost my password and I'd like to retrieve it, please"</li><li>'could u ask an agent how i can reset my password?'</li></ul> |
| track_refund | <ul><li>'tell me if my refund was processed'</li><li>'I need help checking the status of my refund'</li><li>'I want to see the status of my refund, can you help me?'</li></ul> |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("Ramyashree/setfit-trained-model-with80records")
# Run inference
preds = model("how do i close my online account?")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 4 | 10.325 | 22 |
| Label | Training Sample Count |
|:--------------------|:----------------------|
| check_refund_policy | 8 |
| create_account | 8 |
| delete_account | 8 |
| edit_account | 8 |
| get_invoice | 8 |
| get_refund | 8 |
| payment_issue | 8 |
| recover_password | 8 |
| switch_account | 8 |
| track_refund | 8 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
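
For reference, the hyperparameters above can be gathered as the keyword arguments one might pass to SetFit's `TrainingArguments`. This is a sketch only: the argument names are assumed to match SetFit 1.0.x, so verify them against your installed version.

```python
# Sketch: the SetFit training hyperparameters above as a kwargs dict.
# Assumption: names follow setfit.TrainingArguments (SetFit 1.0.x).
setfit_args = {
    "batch_size": (16, 16),
    "num_epochs": (1, 1),
    "sampling_strategy": "oversampling",
    "num_iterations": 20,
    "body_learning_rate": (2e-05, 2e-05),
    "head_learning_rate": 2e-05,
    "warmup_proportion": 0.1,
    "seed": 42,
}

# With setfit installed (the loss would be passed as a class, not a string):
# from setfit import TrainingArguments
# args = TrainingArguments(**setfit_args)
```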
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-----:|:----:|:-------------:|:---------------:|
| 0.005 | 1 | 0.1535 | - |
| 0.25 | 50 | 0.0277 | - |
| 0.5 | 100 | 0.0091 | - |
| 0.75 | 150 | 0.0034 | - |
| 1.0 | 200 | 0.0022 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.1
- Sentence Transformers: 2.2.2
- Transformers: 4.35.2
- PyTorch: 2.1.0+cu121
- Datasets: 2.15.0
- Tokenizers: 0.15.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
Ba2han/BruinsV2-OpHermesNeu-11B
|
Ba2han
| 2023-12-19T07:12:23Z | 1,492 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-11T10:48:46Z |
---
license: mit
---
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.6527|± |0.0139|
| | |acc_norm|0.6869|± |0.0136|
**Warning! This model may or may not be contaminated [See discussion 474](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/474). What a shame. It still does perform well though**
A passthrough merge of OpenHermes-2.5-neural-chat-7b-v3-1 and Bruins-V2. To be updated.
Template: ChatML
My settings:

- Temperature: 0.7-0.8
- Min_p: 0.12
- Top_K: 0
- Repetition Penalty: 1.16
- Mirostat Tau: 2.5-3
- Mirostat Eta: 0.12
Personal Thoughts:
- The model sometimes throws wrong tags, you can add those to "Custom stopping strings" in Oobabooga.
- Output with Mirostat consistently felt smarter than a set Top_K rate.
Note: The model is hallucinating hard in chat mode for me in some instances, like writing adblocker messages. Kind of funny.
I am not sure which of the datasets involved was poisoned.
|
nrshoudi/Whisper-medium-Arabic-phoneme-v2
|
nrshoudi
| 2023-12-19T07:09:24Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"license:apache-2.0",
"region:us"
] | null | 2023-12-19T07:09:21Z |
---
license: apache-2.0
base_model: openai/whisper-medium
tags:
- generated_from_trainer
model-index:
- name: Whisper-medium-Arabic-phoneme-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper-medium-Arabic-phoneme-v2
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2289
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.063 | 1.0 | 410 | 0.2223 |
| 0.0764 | 2.0 | 820 | 0.2359 |
| 0.0293 | 3.0 | 1230 | 0.1965 |
| 0.0174 | 4.0 | 1640 | 0.2038 |
| 0.0307 | 5.0 | 2050 | 0.2155 |
| 0.0075 | 6.0 | 2460 | 0.1972 |
| 0.0105 | 7.0 | 2870 | 0.2084 |
| 0.0035 | 8.0 | 3280 | 0.2173 |
| 0.0006 | 9.0 | 3690 | 0.2282 |
| 0.0006 | 10.0 | 4100 | 0.2289 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
leroyrr/bert-for-political-news-sentiment-analysis-lora
|
leroyrr
| 2023-12-19T07:07:51Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:leroyrr/bert-base-head",
"base_model:adapter:leroyrr/bert-base-head",
"region:us"
] | null | 2023-12-19T04:31:58Z |
---
library_name: peft
base_model: leroyrr/bert-base-head
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
learner9/my-pet-dog-ttt
|
learner9
| 2023-12-19T07:05:36Z | 0 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-12-19T06:52:03Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog-ttt Dreambooth model trained by learner9 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 2023UGEC092
Sample pictures of this concept:
|
rgny/q-FrozenLake-v1-4x4-noSlippery
|
rgny
| 2023-12-19T06:59:55Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-18T13:59:20Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
# Depending on your setup this may be `import gym` instead of gymnasium
import gymnasium as gym

# `load_from_hub` is the pickle-loading helper defined in the Hugging Face Deep RL course notebook
model = load_from_hub(repo_id="rgny/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
stabilityai/japanese-stablelm-instruct-ja_vocab-beta-7b
|
stabilityai
| 2023-12-19T06:46:01Z | 217 | 12 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"japanese-stablelm",
"causal-lm",
"ja",
"dataset:kunishou/hh-rlhf-49k-ja",
"dataset:kunishou/databricks-dolly-15k-ja",
"dataset:kunishou/oasst1-89k-ja",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-30T07:49:38Z |
---
language:
- ja
tags:
- japanese-stablelm
- causal-lm
pipeline_tag: text-generation
datasets:
- kunishou/hh-rlhf-49k-ja
- kunishou/databricks-dolly-15k-ja
- kunishou/oasst1-89k-ja
license:
- llama2
extra_gated_fields:
Name: text
Email: text
Country: text
Organization or Affiliation: text
I allow Stability AI to contact me about information related to its models and research: checkbox
---
# Japanese-StableLM-Instruct-JAVocab-Beta-7B

> A cute robot wearing a kimono writes calligraphy with one single brush — [Stable Diffusion XL](https://clipdrop.co/stable-diffusion)
## Model Description
`japanese-stablelm-instruct-ja_vocab-beta-7b` is a 7B-parameter decoder-only language model based on [japanese-stablelm-ja_vocab-beta-7b](https://huggingface.co/stabilityai/japanese-stablelm-instruct-beta-7b) and further fine-tuned on Databricks Dolly-15k, Anthropic HH, and other public data.
Compared to the [standard base model](https://huggingface.co/stabilityai/japanese-stablelm-base-beta-7b), this model uses a tokenizer with an expanded vocabulary derived from Japanese data. This allows it to represent the same amount of text with fewer tokens, which speeds up inference significantly.
## Usage
First install additional dependencies in [requirements.txt](./requirements.txt):
```sh
pip install -r requirements.txt
```
Then start generating text with `japanese-stablelm-instruct-ja_vocab-beta-7b` by using the following code snippet:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "stabilityai/japanese-stablelm-instruct-ja_vocab-beta-7b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# The next line may need to be modified depending on the environment
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, low_cpu_mem_usage=True, device_map="auto")
def build_prompt(user_query, inputs):
sys_msg = "<s>[INST] <<SYS>>\nあなたは役立つアシスタントです。\n<<SYS>>\n\n"
p = sys_msg + user_query + "\n\n" + inputs + " [/INST] "
return p
user_inputs = {
"user_query": "与えられたことわざの意味を小学生でも分かるように教えてください。",
"inputs": "情けは人のためならず"
}
prompt = build_prompt(**user_inputs)
input_ids = tokenizer.encode(
prompt,
add_special_tokens=True,
return_tensors="pt"
)
# this is for reproducibility.
# feel free to change to get different result
seed = 23
torch.manual_seed(seed)
tokens = model.generate(
input_ids.to(device=model.device),
max_new_tokens=128,
temperature=0.99,
top_p=0.95,
do_sample=True,
)
out = tokenizer.decode(tokens[0], skip_special_tokens=True)
print(out)
```
We suggest playing with different generation configs (`top_p`, `repetition_penalty`, etc.) to find the best setup for your tasks. For example, use a higher temperature for roleplay tasks and a lower temperature for reasoning.
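As a rough sketch of what task-specific setups might look like (the exact values below are illustrative assumptions, not tuned recommendations from this card):

```python
# Illustrative sampling presets; the values are assumptions, not tuned recommendations
roleplay_config = {"temperature": 1.1, "top_p": 0.95, "repetition_penalty": 1.05}
reasoning_config = {"temperature": 0.3, "top_p": 0.9, "repetition_penalty": 1.0}

# These would be unpacked into `model.generate` as in the snippet above, e.g.:
# tokens = model.generate(input_ids, max_new_tokens=128, do_sample=True, **roleplay_config)
print(roleplay_config["temperature"] > reasoning_config["temperature"])  # -> True
```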
## Model Details
* **Model type**: `japanese-stablelm-instruct-ja_vocab-beta-7b` model is an auto-regressive language model based on the Llama2 transformer architecture.
* **Language(s)**: Japanese
* **License**: [Llama2 Community License](https://ai.meta.com/llama/license/).
* **Contact**: For questions and comments about the model, please join [Stable Community Japan](https://discord.gg/StableJP). For future announcements / information about Stability AI models, research, and events, please follow https://twitter.com/StabilityAI_JP.
## Training Dataset
The following datasets were used for the instruction training. Note these are Japanese translated versions of the original datasets, shared by [kunishou](https://huggingface.co/kunishou).
- [Anthropic HH-RLHF](https://huggingface.co/datasets/kunishou/hh-rlhf-49k-ja)
- [Databricks Dolly 15-k](https://huggingface.co/datasets/kunishou/databricks-dolly-15k-ja)
- [OpenAssistant Conversations Dataset](https://huggingface.co/datasets/kunishou/oasst1-89k-ja)
## Use and Limitations
### Intended Use
The model is intended to be used by all individuals as a foundation for application-specific fine-tuning without strict limitations on commercial use.
### Limitations and bias
The pre-training dataset may have contained offensive or inappropriate content even after applying data cleansing filters, which can be reflected in the model's generated text. We recommend that users exercise reasonable caution when using these models in production systems. Do not use the model for any applications that may cause harm or distress to individuals or groups.
## Authors
This model was developed by the Research & Development team at Stability AI Japan, and the development was co-led by [Takuya Akiba](https://huggingface.co/iwiwi) and [Meng Lee](https://huggingface.co/leemeng). The members of the team are as follows:
- [Meng Lee](https://huggingface.co/leemeng)
- [Fujiki Nakamura](https://huggingface.co/fujiki)
- [Makoto Shing](https://huggingface.co/mkshing)
- [Paul McCann](https://huggingface.co/polm-stability)
- [Takuya Akiba](https://huggingface.co/iwiwi)
- [Naoki Orii](https://huggingface.co/mrorii)
## Acknowledgements
We thank Meta Research for releasing Llama 2 under an open license for others to build on.
We are grateful for the contributions of the EleutherAI Polyglot-JA team in helping us to collect a large amount of pre-training data in Japanese. Polyglot-JA members include Hyunwoong Ko (Project Lead), Fujiki Nakamura (who originally started this project when he committed to the Polyglot team), Yunho Mo, Minji Jung, KeunSeok Im, and Su-Kyeong Jang.
We are also appreciative of [AI Novelist/Sta (Bit192, Inc.)](https://ai-novel.com/index.php) and the numerous contributors from [Stable Community Japan](https://discord.gg/VPrcE475HB) for assisting us in gathering a large amount of high-quality Japanese textual data for model training.
|
jaiganesan/sdxl-lora-khuze-1e-4-1200-512x512images
|
jaiganesan
| 2023-12-19T06:42:57Z | 2 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2023-12-19T06:42:22Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: A photo of Khuze Siam man wearing casual clothes and smiling
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
sdpkjc/Humanoid-v4-td3_continuous_action-seed2
|
sdpkjc
| 2023-12-19T06:40:09Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"Humanoid-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-19T06:39:58Z |
---
tags:
- Humanoid-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: TD3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Humanoid-v4
type: Humanoid-v4
metrics:
- type: mean_reward
value: 5093.30 +/- 170.94
name: mean_reward
verified: false
---
# (CleanRL) **TD3** Agent Playing **Humanoid-v4**
This is a trained model of a TD3 agent playing Humanoid-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/td3_continuous_action.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[td3_continuous_action]"
python -m cleanrl_utils.enjoy --exp-name td3_continuous_action --env-id Humanoid-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/sdpkjc/Humanoid-v4-td3_continuous_action-seed2/raw/main/td3_continuous_action.py
curl -OL https://huggingface.co/sdpkjc/Humanoid-v4-td3_continuous_action-seed2/raw/main/pyproject.toml
curl -OL https://huggingface.co/sdpkjc/Humanoid-v4-td3_continuous_action-seed2/raw/main/poetry.lock
poetry install --all-extras
python td3_continuous_action.py --save-model --upload-model --hf-entity sdpkjc --env-id Humanoid-v4 --seed 2 --track
```
# Hyperparameters
```python
{'batch_size': 256,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'env_id': 'Humanoid-v4',
'exp_name': 'td3_continuous_action',
'exploration_noise': 0.1,
'gamma': 0.99,
'hf_entity': 'sdpkjc',
'learning_rate': 0.0003,
'learning_starts': 25000.0,
'noise_clip': 0.5,
'policy_frequency': 2,
'policy_noise': 0.2,
'save_model': True,
'seed': 2,
'tau': 0.005,
'torch_deterministic': True,
'total_timesteps': 1000000,
'track': True,
'upload_model': True,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
wuwx/ML-Agents-SoccerTwos
|
wuwx
| 2023-12-19T06:35:01Z | 4 | 0 |
ml-agents
|
[
"ml-agents",
"onnx",
"ML-Agents-SoccerTwos",
"reinforcement-learning",
"region:us"
] |
reinforcement-learning
| 2023-12-19T06:33:04Z |
---
task: reinforcement-learning
library_name: ml-agents
tags:
- ML-Agents-SoccerTwos
- reinforcement-learning
---
|
ding-diri-ding-dong/long-ke-t5-base-translation-aihub-ko2en
|
ding-diri-ding-dong
| 2023-12-19T06:34:35Z | 17 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"longt5",
"text2text-generation",
"translation",
"generated_from_trainer",
"base_model:KETI-AIR-Downstream/long-ke-t5-base-translation-aihub-ko2en",
"base_model:finetune:KETI-AIR-Downstream/long-ke-t5-base-translation-aihub-ko2en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-12-18T07:40:29Z |
---
license: apache-2.0
base_model: KETI-AIR-Downstream/long-ke-t5-base-translation-aihub-ko2en
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: long-ke-t5-base-translation-aihub-ko2en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# long-ke-t5-base-translation-aihub-ko2en
This model is a fine-tuned version of [KETI-AIR-Downstream/long-ke-t5-base-translation-aihub-ko2en](https://huggingface.co/KETI-AIR-Downstream/long-ke-t5-base-translation-aihub-ko2en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Bleu: 0.0
- Gen Len: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
A2H0H0R1/Qwen-7B-Chat-Int4-Qlora-biology
|
A2H0H0R1
| 2023-12-19T06:28:33Z | 11 | 0 |
peft
|
[
"peft",
"safetensors",
"Qwen",
"4bit",
"biology",
"text-generation",
"en",
"dataset:A2H0H0R1/Animal-nutrition-pair",
"arxiv:1910.09700",
"base_model:Qwen/Qwen-7B-Chat-Int4",
"base_model:adapter:Qwen/Qwen-7B-Chat-Int4",
"license:mit",
"region:us"
] |
text-generation
| 2023-12-18T12:21:41Z |
---
library_name: peft
base_model: Qwen/Qwen-7B-Chat-Int4
license: mit
datasets:
- A2H0H0R1/Animal-nutrition-pair
language:
- en
pipeline_tag: text-generation
tags:
- Qwen
- 4bit
- biology
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Ai-Farm.ir company
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
brucethemoose/Capybara-Tess-Yi-34B-200K-DARE-Ties
|
brucethemoose
| 2023-12-19T06:26:00Z | 22 | 11 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"en",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-25T14:45:19Z |
---
license: other
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- merge
---
# This is not a great model, succeeded by a new merge: **https://huggingface.co/brucethemoose/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties**
**NousResearch/Nous-Capybara-34B**, **migtissera/Tess-M-v1.2** and **migtissera/Tess-M-v1.3** merged with a new, experimental implementation of "dare ties" via mergekit. See:
> Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch
https://github.com/yule-BUAA/MergeLM
https://github.com/cg123/mergekit/tree/dare-tokenizer
Highly experimental and still being tested! But this should yield a better merge than a typical linear/slerp merge or even a ties merge.
***
Merged with the following config, and the tokenizer from Yi Llamafied:
```
models:
- model: /home/alpha/Storage/Models/Raw/larryvrh_Yi-34B-200K-Llamafied
# no parameters necessary for base model
- model: /home/alpha/Storage/Models/Raw/migtissera_Tess-M-v1.3
parameters:
weight: 0.50
density: 0.56
- model: /home/alpha/Storage/Models/Raw/migtissera_Tess-M-v1.2
parameters:
weight: 0.20
density: 0.50
- model: /home/alpha/Storage/Models/Raw/Nous-Capybara-34B
parameters:
weight: 0.50
density: 0.56
merge_method: dare_ties
base_model: /home/alpha/Storage/Models/Raw/larryvrh_Yi-34B-200K-Llamafied
parameters:
int8_mask: true
dtype: bfloat16
```
Tess 1.2 (at a low weight) and 1.3 were used because, according to the trainer, they were trained on different datasets: https://migel.substack.com/p/learnings-from-training-tess
As the Tess creator warned, if the model repeats at high context like Tess 1.2, let me know!
I chose not to include other finetunes, such as Dolphin, because they aren't trained on the 200K base. If any other 200K finetunes pop up, let me know.
***
## Prompt template: Orca-Vicuna
```
SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
```
Being a Yi model, try disabling the BOS token and/or running a lower temperature with MinP if output doesn't seem right.
Sometimes the model "spells out" the stop token as `</s>` like Capybara, so you may need to add `</s>` as an additional stopping condition.
***
Credits:
https://github.com/cg123/mergekit/tree/dare-tokenizer
https://huggingface.co/NousResearch/Nous-Capybara-34B/
https://huggingface.co/migtissera/Tess-M-v1.2
https://huggingface.co/migtissera/Tess-M-v1.3
https://huggingface.co/larryvrh/Yi-34B-200K-Llamafied
https://huggingface.co/01-ai/Yi-34B-200K
|
brucethemoose/CaPlatTessDolXaBoros-34B-200K-exl2-4bpw-fiction
|
brucethemoose
| 2023-12-19T06:24:06Z | 13 | 3 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"merge",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-11T07:39:36Z |
---
license: other
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- text-generation-inference
- merge
---
### Obsolete, see https://huggingface.co/brucethemoose/Yi-34B-200K-DARE-merge-v5
***
**Dolphin-2.2-yi-34b-200k**, **Nous-Capybara-34B**, **Tess-M-v1.4**, **Airoboros-3_1-yi-34b-200k**, **PlatYi-34B-200K-Q**, and **Una-xaberius-34b-v1beta** merged with a new, experimental implementation of "dare ties" via mergekit.
Quantized with the git version of exllamav2 with 200 rows (400K tokens) on a long Orca-Vicuna format chat, a selected sci-fi story, and a fantasy story. This should hopefully yield better chat/storytelling performance than the short, default wikitext quantization.
4bpw is enough for **~47K context on a 24GB GPU.** I would highly recommend running in exui for speed at long context. I go into more detail in this [Reddit post](https://old.reddit.com/r/LocalLLaMA/comments/1896igc/how_i_run_34b_models_at_75k_context_on_24gb_fast/)
Merged with the following config, and the tokenizer from chargoddard's Yi-Llama:
```
models:
- model: /home/alpha/Storage/Models/Raw/chargoddard_Yi-34B-200K-Llama
# no parameters necessary for base model
- model: /home/alpha/Storage/Models/Raw/migtissera_Tess-34B-v1.4
parameters:
weight: 0.19
density: 0.6
- model: /home/alpha//Storage/Models/Raw/bhenrym14_airoboros-3_1-yi-34b-200k
parameters:
weight: 0.14
density: 0.5
- model: /home/alpha/Storage/Models/Raw/Nous-Capybara-34B
parameters:
weight: 0.19
density: 0.6
- model: /home/alpha/Storage/Models/Raw/kyujinpy_PlatYi-34B-200K-Q
parameters:
weight: 0.14
density: 0.5
- model: /home/alpha/FastModels/ehartford_dolphin-2.2-yi-34b-200k
parameters:
weight: 0.19
density: 0.6
- model: /home/alpha/FastModels/fblgit_una-xaberius-34b-v1beta
parameters:
weight: 0.15
density: 0.08
merge_method: dare_ties
base_model: /home/alpha/Storage/Models/Raw/chargoddard_Yi-34B-200K-Llama
parameters:
int8_mask: true
dtype: bfloat16
```
First exllama quantization pass:
```
python convert.py --in_dir //home/alpha/FastModels/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties -o /home/alpha/FastModels/scratch -om /home/alpha/FastModels/mes.json --cal_dataset /home/alpha/Documents/smol.parquet -l 2048 -r 80 -ml 2048 -mr 40 -gr 40 -ss 4096 -nr -b 4.0 -hb 6
```
Second exllama quantization pass:
```
python convert.py --in_dir /home/alpha/FastModels/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties -o /home/alpha/FastModels/scratch -m /home/alpha/FastModels/mes.json --cal_dataset /home/alpha/Documents/medium.parquet -l 2048 -r 200 -ml 2048 -mr 40 -gr 200 -ss 4096 -b 4.0 -hb 6 -cf /home/alpha/FastModels/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-exl2-4bpw-fiction -nr
```
## Testing Notes
Various densities were tested with perplexity tests and high context prompts. Relatively high densities seem to perform better, contrary to the findings of the Super Mario paper.
Weights that add up to 1 seem to be optimal.
Dare Ties also results in seemingly better, lower-perplexity merges than a regular ties merge, task arithmetic, or a slerp merge.
Xaberius is not a 200K model, hence it was merged at a very low density to try and preserve Yi 200K's long-context performance while still inheriting some of Xaberius's performance.
I chose not to include other finetunes because they aren't trained on the 200K base. If any other 200K finetunes pop up, let me know.
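As a quick sanity check of the observation that the weights add up to 1, summing the weights from the mergekit config above:

```python
# Weights taken from the mergekit config above
# (Tess, Airoboros, Capybara, PlatYi, Dolphin, Xaberius)
weights = [0.19, 0.14, 0.19, 0.14, 0.19, 0.15]
print(round(sum(weights), 10))  # -> 1.0
```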
***
## Prompt template: Orca-Vicuna?
```
SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
```
It might recognize ChatML from Dolphin+Xaberius, and Llama-chat from Airoboros.
Sometimes the model "spells out" the stop token as `</s>` like Capybara, so you may need to add `</s>` as an additional stopping condition.
***
## Running
Being a Yi model, try disabling the BOS token and/or running a lower temperature with 0.05-0.13 MinP, a little repetition penalty, and no other samplers. Yi tends to run "hot" by default.
24GB GPUs can run Yi-34B-200K models at **45K-75K context** with exllamav2. I go into more detail in this [post](https://old.reddit.com/r/LocalLLaMA/comments/1896igc/how_i_run_34b_models_at_75k_context_on_24gb_fast/)
I recommend exl2 quantizations profiled on data similar to the desired task. It is especially sensitive to the quantization data at low bpw!
To load this in full-context backends like transformers and vllm, you *must* change `max_position_embeddings` in config.json to a lower value than 200,000, otherwise you will OOM!
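A minimal sketch of that edit (the lowered value of 32768 and the file location are assumptions; pick whatever limit fits your memory budget):

```python
import json
import os
import tempfile

# For illustration this writes a stand-in config.json to a temp directory;
# in practice, edit the config.json inside your downloaded model folder.
workdir = tempfile.mkdtemp()
path = os.path.join(workdir, "config.json")
with open(path, "w") as f:
    json.dump({"max_position_embeddings": 200000}, f)

# Lower the context limit so full-context backends don't OOM
with open(path) as f:
    cfg = json.load(f)
cfg["max_position_embeddings"] = 32768  # illustrative lower limit
with open(path, "w") as f:
    json.dump(cfg, f, indent=2)

with open(path) as f:
    print(json.load(f)["max_position_embeddings"])  # -> 32768
```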
***
## Credits:
https://github.com/turboderp/exllamav2
https://github.com/cg123/mergekit/tree/dare
https://huggingface.co/ehartford/dolphin-2.2-yi-34b-200k
https://huggingface.co/kyujinpy/PlatYi-34B-200K-Q
https://huggingface.co/NousResearch/Nous-Capybara-34B/
https://huggingface.co/bhenrym14/airoboros-3_1-yi-34b-200k
https://huggingface.co/migtissera/Tess-M-v1.4
https://huggingface.co/fblgit/una-xaberius-34b-v1beta
https://huggingface.co/chargoddard/Yi-34B-200K-Llama
https://huggingface.co/01-ai/Yi-34B-200K
|
igorshmel/pencil_style
|
igorshmel
| 2023-12-19T06:23:45Z | 1 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-12-19T06:10:59Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
Promt: wsctch
Promt: wsctch, sensual woman
Sample pictures of this concept:

|
brucethemoose/Capybara-Tess-Yi-34B-200K
|
brucethemoose
| 2023-12-19T06:22:31Z | 1,396 | 15 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"en",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-18T18:19:03Z |
---
license: other
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- merge
---
# Obsolete, succeeded by a new merge: **https://huggingface.co/brucethemoose/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity**
***
**NousResearch/Nous-Capybara-34B** and **migtissera/Tess-M-Creative-v1.0** ties merged with mergekit.
I would suggest an exllama version for local inference with 40K+ context in 24GB:
https://huggingface.co/brucethemoose/Capybara-Tess-Yi-34B-200K-exl2-4bpw-fiction
https://huggingface.co/brucethemoose/Capybara-Tess-Yi-34B-200K-exl2-31bpw-fiction
Merged with the following config:
```
models:
- model: /home/alpha/Storage/Models/Raw/larryvrh_Yi-34B-200K-Llamafied
# no parameters necessary for base model
- model: /home/alpha/Storage/Models/Raw/migtissera_Tess-M-v1.0
parameters:
density: 0.6
weight: 1.0
- model: /home/alpha/Storage/Models/Raw/Nous-Capybara-34B
parameters:
density: 0.6
weight: 1.0
merge_method: ties
base_model: //home/alpha/Storage/Models/Raw/larryvrh_Yi-34B-200K-Llamafied
parameters:
normalize: true
int8_mask: true
dtype: float16
```
Both are 200K context models with Vicuna syntax, so:
# Prompt Format:
```
SYSTEM: ...
USER: ...
ASSISTANT: ...
```
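As a sketch, the template above can be assembled with a small helper (hypothetical, not part of this repo):

```python
def build_orca_vicuna_prompt(system_message: str, prompt: str) -> str:
    # Assemble the SYSTEM/USER/ASSISTANT layout shown above
    return f"SYSTEM: {system_message}\nUSER: {prompt}\nASSISTANT:"

print(build_orca_vicuna_prompt("You are a helpful assistant.", "Hello!"))
```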
Sometimes the model "spells out" the stop token as `</s>` like Capybara, so you may need to add `</s>` this as an additional stopping condition.
***
Credits:
https://github.com/cg123/mergekit
https://huggingface.co/NousResearch/Nous-Capybara-34B/discussions
https://huggingface.co/migtissera/Tess-M-Creative-v1.0
https://huggingface.co/larryvrh/Yi-34B-200K-Llamafied
https://huggingface.co/01-ai/Yi-34B-200K
|
brucethemoose/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties
|
brucethemoose
| 2023-12-19T06:22:07Z | 1,395 | 4 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"merge",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-09T06:00:26Z |
---
license: other
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- text-generation-inference
- merge
---
A low density DARE ties merge, for benchmarking on the open llm leaderboard.
**You probably shouldn't use this model. Use this higher density merge instead, which is scoring much better on the llm leaderboard and perplexity tests:** https://huggingface.co/brucethemoose/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity
mergekit config:
```
models:
- model: /home/alpha/Storage/Models/Raw/chargoddard_Yi-34B-200K-Llama
# no parameters necessary for base model
- model: /home/alpha/Storage/Models/Raw/migtissera_Tess-34B-v1.4
parameters:
weight: 0.19
density: 0.44
- model: /home/alpha//Storage/Models/Raw/bhenrym14_airoboros-3_1-yi-34b-200k
parameters:
weight: 0.14
density: 0.34
- model: /home/alpha/Storage/Models/Raw/Nous-Capybara-34B
parameters:
weight: 0.19
density: 0.44
- model: /home/alpha/Storage/Models/Raw/kyujinpy_PlatYi-34B-200K-Q
parameters:
weight: 0.14
density: 0.34
- model: /home/alpha/FastModels/ehartford_dolphin-2.2-yi-34b-200k
parameters:
weight: 0.19
density: 0.44
- model: /home/alpha/FastModels/fblgit_una-xaberius-34b-v1beta
parameters:
weight: 0.15
density: 0.08
merge_method: dare_ties
base_model: /home/alpha/Storage/Models/Raw/chargoddard_Yi-34B-200K-Llama
parameters:
int8_mask: true
dtype: bfloat16
```
|
brucethemoose/Yi-34B-200K-DARE-merge-v5-4bpw-exl2-fiction
|
brucethemoose
| 2023-12-19T06:21:42Z | 6 | 6 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"merge",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-17T06:58:38Z |
---
license: other
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- text-generation-inference
- merge
---
[**Nous-Capybara-34B**](https://huggingface.co/NousResearch/Nous-Capybara-34B/), [**Tess-M-v1.4**](https://huggingface.co/migtissera/Tess-34B-v1.4), [**Airoboros-3_1-yi-34b-200k**](https://huggingface.co/bhenrym14/airoboros-3_1-yi-34b-200k), [**PlatYi-34B-200K-Q**](https://huggingface.co/kyujinpy/PlatYi-34B-200k-Q-FastChat), [**Pallas-0.4**](https://huggingface.co/Mihaiii/Pallas-0.4), [**Yi-34B-200K-AEZAKMI-v2**](https://huggingface.co/adamo1139/Yi-34B-200K-AEZAKMI-v2), and a tiny bit of [**SUS-Chat-34B**](https://huggingface.co/SUSTech/SUS-Chat-34B) merged with a new, experimental implementation of "dare ties" via mergekit.
See the main model card: https://huggingface.co/brucethemoose/Yi-34B-200K-DARE-merge-v5
The merge was then quantized with exllamav2 0.0.11's new exl2 quantization, using 300K tokens from a sci-fi story, a fantasy story, and a Vicuna-format chat as profiling data at a high context size. This should result in excellent writing performance for the model size.
This 4bpw quantization can fit ~**45K Context on a 24GB GPU** at high quality.
***
## Prompt template: Orca-Vicuna
```
SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
```
It might recognize ChatML, or maybe Llama-chat from Airoboros.
Sometimes the model "spells out" the stop token as `</s>` like Capybara, so you may need to add `</s>` as an additional stopping condition.
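If your frontend doesn't expose a literal-string stop condition, a minimal post-processing sketch can trim the spelled-out token instead (the helper name here is hypothetical, not part of any library):

```python
def trim_at_stop(text: str, stop_strings=("</s>",)) -> str:
    """Cut generated text at the first occurrence of any stop string."""
    for stop in stop_strings:
        idx = text.find(stop)
        if idx != -1:
            text = text[:idx]
    return text

print(trim_at_stop("ASSISTANT: Sure, here you go.</s> USER: next turn"))
# → ASSISTANT: Sure, here you go.
```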
***
## Running
Being a Yi model, try running a lower temperature with 0.05-0.1 MinP, a little repetition penalty, and no other samplers. Yi tends to run "hot" by default.
24GB GPUs can run Yi-34B-200K models at **45K-75K context** with exllamav2, and performant UIs like [exui](https://github.com/turboderp/exui). I go into more detail in this [post](https://old.reddit.com/r/LocalLLaMA/comments/1896igc/how_i_run_34b_models_at_75k_context_on_24gb_fast/).
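The sampler suggestions above can be sketched as a settings dict. The parameter names follow common text-generation-webui/exllamav2 conventions and may differ in your frontend; the exact values are illustrative assumptions:

```python
# Hypothetical sampler config for a Yi-34B-200K merge: cooler temperature,
# 0.05-0.1 MinP, a little repetition penalty, and other samplers disabled.
sampler_settings = {
    "temperature": 0.8,          # Yi runs "hot" by default, so go lower
    "min_p": 0.08,               # within the suggested 0.05-0.1 range
    "repetition_penalty": 1.05,  # a little repetition penalty
    "top_p": 1.0,                # effectively disabled
    "top_k": 0,                  # disabled
}
```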
***
## Commands
First pass:
```
python convert.py --in_dir /home/alpha/FastModels/Yi-34B-200K-DARE-merge-v5 -o /home/alpha/FastModels/scratch -om /home/alpha/FastModels/v5.json --cal_dataset /home/alpha/Documents/smol.parquet -ml 32768 -mr 9 -ss 4096 -b 4.0 -hb 6 -nr
```
Second pass:
```
python convert.py --in_dir /home/alpha/FastModels/Yi-34B-200K-DARE-merge-v5 -o /home/alpha/FastModels/scratch -m /home/alpha/FastModels/v5.json --cal_dataset /home/alpha/Documents/medium.parquet -l 12288 -r 29 -ml 32768 -mr 9 -ss 4096 -b 4.0 -hb 6 -cf /home/alpha/FastModels/Yi-34B-200K-DARE-merge-v5-exl2-4bpw-fiction -nr
```
Merged in mergekit with the following config, and the tokenizer from chargoddard's Yi-Llama:
```
models:
- model: /home/alpha/Storage/Models/Raw/chargoddard_Yi-34B-200K-Llama
# No parameters necessary for base model
- model: /home/alpha/Storage/Models/Raw/migtissera_Tess-34B-v1.4
# Less weight than previous merge since Pallas is a finetune of Tess
parameters:
weight: 0.14
density: 0.62
- model: /home/alpha/FastModels/Mihaiii_Pallas-0.4
parameters:
weight: 0.14
density: 0.62
- model: /home/alpha//Storage/Models/Raw/bhenrym14_airoboros-3_1-yi-34b-200k
parameters:
weight: 0.14
density: 0.52
- model: /home/alpha/Storage/Models/Raw/Nous-Capybara-34B
parameters:
weight: 0.22
density: 0.62
- model: /home/alpha/Storage/Models/Raw/kyujinpy_PlatYi-34B-200k-Q-FastChat
parameters:
weight: 0.14
density: 0.52
#- model: /home/alpha/Storage/Models/Raw/ehartford_dolphin-2.2-yi-34b-200k
# Dolphin 200K seems to be broken according to multiple leaderboards and perplexity tests?
# parameters:
# weight: 0.15
# density: 0.6
- model: /home/alpha/Models/Raw/adamo1139_Yi-34B-200K-AEZAKMI-v2
parameters:
weight: 0.14
density: 0.52
- model: /home/alpha/Models/Raw/SUSTech_SUS-Chat-34B/
    # Very low density and low weight since it's a Yi 4K finetune, to try and preserve long context performance while "keeping" some of SUS
parameters:
weight: 0.08
density: 0.08
merge_method: dare_ties
base_model: /home/alpha/Storage/Models/Raw/chargoddard_Yi-34B-200K-Llama
parameters:
int8_mask: true
dtype: bfloat16
```
***
## Credits:
https://github.com/cg123/mergekit/tree/dare
https://huggingface.co/NousResearch/Nous-Capybara-34B/
https://huggingface.co/bhenrym14/airoboros-3_1-yi-34b-200k
https://huggingface.co/migtissera/Tess-M-v1.4
https://huggingface.co/kyujinpy/PlatYi-34B-200k-Q-FastChat
https://huggingface.co/adamo1139/Yi-34B-200K-AEZAKMI-v2
https://huggingface.co/Mihaiii/Pallas-0.4
https://huggingface.co/SUSTech/SUS-Chat-34B
https://huggingface.co/chargoddard/Yi-34B-200K-Llama
https://huggingface.co/01-ai/Yi-34B-200K
|
brucethemoose/Yi-34B-200K-DARE-merge-v5-2.67bpw-exl2-fiction
|
brucethemoose
| 2023-12-19T06:21:23Z | 5 | 2 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"merge",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-17T06:59:23Z |
---
license: other
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- text-generation-inference
- merge
---
[**Nous-Capybara-34B**](https://huggingface.co/NousResearch/Nous-Capybara-34B/), [**Tess-M-v1.4**](https://huggingface.co/migtissera/Tess-34B-v1.4), [**Airoboros-3_1-yi-34b-200k**](https://huggingface.co/bhenrym14/airoboros-3_1-yi-34b-200k), [**PlatYi-34B-200K-Q**](https://huggingface.co/kyujinpy/PlatYi-34B-200k-Q-FastChat), [**Pallas-0.4**](https://huggingface.co/Mihaiii/Pallas-0.4), [**Yi-34B-200K-AEZAKMI-v2**](https://huggingface.co/adamo1139/Yi-34B-200K-AEZAKMI-v2), and a tiny bit of [**SUS-Chat-34B**](https://huggingface.co/SUSTech/SUS-Chat-34B) merged with a new, experimental implementation of "dare ties" via mergekit.
See the main model card: https://huggingface.co/brucethemoose/Yi-34B-200K-DARE-merge-v5
The merge was then quantized with exllamav2 0.0.11's new exl2 quantization, using 300K tokens from a sci-fi story, a fantasy story, and a Vicuna-format chat as profiling data at a high context size. This should result in excellent writing performance for the model size.
This 2.67bpw quantization can fit **Long Context on a 16GB GPU** at usable quality.
***
## Prompt template: Orca-Vicuna
```
SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
```
It might recognize ChatML, or maybe Llama-chat from Airoboros.
Sometimes the model "spells out" the stop token as `</s>` like Capybara, so you may need to add `</s>` as an additional stopping condition.
***
## Running
Being a Yi model, try running a lower temperature with 0.05-0.1 MinP, a little repetition penalty, and no other samplers. Yi tends to run "hot" by default.
24GB GPUs can run Yi-34B-200K models at **45K-75K context** with exllamav2, and performant UIs like [exui](https://github.com/turboderp/exui). I go into more detail in this [post](https://old.reddit.com/r/LocalLLaMA/comments/1896igc/how_i_run_34b_models_at_75k_context_on_24gb_fast/).
***
## Commands
First pass:
```
python convert.py --in_dir /home/alpha/FastModels/Yi-34B-200K-DARE-merge-v5 -o /home/alpha/FastModels/scratch -om /home/alpha/FastModels/v5.json --cal_dataset /home/alpha/Documents/smol.parquet -ml 32768 -mr 9 -ss 4096 -b 4.0 -hb 6 -nr
```
Second pass:
```
python convert.py --in_dir /home/alpha/FastModels/Yi-34B-200K-DARE-merge-v5 -o /home/alpha/FastModels/scratch -m /home/alpha/FastModels/v5.json --cal_dataset /home/alpha/Documents/medium.parquet -l 12288 -r 29 -ml 32768 -mr 9 -ss 4096 -b 2.67 -hb 6 -cf /home/alpha/FastModels/Yi-34B-200K-DARE-merge-v5-exl2-2.67bpw-fiction -nr
```
Merged in mergekit with the following config, and the tokenizer from chargoddard's Yi-Llama:
```
models:
- model: /home/alpha/Storage/Models/Raw/chargoddard_Yi-34B-200K-Llama
# No parameters necessary for base model
- model: /home/alpha/Storage/Models/Raw/migtissera_Tess-34B-v1.4
# Less weight than previous merge since Pallas is a finetune of Tess
parameters:
weight: 0.14
density: 0.62
- model: /home/alpha/FastModels/Mihaiii_Pallas-0.4
parameters:
weight: 0.14
density: 0.62
- model: /home/alpha//Storage/Models/Raw/bhenrym14_airoboros-3_1-yi-34b-200k
parameters:
weight: 0.14
density: 0.52
- model: /home/alpha/Storage/Models/Raw/Nous-Capybara-34B
parameters:
weight: 0.22
density: 0.62
- model: /home/alpha/Storage/Models/Raw/kyujinpy_PlatYi-34B-200k-Q-FastChat
parameters:
weight: 0.14
density: 0.52
#- model: /home/alpha/Storage/Models/Raw/ehartford_dolphin-2.2-yi-34b-200k
# Dolphin 200K seems to be broken according to multiple leaderboards and perplexity tests?
# parameters:
# weight: 0.15
# density: 0.6
- model: /home/alpha/Models/Raw/adamo1139_Yi-34B-200K-AEZAKMI-v2
parameters:
weight: 0.14
density: 0.52
- model: /home/alpha/Models/Raw/SUSTech_SUS-Chat-34B/
    # Very low density and low weight since it's a Yi 4K finetune, to try and preserve long context performance while "keeping" some of SUS
parameters:
weight: 0.08
density: 0.08
merge_method: dare_ties
base_model: /home/alpha/Storage/Models/Raw/chargoddard_Yi-34B-200K-Llama
parameters:
int8_mask: true
dtype: bfloat16
```
***
## Credits:
https://github.com/cg123/mergekit/tree/dare
https://huggingface.co/NousResearch/Nous-Capybara-34B/
https://huggingface.co/bhenrym14/airoboros-3_1-yi-34b-200k
https://huggingface.co/migtissera/Tess-M-v1.4
https://huggingface.co/kyujinpy/PlatYi-34B-200k-Q-FastChat
https://huggingface.co/adamo1139/Yi-34B-200K-AEZAKMI-v2
https://huggingface.co/Mihaiii/Pallas-0.4
https://huggingface.co/SUSTech/SUS-Chat-34B
https://huggingface.co/chargoddard/Yi-34B-200K-Llama
https://huggingface.co/01-ai/Yi-34B-200K
|
nm-testing/MiniChat-1.5-3B-pruned70-quant-ds
|
nm-testing
| 2023-12-19T06:11:42Z | 6 | 0 |
transformers
|
[
"transformers",
"onnx",
"llama",
"text-generation",
"deepsparse",
"arxiv:2301.00774",
"base_model:GeneZC/MiniChat-1.5-3B",
"base_model:quantized:GeneZC/MiniChat-1.5-3B",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2023-12-18T07:44:22Z |
---
base_model: GeneZC/MiniChat-1.5-3B
inference: false
model_type: llama
prompt_template: |
<s> [|User|]\n
{prompt}</s>
[|Assistant|]\n
quantized_by: mwitiderrick
tags:
- deepsparse
---
# MiniChat-1.5-3B - DeepSparse
This repo contains model files for [MiniChat-1.5-3B](https://huggingface.co/GeneZC/MiniChat-1.5-3B) optimized for [DeepSparse](https://github.com/neuralmagic/deepsparse), a CPU inference runtime for sparse models.
This model was quantized and pruned with [SparseGPT](https://arxiv.org/abs/2301.00774), using [SparseML](https://github.com/neuralmagic/sparseml).
## Inference
Install [DeepSparse LLM](https://github.com/neuralmagic/deepsparse) for fast inference on CPUs:
```bash
pip install deepsparse-nightly[llm]
```
Run in a [Python pipeline](https://github.com/neuralmagic/deepsparse/blob/main/docs/llms/text-generation-pipeline.md):
```python
from deepsparse import TextGeneration
prompt = "How to make banana bread?"
formatted_prompt = f"<s> [|User|]\n{prompt}</s>[|Assistant|]\n"
model = TextGeneration(model_path="hf:nm-testing/MiniChat-1.5-3B-pruned70-quant-ds")
print(model(formatted_prompt, max_new_tokens=200).generations[0].text)
"""
To make banana bread, you need to follow these steps:
1. Mix the ingredients:
- Flour
- Flour
- Flour
- Flour
- Flour
- Flour
- Flour
- Flour
- Flour
- Flour
- Flour
- Flour
- Flour
- Flour
- Flour
- Flour
- Flour
- Flour
- Flour
- Flour
- Flour
"""
```
## Prompt template
```
<s> [|User|]\n
{prompt}</s>
</s>[|Assistant|]\n
```
## Sparsification
For details on how this model was sparsified, see the `recipe.yaml` in this repo and follow the instructions below.
```bash
git clone https://github.com/neuralmagic/sparseml
pip install -e "sparseml[transformers]"
python sparseml/src/sparseml/transformers/sparsification/obcq/obcq.py GeneZC/MiniChat-1.5-3B open_platypus --recipe recipe.yaml --save True
python sparseml/src/sparseml/transformers/sparsification/obcq/export.py --task text-generation --model_path obcq_deployment
cp deployment/model.onnx deployment/model-orig.onnx
```
Run this KV-cache injection to speed up inference by caching the key and value states:
```python
import os
import onnx
from sparseml.exporters.kv_cache_injector import KeyValueCacheInjector
input_file = "deployment/model-orig.onnx"
output_file = "deployment/model.onnx"
model = onnx.load(input_file, load_external_data=False)
model = KeyValueCacheInjector(model_path=os.path.dirname(input_file)).apply(model)
onnx.save(model, output_file)
print(f"Modified model saved to: {output_file}")
```
Follow the instructions on our [One Shot With SparseML](https://github.com/neuralmagic/sparseml/tree/main/src/sparseml/transformers/sparsification/obcq) page for a step-by-step guide for performing one-shot quantization of large language models.
## Slack
For further support, and discussions on these models and AI in general, join [Neural Magic's Slack Community](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-q1a1cnvo-YBoICSIw3L1dmQpjBeDurQ)
|
Maxx0/5306-zam1-j73y-0
|
Maxx0
| 2023-12-19T06:01:51Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"autotrain",
"text-generation",
"license:other",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-19T04:15:02Z |
---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
raja22/finetune-sd-abdul1
|
raja22
| 2023-12-19T06:01:02Z | 0 | 0 | null |
[
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-12-19T05:58:24Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### finetune_sd_abdul1 Dreambooth model trained by raja22 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
cdofitas/videomae-base-finetuned-ucf101-subset
|
cdofitas
| 2023-12-19T05:38:32Z | 10 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"base_model:finetune:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2023-11-09T12:59:41Z |
---
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-ucf101-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-ucf101-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1144
- Accuracy: 0.5987
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.5857 | 0.04 | 22 | 1.5601 | 0.3459 |
| 1.5269 | 1.04 | 44 | 1.4939 | 0.3151 |
| 1.3172 | 2.04 | 66 | 1.1239 | 0.4658 |
| 1.2214 | 3.04 | 88 | 1.1980 | 0.4623 |
| 1.208 | 4.04 | 110 | 1.2465 | 0.4555 |
| 1.0422 | 5.04 | 132 | 1.2294 | 0.4966 |
| 1.0219 | 6.04 | 154 | 1.1516 | 0.5240 |
| 0.9113 | 7.04 | 176 | 1.2117 | 0.5068 |
| 1.035 | 8.04 | 198 | 1.0770 | 0.5616 |
| 0.8992 | 9.04 | 220 | 1.0658 | 0.5582 |
| 0.7292 | 10.04 | 242 | 1.2217 | 0.5445 |
| 0.8545 | 11.04 | 264 | 1.0260 | 0.5514 |
| 0.5855 | 12.04 | 286 | 1.0646 | 0.5993 |
| 0.7059 | 13.04 | 308 | 1.1769 | 0.5582 |
| 0.8109 | 14.04 | 330 | 1.1800 | 0.5137 |
| 0.6262 | 15.04 | 352 | 1.0740 | 0.5890 |
| 0.6297 | 16.04 | 374 | 1.0434 | 0.5719 |
| 0.7063 | 17.04 | 396 | 1.0205 | 0.5548 |
| 0.587 | 18.04 | 418 | 0.9799 | 0.6027 |
| 0.6087 | 19.04 | 440 | 0.9967 | 0.6164 |
| 0.5973 | 20.04 | 462 | 0.9683 | 0.6096 |
| 0.6971 | 21.04 | 484 | 1.0395 | 0.6027 |
| 0.6395 | 22.03 | 500 | 1.0663 | 0.5993 |
### Framework versions
- Transformers 4.35.2
- Pytorch 1.13.1
- Datasets 2.15.0
- Tokenizers 0.15.0
|
FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t18_e75
|
FounderOfHuggingface
| 2023-12-19T05:36:24Z | 1 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-12-19T05:36:21Z |
---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
## Training procedure
### Framework versions
- PEFT 0.6.2
|
Guru-Prasad/Falcon_Tweet
|
Guru-Prasad
| 2023-12-19T05:21:52Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:vilsonrodrigues/falcon-7b-instruct-sharded",
"base_model:adapter:vilsonrodrigues/falcon-7b-instruct-sharded",
"region:us"
] | null | 2023-12-19T05:18:37Z |
---
library_name: peft
base_model: vilsonrodrigues/falcon-7b-instruct-sharded
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
seoulsky-field/KoBART_jeju_dialect
|
seoulsky-field
| 2023-12-19T05:10:30Z | 11 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:gogamza/kobart-base-v2",
"base_model:finetune:gogamza/kobart-base-v2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-12-19T04:10:37Z |
---
license: mit
base_model: gogamza/kobart-base-v2
tags:
- generated_from_trainer
model-index:
- name: KoBART_base_v2-trial2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# KoBART_base_v2-trial2
This model is a fine-tuned version of [gogamza/kobart-base-v2](https://huggingface.co/gogamza/kobart-base-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1820
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.3889 | 0.11 | 50 | 0.5425 |
| 0.5339 | 0.22 | 100 | 0.4328 |
| 0.4609 | 0.32 | 150 | 0.4180 |
| 0.4631 | 0.43 | 200 | 0.4167 |
| 0.4065 | 0.54 | 250 | 0.3775 |
| 0.3898 | 0.65 | 300 | 0.3539 |
| 0.3637 | 0.76 | 350 | 0.3389 |
| 0.3347 | 0.87 | 400 | 0.3275 |
| 0.3428 | 0.97 | 450 | 0.3087 |
| 0.2871 | 1.08 | 500 | 0.3189 |
| 0.2843 | 1.19 | 550 | 0.3016 |
| 0.2685 | 1.3 | 600 | 0.2954 |
| 0.2603 | 1.41 | 650 | 0.2860 |
| 0.2636 | 1.52 | 700 | 0.2804 |
| 0.2586 | 1.62 | 750 | 0.2821 |
| 0.2485 | 1.73 | 800 | 0.2674 |
| 0.2483 | 1.84 | 850 | 0.2662 |
| 0.2322 | 1.95 | 900 | 0.2525 |
| 0.2052 | 2.06 | 950 | 0.2634 |
| 0.1838 | 2.16 | 1000 | 0.2472 |
| 0.1859 | 2.27 | 1050 | 0.2432 |
| 0.1887 | 2.38 | 1100 | 0.2392 |
| 0.1756 | 2.49 | 1150 | 0.2314 |
| 0.1697 | 2.6 | 1200 | 0.2332 |
| 0.1741 | 2.71 | 1250 | 0.2257 |
| 0.1665 | 2.81 | 1300 | 0.2204 |
| 0.1655 | 2.92 | 1350 | 0.2097 |
| 0.1539 | 3.03 | 1400 | 0.2141 |
| 0.126 | 3.14 | 1450 | 0.2129 |
| 0.1241 | 3.25 | 1500 | 0.2068 |
| 0.1266 | 3.35 | 1550 | 0.1999 |
| 0.1161 | 3.46 | 1600 | 0.1996 |
| 0.1183 | 3.57 | 1650 | 0.1943 |
| 0.1123 | 3.68 | 1700 | 0.1914 |
| 0.1096 | 3.79 | 1750 | 0.1881 |
| 0.1089 | 3.9 | 1800 | 0.1835 |
| 0.1096 | 4.0 | 1850 | 0.1803 |
| 0.0857 | 4.11 | 1900 | 0.1873 |
| 0.0833 | 4.22 | 1950 | 0.1857 |
| 0.0791 | 4.33 | 2000 | 0.1871 |
| 0.0825 | 4.44 | 2050 | 0.1852 |
| 0.0813 | 4.55 | 2100 | 0.1834 |
| 0.0806 | 4.65 | 2150 | 0.1830 |
| 0.0805 | 4.76 | 2200 | 0.1822 |
| 0.0786 | 4.87 | 2250 | 0.1820 |
| 0.0775 | 4.98 | 2300 | 0.1820 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.0.1+cu117
- Datasets 2.15.0
- Tokenizers 0.15.0
GALAIXYZ/GALFreak | GALAIXYZ | 2023-12-19T05:10:24Z | 0 | 0 | null | ["license:creativeml-openrail-m", "region:us"] | null | 2023-10-31T03:45:54Z
---
license: creativeml-openrail-m
---
sdpkjc/Swimmer-v4-td3_continuous_action-seed4 | sdpkjc | 2023-12-19T04:52:30Z | 0 | 0 | cleanrl | ["cleanrl", "tensorboard", "Swimmer-v4", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us"] | reinforcement-learning | 2023-12-19T04:52:26Z
---
tags:
- Swimmer-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: TD3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Swimmer-v4
type: Swimmer-v4
metrics:
- type: mean_reward
value: 116.65 +/- 1.38
name: mean_reward
verified: false
---
# (CleanRL) **TD3** Agent Playing **Swimmer-v4**
This is a trained model of a TD3 agent playing Swimmer-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/td3_continuous_action.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[td3_continuous_action]"
python -m cleanrl_utils.enjoy --exp-name td3_continuous_action --env-id Swimmer-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/sdpkjc/Swimmer-v4-td3_continuous_action-seed4/raw/main/td3_continuous_action.py
curl -OL https://huggingface.co/sdpkjc/Swimmer-v4-td3_continuous_action-seed4/raw/main/pyproject.toml
curl -OL https://huggingface.co/sdpkjc/Swimmer-v4-td3_continuous_action-seed4/raw/main/poetry.lock
poetry install --all-extras
python td3_continuous_action.py --save-model --upload-model --hf-entity sdpkjc --env-id Swimmer-v4 --seed 4 --track
```
# Hyperparameters
```python
{'batch_size': 256,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'env_id': 'Swimmer-v4',
'exp_name': 'td3_continuous_action',
'exploration_noise': 0.1,
'gamma': 0.99,
'hf_entity': 'sdpkjc',
'learning_rate': 0.0003,
'learning_starts': 25000.0,
'noise_clip': 0.5,
'policy_frequency': 2,
'policy_noise': 0.2,
'save_model': True,
'seed': 4,
'tau': 0.005,
'torch_deterministic': True,
'total_timesteps': 1000000,
'track': True,
'upload_model': True,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
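In the hyperparameters above, `policy_noise` and `noise_clip` drive TD3's target policy smoothing: clipped Gaussian noise is added to the target policy's action before the result is clipped to the action bounds. A minimal stdlib sketch of that step (the `[-1, 1]` bounds are illustrative, not taken from the environment):

```python
import random

def smoothed_target_action(action, policy_noise=0.2, noise_clip=0.5,
                           low=-1.0, high=1.0):
    """TD3 target policy smoothing: clip the Gaussian noise, add it to the
    target action, then clip to the action bounds."""
    noise = max(-noise_clip, min(noise_clip, random.gauss(0.0, policy_noise)))
    return max(low, min(high, action + noise))

random.seed(0)
samples = [smoothed_target_action(0.9) for _ in range(1000)]
# every sample stays within the bounds and within noise_clip of the action
```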
chienweichang/formatted_address | chienweichang | 2023-12-19T04:49:04Z | 6 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "mt5", "text2text-generation", "generated_from_trainer", "dataset:cwchang/tw_address_large", "base_model:google/mt5-small", "base_model:finetune:google/mt5-small", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2023-12-19T03:36:12Z
---
license: apache-2.0
base_model: google/mt5-small
tags:
- generated_from_trainer
datasets:
- cwchang/tw_address_large
metrics:
- rouge
model-index:
- name: formatted_address
results:
- task:
name: Summarization
type: summarization
dataset:
name: cwchang/tw_address_large
type: cwchang/tw_address_large
metrics:
- name: Rouge1
type: rouge
value: 97.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# formatted_address
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the cwchang/tw_address_large dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1388
- Rouge1: 97.0
- Rouge2: 48.3471
- Rougel: 96.996
- Rougelsum: 96.9932
- Gen Len: 13.7152
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
rubble/HelpCode | rubble | 2023-12-19T04:44:34Z | 0 | 0 | null | ["code", "en", "license:apache-2.0", "region:us"] | null | 2023-12-19T04:42:55Z
---
license: apache-2.0
language:
- en
metrics:
- accuracy
tags:
- code
---
ziumks/zwave-dbgatekeeper-v0.2 | ziumks | 2023-12-19T04:41:15Z | 0 | 0 | peft | ["peft", "safetensors", "generated_from_trainer", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2", "license:apache-2.0", "region:us"] | null | 2023-12-19T04:40:44Z
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.2
model-index:
- name: mistral-sql-finetune
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-sql-finetune
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7687
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1
- training_steps: 300
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.349 | 0.52 | 25 | 1.5889 |
| 1.1213 | 1.04 | 50 | 1.3232 |
| 0.6369 | 1.56 | 75 | 1.3226 |
| 0.525 | 2.08 | 100 | 1.3375 |
| 0.256 | 2.6 | 125 | 1.5105 |
| 0.2127 | 3.12 | 150 | 1.4755 |
| 0.145 | 3.65 | 175 | 1.6525 |
| 0.1388 | 4.17 | 200 | 1.7168 |
| 0.114 | 4.69 | 225 | 1.7377 |
| 0.1076 | 5.21 | 250 | 1.7087 |
| 0.0986 | 5.73 | 275 | 1.7649 |
| 0.0912 | 6.25 | 300 | 1.7687 |
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.37.0.dev0
- Pytorch 2.1.2
- Datasets 2.15.0
- Tokenizers 0.15.0
sdpkjc/Swimmer-v4-td3_continuous_action-seed5 | sdpkjc | 2023-12-19T04:39:19Z | 0 | 0 | cleanrl | ["cleanrl", "tensorboard", "Swimmer-v4", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us"] | reinforcement-learning | 2023-12-19T04:39:14Z
---
tags:
- Swimmer-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: TD3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Swimmer-v4
type: Swimmer-v4
metrics:
- type: mean_reward
value: 80.59 +/- 12.45
name: mean_reward
verified: false
---
# (CleanRL) **TD3** Agent Playing **Swimmer-v4**
This is a trained model of a TD3 agent playing Swimmer-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/td3_continuous_action.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[td3_continuous_action]"
python -m cleanrl_utils.enjoy --exp-name td3_continuous_action --env-id Swimmer-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/sdpkjc/Swimmer-v4-td3_continuous_action-seed5/raw/main/td3_continuous_action.py
curl -OL https://huggingface.co/sdpkjc/Swimmer-v4-td3_continuous_action-seed5/raw/main/pyproject.toml
curl -OL https://huggingface.co/sdpkjc/Swimmer-v4-td3_continuous_action-seed5/raw/main/poetry.lock
poetry install --all-extras
python td3_continuous_action.py --save-model --upload-model --hf-entity sdpkjc --env-id Swimmer-v4 --seed 5 --track
```
# Hyperparameters
```python
{'batch_size': 256,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'env_id': 'Swimmer-v4',
'exp_name': 'td3_continuous_action',
'exploration_noise': 0.1,
'gamma': 0.99,
'hf_entity': 'sdpkjc',
'learning_rate': 0.0003,
'learning_starts': 25000.0,
'noise_clip': 0.5,
'policy_frequency': 2,
'policy_noise': 0.2,
'save_model': True,
'seed': 5,
'tau': 0.005,
'torch_deterministic': True,
'total_timesteps': 1000000,
'track': True,
'upload_model': True,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
sdpkjc/Swimmer-v4-td3_continuous_action-seed3 | sdpkjc | 2023-12-19T04:37:50Z | 0 | 0 | cleanrl | ["cleanrl", "tensorboard", "Swimmer-v4", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us"] | reinforcement-learning | 2023-12-19T04:37:41Z
---
tags:
- Swimmer-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: TD3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Swimmer-v4
type: Swimmer-v4
metrics:
- type: mean_reward
value: 60.66 +/- 11.88
name: mean_reward
verified: false
---
# (CleanRL) **TD3** Agent Playing **Swimmer-v4**
This is a trained model of a TD3 agent playing Swimmer-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/td3_continuous_action.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[td3_continuous_action]"
python -m cleanrl_utils.enjoy --exp-name td3_continuous_action --env-id Swimmer-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/sdpkjc/Swimmer-v4-td3_continuous_action-seed3/raw/main/td3_continuous_action.py
curl -OL https://huggingface.co/sdpkjc/Swimmer-v4-td3_continuous_action-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/sdpkjc/Swimmer-v4-td3_continuous_action-seed3/raw/main/poetry.lock
poetry install --all-extras
python td3_continuous_action.py --save-model --upload-model --hf-entity sdpkjc --env-id Swimmer-v4 --seed 3 --track
```
# Hyperparameters
```python
{'batch_size': 256,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'env_id': 'Swimmer-v4',
'exp_name': 'td3_continuous_action',
'exploration_noise': 0.1,
'gamma': 0.99,
'hf_entity': 'sdpkjc',
'learning_rate': 0.0003,
'learning_starts': 25000.0,
'noise_clip': 0.5,
'policy_frequency': 2,
'policy_noise': 0.2,
'save_model': True,
'seed': 3,
'tau': 0.005,
'torch_deterministic': True,
'total_timesteps': 1000000,
'track': True,
'upload_model': True,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
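The `tau: 0.005` entry above is the Polyak averaging coefficient: after each update, every target-network parameter moves a small fraction of the way toward its online counterpart. A minimal sketch, with plain lists standing in for network parameters:

```python
def soft_update(target_params, online_params, tau=0.005):
    """Polyak averaging: target <- tau * online + (1 - tau) * target."""
    return [tau * o + (1.0 - tau) * t
            for t, o in zip(target_params, online_params)]

target = soft_update([0.0, 1.0], [1.0, 0.0])
# each target parameter moved 0.5% of the way toward the online value
```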
cuongdz01/layoutlm-funsd-c | cuongdz01 | 2023-12-19T04:36:36Z | 5 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "layoutlm", "token-classification", "generated_from_trainer", "dataset:funsd", "base_model:microsoft/layoutlm-base-uncased", "base_model:finetune:microsoft/layoutlm-base-uncased", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2023-12-19T03:59:57Z
---
license: mit
base_model: microsoft/layoutlm-base-uncased
tags:
- generated_from_trainer
datasets:
- funsd
model-index:
- name: layoutlm-funsd-c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlm-funsd-c
This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on the funsd dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7152
- Answer: {'precision': 0.7134894091415831, 'recall': 0.7911001236093943, 'f1': 0.7502930832356389, 'number': 809}
- Header: {'precision': 0.31007751937984496, 'recall': 0.33613445378151263, 'f1': 0.3225806451612903, 'number': 119}
- Question: {'precision': 0.7805309734513274, 'recall': 0.828169014084507, 'f1': 0.8036446469248291, 'number': 1065}
- Overall Precision: 0.7245
- Overall Recall: 0.7837
- Overall F1: 0.7530
- Overall Accuracy: 0.8069
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Answer | Header | Question | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-------------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 1.7835 | 1.0 | 10 | 1.5696 | {'precision': 0.02753303964757709, 'recall': 0.030902348578491966, 'f1': 0.029120559114735003, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.23644444444444446, 'recall': 0.24976525821596243, 'f1': 0.24292237442922376, 'number': 1065} | 0.1431 | 0.1460 | 0.1446 | 0.4162 |
| 1.4134 | 2.0 | 20 | 1.2167 | {'precision': 0.15942028985507245, 'recall': 0.13597033374536466, 'f1': 0.1467645096731154, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.42325227963525835, 'recall': 0.5230046948356808, 'f1': 0.4678706425871483, 'number': 1065} | 0.3322 | 0.3347 | 0.3334 | 0.5768 |
| 1.0829 | 3.0 | 30 | 0.9351 | {'precision': 0.4783599088838269, 'recall': 0.519159456118665, 'f1': 0.4979253112033195, 'number': 809} | {'precision': 0.034482758620689655, 'recall': 0.008403361344537815, 'f1': 0.013513513513513513, 'number': 119} | {'precision': 0.6103896103896104, 'recall': 0.6619718309859155, 'f1': 0.6351351351351351, 'number': 1065} | 0.5461 | 0.5650 | 0.5554 | 0.7105 |
| 0.8077 | 4.0 | 40 | 0.7702 | {'precision': 0.6122233930453108, 'recall': 0.7181705809641533, 'f1': 0.6609783845278726, 'number': 809} | {'precision': 0.2033898305084746, 'recall': 0.10084033613445378, 'f1': 0.13483146067415733, 'number': 119} | {'precision': 0.6381631037212985, 'recall': 0.7568075117370892, 'f1': 0.6924398625429553, 'number': 1065} | 0.6160 | 0.7020 | 0.6562 | 0.7659 |
| 0.6407 | 5.0 | 50 | 0.7146 | {'precision': 0.6491978609625668, 'recall': 0.7503090234857849, 'f1': 0.6961009174311926, 'number': 809} | {'precision': 0.2948717948717949, 'recall': 0.19327731092436976, 'f1': 0.233502538071066, 'number': 119} | {'precision': 0.6921221864951769, 'recall': 0.8084507042253521, 'f1': 0.7457773928107406, 'number': 1065} | 0.6606 | 0.7481 | 0.7016 | 0.7869 |
| 0.5585 | 6.0 | 60 | 0.6995 | {'precision': 0.673866090712743, 'recall': 0.7713226205191595, 'f1': 0.7193083573487031, 'number': 809} | {'precision': 0.3372093023255814, 'recall': 0.24369747899159663, 'f1': 0.2829268292682927, 'number': 119} | {'precision': 0.7374784110535406, 'recall': 0.8018779342723005, 'f1': 0.768331084120558, 'number': 1065} | 0.6945 | 0.7561 | 0.7240 | 0.7948 |
| 0.4934 | 7.0 | 70 | 0.6852 | {'precision': 0.6681222707423581, 'recall': 0.7564894932014833, 'f1': 0.7095652173913044, 'number': 809} | {'precision': 0.37777777777777777, 'recall': 0.2857142857142857, 'f1': 0.3253588516746411, 'number': 119} | {'precision': 0.7634408602150538, 'recall': 0.8, 'f1': 0.7812929848693261, 'number': 1065} | 0.7059 | 0.7516 | 0.7281 | 0.7979 |
| 0.4384 | 8.0 | 80 | 0.6731 | {'precision': 0.6920492721164614, 'recall': 0.7639060568603214, 'f1': 0.7262044653349001, 'number': 809} | {'precision': 0.3008130081300813, 'recall': 0.31092436974789917, 'f1': 0.3057851239669422, 'number': 119} | {'precision': 0.7508503401360545, 'recall': 0.8291079812206573, 'f1': 0.788041053101294, 'number': 1065} | 0.7016 | 0.7717 | 0.7350 | 0.8021 |
| 0.3737 | 9.0 | 90 | 0.6766 | {'precision': 0.6993392070484582, 'recall': 0.7849196538936959, 'f1': 0.7396622015142692, 'number': 809} | {'precision': 0.2992125984251969, 'recall': 0.31932773109243695, 'f1': 0.30894308943089427, 'number': 119} | {'precision': 0.7890974084003575, 'recall': 0.8291079812206573, 'f1': 0.8086080586080587, 'number': 1065} | 0.7224 | 0.7807 | 0.7504 | 0.8046 |
| 0.341 | 10.0 | 100 | 0.6950 | {'precision': 0.6888888888888889, 'recall': 0.7663782447466008, 'f1': 0.7255705090696314, 'number': 809} | {'precision': 0.3619047619047619, 'recall': 0.31932773109243695, 'f1': 0.33928571428571425, 'number': 119} | {'precision': 0.7859030837004405, 'recall': 0.8375586854460094, 'f1': 0.8109090909090909, 'number': 1065} | 0.7243 | 0.7777 | 0.7501 | 0.8088 |
| 0.3178 | 11.0 | 110 | 0.6979 | {'precision': 0.7157534246575342, 'recall': 0.7750309023485785, 'f1': 0.7442136498516321, 'number': 809} | {'precision': 0.375, 'recall': 0.35294117647058826, 'f1': 0.3636363636363636, 'number': 119} | {'precision': 0.7805092186128183, 'recall': 0.8347417840375587, 'f1': 0.8067150635208712, 'number': 1065} | 0.7325 | 0.7817 | 0.7563 | 0.8059 |
| 0.2998 | 12.0 | 120 | 0.7019 | {'precision': 0.7027624309392265, 'recall': 0.7861557478368356, 'f1': 0.7421236872812136, 'number': 809} | {'precision': 0.32061068702290074, 'recall': 0.35294117647058826, 'f1': 0.336, 'number': 119} | {'precision': 0.7885816235504014, 'recall': 0.8300469483568075, 'f1': 0.808783165599268, 'number': 1065} | 0.7242 | 0.7837 | 0.7528 | 0.8069 |
| 0.2809 | 13.0 | 130 | 0.7056 | {'precision': 0.7177777777777777, 'recall': 0.7985166872682324, 'f1': 0.7559976594499708, 'number': 809} | {'precision': 0.3565217391304348, 'recall': 0.3445378151260504, 'f1': 0.3504273504273504, 'number': 119} | {'precision': 0.7911504424778761, 'recall': 0.8394366197183099, 'f1': 0.8145785876993167, 'number': 1065} | 0.7371 | 0.7933 | 0.7641 | 0.8097 |
| 0.2656 | 14.0 | 140 | 0.7117 | {'precision': 0.718609865470852, 'recall': 0.792336217552534, 'f1': 0.7536743092298648, 'number': 809} | {'precision': 0.33884297520661155, 'recall': 0.3445378151260504, 'f1': 0.3416666666666667, 'number': 119} | {'precision': 0.7888198757763976, 'recall': 0.8347417840375587, 'f1': 0.8111313868613138, 'number': 1065} | 0.7341 | 0.7883 | 0.7602 | 0.8098 |
| 0.2669 | 15.0 | 150 | 0.7152 | {'precision': 0.7134894091415831, 'recall': 0.7911001236093943, 'f1': 0.7502930832356389, 'number': 809} | {'precision': 0.31007751937984496, 'recall': 0.33613445378151263, 'f1': 0.3225806451612903, 'number': 119} | {'precision': 0.7805309734513274, 'recall': 0.828169014084507, 'f1': 0.8036446469248291, 'number': 1065} | 0.7245 | 0.7837 | 0.7530 | 0.8069 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
ntc-ai/SDXL-LoRA-slider.shiny | ntc-ai | 2023-12-19T04:35:21Z | 17 | 1 | diffusers | ["diffusers", "text-to-image", "stable-diffusion-xl", "lora", "template:sd-lora", "template:sdxl-lora", "sdxl-sliders", "ntcai.xyz-sliders", "concept", "en", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:mit", "region:us"] | text-to-image | 2023-12-19T04:35:18Z
---
language:
- en
thumbnail: "images/evaluate/shiny...dull/shiny_17_3.0.png"
widget:
- text: shiny
output:
url: images/shiny_17_3.0.png
- text: shiny
output:
url: images/shiny_19_3.0.png
- text: shiny
output:
url: images/shiny_20_3.0.png
- text: shiny
output:
url: images/shiny_21_3.0.png
- text: shiny
output:
url: images/shiny_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "shiny"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---
# ntcai.xyz slider - shiny (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/shiny_17_-3.0.png" width=256 height=256 /> | <img src="images/shiny_17_0.0.png" width=256 height=256 /> | <img src="images/shiny_17_3.0.png" width=256 height=256 /> |
| <img src="images/shiny_19_-3.0.png" width=256 height=256 /> | <img src="images/shiny_19_0.0.png" width=256 height=256 /> | <img src="images/shiny_19_3.0.png" width=256 height=256 /> |
| <img src="images/shiny_20_-3.0.png" width=256 height=256 /> | <img src="images/shiny_20_0.0.png" width=256 height=256 /> | <img src="images/shiny_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
shiny
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.shiny', weight_name='shiny.safetensors', adapter_name="shiny")
# Activate the LoRA
pipe.set_adapters(["shiny"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, shiny"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 470+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
sdpkjc/Ant-v4-td3_continuous_action-seed5 | sdpkjc | 2023-12-19T04:34:14Z | 0 | 0 | cleanrl | ["cleanrl", "tensorboard", "Ant-v4", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us"] | reinforcement-learning | 2023-12-19T04:34:06Z
---
tags:
- Ant-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: TD3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Ant-v4
type: Ant-v4
metrics:
- type: mean_reward
value: 2204.28 +/- 800.23
name: mean_reward
verified: false
---
# (CleanRL) **TD3** Agent Playing **Ant-v4**
This is a trained model of a TD3 agent playing Ant-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/td3_continuous_action.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[td3_continuous_action]"
python -m cleanrl_utils.enjoy --exp-name td3_continuous_action --env-id Ant-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/sdpkjc/Ant-v4-td3_continuous_action-seed5/raw/main/td3_continuous_action.py
curl -OL https://huggingface.co/sdpkjc/Ant-v4-td3_continuous_action-seed5/raw/main/pyproject.toml
curl -OL https://huggingface.co/sdpkjc/Ant-v4-td3_continuous_action-seed5/raw/main/poetry.lock
poetry install --all-extras
python td3_continuous_action.py --save-model --upload-model --hf-entity sdpkjc --env-id Ant-v4 --seed 5 --track
```
# Hyperparameters
```python
{'batch_size': 256,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'env_id': 'Ant-v4',
'exp_name': 'td3_continuous_action',
'exploration_noise': 0.1,
'gamma': 0.99,
'hf_entity': 'sdpkjc',
'learning_rate': 0.0003,
'learning_starts': 25000.0,
'noise_clip': 0.5,
'policy_frequency': 2,
'policy_noise': 0.2,
'save_model': True,
'seed': 5,
'tau': 0.005,
'torch_deterministic': True,
'total_timesteps': 1000000,
'track': True,
'upload_model': True,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
sdpkjc/Swimmer-v4-td3_continuous_action-seed2 | sdpkjc | 2023-12-19T04:30:25Z | 0 | 0 | cleanrl | ["cleanrl", "tensorboard", "Swimmer-v4", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us"] | reinforcement-learning | 2023-12-19T04:30:20Z
---
tags:
- Swimmer-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: TD3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Swimmer-v4
type: Swimmer-v4
metrics:
- type: mean_reward
value: 84.65 +/- 18.55
name: mean_reward
verified: false
---
# (CleanRL) **TD3** Agent Playing **Swimmer-v4**
This is a trained model of a TD3 agent playing Swimmer-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/td3_continuous_action.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[td3_continuous_action]"
python -m cleanrl_utils.enjoy --exp-name td3_continuous_action --env-id Swimmer-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/sdpkjc/Swimmer-v4-td3_continuous_action-seed2/raw/main/td3_continuous_action.py
curl -OL https://huggingface.co/sdpkjc/Swimmer-v4-td3_continuous_action-seed2/raw/main/pyproject.toml
curl -OL https://huggingface.co/sdpkjc/Swimmer-v4-td3_continuous_action-seed2/raw/main/poetry.lock
poetry install --all-extras
python td3_continuous_action.py --save-model --upload-model --hf-entity sdpkjc --env-id Swimmer-v4 --seed 2 --track
```
# Hyperparameters
```python
{'batch_size': 256,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'env_id': 'Swimmer-v4',
'exp_name': 'td3_continuous_action',
'exploration_noise': 0.1,
'gamma': 0.99,
'hf_entity': 'sdpkjc',
'learning_rate': 0.0003,
'learning_starts': 25000.0,
'noise_clip': 0.5,
'policy_frequency': 2,
'policy_noise': 0.2,
'save_model': True,
'seed': 2,
'tau': 0.005,
'torch_deterministic': True,
'total_timesteps': 1000000,
'track': True,
'upload_model': True,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
dude121/ppo-SnowballTarget | dude121 | 2023-12-19T04:15:11Z | 11 | 0 | ml-agents | ["ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us"] | reinforcement-learning | 2023-12-19T04:15:07Z
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: dude121/ppo-SnowballTarget
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
jinnkenny99/mistral_twitter | jinnkenny99 | 2023-12-19T03:56:26Z | 0 | 0 | null | ["safetensors", "autotrain", "text-generation", "conversational", "license:other", "region:us"] | text-generation | 2023-12-15T10:34:36Z
---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```