| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
| ntc-ai/SDXL-LoRA-slider.mid-dance-move | ntc-ai | 2024-01-28T07:30:05Z | 20 | 1 | diffusers | ["diffusers", "text-to-image", "stable-diffusion-xl", "lora", "template:sd-lora", "template:sdxl-lora", "sdxl-sliders", "ntcai.xyz-sliders", "concept", "en", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:mit", "region:us"] | text-to-image | 2024-01-28T07:30:00Z |
---
language:
- en
thumbnail: "images/evaluate/mid-dance move.../mid-dance move_17_3.0.png"
widget:
- text: mid-dance move
output:
url: images/mid-dance move_17_3.0.png
- text: mid-dance move
output:
url: images/mid-dance move_19_3.0.png
- text: mid-dance move
output:
url: images/mid-dance move_20_3.0.png
- text: mid-dance move
output:
url: images/mid-dance move_21_3.0.png
- text: mid-dance move
output:
url: images/mid-dance move_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "mid-dance move"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---
# ntcai.xyz slider - mid-dance move (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/mid-dance move_17_-3.0.png" width=256 height=256 /> | <img src="images/mid-dance move_17_0.0.png" width=256 height=256 /> | <img src="images/mid-dance move_17_3.0.png" width=256 height=256 /> |
| <img src="images/mid-dance move_19_-3.0.png" width=256 height=256 /> | <img src="images/mid-dance move_19_0.0.png" width=256 height=256 /> | <img src="images/mid-dance move_19_3.0.png" width=256 height=256 /> |
| <img src="images/mid-dance move_20_-3.0.png" width=256 height=256 /> | <img src="images/mid-dance move_20_0.0.png" width=256 height=256 /> | <img src="images/mid-dance move_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
mid-dance move
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.mid-dance-move', weight_name='mid-dance move.safetensors', adapter_name="mid-dance move")
# Activate the LoRA
pipe.set_adapters(["mid-dance move"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, mid-dance move"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of more than 1,140 unique and diverse LoRAs covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, which lets you craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
| Jayanka/a-butterfly | Jayanka | 2024-01-28T07:20:03Z | 0 | 1 | diffusers | ["diffusers", "safetensors", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us"] | text-to-image | 2024-01-28T07:15:28Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### -A-Butterfly- Dreambooth model trained by Jayanka following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 4MN21CS023
Sample pictures of this concept:

| datawealthy/logo-classifier | datawealthy | 2024-01-28T07:16:26Z | 0 | 0 | null | ["image-classification", "en", "dataset:datawealthy/logo-classification", "license:mit", "region:us"] | image-classification | 2024-01-23T12:45:39Z |
---
license: mit
datasets:
- datawealthy/logo-classification
language:
- en
pipeline_tag: image-classification
---
| JesseGuerrero/deepseekAllDarkan | JesseGuerrero | 2024-01-28T07:10:57Z | 85 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-01-28T06:49:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| hitoshiNagaoka/xlm-roberta-base-finetuned-panx-de | hitoshiNagaoka | 2024-01-28T07:10:06Z | 91 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2024-01-27T23:56:07Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.860127157129882
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1353
- F1: 0.8601
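For quick inference, the fine-tuned checkpoint can be used through the 🤗 `pipeline` API; a minimal sketch (the example sentence is illustrative):
```python
from transformers import pipeline

# Token classification (NER) on German text, matching the PAN-X.de fine-tuning task.
ner = pipeline(
    "token-classification",
    model="hitoshiNagaoka/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```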
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2537 | 1.0 | 525 | 0.1713 | 0.8219 |
| 0.131 | 2.0 | 1050 | 0.1394 | 0.8492 |
| 0.0824 | 3.0 | 1575 | 0.1353 | 0.8601 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.1.0+cu121
- Datasets 1.16.1
- Tokenizers 0.15.1
| coke0zero/ppo-SnowballTarget | coke0zero | 2024-01-28T07:06:45Z | 1 | 0 | ml-agents | ["ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us"] | reinforcement-learning | 2024-01-28T07:06:38Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser** (to fetch the files locally instead, see the sketch below):
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: coke0zero/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
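To download the trained policy and TensorBoard logs locally (for example, to inspect the `.onnx` file), a short sketch using `huggingface_hub`:
```python
from huggingface_hub import snapshot_download

# Fetch all files from the model repo; returns the local cache directory.
local_dir = snapshot_download(repo_id="coke0zero/ppo-SnowballTarget")
print(local_dir)
```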
| thiagobarbosa/whisper-base-common-voice-16-pt-v6 | thiagobarbosa | 2024-01-28T07:00:59Z | 108 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "hf-asr-leaderboard", "generated_from_trainer", "pt", "dataset:mozilla-foundation/common_voice_16_0", "base_model:openai/whisper-base", "base_model:finetune:openai/whisper-base", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2024-01-23T13:24:04Z |
---
language:
- pt
license: apache-2.0
base_model: openai/whisper-base
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_16_0
metrics:
- wer
model-index:
- name: Whisper Base using Common Voice 16 (pt)
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Mozilla Common Voices - 16.0 - Portuguese
type: mozilla-foundation/common_voice_16_0
config: pt
split: test
args: pt
metrics:
- name: Wer
type: wer
value: 25.436328377504847
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Base using Common Voice 16 (pt)
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the Mozilla Common Voices - 16.0 - Portuguese dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3552
- Wer: 25.4363
- Wer Normalized: 19.4668
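A minimal transcription sketch with the 🤗 `pipeline` API (the audio file path is illustrative):
```python
from transformers import pipeline

# Portuguese speech-to-text with the fine-tuned Whisper checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="thiagobarbosa/whisper-base-common-voice-16-pt-v6",
)
print(asr("sample_pt.wav")["text"])  # path to a local audio file (assumption)
```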
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Wer Normalized |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:--------------:|
| 0.6085 | 0.19 | 500 | 0.4465 | 32.1833 | 25.3383 |
| 0.4624 | 0.37 | 1000 | 0.4131 | 28.9867 | 22.8488 |
| 0.4375 | 0.56 | 1500 | 0.3936 | 27.8135 | 21.3817 |
| 0.4372 | 0.74 | 2000 | 0.3784 | 27.5695 | 21.7171 |
| 0.4704 | 0.93 | 2500 | 0.3630 | 26.1167 | 20.5133 |
| 0.2013 | 1.11 | 3000 | 0.3600 | 25.5462 | 19.7750 |
| 0.2261 | 1.3 | 3500 | 0.3570 | 25.5010 | 19.5181 |
| 0.2118 | 1.48 | 4000 | 0.3552 | 25.4363 | 19.4668 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.1
- Datasets 2.16.1
- Tokenizers 0.15.0
| yunconglong/MoE_13B_DPO | yunconglong | 2024-01-28T06:49:29Z | 4,237 | 6 | transformers | ["transformers", "safetensors", "mixtral", "text-generation", "moe", "DPO", "RL-TUNED", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-01-28T01:19:24Z |
---
license: other
tags:
- moe
- DPO
- RL-TUNED
---
* Trained with the [DPO Trainer](https://huggingface.co/docs/trl/main/en/dpo_trainer) on the Intel/orca_dpo_pairs dataset to improve [yunconglong/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B](https://huggingface.co/yunconglong/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B).
```
DPO Trainer
TRL supports the DPO Trainer for training language models from preference data, as described in the paper Direct Preference Optimization: Your Language Model is Secretly a Reward Model by Rafailov et al., 2023.
```
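A minimal sketch of that setup with TRL's `DPOTrainer` (the hyperparameters and column rename below are illustrative assumptions, not the exact training recipe):
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "yunconglong/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Intel/orca_dpo_pairs ships "question"/"chosen"/"rejected" columns;
# DPOTrainer expects the prompt column to be named "prompt".
dataset = load_dataset("Intel/orca_dpo_pairs", split="train")
dataset = dataset.rename_column("question", "prompt")

trainer = DPOTrainer(
    model=model,
    ref_model=None,  # None -> TRL uses a frozen copy of `model` as reference
    args=TrainingArguments(output_dir="moe-13b-dpo", per_device_train_batch_size=1),
    train_dataset=dataset,
    tokenizer=tokenizer,
    beta=0.1,  # strength of the KL penalty toward the reference model
)
trainer.train()
```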
| brightlee6/test_trainer | brightlee6 | 2024-01-28T06:48:24Z | 175 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2024-01-28T06:47:57Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: test_trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_trainer
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0037
- Accuracy: 0.57
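A minimal inference sketch with the 🤗 `pipeline` API (since the training dataset is unknown, the predicted labels are the generic `LABEL_i` ids):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="brightlee6/test_trainer")
print(clf("This is a test sentence."))  # e.g. [{'label': 'LABEL_2', 'score': ...}]
```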
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 125 | 1.5580 | 0.314 |
| No log | 2.0 | 250 | 1.0432 | 0.546 |
| No log | 3.0 | 375 | 1.0037 | 0.57 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
| Subhamoy12/my-pet-cat-xzr | Subhamoy12 | 2024-01-28T06:47:15Z | 0 | 2 | diffusers | ["diffusers", "safetensors", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us"] | text-to-image | 2024-01-28T06:43:20Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Cat-XZR Dreambooth model trained by Subhamoy12 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 38700323045
Sample pictures of this concept:

| stablediffusionapi/kuronekoanimemixv10 | stablediffusionapi | 2024-01-28T06:47:00Z | 30 | 1 | diffusers | ["diffusers", "modelslab.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us"] | text-to-image | 2024-01-28T06:45:29Z |
---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# Kuroneko_animemix_v10 API Inference

## Get API Key
Get an API key from [ModelsLab API](http://modelslab.com); no payment needed.
Replace the key in the code below and change **model_id** to "kuronekoanimemixv10".
Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://modelslab.com/docs)
Try model for free: [Generate Images](https://modelslab.com/models/kuronekoanimemixv10)
Model link: [View model](https://modelslab.com/models/kuronekoanimemixv10)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "kuronekoanimemixv10",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use coupon code **DMGG0RBN** to get 25% off.
| jiandong/crimson-embedding-v1.5 | jiandong | 2024-01-28T06:36:30Z | 47 | 1 | sentence-transformers | ["sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "dataset:jiandong/crimson-embedding-dataset", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us"] | feature-extraction | 2024-01-28T06:05:48Z |
---
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
datasets:
- jiandong/crimson-embedding-dataset
---
# jiandong/crimson-embedding-v1.5
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('jiandong/crimson-embedding-v1.5')
embeddings = model.encode(sentences)
print(embeddings)
```
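Because the model L2-normalizes its embeddings (see the architecture below), cosine similarity is a natural way to compare them; a short sketch with illustrative sentences:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('jiandong/crimson-embedding-v1.5')
emb = model.encode(["How do I reset my password?", "Password reset instructions"])
print(util.cos_sim(emb[0], emb[1]))  # cosine similarity of the two sentences
```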
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=jiandong/crimson-embedding-v1.5)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 3898 with parameters:
```
{'batch_size': 10, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 50,
"evaluator": "sentence_transformers.evaluation.InformationRetrievalEvaluator.InformationRetrievalEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1169,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
| yoshinori-sano/bert-base-japanese-v3-jnli-v1 | yoshinori-sano | 2024-01-28T06:32:58Z | 120 | 0 | transformers | ["transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2024-01-28T06:32:33Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| slapula/commonvoice_be_tts_male_1 | slapula | 2024-01-28T06:17:15Z | 2 | 1 | transformers | ["transformers", "be", "dataset:mozilla-foundation/common_voice_16_0", "license:mit", "endpoints_compatible", "region:us"] | null | 2024-01-28T05:45:54Z |
---
license: mit
datasets:
- mozilla-foundation/common_voice_16_0
language:
- be
---
# GlowTTS + HifiGAN Male Belarusian Voice #1
This is my third attempt at training a Belarusian voice using [Coqui TTS](https://docs.coqui.ai/en/dev/index.html) and Mozilla's [CommonVoice](https://commonvoice.mozilla.org/en) dataset. This model was developed based on the [excellent recipe](https://github.com/coqui-ai/TTS/tree/dev/recipes/bel-alex73) provided by bel-alex73. For this particular model, I tweaked the search results to find single speakers with over 30 hours of audio and selected speakers based on clarity and relatively slow speaking cadence. This was a manual selection process that involved tweaking bel-alex73's `choose_speaker.ipynb` notebook to show/process more than just the top-ranked speaker.
This model is generated from the following client_id: 235555b6d6c6b4d882a5a0e6160f245c03e61d266c112dc3cecaeb7bcf9802d70be375ffaf9590dd7b24e95284ce06ee295da529cebd9c67f29db31cb8f092cb
I am not a native speaker of Belarusian and I am doing this to assist in my language learning efforts. I am open to any and all feedback (esp. from native speakers) so feel free to post questions/comments.
## Synthesizing text to speech
Input text needs to be phonemized for this model to process the speech correctly. This process is documented in [bel-alex73's README](https://github.com/coqui-ai/TTS/tree/dev/recipes/bel-alex73#prepare-to-training---locally).
```
tts --text "<phonemes>" --out_path output.wav \
--config_path config.json \
--model_path best_model.pth \
--vocoder_config_path vocoder_config.json \
--vocoder_path vocoder_best_model.pth
```
| Crystalcareai/CrystalMistral_7bv1 | Crystalcareai | 2024-01-28T06:08:02Z | 4 | 0 | transformers | ["transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "Crystalcareai/CrystalMistral_7b_v.03", "Crystalcareai/CrystalMistral_7b_v.04", "conversational", "base_model:Crystalcareai/CrystalMistral_7b_v.03", "base_model:merge:Crystalcareai/CrystalMistral_7b_v.03", "base_model:Crystalcareai/CrystalMistral_7b_v.04", "base_model:merge:Crystalcareai/CrystalMistral_7b_v.04", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-01-27T06:37:03Z |
---
tags:
- merge
- mergekit
- lazymergekit
- Crystalcareai/CrystalMistral_7b_v.03
- Crystalcareai/CrystalMistral_7b_v.04
base_model:
- Crystalcareai/CrystalMistral_7b_v.03
- Crystalcareai/CrystalMistral_7b_v.04
---
# CrystalMistral_7bv1
CrystalMistral_7bv1 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Crystalcareai/CrystalMistral_7b_v.03](https://huggingface.co/Crystalcareai/CrystalMistral_7b_v.03)
* [Crystalcareai/CrystalMistral_7b_v.04](https://huggingface.co/Crystalcareai/CrystalMistral_7b_v.04)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: Crystalcareai/CrystalMistral_7b_v.03
layer_range: [0, 32]
- model: Crystalcareai/CrystalMistral_7b_v.04
layer_range: [0, 32]
merge_method: slerp
base_model: Crystalcareai/CrystalMistral_7b_v.04
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Crystalcareai/CrystalMistral_7bv1"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
| Kwabs-10/Llama-2-7b-chat-finetune | Kwabs-10 | 2024-01-28T05:44:50Z | 6 | 0 | transformers | ["transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-01-28T05:33:42Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| asun17904/gpt2-adviter | asun17904 | 2024-01-28T05:35:35Z | 1 | 0 | pytorch | ["pytorch", "gpt2", "en", "license:mit", "region:us"] | null | 2024-01-27T22:54:12Z |
---
language: en
license: mit
library_name: pytorch
---
# Adversarial Training Through Iterations
Trainer Hyperparameters:
- `lr` = 5e-05
- `per_device_batch_size` = 8
- `gradient_accumulation_steps` = 2
- `weight_decay` = 1e-09
- `seed` = 42
Extended Logs:
|eval_loss|eval_accuracy|epoch|
|--|--|--|
|0.370|0.942|1.0|
|0.375|0.937|2.0|
|0.371|0.942|3.0|
|0.370|0.942|4.0|
|0.367|0.945|5.0|
|0.366|0.946|6.0|
|0.367|0.945|7.0|
|0.367|0.945|8.0|
|0.366|0.947|9.0|
| juliansmidek/donut_test_docvqa | juliansmidek | 2024-01-28T05:30:52Z | 33 | 0 | transformers | ["transformers", "safetensors", "vision-encoder-decoder", "image-text-to-text", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | image-text-to-text | 2024-01-27T05:52:29Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| gustavokpc/bert-base-portuguese-cased_LRATE_5e-06_EPOCHS_6 | gustavokpc | 2024-01-28T05:28:29Z | 47 | 0 | transformers | ["transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "base_model:neuralmind/bert-base-portuguese-cased", "base_model:finetune:neuralmind/bert-base-portuguese-cased", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2024-01-28T03:42:52Z |
---
license: mit
base_model: neuralmind/bert-base-portuguese-cased
tags:
- generated_from_keras_callback
model-index:
- name: gustavokpc/bert-base-portuguese-cased_LRATE_5e-06_EPOCHS_6
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# gustavokpc/bert-base-portuguese-cased_LRATE_5e-06_EPOCHS_6
This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0811
- Train Accuracy: 0.9728
- Train F1 M: 0.5572
- Train Precision M: 0.4036
- Train Recall M: 0.9646
- Validation Loss: 0.1804
- Validation Accuracy: 0.9387
- Validation F1 M: 0.5549
- Validation Precision M: 0.3999
- Validation Recall M: 0.9504
- Epoch: 4
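The repo ships TensorFlow weights (note the `tf` tag), so a minimal inference sketch pins the TF backend (the example sentence is illustrative):
```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="gustavokpc/bert-base-portuguese-cased_LRATE_5e-06_EPOCHS_6",
    framework="tf",  # load the TensorFlow checkpoint
)
print(clf("Exemplo de frase em português."))
```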
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-06, 'decay_steps': 4548, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train F1 M | Train Precision M | Train Recall M | Validation Loss | Validation Accuracy | Validation F1 M | Validation Precision M | Validation Recall M | Epoch |
|:----------:|:--------------:|:----------:|:-----------------:|:--------------:|:---------------:|:-------------------:|:---------------:|:----------------------:|:-------------------:|:-----:|
| 0.2887 | 0.8821 | 0.4544 | 0.3418 | 0.7393 | 0.1871 | 0.9321 | 0.5574 | 0.4039 | 0.9455 | 0 |
| 0.1571 | 0.9439 | 0.5463 | 0.3992 | 0.9299 | 0.1740 | 0.9321 | 0.5596 | 0.4040 | 0.9542 | 1 |
| 0.1185 | 0.9587 | 0.5529 | 0.4020 | 0.9480 | 0.1714 | 0.9367 | 0.5588 | 0.4030 | 0.9555 | 2 |
| 0.0950 | 0.9662 | 0.5572 | 0.4033 | 0.9621 | 0.1775 | 0.9373 | 0.5604 | 0.4033 | 0.9607 | 3 |
| 0.0811 | 0.9728 | 0.5572 | 0.4036 | 0.9646 | 0.1804 | 0.9387 | 0.5549 | 0.3999 | 0.9504 | 4 |
### Framework versions
- Transformers 4.34.1
- TensorFlow 2.10.0
- Datasets 2.14.5
- Tokenizers 0.14.1
| lokesh2002/t5-small-finetuned-mydata | lokesh2002 | 2024-01-28T05:24:24Z | 90 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text2text-generation | 2024-01-27T09:09:16Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-finetuned-mydata
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-mydata
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7077
- Rouge1: 41.6567
- Rouge2: 23.7942
- Rougel: 41.0101
- Rougelsum: 41.5048
- Gen Len: 7.6027
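A minimal inference sketch with the 🤗 `pipeline` API (the training data is unknown, so the input format below is only illustrative):
```python
from transformers import pipeline

gen = pipeline("text2text-generation", model="lokesh2002/t5-small-finetuned-mydata")
print(gen("Your input text here", max_new_tokens=32))
```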
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 19 | 4.9039 | 20.0474 | 7.234 | 18.2098 | 17.9517 | 10.9589 |
| No log | 2.0 | 38 | 4.5878 | 23.0871 | 8.221 | 21.7521 | 21.6804 | 11.3425 |
| No log | 3.0 | 57 | 4.3925 | 23.4492 | 8.8479 | 22.0822 | 22.1146 | 12.0548 |
| No log | 4.0 | 76 | 4.2184 | 26.0031 | 9.4235 | 24.6843 | 24.6388 | 12.6438 |
| No log | 5.0 | 95 | 4.0619 | 26.7979 | 9.548 | 25.7363 | 25.7928 | 12.8219 |
| No log | 6.0 | 114 | 3.9334 | 26.9541 | 9.7913 | 25.9349 | 25.9444 | 12.726 |
| No log | 7.0 | 133 | 3.8185 | 28.0578 | 10.9266 | 26.9035 | 26.746 | 12.1507 |
| No log | 8.0 | 152 | 3.7113 | 28.296 | 10.9928 | 26.6577 | 26.446 | 12.0822 |
| No log | 9.0 | 171 | 3.6335 | 30.3027 | 11.4952 | 28.313 | 28.2952 | 11.7397 |
| No log | 10.0 | 190 | 3.5584 | 30.8405 | 11.0987 | 28.7148 | 28.8457 | 11.0822 |
| No log | 11.0 | 209 | 3.4895 | 30.2533 | 10.9185 | 28.3191 | 28.4837 | 11.0685 |
| No log | 12.0 | 228 | 3.4216 | 30.3158 | 11.3392 | 28.3347 | 28.5197 | 10.7534 |
| No log | 13.0 | 247 | 3.3705 | 30.8803 | 12.1903 | 29.3055 | 29.4952 | 10.4521 |
| No log | 14.0 | 266 | 3.3190 | 31.0433 | 12.2378 | 29.4309 | 29.6068 | 9.9315 |
| No log | 15.0 | 285 | 3.2699 | 31.8936 | 12.9061 | 30.1597 | 30.6298 | 9.6849 |
| No log | 16.0 | 304 | 3.2192 | 33.4292 | 13.8997 | 31.779 | 32.0884 | 9.1096 |
| No log | 17.0 | 323 | 3.1740 | 33.729 | 14.1086 | 32.0316 | 32.315 | 9.0411 |
| No log | 18.0 | 342 | 3.1394 | 36.7725 | 17.2736 | 35.2518 | 35.7599 | 8.7671 |
| No log | 19.0 | 361 | 3.1014 | 36.4014 | 17.4106 | 34.8341 | 35.3403 | 8.7397 |
| No log | 20.0 | 380 | 3.0691 | 36.6132 | 17.4341 | 35.0468 | 35.5194 | 8.5616 |
| No log | 21.0 | 399 | 3.0368 | 37.4634 | 18.3921 | 35.8956 | 36.3709 | 8.4658 |
| No log | 22.0 | 418 | 3.0071 | 37.1796 | 18.0799 | 35.6085 | 36.102 | 8.4247 |
| No log | 23.0 | 437 | 2.9806 | 37.6934 | 19.5239 | 36.4692 | 36.9152 | 8.2055 |
| No log | 24.0 | 456 | 2.9535 | 38.3271 | 20.1594 | 37.0697 | 37.6403 | 8.0959 |
| No log | 25.0 | 475 | 2.9325 | 38.5833 | 20.7699 | 37.3922 | 37.9437 | 8.1781 |
| No log | 26.0 | 494 | 2.9105 | 38.5591 | 21.1086 | 37.8183 | 38.2351 | 8.137 |
| 3.6364 | 27.0 | 513 | 2.8892 | 38.1741 | 20.492 | 37.4062 | 37.765 | 7.863 |
| 3.6364 | 28.0 | 532 | 2.8716 | 38.0978 | 20.3115 | 37.0709 | 37.3916 | 7.7808 |
| 3.6364 | 29.0 | 551 | 2.8541 | 38.7918 | 20.6816 | 37.4011 | 37.7503 | 7.8219 |
| 3.6364 | 30.0 | 570 | 2.8392 | 38.9202 | 20.7127 | 37.5863 | 37.8795 | 7.863 |
| 3.6364 | 31.0 | 589 | 2.8256 | 38.6036 | 21.0085 | 37.8739 | 38.1613 | 7.6164 |
| 3.6364 | 32.0 | 608 | 2.8122 | 39.0417 | 21.677 | 38.2494 | 38.6465 | 7.726 |
| 3.6364 | 33.0 | 627 | 2.7994 | 39.2329 | 21.7591 | 38.5074 | 38.8281 | 7.6986 |
| 3.6364 | 34.0 | 646 | 2.7862 | 40.9608 | 23.3487 | 39.9721 | 40.4826 | 7.6301 |
| 3.6364 | 35.0 | 665 | 2.7752 | 40.3292 | 23.0376 | 39.6256 | 40.123 | 7.6986 |
| 3.6364 | 36.0 | 684 | 2.7658 | 40.3589 | 22.9372 | 39.6409 | 40.1315 | 7.6438 |
| 3.6364 | 37.0 | 703 | 2.7562 | 40.6065 | 22.9372 | 39.8863 | 40.4343 | 7.6575 |
| 3.6364 | 38.0 | 722 | 2.7495 | 40.9141 | 22.9372 | 40.1929 | 40.7218 | 7.6575 |
| 3.6364 | 39.0 | 741 | 2.7425 | 40.5265 | 22.9372 | 39.7735 | 40.3237 | 7.6849 |
| 3.6364 | 40.0 | 760 | 2.7367 | 40.5265 | 22.9372 | 39.7735 | 40.3237 | 7.6849 |
| 3.6364 | 41.0 | 779 | 2.7308 | 40.5265 | 22.9372 | 39.7735 | 40.3237 | 7.6849 |
| 3.6364 | 42.0 | 798 | 2.7264 | 41.0514 | 22.9372 | 40.3332 | 40.8709 | 7.6986 |
| 3.6364 | 43.0 | 817 | 2.7233 | 41.0514 | 22.9372 | 40.3332 | 40.8709 | 7.6986 |
| 3.6364 | 44.0 | 836 | 2.7193 | 41.4655 | 23.3863 | 40.7719 | 41.274 | 7.7123 |
| 3.6364 | 45.0 | 855 | 2.7164 | 41.6567 | 23.7942 | 41.0101 | 41.5048 | 7.6027 |
| 3.6364 | 46.0 | 874 | 2.7135 | 41.6567 | 23.7942 | 41.0101 | 41.5048 | 7.6027 |
| 3.6364 | 47.0 | 893 | 2.7108 | 41.6567 | 23.7942 | 41.0101 | 41.5048 | 7.6027 |
| 3.6364 | 48.0 | 912 | 2.7092 | 41.6567 | 23.7942 | 41.0101 | 41.5048 | 7.6027 |
| 3.6364 | 49.0 | 931 | 2.7081 | 41.6567 | 23.7942 | 41.0101 | 41.5048 | 7.6027 |
| 3.6364 | 50.0 | 950 | 2.7077 | 41.6567 | 23.7942 | 41.0101 | 41.5048 | 7.6027 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
| felixbrock/amazing | felixbrock | 2024-01-28T05:14:15Z | 0 | 0 | transformers | ["transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2024-01-28T05:14:06Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
zhimeng/sft_openassistant-guanaco
|
zhimeng
| 2024-01-28T05:13:48Z | 4 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:facebook/opt-350m",
"base_model:adapter:facebook/opt-350m",
"license:other",
"region:us"
] | null | 2024-01-27T08:35:56Z |
---
license: other
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: facebook/opt-350m
model-index:
- name: sft_openassistant-guanaco
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft_openassistant-guanaco
This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on the generator dataset.
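Since this repo ships PEFT adapter weights rather than a full model, a quick way to try it is PEFT's auto class, which loads the facebook/opt-350m base and applies the adapter on top. This is a minimal sketch, not part of the original card; the prompt format is an assumption based on the openassistant-guanaco dataset:
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Downloads facebook/opt-350m and applies this adapter on top of it
model = AutoPeftModelForCausalLM.from_pretrained("zhimeng/sft_openassistant-guanaco")
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")

# Guanaco-style prompt (assumed; see note above)
prompt = "### Human: What is supervised fine-tuning?### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```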
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.41e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.37.1
- Pytorch 2.0.0+cu117
- Datasets 2.16.1
- Tokenizers 0.15.1
|
SC44/Mistral-7B-private-spnf
|
SC44
| 2024-01-28T04:59:09Z | 0 | 0 | null |
[
"safetensors",
"arxiv:1910.09700",
"license:cc-by-4.0",
"region:us"
] | null | 2024-01-28T04:56:14Z |
---
license: cc-by-4.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
SC44/Mistral-7B-private-spef
|
SC44
| 2024-01-28T04:59:00Z | 0 | 0 | null |
[
"safetensors",
"arxiv:1910.09700",
"license:cc-by-4.0",
"region:us"
] | null | 2024-01-28T04:57:55Z |
---
license: cc-by-4.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
casque/refi2
|
casque
| 2024-01-28T04:50:46Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-01-28T04:50:24Z |
---
license: creativeml-openrail-m
---
|
SC56/Mistral-7B-private-spef
|
SC56
| 2024-01-28T04:48:22Z | 0 | 0 | null |
[
"safetensors",
"arxiv:1910.09700",
"license:cc-by-4.0",
"region:us"
] | null | 2024-01-28T04:46:48Z |
---
license: cc-by-4.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nick911/Tanjiro_asd
|
nick911
| 2024-01-28T04:44:39Z | 0 | 0 | null |
[
"safetensors",
"license:mit",
"region:us"
] | null | 2024-01-27T18:59:24Z |
---
license: mit
inference: true
---
|
sevvalkapcak/newModel2
|
sevvalkapcak
| 2024-01-28T04:42:59Z | 24 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-28T01:06:33Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: sevvalkapcak/newModel2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# sevvalkapcak/newModel2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0138
- Validation Loss: 0.6631
- Train Accuracy: 0.9225
- Epoch: 89
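The card leaves the task details unstated, but the tags mark this as a TensorFlow DistilBERT text-classification checkpoint, so standard TF inference should apply. A hedged sketch; the input text is a placeholder and the label meanings are undocumented:
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "sevvalkapcak/newModel2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("example input text", return_tensors="tf")
logits = model(**inputs).logits
predicted_class = int(tf.argmax(logits, axis=-1)[0])
print(predicted_class)  # index into the model's (undocumented) label set
```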
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 5e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.2465 | 0.2029 | 0.9085 | 0 |
| 0.1354 | 0.1302 | 0.939 | 1 |
| 0.1121 | 0.1588 | 0.934 | 2 |
| 0.0945 | 0.1551 | 0.937 | 3 |
| 0.0815 | 0.1696 | 0.939 | 4 |
| 0.0778 | 0.1647 | 0.932 | 5 |
| 0.0522 | 0.2356 | 0.931 | 6 |
| 0.0444 | 0.2861 | 0.9335 | 7 |
| 0.0329 | 0.2144 | 0.9355 | 8 |
| 0.0290 | 0.2548 | 0.935 | 9 |
| 0.0222 | 0.2866 | 0.93 | 10 |
| 0.0256 | 0.2787 | 0.9385 | 11 |
| 0.0267 | 0.2764 | 0.941 | 12 |
| 0.0201 | 0.2888 | 0.9315 | 13 |
| 0.0221 | 0.2737 | 0.934 | 14 |
| 0.0174 | 0.4403 | 0.93 | 15 |
| 0.0170 | 0.2836 | 0.932 | 16 |
| 0.0214 | 0.3033 | 0.9375 | 17 |
| 0.0125 | 0.3894 | 0.934 | 18 |
| 0.0271 | 0.3687 | 0.9305 | 19 |
| 0.0154 | 0.3817 | 0.9305 | 20 |
| 0.0149 | 0.4736 | 0.93 | 21 |
| 0.0196 | 0.4435 | 0.9325 | 22 |
| 0.0124 | 0.4873 | 0.929 | 23 |
| 0.0157 | 0.4008 | 0.932 | 24 |
| 0.0153 | 0.4074 | 0.931 | 25 |
| 0.0176 | 0.3996 | 0.9295 | 26 |
| 0.0160 | 0.3652 | 0.9355 | 27 |
| 0.0081 | 0.4446 | 0.934 | 28 |
| 0.0098 | 0.5249 | 0.934 | 29 |
| 0.0151 | 0.4112 | 0.937 | 30 |
| 0.0124 | 0.4888 | 0.929 | 31 |
| 0.0146 | 0.5022 | 0.9325 | 32 |
| 0.0130 | 0.5585 | 0.9305 | 33 |
| 0.0102 | 0.4304 | 0.935 | 34 |
| 0.0158 | 0.4239 | 0.933 | 35 |
| 0.0156 | 0.4849 | 0.93 | 36 |
| 0.0153 | 0.5097 | 0.9245 | 37 |
| 0.0135 | 0.4689 | 0.934 | 38 |
| 0.0178 | 0.4578 | 0.9285 | 39 |
| 0.0124 | 0.4083 | 0.9275 | 40 |
| 0.0106 | 0.4946 | 0.926 | 41 |
| 0.0098 | 0.4908 | 0.927 | 42 |
| 0.0131 | 0.5604 | 0.928 | 43 |
| 0.0143 | 0.4226 | 0.9315 | 44 |
| 0.0105 | 0.5664 | 0.9245 | 45 |
| 0.0189 | 0.5121 | 0.925 | 46 |
| 0.0148 | 0.5259 | 0.9245 | 47 |
| 0.0090 | 0.4567 | 0.9295 | 48 |
| 0.0156 | 0.4633 | 0.926 | 49 |
| 0.0128 | 0.5222 | 0.9295 | 50 |
| 0.0118 | 0.5461 | 0.921 | 51 |
| 0.0172 | 0.4626 | 0.927 | 52 |
| 0.0129 | 0.5266 | 0.922 | 53 |
| 0.0159 | 0.5203 | 0.925 | 54 |
| 0.0106 | 0.5360 | 0.9265 | 55 |
| 0.0158 | 0.4766 | 0.9305 | 56 |
| 0.0106 | 0.5630 | 0.926 | 57 |
| 0.0142 | 0.6162 | 0.922 | 58 |
| 0.0137 | 0.5518 | 0.916 | 59 |
| 0.0083 | 0.6281 | 0.9155 | 60 |
| 0.0071 | 0.6263 | 0.9245 | 61 |
| 0.0116 | 0.6166 | 0.9235 | 62 |
| 0.0162 | 0.5217 | 0.9195 | 63 |
| 0.0158 | 0.6366 | 0.9215 | 64 |
| 0.0120 | 0.5511 | 0.9245 | 65 |
| 0.0093 | 0.4895 | 0.9225 | 66 |
| 0.0094 | 0.5207 | 0.9255 | 67 |
| 0.0067 | 0.6252 | 0.9275 | 68 |
| 0.0058 | 0.6934 | 0.9235 | 69 |
| 0.0055 | 0.6577 | 0.928 | 70 |
| 0.0073 | 0.5865 | 0.9255 | 71 |
| 0.0336 | 0.4875 | 0.9175 | 72 |
| 0.0177 | 0.5256 | 0.923 | 73 |
| 0.0143 | 0.5042 | 0.917 | 74 |
| 0.0076 | 0.6803 | 0.9225 | 75 |
| 0.0114 | 0.5571 | 0.9205 | 76 |
| 0.0118 | 0.5649 | 0.9235 | 77 |
| 0.0147 | 0.5592 | 0.9245 | 78 |
| 0.0109 | 0.6044 | 0.9195 | 79 |
| 0.0095 | 0.6940 | 0.921 | 80 |
| 0.0139 | 0.6246 | 0.9245 | 81 |
| 0.0145 | 0.7057 | 0.917 | 82 |
| 0.0147 | 0.6455 | 0.9155 | 83 |
| 0.0100 | 0.6044 | 0.922 | 84 |
| 0.0074 | 0.6786 | 0.92 | 85 |
| 0.0093 | 0.7300 | 0.9125 | 86 |
| 0.0152 | 0.6264 | 0.9205 | 87 |
| 0.0115 | 0.6208 | 0.915 | 88 |
| 0.0138 | 0.6631 | 0.9225 | 89 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.1
|
gotutiyan/gec-t5-large-clang8
|
gotutiyan
| 2024-01-28T04:35:51Z | 127 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"clang8",
"grammatical error correction",
"en",
"arxiv:2106.03830",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-28T01:11:33Z |
---
language: en
license: cc-by-nc-sa-4.0
tags:
- clang8
- grammatical error correction
---
A reproduction of training T5 on cLang-8, corresponding to Table 4 of the following paper:
[A Simple Recipe for Multilingual Grammatical Error Correction](https://arxiv.org/abs/2106.03830).
The code and the performance on GEC benchmarks are available at https://github.com/gotutiyan/gec-t5.
As with the cLang-8 corpus and the original Lang-8 corpus, the pre-trained models are distributed for research and educational purposes only.
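For quick inference, the checkpoint works with the standard transformers seq2seq API. The sketch below assumes the model takes the raw, possibly ungrammatical sentence as input with no task prefix, as in the original cLang-8 recipe; the example sentence is made up:
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_id = "gotutiyan/gec-t5-large-clang8"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

src = "This sentences contain grammatical error ."
inputs = tokenizer(src, return_tensors="pt")
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # corrected sentence
```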
|
zorobin/mistral-class-shishya-all-hal-7b-ep3
|
zorobin
| 2024-01-28T04:35:23Z | 47 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-28T04:35:21Z |
---
library_name: transformers
license: llama2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
luffycodes/mistral-class-shishya-all-hal-7b-ep3
|
luffycodes
| 2024-01-28T04:32:12Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-28T04:32:12Z |
---
library_name: transformers
license: llama2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
namirocks/mistral-class-shishya-all-hal-7b-ep3
|
namirocks
| 2024-01-28T04:31:48Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-28T04:25:26Z |
---
library_name: transformers
license: llama2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
luffycodes/mistral-class-shishya-7b-ep3
|
luffycodes
| 2024-01-28T04:30:29Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-28T04:30:26Z |
---
library_name: transformers
license: llama2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
bartowski/WestLake-7B-v2-laser-truthy-dpo-exl2
|
bartowski
| 2024-01-28T04:28:04Z | 6 | 0 |
transformers
|
[
"transformers",
"text-generation",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-28T04:11:32Z |
---
library_name: transformers
license: apache-2.0
quantized_by: bartowski
pipeline_tag: text-generation
---
## Exllama v2 Quantizations of WestLake-7B-v2-laser-truthy-dpo
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.12">turboderp's ExLlamaV2 v0.0.12</a> for quantization.
# The "main" branch only contains the measurement.json; download one of the other branches for the model (see below)
Each branch contains an individual bits-per-weight quantization, with the main branch containing only the measurement.json for further conversions.
Original model: https://huggingface.co/macadeliccc/WestLake-7B-v2-laser-truthy-dpo
| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description |
| ----- | ---- | ------- | ------ | ------ | ------ | ------------ |
| [8_0](https://huggingface.co/Bartowski/WestLake-7B-v2-laser-truthy-dpo-exl2/tree/8_0) | 8.0 | 8.0 | 8.4 GB | 9.8 GB | 11.8 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/Bartowski/WestLake-7B-v2-laser-truthy-dpo-exl2/tree/6_5) | 6.5 | 8.0 | 7.2 GB | 8.6 GB | 10.6 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
| [5_0](https://huggingface.co/Bartowski/WestLake-7B-v2-laser-truthy-dpo-exl2/tree/5_0) | 5.0 | 6.0 | 6.0 GB | 7.4 GB | 9.4 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. |
| [4_25](https://huggingface.co/Bartowski/WestLake-7B-v2-laser-truthy-dpo-exl2/tree/4_25) | 4.25 | 6.0 | 5.3 GB | 6.7 GB | 8.7 GB | GPTQ equivalent bits per weight, slightly higher quality. |
| [3_5](https://huggingface.co/Bartowski/WestLake-7B-v2-laser-truthy-dpo-exl2/tree/3_5) | 3.5 | 6.0 | 4.7 GB | 6.1 GB | 8.1 GB | Lower quality, only use if you have to. |
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/WestLake-7B-v2-laser-truthy-dpo-exl2 WestLake-7B-v2-laser-truthy-dpo-exl2-6_5
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download the `main` branch (only useful if you only care about measurement.json) to a folder called `WestLake-7B-v2-laser-truthy-dpo-exl2`:
```shell
mkdir WestLake-7B-v2-laser-truthy-dpo-exl2
huggingface-cli download bartowski/WestLake-7B-v2-laser-truthy-dpo-exl2 --local-dir WestLake-7B-v2-laser-truthy-dpo-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
Linux:
```shell
mkdir WestLake-7B-v2-laser-truthy-dpo-exl2-6_5
huggingface-cli download bartowski/WestLake-7B-v2-laser-truthy-dpo-exl2 --revision 6_5 --local-dir WestLake-7B-v2-laser-truthy-dpo-exl2-6_5 --local-dir-use-symlinks False
```
Windows (which apparently doesn't like _ in folders sometimes?):
```shell
mkdir WestLake-7B-v2-laser-truthy-dpo-exl2-6.5
huggingface-cli download bartowski/WestLake-7B-v2-laser-truthy-dpo-exl2 --revision 6_5 --local-dir WestLake-7B-v2-laser-truthy-dpo-exl2-6.5 --local-dir-use-symlinks False
```
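After downloading a branch, the quant can be loaded with the exllamav2 Python API. A minimal generation sketch, assuming the v0.0.12-era API; the folder name matches the 6_5 download above and the sampling settings are illustrative:
```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "WestLake-7B-v2-laser-truthy-dpo-exl2-6_5"
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # split layers across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7
settings.top_p = 0.95

print(generator.generate_simple("Once upon a time,", settings, num_tokens=64))
```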
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
Spanicin/Fulcrum_Aura2
|
Spanicin
| 2024-01-28T04:25:10Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"mistralai/Mistral-7B-v0.1",
"HuggingFaceH4/zephyr-7b-alpha",
"cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser",
"ai4bharat/Airavata",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-28T04:20:42Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- mistralai/Mistral-7B-v0.1
- HuggingFaceH4/zephyr-7b-alpha
- cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser
- ai4bharat/Airavata
---
# Fulcrum_Aura2
Fulcrum_Aura2 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
* [HuggingFaceH4/zephyr-7b-alpha](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha)
* [cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser](https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser)
* [ai4bharat/Airavata](https://huggingface.co/ai4bharat/Airavata)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-v0.1
layer_range: [0, 32]
- model: HuggingFaceH4/zephyr-7b-alpha
layer_range: [0, 32]
parameters:
density: 0.53
weight: 0.4
- model: cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser
layer_range: [0, 32]
parameters:
density: 0.53
weight: 0.4
- model: ai4bharat/Airavata
layer_range: [0, 32]
parameters:
density: 0.53
weight: 0.4
merge_method: dare_linear
base_model: mistralai/Mistral-7B-v0.1
parameters:
int8_mask: true
dtype: bfloat16
```
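For reference, a config like this can be re-run through mergekit itself. The sketch below assumes mergekit's Python entry point (`MergeConfiguration` / `run_merge`) and that the YAML above is saved as `config.yaml`; treat the exact API as an assumption, not part of this card:
```python
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the YAML merge recipe shown above
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Execute the merge and write the result to an output directory
run_merge(
    merge_config,
    "./Fulcrum_Aura2",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),
        copy_tokenizer=True,
    ),
)
```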
## 💻 Usage
```python
# Install dependencies first: pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Spanicin/Fulcrum_Aura2"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
bartowski/HuginnV5.5-12.6B-exl2
|
bartowski
| 2024-01-28T04:12:25Z | 1 | 2 | null |
[
"text-generation",
"license:cc-by-4.0",
"region:us"
] |
text-generation
| 2024-01-28T01:29:29Z |
---
license: cc-by-4.0
quantized_by: bartowski
pipeline_tag: text-generation
---
## Exllama v2 Quantizations of HuginnV5.5-12.6B
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.12">turboderp's ExLlamaV2 v0.0.12</a> for quantization.
# The "main" branch only contains the measurement.json; download one of the other branches for the model (see below)
Each branch contains an individual bits-per-weight quantization, with the main branch containing only the measurement.json for further conversions.
Original model: https://huggingface.co/The-Face-Of-Goonery/HuginnV5.5-12.6B
| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description |
| ----- | ---- | ------- | ------ | ------ | ------ | ------------ |
| [6_5](https://huggingface.co/Bartowski/HuginnV5.5-12.6B-exl2/tree/6_5) | 6.5 | 8.0 | 12.0 GB | 14.7 GB | 18.4 GB | Near unquantized performance at vastly reduced size, **recommended**. |
| [5_0](https://huggingface.co/Bartowski/HuginnV5.5-12.6B-exl2/tree/5_0) | 5.0 | 6.0 | 9.8 GB | 12.4 GB | 16.1 GB | Slightly lower quality vs 6.5. |
| [4_25](https://huggingface.co/Bartowski/HuginnV5.5-12.6B-exl2/tree/4_25) | 4.25 | 6.0 | 8.7 GB | 11.3 GB | 15.0 GB | GPTQ equivalent bits per weight. |
| [3_5](https://huggingface.co/Bartowski/HuginnV5.5-12.6B-exl2/tree/3_5) | 3.5 | 6.0 | 7.6 GB | 10.1 GB | 13.8 GB | Lower quality, not recommended. |
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/HuginnV5.5-12.6B-exl2 HuginnV5.5-12.6B-exl2-6_5
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download the `main` branch (only useful if you only care about measurement.json) to a folder called `HuginnV5.5-12.6B-exl2`:
```shell
mkdir HuginnV5.5-12.6B-exl2
huggingface-cli download bartowski/HuginnV5.5-12.6B-exl2 --local-dir HuginnV5.5-12.6B-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
Linux:
```shell
mkdir HuginnV5.5-12.6B-exl2-6_5
huggingface-cli download bartowski/HuginnV5.5-12.6B-exl2 --revision 6_5 --local-dir HuginnV5.5-12.6B-exl2-6_5 --local-dir-use-symlinks False
```
Windows (which apparently doesn't like _ in folders sometimes?):
```shell
mkdir HuginnV5.5-12.6B-exl2-6.5
huggingface-cli download bartowski/HuginnV5.5-12.6B-exl2 --revision 6_5 --local-dir HuginnV5.5-12.6B-exl2-6.5 --local-dir-use-symlinks False
```
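Once downloaded, the weights can be loaded with the ExLlamaV2 Python API. The following is a minimal generation sketch, not part of the original card — the model directory, sampling settings, and prompt are placeholders:
```python
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Point at the folder created by one of the download commands above.
config = ExLlamaV2Config()
config.model_dir = "HuginnV5.5-12.6B-exl2-6_5"
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # split layers across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8
settings.top_p = 0.9

# Generate 200 new tokens from a placeholder prompt.
print(generator.generate_simple("Once upon a time,", settings, 200))
```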
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
kanishka/smolm-autoreg-bpe-counterfactual-babylm-pipps_and_keys_to_it_all_10k-3e-4
|
kanishka
| 2024-01-28T04:08:30Z | 12 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-27T05:20:30Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: smolm-autoreg-bpe-counterfactual-babylm-pipps_and_keys_to_it_all_10k-3e-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smolm-autoreg-bpe-counterfactual-babylm-pipps_and_keys_to_it_all_10k-3e-4
This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3416
- Accuracy: 0.4114
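For reference, a minimal sampling sketch via the `transformers` pipeline (the prompt is an arbitrary example, not from the training setup):
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="kanishka/smolm-autoreg-bpe-counterfactual-babylm-pipps_and_keys_to_it_all_10k-3e-4",
)
# An arbitrary prompt to inspect the model's continuations.
print(generator("The keys to the cabinet", max_new_tokens=20)[0]["generated_text"])
```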
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 32000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 3.7439 | 1.0 | 18844 | 3.8602 | 0.3475 |
| 3.4436 | 2.0 | 37688 | 3.5370 | 0.3777 |
| 3.2979 | 3.0 | 56532 | 3.3990 | 0.3927 |
| 3.2129 | 4.0 | 75376 | 3.3575 | 0.3992 |
| 3.1532 | 5.0 | 94220 | 3.3300 | 0.4014 |
| 3.1098 | 6.0 | 113064 | 3.3082 | 0.4056 |
| 3.0691 | 7.0 | 131908 | 3.2938 | 0.4069 |
| 3.042 | 8.0 | 150752 | 3.2975 | 0.4077 |
| 3.0098 | 9.0 | 169596 | 3.2770 | 0.4112 |
| 2.9839 | 10.0 | 188440 | 3.2937 | 0.4114 |
| 2.9607 | 11.0 | 207284 | 3.2879 | 0.4114 |
| 2.94 | 12.0 | 226128 | 3.2938 | 0.4115 |
| 2.9154 | 13.0 | 244972 | 3.3142 | 0.4101 |
| 2.8939 | 14.0 | 263816 | 3.2931 | 0.4124 |
| 2.8771 | 15.0 | 282660 | 3.3156 | 0.4114 |
| 2.8566 | 16.0 | 301504 | 3.3241 | 0.4112 |
| 2.8321 | 17.0 | 320348 | 3.3228 | 0.4120 |
| 2.8173 | 18.0 | 339192 | 3.3250 | 0.4116 |
| 2.7989 | 19.0 | 358036 | 3.3380 | 0.4114 |
| 2.7807 | 20.0 | 376880 | 3.3416 | 0.4114 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.14.1
|
Antice41/TheLion
|
Antice41
| 2024-01-28T04:01:57Z | 0 | 0 | null |
[
"arxiv:1910.09700",
"region:us"
] | null | 2024-01-28T04:01:03Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
coke0zero/Reinforce-Pixelcopter-PLE-v0
|
coke0zero
| 2024-01-28T03:52:28Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-28T03:52:25Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 40.40 +/- 34.72
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
TschaettGiPiTi/TschaettGiPiTi
|
TschaettGiPiTi
| 2024-01-28T03:50:32Z | 0 | 0 | null |
[
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2024-01-28T03:50:30Z |
---
license: cc-by-nc-sa-4.0
---
|
karawalla/aqmodel_20240126
|
karawalla
| 2024-01-28T03:48:51Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-01-28T01:19:46Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
haizad/a2c-PandaReachDense-v3
|
haizad
| 2024-01-28T03:42:12Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-28T03:40:09Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.18 +/- 0.11
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
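Until then, a minimal loading sketch under the usual `huggingface_sb3` conventions — the checkpoint filename and the `panda-gym` dependency are assumptions, not confirmed by this card:
```python
import gymnasium as gym
import panda_gym  # assumed: registers PandaReachDense-v3 (pip install panda-gym)
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename assumed to follow the standard <algo>-<env>.zip convention.
checkpoint = load_from_hub(
    repo_id="haizad/a2c-PandaReachDense-v3",
    filename="a2c-PandaReachDense-v3.zip",
)
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v3")
obs, _ = env.reset()
action, _ = model.predict(obs, deterministic=True)
```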
|
graceneutrality/ppo-LunarLander-v2
|
graceneutrality
| 2024-01-28T03:41:38Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-28T03:41:19Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 282.79 +/- 23.65
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
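Until then, a minimal loading sketch under the usual `huggingface_sb3` conventions — the checkpoint filename is an assumption, not confirmed by this card:
```python
import gymnasium as gym  # LunarLander-v2 needs the box2d extra (pip install "gymnasium[box2d]")
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename assumed to follow the standard <algo>-<env>.zip convention.
checkpoint = load_from_hub(
    repo_id="graceneutrality/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, _ = env.reset()
action, _ = model.predict(obs, deterministic=True)
```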
|
Seynabou/whisper-small-wolof
|
Seynabou
| 2024-01-28T03:34:40Z | 61 | 1 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:fleurs",
"base_model:cifope/whisper-small-wolof",
"base_model:finetune:cifope/whisper-small-wolof",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-28T02:52:37Z |
---
license: apache-2.0
base_model: cifope/whisper-small-wolof
tags:
- generated_from_trainer
datasets:
- fleurs
metrics:
- wer
model-index:
- name: whisper-small-wolof
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: fleurs
type: fleurs
config: wo_sn
split: test
args: wo_sn
metrics:
- name: Wer
type: wer
value: 64.90514905149053
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-wolof
This model is a fine-tuned version of [cifope/whisper-small-wolof](https://huggingface.co/cifope/whisper-small-wolof) on the fleurs dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4778
- Wer: 64.9051
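A minimal transcription sketch with the `transformers` pipeline (the audio filename is a placeholder):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Seynabou/whisper-small-wolof")
print(asr("sample_wolof.wav")["text"])  # path to a local audio file
```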
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 1.3863 | 1.4 | 50 | 1.4778 | 64.9051 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
gustavokpc/bert-base-portuguese-cased_LRATE_1e-05_EPOCHS_5
|
gustavokpc
| 2024-01-28T03:15:13Z | 46 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"base_model:neuralmind/bert-base-portuguese-cased",
"base_model:finetune:neuralmind/bert-base-portuguese-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-28T01:35:46Z |
---
license: mit
base_model: neuralmind/bert-base-portuguese-cased
tags:
- generated_from_keras_callback
model-index:
- name: gustavokpc/bert-base-portuguese-cased_LRATE_1e-05_EPOCHS_5
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# gustavokpc/bert-base-portuguese-cased_LRATE_1e-05_EPOCHS_5
This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0570
- Train Accuracy: 0.9806
- Train F1 M: 0.5606
- Train Precision M: 0.4043
- Train Recall M: 0.9769
- Validation Loss: 0.1851
- Validation Accuracy: 0.9446
- Validation F1 M: 0.5629
- Validation Precision M: 0.4035
- Validation Recall M: 0.9763
- Epoch: 4
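A minimal inference sketch — the repository ships TensorFlow weights, so the pipeline is pinned to the TF backend; the input sentence is an arbitrary example:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="gustavokpc/bert-base-portuguese-cased_LRATE_1e-05_EPOCHS_5",
    framework="tf",  # load the TF checkpoint directly
)
print(classifier("Um exemplo de frase em português."))
```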
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 1e-05, 'decay_steps': 3790, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train F1 M | Train Precision M | Train Recall M | Validation Loss | Validation Accuracy | Validation F1 M | Validation Precision M | Validation Recall M | Epoch |
|:----------:|:--------------:|:----------:|:-----------------:|:--------------:|:---------------:|:-------------------:|:---------------:|:----------------------:|:-------------------:|:-----:|
| 0.2400 | 0.9057 | 0.5084 | 0.3774 | 0.8407 | 0.1924 | 0.9294 | 0.5681 | 0.4101 | 0.9715 | 0 |
| 0.1325 | 0.9529 | 0.5557 | 0.4036 | 0.9509 | 0.1685 | 0.9367 | 0.5519 | 0.3998 | 0.9380 | 1 |
| 0.0929 | 0.9681 | 0.5582 | 0.4031 | 0.9644 | 0.1650 | 0.9426 | 0.5583 | 0.4027 | 0.9554 | 2 |
| 0.0703 | 0.9764 | 0.5599 | 0.4042 | 0.9720 | 0.1808 | 0.9426 | 0.5670 | 0.4068 | 0.9794 | 3 |
| 0.0570 | 0.9806 | 0.5606 | 0.4043 | 0.9769 | 0.1851 | 0.9446 | 0.5629 | 0.4035 | 0.9763 | 4 |
### Framework versions
- Transformers 4.34.1
- TensorFlow 2.10.0
- Datasets 2.14.5
- Tokenizers 0.14.1
|
nickmiller795/ppo-Huggy
|
nickmiller795
| 2024-01-28T03:14:17Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2024-01-28T03:14:11Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: nickmiller795/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
avocado123/finetuning-sentiment-model-3000-samples
|
avocado123
| 2024-01-28T02:57:36Z | 91 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-28T02:51:20Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3390
- Accuracy: 0.8667
- F1: 0.8701
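A minimal inference sketch with the `transformers` pipeline (the review text is an arbitrary example):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="avocado123/finetuning-sentiment-model-3000-samples",
)
print(classifier("This movie was surprisingly good!"))
```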
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
nbeerbower/bruphin-zeta
|
nbeerbower
| 2024-01-28T02:35:25Z | 48 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser",
"base_model:merge:cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser",
"base_model:nbeerbower/bruphin-epsilon",
"base_model:merge:nbeerbower/bruphin-epsilon",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-27T23:51:00Z |
---
license: apache-2.0
base_model:
- nbeerbower/bruphin-epsilon
- cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser
tags:
- mergekit
- merge
---
# bruphin-zeta
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
Rebased off Dolphin 2.6 for correct ChatML support.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [nbeerbower/bruphin-epsilon](https://huggingface.co/nbeerbower/bruphin-epsilon)
* [cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser](https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: nbeerbower/bruphin-epsilon
layer_range: [0, 32]
- model: cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser
layer_range: [0, 32]
merge_method: slerp
base_model: cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
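Since the merge was rebased for ChatML support, a minimal chat sketch using the tokenizer's chat template (the message is an arbitrary example):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nbeerbower/bruphin-zeta"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Write a haiku about merging models."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```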
|
Mary8/tinyroberta-squad2-step1
|
Mary8
| 2024-01-28T02:22:15Z | 102 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-01-27T03:23:37Z |
---
tags:
- generated_from_trainer
model-index:
- name: tinyroberta-squad2-step1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tinyroberta-squad2-step1
This model was trained from scratch on an unspecified dataset.
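A minimal extractive question-answering sketch (question and context are arbitrary examples):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Mary8/tinyroberta-squad2-step1")
result = qa(
    question="What task is the checkpoint tuned for?",
    context="tinyroberta-squad2-step1 is a small RoBERTa checkpoint tuned for extractive question answering.",
)
print(result["answer"], result["score"])
```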
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
akhilantony11/my-pet-cat-gcz
|
akhilantony11
| 2024-01-28T01:33:41Z | 14 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-11-04T11:26:34Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Cat-GCZ Dreambooth model trained by akhilantony11 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: BJK23EC003
Sample pictures of this concept: *(images omitted)*
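A minimal generation sketch with `diffusers` — the instance token is inferred from the repository name and is an assumption:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "akhilantony11/my-pet-cat-gcz", torch_dtype=torch.float16
).to("cuda")

# "my-pet-cat-gcz" as the instance token is assumed from the repo name.
image = pipe("a photo of my-pet-cat-gcz cat sitting on a windowsill").images[0]
image.save("cat.png")
```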
|
ntc-ai/SDXL-LoRA-slider.captivating-eyes
|
ntc-ai
| 2024-01-28T01:29:50Z | 92 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] |
text-to-image
| 2024-01-28T01:29:44Z |
---
language:
- en
thumbnail: "images/evaluate/captivating eyes.../captivating eyes_17_3.0.png"
widget:
- text: captivating eyes
output:
url: images/captivating eyes_17_3.0.png
- text: captivating eyes
output:
url: images/captivating eyes_19_3.0.png
- text: captivating eyes
output:
url: images/captivating eyes_20_3.0.png
- text: captivating eyes
output:
url: images/captivating eyes_21_3.0.png
- text: captivating eyes
output:
url: images/captivating eyes_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "captivating eyes"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---
# ntcai.xyz slider - captivating eyes (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/captivating eyes_17_-3.0.png" width=256 height=256 /> | <img src="images/captivating eyes_17_0.0.png" width=256 height=256 /> | <img src="images/captivating eyes_17_3.0.png" width=256 height=256 /> |
| <img src="images/captivating eyes_19_-3.0.png" width=256 height=256 /> | <img src="images/captivating eyes_19_0.0.png" width=256 height=256 /> | <img src="images/captivating eyes_19_3.0.png" width=256 height=256 /> |
| <img src="images/captivating eyes_20_-3.0.png" width=256 height=256 /> | <img src="images/captivating eyes_20_0.0.png" width=256 height=256 /> | <img src="images/captivating eyes_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
captivating eyes
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.captivating-eyes', weight_name='captivating eyes.safetensors', adapter_name="captivating eyes")
# Activate the LoRA
pipe.set_adapters(["captivating eyes"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, captivating eyes"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 1140 unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
|
AzureBlack/KitchenSink_103b-2.5bpw-6h-exl2
|
AzureBlack
| 2024-01-28T01:04:12Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"rp",
"erp",
"chat",
"storywriting",
"en",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-28T00:57:16Z |
---
license: llama2
language:
- en
tags:
- rp
- erp
- chat
- storywriting
---
# Kitchen Sink 103b

This model is a rotating-stack merge of three 70b models in a 103b (120 layer) configuration inspired by Venus 103b. The result of
this "frankenmerge" is a large model that contains a little bit of everything - including the kitchen sink. RP, chat, storywriting,
and instruct are all well supported. It may or may not code well - I lack the expertise to test it in that capacity, but considering
the source models, I suspect it is unlikely.
Component models for the rotating stack are
- royallab/Aetheria-L2-70B
- lizpreciatior/lzlv_70b_fp16_hf
- Sao10K/WinterGoddess-1.4x-70B-L2
Components of those models are purported to include: Nous-Hermes-Llama2-70b, Xwin-LM-7B-V0.1, Mythospice-70b, Euryale-1.3-L2-70B,
tulu-2-dpo-70b, GOAT-70B-Storytelling, Platypus2-70B-instruct, Lila-70B, SunsetBoulevard, and some private LoRAs.
As all components are based on Llama2 70b, native context length is 4k tokens. Coherency out to 8k is extremely good with rope scaling,
but starts to decline beyond that.
This model is uncensored and perfectly capable of generating objectionable material. However, it is not an explicitly NSFW model,
and it has never "gone rogue" and tried to insert NSFW content into SFW prompts in my experience. As with any LLM, no factual claims
made by the model should be taken at face value. You know that boilerplate safety disclaimer that most professional models have?
Assume this has it too. This model is for entertainment purposes only.
FP16 and Q4_K_S GGUFs are located here: https://huggingface.co/MarsupialAI/KitchenSink_103b_GGUF_Q4KS_FP16
If you create additional quants, please let me know and I will link them here as well.
# Sample output
```
{{[INPUT]}}
Write a detailed and humorous story about a cute and fluffy bunny that goes to a Gwar concert.
{{[OUTPUT]}}
Once upon a time, in a magical land called Fluffyville, there lived a cute and fluffy little bunny named Fuzzlesworth. Fuzzlesworth
was your average, everyday bunny; he hopped around, nibbled on carrots, and generally lived a quiet, peaceful life. However,
Fuzzlesworth had a secret: he was a closet metalhead. When no one else was around, he would crank up his tiny bunny-sized speakers
and rock out to the loudest, most aggressive metal music he could find. His favorite band of all time was Gwar, the infamous
intergalactic warriors who sprayed fake blood and played insane heavy metal.
One day, Fuzzlesworth stumbled upon a flyer nailed to a tree trunk in the meadow that announced that Gwar would be performing at a
nearby human concert venue. His fluffy little bunny heart skipped a beat! He simply had to go! But, he knew his friends and family
would never understand his love for such an aggressive and intense music, so he decided to keep his plan a secret.
The big night finally arrived, and Fuzzlesworth, dressed in his best leather bunny-sized studded vest and spiked collar, hopped
his way to the concert venue. He stood outside the door, shaking with excitement and nervousness. The line of people waiting to
enter was intimidating: there were tall, muscular men and women covered in spikes, chains, and tattoos, and Fuzzlesworth felt like
a small, fluffy speck in a sea of metal.
But Fuzzlesworth was determined to see his favorite band, so he summoned up his bunny-sized courage and hopped to the front of the
line. The bouncer, a hulking behemoth of a man, looked down at Fuzzlesworth and raised a thick, hairy eyebrow.
"Umm...I'm here for the Gwar concert," Fuzzlesworth squeaked, holding up his ticket.
The bouncer stared at him incredulously for a moment, then let out a hearty laugh. "Aw, look at the little bunny in his metal gear!
Alright, go on in, but stay out of trouble, you hear?"
Fuzzlesworth's tail was wagging so hard, it could've powered a small wind turbine. He bounced his way inside the venue, taking in
the sights and sounds of the raucous crowd. The air was thick with the scent of sweat, cheap beer, and dry-ice smoke. He found a
small nook near the stage where he could see everything and waited for the show to start.
Finally, the lights dimmed, and the band took the stage. Fuzzlesworth's eyes went wide as the members of Gwar, adorned in their
grotesque, otherworldly costumes, began to thrash and wail on their instruments. He couldn't believe his fluffy little ears! He
was actually at a Gwar concert!
As the set went on, Fuzzlesworth got more and more into the music, headbanging and thrashing around in his own adorable bunny way.
The other concertgoers couldn't help but smile and chuckle at the sight of the tiny bunny moshing it up in the front row.
During one particularly intense song, Fuzzlesworth found himself lifted up by the mosh pit and deposited on stage. He looked out
at the roaring crowd, his little bunny heart pounding. In a moment of metal madness, he jumped up on the speaker stack and began
to shred an air-guitar solo, much to the amusement of the band and audience alike.
At the end of the set, the band's lead singer, Oderus Urungus, spotted Fuzzlesworth and motioned him over. Fuzzlesworth, overcome
with joy, hopped over and was lifted up onto the frontman's massive, spiked shoulder.
"Ladies and gentlemen," Oderus bellowed into the microphone, "I'd like to introduce our special guest tonight! Give it up for
Fuzzlesworth the Metal Bunny!"
The crowd went wild, cheering and clapping as Fuzzlesworth soaked up the attention. He had never felt so alive!
After the show, Fuzzlesworth got to meet the band, posing for photos and getting autographs. As he hopped back home that night,
his head still buzzing with metal and excitement, he knew that his life would never be the same. He was no longer just a small,
fluffy bunny from Fluffyville; he was Fuzzlesworth the Metal Bunny, and he wasn't afraid to rock out!
```
# Prompt format
Seems to have the strongest affinity for Alpaca prompts, but Vicuna works as well. Considering the variety of components, most
formats will probably work to some extent.
# WTF is a rotating-stack merge?
Inspired by Undi's experiments with stacked merges, Jeb Carter found that output quality and model initiative could be significantly
improved by reversing the model order in the stack, and then doing a linear merge between the original and reversed stacks. That is
what I did here. I created three passthrough stacked merges using the three source models (rotating the model order in each stack),
then did a linear merge of all three stacks. The exact merge configs can be found in the recipe.txt file.
|
SC56/Mistral-7B-sumz-dpo-4h
|
SC56
| 2024-01-28T00:52:05Z | 0 | 0 | null |
[
"safetensors",
"arxiv:1910.09700",
"license:cc-by-4.0",
"region:us"
] | null | 2024-01-28T00:45:45Z |
---
license: cc-by-4.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
SC56/Mistral-7B-sumz-dpo-3h
|
SC56
| 2024-01-28T00:51:50Z | 0 | 1 | null |
[
"safetensors",
"arxiv:1910.09700",
"license:cc-by-4.0",
"region:us"
] | null | 2024-01-28T00:45:26Z |
---
license: cc-by-4.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
coke0zero/Reinforce-Cartpole-v1
|
coke0zero
| 2024-01-28T00:39:51Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-28T00:39:41Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
techSnipe/whisper-small-hi-Atmin
|
techSnipe
| 2024-01-28T00:30:18Z | 60 | 0 |
transformers
|
[
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-10-18T18:36:49Z |
---
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
model-index:
- name: whisper-small-hi-Atmin
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-hi-Atmin
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 40
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.14.1
|
tnn1t1s/lines
|
tnn1t1s
| 2024-01-28T00:29:45Z | 0 | 0 | null |
[
"pytorch",
"dataset:tnn1t1s/lines",
"license:apache-2.0",
"region:us"
] | null | 2024-01-28T00:02:29Z |
---
license: apache-2.0
datasets:
- tnn1t1s/lines
---
Lines is a model that predicts a steep, straight line. It will be trained on the tnn1t1s/lines dataset.
|
Patcas/my_awesome-assert-new
|
Patcas
| 2024-01-28T00:17:47Z | 13 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"plbart",
"text2text-generation",
"generated_from_trainer",
"base_model:Patcas/plbart-works",
"base_model:finetune:Patcas/plbart-works",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-27T09:02:02Z |
---
base_model: Patcas/plbart-works
tags:
- generated_from_trainer
model-index:
- name: my_awesome-assert-new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome-assert-new
This model is a fine-tuned version of [Patcas/plbart-works](https://huggingface.co/Patcas/plbart-works) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4168
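A minimal generation sketch with the `text2text-generation` pipeline — the input snippet is an arbitrary example, since the expected input format is not documented here:
```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="Patcas/my_awesome-assert-new")
print(generator("public int add(int a, int b) { return a + b; }")[0]["generated_text"])
```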
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.4144 | 1.0 | 15052 | 0.4181 |
| 0.3772 | 2.0 | 30104 | 0.3961 |
| 0.358 | 3.0 | 45156 | 0.4086 |
| 0.3221 | 4.0 | 60208 | 0.4168 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
lllyasviel/fav_models
|
lllyasviel
| 2024-01-28T00:15:07Z | 0 | 106 | null |
[
"region:us"
] | null | 2023-10-13T22:28:52Z |
Some of the models I use myself. This space is for my personal use only, not a distribution page.
|
SC56/Mistral-7B-orca-dpo-8h
|
SC56
| 2024-01-28T00:13:43Z | 0 | 0 | null |
[
"safetensors",
"arxiv:1910.09700",
"license:cc-by-4.0",
"region:us"
] | null | 2024-01-28T00:03:41Z |
---
license: cc-by-4.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AliRiza/corgy_dog_LoRA
|
AliRiza
| 2024-01-28T00:02:46Z | 1 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-01-27T23:57:52Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of TOK dog
license: openrail++
---
# SDXL LoRA DreamBooth - AliRiza/corgy_dog_LoRA
<Gallery />
## Model description
These are AliRiza/corgy_dog_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of TOK dog` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/AliRiza/corgy_dog_LoRA/tree/main) them in the Files & versions tab.
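As a quick sanity check, the sketch below shows one way to load these weights with diffusers; it assumes the fp16 VAE listed above, and the prompt beyond the trigger phrase is arbitrary:
```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# fp16-safe VAE, matching the one listed above
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")

# Attach the DreamBooth LoRA weights from this repository
pipe.load_lora_weights("AliRiza/corgy_dog_LoRA")

# "a photo of TOK dog" is the trigger phrase; the rest of the prompt is arbitrary
image = pipe("a photo of TOK dog on the beach", num_inference_steps=25).images[0]
image.save("corgy.png")
```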
|
lmg-anon/vntl-7b-v0.3.1-lora
|
lmg-anon
| 2024-01-28T00:01:32Z | 0 | 1 | null |
[
"safetensors",
"translation",
"ja",
"en",
"dataset:lmg-anon/VNTL-v2.5-1k",
"license:llama2",
"region:us"
] |
translation
| 2024-01-27T23:36:09Z |
---
license: llama2
datasets:
- lmg-anon/VNTL-v2.5-1k
language:
- ja
- en
pipeline_tag: translation
---
This is an experimental LLaMA2 7B lora created using the [VNTL-v2.5-1k](https://huggingface.co/datasets/lmg-anon/VNTL-v2.5-1k) dataset.
This is an update of version 0.3:
- adamw_bnb_8bit -> adamw_8bit (this is the default in unsloth)
- 2 epochs -> 1 epoch (2 epochs seemed to increase eval loss)
- Added EOS after each translation pair.
**Eval Loss**: 0.72
This is a prompt example:
```
<<START>>
Name: Uryuu Shingo (瓜生 新吾) | Gender: Male | Aliases: Onii-chan (お兄ちゃん)
Name: Uryuu Sakuno (瓜生 桜乃) | Gender: Female
<<JAPANESE>>
[桜乃]: 『……ごめん』
<<ENGLISH>> (fidelity = absolute)
[Sakuno]: 『... Sorry.』</s>
<<JAPANESE>>
[新吾]: 「ううん、こう言っちゃなんだけど、迷子でよかったよ。桜乃は可愛いから、いろいろ心配しちゃってたんだぞ俺」
<<ENGLISH>> (fidelity = high)
```
The generated translation for that prompt, with temperature 0, is:
```
[Shingo]: 「No, don't apologize. I'm just glad you're safe. You're so cute, Sakuno, I was worried sick.」
```
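A minimal way to try the adapter is to attach it to a LLaMA-2 7B base with peft. This is only a sketch: the base checkpoint id below is an assumption (the card just says LLaMA2 7B), and greedy decoding stands in for temperature 0.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, "lmg-anon/vntl-7b-v0.3.1-lora")

# Use the full VNTL prompt shown above, ending right after "<<ENGLISH>> (fidelity = high)"
prompt = "..."  # paste the prompt example here
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128, do_sample=False)  # greedy == temperature 0
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```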
|
EddyGiusepe/tinyllama-aira_Chatbot-lora
|
EddyGiusepe
| 2024-01-27T23:58:04Z | 12 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v0.3",
"base_model:finetune:TinyLlama/TinyLlama-1.1B-Chat-v0.3",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-27T02:50:14Z |
---
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v0.3
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: tinyllama-aira_Chatbot-lora
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tinyllama-aira_Chatbot-lora
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v0.3](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v0.3) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
lmg-anon/vntl-7b-v0.3-lora
|
lmg-anon
| 2024-01-27T23:57:26Z | 0 | 0 | null |
[
"safetensors",
"translation",
"ja",
"en",
"dataset:lmg-anon/VNTL-v2-1k",
"license:llama2",
"region:us"
] |
translation
| 2024-01-26T00:02:21Z |
---
license: llama2
datasets:
- lmg-anon/VNTL-v2-1k
language:
- ja
- en
pipeline_tag: translation
---
This is an experimental llama2 7B lora created using the [VNTL-v2-1k](https://huggingface.co/datasets/lmg-anon/VNTL-v2-1k) dataset.
There have been some minor changes in the dataset since version 0.2, and I have made the following adjustments to the training arguments:
- Model loaded in 8 bits.
- Sequence length limited to 1024 tokens to speed up experiments.
- Effective batch size changed to 30 (6 batch size + 5 grad acc).
- 2 epochs.
**Eval Loss**: 0.78
This lora was trained alongside a 4-bit lora (qlora), the goal being to see if training a lora would be better than training a qlora.
In the end, I don't think that there was much of a difference. At most I saw a consistent 0.01 drop in loss, but the loss graph looked the same, which meant both fine-tunes converged the same way.
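In practice, the lora/qlora split comes down to the precision in which the frozen base weights are loaded before the adapter is attached. A rough transformers/peft sketch, for illustration only (the base id, rank, and target modules are placeholders, not the values used for this run):
```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# lora run: frozen base loaded in 8-bit; qlora run: identical except load_in_4bit=True
bnb = BitsAndBytesConfig(load_in_8bit=True)
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", quantization_config=bnb, device_map="auto"
)
base = prepare_model_for_kbit_training(base)

# placeholder adapter config -- the trainable LoRA weights are the same either way
adapter = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(base, adapter)
model.print_trainable_parameters()
```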
This is a prompt example:
```
<<START>>
Name: Uryuu Shingo (瓜生 新吾) | Gender: Male | Aliases: Onii-chan (お兄ちゃん)
Name: Uryuu Sakuno (瓜生 桜乃) | Gender: Female
<<JAPANESE>>
[桜乃]: 『……ごめん』
<<ENGLISH>> (fidelity = absolute)
[Sakuno]: 『... Sorry.』
<<JAPANESE>>
[新吾]: 「ううん、こう言っちゃなんだけど、迷子でよかったよ。桜乃は可愛いから、いろいろ心配しちゃってたんだぞ俺」
<<ENGLISH>> (fidelity = high)
```
The generated translation for that prompt, with temperature 0, is:
```
[Shingo]: 「No, don't apologize. I'm just glad you're safe. You're so cute, Sakuno, I was worried sick.」
```
|
bearbearyu1223/whisper-large-peft-lora-zh_TW-v0.2
|
bearbearyu1223
| 2024-01-27T23:51:08Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"base_model:openai/whisper-large-v2",
"base_model:finetune:openai/whisper-large-v2",
"license:apache-2.0",
"region:us"
] | null | 2024-01-27T21:11:52Z |
---
license: apache-2.0
base_model: openai/whisper-large-v2
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-large-peft-lora-zh_TW-v0.2
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-large-peft-lora-zh_TW-v0.2
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1253
- Wer: 0.2981
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2518 | 1.0 | 443 | 0.1478 | 0.3202 |
| 0.1048 | 2.0 | 886 | 0.1348 | 0.3505 |
| 0.0209 | 3.0 | 1329 | 0.1253 | 0.2981 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
samot-samoe/gpt-neo-sft-4000-steps-lora
|
samot-samoe
| 2024-01-27T23:34:25Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:IlyaGusev/rulm_gpt_neo_small",
"base_model:adapter:IlyaGusev/rulm_gpt_neo_small",
"region:us"
] | null | 2024-01-27T23:34:21Z |
---
library_name: peft
base_model: IlyaGusev/rulm_gpt_neo_small
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
wahdan99/q-taxi-v3
|
wahdan99
| 2024-01-27T23:18:12Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-27T23:18:10Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-taxi-v3
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.46 +/- 2.74
      name: mean_reward
      verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym

# `load_from_hub` is the pickle-loading helper defined in the Hugging Face Deep RL Course notebooks
model = load_from_hub(repo_id="wahdan99/q-taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
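Continuing from the snippet above, a short greedy rollout sanity-checks the agent. This assumes the pickled dict stores the Q-table under a `qtable` key, as in the Deep RL Course notebooks:
```python
import numpy as np

state, _ = env.reset(seed=42)
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # exploit: pick the highest-value action
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode reward:", total_reward)
```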
|
bartowski/internlm2-chat-7b-llama-exl2-old
|
bartowski
| 2024-01-27T23:14:32Z | 4 | 1 | null |
[
"text-generation",
"license:other",
"region:us"
] |
text-generation
| 2024-01-18T16:21:55Z |
---
pipeline_tag: text-generation
license: other
quantized_by: bartowski
---
Update Jan 27: This model was quantized before some config updates from internlm; please try the new one here and report any differences: https://huggingface.co/bartowski/internlm2-chat-7b-llama-exl2/
## Exllama v2 Quantizations of internlm2-chat-7b-llama
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.11">turboderp's ExLlamaV2 v0.0.11</a> for quantization.
# The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)
Each branch contains an individual bits-per-weight option, with the main one containing only the measurement.json for further conversions.
Original model: https://huggingface.co/internlm/internlm2-chat-7b
Model Size: 7b
| Branch | Bits | lm_head bits | Dataset | Size | Description |
| ----- | ---- | ------- | ------- | ------ | ------------ |
| [8_0](https://huggingface.co/Bartowski/internlm2-chat-7b-llama-exl2-old/tree/8_0) | 8.0 | 8.0 | Default | 9.8 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/Bartowski/internlm2-chat-7b-llama-exl2-old/tree/6_5) | 6.5 | 8.0 | Default | 8.6 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
| [5_0](https://huggingface.co/Bartowski/internlm2-chat-7b-llama-exl2-old/tree/5_0) | 5.0 | 6.0 | Default | 7.4 GB | Slightly lower perplexity vs 6.5. |
| [4_0](https://huggingface.co/Bartowski/internlm2-chat-7b-llama-exl2-old/tree/4_0) | 4.0 | 6.0 | Default | 6.5 GB | Just under GPTQ equivalent bits per weight. |
| [3_5](https://huggingface.co/Bartowski/internlm2-chat-7b-llama-exl2-old/tree/3_5) | 3.5 | 6.0 | Default | 6.1 GB | Lower quality, only use if you have to. |
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/internlm2-chat-7b-llama-exl2-old
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download the `main` (only useful if you only care about measurement.json) branch to a folder called `internlm2-chat-7b-llama-exl2`:
```shell
mkdir internlm2-chat-7b-llama-exl2
huggingface-cli download bartowski/internlm2-chat-7b-llama-exl2-old --local-dir internlm2-chat-7b-llama-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir internlm2-chat-7b-llama-exl2-6_5
huggingface-cli download bartowski/internlm2-chat-7b-llama-exl2-old --revision 6_5 --local-dir internlm2-chat-7b-llama-exl2-6_5 --local-dir-use-symlinks False
```
|
barandinho/distilbert-base-turkish-cased-toxic-lang
|
barandinho
| 2024-01-27T23:02:04Z | 187 | 1 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:dbmdz/distilbert-base-turkish-cased",
"base_model:finetune:dbmdz/distilbert-base-turkish-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-27T20:47:05Z |
---
license: apache-2.0
base_model: dbmdz/distilbert-base-turkish-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: results
  results: []
pipeline_tag: text-classification
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
## Model description
This model is a fine-tuned version of distilbert-base-turkish on the turkish-toxic-language dataset.
## Results
The model was trained on Google Colab with a T4 GPU in roughly 1.5 hours. The training hyperparameters are given below.
It achieves the following results on the evaluation set:
- Loss: 0.2019
- Accuracy: 0.9287
## Intended uses
The model detects whether a given Turkish sentence is toxic and labels it accordingly.
Labels:
OTHER: not toxic\
INSULT: insult\
PROFANITY: profanity\
SEXIST: sexism\
RACIST: racism
## Usage
```python
from transformers import pipeline

pipe = pipeline("sentiment-analysis", model="barandinho/distilbert-base-turkish-cased-toxic-lang")
print(pipe("Bugün hava çok güzel!"))  # illustrative example input; prints the predicted label and score
```
## Training and evaluation data
The data was initially split into training and evaluation sets, and the model was trained for 2 epochs. Once the model's performance is validated, the two splits will be merged and the model retrained for 3 epochs.
### Training hyperparameters
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.762 | 1.0 | 972 | 0.2179 | 0.9210 |
| 0.1682 | 2.0 | 1944 | 0.2019 | 0.9287 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
chuqiaog/distilbert-base-uncased-finetuned-ner
|
chuqiaog
| 2024-01-27T22:48:08Z | 92 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-01-25T21:57:13Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0265
- Precision: 0.9257
- Recall: 0.9346
- F1: 0.9301
- Accuracy: 0.9833
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0961 | 1.0 | 878 | 0.0315 | 0.8969 | 0.9221 | 0.9093 | 0.9791 |
| 0.0192 | 2.0 | 1756 | 0.0260 | 0.9261 | 0.9312 | 0.9287 | 0.9829 |
| 0.0121 | 3.0 | 2634 | 0.0265 | 0.9257 | 0.9346 | 0.9301 | 0.9833 |
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.2
- Datasets 2.16.1
- Tokenizers 0.15.0
|
asun17904/bert-base-uncased-adviter
|
asun17904
| 2024-01-27T22:37:20Z | 1 | 0 |
pytorch
|
[
"pytorch",
"bert",
"en",
"license:mit",
"region:us"
] | null | 2024-01-27T19:15:40Z |
---
language: en
license: mit
library_name: pytorch
---
# Adversarial Training Through Iterations
Trainer Hyperparameters:
- `lr` = 5e-05
- `per_device_batch_size` = 16
- `gradient_accumulation_steps` = 1
- `weight_decay` = 1e-09
- `seed` = 42
Extended Logs:
|eval_loss|eval_accuracy|epoch|
|--|--|--|
|
pedroferreira/speecht5_finetuned_voxpopuli_fr_2
|
pedroferreira
| 2024-01-27T22:23:50Z | 68 | 0 |
transformers
|
[
"transformers",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"text-to-speech",
"dataset:facebook/voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2024-01-27T20:08:20Z |
---
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
- text-to-speech
datasets:
- facebook/voxpopuli
model-index:
- name: speecht5_finetuned_voxpopuli_fr
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_fr
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the facebook/voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8649
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 0.4565 | 533.33 | 1000 | 0.8182 |
| 0.4089 | 1066.67 | 2000 | 0.8548 |
| 0.3972 | 1600.0 | 3000 | 0.8666 |
| 0.3876 | 2133.33 | 4000 | 0.8649 |
### Framework versions
- Transformers 4.37.1
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
bartowski/Tess-10.7B-v1.5-exl2
|
bartowski
| 2024-01-27T22:23:19Z | 0 | 0 | null |
[
"text-generation",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2024-01-27T07:22:08Z |
---
license: apache-2.0
quantized_by: bartowski
pipeline_tag: text-generation
---
## Exllama v2 Quantizations of Tess-10.7B-v1.5
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.12">turboderp's ExLlamaV2 v0.0.12</a> for quantization.
## The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)
Each branch contains an individual bits-per-weight option, with the main one containing only the measurement.json for further conversions.
Conversion was done using the default calibration dataset.
Default arguments used except when the bits per weight is above 6.0, at that point the lm_head layer is quantized at 8 bits per weight instead of the default 6.
Original model: https://huggingface.co/migtissera/Tess-10.7B-v1.5
| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description |
| ----- | ---- | ------- | ------ | ------ | ------ | ------------ |
| [8_0](https://huggingface.co/Bartowski/Tess-10.7B-v1.5-exl2/tree/8_0) | 8.0 | 8.0 | 11.9 GB | 13.3 GB | 15.3 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/Bartowski/Tess-10.7B-v1.5-exl2/tree/6_5) | 6.5 | 8.0 | 10.3 GB | 11.7 GB | 13.7 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
| [5_0](https://huggingface.co/Bartowski/Tess-10.7B-v1.5-exl2/tree/5_0) | 5.0 | 6.0 | 8.3 GB | 9.7 GB | 11.7 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. |
| [4_25](https://huggingface.co/Bartowski/Tess-10.7B-v1.5-exl2/tree/4_25) | 4.25 | 6.0 | 7.4 GB | 8.6 GB | 10.6 GB | GPTQ equivalent bits per weight, slightly higher quality. |
| [3_5](https://huggingface.co/Bartowski/Tess-10.7B-v1.5-exl2/tree/3_5) | 3.5 | 6.0 | 6.4 GB | 7.8 GB | 9.8 GB | Lower quality, only use if you have to. |
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/Tess-10.7B-v1.5-exl2
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download the `main` (only useful if you only care about measurement.json) branch to a folder called `Tess-10.7B-v1.5-exl2`:
```shell
mkdir Tess-10.7B-v1.5-exl2
huggingface-cli download bartowski/Tess-10.7B-v1.5-exl2 --local-dir Tess-10.7B-v1.5-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
Linux:
```shell
mkdir Tess-10.7B-v1.5-exl2-6_5
huggingface-cli download bartowski/Tess-10.7B-v1.5-exl2 --revision 6_5 --local-dir Tess-10.7B-v1.5-exl2-6_5 --local-dir-use-symlinks False
```
Windows (which apparently doesn't like _ in folders sometimes?):
```shell
mkdir Tess-10.7B-v1.5-exl2-6.5
huggingface-cli download bartowski/Tess-10.7B-v1.5-exl2 --revision 6_5 --local-dir Tess-10.7B-v1.5-exl2-6.5 --local-dir-use-symlinks False
```
|
sevvalkapcak/model2
|
sevvalkapcak
| 2024-01-27T22:23:12Z | 46 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-27T21:54:28Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: sevvalkapcak/model2
  results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# sevvalkapcak/model2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0243
- Validation Loss: 0.2900
- Train Accuracy: 0.9315
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 5e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.3784 | 0.1846 | 0.9225 | 0 |
| 0.1372 | 0.1524 | 0.937 | 1 |
| 0.1077 | 0.1561 | 0.933 | 2 |
| 0.0970 | 0.1525 | 0.9355 | 3 |
| 0.0830 | 0.1944 | 0.9285 | 4 |
| 0.0652 | 0.1864 | 0.9265 | 5 |
| 0.0510 | 0.2117 | 0.9365 | 6 |
| 0.0462 | 0.2221 | 0.9335 | 7 |
| 0.0409 | 0.1977 | 0.9375 | 8 |
| 0.0243 | 0.2900 | 0.9315 | 9 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.1
|
yc4142/RedPajama-INCITE-Instruct-3B-v1-lora-ethics-nonCoT
|
yc4142
| 2024-01-27T22:13:44Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-01-27T22:13:37Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
coke0zero/dqn-SpaceInvadersNoFrameskip-v4
|
coke0zero
| 2024-01-27T22:10:59Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-27T22:10:25Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: SpaceInvadersNoFrameskip-v4
      type: SpaceInvadersNoFrameskip-v4
    metrics:
    - type: mean_reward
      value: 539.00 +/- 173.56
      name: mean_reward
      verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga coke0zero -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga coke0zero -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga coke0zero
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
             ('buffer_size', 100000),
             ('env_wrapper',
              ['stable_baselines3.common.atari_wrappers.AtariWrapper']),
             ('exploration_final_eps', 0.01),
             ('exploration_fraction', 0.1),
             ('frame_stack', 4),
             ('gradient_steps', 1),
             ('learning_rate', 0.0001),
             ('learning_starts', 100000),
             ('n_timesteps', 1000000.0),
             ('optimize_memory_usage', False),
             ('policy', 'CnnPolicy'),
             ('target_update_interval', 1000),
             ('train_freq', 4),
             ('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
StanleyOne/mistral-devign-finetune-instruct-v2-12k
|
StanleyOne
| 2024-01-27T21:57:21Z | 61 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-01-27T21:46:48Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Professor/CGIAR-Crop-disease
|
Professor
| 2024-01-27T21:53:25Z | 33 | 5 |
transformers
|
[
"transformers",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"base_model:gianlab/swin-tiny-patch4-window7-224-finetuned-plantdisease",
"base_model:finetune:gianlab/swin-tiny-patch4-window7-224-finetuned-plantdisease",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-01-26T23:37:49Z |
---
license: apache-2.0
base_model: gianlab/swin-tiny-patch4-window7-224-finetuned-plantdisease
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: CGIAR-Crop-disease
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CGIAR-Crop-disease
This model is a fine-tuned version of [gianlab/swin-tiny-patch4-window7-224-finetuned-plantdisease](https://huggingface.co/gianlab/swin-tiny-patch4-window7-224-finetuned-plantdisease) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7438
- Accuracy: 0.6964
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0386 | 1.0 | 652 | 0.9385 | 0.5669 |
| 0.9619 | 2.0 | 1304 | 0.9422 | 0.5811 |
| 0.9193 | 3.0 | 1956 | 0.8806 | 0.6348 |
| 0.8876 | 4.0 | 2608 | 0.8703 | 0.6488 |
| 0.8777 | 5.0 | 3260 | 0.8361 | 0.6607 |
| 0.863 | 6.0 | 3912 | 0.8543 | 0.6417 |
| 0.8316 | 7.0 | 4564 | 0.8101 | 0.6607 |
| 0.8301 | 8.0 | 5216 | 0.8197 | 0.6609 |
| 0.8264 | 9.0 | 5868 | 0.8111 | 0.6720 |
| 0.8283 | 10.0 | 6520 | 0.8065 | 0.6669 |
| 0.816 | 11.0 | 7172 | 0.8115 | 0.6578 |
| 0.8263 | 12.0 | 7824 | 0.8029 | 0.6753 |
| 0.8017 | 13.0 | 8476 | 0.7929 | 0.6707 |
| 0.8005 | 14.0 | 9128 | 0.8025 | 0.6661 |
| 0.7989 | 15.0 | 9780 | 0.8153 | 0.6594 |
| 0.7961 | 16.0 | 10432 | 0.8033 | 0.6720 |
| 0.7769 | 17.0 | 11084 | 0.7879 | 0.6682 |
| 0.7757 | 18.0 | 11736 | 0.7868 | 0.6732 |
| 0.7713 | 19.0 | 12388 | 0.7773 | 0.6747 |
| 0.7638 | 20.0 | 13040 | 0.7678 | 0.6811 |
| 0.7645 | 21.0 | 13692 | 0.7826 | 0.6795 |
| 0.7497 | 22.0 | 14344 | 0.7931 | 0.6807 |
| 0.761 | 23.0 | 14996 | 0.7719 | 0.6820 |
| 0.7486 | 24.0 | 15648 | 0.7641 | 0.6895 |
| 0.7446 | 25.0 | 16300 | 0.7686 | 0.6832 |
| 0.7418 | 26.0 | 16952 | 0.7683 | 0.6904 |
| 0.7344 | 27.0 | 17604 | 0.7549 | 0.6895 |
| 0.7369 | 28.0 | 18256 | 0.7501 | 0.6891 |
| 0.7238 | 29.0 | 18908 | 0.7454 | 0.6933 |
| 0.7264 | 30.0 | 19560 | 0.7565 | 0.6876 |
| 0.7185 | 31.0 | 20212 | 0.7524 | 0.6880 |
| 0.7112 | 32.0 | 20864 | 0.7712 | 0.6807 |
| 0.7073 | 33.0 | 21516 | 0.7532 | 0.6897 |
| 0.7102 | 34.0 | 22168 | 0.7457 | 0.6960 |
| 0.7053 | 35.0 | 22820 | 0.7438 | 0.6964 |
| 0.6979 | 36.0 | 23472 | 0.7449 | 0.6933 |
| 0.6973 | 37.0 | 24124 | 0.7477 | 0.6929 |
| 0.6967 | 38.0 | 24776 | 0.7508 | 0.6926 |
| 0.6939 | 39.0 | 25428 | 0.7481 | 0.6933 |
| 0.6936 | 40.0 | 26080 | 0.7460 | 0.6968 |
### Framework versions
- Transformers 4.37.1
- Pytorch 2.0.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
jartine/mistral-7b.llamafile
|
jartine
| 2024-01-27T21:49:30Z | 52 | 24 | null |
[
"llamafile",
"license:apache-2.0",
"region:us"
] | null | 2023-11-22T01:17:19Z |
---
license: apache-2.0
tags:
- llamafile
---
|
jondurbin/bagel-2.8b-v0.2
|
jondurbin
| 2024-01-27T21:49:14Z | 4 | 7 |
transformers
|
[
"transformers",
"pytorch",
"dataset:ai2_arc",
"dataset:unalignment/spicy-3.1",
"dataset:codeparrot/apps",
"dataset:facebook/belebele",
"dataset:boolq",
"dataset:jondurbin/cinematika-v0.1",
"dataset:drop",
"dataset:lmsys/lmsys-chat-1m",
"dataset:TIGER-Lab/MathInstruct",
"dataset:cais/mmlu",
"dataset:Muennighoff/natural-instructions",
"dataset:openbookqa",
"dataset:piqa",
"dataset:Vezora/Tested-22k-Python-Alpaca",
"dataset:cakiki/rosetta-code",
"dataset:Open-Orca/SlimOrca",
"dataset:spider",
"dataset:squad_v2",
"dataset:migtissera/Synthia-v1.3",
"dataset:datasets/winogrande",
"dataset:nvidia/HelpSteer",
"dataset:Intel/orca_dpo_pairs",
"dataset:Squish42/bluemoon-fandom-1-1-rp-cleaned",
"dataset:LDJnr/Capybara",
"dataset:JULIELab/EmoBank",
"dataset:kingbri/PIPPA-shareGPT",
"base_model:state-spaces/mamba-2.8b-slimpj",
"base_model:finetune:state-spaces/mamba-2.8b-slimpj",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-12-30T12:50:40Z |
---
license: apache-2.0
base_model: state-spaces/mamba-2.8b-slimpj
datasets:
- ai2_arc
- unalignment/spicy-3.1
- codeparrot/apps
- facebook/belebele
- boolq
- jondurbin/cinematika-v0.1
- drop
- lmsys/lmsys-chat-1m
- TIGER-Lab/MathInstruct
- cais/mmlu
- Muennighoff/natural-instructions
- openbookqa
- piqa
- Vezora/Tested-22k-Python-Alpaca
- cakiki/rosetta-code
- Open-Orca/SlimOrca
- spider
- squad_v2
- migtissera/Synthia-v1.3
- datasets/winogrande
- nvidia/HelpSteer
- Intel/orca_dpo_pairs
- Squish42/bluemoon-fandom-1-1-rp-cleaned
- LDJnr/Capybara
- JULIELab/EmoBank
- kingbri/PIPPA-shareGPT
---
# A bagel, with everything (except DPO)

## Overview
An experimental fine-tune of [mamba-2.8b-slimpj](https://hf.co/state-spaces/mamba-2.8b-slimpj) using [bagel](https://github.com/jondurbin/bagel).
Default recommended system prompt:
```
You are a helpful, unbiased, uncensored assistant.
```
Supports several prompt formats, but you can also use `tokenizer.apply_chat_template`.
You probably want the [DPO version](https://hf.co/jondurbin/bagel-dpo-2.8b-v0.2) - it's much better.
## Example chat script
```python
import torch
from transformers import AutoTokenizer
from mamba_ssm.models.mixer_seq_simple import MambaLMHeadModel

device = "cuda"
tokenizer = AutoTokenizer.from_pretrained("bagel-final-2.8b-v0.2")
model = MambaLMHeadModel.from_pretrained("bagel-final-2.8b-v0.2", device=device, dtype=torch.float32)
messages = [{"role": "system", "content": "You are a helpful, unbiased, uncensored assistant."}]

while True:
    user_message = input("[INST] ")
    messages.append({"role": "user", "content": user_message})
    input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True).to(device)
    out = model.generate(input_ids=input_ids, max_length=2000, temperature=0.9, top_p=0.7, eos_token_id=tokenizer.eos_token_id, repetition_penalty=1.07)
    decoded = tokenizer.batch_decode(out)[0].split("[/INST]")[-1].replace("</s>", "").strip()
    messages.append({"role": "assistant", "content": decoded})
    print("[/INST]", decoded)
```
## SFT data sources
*Yes, you will see benchmark names in the list, but this only uses the train splits, and a decontamination by cosine similarity is performed at the end as a sanity check*
- [ai2_arc](https://huggingface.co/datasets/ai2_arc)
- Abstraction and reasoning dataset, useful in measuring "intelligence" to a certain extent.
- [airoboros](https://huggingface.co/datasets/unalignment/spicy-3.1)
- Variety of categories of synthetic instructions generated by gpt-4.
- [apps](https://huggingface.co/datasets/codeparrot/apps)
- Python coding dataset with 10k problems.
- [belebele](https://huggingface.co/datasets/facebook/belebele)
- Multi-lingual reading comprehension dataset.
- [bluemoon](https://huggingface.co/datasets/Squish42/bluemoon-fandom-1-1-rp-cleaned)
- Roleplay data scraped from Bluemoon, then cleaned and formatted as ShareGPT.
- [boolq](https://huggingface.co/datasets/boolq)
- Corpus of yes/no questions (which can be surprisingly difficult for AI to answer apparently?)
- [capybara](https://huggingface.co/datasets/LDJnr/Capybara)
- Multi-turn dataset used to create the capybara models.
- [cinematika](https://huggingface.co/datasets/jondurbin/cinematika-v0.1) (instruction and plain text)
- RP-style data synthesized from movie scripts so the model isn't quite as boring as it otherwise would be.
- [drop](https://huggingface.co/datasets/drop)
- More reading comprehension.
- [emobank](https://github.com/JULIELab/EmoBank)
- Emotion annotations using the Valence-Arousal-Dominance scheme.
- [gutenberg](https://www.gutenberg.org/) (plain text)
- Books/plain text, again to make the model less boring, only a handful of examples supported by [chapterize](https://github.com/JonathanReeve/chapterize)
- [lmsys_chat_1m](https://huggingface.co/datasets/lmsys/lmsys-chat-1m) (only gpt-4 items, also used for DPO)
- Chats collected by the lmsys chat arena, containing a wide variety of chats with various models.
- [mathinstruct](https://huggingface.co/datasets/TIGER-Lab/MathInstruct)
- Composite dataset with a variety of math-related tasks and problem/question formats.
- [mmlu](https://huggingface.co/datasets/cais/mmlu)
- Massive Multitask Language Understanding - a wide variety of questions about various subject matters.
- [natural_instructions](https://huggingface.co/datasets/Muennighoff/natural-instructions)
- Millions of instructions from 1600+ task categories (sampled down substantially, stratified by task type)
- [openbookqa](https://huggingface.co/datasets/openbookqa)
- Question answering dataset.
- [pippa](https://huggingface.co/datasets/kingbri/PIPPA-shareGPT)
- Deduped version of [PIPPA](https://huggingface.co/datasets/PygmalionAI/PIPPA) in ShareGPT format.
- [piqa](https://huggingface.co/datasets/piqa)
- Physical interaction question answering.
- [python_alpaca](https://huggingface.co/datasets/Vezora/Tested-22k-Python-Alpaca)
- Python instruction response pairs, validated as functional.
- [rosetta_code](https://huggingface.co/datasets/cakiki/rosetta-code)
- Code problems and solutions in a variety of programming languages taken from rosettacode.org.
- [slimorca](https://huggingface.co/datasets/Open-Orca/SlimOrca)
- Collection of ~500k gpt-4 verified chats from OpenOrca.
- [spider](https://huggingface.co/datasets/spider)
- SQL-targeted dataset.
- [squad_v2](https://huggingface.co/datasets/squad_v2)
- Contextual question answering (RAG).
- [synthia](https://huggingface.co/datasets/migtissera/Synthia-v1.3)
- GPT-4 generated data using advanced prompting from Migel Tissera.
- [winogrande](https://huggingface.co/datasets/winogrande)
- Fill in the blank style prompts.
Only the train splits were used (if a split was provided), and an additional pass of decontamination is performed using approximate nearest neighbor search (via faiss).
## Prompt formatting
In sticking with the theme of the bagel, I didn't want to use a single prompt format, so I used 4 - vicuna, llama-2, alpaca, and chat-ml (sorta).
I also didn't want to randomly select a single prompt format for each item (hoping each instruction would generalize more when used in a variety of prompt formats), so each instruction is actually converted into every prompt format.
This means each epoch of our fine-tune is effectively 4 epochs. So, for the fine-tunes, I would recommend doing only 1 epoch (or 0.75 epochs). I am testing with a single epoch using a relatively low learning rate.
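For illustration, here is a minimal sketch of that expansion; the helper below is illustrative, not the actual bagel code, and the templates simply mirror the formats described in the subsections that follow.

```python
def expand_formats(system: str, instruction: str) -> dict:
    """Render one (system, instruction) pair in all four prompt formats.

    Illustrative sketch only; the real bagel repo also handles multi-turn
    chats and tokenizer-specific BOS/EOS placement.
    """
    system = system or "A chat between a user and an unbiased, uncensored assistant."
    return {
        "vicuna": f"{system}\nUSER: {instruction}\nASSISTANT:",
        "llama-2": f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{instruction} [/INST]",
        "alpaca": (
            "Below is an instruction that describes a task. "
            "Write a response that appropriately completes the request.\n\n"
            f"### Instruction:\n{system}\n{instruction}\n\n### Response:\n"
        ),
        # BOS/EOS are added by the tokenizer in the chat-ml-ish format.
        "chat-ml-ish": f"system\n{system}\nuser\n{instruction}\nassistant\n",
    }

# Each SFT example becomes four training examples, one per format.
prompts = expand_formats(None, "Explain what a bagel is.")
```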
### Alpaca (sort of)
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{system prompt, if provided}
{instruction}
### Response:
```
The main difference here is that because of the dataset formatting and variety of data sources, it would have been much too tedious to add an `### Input:` block, so the inputs are just in the instruction section.
### Vicuna
```
{system prompt, if provided, randomly defaulting to "A chat between a user and an unbiased, uncensored assistant."}
USER: {instruction}
ASSISTANT:
```
### ChatML (sort of)
I don't really understand the point of having special tokens for `<|im_start|>` and `<|im_end|>`, because in practice they just act as BOS and EOS tokens (but, please correct me if I'm wrong).
So, instead of:
```text
{bos}<|im_start|>{role}
{text}
<|im_end|>{eos}
```
I just changed it to:
```text
{bos}{role}
{text}
{eos}
```
If you *really* want to use `<|im_start|>` and `<|im_end|>`, just update your `tokenizer_config.json` to use `<|im_start|>` instead of `<s>` and `<|im_end|>` instead of `</s>` when tokenizing. And if you still don't like what I've done to this chat-ml-ish format, feel free to cry into your pillow or fork the code and do a new fine-tune.
### Llama-2 chat
```
[INST] <<SYS>>
{system}
<</SYS>>
{instruction} [/INST]
```
### Contribute
If you're interested in new functionality/datasets, take a look at [bagel repo](https://github.com/jondurbin/bagel) and either make a PR or open an issue with details.
To help me with the OpenAI/compute costs:
- https://bmc.link/jondurbin
- ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11
- BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf
|
samot-samoe/gpt-neo-sft-1000-steps-lora
|
samot-samoe
| 2024-01-27T21:47:39Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:AlexWortega/wortegaLM",
"base_model:adapter:AlexWortega/wortegaLM",
"region:us"
] | null | 2024-01-27T21:44:01Z |
---
library_name: peft
base_model: AlexWortega/wortegaLM
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
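In the absence of author-provided instructions, a minimal sketch for loading a PEFT adapter like this one onto its base model (standard `peft` usage, not code from this repository):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("AlexWortega/wortegaLM")
tokenizer = AutoTokenizer.from_pretrained("AlexWortega/wortegaLM")

# Attach the LoRA adapter weights from this repository to the base model.
model = PeftModel.from_pretrained(base, "samot-samoe/gpt-neo-sft-1000-steps-lora")
model.eval()
```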
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
jondurbin/bagel-dpo-2.8b-v0.2
|
jondurbin
| 2024-01-27T21:47:31Z | 8 | 20 |
transformers
|
[
"transformers",
"pytorch",
"dataset:ai2_arc",
"dataset:unalignment/spicy-3.1",
"dataset:codeparrot/apps",
"dataset:facebook/belebele",
"dataset:boolq",
"dataset:jondurbin/cinematika-v0.1",
"dataset:drop",
"dataset:lmsys/lmsys-chat-1m",
"dataset:TIGER-Lab/MathInstruct",
"dataset:cais/mmlu",
"dataset:Muennighoff/natural-instructions",
"dataset:openbookqa",
"dataset:piqa",
"dataset:Vezora/Tested-22k-Python-Alpaca",
"dataset:cakiki/rosetta-code",
"dataset:Open-Orca/SlimOrca",
"dataset:spider",
"dataset:squad_v2",
"dataset:migtissera/Synthia-v1.3",
"dataset:datasets/winogrande",
"dataset:nvidia/HelpSteer",
"dataset:Intel/orca_dpo_pairs",
"dataset:unalignment/toxic-dpo-v0.1",
"dataset:jondurbin/truthy-dpo-v0.1",
"dataset:allenai/ultrafeedback_binarized_cleaned",
"dataset:Squish42/bluemoon-fandom-1-1-rp-cleaned",
"dataset:LDJnr/Capybara",
"dataset:JULIELab/EmoBank",
"dataset:kingbri/PIPPA-shareGPT",
"base_model:state-spaces/mamba-2.8b-slimpj",
"base_model:finetune:state-spaces/mamba-2.8b-slimpj",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-12-30T12:53:10Z |
---
license: apache-2.0
base_model: state-spaces/mamba-2.8b-slimpj
datasets:
- ai2_arc
- unalignment/spicy-3.1
- codeparrot/apps
- facebook/belebele
- boolq
- jondurbin/cinematika-v0.1
- drop
- lmsys/lmsys-chat-1m
- TIGER-Lab/MathInstruct
- cais/mmlu
- Muennighoff/natural-instructions
- openbookqa
- piqa
- Vezora/Tested-22k-Python-Alpaca
- cakiki/rosetta-code
- Open-Orca/SlimOrca
- spider
- squad_v2
- migtissera/Synthia-v1.3
- datasets/winogrande
- nvidia/HelpSteer
- Intel/orca_dpo_pairs
- unalignment/toxic-dpo-v0.1
- jondurbin/truthy-dpo-v0.1
- allenai/ultrafeedback_binarized_cleaned
- Squish42/bluemoon-fandom-1-1-rp-cleaned
- LDJnr/Capybara
- JULIELab/EmoBank
- kingbri/PIPPA-shareGPT
---
# A bagel, with everything

## Overview
An experimental fine-tune of [mamba-2.8b-slimpj](https://hf.co/state-spaces/mamba-2.8b-slimpj) using [bagel](https://github.com/jondurbin/bagel)
Default recommended system prompt:
```
You are a helpful, unbiased, uncensored assistant.
```
Supports several prompt formats, but you can also use `tokenizer.apply_chat_template`
This model did surprisingly well on MT-Bench, for a 2.8b that was only pre-trained on the slimpajama dataset!
```text
########## First turn ##########
score
model turn
bagel-dpo-2.8b-v0.2 1 5.10625
########## Second turn ##########
score
model turn
bagel-dpo-2.8b-v0.2 2 4.7375
########## Average ##########
score
model
bagel-dpo-2.8b-v0.2 4.921875
```
## Example chat script
```python
import torch
from transformers import AutoTokenizer
from mamba_ssm.models.mixer_seq_simple import MambaLMHeadModel
device = "cuda"
tokenizer = AutoTokenizer.from_pretrained("bagel-final-2.8b-v0.2")
model = MambaLMHeadModel.from_pretrained("bagel-final-2.8b-v0.2", device="cuda", dtype=torch.float32)
messages = [{"role": "system", "content": "You are a helpful, unbiased, uncensored assistant."}]
while True:
    user_message = input("[INST] ")
    messages.append({"role": "user", "content": user_message})
    input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True).to("cuda")
    out = model.generate(input_ids=input_ids, max_length=2000, temperature=0.9, top_p=0.7, eos_token_id=tokenizer.eos_token_id, repetition_penalty=1.07)
    decoded = tokenizer.batch_decode(out)[0].split("[/INST]")[-1].replace("</s>", "").strip()
    messages.append({"role": "assistant", "content": decoded})
    print("[/INST]", decoded)
```
## SFT data sources
*Yes, you will see benchmark names in the list, but this only uses the train splits, and a decontamination by cosine similarity is performed at the end as a sanity check*
- [ai2_arc](https://huggingface.co/datasets/ai2_arc)
- Abstraction and reasoning dataset, useful in measuring "intelligence" to a certain extent.
- [airoboros](https://huggingface.co/datasets/unalignment/spicy-3.1)
- Variety of categories of synthetic instructions generated by gpt-4.
- [apps](https://huggingface.co/datasets/codeparrot/apps)
- Python coding dataset with 10k problems.
- [belebele](https://huggingface.co/datasets/facebook/belebele)
- Multi-lingual reading comprehension dataset.
- [bluemoon](https://huggingface.co/datasets/Squish42/bluemoon-fandom-1-1-rp-cleaned)
- Roleplay data scraped from Bluemoon, then cleaned and formatted as ShareGPT.
- [boolq](https://huggingface.co/datasets/boolq)
- Corpus of yes/no questions (which can be surprisingly difficult for AI to answer apparently?)
- [capybara](https://huggingface.co/datasets/LDJnr/Capybara)
- Multi-turn dataset used to create the capybara models.
- [cinematika](https://huggingface.co/datasets/jondurbin/cinematika-v0.1) (instruction and plain text)
- RP-style data synthesized from movie scripts so the model isn't quite as boring as it otherwise would be.
- [drop](https://huggingface.co/datasets/drop)
- More reading comprehension.
- [emobank](https://github.com/JULIELab/EmoBank)
  - Emotion annotations using the Valence-Arousal-Dominance scheme.
- [gutenberg](https://www.gutenberg.org/) (plain text)
- Books/plain text, again to make the model less boring, only a handful of examples supported by [chapterize](https://github.com/JonathanReeve/chapterize)
- [lmsys_chat_1m](https://huggingface.co/datasets/lmsys/lmsys-chat-1m) (only gpt-4 items, also used for DPO)
- Chats collected by the lmsys chat arena, containing a wide variety of chats with various models.
- [mathinstruct](https://huggingface.co/datasets/TIGER-Lab/MathInstruct)
- Composite dataset with a variety of math-related tasks and problem/question formats.
- [mmlu](https://huggingface.co/datasets/cais/mmlu)
- Massive Multitask Language Understanding - a wide variety of questions about various subject matters.
- [natural_instructions](https://huggingface.co/datasets/Muennighoff/natural-instructions)
- Millions of instructions from 1600+ task categories (sampled down substantially, stratified by task type)
- [openbookqa](https://huggingface.co/datasets/openbookqa)
- Question answering dataset.
- [pippa](https://huggingface.co/datasets/kingbri/PIPPA-shareGPT)
- Deduped version of [PIPPA](https://huggingface.co/datasets/PygmalionAI/PIPPA) in ShareGPT format.
- [piqa](https://huggingface.co/datasets/piqa)
  - Physical interaction question answering.
- [python_alpaca](https://huggingface.co/datasets/Vezora/Tested-22k-Python-Alpaca)
- Python instruction response pairs, validated as functional.
- [rosetta_code](https://huggingface.co/datasets/cakiki/rosetta-code)
- Code problems and solutions in a variety of programming languages taken from rosettacode.org.
- [slimorca](https://huggingface.co/datasets/Open-Orca/SlimOrca)
- Collection of ~500k gpt-4 verified chats from OpenOrca.
- [spider](https://huggingface.co/datasets/spider)
- SQL-targeted dataset.
- [squad_v2](https://huggingface.co/datasets/squad_v2)
- Contextual question answering (RAG).
- [synthia](https://huggingface.co/datasets/migtissera/Synthia-v1.3)
- GPT-4 generated data using advanced prompting from Migel Tissera.
- [winogrande](https://huggingface.co/datasets/winogrande)
- Fill in the blank style prompts.
## DPO data sources
- [airoboros 3.1](https://huggingface.co/datasets/unalignment/spicy-3.1) vs [airoboros 2.2.1](https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.4.1)
  - The creative/writing tasks from airoboros-2.2.1 were re-generated using gpt4-0314 and a custom prompt to get longer, more creative, less cliché responses for airoboros 3.1, so we can use the shorter/boring version as the "rejected" value and the rerolled response as "chosen"
- [helpsteer](https://huggingface.co/datasets/nvidia/HelpSteer)
- Really neat dataset provided by the folks at NVidia with human annotation across a variety of metrics. Only items with the highest "correctness" value were used for DPO here, with the highest scoring output as "chosen" and random lower scoring value as "rejected"
- [orca_dpo_pairs](https://huggingface.co/datasets/Intel/orca_dpo_pairs)
- Another interesting dataset by Intel, which provides various DPO pairs generated from prompts included in the SlimOrca dataset.
- [toxic-dpo](https://huggingface.co/datasets/unalignment/toxic-dpo-v0.1)
- __*highly toxic and potentially illegal content!*__ De-censorship, for academic and lawful purposes only, of course. Generated by llama-2-70b via prompt engineering.
- [truthy](https://huggingface.co/datasets/jondurbin/truthy-dpo-v0.1)
- DPO pairs meant to increase truthfulness of the model, e.g. common misconceptions, differentiate between AI assistants and roleplayed human in terms of corporeal awareness/locality/etc.
- [ultrafeedback](https://huggingface.co/datasets/allenai/ultrafeedback_binarized_cleaned)
  - One of the bits of magic behind the Zephyr model. Only the items with a chosen score of 8 or higher were included; a filtering sketch follows this list.
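The score-based filtering described in the ultrafeedback item above could look roughly like this; the split and column names (`train_prefs`, `score_chosen`) are assumptions based on the binarized dataset's published schema, not code from the bagel repo:

```python
from datasets import load_dataset

# Assumed schema: the binarized UltraFeedback variants expose
# score_chosen / score_rejected columns on the preference split.
ds = load_dataset("allenai/ultrafeedback_binarized_cleaned", split="train_prefs")
high_quality = ds.filter(lambda ex: ex["score_chosen"] >= 8)
print(len(ds), "->", len(high_quality))
```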
Only the train splits were used (if a split was provided), and an additional pass of decontamination is performed using approximate nearest neighbor search (via faiss).
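A rough sketch of what such a decontamination pass can look like; the embedding dimensionality and similarity threshold here are illustrative assumptions, not the bagel repo's actual choices:

```python
import faiss
import numpy as np

def decontaminate(train_emb: np.ndarray, test_emb: np.ndarray, threshold: float = 0.95):
    """Drop training rows whose nearest test embedding exceeds a cosine threshold."""
    # L2-normalizing first makes inner product equal to cosine similarity.
    faiss.normalize_L2(train_emb)
    faiss.normalize_L2(test_emb)
    index = faiss.IndexFlatIP(test_emb.shape[1])
    index.add(test_emb)
    sims, _ = index.search(train_emb, 1)  # nearest test neighbor per train row
    return sims[:, 0] < threshold         # True = keep this training row

rng = np.random.default_rng(0)
train = rng.random((1000, 384), dtype=np.float32)  # stand-ins for real embeddings
test = rng.random((50, 384), dtype=np.float32)
mask = decontaminate(train, test)
print(f"kept {mask.sum()} of {len(mask)} training rows")
```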
## Prompt formatting
In sticking with the theme of the bagel, I didn't want to use a single prompt format, so I used 4 - vicuna, llama-2, alpaca, and chat-ml (sorta).
I also didn't want to randomly select a single prompt format for each item (hoping each instruction would generalize more when used in a variety of prompt formats), so each instruction is actually converted into every prompt format.
This means each epoch of our fine-tune is effectively 4 epochs. So, for the fine-tunes, I would recommend doing only 1 epoch (or 0.75 epochs). I am testing with a single epoch using a relatively low learning rate.
### Alpaca (sort of)
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{system prompt, if provided}
{instruction}
### Response:
```
The main difference here is that because of the dataset formatting and variety of data sources, it would have been much too tedious to add an `### Input:` block, so the inputs are just in the instruction section.
### Vicuna
```
{system prompt, if provided, randomly defaulting to "A chat between a user and an unbiased, uncensored assistant."}
USER: {instruction}
ASSISTANT:
```
### ChatML (sort of)
I don't really understand the point of having special tokens for `<|im_start|>` and `<|im_end|>`, because in practice they just act as BOS and EOS tokens (but, please correct me if I'm wrong).
So, instead of:
```text
{bos}<|im_start|>{role}
{text}
<|im_end|>{eos}
```
I just changed it to:
```text
{bos}{role}
{text}
{eos}
```
If you *really* want to use `<|im_start|>` and `<|im_end|>`, just update your `tokenizer_config.json` to use `<|im_start|>` instead of `<s>` and `<|im_end|>` instead of `</s>` when tokenizing. And if you still don't like what I've done to this chat-ml-ish format, feel free to cry into your pillow or fork the code and do a new fine-tune.
### Llama-2 chat
```
[INST] <<SYS>>
{system}
<</SYS>>
{instruction} [/INST]
```
### Contribute
If you're interested in new functionality/datasets, take a look at [bagel repo](https://github.com/jondurbin/bagel) and either make a PR or open an issue with details.
To help me with the OpenAI/compute costs:
- https://bmc.link/jondurbin
- ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11
- BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf
|
AlekseyKorshuk/ultrachat-phi-2-dpo-chatml
|
AlekseyKorshuk
| 2024-01-27T21:42:54Z | 10 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"phi",
"text-generation",
"axolotl",
"dpo",
"trl",
"generated_from_trainer",
"conversational",
"custom_code",
"base_model:AlekseyKorshuk/ultrachat-phi-2-sft-chatml",
"base_model:finetune:AlekseyKorshuk/ultrachat-phi-2-sft-chatml",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-27T09:40:03Z |
---
license: mit
base_model: AlekseyKorshuk/ultrachat-phi-2-sft-chatml
tags:
- axolotl
- dpo
- trl
- generated_from_trainer
model-index:
- name: ultrachat-phi-2-dpo-chatml
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: AlekseyKorshuk/ultrachat-phi-2-sft-chatml
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
trust_remote_code: true
hub_model_id: AlekseyKorshuk/ultrachat-phi-2-dpo-chatml
hub_strategy: every_save
load_in_8bit: false
load_in_4bit: false
strict: false
rl: dpo
datasets:
- path: argilla/ultrafeedback-binarized-preferences
split: train
type: chatml.argilla
dataset_prepared_path:
#val_set_size: 0.001
output_dir: ./output
sequence_len: 2048
#sample_packing: false # currently unsupported
pad_to_sequence_len:
lora_r:
lora_alpha:
lora_dropout:
lora_target_modules:
lora_target_linear:
lora_fan_in_fan_out:
wandb_project: ui-thesis
wandb_entity:
wandb_watch:
wandb_name: ultrachat-phi-2-dpo-chatml
wandb_log_model:
gradient_accumulation_steps: 4
micro_batch_size: 8
num_epochs: 3
optimizer: paged_adamw_8bit
adam_beta1: 0.9
adam_beta2: 0.95
max_grad_norm: 1.0
adam_epsilon: 0.00001
lr_scheduler: cosine
cosine_min_lr_ratio: 0.1
learning_rate: 5.0e-7
warmup_steps: 32
#warmup_ratio: 0.1
weight_decay: 0.01
dpo_beta: 0.01
train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: true
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
#evals_per_epoch: 5
#eval_table_size: 8 # Approximate number of predictions sent to wandb depending on batch size. Enabled above 0. Default is 0
#eval_table_max_new_tokens: 768 # Total number of tokens generated for predictions sent to wandb. Default is 128
chat_template: chatml
#saves_per_epoch: 1
save_steps: 500
save_total_limit: 1
seed: 42
debug:
deepspeed:
fsdp:
fsdp_config:
resize_token_embeddings_to_32x: true
```
</details><br>
# ultrachat-phi-2-dpo-chatml
This model is a fine-tuned version of [AlekseyKorshuk/ultrachat-phi-2-sft-chatml](https://huggingface.co/AlekseyKorshuk/ultrachat-phi-2-sft-chatml) on the argilla/ultrafeedback-binarized-preferences dataset (per the axolotl config above).
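No usage example is provided; a minimal inference sketch, assuming the ChatML template configured above is saved with the tokenizer (the prompt is illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AlekseyKorshuk/ultrachat-phi-2-dpo-chatml"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)

messages = [{"role": "user", "content": "Give me three tips for writing clear emails."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```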
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-05
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 32
- training_steps: 1492
### Training results
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
iamkprasad/SeetaXL
|
iamkprasad
| 2024-01-27T21:36:39Z | 4 | 2 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:Lykon/AAM_XL_AnimeMix",
"base_model:adapter:Lykon/AAM_XL_AnimeMix",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-01-27T21:36:36Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: Lykon/AAM_XL_AnimeMix
instance_prompt: a photo of s33ta person, anime
license: openrail++
---
# SDXL LoRA DreamBooth - iamkprasad/s33ta
<Gallery />
## Model description
These are iamkprasad/s33ta LoRA adaptation weights for Lykon/AAM_XL_AnimeMix.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of s33ta person, anime` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/iamkprasad/s33ta/tree/main) them in the Files & versions tab.
|
hanasim/wav2vec2-common_voice-hi-mms-demo
|
hanasim
| 2024-01-27T21:35:44Z | 13 | 1 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_16_0",
"mms",
"generated_from_trainer",
"hi",
"dataset:common_voice_16_0",
"base_model:facebook/mms-1b-all",
"base_model:finetune:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-27T10:24:59Z |
---
language:
- hi
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_16_0
- mms
- generated_from_trainer
datasets:
- common_voice_16_0
metrics:
- wer
model-index:
- name: wav2vec2-common_voice-hi-mms-demo
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: MOZILLA-FOUNDATION/COMMON_VOICE_16_0 - HI
type: common_voice_16_0
config: hi
split: test
args: 'Config: hi, Training split: train+validation, Eval split: test'
metrics:
- name: Wer
type: wer
value: 0.2516432655283731
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-common_voice-hi-mms-demo
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the MOZILLA-FOUNDATION/COMMON_VOICE_16_0 - HI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2672
- Wer: 0.2516
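No usage example is included; a minimal transcription sketch using the `transformers` pipeline (the audio file path is a placeholder):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="hanasim/wav2vec2-common_voice-hi-mms-demo",
)

# Placeholder path: any 16 kHz mono Hindi audio clip.
result = asr("sample_hi.wav")
print(result["text"])
```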
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 4.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 0.11 | 100 | 0.4487 | 0.3565 |
| No log | 0.23 | 200 | 0.3544 | 0.3317 |
| No log | 0.34 | 300 | 0.3693 | 0.3088 |
| No log | 0.45 | 400 | 0.3404 | 0.3040 |
| 1.5084 | 0.56 | 500 | 0.3346 | 0.2995 |
| 1.5084 | 0.68 | 600 | 0.3411 | 0.2936 |
| 1.5084 | 0.79 | 700 | 0.3175 | 0.2887 |
| 1.5084 | 0.9 | 800 | 0.3159 | 0.2898 |
| 1.5084 | 1.02 | 900 | 0.3139 | 0.3045 |
| 0.3485 | 1.13 | 1000 | 0.3067 | 0.2958 |
| 0.3485 | 1.24 | 1100 | 0.2969 | 0.2767 |
| 0.3485 | 1.35 | 1200 | 0.2916 | 0.2714 |
| 0.3485 | 1.47 | 1300 | 0.2893 | 0.2663 |
| 0.3485 | 1.58 | 1400 | 0.3183 | 0.2985 |
| 0.3152 | 1.69 | 1500 | 0.2961 | 0.2688 |
| 0.3152 | 1.81 | 1600 | 0.2848 | 0.2665 |
| 0.3152 | 1.92 | 1700 | 0.2844 | 0.2656 |
| 0.3152 | 2.03 | 1800 | 0.2855 | 0.2707 |
| 0.3152 | 2.14 | 1900 | 0.2887 | 0.2686 |
| 0.3058 | 2.26 | 2000 | 0.2858 | 0.2657 |
| 0.3058 | 2.37 | 2100 | 0.2814 | 0.2629 |
| 0.3058 | 2.48 | 2200 | 0.2809 | 0.2633 |
| 0.3058 | 2.6 | 2300 | 0.2779 | 0.2613 |
| 0.3058 | 2.71 | 2400 | 0.2745 | 0.2581 |
| 0.2861 | 2.82 | 2500 | 0.2769 | 0.2618 |
| 0.2861 | 2.93 | 2600 | 0.2742 | 0.2576 |
| 0.2861 | 3.05 | 2700 | 0.2730 | 0.2575 |
| 0.2861 | 3.16 | 2800 | 0.2727 | 0.2564 |
| 0.2861 | 3.27 | 2900 | 0.2726 | 0.2563 |
| 0.2839 | 3.39 | 3000 | 0.2713 | 0.2576 |
| 0.2839 | 3.5 | 3100 | 0.2690 | 0.2537 |
| 0.2839 | 3.61 | 3200 | 0.2706 | 0.2540 |
| 0.2839 | 3.72 | 3300 | 0.2687 | 0.2542 |
| 0.2839 | 3.84 | 3400 | 0.2671 | 0.2521 |
| 0.2706 | 3.95 | 3500 | 0.2673 | 0.2522 |
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
silvente93/tfm_rev6
|
silvente93
| 2024-01-27T21:30:10Z | 4 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ",
"base_model:adapter:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2024-01-27T18:50:16Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TheBloke/Mistral-7B-Instruct-v0.2-GPTQ
model-index:
- name: tfm_rev6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tfm_rev6
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.2-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GPTQ) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
smangrul/mistral-dpo
|
smangrul
| 2024-01-27T21:27:50Z | 2 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-01-27T13:59:43Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- dpo
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: mistral-dpo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-dpo
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5536
- Rewards/chosen: 0.2363
- Rewards/rejected: -0.3821
- Rewards/accuracies: 0.7095
- Rewards/margins: 0.6183
- Logps/rejected: -278.2731
- Logps/chosen: -292.0563
- Logits/rejected: -2.6610
- Logits/chosen: -2.7000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 4
- seed: 100
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1.0
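A rough, non-authoritative reconstruction of this setup with TRL's `DPOTrainer` (API as of TRL ~0.7, early 2024; newer versions use `DPOConfig`). Only the hyperparameters above come from this card; the dataset is unknown per the card, so the dataset line is a placeholder, and `beta` is TRL's default rather than a documented value:

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_id = "mistralai/Mistral-7B-v0.1"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token

# Placeholder: the card says "an unknown dataset".
train_ds = load_dataset("some/preference-dataset", split="train")

args = TrainingArguments(
    output_dir="mistral-dpo",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    learning_rate=5e-6,
    lr_scheduler_type="cosine",
    num_train_epochs=1.0,
    seed=100,
)

trainer = DPOTrainer(
    model,
    ref_model=None,          # TRL creates a frozen reference copy when None
    args=args,
    train_dataset=train_ds,  # expects prompt/chosen/rejected columns
    tokenizer=tokenizer,
    beta=0.1,                # TRL default, not stated on this card
)
trainer.train()
```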
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.5093 | 1.0 | 625 | 0.5536 | 0.2363 | -0.3821 | 0.7095 | 0.6183 | -278.2731 | -292.0563 | -2.6610 | -2.7000 |
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.38.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.15.0
- Tokenizers 0.15.1
|
graizelle/phee_1
|
graizelle
| 2024-01-27T21:26:42Z | 9 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail",
"region:us"
] |
text-to-image
| 2024-01-26T03:48:17Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: '1girl, phee, looking at viewer, long blonde hair, black dress, jewelery, masterpiece best quality, realistic, dramatic lighting'
parameters:
negative_prompt: >-
worse quality, lowres, bad anatomy, bad hands, text, error, missing fingers, extra
digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry
width: 512
height: 910
guidance_scale: 4
num_inference_steps: 35
example_title: phee
output:
url: images/phee_20240125213621_e000001_01.png
- text: '1girl, phee, looking at viewer, long blonde hair, green eyes, red dress, jewelery, masterpiece best quality, realistic, dramatic lighting'
parameters:
negative_prompt: worse quality
output:
url: images/phee_20240125222036_e000008_00.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: phee
license: openrail
library_name: diffusers
---
# phee_1
<Gallery />
## Trigger words
You should use `phee` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/graizelle/phee_1/tree/main) them in the Files & versions tab.
|
bartowski/internlm2-chat-7b-llama-exl2
|
bartowski
| 2024-01-27T21:15:49Z | 5 | 0 | null |
[
"text-generation",
"license:other",
"region:us"
] |
text-generation
| 2024-01-27T19:20:37Z |
---
pipeline_tag: text-generation
license: other
quantized_by: bartowski
---
Update Jan 27: this model has been redone with the proper token mappings and rope scaling. Performance seems improved; please comment if it isn't.
## Exllama v2 Quantizations of internlm2-chat-7b-llama
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.12">turboderp's ExLlamaV2 v0.0.12</a> for quantization.
# The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)
Each branch contains a quantization at a different bits per weight; the main branch contains only the measurement.json used for further conversions.
Original model: https://huggingface.co/bartowski/internlm2-chat-7b-llama/
| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description |
| ----- | ---- | ------- | ------ | ------ | ------ | ------------ |
| [8_0](https://huggingface.co/Bartowski/internlm2-chat-7b-llama-exl2/tree/8_0) | 8.0 | 8.0 | 8.4 GB | 9.8 GB | 11.8 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/Bartowski/internlm2-chat-7b-llama-exl2/tree/6_5) | 6.5 | 8.0 | 7.2 GB | 8.6 GB | 10.6 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
| [5_0](https://huggingface.co/Bartowski/internlm2-chat-7b-llama-exl2/tree/5_0) | 5.0 | 6.0 | 6.0 GB | 7.4 GB | 9.4 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. |
| [4_25](https://huggingface.co/Bartowski/internlm2-chat-7b-llama-exl2/tree/4_25) | 4.25 | 6.0 | 5.3 GB | 6.7 GB | 8.7 GB | GPTQ equivalent bits per weight, slightly higher quality. |
| [3_5](https://huggingface.co/Bartowski/internlm2-chat-7b-llama-exl2/tree/3_5) | 3.5 | 6.0 | 4.7 GB | 6.1 GB | 8.1 GB | Lower quality, only use if you have to. |
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/internlm2-chat-7b-llama-exl2 internlm2-chat-7b-llama-exl2-6_5
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download the `main` (only useful if you only care about measurement.json) branch to a folder called `internlm2-chat-7b-llama-exl2`:
```shell
mkdir internlm2-chat-7b-llama-exl2
huggingface-cli download bartowski/internlm2-chat-7b-llama-exl2 --local-dir internlm2-chat-7b-llama-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
Linux:
```shell
mkdir internlm2-chat-7b-llama-exl2-6_5
huggingface-cli download bartowski/internlm2-chat-7b-llama-exl2 --revision 6_5 --local-dir internlm2-chat-7b-llama-exl2-6_5 --local-dir-use-symlinks False
```
Windows (which apparently doesn't like _ in folders sometimes?):
```shell
mkdir internlm2-chat-7b-llama-exl2-6.5
huggingface-cli download bartowski/internlm2-chat-7b-llama-exl2 --revision 6_5 --local-dir internlm2-chat-7b-llama-exl2-6.5 --local-dir-use-symlinks False
```
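Equivalently, from Python (standard `huggingface_hub` usage, not specific to this repo):

```python
from huggingface_hub import snapshot_download

# Download the 6.5 bpw branch into a local folder.
snapshot_download(
    repo_id="bartowski/internlm2-chat-7b-llama-exl2",
    revision="6_5",
    local_dir="internlm2-chat-7b-llama-exl2-6_5",
    local_dir_use_symlinks=False,
)
```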
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
gardner/TinyLlama-1.1B-DPO-Function-Calling-3T
|
gardner
| 2024-01-27T21:13:00Z | 73 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"en",
"dataset:argilla/distilabel-intel-orca-dpo-pairs",
"base_model:gardner/TinyLlama-1.1B-SlimOrca-Function-Calling-3T",
"base_model:quantized:gardner/TinyLlama-1.1B-SlimOrca-Function-Calling-3T",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-01-27T13:51:17Z |
---
license: apache-2.0
base_model: gardner/TinyLlama-1.1B-SlimOrca-Function-Calling-3T
tags:
- generated_from_trainer
model-index:
- name: TinyLlama-1.1B-DPO-Function-Calling-3T
results: []
datasets:
- argilla/distilabel-intel-orca-dpo-pairs
language:
- en
---
## TinyLlama-1.1B-DPO-Function-Calling-3T
This model is a DPO fine-tune of [gardner/TinyLlama-1.1B-SlimOrca-Function-Calling-3T](https://huggingface.co/gardner/TinyLlama-1.1B-SlimOrca-Function-Calling-3T), which itself was trained on:
1. [Open-Orca/SlimOrca-Dedup](https://huggingface.co/datasets/Open-Orca/SlimOrca-Dedup)
1. [gardner/glaive-function-calling-v2-sharegpt](https://huggingface.co/datasets/gardner/glaive-function-calling-v2-sharegpt)
The model scores unusually high on GSM8K, which indicates the glaive function-calling dataset may introduce data contamination.
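No inference example is given; a minimal sketch, assuming the ChatML template declared in the axolotl config below is saved with the tokenizer (the prompt is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gardner/TinyLlama-1.1B-DPO-Function-Calling-3T"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful assistant with access to functions."},
    {"role": "user", "content": "What's the weather in Wellington?"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```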
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: gardner/TinyLlama-1.1B-SlimOrca-Function-Calling-3T
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
chat_template: chatml
is_llama_derived_model: true
load_in_8bit: true
load_in_4bit: false
strict: false
rl: dpo
datasets:
- path: argilla/distilabel-intel-orca-dpo-pairs
split: train
type: chatml.gardner
dataset_prepared_path: ./dsprepare/argilla/distilabel-intel-orca-dpo-pairs
val_set_size: 0.05
output_dir: ./TinyLlama-1.1B-DPO-Function-Calling-3T
sequence_len: 4096
sample_packing: false
pad_to_sequence_len: false
adapter: lora
lora_model_dir:
lora_r: 256
lora_alpha: 128
lora_dropout: 0.05
lora_target_linear: true
lora_modules_to_save:
lora_fan_in_fan_out:
lora_target_modules:
- gate_proj
- down_proj
- up_proj
- q_proj
- v_proj
- k_proj
- o_proj
wandb_project: tinyllama
wandb_entity: gardner
wandb_name: tinyllama-distilabel-intel-orca-dpo-pairs
gradient_accumulation_steps: 1
micro_batch_size: 2
num_epochs: 3
optimizer: paged_adamw_8bit
adam_beta2: 0.95
adam_epsilion: 0.00001
lr_scheduler: linear
learning_rate: 1.414e-5
train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: true
gradient_checkpointing: true
gradient_checkpoint_kwargs:
use_reentrant: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
eval_steps:
eval_table_size:
eval_table_max_new_tokens: 128
save_steps: 45
debug:
deepspeed:
weight_decay: 0.1
fsdp:
fsdp_config:
special_tokens:
save_safetensors: true
dataloader_num_workers: 16
dataloader_pin_memory: true
```
</details><br>
# TinyLlama-1.1B-DPO-Function-Calling-3T
This model is a fine-tuned version of [gardner/TinyLlama-1.1B-SlimOrca-Function-Calling-3T](https://huggingface.co/gardner/TinyLlama-1.1B-SlimOrca-Function-Calling-3T) on the argilla/distilabel-intel-orca-dpo-pairs dataset (per the axolotl config above).
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.414e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- training_steps: 19289
### Training results
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Akil15/finetune_llama_v_0.1
|
Akil15
| 2024-01-27T21:12:01Z | 0 | 0 |
peft,sfttrainer
|
[
"peft,sfttrainer",
"safetensors",
"text-generation",
"en",
"dataset:Akil15/evol_20k_filter",
"base_model:NousResearch/Llama-2-7b-chat-hf",
"base_model:finetune:NousResearch/Llama-2-7b-chat-hf",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2024-01-26T20:51:03Z |
---
library_name: peft,sfttrainer
base_model: NousResearch/Llama-2-7b-chat-hf
hub-id: Akil15/finetune_llama_v_0.1
license: apache-2.0
datasets:
- Akil15/evol_20k_filter
language:
- en
pipeline_tag: text-generation
---
# Model Card for Model ID:
This is a supervised PEFT (Parameter-Efficient Fine-Tuning) adaptation of the conversational Llama-2 base model into a code-focused chatbot, trained on an Alpaca-style dataset with the SFT Trainer.
# Training:
The model was trained for under one epoch with the SFT Trainer, for up to 200 steps, monitoring the step-wise training loss for convergence.
# Training Args:
```json
{
  "num_train_epochs": 1,
  "fp16": false,
  "bf16": false,
  "per_device_train_batch_size": 4,
  "per_device_eval_batch_size": 4,
  "gradient_accumulation_steps": 4,
  "gradient_checkpointing": true,
  "max_grad_norm": 0.3,
  "learning_rate": 2e-4,
  "weight_decay": 0.001,
  "optim": "paged_adamw_32bit",
  "lr_scheduler_type": "cosine",
  "max_steps": -1,
  "warmup_ratio": 0.03,
  "group_by_length": true,
  "save_steps": 0,
  "logging_steps": 25,
  "base_lrs": [0.0002, 0.0002],
  "last_epoch": 199,
  "verbose": false,
  "_step_count": 200,
  "_get_lr_called_within_step": false,
  "_last_lr": [0.00019143163189119916, 0.00019143163189119916],
  "lr_lambdas": [{}, {}]
}
```
## Usage:
The trained adapter weights are injected into the base model via the `PeftModel.from_pretrained()` method, as sketched below.
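A minimal sketch of that injection (standard `peft` usage; the prompt and generation settings are illustrative, not from this repo):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "NousResearch/Llama-2-7b-chat-hf"
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Inject the trained adapter weights from this repo into the base model.
model = PeftModel.from_pretrained(base, "Akil15/finetune_llama_v_0.1")

prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```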
## Git-Repos:
Refer to this GitHub repo for notebooks: https://github.com/mr-nobody15/codebot_llama/tree/main
### Framework versions:
- PEFT 0.7.1
|
llmixer/BigWeave-v12-90b-Q_3_L-GGUF
|
llmixer
| 2024-01-27T21:11:27Z | 1 | 0 | null |
[
"gguf",
"Q_3_L",
"GGUF",
"conversational",
"en",
"license:llama2",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-27T18:51:29Z |
---
license: llama2
language:
- en
pipeline_tag: conversational
tags:
- Q_3_L
- GGUF
---
GGUF Q_3_L quant for [BigWeave-v12-90b](https://huggingface.co/llmixer/BigWeave-v12-90b).
Calibration dataset: [llmixer/20k_random_data](https://huggingface.co/datasets/llmixer/20k_random_data)
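A minimal loading sketch with `llama-cpp-python`; the GGUF filename below is an assumption, so check this repo's Files tab for the actual name:

```python
from llama_cpp import Llama

# Filename is illustrative; use the actual .gguf file from the Files tab.
llm = Llama(
    model_path="bigweave-v12-90b.Q3_K_L.gguf",
    n_ctx=4096,
    n_gpu_layers=-1,  # offload all layers if VRAM allows
)
out = llm("USER: Hello, who are you?\nASSISTANT:", max_tokens=128)
print(out["choices"][0]["text"])
```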
|
graizelle/phee_3
|
graizelle
| 2024-01-27T21:06:54Z | 6 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail",
"region:us"
] |
text-to-image
| 2024-01-26T03:14:29Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
widget:
- text: 1girl, phee, emo, looking at viewer, long pink hair, black hoodie, shorts, masterpiece best quality, realistic, dramatic lighting
parameters:
negative_prompt: >-
lowres, bad anatomy, bad hands, text, error, missing fingers, extra
digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry
width: 512
height: 910
guidance_scale: 5
num_inference_steps: 30
example_title: phee
output:
url: images/phee_20240125224025_e000010_00.png
- text: 1girl, phee, looking at viewer, long blonde hair, black dress, jewelery, masterpiece best quality, realistic, dramatic lighting
parameters:
negative_prompt: worse quality
output:
url: images/phee_20240125223146_e000001_01.png
- text: 1girl, phee, scene girl, looking at viewer, long blonde hair, magenta tank top, skirt, masterpiece best quality, realistic, dramatic lighting
parameters:
negative_prompt: worse quality
output:
url: images/phee_20240125223425_e000004_00.png
instance_prompt: phee
license: openrail
---
# Phee 3
<Gallery />
## Trigger words
You should use `phee` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/graizelle/phee_3/tree/main) them in the Files & versions tab.
|
sevvalkapcak/newModel
|
sevvalkapcak
| 2024-01-27T21:05:09Z | 1 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-27T19:27:02Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: sevvalkapcak/newModel
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# sevvalkapcak/newModel
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0146
- Validation Loss: 0.7180
- Train Accuracy: 0.909
- Epoch: 37
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 5e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.0133 | 0.6573 | 0.901 | 0 |
| 0.0135 | 0.7314 | 0.9065 | 1 |
| 0.0104 | 0.6544 | 0.913 | 2 |
| 0.0148 | 0.7763 | 0.9035 | 3 |
| 0.0171 | 0.7110 | 0.9055 | 4 |
| 0.0121 | 0.7075 | 0.9015 | 5 |
| 0.0126 | 0.7461 | 0.8945 | 6 |
| 0.0212 | 0.7539 | 0.9035 | 7 |
| 0.0183 | 0.7842 | 0.9005 | 8 |
| 0.0192 | 0.7431 | 0.901 | 9 |
| 0.0224 | 0.6014 | 0.9065 | 10 |
| 0.0168 | 0.6000 | 0.914 | 11 |
| 0.0133 | 0.6241 | 0.9125 | 12 |
| 0.0097 | 0.6747 | 0.9075 | 13 |
| 0.0122 | 0.7352 | 0.908 | 14 |
| 0.0123 | 0.8061 | 0.905 | 15 |
| 0.0139 | 0.7254 | 0.8985 | 16 |
| 0.0120 | 0.6856 | 0.903 | 17 |
| 0.0175 | 0.6727 | 0.905 | 18 |
| 0.0155 | 0.6912 | 0.9055 | 19 |
| 0.0192 | 0.7535 | 0.903 | 20 |
| 0.0206 | 0.7428 | 0.8995 | 21 |
| 0.0108 | 0.7883 | 0.8965 | 22 |
| 0.0159 | 0.7443 | 0.8885 | 23 |
| 0.0238 | 0.7381 | 0.8935 | 24 |
| 0.0167 | 0.7888 | 0.901 | 25 |
| 0.0207 | 0.7062 | 0.899 | 26 |
| 0.0148 | 0.7670 | 0.9065 | 27 |
| 0.0177 | 0.6694 | 0.8925 | 28 |
| 0.0157 | 0.7312 | 0.9045 | 29 |
| 0.0145 | 0.6551 | 0.905 | 30 |
| 0.0188 | 0.7582 | 0.906 | 31 |
| 0.0136 | 0.7531 | 0.9085 | 32 |
| 0.0119 | 0.7965 | 0.8905 | 33 |
| 0.0069 | 0.8430 | 0.901 | 34 |
| 0.0100 | 0.7795 | 0.8975 | 35 |
| 0.0100 | 0.9567 | 0.889 | 36 |
| 0.0146 | 0.7180 | 0.909 | 37 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.1
|
ShieldX/manovyadh-1.1B-v1-chat
|
ShieldX
| 2024-01-27T20:51:53Z | 112 | 1 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"medical",
"trl",
"trainer",
"conversational",
"en",
"dataset:ShieldX/manovyadh-3.5k",
"arxiv:1910.09700",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:finetune:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-26T08:17:33Z |
---
library_name: transformers
tags:
- medical
- trl
- trainer
license: apache-2.0
thumbnail: https://huggingface.co/ShieldX/manovyadh-1.1B-v1-chat/blob/main/manovyadh.png
datasets:
- ShieldX/manovyadh-3.5k
language:
- en
metrics:
- accuracy
pipeline_tag: text-generation
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
widget:
- text: >
###SYSTEM: You are an AI assistant that helps people cope with stress and improve their mental health. User will tell you about their feelings and challenges. Your task is to listen empathetically and offer helpful suggestions. While responding, think about the user’s needs and goals and show compassion and support
###USER: I don't know how to tell someone how I feel about them. How can I get better at expressing how I feel??
###ASSISTANT:
model-index:
- name: manovyadh-1.1B-v1-chat
results:
- task:
type: text-generation
dataset:
name: ai2_arc
type: arc
metrics:
- name: pass@1
type: pass@1
value: 35.92
source:
name: Open LLM Leaderboard
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- task:
type: text-generation
dataset:
name: hellaswag
type: hellaswag
metrics:
- name: pass@1
type: pass@1
value: 60.03
source:
name: Open LLM Leaderboard
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- task:
type: text-generation
dataset:
name: truthful_qa
type: truthful_qa
metrics:
- name: pass@1
type: pass@1
value: 39.17
source:
name: Open LLM Leaderboard
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- task:
type: text-generation
dataset:
name: winogrande
type: winogrande
metrics:
- name: pass@1
type: pass@1
value: 61.09
source:
name: Open LLM Leaderboard
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
---
# Uploaded model
- **Developed by:** ShieldX
- **License:** apache-2.0
- **Finetuned from model :** TinyLlama/TinyLlama-1.1B-Chat-v1.0
<style>
img{
width: 40vw;
height: auto;
margin: 0 auto;
display: flex;
align-items: center;
justify-content: center;
}
</style>
# ShieldX/manovyadh-1.1B-v1
Introducing ManoVyadh, a finetuned version of TinyLlama 1.1B Chat on a mental health counselling dataset.
<img class="custom-image" src="manovyadh.png" alt="ManoVyadh">
# Model Details
## Model Description
ManoVyadh is an LLM for mental health counselling.
# Uses
## Direct Use
- base model for further finetuning
- for fun
## Downstream Use
- can be deployed with api
- used to create webapp or app to show demo
## Out-of-Scope Use
- cannot be used for production purpose
- not to be applied in real life health purpose
- cannot be used to generate text for research or academic purposes
# Usage
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM, AutoConfig
tokenizer = AutoTokenizer.from_pretrained("ShieldX/manovyadh-1.1B-v1-chat")
model = AutoModelForCausalLM.from_pretrained("ShieldX/manovyadh-1.1B-v1-chat").to("cuda")
config = AutoConfig.from_pretrained("ShieldX/manovyadh-1.1B-v1-chat")
def format_prompt(q):
    return f"""###SYSTEM: You are an AI assistant that helps people cope with stress and improve their mental health. User will tell you about their feelings and challenges. Your task is to listen empathetically and offer helpful suggestions. While responding, think about the user’s needs and goals and show compassion and support
###USER: {q}
###ASSISTANT:"""
prompt = format_prompt("I've never been able to talk with my parents. My parents are in their sixties while I am a teenager. I love both of them but not their personalities. I feel that they do not take me seriously whenever I talk about a serious event in my life. If my dad doesn’t believe me, then my mom goes along with my dad and acts like she doesn’t believe me either. I’m a pansexual, but I can’t trust my own parents. I've fought depression and won; however, stress and anxiety are killing me. I feel that my friends don't listen to me. I know they have their own problems, which I do my best to help with. But they don't always try to help me with mine, when I really need them. I feel as if my childhood has been taken from me. I feel as if I have no one whom I can trust.")
import torch
from transformers import GenerationConfig, TextStreamer
from time import perf_counter
# Check for GPU availability
if torch.cuda.is_available():
    device = "cuda"
else:
    device = "cpu"
# Move model and inputs to the GPU (if available)
model.to(device)
inputs = tokenizer(prompt, return_tensors="pt").to(device)
streamer = TextStreamer(tokenizer)
generation_config = GenerationConfig(
penalty_alpha=0.6,
do_sample=True,
top_k=5,
temperature=0.5,
repetition_penalty=1.2,
max_new_tokens=256,
streamer=streamer,
pad_token_id=tokenizer.eos_token_id
)
start_time = perf_counter()
outputs = model.generate(**inputs, generation_config=generation_config)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
output_time = perf_counter() - start_time
print(f"Time taken for inference: {round(output_time, 2)} seconds")
```
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
# Training Details
# Model Examination
We will be further finetuning this model on large dataset to see how it performs
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** 1 X Tesla T4
- **Hours used:** 0.48
- **Cloud Provider:** Google Colab
- **Compute Region:** India
# Technical Specifications
## Model Architecture and Objective
Finetuned on Tiny-Llama 1.1B Chat model
### Hardware
1 X Tesla T4
# training
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on [ShieldX/manovyadh-3.5k](https://huggingface.co/datasets/ShieldX/manovyadh-3.5k) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8587
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 400
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5894 | 0.01 | 5 | 2.5428 |
| 2.5283 | 0.02 | 10 | 2.5240 |
| 2.5013 | 0.03 | 15 | 2.5033 |
| 2.378 | 0.05 | 20 | 2.4770 |
| 2.3735 | 0.06 | 25 | 2.4544 |
| 2.3894 | 0.07 | 30 | 2.4335 |
| 2.403 | 0.08 | 35 | 2.4098 |
| 2.3719 | 0.09 | 40 | 2.3846 |
| 2.3691 | 0.1 | 45 | 2.3649 |
| 2.3088 | 0.12 | 50 | 2.3405 |
| 2.3384 | 0.13 | 55 | 2.3182 |
| 2.2577 | 0.14 | 60 | 2.2926 |
| 2.245 | 0.15 | 65 | 2.2702 |
| 2.1389 | 0.16 | 70 | 2.2457 |
| 2.1482 | 0.17 | 75 | 2.2176 |
| 2.1567 | 0.18 | 80 | 2.1887 |
| 2.1533 | 0.2 | 85 | 2.1616 |
| 2.0629 | 0.21 | 90 | 2.1318 |
| 2.1068 | 0.22 | 95 | 2.0995 |
| 2.0196 | 0.23 | 100 | 2.0740 |
| 2.062 | 0.24 | 105 | 2.0461 |
| 1.9436 | 0.25 | 110 | 2.0203 |
| 1.9348 | 0.26 | 115 | 1.9975 |
| 1.8803 | 0.28 | 120 | 1.9747 |
| 1.9108 | 0.29 | 125 | 1.9607 |
| 1.7826 | 0.3 | 130 | 1.9506 |
| 1.906 | 0.31 | 135 | 1.9374 |
| 1.8745 | 0.32 | 140 | 1.9300 |
| 1.8634 | 0.33 | 145 | 1.9232 |
| 1.8561 | 0.35 | 150 | 1.9183 |
| 1.8371 | 0.36 | 155 | 1.9147 |
| 1.8006 | 0.37 | 160 | 1.9106 |
| 1.8941 | 0.38 | 165 | 1.9069 |
| 1.8456 | 0.39 | 170 | 1.9048 |
| 1.8525 | 0.4 | 175 | 1.9014 |
| 1.8475 | 0.41 | 180 | 1.8998 |
| 1.8255 | 0.43 | 185 | 1.8962 |
| 1.9358 | 0.44 | 190 | 1.8948 |
| 1.758 | 0.45 | 195 | 1.8935 |
| 1.7859 | 0.46 | 200 | 1.8910 |
| 1.8412 | 0.47 | 205 | 1.8893 |
| 1.835 | 0.48 | 210 | 1.8875 |
| 1.8739 | 0.49 | 215 | 1.8860 |
| 1.9397 | 0.51 | 220 | 1.8843 |
| 1.8187 | 0.52 | 225 | 1.8816 |
| 1.8174 | 0.53 | 230 | 1.8807 |
| 1.8 | 0.54 | 235 | 1.8794 |
| 1.7736 | 0.55 | 240 | 1.8772 |
| 1.7429 | 0.56 | 245 | 1.8778 |
| 1.8024 | 0.58 | 250 | 1.8742 |
| 1.8431 | 0.59 | 255 | 1.8731 |
| 1.7692 | 0.6 | 260 | 1.8706 |
| 1.8084 | 0.61 | 265 | 1.8698 |
| 1.7602 | 0.62 | 270 | 1.8705 |
| 1.7751 | 0.63 | 275 | 1.8681 |
| 1.7403 | 0.64 | 280 | 1.8672 |
| 1.8078 | 0.66 | 285 | 1.8648 |
| 1.8464 | 0.67 | 290 | 1.8648 |
| 1.7853 | 0.68 | 295 | 1.8651 |
| 1.8546 | 0.69 | 300 | 1.8643 |
| 1.8319 | 0.7 | 305 | 1.8633 |
| 1.7908 | 0.71 | 310 | 1.8614 |
| 1.738 | 0.72 | 315 | 1.8625 |
| 1.8868 | 0.74 | 320 | 1.8630 |
| 1.7744 | 0.75 | 325 | 1.8621 |
| 1.8292 | 0.76 | 330 | 1.8609 |
| 1.7905 | 0.77 | 335 | 1.8623 |
| 1.7652 | 0.78 | 340 | 1.8610 |
| 1.8371 | 0.79 | 345 | 1.8611 |
| 1.7024 | 0.81 | 350 | 1.8593 |
| 1.7328 | 0.82 | 355 | 1.8593 |
| 1.7376 | 0.83 | 360 | 1.8606 |
| 1.747 | 0.84 | 365 | 1.8601 |
| 1.7777 | 0.85 | 370 | 1.8602 |
| 1.8701 | 0.86 | 375 | 1.8598 |
| 1.7165 | 0.87 | 380 | 1.8579 |
| 1.779 | 0.89 | 385 | 1.8588 |
| 1.8536 | 0.9 | 390 | 1.8583 |
| 1.7263 | 0.91 | 395 | 1.8582 |
| 1.7983 | 0.92 | 400 | 1.8587 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.37.1
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
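Since the run used PEFT 0.7.1, the published weights are presumably a LoRA adapter on top of the base model. A minimal loading sketch, assuming the repo above contains the PEFT adapter files:
```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the base model and applies the adapter in one step
model = AutoPeftModelForCausalLM.from_pretrained(
    "ShieldX/manovyadh-1.1B-v1-chat",
    torch_dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained("ShieldX/manovyadh-1.1B-v1-chat")
```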
# Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```
@misc{ShieldX/manovyadh-1.1B-v1-chat,
url={https://huggingface.co/ShieldX/manovyadh-1.1B-v1-chat},
title={ManoVyadh},
author={Rohan Shaw},
year={2024}, month={Jan}
}
```
# Model Card Authors
ShieldX (a.k.a. Rohan Shaw)
# Model Card Contact
Email: rohanshaw.dev@gmail.com
|
MaziyarPanahi/CollectiveCognition-v1.1-Mistral-7B-Mistral-7B-Instruct-v0.1-GGUF
|
MaziyarPanahi
| 2024-01-27T20:50:43Z | 54 | 0 |
transformers
|
[
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"safetensors",
"text-generation",
"Safetensors",
"text-generation-inference",
"merge",
"7b",
"mistralai/Mistral-7B-Instruct-v0.1",
"teknium/CollectiveCognition-v1.1-Mistral-7B",
"pytorch",
"mistral-7b",
"instruct",
"finetune",
"gpt4",
"synthetic data",
"distillation",
"sharegpt",
"en",
"dataset:CollectiveCognition/chats-data-2023-09-27",
"base_model:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"base_model:MaziyarPanahi/CollectiveCognition-v1.1-Mistral-7B-Mistral-7B-Instruct-v0.1",
"base_model:quantized:MaziyarPanahi/CollectiveCognition-v1.1-Mistral-7B-Mistral-7B-Instruct-v0.1",
"conversational"
] |
text-generation
| 2024-01-27T20:41:55Z |
---
license: apache-2.0
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- Safetensors
- text-generation-inference
- merge
- 7b
- mistralai/Mistral-7B-Instruct-v0.1
- teknium/CollectiveCognition-v1.1-Mistral-7B
- pytorch
- mistral-7b
- instruct
- finetune
- gpt4
- synthetic data
- distillation
- sharegpt
- en
- dataset:CollectiveCognition/chats-data-2023-09-27
- base_model:mistralai/Mistral-7B-v0.1
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- region:us
model_name: CollectiveCognition-v1.1-Mistral-7B-Mistral-7B-Instruct-v0.1-GGUF
base_model: MaziyarPanahi/CollectiveCognition-v1.1-Mistral-7B-Mistral-7B-Instruct-v0.1
inference: false
model_creator: MaziyarPanahi
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/CollectiveCognition-v1.1-Mistral-7B-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/CollectiveCognition-v1.1-Mistral-7B-Mistral-7B-Instruct-v0.1-GGUF)
- Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi)
- Original model: [MaziyarPanahi/CollectiveCognition-v1.1-Mistral-7B-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/CollectiveCognition-v1.1-Mistral-7B-Mistral-7B-Instruct-v0.1)
## Description
[MaziyarPanahi/CollectiveCognition-v1.1-Mistral-7B-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/CollectiveCognition-v1.1-Mistral-7B-Mistral-7B-Instruct-v0.1-GGUF) contains GGUF format model files for [MaziyarPanahi/CollectiveCognition-v1.1-Mistral-7B-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/CollectiveCognition-v1.1-Mistral-7B-Mistral-7B-Instruct-v0.1).
## How to use
Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models:
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
### Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
</details>
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: [MaziyarPanahi/CollectiveCognition-v1.1-Mistral-7B-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/CollectiveCognition-v1.1-Mistral-7B-Mistral-7B-Instruct-v0.1-GGUF) and below it, a specific filename to download, such as: CollectiveCognition-v1.1-Mistral-7B-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download MaziyarPanahi/CollectiveCognition-v1.1-Mistral-7B-Mistral-7B-Instruct-v0.1-GGUF CollectiveCognition-v1.1-Mistral-7B-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download MaziyarPanahi/CollectiveCognition-v1.1-Mistral-7B-Mistral-7B-Instruct-v0.1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/CollectiveCognition-v1.1-Mistral-7B-Mistral-7B-Instruct-v0.1-GGUF CollectiveCognition-v1.1-Mistral-7B-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m CollectiveCognition-v1.1-Mistral-7B-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./CollectiveCognition-v1.1-Mistral-7B-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
    "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant",  # Prompt in the model's ChatML format
    max_tokens=512,  # Generate up to 512 tokens
    stop=["</s>"],   # Example stop token - not necessarily correct for this specific model! Please check before using.
    echo=True        # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./CollectiveCognition-v1.1-Mistral-7B-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", chat_format="chatml")  # ChatML matches the prompt template above; set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
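As a minimal sketch of the first option, assuming the `langchain-community` package and a locally downloaded GGUF file:
```python
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./CollectiveCognition-v1.1-Mistral-7B-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf",
    n_ctx=32768,
    n_gpu_layers=35,   # set to 0 if you have no GPU acceleration
    temperature=0.7,
)
print(llm.invoke("Explain GGUF quantisation in one paragraph."))
```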
|
flutter-painter/nllb-fra-fuf-v2
|
flutter-painter
| 2024-01-27T20:45:57Z | 125 | 1 |
transformers
|
[
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"translation",
"ff",
"fr",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2024-01-27T09:43:24Z |
---
license: cc-by-nc-4.0
language:
- ff
- fr
metrics:
- bleu
tags:
- translation
---
This is an NLLB-200-600M model fine-tuned for translation between French and Fula, following the example shared by slone.
- Fula here covers only the Guinea / Futa Djalon variety, not all varieties of Fula.
- Dataset: Bible + Quran + dictionaries and other small sources; see https://github.com/refacto-org/MaLiNaLi
- BLEU fra-fuf: 16
- BLEU fuf-fra: 15
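A minimal translation sketch with `transformers`; note that the `fuf_Latn` language code is an assumption following the slone recipe of adding a new NLLB language token, so check the tokenizer's vocabulary before relying on it:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "flutter-painter/nllb-fra-fuf-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id, src_lang="fra_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("Bonjour, comment allez-vous ?", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("fuf_Latn"),  # assumed target code
    max_new_tokens=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```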
|
Ahmed235/roberta-base-topic_classification_simple2
|
Ahmed235
| 2024-01-27T20:38:18Z | 9 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-25T22:09:16Z |
---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: roberta-base-topic_classification_simple2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-topic_classification_simple2
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1250
- Accuracy: 0.866996699669967
- F1: 0.8657113367537151
## Model description
More information needed
## Intended uses & limitations
More information needed
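A minimal usage sketch with the `transformers` pipeline (label names depend on the training data, which is not documented here; the input sentence is a hypothetical example):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Ahmed235/roberta-base-topic_classification_simple2",
)
print(classifier("The central bank raised interest rates again this quarter."))
```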
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 1.0   | 313  | 0.5920          | 0.8158415841584158 | 0.8063426391052376 |
| 0.7507        | 2.0   | 626  | 0.5183          | 0.8419141914191419 | 0.8450438669495921 |
| 0.7507        | 3.0   | 939  | 0.5089          | 0.8514851485148515 | 0.8522994355907825 |
| 0.3199        | 4.0   | 1252 | 0.6030          | 0.8508250825082508 | 0.8484331857141633 |
| 0.1504        | 5.0   | 1565 | 0.6894          | 0.8617161716171617 | 0.8599694556754336 |
| 0.1504        | 6.0   | 1878 | 0.8381          | 0.8448844884488449 | 0.8461993387843019 |
| 0.0822        | 7.0   | 2191 | 0.8515          | 0.8554455445544554 | 0.8542784950089077 |
| 0.0551        | 8.0   | 2504 | 0.9319          | 0.8531353135313532 | 0.853451943641699  |
| 0.0551        | 9.0   | 2817 | 0.9478          | 0.8577557755775578 | 0.8565849659994866 |
| 0.0377        | 10.0  | 3130 | 0.9998          | 0.8554455445544554 | 0.8550659197552203 |
| 0.0377        | 11.0  | 3443 | 1.0025          | 0.8554455445544554 | 0.8550137537621838 |
| 0.0279        | 12.0  | 3756 | 1.0728          | 0.8574257425742574 | 0.8566278925949554 |
| 0.0132        | 13.0  | 4069 | 1.0873          | 0.8623762376237624 | 0.8610125122049608 |
| 0.0132        | 14.0  | 4382 | 1.0989          | 0.8653465346534653 | 0.863969705278768  |
| 0.0124        | 15.0  | 4695 | 1.1379          | 0.8643564356435643 | 0.8630599594036119 |
| 0.0095        | 16.0  | 5008 | 1.1207          | 0.8653465346534653 | 0.8639194427774014 |
| 0.0095        | 17.0  | 5321 | 1.1053          | 0.866006600660066  | 0.8652013668499585 |
| 0.0074        | 18.0  | 5634 | 1.1296          | 0.863036303630363  | 0.8615189712315606 |
| 0.0074        | 19.0  | 5947 | 1.1099          | 0.8689768976897689 | 0.867663744149239  |
| 0.0046        | 20.0  | 6260 | 1.1250          | 0.866996699669967  | 0.8657113367537151 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
KatyTestHistorical/CrystalMaid-0.1-13B-GGUF
|
KatyTestHistorical
| 2024-01-27T20:35:49Z | 23 | 12 |
transformers
|
[
"transformers",
"gguf",
"furry",
"roleplay",
"text-generation-inference",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-01-18T21:28:26Z |
---
license: apache-2.0
language:
- en
library_name: transformers
tags:
- furry
- roleplay
- text-generation-inference
---

CrystalMaid is here, with increased verbosity and creative thought.
- Recommended settings:
- SillyTavern Default Preset.
- Temperature: 0.7
- Min-P: 0.3
- Amount to Gen: 256
- Top P: 1
- Repetition penalty: 1.10
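Outside SillyTavern, these settings map roughly onto llama-cpp-python sampling parameters, as in this sketch (`min_p` requires a reasonably recent llama-cpp-python build; the filename and prompt format are hypothetical, so adjust both to your download and the model's expected template):
```python
from llama_cpp import Llama

llm = Llama(model_path="./crystalmaid-0.1-13b.Q4_K_M.gguf")  # hypothetical local filename
output = llm(
    "### Instruction:\nIntroduce yourself.\n### Response:\n",  # prompt format assumed
    temperature=0.7,
    min_p=0.3,
    top_p=1.0,
    repeat_penalty=1.10,
    max_tokens=256,   # "Amount to Gen"
)
print(output["choices"][0]["text"])
```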
Models used:
- BlueNipples/TimeCrystal-l2-13B
- cgato/Thespis-13b-DPO-v0.7
- KoboldAI/LLaMA2-13B-Tiefighter
- NeverSleep/Noromaid-13B-0.4-DPO
- Doctor-Shotgun/cat-v1.0-13b

Feedback is always appreciated!

Thanks to KoboldAI for the use of their MergeBox, and to Caitlyn G. for their support and feedback.
(Please note that I am unable to provide an FP16 version or any other quantized versions as I do not have access to the original files, nor do I have the compute to remake them.)
|
badokorach/roberta-base-squad2-agric-060124
|
badokorach
| 2024-01-27T20:32:01Z | 1 | 0 |
transformers
|
[
"transformers",
"tf",
"xlm-roberta",
"question-answering",
"generated_from_keras_callback",
"base_model:badokorach/afriqa_afroxlmr_squad_v2_060124",
"base_model:finetune:badokorach/afriqa_afroxlmr_squad_v2_060124",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-01-06T01:47:53Z |
---
license: mit
base_model: badokorach/afriqa_afroxlmr_squad_v2_060124
tags:
- generated_from_keras_callback
model-index:
- name: badokorach/roberta-base-squad2-agric-060124
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# badokorach/roberta-base-squad2-agric-060124
This model is a fine-tuned version of [badokorach/afriqa_afroxlmr_squad_v2_060124](https://huggingface.co/badokorach/afriqa_afroxlmr_squad_v2_060124) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.0741
- Validation Loss: 0.0
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
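A minimal question-answering sketch with the `transformers` pipeline (the repo ships TensorFlow weights, so TensorFlow must be installed; the question and context are hypothetical examples):
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="badokorach/roberta-base-squad2-agric-060124",
    framework="tf",  # load the TensorFlow checkpoint
)
result = qa(
    question="Which crop is discussed?",
    context="Maize is the most widely grown cereal in Uganda.",
)
print(result["answer"], result["score"])
```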
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 4e-05, 'decay_steps': 555, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.02}
- training_precision: mixed_float16
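The optimizer dict above matches what `transformers.create_optimizer` produces for TensorFlow; a sketch reconstructing it under that assumption:
```python
from transformers import create_optimizer

# AdamWeightDecay with a PolynomialDecay schedule from 4e-5 to 0 over 555 steps
optimizer, lr_schedule = create_optimizer(
    init_lr=4e-5,
    num_train_steps=555,   # the PolynomialDecay decay_steps above
    num_warmup_steps=0,
    weight_decay_rate=0.02,
)
```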
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.0532 | 0.0 | 0 |
| 1.8311 | 0.0 | 1 |
| 1.6644 | 0.0 | 2 |
| 1.3868 | 0.0 | 3 |
| 1.0741 | 0.0 | 4 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.1
|