| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-09-07 12:31:56) | downloads (int64, 0 – 223M) | likes (int64, 0 – 11.7k) | library_name (string, 544 classes) | tags (list, 1 – 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-09-07 12:31:42) | card (string, 11 – 1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
not-lain/finetuned_tinyllama_on_ads
|
not-lain
| 2024-06-10T20:03:09Z | 76 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-06-10T19:38:10Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
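Until the authors complete this section, here is a minimal sketch that assumes the checkpoint loads through the standard `transformers` text-generation API (the tags indicate a 4-bit bitsandbytes quantization, so a CUDA device and a `bitsandbytes` install are expected; the prompt is illustrative):

```python
# Hedged sketch: load the 4-bit checkpoint and generate.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="not-lain/finetuned_tinyllama_on_ads",
    device_map="auto",  # places the quantized weights on the available GPU
)
print(generator("Write a short ad for a coffee shop:", max_new_tokens=64)[0]["generated_text"])
```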
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
habedi/deberta-v3-small-kaggle-mlm
|
habedi
| 2024-06-10T20:02:08Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"fill-mask",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-small",
"base_model:finetune:microsoft/deberta-v3-small",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-06-10T16:18:56Z |
---
license: mit
base_model: microsoft/deberta-v3-small
tags:
- generated_from_trainer
model-index:
- name: deberta-v3-small-kaggle-mlm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-small-kaggle-mlm
This model is a fine-tuned version of [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6169
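Pending a fuller description, a minimal fill-mask sketch (assuming the standard `transformers` pipeline API; the example sentence is illustrative):

```python
# Hedged sketch: query the MLM head with a masked sentence.
from transformers import pipeline

fill = pipeline("fill-mask", model="habedi/deberta-v3-small-kaggle-mlm")
for pred in fill("Kaggle is a popular platform for data science [MASK].")[:3]:
    print(pred["token_str"], round(pred["score"], 3))
```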
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 3.0931 | 1.0 | 6848 | 2.8467 |
| 2.6186 | 2.0 | 13696 | 2.4089 |
| 2.3498 | 3.0 | 20544 | 2.2224 |
| 2.2399 | 4.0 | 27392 | 2.1105 |
| 2.1226 | 5.0 | 34240 | 2.0204 |
| 2.0768 | 6.0 | 41088 | 1.9402 |
| 2.0251 | 7.0 | 47936 | 1.8767 |
| 1.9587 | 8.0 | 54784 | 1.8527 |
| 1.9209 | 9.0 | 61632 | 1.8108 |
| 1.8829 | 10.0 | 68480 | 1.8113 |
| 1.8454 | 11.0 | 75328 | 1.7698 |
| 1.8077 | 12.0 | 82176 | 1.7504 |
| 1.7991 | 13.0 | 89024 | 1.7390 |
| 1.7896 | 14.0 | 95872 | 1.7138 |
| 1.7608 | 15.0 | 102720 | 1.6847 |
| 1.7636 | 16.0 | 109568 | 1.6863 |
| 1.7416 | 17.0 | 116416 | 1.6816 |
| 1.7363 | 18.0 | 123264 | 1.6651 |
| 1.7013 | 19.0 | 130112 | 1.6465 |
| 1.6828 | 20.0 | 136960 | 1.6528 |
| 1.6889 | 21.0 | 143808 | 1.6406 |
| 1.6882 | 22.0 | 150656 | 1.6358 |
| 1.6742 | 23.0 | 157504 | 1.6338 |
| 1.6657 | 24.0 | 164352 | 1.6062 |
| 1.6685 | 25.0 | 171200 | 1.6086 |
| 1.6701 | 26.0 | 178048 | 1.6256 |
| 1.6755 | 27.0 | 184896 | 1.6186 |
| 1.6505 | 28.0 | 191744 | 1.6013 |
| 1.6573 | 29.0 | 198592 | 1.6108 |
| 1.6497 | 30.0 | 205440 | 1.6009 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.1+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
jren123/sac-walker2d-v4
|
jren123
| 2024-06-10T19:47:16Z | 16 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"Walker2d-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-06-10T19:11:34Z |
---
library_name: stable-baselines3
tags:
- Walker2d-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: SAC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Walker2d-v4
type: Walker2d-v4
metrics:
- type: mean_reward
value: 4201.90 +/- 62.23
name: mean_reward
verified: false
---
# **SAC** Agent playing **Walker2d-v4**
This is a trained model of a **SAC** agent playing **Walker2d-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
```python
from stable_baselines3 import SAC
from huggingface_sb3 import load_from_hub
checkpoint = load_from_hub(
repo_id="jren123/sac-walker2d-v4",
filename="SAC-Walker2d-v4.zip",
)
model = SAC.load(checkpoint)
```
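As a follow-up, the loaded policy can be rolled out locally; this sketch assumes `gymnasium` with the MuJoCo extra installed (`pip install "gymnasium[mujoco]"`):

```python
# Hedged sketch: evaluate the SAC policy for one episode.
import gymnasium as gym

env = gym.make("Walker2d-v4")
obs, _ = env.reset()
total_reward = 0.0
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)  # model from the snippet above
    obs, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(total_reward)
```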
|
camenduru/MuseTalk
|
camenduru
| 2024-06-10T19:46:01Z | 0 | 3 |
diffusers
|
[
"diffusers",
"onnx",
"safetensors",
"en",
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-06-10T17:21:02Z |
---
license: creativeml-openrail-m
language:
- en
---
# MuseTalk
MuseTalk: Real-Time High Quality Lip Synchronization with Latent Space Inpainting
</br>
Yue Zhang <sup>\*</sup>,
Minhao Liu<sup>\*</sup>,
Zhaokang Chen,
Bin Wu<sup>†</sup>,
Yingjie He,
Chao Zhan,
Wenjiang Zhou
(<sup>*</sup>Equal Contribution, <sup>†</sup>Corresponding Author, benbinwu@tencent.com)
**[github](https://github.com/TMElyralab/MuseTalk)** **[huggingface](https://huggingface.co/TMElyralab/MuseTalk)** **Project (coming soon)** **Technical report (coming soon)**
We introduce `MuseTalk`, a **real-time high quality** lip-syncing model (30fps+ on an NVIDIA Tesla V100). MuseTalk can be applied to input videos, e.g., generated by [MuseV](https://github.com/TMElyralab/MuseV), as a complete virtual human solution.
# Overview
`MuseTalk` is a real-time high quality audio-driven lip-syncing model trained in the latent space of `ft-mse-vae`, which
1. modifies an unseen face according to the input audio, with a face region size of `256 x 256`.
1. supports audio in various languages, such as Chinese, English, and Japanese.
1. supports real-time inference with 30fps+ on an NVIDIA Tesla V100.
1. supports modification of the center point of the proposed face region, which **SIGNIFICANTLY** affects generation results.
1. provides a checkpoint trained on the HDTF dataset.
1. training code (coming soon).
# News
- [04/02/2024] Released MuseTalk project and pretrained models.
## Model

MuseTalk was trained in latent spaces, where the images were encoded by a frozen VAE. The audio was encoded by a frozen `whisper-tiny` model. The architecture of the generation network was borrowed from the UNet of `stable-diffusion-v1-4`, where the audio embeddings were fused with the image embeddings by cross-attention.
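The same flow can be sketched with off-the-shelf `diffusers` components. This is an illustrative approximation only: MuseTalk's actual network and conditioning differ, and the shapes and repos below are assumptions based on the description above.

```python
# Hedged sketch of latent-space lip-sync generation: encode the face region with a
# frozen VAE, condition a UNet on audio embeddings via cross-attention, then decode.
import torch
from diffusers import AutoencoderKL, UNet2DConditionModel

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").eval()
unet = UNet2DConditionModel.from_pretrained(
    "CompVis/stable-diffusion-v1-4", subfolder="unet"
).eval()

face = torch.randn(1, 3, 256, 256)        # masked 256 x 256 face region (dummy data)
audio_embeds = torch.randn(1, 50, 768)    # whisper-tiny features projected to 768-d (assumed)

with torch.no_grad():
    latents = vae.encode(face).latent_dist.sample() * vae.config.scaling_factor
    pred = unet(latents, timestep=0, encoder_hidden_states=audio_embeds).sample
    frame = vae.decode(pred / vae.config.scaling_factor).sample  # 1 x 3 x 256 x 256
```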
## Cases
### MuseV + MuseTalk make human photos alive!
<table class="center">
<tr style="font-weight: bolder;text-align:center;">
<td width="33%">Image</td>
<td width="33%">MuseV</td>
<td width="33%">+MuseTalk</td>
</tr>
<tr>
<td>
<img src=assets/demo/musk/musk.png width="95%">
</td>
<td >
<video src=assets/demo/yongen/yongen_musev.mp4 controls preload></video>
</td>
<td >
<video src=assets/demo/yongen/yongen_musetalk.mp4 controls preload></video>
</td>
</tr>
<tr>
<td>
<img src=assets/demo/yongen/yongen.jpeg width="95%">
</td>
<td >
<video src=https://github.com/TMElyralab/MuseTalk/assets/163980830/57ef9dee-a9fd-4dc8-839b-3fbbbf0ff3f4 controls preload></video>
</td>
<td >
<video src=https://github.com/TMElyralab/MuseTalk/assets/163980830/94d8dcba-1bcd-4b54-9d1d-8b6fc53228f0 controls preload></video>
</td>
</tr>
<tr>
<td>
<img src=assets/demo/monalisa/monalisa.png width="95%">
</td>
<td >
<video src=https://github.com/TMElyralab/MuseTalk/assets/163980830/1568f604-a34f-4526-a13a-7d282aa2e773 controls preload></video>
</td>
<td >
<video src=https://github.com/TMElyralab/MuseTalk/assets/163980830/a40784fc-a885-4c1f-9b7e-8f87b7caf4e0 controls preload></video>
</td>
</tr>
<tr>
<td>
<img src=assets/demo/sun1/sun.png width="95%">
</td>
<td >
<video src=https://github.com/TMElyralab/MuseTalk/assets/163980830/37a3a666-7b90-4244-8d3a-058cb0e44107 controls preload></video>
</td>
<td >
<video src=https://github.com/TMElyralab/MuseTalk/assets/163980830/172f4ff1-d432-45bd-a5a7-a07dec33a26b controls preload></video>
</td>
</tr>
<tr>
<td>
<img src=assets/demo/sun2/sun.png width="95%">
</td>
<td >
<video src=https://github.com/TMElyralab/MuseTalk/assets/163980830/37a3a666-7b90-4244-8d3a-058cb0e44107 controls preload></video>
</td>
<td >
<video src=https://github.com/TMElyralab/MuseTalk/assets/163980830/85a6873d-a028-4cce-af2b-6c59a1f2971d controls preload></video>
</td>
</tr>
</table >
* The character in the last two rows, `Xinying Sun`, is a supermodel KOL. You can follow her on [douyin](https://www.douyin.com/user/MS4wLjABAAAAWDThbMPN_6Xmm_JgXexbOii1K-httbu2APdG8DvDyM8).
## Video dubbing
<table class="center">
<tr style="font-weight: bolder;text-align:center;">
<td width="70%">MuseTalk</td>
<td width="30%">Original videos</td>
</tr>
<tr>
<td>
<video src=https://github.com/TMElyralab/MuseTalk/assets/163980830/4d7c5fa1-3550-4d52-8ed2-52f158150f24 controls preload></video>
</td>
<td>
<a href="//www.bilibili.com/video/BV1wT411b7HU">Link</a>
<href src=""></href>
</td>
</tr>
</table>
* For video dubbing, we applied a self-developed tool which can detect the talking person.
# TODO:
- [x] trained models and inference codes.
- [ ] technical report.
- [ ] training codes.
- [ ] online UI.
- [ ] a better model (may take longer).
# Getting Started
We provide a detailed tutorial about the installation and the basic usage of MuseTalk for new users:
## Installation
To prepare the Python environment and install additional packages such as opencv, diffusers, mmcv, etc., please follow the steps below:
### Build environment
We recommend Python >= 3.10 and CUDA 11.7. Then build the environment as follows:
```shell
pip install -r requirements.txt
```
### whisper
Install whisper to extract audio features (encoder only):
```
pip install --editable ./musetalk/whisper
```
### mmlab packages
```bash
pip install --no-cache-dir -U openmim
mim install mmengine
mim install "mmcv>=2.0.1"
mim install "mmdet>=3.1.0"
mim install "mmpose>=1.1.0"
```
### Download ffmpeg-static
Download ffmpeg-static and set:
```
export FFMPEG_PATH=/path/to/ffmpeg
```
for example:
```
export FFMPEG_PATH=/musetalk/ffmpeg-4.4-amd64-static
```
### Download weights
You can download weights manually as follows:
1. Download our trained [weights](https://huggingface.co/TMElyralab/MuseTalk).
2. Download the weights of other components:
- [sd-vae-ft-mse](https://huggingface.co/stabilityai/sd-vae-ft-mse)
- [whisper](https://openaipublic.azureedge.net/main/whisper/models/65147644a518d12f04e32d6f3b26facc3f8dd46e5390956a9424a650c0ce22b9/tiny.pt)
- [dwpose](https://huggingface.co/yzd-v/DWPose/tree/main)
- [face-parse-bisent](https://github.com/zllrunning/face-parsing.PyTorch)
- [resnet18](https://download.pytorch.org/models/resnet18-5c106cde.pth)
Finally, these weights should be organized in `models` as follows:
```
./models/
βββ musetalk
β βββ musetalk.json
β βββ pytorch_model.bin
βββ dwpose
β βββ dw-ll_ucoco_384.pth
βββ face-parse-bisent
β βββ 79999_iter.pth
β βββ resnet18-5c106cde.pth
βββ sd-vae-ft-mse
β βββ config.json
β βββ diffusion_pytorch_model.bin
βββ whisper
βββ tiny.pt
```
## Quickstart
### Inference
Here, we provide the inference script.
```
python -m scripts.inference --inference_config configs/inference/test.yaml
```
`configs/inference/test.yaml` is the path to the inference configuration file, which specifies `video_path` and `audio_path`.
The `video_path` should be either a video file or a directory of images.
#### Use of bbox_shift to have adjustable results
:mag_right: We have found that the upper bound of the mask has an important impact on mouth openness. Thus, to control the mask region, we suggest using the `bbox_shift` parameter. Positive values (moving towards the lower half) increase mouth openness, while negative values (moving towards the upper half) decrease it.
You can start by running with the default configuration to obtain the adjustable value range, and then re-run the script within this range.
For example, in the case of `Xinying Sun`, running the default configuration shows that the adjustable value range is [-9, 9]. Then, to decrease the mouth openness, we set the value to `-7`.
```
python -m scripts.inference --inference_config configs/inference/test.yaml --bbox_shift -7
```
:pushpin: More technical details can be found in [bbox_shift](assets/BBOX_SHIFT.md).
#### Combining MuseV and MuseTalk
As a complete virtual-human generation solution, we suggest first applying [MuseV](https://github.com/TMElyralab/MuseV) to generate a video (text-to-video, image-to-video or pose-to-video) by referring to [this](https://github.com/TMElyralab/MuseV?tab=readme-ov-file#text2video). Then, use `MuseTalk` to generate a lip-synced video by referring to [this](https://github.com/TMElyralab/MuseTalk?tab=readme-ov-file#inference).
# Note
If you want to launch online video chats, we suggest generating videos with MuseV and applying necessary pre-processing, such as face detection, in advance. During online chatting, only the UNet and the VAE decoder are involved, which makes MuseTalk real-time.
# Acknowledgement
1. We thank open-source components like [whisper](https://github.com/isaacOnline/whisper/tree/extract-embeddings), [dwpose](https://github.com/IDEA-Research/DWPose), [face-alignment](https://github.com/1adrianb/face-alignment), [face-parsing](https://github.com/zllrunning/face-parsing.PyTorch), [S3FD](https://github.com/yxlijun/S3FD.pytorch).
1. MuseTalk draws heavily on [diffusers](https://github.com/huggingface/diffusers).
1. MuseTalk was built on the `HDTF` dataset.
Thanks for open-sourcing!
# Limitations
- Resolution: Though MuseTalk uses a face region size of 256 x 256, which makes it better than other open-source methods, it has not yet reached the theoretical resolution bound. We will continue to work on this problem.
If you need higher resolution, you could apply super resolution models such as [GFPGAN](https://github.com/TencentARC/GFPGAN) in combination with MuseTalk.
- Identity preservation: Some details of the original face are not well preserved, such as mustache, lip shape and color.
- Jitter: There exists some jitter as the current pipeline adopts single-frame generation.
# Citation
```bib
@article{musetalk,
title={MuseTalk: Real-Time High Quality Lip Synchronization with Latent Space Inpainting},
author={Zhang, Yue and Liu, Minhao and Chen, Zhaokang and Wu, Bin and He, Yingjie and Zhan, Chao and Zhou, Wenjiang},
journal={arxiv},
year={2024}
}
```
# Disclaimer/License
1. `code`: The code of MuseTalk is released under the MIT License. There is no limitation for both academic and commercial usage.
1. `model`: The trained models are available for any purpose, even commercially.
1. `other opensource model`: Other open-source models used must comply with their licenses, such as `whisper`, `ft-mse-vae`, `dwpose`, `S3FD`, etc.
1. The test data are collected from the internet and are available for non-commercial research purposes only.
1. `AIGC`: This project strives to impact the domain of AI-driven video generation positively. Users are granted the freedom to create videos using this tool, but they are expected to comply with local laws and utilize it responsibly. The developers do not assume any responsibility for potential misuse by users.
|
silent666/Qwen-Qwen1.5-1.8B-1718048511
|
silent666
| 2024-06-10T19:43:45Z | 131 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-10T19:41:52Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
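Until the card is completed, a minimal sketch assuming the standard Qwen1.5 chat workflow in `transformers` (untested against this specific repo):

```python
# Hedged sketch: chat-style generation with the tokenizer's chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "silent666/Qwen-Qwen1.5-1.8B-1718048511"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

messages = [{"role": "user", "content": "Summarize what a transformer is in one line."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```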
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
xy4286/yang-summarization
|
xy4286
| 2024-06-10T19:43:08Z | 91 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"base_model:google/pegasus-cnn_dailymail",
"base_model:finetune:google/pegasus-cnn_dailymail",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-06-10T18:54:17Z |
---
base_model: google/pegasus-cnn_dailymail
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: yang-summarization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# yang-summarization
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4845
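Pending a fuller description, a minimal dialogue-summarization sketch (standard pipeline API assumed; the dialogue is an illustrative SAMSum-style example):

```python
# Hedged sketch: summarize a short chat dialogue.
from transformers import pipeline

summarizer = pipeline("summarization", model="xy4286/yang-summarization")
dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: Great, I'll bring them tomorrow."
)
print(summarizer(dialogue, max_length=40)[0]["summary_text"])
```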
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.6718 | 0.5430 | 500 | 1.4845 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
silent666/Qwen-Qwen1.5-0.5B-1718048419
|
silent666
| 2024-06-10T19:40:57Z | 131 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-10T19:40:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
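As with the 1.8B card above, a minimal sketch assuming the standard `transformers` API (untested against this specific repo):

```python
# Hedged sketch: plain text-generation via pipeline.
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="silent666/Qwen-Qwen1.5-0.5B-1718048419",
    device_map="auto",
)
print(pipe("Hello, who are you?", max_new_tokens=48)[0]["generated_text"])
```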
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
theprint/Mistral-7b-Instruct-v0.2-python-18k
|
theprint
| 2024-06-10T19:36:03Z | 16 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"base_model:quantized:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-13T01:10:59Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit
---
# Uploaded model
- **Developed by:** theprint
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
jren123/sac-ant-v4
|
jren123
| 2024-06-10T19:35:08Z | 24 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"Ant-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-06-10T18:21:15Z |
---
library_name: stable-baselines3
tags:
- Ant-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: SAC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Ant-v4
type: Ant-v4
metrics:
- type: mean_reward
value: 5635.13 +/- 88.03
name: mean_reward
verified: false
---
# **SAC** Agent playing **Ant-v4**
This is a trained model of a **SAC** agent playing **Ant-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
```python
from stable_baselines3 import SAC
from huggingface_sb3 import load_from_hub
checkpoint = load_from_hub(
repo_id="jren123/sac-ant-v4",
filename="SAC-Ant-v4.zip",
)
model = SAC.load(checkpoint)
```
|
ih8l1ght/finetuning-sentiment-model-3000-samples
|
ih8l1ght
| 2024-06-10T19:29:41Z | 106 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-10T19:15:50Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3366
- Accuracy: 0.8733
- F1: 0.8766
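Pending a fuller description, a minimal usage sketch (standard text-classification pipeline assumed; label names depend on the repo's config):

```python
# Hedged sketch: score a review with the fine-tuned classifier.
from transformers import pipeline

clf = pipeline("text-classification", model="ih8l1ght/finetuning-sentiment-model-3000-samples")
print(clf("This movie was surprisingly good!"))
```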
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.1+cu118
- Datasets 2.19.2
- Tokenizers 0.19.1
|
ansilmbabl/vit-base-patch16-224-in21k-cards-june-08-cropping-filtered-preprocess-change-test-2
|
ansilmbabl
| 2024-06-10T19:25:52Z | 219 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:ansilmbabl/vit-base-patch16-224-in21k-cards-june-07-cropping-filtered-preprocess-change-test",
"base_model:finetune:ansilmbabl/vit-base-patch16-224-in21k-cards-june-07-cropping-filtered-preprocess-change-test",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-06-08T16:42:30Z |
---
license: apache-2.0
base_model: ansilmbabl/vit-base-patch16-224-in21k-cards-june-07-cropping-filtered-preprocess-change-test
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-in21k-cards-june-08-cropping-filtered-preprocess-change-test-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k-cards-june-08-cropping-filtered-preprocess-change-test-2
This model is a fine-tuned version of [ansilmbabl/vit-base-patch16-224-in21k-cards-june-07-cropping-filtered-preprocess-change-test](https://huggingface.co/ansilmbabl/vit-base-patch16-224-in21k-cards-june-07-cropping-filtered-preprocess-change-test) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5958
- Accuracy: 0.5147
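Pending a fuller description, a minimal image-classification sketch (standard pipeline API assumed; `card.jpg` is a hypothetical local image path):

```python
# Hedged sketch: classify a local image and print the top-3 labels.
from transformers import pipeline

clf = pipeline(
    "image-classification",
    model="ansilmbabl/vit-base-patch16-224-in21k-cards-june-08-cropping-filtered-preprocess-change-test-2",
)
print(clf("card.jpg")[:3])  # card.jpg is a placeholder path
```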
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|
| 1.0182 | 0.9998 | 1298 | 1.5280 | 0.4287 |
| 0.9583 | 1.9996 | 2596 | 1.4878 | 0.4475 |
| 0.8452 | 2.9998 | 3894 | 1.4847 | 0.4716 |
| 0.6887 | 3.9996 | 5192 | 1.5848 | 0.4736 |
| 0.5269 | 4.9994 | 6490 | 1.6689 | 0.493 |
| 0.4018 | 6.0 | 7789 | 1.8483 | 0.4986 |
| 0.2909 | 6.9998 | 9087 | 2.0319 | 0.5079 |
| 0.1823 | 7.9996 | 10385 | 2.2540 | 0.5127 |
| 0.1056 | 8.9994 | 11683 | 2.4652 | 0.511 |
| 0.0767 | 9.9985 | 12980 | 2.5958 | 0.5147 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.0.1+cu117
- Datasets 2.19.2
- Tokenizers 0.19.1
|
jost/mistral7b_plantuml
|
jost
| 2024-06-10T19:21:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-10T19:20:49Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/mistral-7b-bnb-4bit
---
# Uploaded model
- **Developed by:** jost
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
svjack/Genshin-Impact-LandScape-lora-sd-xl-rk32
|
svjack
| 2024-06-10T19:20:39Z | 1 | 0 |
diffusers
|
[
"diffusers",
"region:us"
] | null | 2024-06-10T18:44:34Z |
---
library_name: diffusers
---
## Generate Genshin Impact landscape-style images with a LoRA fine-tuned on stable-diffusion-xl
### Install
```bash
pip install git+https://github.com/huggingface/diffusers.git peft
```
```python
import torch
from diffusers import (
StableDiffusionXLPipeline,
AutoencoderKL,
)
vae = AutoencoderKL.from_pretrained(
"madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16, use_safetensors=True
)
model_path = "stabilityai/stable-diffusion-xl-base-1.0"
pipe = StableDiffusionXLPipeline.from_pretrained(
model_path, torch_dtype=torch.float16, vae=vae
)
pipe.to("cuda")
pipe.load_lora_weights("svjack/Genshin-Impact-LandScape-lora-sd-xl-rk32")  # load this repo's LoRA weights
```
### Generate Genshin Mondstadt LandScape Image
```python
prompt = "European, green coniferous tree, yellow coniferous tree, rock, creek, sunny day, pastel tones, 3D"
image = pipe(prompt=prompt,
num_inference_steps=50, guidance_scale=5.0,
cross_attention_kwargs={"scale": 0.7}
).images[0]
image
```

### Generate Genshin Mondstadt LandScape Clear Image
```python
prompt = "European, green coniferous tree, yellow coniferous tree, rock, creek, sunny day, pastel tones, 3D"
image = pipe(prompt=prompt,
negative_prompt = "blurred, uneven, messy, foggy",
num_inference_steps=50, guidance_scale=5.0,
cross_attention_kwargs={"scale": 0.7}
).images[0]
image
```

### Generate Genshin Liyue LandScape Image
```python
prompt = "Chinese, yellow deciduous wood, orange deciduous wood, rock, sunny day, pastel tones, 3D"
image = pipe(prompt=prompt,
num_inference_steps=50, guidance_scale=5.0,
cross_attention_kwargs={"scale": 0.7}
).images[0]
image
```

### Generate Genshin Liyue LandScape Clear Image
```python
prompt = "Chinese, yellow deciduous wood, orange deciduous wood, rock, sunny day, bright, 3D"
image = pipe(prompt=prompt,
num_inference_steps=50, guidance_scale=5.0,
cross_attention_kwargs={"scale": 0.7}
).images[0]
image
```

|
LinhCT/mt5-small-finetuned-xsum
|
LinhCT
| 2024-06-10T19:17:11Z | 114 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"mt5",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-06-10T13:15:18Z |
---
license: apache-2.0
base_model: google/mt5-small
tags:
- generated_from_trainer
datasets:
- samsum
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-xsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: samsum
type: samsum
config: samsum
split: validation
args: samsum
metrics:
- name: Rouge1
type: rouge
value: 0.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-xsum
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Rouge1: 0.0
- Rouge2: 0.0
- Rougel: 0.0
- Rougelsum: 0.0
- Gen Len: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 0.0 | 1.0 | 3683 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
enriquesaou/roberta-vmw-mrqa-old-but-not-that-old
|
enriquesaou
| 2024-06-10T19:17:03Z | 114 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"question-answering",
"generated_from_trainer",
"base_model:VMware/roberta-base-mrqa",
"base_model:finetune:VMware/roberta-base-mrqa",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-06-10T19:16:40Z |
---
license: apache-2.0
base_model: VMware/roberta-base-mrqa
tags:
- generated_from_trainer
model-index:
- name: roberta-vmw-mrqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/favcowboy/huggingface/runs/1s0nzjkc)
# roberta-vmw-mrqa
This model is a fine-tuned version of [VMware/roberta-base-mrqa](https://huggingface.co/VMware/roberta-base-mrqa) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7997
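Pending a fuller description, a minimal extractive-QA sketch (standard question-answering pipeline assumed; the example is illustrative):

```python
# Hedged sketch: extract an answer span from a context passage.
from transformers import pipeline

qa = pipeline("question-answering", model="enriquesaou/roberta-vmw-mrqa-old-but-not-that-old")
print(qa(question="Where is the Eiffel Tower?", context="The Eiffel Tower stands in Paris, France."))
```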
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 20
- eval_batch_size: 20
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2285 | 1.0 | 1399 | 1.8940 |
| 1.521 | 2.0 | 2798 | 1.6821 |
| 1.0055 | 3.0 | 4197 | 1.7997 |
### Framework versions
- Transformers 4.42.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
RajuEEE/RewardModel_RobertaBase_GPT_Data
|
RajuEEE
| 2024-06-10T19:11:37Z | 107 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-06T12:10:25Z |
---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: RewardModel_RobertaBase_GPT_Data
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# RewardModel_RobertaBase_GPT_Data
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2827
- F1: 0.9076
- Roc Auc: 0.9420
- Accuracy: 0.8393
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| No log | 1.0 | 16 | 0.6224 | 0.0 | 0.5 | 0.0 |
| No log | 2.0 | 32 | 0.5112 | 0.4658 | 0.6518 | 0.3036 |
| No log | 3.0 | 48 | 0.3407 | 0.8235 | 0.8571 | 0.75 |
| No log | 4.0 | 64 | 0.3243 | 0.85 | 0.8973 | 0.7679 |
| No log | 5.0 | 80 | 0.2827 | 0.9076 | 0.9420 | 0.8393 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
svilupp/onnx-cross-encoders
|
svilupp
| 2024-06-10T19:08:53Z | 0 | 1 | null |
[
"onnx",
"cross-encoder",
"text-classification",
"en",
"dataset:microsoft/ms_marco",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2024-06-10T15:41:25Z |
---
license: apache-2.0
datasets:
- microsoft/ms_marco
language:
- en
pipeline_tag: text-classification
tags:
- onnx
- cross-encoder
---
# Cross-Encoder for MS Marco - ONNX
ONNX versions of [Sentence Transformers Cross Encoders](https://huggingface.co/cross-encoder) to allow ranking without heavy dependencies.
The models were trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task.
The models can be used for Information Retrieval: given a query, encode the query with all possible passages (e.g. retrieved with ElasticSearch), then sort the passages in decreasing order of score. See [SBERT.net Retrieve & Re-rank](https://www.sbert.net/examples/applications/retrieve_rerank/README.html) for more details.
## Models Available
| Model Name | Precision | File Name | File Size |
|--------------------------------------|-----------|------------------------------------------|-----------|
| ms-marco-MiniLM-L-4-v2 ONNX | FP32 | ms-marco-MiniLM-L-4-v2-onnx.zip | 70 MB |
| ms-marco-MiniLM-L-4-v2 ONNX (Quantized) | INT8 | ms-marco-MiniLM-L-4-v2-onnx-int8.zip | 12.8 MB |
| ms-marco-MiniLM-L-6-v2 ONNX | FP32 | ms-marco-MiniLM-L-6-v2-onnx.zip | 83.4 MB |
| ms-marco-MiniLM-L-6-v2 ONNX (Quantized) | INT8 | ms-marco-MiniLM-L-6-v2-onnx-int8.zip | 15.2 MB |
## Usage with ONNX Runtime
```python
import onnxruntime as ort
from transformers import AutoTokenizer
model_path = "ms-marco-MiniLM-L-4-v2-onnx/"
tokenizer = AutoTokenizer.from_pretrained(model_path)  # load from the local directory, not the literal string 'model_path'
ort_sess = ort.InferenceSession(model_path + "ms-marco-MiniLM-L-4-v2.onnx")
features = tokenizer(['How many people live in Berlin?', 'How many people live in Berlin?'], ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.'], padding=True, truncation=True, return_tensors="np")
ort_outs = ort_sess.run(None, features)
print(ort_outs)
```
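To rank passages as described above, the raw logits can be sorted; this assumes one relevance logit per query-passage pair, as these MS MARCO cross-encoders produce:

```python
# Hedged follow-up: higher logit = more relevant; sort indices descending.
scores = ort_outs[0].reshape(-1)
ranking = scores.argsort()[::-1]
print(ranking, scores[ranking])
```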
## Performance
TBU...
|
abby101/test-model-card-template-dreambooth-sdxl-lora-adv
|
abby101
| 2024-06-10T19:08:44Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"lora",
"template:sd-lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-04-09T07:19:51Z |
---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: A mushroom in [V] style
widget:
- text: ' '
output:
url: image_0.png
- text: ' '
output:
url: image_1.png
- text: ' '
output:
url: image_2.png
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - abby101/test
<Gallery />
## Model description
### These are abby101/test LoRA adaptation weights for runwayml/stable-diffusion-v1-5.
## Download model
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- **LoRA**: download **[`..safetensors` here 💾](/abby101/test/blob/main/..safetensors)**.
- Place it in your `models/Lora` folder.
- On AUTOMATIC1111, load the LoRA by adding `<lora:.:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
## Use it with the [𧨠diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('abby101/test', weight_name='pytorch_lora_weights.safetensors')
image = pipeline('A mushroom in [V] style').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
## Trigger words
You should use `A mushroom in [V] style` to trigger the image generation.
## Details
All [Files & versions](/abby101/test/tree/main).
The weights were trained using [𧨠diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py).
LoRA for the text encoder was enabled: False.
Pivotal tuning was enabled: False.
Special VAE used for training: None.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
kumarchavda/zx80zx81b
|
kumarchavda
| 2024-06-10T19:03:06Z | 0 | 0 |
fastai
|
[
"fastai",
"region:us"
] | null | 2024-06-10T19:03:03Z |
---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
areegtarek/ArabicTranslationPC
|
areegtarek
| 2024-06-10T19:02:20Z | 80 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-06-10T18:59:52Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** areegtarek
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
naveenreddy/unit-4
|
naveenreddy
| 2024-06-10T19:00:32Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-06-10T19:00:28Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: unit-4
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 198.60 +/- 8.75
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
bartowski/Qwen2-7B-Instruct-deccp-GGUF
|
bartowski
| 2024-06-10T18:53:11Z | 243 | 5 | null |
[
"gguf",
"text-generation",
"en",
"zh",
"dataset:augmxnt/deccp",
"base_model:Qwen/Qwen2-7B-Instruct",
"base_model:quantized:Qwen/Qwen2-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-06-09T14:14:11Z |
---
license: apache-2.0
datasets:
- augmxnt/deccp
language:
- en
- zh
base_model: Qwen/Qwen2-7B-Instruct
quantized_by: bartowski
pipeline_tag: text-generation
---
## Llamacpp imatrix Quantizations of Qwen2-7B-Instruct-deccp
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3086">b3086</a> for quantization.
Original model: https://huggingface.co/augmxnt/Qwen2-7B-Instruct-deccp
All quants were made using the imatrix option with the dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
## Prompt format
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
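For a quick local sanity check, here is a minimal sketch using the llama-cpp-python bindings (an illustrative assumption; any llama.cpp-based runtime works, and recent versions apply the chat template above from the GGUF metadata):
```python
from llama_cpp import Llama

# Assumes the Q4_K_M file was downloaded as shown below.
llm = Llama(model_path="Qwen2-7B-Instruct-deccp-Q4_K_M.gguf", n_ctx=4096, n_gpu_layers=-1)
out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ]
)
print(out["choices"][0]["message"]["content"])
```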
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Qwen2-7B-Instruct-deccp-Q8_0.gguf](https://huggingface.co/bartowski/Qwen2-7B-Instruct-deccp-GGUF/blob/main/Qwen2-7B-Instruct-deccp-Q8_0.gguf) | Q8_0 | 8.09GB | Extremely high quality, generally unneeded but max available quant. |
| [Qwen2-7B-Instruct-deccp-Q6_K.gguf](https://huggingface.co/bartowski/Qwen2-7B-Instruct-deccp-GGUF/blob/main/Qwen2-7B-Instruct-deccp-Q6_K.gguf) | Q6_K | 6.25GB | Very high quality, near perfect, *recommended*. |
| [Qwen2-7B-Instruct-deccp-Q5_K_M.gguf](https://huggingface.co/bartowski/Qwen2-7B-Instruct-deccp-GGUF/blob/main/Qwen2-7B-Instruct-deccp-Q5_K_M.gguf) | Q5_K_M | 5.44GB | High quality, *recommended*. |
| [Qwen2-7B-Instruct-deccp-Q5_K_S.gguf](https://huggingface.co/bartowski/Qwen2-7B-Instruct-deccp-GGUF/blob/main/Qwen2-7B-Instruct-deccp-Q5_K_S.gguf) | Q5_K_S | 5.31GB | High quality, *recommended*. |
| [Qwen2-7B-Instruct-deccp-Q4_K_M.gguf](https://huggingface.co/bartowski/Qwen2-7B-Instruct-deccp-GGUF/blob/main/Qwen2-7B-Instruct-deccp-Q4_K_M.gguf) | Q4_K_M | 4.68GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [Qwen2-7B-Instruct-deccp-Q4_K_S.gguf](https://huggingface.co/bartowski/Qwen2-7B-Instruct-deccp-GGUF/blob/main/Qwen2-7B-Instruct-deccp-Q4_K_S.gguf) | Q4_K_S | 4.45GB | Slightly lower quality with more space savings, *recommended*. |
| [Qwen2-7B-Instruct-deccp-IQ4_XS.gguf](https://huggingface.co/bartowski/Qwen2-7B-Instruct-deccp-GGUF/blob/main/Qwen2-7B-Instruct-deccp-IQ4_XS.gguf) | IQ4_XS | 4.21GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Qwen2-7B-Instruct-deccp-Q3_K_L.gguf](https://huggingface.co/bartowski/Qwen2-7B-Instruct-deccp-GGUF/blob/main/Qwen2-7B-Instruct-deccp-Q3_K_L.gguf) | Q3_K_L | 4.08GB | Lower quality but usable, good for low RAM availability. |
| [Qwen2-7B-Instruct-deccp-Q3_K_M.gguf](https://huggingface.co/bartowski/Qwen2-7B-Instruct-deccp-GGUF/blob/main/Qwen2-7B-Instruct-deccp-Q3_K_M.gguf) | Q3_K_M | 3.80GB | Even lower quality. |
| [Qwen2-7B-Instruct-deccp-IQ3_M.gguf](https://huggingface.co/bartowski/Qwen2-7B-Instruct-deccp-GGUF/blob/main/Qwen2-7B-Instruct-deccp-IQ3_M.gguf) | IQ3_M | 3.57GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Qwen2-7B-Instruct-deccp-Q3_K_S.gguf](https://huggingface.co/bartowski/Qwen2-7B-Instruct-deccp-GGUF/blob/main/Qwen2-7B-Instruct-deccp-Q3_K_S.gguf) | Q3_K_S | 3.49GB | Low quality, not recommended. |
| [Qwen2-7B-Instruct-deccp-IQ3_XS.gguf](https://huggingface.co/bartowski/Qwen2-7B-Instruct-deccp-GGUF/blob/main/Qwen2-7B-Instruct-deccp-IQ3_XS.gguf) | IQ3_XS | 3.34GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Qwen2-7B-Instruct-deccp-IQ3_XXS.gguf](https://huggingface.co/bartowski/Qwen2-7B-Instruct-deccp-GGUF/blob/main/Qwen2-7B-Instruct-deccp-IQ3_XXS.gguf) | IQ3_XXS | 3.11GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Qwen2-7B-Instruct-deccp-Q2_K.gguf](https://huggingface.co/bartowski/Qwen2-7B-Instruct-deccp-GGUF/blob/main/Qwen2-7B-Instruct-deccp-Q2_K.gguf) | Q2_K | 3.01GB | Very low quality but surprisingly usable. |
| [Qwen2-7B-Instruct-deccp-IQ2_M.gguf](https://huggingface.co/bartowski/Qwen2-7B-Instruct-deccp-GGUF/blob/main/Qwen2-7B-Instruct-deccp-IQ2_M.gguf) | IQ2_M | 2.78GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [Qwen2-7B-Instruct-deccp-IQ2_S.gguf](https://huggingface.co/bartowski/Qwen2-7B-Instruct-deccp-GGUF/blob/main/Qwen2-7B-Instruct-deccp-IQ2_S.gguf) | IQ2_S | 2.59GB | Very low quality, uses SOTA techniques to be usable. |
| [Qwen2-7B-Instruct-deccp-IQ2_XS.gguf](https://huggingface.co/bartowski/Qwen2-7B-Instruct-deccp-GGUF/blob/main/Qwen2-7B-Instruct-deccp-IQ2_XS.gguf) | IQ2_XS | 2.46GB | Very low quality, uses SOTA techniques to be usable. |
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/Qwen2-7B-Instruct-deccp-GGUF --include "Qwen2-7B-Instruct-deccp-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/Qwen2-7B-Instruct-deccp-GGUF --include "Qwen2-7B-Instruct-deccp-Q8_0.gguf/*" --local-dir Qwen2-7B-Instruct-deccp-Q8_0
```
You can either specify a new local-dir (Qwen2-7B-Instruct-deccp-Q8_0) or download them all in place (./).
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalents, so speed vs. performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also targets AMD, so if you have an AMD card, double-check whether you're using the rocBLAS build or the Vulkan build. At the time of writing, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
RamyaRamakrishna/llama3-adapters-1
|
RamyaRamakrishna
| 2024-06-10T18:40:20Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:gradientai/Llama-3-8B-Instruct-Gradient-1048k",
"base_model:adapter:gradientai/Llama-3-8B-Instruct-Gradient-1048k",
"region:us"
] | null | 2024-06-10T17:54:13Z |
---
library_name: peft
base_model: gradientai/Llama-3-8B-Instruct-Gradient-1048k
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
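In the meantime, a minimal sketch for loading this PEFT adapter onto its base model (inferred from the `base_model` metadata; not an author-confirmed recipe):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "gradientai/Llama-3-8B-Instruct-Gradient-1048k"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")
model = PeftModel.from_pretrained(model, "RamyaRamakrishna/llama3-adapters-1")
```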
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1
|
sajjad55/wsdbanglat5_1e4_ED1
|
sajjad55
| 2024-06-10T18:39:56Z | 118 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:ka05ar/Banglat5_EDx1",
"base_model:finetune:ka05ar/Banglat5_EDx1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-06-10T17:42:41Z |
---
base_model: ka05ar/Banglat5_EDx1
tags:
- generated_from_trainer
model-index:
- name: wsdbanglat5_1e4_ED1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wsdbanglat5_1e4_ED1
This model is a fine-tuned version of [ka05ar/Banglat5_EDx1](https://huggingface.co/ka05ar/Banglat5_EDx1) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0049
## Model description
More information needed
## Intended uses & limitations
More information needed
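A minimal usage sketch in the meantime (the input format is an assumption, since the training data is not documented):
```python
from transformers import pipeline

pipe = pipeline("text2text-generation", model="sajjad55/wsdbanglat5_1e4_ED1")
print(pipe("<Bangla input sentence>"))  # hypothetical input; match the training format
```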
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.1282 | 1.0 | 1481 | 0.0998 |
| 0.0283 | 2.0 | 2962 | 0.0112 |
| 0.0162 | 3.0 | 4443 | 0.0081 |
| 0.0112 | 4.0 | 5924 | 0.0051 |
| 0.0088 | 5.0 | 7405 | 0.0044 |
| 0.0063 | 6.0 | 8886 | 0.0046 |
| 0.0064 | 7.0 | 10367 | 0.0048 |
| 0.0055 | 8.0 | 11848 | 0.0049 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
|
KPostOffice/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned-adapters
|
KPostOffice
| 2024-06-10T18:37:11Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"base_model:adapter:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"region:us"
] | null | 2024-06-07T21:28:52Z |
---
library_name: peft
base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.2.dev0
|
OliBomby/rcomplexion
|
OliBomby
| 2024-06-10T18:35:49Z | 0 | 0 | null |
[
"pytorch",
"region:us"
] | null | 2024-06-10T18:33:39Z |
This model is trained on osu! ranked beatmaps. It predicts the time until the next hit object based on the previous hit objects.
It's used to estimate the complexity of rhythm in beatmaps.
https://github.com/OliBomby/Mapperatorinator/tree/main/rcomplexion
|
PB7-DUT-2023/finetuned_Mistral-7B_v1
|
PB7-DUT-2023
| 2024-06-10T18:31:25Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-10T18:25:52Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
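Until the authors provide one, here is a minimal sketch for standard text generation with this checkpoint (not an author-confirmed recipe; generation settings are placeholders):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PB7-DUT-2023/finetuned_Mistral-7B_v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```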
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
SalimBou5/sft_dpo_argilla_cleaned
|
SalimBou5
| 2024-06-10T18:27:09Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/gemma-7b-bnb-4bit",
"base_model:adapter:unsloth/gemma-7b-bnb-4bit",
"region:us"
] | null | 2024-06-10T18:26:55Z |
---
library_name: peft
base_model: unsloth/gemma-7b-bnb-4bit
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1
|
MudassirFayaz/career_councling_bart_0.2
|
MudassirFayaz
| 2024-06-10T18:21:53Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-10T18:11:43Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ehristoforu/Visionix-alpha-inpainting
|
ehristoforu
| 2024-06-10T18:20:34Z | 16 | 5 |
diffusers
|
[
"diffusers",
"safetensors",
"StableDiffusionXLInpaintPipeline",
"stable-diffusion",
"sdxl",
"sdxl-inpainting",
"inpainting",
"visionix",
"visionix-alpha",
"realism",
"hyperrealism",
"photorealism",
"photo",
"cinematic",
"nature",
"human",
"lighting",
"trained",
"image-to-image",
"en",
"base_model:diffusers/stable-diffusion-xl-1.0-inpainting-0.1",
"base_model:finetune:diffusers/stable-diffusion-xl-1.0-inpainting-0.1",
"license:creativeml-openrail-m",
"region:us"
] |
image-to-image
| 2024-06-10T12:42:57Z |
---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
pipeline_tag: image-to-image
base_model: diffusers/stable-diffusion-xl-1.0-inpainting-0.1
tags:
- safetensors
- StableDiffusionXLInpaintPipeline
- stable-diffusion
- sdxl
- sdxl-inpainting
- inpainting
- visionix
- visionix-alpha
- realism
- hyperrealism
- photorealism
- photo
- cinematic
- nature
- human
- lighting
- trained
inference: false
---
# **[VisioniX](https://huggingface.co/ehristoforu/Visionix-alpha)** Alpha-inpainting - the most powerful realism model

We present the best realism model at the moment - VisioniX.
## About this model
This model was created through complex training on huge, ultra-realistic datasets.
### Why is this model better than its competitors?
All, absolutely all, realism models make one important mistake: they chase only super-realism (super-detailed skin and the like) while completely forgetting about general aesthetics, anatomy, etc.
### Who is this model for?
The main feature of this model is that it can generate not only super-realistic photos but also realistic, detailed art and much more, so it suits a large audience and can solve a wide range of problems. If this model still does not suit you, we recommend the FluentlyXL model.
### Optimal settings for this model
- **Sampler**: *DPM++ 3M SDE* (Karras), DPM++ SDE (Karras)
- **Inference Steps**: *22*-25
- **Guidance Scale (CFG)**: 5-7
- **Negative Prompt**: *none*, or:
```
cartoon, 3D, disfigured, bad, art, deformed, extra limbs, weird, blurry, duplicate, morbid, mutilated, out of frame, extra fingers, mutated hands, poorly drawn, hands, poorly drawn face, mutation, ugly, bad, anatomy, bad proportions, extra limbs, clone, clone-faced, cross proportions, missing arms, malformed limbs, missing legs, mutated, hands, fused fingers, too many fingers, photo shop, video game, ugly, tiling, cross-eye, mutation of eyes, long neck, bonnet, hat, beanie, cap, B&W
```
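Putting these settings together, a minimal inpainting sketch with 𧨠diffusers (the scheduler configuration, prompts, and file paths are illustrative assumptions):
```python
import torch
from diffusers import AutoPipelineForInpainting, DPMSolverMultistepScheduler
from diffusers.utils import load_image

pipe = AutoPipelineForInpainting.from_pretrained(
    "ehristoforu/Visionix-alpha-inpainting", torch_dtype=torch.float16
).to("cuda")
# DPM++ SDE (Karras), per the recommended samplers above
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, algorithm_type="sde-dpmsolver++", use_karras_sigmas=True
)

init_image = load_image("photo.png")  # hypothetical input image
mask_image = load_image("mask.png")   # white pixels mark the region to repaint

image = pipe(
    prompt="photorealistic portrait, cinematic lighting",  # hypothetical prompt
    negative_prompt="cartoon, 3D, disfigured, deformed",   # or the full list above
    image=init_image,
    mask_image=mask_image,
    num_inference_steps=24,  # within the recommended 22-25
    guidance_scale=6.0,      # within the recommended 5-7
).images[0]
image.save("out.png")
```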
### End
After this model, you will not want to use other realism models. If you like it, please leave a good review with a couple of your results; thank you, this greatly helps in promoting this wonderful model π
|
mnlp-2024/dpo_lora_mcqa
|
mnlp-2024
| 2024-06-10T18:18:43Z | 104 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-10T13:38:37Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Rendika/tweets-election-classification
|
Rendika
| 2024-06-10T18:16:31Z | 106 | 1 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"legal",
"en",
"id",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-08T15:33:16Z |
---
license: mit
language:
- en
- id
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- legal
---
# Election Tweets Classification Model
This repository contains a fine-tuned version of the ***indolem/indobertweet-base-uncased*** model for classifying tweets related to election topics. The model has been trained to categorize tweets into eight distinct classes, providing valuable insights into public opinion and discourse during election periods.
## Classes
The model classifies tweets into the following categories:
1. **Politik** (2972 samples)
2. **Sosial Budaya** (425 samples)
3. **Ideologi** (343 samples)
4. **Pertahanan dan Keamanan** (331 samples)
5. **Ekonomi** (310 samples)
6. **Sumber Daya Alam** (157 samples)
7. **Demografi** (61 samples)
8. **Geografi** (20 samples)
| Encoded | Label |
|:---------:|:---------------------------:|
| 0 | Demografi |
| 1 | Ekonomi |
| 2 | Geografi |
| 3 | Ideologi |
| 4 | Pertahanan dan Keamanan |
| 5 | Politik |
| 6 | Sosial Budaya |
| 7 | Sumber Daya Alam |
## Libraries Used
The following libraries were used for data processing, model training, and evaluation:
- Data processing: `numpy`, `pandas`, `re`, `string`, `random`
- Visualization: `matplotlib.pyplot`, `seaborn`, `tqdm`, `plotly.graph_objs`, `plotly.express`, `plotly.figure_factory`
- Word cloud generation: `PIL`, `wordcloud`
- NLP: `nltk`, `nlp_id`, `Sastrawi`, `tweet-preprocessor`
- Machine Learning: `tensorflow`, `keras`, `sklearn`, `transformers`, `torch`
## Data Preparation
### Data Split
The dataset was split into training, validation, and test sets with the following proportions:
- **Training Set**: 85% (3925 samples)
- **Validation Set**: 10% (463 samples)
- **Test Set**: 5% (231 samples)
### Training Details
- **Epochs**: 3
- **Batch Size**: 32
### Training Results
| Epoch | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy |
|-------|------------|----------------|-----------------|---------------------|
| 1 | 0.9382 | 0.7167 | 0.7518 | 0.7671 |
| 2 | 0.5741 | 0.8229 | 0.7081 | 0.7931 |
| 3 | 0.3541 | 0.8958 | 0.7473 | 0.7953 |
## Model Architecture
The model is built using the TensorFlow and Keras libraries and employs the following architecture:
- **Embedding Layer**: Converts input tokens into dense vectors of fixed size.
- **LSTM Layers**: Bidirectional LSTM layers capture dependencies in the text data.
- **Dense Layers**: Fully connected layers for classification.
- **Dropout Layers**: Prevent overfitting by randomly dropping units during training.
- **Batch Normalization**: Normalizes activations of the previous layer.
## Usage
### Installation
To use the model, ensure you have the required libraries installed. You can install them using pip:
```bash
pip install transformers
```
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("Rendika/tweets-election-classification")
model = AutoModelForSequenceClassification.from_pretrained("Rendika/tweets-election-classification")
```
```python
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-classification", model="Rendika/tweets-election-classification")
```
### Data Cleaning
The data was cleaned using the following steps:
1. Converted text to lowercase.
2. Removed 'RT'.
3. Removed links.
4. Removed patterns like '[RE ...]'.
5. Removed patterns like '@ ... ='.
6. Removed non-ASCII characters (including emojis).
7. Removed punctuation (excluding '#').
8. Removed excessive whitespace.
### Sample Code
Here's a sample code snippet to load and use the model:
```python
import tensorflow as tf
from tensorflow.keras.models import load_model
import pandas as pd
# Load the trained model
model = load_model('path_to_your_model.h5')
# Preprocess new data
import re
import string

def preprocess_text(text):
    # A sketch of the cleaning steps listed above
    text = text.lower()
    text = re.sub(r"\brt\b", "", text)                    # remove 'RT'
    text = re.sub(r"https?://\S+|www\.\S+", "", text)     # remove links
    text = re.sub(r"\[re[^\]]*\]", "", text)              # remove '[RE ...]' patterns
    text = re.sub(r"@\S*\s*=", "", text)                  # remove '@ ... =' patterns
    text = text.encode("ascii", "ignore").decode()        # remove non-ASCII (incl. emojis)
    punctuation = string.punctuation.replace("#", "")
    text = text.translate(str.maketrans("", "", punctuation))  # strip punctuation, keep '#'
    return re.sub(r"\s+", " ", text).strip()              # collapse whitespace
# Example usage
new_tweets = pd.Series(["Your new tweet text here"])
preprocessed_tweets = new_tweets.apply(preprocess_text)
# Tokenize and pad sequences as done during training
# ...
# Predict the class
predictions = model.predict(preprocessed_tweets)
predicted_classes = predictions.argmax(axis=-1)
```
## Evaluation
The model was evaluated using the following metrics:
- **Precision**: Measure of accuracy of the positive predictions.
- **Recall**: Measure of the ability to find all relevant instances.
- **F1 Score**: Harmonic mean of precision and recall.
- **Accuracy**: Overall accuracy of the model.
- **Balanced Accuracy**: Accuracy adjusted for class imbalance.
## Conclusion
This fine-tuned model provides a robust tool for classifying election-related tweets into distinct categories. It can be used to analyze public sentiment and trends during election periods, aiding in better understanding and decision-making.
## License
This project is licensed under the MIT License.
## Contact
For any questions or feedback, please contact [me] at [rendikarendi96@gmail.com].
|
DiogoF/q-FrozenLake-v1-4x4-noSlippery
|
DiogoF
| 2024-06-10T18:13:46Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-06-10T18:13:44Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the Deep RL Course notebooks (not a library import)
model = load_from_hub(repo_id="DiogoF/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
ninyx/Phi-3-mini-128k-instruct-advisegpt-v0.2
|
ninyx
| 2024-06-10T18:11:30Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:microsoft/Phi-3-mini-128k-instruct",
"base_model:adapter:microsoft/Phi-3-mini-128k-instruct",
"license:mit",
"region:us"
] | null | 2024-05-09T05:24:02Z |
---
license: mit
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: microsoft/Phi-3-mini-128k-instruct
datasets:
- generator
metrics:
- bleu
- rouge
model-index:
- name: Phi-3-mini-128k-instruct-advisegpt-v0.2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Phi-3-mini-128k-instruct-advisegpt-v0.2
This model is a fine-tuned version of [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8937
- Bleu: 0.2621 (precisions: 0.6386 / 0.3220 / 0.1941 / 0.1323; brevity penalty: 0.9720; length ratio: 0.9724; translation length: 187368; reference length: 192680)
- Rouge: rouge1 0.6264, rouge2 0.3032, rougeL 0.5023, rougeLsum 0.5017
- Exact Match: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 5
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 12
- total_train_batch_size: 60
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Rouge | Exact Match |
|:-------------:|:------:|:----:|:---------------:|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------------------:|:--------------------:|
| 1.0389 | 0.9930 | 71 | 1.8937 | {'bleu': 0.26205068002927057, 'precisions': [0.6385562102386747, 0.3220126728603845, 0.19412484437384622, 0.13232381936372636], 'brevity_penalty': 0.9720474824019883, 'length_ratio': 0.9724309736350426, 'translation_length': 187368, 'reference_length': 192680} | {'rouge1': 0.6264248496834525, 'rouge2': 0.3031545327309577, 'rougeL': 0.5022734325866114, 'rougeLsum': 0.5017276717558696} | {'exact_match': 0.0} |
| 0.7026 | 2.0 | 143 | 2.0257 | {'bleu': 0.22948697087184314, 'precisions': [0.6175561684920868, 0.2864991434080009, 0.16448293132138875, 0.10829521706024982], 'brevity_penalty': 0.9685578318352563, 'length_ratio': 0.9690419348141998, 'translation_length': 186715, 'reference_length': 192680} | {'rouge1': 0.6021744263812635, 'rouge2': 0.2645080008922339, 'rougeL': 0.47549724399365867, 'rougeLsum': 0.47563577913274346} | {'exact_match': 0.0} |
| 0.5794 | 2.9930 | 214 | 2.0827 | {'bleu': 0.22453345451779733, 'precisions': [0.6142047063979434, 0.2794608644390257, 0.15886996662779512, 0.10500024249478636], 'brevity_penalty': 0.9706541083647924, 'length_ratio': 0.9710763960971559, 'translation_length': 187107, 'reference_length': 192680} | {'rouge1': 0.5986129640494808, 'rouge2': 0.2565288834240412, 'rougeL': 0.47029440892215696, 'rougeLsum': 0.4703605206181696} | {'exact_match': 0.0} |
| 0.5107 | 4.0 | 286 | 2.0999 | {'bleu': 0.22808449006897172, 'precisions': [0.6164639351259069, 0.28426452965847815, 0.16231439361428204, 0.10640438075565883], 'brevity_penalty': 0.9724315296841323, 'length_ratio': 0.9728046501972182, 'translation_length': 187440, 'reference_length': 192680} | {'rouge1': 0.6010609898102299, 'rouge2': 0.2621809898542294, 'rougeL': 0.4728255342917802, 'rougeLsum': 0.4728531320642606} | {'exact_match': 0.0} |
| 0.4923 | 4.9930 | 357 | 2.0932 | {'bleu': 0.23027336632996132, 'precisions': [0.6166044676937471, 0.2878130430610787, 0.1642595225622989, 0.10630862410891355], 'brevity_penalty': 0.975977176959311, 'length_ratio': 0.9762611583973427, 'translation_length': 188106, 'reference_length': 192680} | {'rouge1': 0.6020695302602435, 'rouge2': 0.2657671472450324, 'rougeL': 0.47423678533654967, 'rougeLsum': 0.47426066890913565} | {'exact_match': 0.0} |
| 0.4431 | 6.0 | 429 | 2.0962 | {'bleu': 0.22873099259924137, 'precisions': [0.6169168021752459, 0.28490855532923826, 0.16326705657201365, 0.10637588763042322], 'brevity_penalty': 0.9730979379483501, 'length_ratio': 0.9734533942287731, 'translation_length': 187565, 'reference_length': 192680} | {'rouge1': 0.6015904749444395, 'rouge2': 0.26263389133741416, 'rougeL': 0.4729371282759689, 'rougeLsum': 0.4730073305944661} | {'exact_match': 0.0} |
| 0.4291 | 6.9930 | 500 | 2.0895 | {'bleu': 0.23078161525345967, 'precisions': [0.6175051285594328, 0.2861604050093259, 0.16454167512744605, 0.10739661140462743], 'brevity_penalty': 0.9762747516268988, 'length_ratio': 0.9765517957234794, 'translation_length': 188162, 'reference_length': 192680} | {'rouge1': 0.6034137320239901, 'rouge2': 0.26422178262738116, 'rougeL': 0.47430934107431466, 'rougeLsum': 0.47430902463237395} | {'exact_match': 0.0} |
| 0.4297 | 8.0 | 572 | 2.0865 | {'bleu': 0.22849194288081487, 'precisions': [0.6172627948932184, 0.28407374796552737, 0.1623422141125731, 0.10599288515917175], 'brevity_penalty': 0.9749190245343078, 'length_ratio': 0.9752283578991073, 'translation_length': 187907, 'reference_length': 192680} | {'rouge1': 0.6027503352616924, 'rouge2': 0.2615077454867606, 'rougeL': 0.47349895225288113, 'rougeLsum': 0.47352034156560674} | {'exact_match': 0.0} |
| 0.4361 | 8.9930 | 643 | 2.0832 | {'bleu': 0.2305080658084417, 'precisions': [0.6175195604418985, 0.2856609509586922, 0.16423418171705448, 0.10763603992041658], 'brevity_penalty': 0.9754508959408048, 'length_ratio': 0.9757473531243512, 'translation_length': 188007, 'reference_length': 192680} | {'rouge1': 0.6029422201953518, 'rouge2': 0.26346694480161104, 'rougeL': 0.4742809273284626, 'rougeLsum': 0.4743122502561476} | {'exact_match': 0.0} |
| 0.4423 | 9.9301 | 710 | 2.0840 | {'bleu': 0.230038020190203, 'precisions': [0.6176251608717387, 0.2855817326664036, 0.16376314072743217, 0.10700689536841428], 'brevity_penalty': 0.9756157200699793, 'length_ratio': 0.9759082416441769, 'translation_length': 188038, 'reference_length': 192680} | {'rouge1': 0.603139585918947, 'rouge2': 0.26328950362942705, 'rougeL': 0.4742788009942601, 'rougeLsum': 0.47433418479279266} | {'exact_match': 0.0} |
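The metric dictionaries in the table above follow the output format of the Hugging Face `evaluate` library. A minimal sketch of how such values can be computed (the exact evaluation pipeline used for this model is an assumption; only the metric names and result keys match the table):
```python
# Minimal sketch: producing BLEU / ROUGE / exact-match dicts like the ones in
# the table above with the `evaluate` library (inputs are hypothetical).
import evaluate

bleu = evaluate.load("bleu")
rouge = evaluate.load("rouge")
exact_match = evaluate.load("exact_match")

preds = ["a generated completion"]   # hypothetical model outputs
refs = ["the reference completion"]  # hypothetical gold references

print(bleu.compute(predictions=preds, references=[[r] for r in refs]))
print(rouge.compute(predictions=preds, references=refs))
print(exact_match.compute(predictions=preds, references=refs))
```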
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
matthieulel/vit-base-patch32-384-finetuned-galaxy10-decals
|
matthieulel
| 2024-06-10T18:10:03Z | 18 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"vision",
"generated_from_trainer",
"base_model:google/vit-base-patch32-384",
"base_model:finetune:google/vit-base-patch32-384",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-06-10T16:21:04Z |
---
license: apache-2.0
base_model: google/vit-base-patch32-384
tags:
- image-classification
- vision
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: vit-base-patch32-384-finetuned-galaxy10-decals
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch32-384-finetuned-galaxy10-decals
This model is a fine-tuned version of [google/vit-base-patch32-384](https://huggingface.co/google/vit-base-patch32-384) on the matthieulel/galaxy10_decals dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5542
- Accuracy: 0.8326
- Precision: 0.8324
- Recall: 0.8326
- F1: 0.8298
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
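```python
# A minimal TrainingArguments sketch matching the hyperparameters above;
# output_dir and any settings not listed in this card are assumptions.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="vit-base-patch32-384-finetuned-galaxy10-decals",
    learning_rate=1e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    gradient_accumulation_steps=4,  # 128 x 4 = 512 total train batch size
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=30,
)
```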
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 1.68 | 0.99 | 31 | 1.3835 | 0.5259 | 0.5014 | 0.5259 | 0.4922 |
| 0.9395 | 1.98 | 62 | 0.8286 | 0.7120 | 0.7053 | 0.7120 | 0.6986 |
| 0.7814 | 2.98 | 93 | 0.7194 | 0.7604 | 0.7515 | 0.7604 | 0.7456 |
| 0.7227 | 4.0 | 125 | 0.6271 | 0.7818 | 0.7913 | 0.7818 | 0.7743 |
| 0.6309 | 4.99 | 156 | 0.5944 | 0.7959 | 0.7959 | 0.7959 | 0.7952 |
| 0.5754 | 5.98 | 187 | 0.5448 | 0.8112 | 0.8165 | 0.8112 | 0.8087 |
| 0.5519 | 6.98 | 218 | 0.5456 | 0.8010 | 0.7990 | 0.8010 | 0.7991 |
| 0.5077 | 8.0 | 250 | 0.5458 | 0.8191 | 0.8229 | 0.8191 | 0.8160 |
| 0.5086 | 8.99 | 281 | 0.5326 | 0.8174 | 0.8181 | 0.8174 | 0.8146 |
| 0.455 | 9.98 | 312 | 0.5379 | 0.8174 | 0.8179 | 0.8174 | 0.8143 |
| 0.4532 | 10.98 | 343 | 0.5239 | 0.8247 | 0.8238 | 0.8247 | 0.8225 |
| 0.4311 | 12.0 | 375 | 0.5290 | 0.8202 | 0.8197 | 0.8202 | 0.8169 |
| 0.4399 | 12.99 | 406 | 0.5355 | 0.8236 | 0.8269 | 0.8236 | 0.8213 |
| 0.4026 | 13.98 | 437 | 0.5132 | 0.8303 | 0.8288 | 0.8303 | 0.8268 |
| 0.3964 | 14.98 | 468 | 0.5101 | 0.8269 | 0.8290 | 0.8269 | 0.8247 |
| 0.3649 | 16.0 | 500 | 0.5296 | 0.8253 | 0.8242 | 0.8253 | 0.8222 |
| 0.3353 | 16.99 | 531 | 0.5319 | 0.8236 | 0.8212 | 0.8236 | 0.8198 |
| 0.3372 | 17.98 | 562 | 0.5203 | 0.8303 | 0.8315 | 0.8303 | 0.8300 |
| 0.3281 | 18.98 | 593 | 0.5428 | 0.8315 | 0.8319 | 0.8315 | 0.8289 |
| 0.3152 | 20.0 | 625 | 0.5453 | 0.8264 | 0.8283 | 0.8264 | 0.8262 |
| 0.3016 | 20.99 | 656 | 0.5464 | 0.8224 | 0.8252 | 0.8224 | 0.8192 |
| 0.2826 | 21.98 | 687 | 0.5473 | 0.8241 | 0.8214 | 0.8241 | 0.8213 |
| 0.2832 | 22.98 | 718 | 0.5596 | 0.8275 | 0.8281 | 0.8275 | 0.8255 |
| 0.2547 | 24.0 | 750 | 0.5768 | 0.8247 | 0.8260 | 0.8247 | 0.8243 |
| 0.2682 | 24.99 | 781 | 0.5693 | 0.8230 | 0.8244 | 0.8230 | 0.8226 |
| 0.245 | 25.98 | 812 | 0.5542 | 0.8326 | 0.8324 | 0.8326 | 0.8298 |
| 0.2575 | 26.98 | 843 | 0.5665 | 0.8241 | 0.8254 | 0.8241 | 0.8234 |
| 0.2386 | 28.0 | 875 | 0.5716 | 0.8309 | 0.8314 | 0.8309 | 0.8293 |
| 0.2452 | 28.99 | 906 | 0.5659 | 0.8303 | 0.8295 | 0.8303 | 0.8279 |
| 0.2394 | 29.76 | 930 | 0.5674 | 0.8315 | 0.8313 | 0.8315 | 0.8294 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.15.1
|
XMin08/Model_Llama2_v6
|
XMin08
| 2024-06-10T18:02:18Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-10T17:58:12Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
srishma/starcoder-3b-multi-lora-tagger-edition_1.0
|
srishma
| 2024-06-10T17:57:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-10T17:51:33Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** Srishma Raparthy
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** LLM
- **Language(s) (NLP):** Python
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** StarCoder 3b
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
comaniac/Mixtral-8x7B-Instruct-v0.1-FP8-v3
|
comaniac
| 2024-06-10T17:56:55Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"fp8",
"region:us"
] |
text-generation
| 2024-06-10T17:45:10Z |
## Mixtral-8x7B-Instruct-v0.1-FP8-v3
* Weights and activations are per-tensor quantized to float8_e4m3 (see the sketch below).
* Quantized with AutoFP8, using updated activation scaling factor names.
* Calibration dataset: Ultrachat-200k
* Samples: 4096
* Sequence length: 8192
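For intuition, a minimal PyTorch (>= 2.1) sketch of per-tensor float8_e4m3 quantization as described above; this is illustrative only and does not reproduce AutoFP8's actual implementation or scale naming:
```python
# Illustrative per-tensor float8_e4m3 quantization; not AutoFP8's code.
import torch

FP8_E4M3_MAX = 448.0  # largest magnitude representable in float8_e4m3fn

def quantize_per_tensor_fp8(x: torch.Tensor):
    scale = x.abs().max() / FP8_E4M3_MAX                  # one scale per tensor
    q = (x / scale).clamp(-FP8_E4M3_MAX, FP8_E4M3_MAX).to(torch.float8_e4m3fn)
    return q, scale

def dequantize_fp8(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return q.to(torch.float32) * scale                    # approximate recovery

w = torch.randn(4096, 4096)
q, s = quantize_per_tensor_fp8(w)
print((w - dequantize_fp8(q, s)).abs().max())             # quantization error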
## Evaluation
TBA
|
sajjad55/wsdbanglat5_1e4_E4
|
sajjad55
| 2024-06-10T17:55:23Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:ka05ar/Banglat5_Ex4",
"base_model:finetune:ka05ar/Banglat5_Ex4",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-06-10T16:29:19Z |
---
base_model: ka05ar/Banglat5_Ex4
tags:
- generated_from_trainer
model-index:
- name: wsdbanglat5_1e4_E4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wsdbanglat5_1e4_E4
This model is a fine-tuned version of [ka05ar/Banglat5_Ex4](https://huggingface.co/ka05ar/Banglat5_Ex4) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0046
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.0633 | 1.0 | 1481 | 0.0339 |
| 0.0259 | 2.0 | 2962 | 0.0109 |
| 0.0192 | 3.0 | 4443 | 0.0084 |
| 0.0108 | 4.0 | 5924 | 0.0061 |
| 0.0061 | 5.0 | 7405 | 0.0048 |
| 0.0046 | 6.0 | 8886 | 0.0045 |
| 0.0042 | 7.0 | 10367 | 0.0047 |
| 0.0049 | 8.0 | 11848 | 0.0043 |
| 0.0022 | 9.0 | 13329 | 0.0044 |
| 0.0014 | 10.0 | 14810 | 0.0046 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.1.2
- Datasets 2.19.1
- Tokenizers 0.19.1
|
hdve/google-gemma-7b-1718041666
|
hdve
| 2024-06-10T17:50:44Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-10T17:47:48Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MudassirFayaz/career_councling_bart_0.1
|
MudassirFayaz
| 2024-06-10T17:50:29Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-10T17:50:28Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rahulAkaVector/modely
|
rahulAkaVector
| 2024-06-10T17:47:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-10T17:37:19Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
JaaackXD/Llama-3-70B-GGUF
|
JaaackXD
| 2024-06-10T17:46:42Z | 11 | 0 | null |
[
"safetensors",
"gguf",
"llama",
"llama-3",
"meta",
"facebook",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | 2024-05-04T03:11:04Z |
---
license: llama3
tags:
- llama
- llama-3
- meta
- facebook
- gguf
---
Converted and quantized directly into GGUF with `llama.cpp` (release tag: b2843) from Meta's 'Meta-Llama-3' repo on Hugging Face.
The original LLaMA 3 model files, cloned from the Meta HF repo, are included. (https://huggingface.co/meta-llama/Meta-Llama-3-70B)
If you have issues downloading the models from Meta or converting models for `llama.cpp`, feel free to download this one!
### How to use the `gguf-split` / Model sharding demo : https://github.com/ggerganov/llama.cpp/discussions/6404
## Perplexity table on LLaMA 3 70B
Lower perplexity is better. (credit to: [dranger003](https://github.com/ggerganov/llama.cpp/pull/6745#issuecomment-2093892514))
| Quantization | Size (GiB) | Perplexity (wiki.test) | Delta (FP16)|
|--------------|------------|------------------------|-------------|
| IQ1_S | 14.29 | 9.8655 +/- 0.0625 | 248.51% |
| IQ1_M | 15.60 | 8.5193 +/- 0.0530 | 201.94% |
| IQ2_XXS | 17.79 | 6.6705 +/- 0.0405 | 135.64% |
| IQ2_XS | 19.69 | 5.7486 +/- 0.0345 | 103.07% |
| IQ2_S | 20.71 | 5.5215 +/- 0.0318 | 95.05% |
| Q2_K_S | 22.79 | 5.4334 +/- 0.0325 | 91.94% |
| IQ2_M | 22.46 | 4.8959 +/- 0.0276 | 72.35% |
| Q2_K | 24.56 | 4.7763 +/- 0.0274 | 68.73% |
| IQ3_XXS | 25.58 | 3.9671 +/- 0.0211 | 40.14% |
| IQ3_XS | 27.29 | 3.7210 +/- 0.0191 | 31.45% |
| Q3_K_S | 28.79 | 3.6502 +/- 0.0192 | 28.95% |
| IQ3_S | 28.79 | 3.4698 +/- 0.0174 | 22.57% |
| IQ3_M | 29.74 | 3.4402 +/- 0.0171 | 21.53% |
| Q3_K_M | 31.91 | 3.3617 +/- 0.0172 | 18.75% |
| Q3_K_L | 34.59 | 3.3016 +/- 0.0168 | 16.63% |
| IQ4_XS | 35.30 | 3.0310 +/- 0.0149 | 7.07% |
| IQ4_NL | 37.30 | 3.0261 +/- 0.0149 | 6.90% |
| Q4_K_S | 37.58 | 3.0050 +/- 0.0148 | 6.15% |
| Q4_K_M | 39.60 | 2.9674 +/- 0.0146 | 4.83% |
| Q5_K_S | 45.32 | 2.8843 +/- 0.0141 | 1.89% |
| Q5_K_M | 46.52 | 2.8656 +/- 0.0139 | 1.23% |
| Q6_K | 53.91 | 2.8441 +/- 0.0138 | 0.47% |
| Q8_0 | 69.83 | 2.8316 +/- 0.0138 | 0.03% |
| F16 | 131.43 | 2.8308 +/- 0.0138 | 0.00% |
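The Delta (FP16) column is the relative perplexity increase of each quantization over the F16 baseline; the sketch below reproduces two rows of the table:
```python
# Delta (FP16): relative perplexity increase over the F16 baseline.
fp16_ppl = 2.8308

def delta_vs_fp16(ppl: float) -> float:
    return (ppl / fp16_ppl - 1.0) * 100.0

print(f"IQ1_S:  {delta_vs_fp16(9.8655):.2f}%")  # 248.51%
print(f"Q4_K_M: {delta_vs_fp16(2.9674):.2f}%")  # 4.83%
```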
**Where to send questions or comments about the model:** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
## License
See the License file for Meta Llama 3 [here](https://llama.meta.com/llama3/license/) and Acceptable Use Policy [here](https://llama.meta.com/llama3/use-policy/)
|
MohammadKarami/bloom560mtask1
|
MohammadKarami
| 2024-06-10T17:45:21Z | 104 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bloom",
"text-classification",
"generated_from_trainer",
"base_model:bigscience/bloom-560m",
"base_model:finetune:bigscience/bloom-560m",
"license:bigscience-bloom-rail-1.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-10T17:43:07Z |
---
license: bigscience-bloom-rail-1.0
base_model: bigscience/bloom-560m
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: Model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Model
This model is a fine-tuned version of [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6651
- F1: 0.8817
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.486 | 1.0 | 2500 | 0.3856 | 0.8432 |
| 0.2451 | 2.0 | 5000 | 0.4047 | 0.8704 |
| 0.0957 | 3.0 | 7500 | 0.6651 | 0.8817 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
MudassirFayaz/career_councling_bart
|
MudassirFayaz
| 2024-06-10T17:41:54Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-10T17:41:53Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Aloshe/Sheared-LLaMA-1.3B-Q8_0-GGUF
|
Aloshe
| 2024-06-10T17:41:07Z | 4 | 0 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:princeton-nlp/Sheared-LLaMA-1.3B",
"base_model:quantized:princeton-nlp/Sheared-LLaMA-1.3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-10T17:41:01Z |
---
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
base_model: princeton-nlp/Sheared-LLaMA-1.3B
---
# Aloshe/Sheared-LLaMA-1.3B-Q8_0-GGUF
This model was converted to GGUF format from [`princeton-nlp/Sheared-LLaMA-1.3B`](https://huggingface.co/princeton-nlp/Sheared-LLaMA-1.3B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/princeton-nlp/Sheared-LLaMA-1.3B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama --hf-repo Aloshe/Sheared-LLaMA-1.3B-Q8_0-GGUF --hf-file sheared-llama-1.3b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Aloshe/Sheared-LLaMA-1.3B-Q8_0-GGUF --hf-file sheared-llama-1.3b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./main --hf-repo Aloshe/Sheared-LLaMA-1.3B-Q8_0-GGUF --hf-file sheared-llama-1.3b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./server --hf-repo Aloshe/Sheared-LLaMA-1.3B-Q8_0-GGUF --hf-file sheared-llama-1.3b-q8_0.gguf -c 2048
```
|
AMfeta99/vit-base-oxford-brain-tumor
|
AMfeta99
| 2024-06-10T17:35:08Z | 196 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"dataset:Mahadih534/brain-tumor-dataset",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-06-09T17:09:11Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- image-classification
- generated_from_trainer
datasets:
- imagefolder
- Mahadih534/brain-tumor-dataset
metrics:
- accuracy
model-index:
- name: vit-base-oxford-brain-tumor
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: Mahadih534/brain-tumor-dataset
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.6923076923076923
pipeline_tag: image-classification
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-brain-tumor
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the Mahadih534/brain-tumor-dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5719
- Accuracy: 0.6923
## Model description
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224), which is a Vision Transformer (ViT)
The ViT model is a transformer encoder pre-trained on ImageNet-21k and fine-tuned on ImageNet 2012.
It was introduced in the paper "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale" by Dosovitskiy et al.
The model processes images as sequences of 16x16 patches, adding a [CLS] token for classification tasks, and uses absolute position embeddings. Pre-training enables the model to learn rich image representations, which can be leveraged for downstream tasks by adding a linear classifier on top of the [CLS] token. The weights were converted from the timm repository by Ross Wightman.
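To make the patch-sequence description concrete, here is a toy PyTorch sketch of the ViT front end; dimensions follow the base configuration, and this is an illustration rather than the model's actual code:
```python
# Toy sketch of the ViT front end: split an image into 16x16 patches,
# project them, prepend a [CLS] token, and add absolute position embeddings.
import torch
import torch.nn as nn

patch, dim, img = 16, 768, 224
n_patches = (img // patch) ** 2                      # 14 * 14 = 196

proj = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)     # patch embedding
cls_token = nn.Parameter(torch.zeros(1, 1, dim))
pos_embed = nn.Parameter(torch.zeros(1, n_patches + 1, dim))  # absolute positions

x = torch.randn(1, 3, img, img)                      # one RGB image
tokens = proj(x).flatten(2).transpose(1, 2)          # (1, 196, 768)
tokens = torch.cat([cls_token.expand(1, -1, -1), tokens], dim=1)  # prepend [CLS]
tokens = tokens + pos_embed                          # (1, 197, 768) -> encoder
```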
## Intended uses & limitations
This model is intended for the classification of brain scan images to support the diagnosis of brain tumors.
## Training and evaluation data
The model was fine-tuned on the [Mahadih534/brain-tumor-dataset](https://huggingface.co/datasets/Mahadih534/brain-tumor-dataset) dataset, which contains 253 brain images. This dataset was originally created by Yousef Ghanem.
The original dataset was split into training and evaluation subsets: 80% for training and 20% for evaluation.
For a robust evaluation of the framework, the evaluation subset is further split into two equal parts for validation and testing.
This results in three distinct datasets: training, validation, and testing.
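A minimal sketch of that split using the `datasets` library; the seed, split name, and exact procedure are assumptions:
```python
# Sketch of the 80/10/10 split described above (seed is an assumption).
from datasets import load_dataset

ds = load_dataset("Mahadih534/brain-tumor-dataset", split="train")

tmp = ds.train_test_split(test_size=0.2, seed=42)        # 80% train / 20% eval
halves = tmp["test"].train_test_split(test_size=0.5, seed=42)

train_ds, val_ds, test_ds = tmp["train"], halves["train"], halves["test"]
print(len(train_ds), len(val_ds), len(test_ds))
```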
### Training procedure/hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 20
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 11 | 0.5904 | 0.64 |
| No log | 2.0 | 22 | 0.5276 | 0.68 |
| No log | 3.0 | 33 | 0.4864 | 0.8 |
| No log | 4.0 | 44 | 0.4566 | 0.8 |
| No log | 5.0 | 55 | 0.4390 | 0.88 |
| No log | 6.0 | 66 | 0.4294 | 0.96 |
| No log | 7.0 | 77 | 0.4259 | 0.96 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
imdatta0/llama_2_13b_Magiccoder_evol_10k_reverse
|
imdatta0
| 2024-06-10T17:34:07Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"unsloth",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-13b-hf",
"base_model:adapter:meta-llama/Llama-2-13b-hf",
"license:llama2",
"region:us"
] | null | 2024-06-10T13:59:29Z |
---
license: llama2
library_name: peft
tags:
- unsloth
- generated_from_trainer
base_model: meta-llama/Llama-2-13b-hf
model-index:
- name: llama_2_13b_Magiccoder_evol_10k_reverse
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama_2_13b_Magiccoder_evol_10k_reverse
This model is a fine-tuned version of [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0887
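Since this repository ships PEFT (LoRA) adapter weights, here is a minimal loading sketch; the prompt and generation settings are illustrative:
```python
# Sketch: loading these PEFT adapter weights on top of the base model.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-13b-hf"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "imdatta0/llama_2_13b_Magiccoder_evol_10k_reverse")

inputs = tokenizer("Write a function that reverses a string.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```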
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 0.02
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.173 | 0.0262 | 4 | 1.1853 |
| 1.1716 | 0.0523 | 8 | 1.1587 |
| 1.105 | 0.0785 | 12 | 1.1410 |
| 1.0534 | 0.1047 | 16 | 1.1289 |
| 1.0911 | 0.1308 | 20 | 1.1239 |
| 1.0565 | 0.1570 | 24 | 1.1172 |
| 1.0589 | 0.1832 | 28 | 1.1140 |
| 1.1027 | 0.2093 | 32 | 1.1106 |
| 1.0379 | 0.2355 | 36 | 1.1096 |
| 1.1134 | 0.2617 | 40 | 1.1087 |
| 1.0969 | 0.2878 | 44 | 1.1049 |
| 1.1361 | 0.3140 | 48 | 1.1056 |
| 1.1121 | 0.3401 | 52 | 1.1023 |
| 1.0828 | 0.3663 | 56 | 1.1047 |
| 1.1246 | 0.3925 | 60 | 1.1027 |
| 1.1285 | 0.4186 | 64 | 1.0990 |
| 1.0788 | 0.4448 | 68 | 1.0998 |
| 1.0917 | 0.4710 | 72 | 1.0950 |
| 1.0395 | 0.4971 | 76 | 1.0977 |
| 1.1267 | 0.5233 | 80 | 1.0954 |
| 1.1414 | 0.5495 | 84 | 1.0955 |
| 1.0821 | 0.5756 | 88 | 1.0930 |
| 1.0277 | 0.6018 | 92 | 1.0908 |
| 1.0303 | 0.6280 | 96 | 1.0917 |
| 1.0947 | 0.6541 | 100 | 1.0905 |
| 1.0824 | 0.6803 | 104 | 1.0903 |
| 1.0726 | 0.7065 | 108 | 1.0912 |
| 1.1064 | 0.7326 | 112 | 1.0907 |
| 1.0467 | 0.7588 | 116 | 1.0892 |
| 1.0725 | 0.7850 | 120 | 1.0885 |
| 1.09 | 0.8111 | 124 | 1.0893 |
| 1.0506 | 0.8373 | 128 | 1.0900 |
| 0.9951 | 0.8635 | 132 | 1.0902 |
| 1.1032 | 0.8896 | 136 | 1.0895 |
| 1.0116 | 0.9158 | 140 | 1.0891 |
| 1.0683 | 0.9419 | 144 | 1.0889 |
| 1.0902 | 0.9681 | 148 | 1.0888 |
| 1.0721 | 0.9943 | 152 | 1.0887 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
NastyBaster/brelok
|
NastyBaster
| 2024-06-10T17:33:39Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2024-06-10T17:33:39Z |
---
license: apache-2.0
---
|
minaj546/CardiB
|
minaj546
| 2024-06-10T17:33:22Z | 3 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:c-uda",
"region:us"
] |
text-to-image
| 2024-06-09T18:21:28Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: "ASCII\0\0\0photo of (ohwx woman) wearing a blue hoodie, <lora:CardiB:1>"
parameters:
negative_prompt: cleavage, nsfw
output:
url: images/IMG_9585.jpeg
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: ohwx woman, ohwx
license: c-uda
---
# Cardi B SDXL LoRA
<Gallery />
## Model description
This model tested a couple of things: first, a larger training image dataset (hence the larger number of steps); second, yes, a non-1.7GB LoRA. Only future tests, outputs, and comparisons will tell whether there is any quality loss. I decided to scale down the size per high demand, speculating from preliminary observations that it hopefully won't diminish the overall quality of the model. Worth noting, if you are also training SDXL LoRAs yourself: this training method for producing ~800MB files also lowers overall GPU VRAM usage from ~18GB to ~15GB.
Most images were generated with DreamShaper XL A2 in A1111/ComfyUI, using hi-res fix with R-ESRGAN (1.25) and 0.2-0.4 denoising strength, then upscaled with "4x_NickelbackFS_72000_G" or "4x_NMKD-Siax_200k".
## Trigger words
You should use `ohwx woman` (or simply `ohwx`) to trigger the image generation.
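A minimal diffusers usage sketch; the settings are illustrative, this assumes the LoRA is stored in a diffusers-loadable format, and the card's example images were generated with DreamShaper XL rather than the base model:
```python
# Sketch: applying this LoRA to SDXL with diffusers (settings illustrative).
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("minaj546/CardiB")

image = pipe("photo of ohwx woman wearing a blue hoodie", num_inference_steps=30).images[0]
image.save("out.png")
```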
## Download model
[Download](/minaj546/CardiB/tree/main) them in the Files & versions tab.
|
SidXXD/noise2latent_vae
|
SidXXD
| 2024-06-10T17:17:14Z | 2 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"custom-diffusion",
"base_model:stabilityai/stable-diffusion-2-1-base",
"base_model:adapter:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-06-10T16:54:10Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1-base
instance_prompt: photo of a <v1*> cat
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- custom-diffusion
inference: true
---
# Custom Diffusion - SidXXD/noise2latent_vae
These are Custom Diffusion adaption weights for stabilityai/stable-diffusion-2-1-base. The weights were trained on photo of a <v1*> cat using [Custom Diffusion](https://www.cs.cmu.edu/~custom-diffusion). You can find some example images in the following.
For more details on the training, please follow [this link](https://github.com/huggingface/diffusers/blob/main/examples/custom_diffusion).
|
DBangshu/Base_New_GPT2_8
|
DBangshu
| 2024-06-10T17:16:23Z | 200 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-10T17:16:00Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
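The card leaves this section empty, but given the repo's `gpt2` / `text-generation` tags, a hedged sketch with the standard transformers pipeline would look like:
```python
from transformers import pipeline

# Assumes the repo hosts a standard GPT-2-style causal LM, as its tags suggest.
generator = pipeline("text-generation", model="DBangshu/Base_New_GPT2_8")
print(generator("Once upon a time", max_new_tokens=40)[0]["generated_text"])
```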
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
brunolaudelino/alexandre
|
brunolaudelino
| 2024-06-10T17:16:21Z | 0 | 0 | null |
[
"arxiv:1910.09700",
"region:us"
] | null | 2024-06-10T17:15:42Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
research-dump/Meta-Llama-3-8B-Instruct_mixed_sft_lexical_enhanced_no_instruction
|
research-dump
| 2024-06-10T17:10:32Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-09T23:12:45Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
neovalle/ArmoniosaaAnthea_en_es
|
neovalle
| 2024-06-10T17:07:58Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-10T17:03:35Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
haidermasood99/openhermes-mistral-dpo-gptq
|
haidermasood99
| 2024-06-10T17:07:58Z | 1 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:TheBloke/OpenHermes-2-Mistral-7B-GPTQ",
"base_model:adapter:TheBloke/OpenHermes-2-Mistral-7B-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2024-06-10T14:26:08Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- dpo
- generated_from_trainer
base_model: TheBloke/OpenHermes-2-Mistral-7B-GPTQ
model-index:
- name: openhermes-mistral-dpo-gptq
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openhermes-mistral-dpo-gptq
This model is a fine-tuned version of [TheBloke/OpenHermes-2-Mistral-7B-GPTQ](https://huggingface.co/TheBloke/OpenHermes-2-Mistral-7B-GPTQ) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7095
- Rewards/chosen: -0.1860
- Rewards/rejected: -0.3362
- Rewards/accuracies: 0.4904
- Rewards/margins: 0.1502
- Logps/rejected: -269.4139
- Logps/chosen: -269.0661
- Logits/rejected: -2.0876
- Logits/chosen: -2.1662
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- training_steps: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6952 | 0.0002 | 10 | 0.6717 | 0.1018 | 0.0250 | 0.5769 | 0.0769 | -265.8023 | -266.1874 | -2.1074 | -2.1866 |
| 0.7473 | 0.0003 | 20 | 0.6787 | 0.0390 | -0.0403 | 0.5192 | 0.0793 | -266.4547 | -266.8159 | -2.1064 | -2.1840 |
| 0.6557 | 0.0005 | 30 | 0.7320 | -0.2017 | -0.2789 | 0.4904 | 0.0772 | -268.8405 | -269.2226 | -2.0938 | -2.1716 |
| 0.8058 | 0.0007 | 40 | 0.7174 | -0.2018 | -0.3209 | 0.4808 | 0.1192 | -269.2612 | -269.2236 | -2.0878 | -2.1663 |
| 0.5939 | 0.0009 | 50 | 0.7095 | -0.1860 | -0.3362 | 0.4904 | 0.1502 | -269.4139 | -269.0661 | -2.0876 | -2.1662 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.2
- Pytorch 2.0.1+cu117
- Datasets 2.19.2
- Tokenizers 0.19.1
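The card does not include inference code; a hedged sketch of attaching this DPO-trained adapter to its GPTQ base (loading the base requires `optimum` and `auto-gptq`; whether the adapter applies cleanly to the quantized weights is an assumption):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# GPTQ base model; transformers needs optimum + auto-gptq installed to load it.
base = AutoModelForCausalLM.from_pretrained(
    "TheBloke/OpenHermes-2-Mistral-7B-GPTQ", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("TheBloke/OpenHermes-2-Mistral-7B-GPTQ")

# Attach the DPO-trained LoRA adapter from this repo.
model = PeftModel.from_pretrained(base, "haidermasood99/openhermes-mistral-dpo-gptq")
```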
|
navanth360/codegen-2b-multi-lora-tagger
|
navanth360
| 2024-06-10T17:05:02Z | 0 | 1 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-10T17:04:54Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
llama-duo/gemma2b-summarize-claude3sonnet-128k
|
llama-duo
| 2024-06-10T17:04:35Z | 12 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"gemma",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"dataset:llama-duo/synth_summarize_dataset_dedup",
"base_model:google/gemma-2b",
"base_model:adapter:google/gemma-2b",
"license:gemma",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2024-06-05T09:26:35Z |
---
license: gemma
library_name: peft
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
base_model: google/gemma-2b
datasets:
- llama-duo/synth_summarize_dataset_dedup
model-index:
- name: gemma2b-summarize-claude3sonnet-128k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemma2b-summarize-claude3sonnet-128k
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on the llama-duo/synth_summarize_dataset_dedup dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6928
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 3
- gradient_accumulation_steps: 2
- total_train_batch_size: 48
- total_eval_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0192 | 1.0 | 402 | 2.4514 |
| 0.9424 | 2.0 | 804 | 2.4604 |
| 0.8955 | 3.0 | 1206 | 2.5064 |
| 0.8659 | 4.0 | 1608 | 2.5306 |
| 0.8359 | 5.0 | 2010 | 2.5706 |
| 0.7986 | 6.0 | 2412 | 2.6196 |
| 0.7778 | 7.0 | 2814 | 2.6583 |
| 0.7562 | 8.0 | 3216 | 2.6846 |
| 0.7563 | 9.0 | 3618 | 2.6927 |
| 0.7461 | 10.0 | 4020 | 2.6928 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.2
- Pytorch 2.2.2+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
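For inference, a minimal PEFT sketch (assumes access to the gated `google/gemma-2b` base has been granted; the prompt is illustrative only):
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads google/gemma-2b and attaches this summarization adapter in one call.
model = AutoPeftModelForCausalLM.from_pretrained(
    "llama-duo/gemma2b-summarize-claude3sonnet-128k", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")

prompt = "Summarize the following text:\n..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0]))
```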
|
Rolandtester/Testlolxd
|
Rolandtester
| 2024-06-10T17:04:04Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-10T16:58:32Z |
<marquee><h1>lol you are crazy</h1></marquee>
"><img src=x onerror=alert(document.cookie)>
"><img src=x test=test>
|
Rolyaj/Clippy
|
Rolyaj
| 2024-06-10T17:02:38Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:TheMistoAI/MistoLine",
"base_model:adapter:TheMistoAI/MistoLine",
"license:unknown",
"region:us"
] |
text-to-image
| 2024-06-10T17:02:26Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: '-'
output:
url: images/IMG_0236.jpeg
base_model: TheMistoAI/MistoLine
instance_prompt: null
license: unknown
---
# Clippy Agent 0
<Gallery />
## Download model
[Download](/Rolyaj/Clippy/tree/main) them in the Files & versions tab.
|
miguelpezo/prueba2modelo3
|
miguelpezo
| 2024-06-10T17:00:30Z | 5 | 0 |
transformers
|
[
"transformers",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-10T15:43:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kvriza8/clip-microscopy-200-epoch-sem_only_vit-L-14_captions
|
kvriza8
| 2024-06-10T16:59:59Z | 2 | 0 |
open_clip
|
[
"open_clip",
"safetensors",
"clip",
"zero-shot-image-classification",
"license:mit",
"region:us"
] |
zero-shot-image-classification
| 2024-06-10T16:59:07Z |
---
tags:
- clip
library_name: open_clip
pipeline_tag: zero-shot-image-classification
license: mit
---
# Model card for clip-microscopy-200-epoch-sem_only_vit-L-14_captions
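The card is otherwise empty; based on the repo's `open_clip` / `zero-shot-image-classification` tags, a hedged usage sketch (the candidate captions are illustrative only):
```python
import torch
import open_clip
from PIL import Image

repo = "hf-hub:kvriza8/clip-microscopy-200-epoch-sem_only_vit-L-14_captions"
model, preprocess = open_clip.create_model_from_pretrained(repo)
tokenizer = open_clip.get_tokenizer(repo)

image = preprocess(Image.open("sem_image.png")).unsqueeze(0)
text = tokenizer(["an SEM image of nanoparticles", "an optical microscopy image"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Normalize, then rank captions by cosine similarity.
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)
print(probs)
```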
|
xy4286/yang-grammer-check
|
xy4286
| 2024-06-10T16:59:59Z | 116 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-10T16:59:22Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/miquplus-midnight-70b-GGUF
|
mradermacher
| 2024-06-10T16:59:13Z | 1 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-06-09T03:23:44Z |
---
base_model: jukofyork/miquplus-midnight-70b
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/jukofyork/miquplus-midnight-70b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/miquplus-midnight-70b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
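As a concrete illustration (llama-cpp-python is just one of several GGUF runtimes; the part file names below match the Q6_K entry in the table that follows), the multi-part files are plain byte splits and can be concatenated before loading:
```python
import shutil
from llama_cpp import Llama  # pip install llama-cpp-python

# Multi-part quants in this repo are simple byte splits: concatenate in order.
parts = [
    "miquplus-midnight-70b.Q6_K.gguf.part1of2",
    "miquplus-midnight-70b.Q6_K.gguf.part2of2",
]
with open("miquplus-midnight-70b.Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            shutil.copyfileobj(f, out)

llm = Llama(model_path="miquplus-midnight-70b.Q6_K.gguf", n_ctx=4096)
print(llm("Hello,", max_tokens=32)["choices"][0]["text"])
```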
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/miquplus-midnight-70b-GGUF/resolve/main/miquplus-midnight-70b.Q2_K.gguf) | Q2_K | 25.6 | |
| [GGUF](https://huggingface.co/mradermacher/miquplus-midnight-70b-GGUF/resolve/main/miquplus-midnight-70b.IQ3_XS.gguf) | IQ3_XS | 28.4 | |
| [GGUF](https://huggingface.co/mradermacher/miquplus-midnight-70b-GGUF/resolve/main/miquplus-midnight-70b.IQ3_S.gguf) | IQ3_S | 30.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/miquplus-midnight-70b-GGUF/resolve/main/miquplus-midnight-70b.Q3_K_S.gguf) | Q3_K_S | 30.0 | |
| [GGUF](https://huggingface.co/mradermacher/miquplus-midnight-70b-GGUF/resolve/main/miquplus-midnight-70b.IQ3_M.gguf) | IQ3_M | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/miquplus-midnight-70b-GGUF/resolve/main/miquplus-midnight-70b.Q3_K_M.gguf) | Q3_K_M | 33.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/miquplus-midnight-70b-GGUF/resolve/main/miquplus-midnight-70b.Q3_K_L.gguf) | Q3_K_L | 36.2 | |
| [GGUF](https://huggingface.co/mradermacher/miquplus-midnight-70b-GGUF/resolve/main/miquplus-midnight-70b.IQ4_XS.gguf) | IQ4_XS | 37.3 | |
| [GGUF](https://huggingface.co/mradermacher/miquplus-midnight-70b-GGUF/resolve/main/miquplus-midnight-70b.Q4_K_S.gguf) | Q4_K_S | 39.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/miquplus-midnight-70b-GGUF/resolve/main/miquplus-midnight-70b.Q4_K_M.gguf) | Q4_K_M | 41.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/miquplus-midnight-70b-GGUF/resolve/main/miquplus-midnight-70b.Q5_K_S.gguf) | Q5_K_S | 47.6 | |
| [GGUF](https://huggingface.co/mradermacher/miquplus-midnight-70b-GGUF/resolve/main/miquplus-midnight-70b.Q5_K_M.gguf) | Q5_K_M | 48.9 | |
| [PART 1](https://huggingface.co/mradermacher/miquplus-midnight-70b-GGUF/resolve/main/miquplus-midnight-70b.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/miquplus-midnight-70b-GGUF/resolve/main/miquplus-midnight-70b.Q6_K.gguf.part2of2) | Q6_K | 56.7 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/miquplus-midnight-70b-GGUF/resolve/main/miquplus-midnight-70b.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/miquplus-midnight-70b-GGUF/resolve/main/miquplus-midnight-70b.Q8_0.gguf.part2of2) | Q8_0 | 73.4 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Gregorig/deberta-v3-base-finetuned-subjective
|
Gregorig
| 2024-06-10T16:59:12Z | 104 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-base",
"base_model:finetune:microsoft/deberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-10T16:58:51Z |
---
license: mit
base_model: microsoft/deberta-v3-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: deberta-v3-base-finetuned-subjective
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-base-finetuned-subjective
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0300
- Accuracy: 0.54
- F1: 0.5372
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.3623 | 1.0 | 26 | 1.3154 | 0.33 | 0.1638 |
| 1.2452 | 2.0 | 52 | 1.1446 | 0.465 | 0.3791 |
| 1.1157 | 3.0 | 78 | 1.0573 | 0.53 | 0.5277 |
| 1.0187 | 4.0 | 104 | 1.0184 | 0.54 | 0.5403 |
| 0.9542 | 5.0 | 130 | 1.0300 | 0.54 | 0.5372 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Tokenizers 0.19.1
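The label set is not documented in this card, so a hedged inference sketch with the generic pipeline (label names will be whatever the training config assigned):
```python
from transformers import pipeline

# Label names depend on the (undocumented) training config of this checkpoint.
clf = pipeline("text-classification", model="Gregorig/deberta-v3-base-finetuned-subjective")
print(clf("The new interface feels much more intuitive to me."))
```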
|
medtalkai/wav2vec2-xls-r-1b-medical-domain-longer-test
|
medtalkai
| 2024-06-10T16:57:29Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-10T16:47:21Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Augusto777/vit-base-patch16-224-RU9-24
|
Augusto777
| 2024-06-10T16:54:11Z | 194 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-06-10T16:41:02Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-RU9-24
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8431372549019608
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-RU9-24
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5081
- Accuracy: 0.8431
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 24
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 8 | 1.3401 | 0.5098 |
| 1.3685 | 2.0 | 16 | 1.2193 | 0.5686 |
| 1.2413 | 3.0 | 24 | 1.1150 | 0.5882 |
| 1.1126 | 4.0 | 32 | 0.9957 | 0.7059 |
| 0.9285 | 5.0 | 40 | 0.8976 | 0.6863 |
| 0.9285 | 6.0 | 48 | 0.8580 | 0.6863 |
| 0.7793 | 7.0 | 56 | 0.8426 | 0.7647 |
| 0.6291 | 8.0 | 64 | 0.7899 | 0.6863 |
| 0.5401 | 9.0 | 72 | 0.7169 | 0.7255 |
| 0.4358 | 10.0 | 80 | 0.7505 | 0.7255 |
| 0.4358 | 11.0 | 88 | 0.8077 | 0.7059 |
| 0.3901 | 12.0 | 96 | 0.6803 | 0.7647 |
| 0.3033 | 13.0 | 104 | 0.6483 | 0.7647 |
| 0.267 | 14.0 | 112 | 0.6451 | 0.7451 |
| 0.2212 | 15.0 | 120 | 0.6119 | 0.7647 |
| 0.2212 | 16.0 | 128 | 0.6150 | 0.8039 |
| 0.2206 | 17.0 | 136 | 0.6270 | 0.7843 |
| 0.2285 | 18.0 | 144 | 0.6181 | 0.7647 |
| 0.1741 | 19.0 | 152 | 0.5081 | 0.8431 |
| 0.1708 | 20.0 | 160 | 0.5502 | 0.8235 |
| 0.1708 | 21.0 | 168 | 0.5689 | 0.8039 |
| 0.16 | 22.0 | 176 | 0.5137 | 0.8235 |
| 0.1567 | 23.0 | 184 | 0.5207 | 0.8431 |
| 0.1616 | 24.0 | 192 | 0.5375 | 0.8235 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
gglabs/Mistral-7B-FC-10-epoch
|
gglabs
| 2024-06-10T16:51:15Z | 4 | 0 |
transformers
|
[
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"base_model:quantized:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-06-10T16:33:46Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
---
# Uploaded model
- **Developed by:** gglabs
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
kajamo/model_18
|
kajamo
| 2024-06-10T16:49:40Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:adapter:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"region:us"
] | null | 2024-06-10T16:05:00Z |
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: distilbert-base-uncased
model-index:
- name: model_18
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_18
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.5829
- eval_accuracy: 0.7726
- eval_precision: 0.7726
- eval_recall: 0.7726
- eval_f1: 0.7724
- eval_runtime: 31.4425
- eval_samples_per_second: 389.441
- eval_steps_per_second: 12.181
- epoch: 5.0
- step: 7655
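Since this repository contains a PEFT adapter for `distilbert-base-uncased`, a minimal loading sketch (the reported accuracy/precision/recall/F1 suggest a sequence-classification task, but that is an assumption, as is the number of labels):
```python
from peft import AutoPeftModelForSequenceClassification
from transformers import AutoTokenizer

# Loads the base model and applies the adapter in one call.
# Assumes a sequence-classification head; adjust num_labels if the task differs.
model = AutoPeftModelForSequenceClassification.from_pretrained("kajamo/model_18")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

inputs = tokenizer("Example input text", return_tensors="pt")
prediction = model(**inputs).logits.argmax(-1)
print(prediction)
```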
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-06
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
|
SungJoo/medical-ner-koelectra
|
SungJoo
| 2024-06-10T16:45:04Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"electra",
"feature-extraction",
"medical",
"NER",
"ko",
"dataset:SungJoo/KBMC",
"arxiv:2403.16158",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2024-06-10T16:24:27Z |
---
license: apache-2.0
datasets:
- SungJoo/KBMC
language:
- ko
library_name: transformers
tags:
- medical
- NER
---
# Model Card for medical-ner-koelectra
## Model Summary
This model is a fine-tuned version of [monologg/koelectra-base-v3-discriminator](https://huggingface.co/monologg/koelectra-base-v3-discriminator).
We fine-tuned it on the [Korean Bio-Medical Corpus (KBMC)](https://huggingface.co/datasets/SungJoo/KBMC) and [Naver X Changwon Univ NER](https://ko-nlp.github.io/Korpora/ko-docs/corpuslist/naver_changwon_ner.html) datasets.
## Model Details
### Model Description
- **Developed by:** Sungjoo Byun (Grace Byun)
- **Language(s) (NLP):** Korean
- **License:** Apache 2.0
- **Finetuned from model:** monologg/koelectra-base-v3-discriminator
## Training Data
The model was trained using the dataset [Naver X Changwon Univ NER dataset](https://ko-nlp.github.io/Korpora/ko-docs/corpuslist/naver_changwon_ner.html) and [Korean Bio-Medical Corpus (KBMC)](https://huggingface.co/datasets/SungJoo/KBMC).
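A minimal usage sketch (this assumes the checkpoint exposes a token-classification head; since the repo is tagged `feature-extraction`, you may instead need to load it as a backbone and attach your own head; the Korean sentence is illustrative):
```python
from transformers import pipeline

# Token-classification pipeline with simple entity grouping.
ner = pipeline(
    "token-classification",
    model="SungJoo/medical-ner-koelectra",
    aggregation_strategy="simple",
)
print(ner("νμκ° λκ³Ό 볡ν΅μ νΈμνλ€."))  # "The patient complains of fever and abdominal pain."
```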
# Model Performance
## Overall Metrics
- **F1 Score:** 0.8886
- **Loss:** 0.2949
- **Precision:** 0.8844
- **Recall:** 0.8928
## Class-wise Performance
| Class | Precision | Recall | F1-Score | Support |
|-------------|-----------|--------|----------|---------|
| AFW | 0.6676 | 0.6326 | 0.6496 | 362 |
| ANM | 0.7476 | 0.7800 | 0.7635 | 600 |
| Body | 0.9731 | 0.9813 | 0.9772 | 1068 |
| CVL | 0.8492 | 0.8579 | 0.8536 | 4977 |
| DAT | 0.9078 | 0.9286 | 0.9181 | 2130 |
| Disease | 0.9738 | 0.9872 | 0.9805 | 2109 |
| EVT | 0.7332 | 0.7446 | 0.7389 | 1026 |
| FLD | 0.6138 | 0.6170 | 0.6154 | 188 |
| LOC | 0.8721 | 0.8691 | 0.8706 | 1734 |
| MAT | 0.5385 | 0.5000 | 0.5185 | 14 |
| NUM | 0.9227 | 0.9305 | 0.9266 | 4660 |
| ORG | 0.8917 | 0.8866 | 0.8892 | 3307 |
| PER | 0.8918 | 0.9049 | 0.8983 | 3626 |
| PLT | 0.2941 | 0.2174 | 0.2500 | 23 |
| TIM | 0.8644 | 0.9173 | 0.8901 | 278 |
| Treatment | 0.9468 | 0.9852 | 0.9656 | 271 |
## Averages
| Metric | Micro Avg | Macro Avg | Weighted Avg |
|----------------|-----------|-----------|--------------|
| Precision | 0.8844 | 0.7930 | 0.8841 |
| Recall | 0.8928 | 0.7963 | 0.8928 |
| F1-Score | 0.8886 | 0.7941 | 0.8884 |
## Citations
Please cite our KBMC paper:
```bibtex
@misc{byun2024korean,
title={Korean Bio-Medical Corpus (KBMC) for Medical Named Entity Recognition},
author={Sungjoo Byun and Jiseung Hong and Sumin Park and Dongjun Jang and Jean Seo and Minseok Kim and Chaeyoung Oh and Hyopil Shin},
year={2024},
eprint={2403.16158},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Model Card Contact
For any questions or issues, please contact byunsj@snu.ac.kr.
|
Gregorig/deberta-v3-base-finetuned
|
Gregorig
| 2024-06-10T16:38:09Z | 107 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-base",
"base_model:finetune:microsoft/deberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-04T22:55:14Z |
---
license: mit
base_model: microsoft/deberta-v3-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: deberta-v3-base-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-base-finetuned
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0005
- Accuracy: 1.0
- F1: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---:|
| 0.4571 | 1.0 | 26 | 0.0153 | 1.0 | 1.0 |
| 0.0099 | 2.0 | 52 | 0.0013 | 1.0 | 1.0 |
| 0.0024 | 3.0 | 78 | 0.0007 | 1.0 | 1.0 |
| 0.0016 | 4.0 | 104 | 0.0006 | 1.0 | 1.0 |
| 0.0014 | 5.0 | 130 | 0.0005 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Tokenizers 0.19.1
|
AsifAbrar6/mbert-bengali-qa-squad_bn-finetuned
|
AsifAbrar6
| 2024-06-10T16:31:02Z | 137 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"question-answering",
"generated_from_trainer",
"base_model:sagorsarker/mbert-bengali-tydiqa-qa",
"base_model:finetune:sagorsarker/mbert-bengali-tydiqa-qa",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-06-10T14:19:27Z |
---
license: mit
base_model: sagorsarker/mbert-bengali-tydiqa-qa
tags:
- generated_from_trainer
model-index:
- name: mbert-bengali-tydiqa-qa-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbert-bengali-tydiqa-qa-finetuned-squad
This model is a fine-tuned version of [sagorsarker/mbert-bengali-tydiqa-qa](https://huggingface.co/sagorsarker/mbert-bengali-tydiqa-qa) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0143
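A minimal question-answering sketch (the Bengali question and context are illustrative placeholders):
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="AsifAbrar6/mbert-bengali-qa-squad_bn-finetuned",
)
result = qa(
    question="ΰ¦¬ΰ¦Ύΰ¦ΰ¦²ΰ¦Ύΰ¦¦ΰ§ΰ¦Άΰ§ΰ¦° ΰ¦°ΰ¦Ύΰ¦ΰ¦§ΰ¦Ύΰ¦¨ΰ§ ΰ¦ΰ§ΰ¦¨ΰ¦ΰ¦Ώ?",  # "What is the capital of Bangladesh?"
    context="ΰ¦’ΰ¦Ύΰ¦ΰ¦Ύ ΰ¦¬ΰ¦Ύΰ¦ΰ¦²ΰ¦Ύΰ¦¦ΰ§ΰ¦Άΰ§ΰ¦° ΰ¦°ΰ¦Ύΰ¦ΰ¦§ΰ¦Ύΰ¦¨ΰ§ΰ₯€",  # "Dhaka is the capital of Bangladesh."
)
print(result["answer"])
```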
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2028 | 1.0 | 3750 | 0.9630 |
| 0.9308 | 2.0 | 7500 | 0.9966 |
| 0.6838 | 3.0 | 11250 | 1.0143 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
|
kaouthardata/results
|
kaouthardata
| 2024-06-10T16:27:47Z | 106 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:SI2M-Lab/DarijaBERT",
"base_model:finetune:SI2M-Lab/DarijaBERT",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-10T16:25:57Z |
---
base_model: SI2M-Lab/DarijaBERT
tags:
- generated_from_trainer
metrics:
- accuracy
- recall
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [SI2M-Lab/DarijaBERT](https://huggingface.co/SI2M-Lab/DarijaBERT) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5291
- Macro F1: 0.7697
- Accuracy: 0.8007
- Recall: 0.7687
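A minimal inference sketch (the label names are not documented, so outputs use whatever labels are stored in the checkpoint config; the Darija sentence is illustrative):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="kaouthardata/results")
print(clf("Ω‡Ψ§Ψ― Ψ§ΩΩΩΨͺΨ¬ Ψ²ΩΩΩ Ψ¨Ψ²Ψ§Ω"))  # Darija: "This product is very good."
```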
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Macro F1 | Accuracy | Recall |
|:-------------:|:------:|:----:|:---------------:|:--------:|:--------:|:------:|
| 0.6848 | 0.9877 | 40 | 0.6040 | 0.6869 | 0.7504 | 0.6821 |
| 0.5937 | 2.0 | 81 | 0.5376 | 0.7396 | 0.7799 | 0.7286 |
| 0.4946 | 2.9877 | 121 | 0.5313 | 0.7474 | 0.7816 | 0.7434 |
| 0.386 | 4.0 | 162 | 0.5291 | 0.7697 | 0.8007 | 0.7687 |
| 0.3114 | 4.9877 | 202 | 0.5690 | 0.7391 | 0.7782 | 0.7329 |
| 0.2477 | 6.0 | 243 | 0.5891 | 0.7480 | 0.7834 | 0.7441 |
| 0.1804 | 6.9877 | 283 | 0.6194 | 0.7422 | 0.7764 | 0.7366 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
yomilimi/gy_Jeolla_test
|
yomilimi
| 2024-06-10T16:27:01Z | 11 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:yomilimi/Gyeongsang_encoder",
"base_model:finetune:yomilimi/Gyeongsang_encoder",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-06-10T16:04:33Z |
---
license: mit
base_model: yomilimi/Gyeongsang_encoder
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: gy_Jeolla_test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gy_Jeolla_test
This model is a fine-tuned version of [yomilimi/Gyeongsang_encoder](https://huggingface.co/yomilimi/Gyeongsang_encoder) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0642
- Bleu: 80.4544
- Gen Len: 14.1795
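A minimal generation sketch (the translation direction is not documented; the model name suggests Gyeongsang-to-Jeolla dialect conversion, and the input sentence is illustrative):
```python
from transformers import pipeline

t2t = pipeline("text2text-generation", model="yomilimi/gy_Jeolla_test")
# Gyeongsang dialect for "Have you eaten?"
print(t2t("λ°₯ λ¬΄μλ?", max_length=32)[0]["generated_text"])
```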
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.0737 | 1.0 | 15477 | 0.0721 | 78.571 | 14.2081 |
| 0.0685 | 2.0 | 30954 | 0.0655 | 80.1472 | 14.1847 |
| 0.0643 | 3.0 | 46431 | 0.0642 | 80.4544 | 14.1795 |
### Framework versions
- Transformers 4.42.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
nttwt1597/test_v2_cancer_v4_500step
|
nttwt1597
| 2024-06-10T16:25:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-10T16:24:12Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** nttwt1597
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
metta-ai/baseline.v0.2.2
|
metta-ai
| 2024-06-10T16:24:52Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"region:us"
] |
reinforcement-learning
| 2024-06-10T16:24:11Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
---
A(n) **APPO** model trained on the **GDY-MettaGrid** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r metta-ai/baseline.v0.2.2
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=GDY-MettaGrid --train_dir=./train_dir --experiment=baseline.v0.2.2
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=GDY-MettaGrid --train_dir=./train_dir --experiment=baseline.v0.2.2 --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
knowledgator/t5-for-ie
|
knowledgator
| 2024-06-10T16:24:17Z | 24 | 2 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"information extraction",
"entity linking",
"named entity recogntion",
"relation extraction",
"text-to-text generation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-06-10T15:24:48Z |
---
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: text2text-generation
tags:
- information extraction
- entity linking
- named entity recogntion
- relation extraction
- text-to-text generation
---
# T5-for-information-extraction
This is an encoder-decoder model that was trained on various information extraction tasks, including text classification, named entity recognition, relation extraction and entity linking.
### How to use:
First of all, initialize the model:
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
import torch
device = torch.device("cuda") if torch.cuda.is_available() else torch.device('cpu')
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-large")
model = T5ForConditionalGeneration.from_pretrained("knowledgator/t5-for-ie").to(device)
```
Set a task prompt and pass it to the model together with the input text; below are examples for different tasks:
**named entity recognition**
```python
input_text = "Extract entity types from the text: <e1>Kyiv</e1> is the capital of <e2>Ukraine</e2>."
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(device)
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
**text classification**
```python
input_text = "Classify the following text into the most relevant categories: Kyiv is the capital of Ukraine"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(device)
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
**relation extraction**
```python
input_text = "Extract relations between entities in the text: <e1>Kyiv</e1> is the capital of <e2>Ukraine</e2>."
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(device)
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
### Unlimited-classifier
With our [unlimited-classifier](https://github.com/Knowledgator/unlimited_classifier) you can use `t5-for-ie` to classify text into millions of categories. It applies constrained generation, which is very helpful when structured, deterministic outputs are needed.
To install it, run the following command:
```bash
pip install -U unlimited-classifier
```
Right now you can try it with the following example:
```python
from unlimited_classifier import TextClassifier
labels=[
"e1 - capital of Ukraine",
"e1 - capital of Poland",
"e1 - European city",
"e1 - Asian city",
"e1 - small country"
]
classifier = TextClassifier(
labels=['default'],
model=model,
tokenizer=tokenizer,
device=device # the torch device created above, e.g. "cuda"
)
classifier.initialize_labels_trie(labels)
text = "<e1>Kyiv</e1> is the capital of <e2>Ukraine</e2>."
output = classifier.invoke(text)
print(output)
```
### Turbo T5
We recommend using this model on GPU with our [TurboT5 package](https://github.com/Knowledgator/TurboT5); it uses custom CUDA kernels that accelerate computation and allow much longer sequences.
First, install the package:
```
pip install turbot5 -U
```
Then you can import different heads for various purposes; we released encoder heads for tasks such as token classification, question answering, and text classification, as well as encoder-decoder heads for conditional generation:
```python
from turbot5 import T5ForConditionalGeneration
from turbot5 import T5Config
from transformers import T5Tokenizer
import torch
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-large")
model = T5ForConditionalGeneration.from_pretrained("knowledgator/t5-for-ie",
attention_type = 'flash', #put attention type you want to use
use_triton=True).to('cuda')
```
### Feedback
We value your input! Share your feedback and suggestions to help us improve our models.
Fill out the feedback [form](https://forms.gle/5CPFFuLzNWznjcpL7)
### Join Our Discord
Connect with our community on Discord for news, support, and discussion about our models.
Join [Discord](https://discord.gg/dkyeAgs9DG)
|
mradermacher/Wizard-Kun-Lake_3x7B-MoE-GGUF
|
mradermacher
| 2024-06-10T16:24:10Z | 2 | 1 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:WesPro/Wizard-Kun-Lake_3x7B-MoE",
"base_model:quantized:WesPro/Wizard-Kun-Lake_3x7B-MoE",
"endpoints_compatible",
"region:us"
] | null | 2024-06-09T01:52:51Z |
---
base_model: WesPro/Wizard-Kun-Lake_3x7B-MoE
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/WesPro/Wizard-Kun-Lake_3x7B-MoE
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Wizard-Kun-Lake_3x7B-MoE-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
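For a quick local test with the llama.cpp CLI (the file name matches the Q4_K_S entry in the table below; adjust to whichever quant you downloaded):
```bash
./main -m Wizard-Kun-Lake_3x7B-MoE.Q4_K_S.gguf -p "Once upon a time" -n 128
```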
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Wizard-Kun-Lake_3x7B-MoE-GGUF/resolve/main/Wizard-Kun-Lake_3x7B-MoE.Q2_K.gguf) | Q2_K | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/Wizard-Kun-Lake_3x7B-MoE-GGUF/resolve/main/Wizard-Kun-Lake_3x7B-MoE.IQ3_XS.gguf) | IQ3_XS | 7.7 | |
| [GGUF](https://huggingface.co/mradermacher/Wizard-Kun-Lake_3x7B-MoE-GGUF/resolve/main/Wizard-Kun-Lake_3x7B-MoE.Q3_K_S.gguf) | Q3_K_S | 8.1 | |
| [GGUF](https://huggingface.co/mradermacher/Wizard-Kun-Lake_3x7B-MoE-GGUF/resolve/main/Wizard-Kun-Lake_3x7B-MoE.IQ3_S.gguf) | IQ3_S | 8.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Wizard-Kun-Lake_3x7B-MoE-GGUF/resolve/main/Wizard-Kun-Lake_3x7B-MoE.IQ3_M.gguf) | IQ3_M | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/Wizard-Kun-Lake_3x7B-MoE-GGUF/resolve/main/Wizard-Kun-Lake_3x7B-MoE.Q3_K_M.gguf) | Q3_K_M | 9.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Wizard-Kun-Lake_3x7B-MoE-GGUF/resolve/main/Wizard-Kun-Lake_3x7B-MoE.Q3_K_L.gguf) | Q3_K_L | 9.7 | |
| [GGUF](https://huggingface.co/mradermacher/Wizard-Kun-Lake_3x7B-MoE-GGUF/resolve/main/Wizard-Kun-Lake_3x7B-MoE.IQ4_XS.gguf) | IQ4_XS | 10.1 | |
| [GGUF](https://huggingface.co/mradermacher/Wizard-Kun-Lake_3x7B-MoE-GGUF/resolve/main/Wizard-Kun-Lake_3x7B-MoE.Q4_K_S.gguf) | Q4_K_S | 10.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Wizard-Kun-Lake_3x7B-MoE-GGUF/resolve/main/Wizard-Kun-Lake_3x7B-MoE.Q4_K_M.gguf) | Q4_K_M | 11.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Wizard-Kun-Lake_3x7B-MoE-GGUF/resolve/main/Wizard-Kun-Lake_3x7B-MoE.Q5_K_S.gguf) | Q5_K_S | 12.9 | |
| [GGUF](https://huggingface.co/mradermacher/Wizard-Kun-Lake_3x7B-MoE-GGUF/resolve/main/Wizard-Kun-Lake_3x7B-MoE.Q5_K_M.gguf) | Q5_K_M | 13.2 | |
| [GGUF](https://huggingface.co/mradermacher/Wizard-Kun-Lake_3x7B-MoE-GGUF/resolve/main/Wizard-Kun-Lake_3x7B-MoE.Q6_K.gguf) | Q6_K | 15.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Wizard-Kun-Lake_3x7B-MoE-GGUF/resolve/main/Wizard-Kun-Lake_3x7B-MoE.Q8_0.gguf) | Q8_0 | 19.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
aman9608/en_comb_pipeline
|
aman9608
| 2024-06-10T16:22:15Z | 1 | 0 |
spacy
|
[
"spacy",
"token-classification",
"en",
"model-index",
"region:us"
] |
token-classification
| 2024-06-10T06:28:00Z |
---
tags:
- spacy
- token-classification
language:
- en
model-index:
- name: en_comb_pipeline
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.9654293323
- name: NER Recall
type: recall
value: 0.958178888
- name: NER F Score
type: f_score
value: 0.961790446
---
| Feature | Description |
| --- | --- |
| **Name** | `en_comb_pipeline` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.7.2,<3.8.0` |
| **Default Pipeline** | `tok2vec`, `ner` |
| **Components** | `tok2vec`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | n/a |
### Label Scheme
<details>
<summary>View label scheme (5 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `Other`, `allergy_name`, `cancer`, `chronic_disease`, `treatment` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 96.18 |
| `ENTS_P` | 96.54 |
| `ENTS_R` | 95.82 |
| `TOK2VEC_LOSS` | 779912.20 |
| `NER_LOSS` | 745263.98 |
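A minimal usage sketch, assuming the pipeline has been installed as a Python package (the example sentence is illustrative):
```python
import spacy

nlp = spacy.load("en_comb_pipeline")
doc = nlp("The patient has a history of chronic asthma and a penicillin allergy.")
for ent in doc.ents:
    print(ent.text, ent.label_)  # e.g. chronic_disease, allergy_name
```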
|
DBangshu/Base_New_GPT2_7
|
DBangshu
| 2024-06-10T16:21:56Z | 200 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-10T16:21:33Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kvriza8/clip-microscopy-200-epoch-sem_only_vit-L-14
|
kvriza8
| 2024-06-10T16:21:07Z | 4 | 0 |
open_clip
|
[
"open_clip",
"safetensors",
"clip",
"zero-shot-image-classification",
"license:mit",
"region:us"
] |
zero-shot-image-classification
| 2024-06-10T16:20:15Z |
---
tags:
- clip
library_name: open_clip
pipeline_tag: zero-shot-image-classification
license: mit
---
# Model card for clip-microscopy-200-epoch-sem_only_vit-L-14
|
Hanhpt23/whisper-base-engmed-v1
|
Hanhpt23
| 2024-06-10T16:20:19Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"en",
"base_model:openai/whisper-base",
"base_model:finetune:openai/whisper-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-05-31T12:35:21Z |
---
language:
- en
license: apache-2.0
base_model: openai/whisper-base
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: openai/whisper-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openai/whisper-base
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the pphuc25/EngMed dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2118
- Wer: 21.1498
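A minimal transcription sketch (`sample.wav` is a placeholder path; the pipeline needs ffmpeg to decode audio files):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Hanhpt23/whisper-base-engmed-v1",
)
print(asr("sample.wav")["text"])
```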
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.5428 | 1.0 | 323 | 0.6628 | 36.9726 |
| 0.3049 | 2.0 | 646 | 0.7340 | 25.6329 |
| 0.1478 | 3.0 | 969 | 0.8008 | 32.5422 |
| 0.0905 | 4.0 | 1292 | 0.8517 | 21.2553 |
| 0.0556 | 5.0 | 1615 | 0.9244 | 26.4241 |
| 0.0474 | 6.0 | 1938 | 0.9692 | 25.3692 |
| 0.0338 | 7.0 | 2261 | 1.0099 | 25.7384 |
| 0.0196 | 8.0 | 2584 | 1.0844 | 27.6371 |
| 0.0152 | 9.0 | 2907 | 1.1063 | 22.7848 |
| 0.0062 | 10.0 | 3230 | 1.1242 | 22.6793 |
| 0.0064 | 11.0 | 3553 | 1.1909 | 26.1076 |
| 0.0046 | 12.0 | 3876 | 1.1556 | 21.7300 |
| 0.0021 | 13.0 | 4199 | 1.1804 | 20.8861 |
| 0.0023 | 14.0 | 4522 | 1.1757 | 21.2553 |
| 0.0003 | 15.0 | 4845 | 1.2014 | 22.9430 |
| 0.0003 | 16.0 | 5168 | 1.1849 | 21.7300 |
| 0.0004 | 17.0 | 5491 | 1.1936 | 21.6245 |
| 0.0002 | 18.0 | 5814 | 1.2106 | 20.9916 |
| 0.0002 | 19.0 | 6137 | 1.2111 | 20.9388 |
| 0.0001 | 20.0 | 6460 | 1.2118 | 21.1498 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.19.1
|
Gregorig/xlm-roberta-base-finetuned
|
Gregorig
| 2024-06-10T16:19:11Z | 106 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-05T21:59:15Z |
---
license: mit
base_model: FacebookAI/xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned
This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1176
- Accuracy: 0.495
- F1: 0.4893
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.3697 | 1.0 | 26 | 1.3523 | 0.37 | 0.2659 |
| 1.3186 | 2.0 | 52 | 1.2948 | 0.36 | 0.2749 |
| 1.2448 | 3.0 | 78 | 1.1717 | 0.42 | 0.3988 |
| 1.1753 | 4.0 | 104 | 1.1279 | 0.49 | 0.4834 |
| 1.1227 | 5.0 | 130 | 1.1176 | 0.495 | 0.4893 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Tokenizers 0.19.1
|
ishjha1/bart-cnn-samsum-finetuned
|
ishjha1
| 2024-06-10T16:16:55Z | 120 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"base_model:facebook/bart-large-cnn",
"base_model:finetune:facebook/bart-large-cnn",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-06-10T16:15:21Z |
---
license: mit
base_model: facebook/bart-large-cnn
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: bart-cnn-samsum-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-cnn-samsum-finetuned
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0921
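A minimal summarization sketch on SAMSum-style dialogue (the dialogue below is the well-known SAMSum illustration, used here as a placeholder):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="ishjha1/bart-cnn-samsum-finetuned")
dialogue = """Amanda: I baked cookies. Do you want some?
Jerry: Sure!
Amanda: I'll bring you some tomorrow :-)"""
print(summarizer(dialogue)[0]["summary_text"])
```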
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0847 | 1.0 | 369 | 0.0793 |
| 0.0634 | 2.0 | 738 | 0.0779 |
| 0.0406 | 3.0 | 1107 | 0.0824 |
| 0.0348 | 4.0 | 1476 | 0.0888 |
| 0.0317 | 5.0 | 1845 | 0.0921 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
alexandrualexandru/code-llama-13b-text-to-sparql
|
alexandrualexandru
| 2024-06-10T16:15:26Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:codellama/CodeLlama-13b-hf",
"base_model:finetune:codellama/CodeLlama-13b-hf",
"license:llama2",
"region:us"
] | null | 2024-06-10T16:15:12Z |
---
license: llama2
base_model: codellama/CodeLlama-13b-hf
tags:
- generated_from_trainer
model-index:
- name: code-llama-13b-text-to-sparql
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# code-llama-13b-text-to-sparql
This model is a fine-tuned version of [codellama/CodeLlama-13b-hf](https://huggingface.co/codellama/CodeLlama-13b-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1870
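A minimal generation sketch (this assumes the repository holds full model weights loadable with `transformers`; if it contains only a LoRA adapter, load it with `peft` on top of `codellama/CodeLlama-13b-hf` instead; the prompt format is an assumption):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "alexandrualexandru/code-llama-13b-text-to-sparql"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" requires the accelerate package.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

prompt = "Translate to SPARQL: Who is the author of Don Quixote?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```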
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 400
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.1121 | 0.0710 | 20 | 1.0865 |
| 0.6067 | 0.1421 | 40 | 0.3437 |
| 0.2982 | 0.2131 | 60 | 0.2775 |
| 0.2433 | 0.2842 | 80 | 0.2465 |
| 0.2162 | 0.3552 | 100 | 0.2657 |
| 0.2308 | 0.4263 | 120 | 0.2303 |
| 0.2356 | 0.4973 | 140 | 0.2217 |
| 0.239 | 0.5684 | 160 | 0.2167 |
| 0.2159 | 0.6394 | 180 | 0.2112 |
| 0.2005 | 0.7105 | 200 | 0.2217 |
| 0.2177 | 0.7815 | 220 | 0.2070 |
| 0.2048 | 0.8526 | 240 | 0.2018 |
| 0.2092 | 0.9236 | 260 | 0.1976 |
| 0.2057 | 0.9947 | 280 | 0.1959 |
| 0.198 | 1.0657 | 300 | 0.1929 |
| 0.1988 | 1.1368 | 320 | 0.1908 |
| 0.1886 | 1.2078 | 340 | 0.1906 |
| 0.1927 | 1.2789 | 360 | 0.1883 |
| 0.1841 | 1.3499 | 380 | 0.1872 |
| 0.1863 | 1.4210 | 400 | 0.1870 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.10.1
- Tokenizers 0.19.1
|
NikolayKozloff/Llama-3-Steerpike-v1-OAS-8B-Q5_0-GGUF
|
NikolayKozloff
| 2024-06-10T16:14:43Z | 7 | 1 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:grimjim/Llama-3-Steerpike-v1-OAS-8B",
"base_model:quantized:grimjim/Llama-3-Steerpike-v1-OAS-8B",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | 2024-06-10T16:14:24Z |
---
license: llama3
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
base_model: grimjim/Llama-3-Steerpike-v1-OAS-8B
license_link: LICENSE
---
# NikolayKozloff/Llama-3-Steerpike-v1-OAS-8B-Q5_0-GGUF
This model was converted to GGUF format from [`grimjim/Llama-3-Steerpike-v1-OAS-8B`](https://huggingface.co/grimjim/Llama-3-Steerpike-v1-OAS-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/grimjim/Llama-3-Steerpike-v1-OAS-8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama --hf-repo NikolayKozloff/Llama-3-Steerpike-v1-OAS-8B-Q5_0-GGUF --hf-file llama-3-steerpike-v1-oas-8b-q5_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo NikolayKozloff/Llama-3-Steerpike-v1-OAS-8B-Q5_0-GGUF --hf-file llama-3-steerpike-v1-oas-8b-q5_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./main --hf-repo NikolayKozloff/Llama-3-Steerpike-v1-OAS-8B-Q5_0-GGUF --hf-file llama-3-steerpike-v1-oas-8b-q5_0.gguf -p "The meaning to life and the universe is"
```
or
```
./server --hf-repo NikolayKozloff/Llama-3-Steerpike-v1-OAS-8B-Q5_0-GGUF --hf-file llama-3-steerpike-v1-oas-8b-q5_0.gguf -c 2048
```
|
KuanP/baseline_2024-06-10_11-43-48_fold_3
|
KuanP
| 2024-06-10T16:14:40Z | 34 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-10T16:14:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Benphil/CoT-multiDomain-pegasus
|
Benphil
| 2024-06-10T16:13:27Z | 84 | 0 |
transformers
|
[
"transformers",
"safetensors",
"pegasus",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-06-10T16:09:23Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Gregorig/distilbert-base-uncased-finetuned
|
Gregorig
| 2024-06-10T16:13:23Z | 108 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-05T21:52:34Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0808
- Accuracy: 0.505
- F1: 0.5001
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.316 | 1.0 | 26 | 1.2469 | 0.42 | 0.3122 |
| 1.1839 | 2.0 | 52 | 1.1349 | 0.49 | 0.4587 |
| 1.0951 | 3.0 | 78 | 1.1039 | 0.485 | 0.4738 |
| 1.036 | 4.0 | 104 | 1.0734 | 0.485 | 0.4838 |
| 0.9994 | 5.0 | 130 | 1.0808 | 0.505 | 0.5001 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Tokenizers 0.19.1
|
datek/google-gemma-7b-1718035800
|
datek
| 2024-06-10T16:12:55Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-10T16:10:03Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
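In the meantime, a minimal text-generation sketch, assuming this repository contains a standard Gemma causal-LM checkpoint as its tags suggest:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "datek/google-gemma-7b-1718035800"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```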
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| Augusto777/vit-base-patch16-224-RX1-24 | Augusto777 | 2024-06-10T16:12:32Z | 195 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch16-224", "base_model:finetune:google/vit-base-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | image-classification | 2024-06-10T16:01:06Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-RX1-24
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8431372549019608
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-RX1-24
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5687
- Accuracy: 0.8431
## Model description
More information needed
## Intended uses & limitations
More information needed
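No usage example is given; a minimal image-classification sketch follows. The class labels come from the authors' undocumented `imagefolder` dataset, and the image path is a placeholder.

```python
from transformers import pipeline

# Sketch only: the class labels come from the authors' private imagefolder
# dataset and are not documented in this card.
classifier = pipeline(
    "image-classification",
    model="Augusto777/vit-base-patch16-224-RX1-24",
)
print(classifier("path/to/image.jpg"))  # placeholder image path
```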
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 24
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.93 | 7 | 1.3485 | 0.4706 |
| 1.3674 | 2.0 | 15 | 1.2284 | 0.5490 |
| 1.2414 | 2.93 | 22 | 1.1307 | 0.6471 |
| 1.1146 | 4.0 | 30 | 1.0230 | 0.6471 |
| 1.1146 | 4.93 | 37 | 0.9251 | 0.6863 |
| 0.9522 | 6.0 | 45 | 0.9122 | 0.6471 |
| 0.8247 | 6.93 | 52 | 0.9374 | 0.6275 |
| 0.6825 | 8.0 | 60 | 0.8320 | 0.6863 |
| 0.6825 | 8.93 | 67 | 0.8286 | 0.6667 |
| 0.6191 | 10.0 | 75 | 0.8418 | 0.6667 |
| 0.5312 | 10.93 | 82 | 0.7836 | 0.8235 |
| 0.454 | 12.0 | 90 | 0.7356 | 0.8039 |
| 0.454 | 12.93 | 97 | 0.6117 | 0.8235 |
| 0.3752 | 14.0 | 105 | 0.6014 | 0.8235 |
| 0.3269 | 14.93 | 112 | 0.6102 | 0.8039 |
| 0.2733 | 16.0 | 120 | 0.6404 | 0.8039 |
| 0.2733 | 16.93 | 127 | 0.5687 | 0.8431 |
| 0.2711 | 18.0 | 135 | 0.6120 | 0.8235 |
| 0.2519 | 18.93 | 142 | 0.6250 | 0.8431 |
| 0.2484 | 20.0 | 150 | 0.6086 | 0.7843 |
| 0.2484 | 20.93 | 157 | 0.6229 | 0.8235 |
| 0.2258 | 22.0 | 165 | 0.6390 | 0.7843 |
| 0.2258 | 22.4 | 168 | 0.6337 | 0.8039 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
| madiramsey/baf2b252097d46299a_example_task_example_exp | madiramsey | 2024-06-10T16:11:07Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2024-06-10T16:10:36Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
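The repository has no pipeline tag, so only a generic loading sketch is possible; inspecting the config first tells you which task-specific class to use.

```python
from transformers import AutoConfig, AutoModel

model_id = "madiramsey/baf2b252097d46299a_example_task_example_exp"

# Inspect the checkpoint before choosing a task-specific head class.
config = AutoConfig.from_pretrained(model_id)
print(config.architectures)

# Generic loading; replace AutoModel with the matching head once known.
model = AutoModel.from_pretrained(model_id)
```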
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| Gregorig/roberta-large-finetuned | Gregorig | 2024-06-10T16:10:16Z | 119 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/roberta-large", "base_model:finetune:FacebookAI/roberta-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2024-06-05T21:48:18Z |
---
license: mit
base_model: roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: roberta-large-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-finetuned
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0522
- Accuracy: 0.5
- F1: 0.4928
## Model description
More information needed
## Intended uses & limitations
More information needed
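As with the DistilBERT variant above, no usage example is given. A lower-level sketch (the label semantics depend on the unknown dataset):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "Gregorig/roberta-large-finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("An example sentence to classify.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = logits.softmax(dim=-1)  # label meanings depend on the unknown dataset
print(probs)
```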
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.3471 | 1.0 | 51 | 1.2407 | 0.395 | 0.3678 |
| 1.1707 | 2.0 | 102 | 1.0926 | 0.47 | 0.4545 |
| 1.0079 | 3.0 | 153 | 1.0522 | 0.5 | 0.4928 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Tokenizers 0.19.1
| UdS-LSV/mcse-coco-bert-base-uncased | UdS-LSV | 2024-06-10T16:09:14Z | 108 | 0 | transformers | ["transformers", "safetensors", "bert", "feature-extraction", "en", "license:mit", "endpoints_compatible", "region:us"] | feature-extraction | 2024-06-10T15:41:56Z |
---
library_name: transformers
license: mit
language:
- en
metrics:
- spearmanr
---
# MCSE: Multimodal Contrastive Learning of Sentence Embeddings (NAACL 2022)
Paper link: https://aclanthology.org/2022.naacl-main.436/
GitHub: https://github.com/uds-lsv/MCSE
Author list: Miaoran Zhang, Marius Mosbach, David Adelani, Michael Hedderich, Dietrich Klakow
## Model Details
- base model: [bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased)
- training data: Wiki1M + MS-COCO
## Evaluation Results
| STS12 | STS13 | STS14 | STS15 | STS16 | STSBenchmark | SICKRelatedness | Avg. |
|:------:|:------:|:------:|:------:|:------:|:------------:|:---------------:|:------:|
| 72.34 | 79.44 | 72.88 | 82.95 | 78.98 | 79.01 | 73.96 | 77.08 |
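The card omits usage. Since MCSE follows the SimCSE recipe, sentence embeddings can presumably be read from the `[CLS]` position; this pooling choice is an assumption, so consult the GitHub repository above for the authors' exact pooler.

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "UdS-LSV/mcse-coco-bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

sentences = ["A man is playing a guitar.", "Someone strums a guitar."]
batch = tokenizer(sentences, padding=True, return_tensors="pt")
with torch.no_grad():
    # Assumed [CLS] pooling, as in SimCSE; see the MCSE repo for specifics.
    embeddings = model(**batch).last_hidden_state[:, 0]
sim = torch.cosine_similarity(embeddings[0], embeddings[1], dim=0)
print(sim.item())
```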
| Benphil/CoT-multiDomain-Summ | Benphil | 2024-06-10T16:08:30Z | 6 | 0 | transformers | ["transformers", "safetensors", "pegasus", "text2text-generation", "generated_from_trainer", "base_model:google/pegasus-cnn_dailymail", "base_model:finetune:google/pegasus-cnn_dailymail", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2024-06-10T09:43:33Z |
---
base_model: google/pegasus-cnn_dailymail
tags:
- generated_from_trainer
model-index:
- name: CoT-multiDomain-Summ
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CoT-multiDomain-Summ
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1456
## Model description
More information needed
## Intended uses & limitations
More information needed
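No usage example is given; a minimal summarization sketch (the generation lengths are illustrative):

```python
from transformers import pipeline

# Usage sketch only; the fine-tuning domains are not documented here.
summarizer = pipeline("summarization", model="Benphil/CoT-multiDomain-Summ")
article = "Long input document to be summarized ..."  # placeholder text
print(summarizer(article, max_length=128, min_length=32)[0]["summary_text"])
```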
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 438 | 1.2374 |
| 4.18 | 2.0 | 876 | 1.1642 |
| 1.1654 | 3.0 | 1314 | 1.1482 |
| 1.0725 | 4.0 | 1752 | 1.1456 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu118
- Datasets 2.19.1
- Tokenizers 0.19.1
| UdS-LSV/mcse-flickr-roberta-base | UdS-LSV | 2024-06-10T16:07:55Z | 105 | 0 | transformers | ["transformers", "safetensors", "roberta", "feature-extraction", "en", "license:mit", "endpoints_compatible", "region:us"] | feature-extraction | 2024-06-10T15:40:34Z |
---
library_name: transformers
license: mit
language:
- en
metrics:
- spearmanr
---
# MCSE: Multimodal Contrastive Learning of Sentence Embeddings (NAACL 2022)
Paper link: https://aclanthology.org/2022.naacl-main.436/
GitHub: https://github.com/uds-lsv/MCSE
Author list: Miaoran Zhang, Marius Mosbach, David Adelani, Michael Hedderich, Dietrich Klakow
## Model Details
- base model: [roberta-base](https://huggingface.co/FacebookAI/roberta-base)
- training data: Wiki1M + Flickr30k
## Evaluation Results
| STS12 | STS13 | STS14 | STS15 | STS16 | STSBenchmark | SICKRelatedness | Avg. |
|:------:|:------:|:------:|:------:|:------:|:------------:|:---------------:|:------:|
| 71.74 | 82.60 | 75.67 | 84.49 | 80.74 | 81.52 | 72.30 | 78.44 |
| llama-duo/gemma2b-summarize-gemini1_5flash-256k | llama-duo | 2024-06-10T16:06:34Z | 10 | 0 | peft | ["peft", "tensorboard", "safetensors", "gemma", "alignment-handbook", "trl", "sft", "generated_from_trainer", "dataset:llama-duo/synth_summarize_dataset_dedup", "base_model:google/gemma-2b", "base_model:adapter:google/gemma-2b", "license:gemma", "4-bit", "bitsandbytes", "region:us"] | null | 2024-06-05T10:15:18Z |
---
license: gemma
library_name: peft
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
base_model: google/gemma-2b
datasets:
- llama-duo/synth_summarize_dataset_dedup
model-index:
- name: gemma2b-summarize-gemini1_5flash-256k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemma2b-summarize-gemini1_5flash-256k
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on the llama-duo/synth_summarize_dataset_dedup dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5681
## Model description
More information needed
## Intended uses & limitations
More information needed
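This repository stores a PEFT adapter on top of `google/gemma-2b` rather than full model weights; a minimal loading sketch follows (the prompt format is an assumption):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "google/gemma-2b"
adapter_id = "llama-duo/gemma2b-summarize-gemini1_5flash-256k"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the adapter

prompt = "Summarize the following text:\n..."  # assumed prompt format
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0]))
```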
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.0246 | 0.9976 | 207 | 2.4550 |
| 0.9556 | 2.0 | 415 | 2.4530 |
| 0.9114 | 2.9976 | 622 | 2.4641 |
| 0.8927 | 4.0 | 830 | 2.4882 |
| 0.8752 | 4.9976 | 1037 | 2.5081 |
| 0.8602 | 6.0 | 1245 | 2.5277 |
| 0.8464 | 6.9976 | 1452 | 2.5513 |
| 0.8353 | 8.0 | 1660 | 2.5615 |
| 0.8267 | 8.9976 | 1867 | 2.5674 |
| 0.8289 | 9.9976 | 2070 | 2.5681 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.2
- Pytorch 2.3.1+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
| UdS-LSV/mcse-flickr-bert-base-uncased | UdS-LSV | 2024-06-10T16:05:44Z | 106 | 0 | transformers | ["transformers", "safetensors", "bert", "feature-extraction", "en", "license:mit", "text-embeddings-inference", "endpoints_compatible", "region:us"] | feature-extraction | 2024-06-10T15:22:23Z |
---
library_name: transformers
license: mit
language:
- en
metrics:
- spearmanr
---
# MCSE: Multimodal Contrastive Learning of Sentence Embeddings (NAACL 2022)
Paper link: https://aclanthology.org/2022.naacl-main.436/
GitHub: https://github.com/uds-lsv/MCSE
Author list: Miaoran Zhang, Marius Mosbach, David Adelani, Michael Hedderich, Dietrich Klakow
## Model Details
- base model: [bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased)
- training data: Wiki1M + Flickr30k
## Evaluation Results
| STS12 | STS13 | STS14 | STS15 | STS16 | STSBenchmark | SICKRelatedness | Avg. |
|:------:|:------:|:------:|:------:|:------:|:------------:|:---------------:|:------:|
| 71.63 | 82.13 | 75.94 | 84.63 | 77.50 | 79.96 | 72.12 | 77.70 |
| MeNeIaus/distilbert-base-uncased-finetuned-ner | MeNeIaus | 2024-06-10T16:00:41Z | 106 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "distilbert", "token-classification", "generated_from_trainer", "dataset:conll2003", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2024-06-10T15:52:35Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9267995570321151
- name: Recall
type: recall
value: 0.9362344781295447
- name: F1
type: f1
value: 0.9314931270521453
- name: Accuracy
type: accuracy
value: 0.9835258233116749
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0606
- Precision: 0.9268
- Recall: 0.9362
- F1: 0.9315
- Accuracy: 0.9835
## Model description
More information needed
## Intended uses & limitations
More information needed
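No usage example is given; a minimal sketch with the π€ `pipeline` API, where `aggregation_strategy` merges word pieces into entity spans:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="MeNeIaus/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)
print(ner("Hugging Face is based in New York City."))
```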
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.246 | 1.0 | 878 | 0.0714 | 0.9025 | 0.9172 | 0.9098 | 0.9793 |
| 0.0514 | 2.0 | 1756 | 0.0603 | 0.9254 | 0.9332 | 0.9293 | 0.9830 |
| 0.0305 | 3.0 | 2634 | 0.0606 | 0.9268 | 0.9362 | 0.9315 | 0.9835 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
| lightsource/yappy-fine-tuned-opus-mt-ru-en | lightsource | 2024-06-10T16:00:05Z | 106 | 0 | transformers | ["transformers", "safetensors", "marian", "text2text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2024-06-10T15:58:07Z |
---
license: apache-2.0
---
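The card is otherwise empty. Judging from the model name, this is a fine-tuned `Helsinki-NLP/opus-mt-ru-en` checkpoint, so the standard MarianMT usage presumably applies (an assumption; verify against the repository files):

```python
from transformers import MarianMTModel, MarianTokenizer

model_id = "lightsource/yappy-fine-tuned-opus-mt-ru-en"
tokenizer = MarianTokenizer.from_pretrained(model_id)
model = MarianMTModel.from_pretrained(model_id)

# Translate a short Russian sentence to English.
batch = tokenizer(["ΠΡΠΈΠ²Π΅Ρ, ΠΌΠΈΡ!"], return_tensors="pt", padding=True)
print(tokenizer.decode(model.generate(**batch)[0], skip_special_tokens=True))
```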
|