| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
| string (length 5–139) | string (length 2–42) | timestamp[us, tz=UTC] (2020-02-15 11:33:14 to 2025-09-04 00:37:20) | int64 (0 to 223M) | int64 (0 to 11.7k) | string (537 classes) | list (length 1 to 4.05k) | string (55 classes) | timestamp[us, tz=UTC] (2022-03-02 23:29:04 to 2025-09-04 00:37:04) | string (length 11 to 1.01M) |
Fufka/Kunoichi-zephyr-pl-7B
|
Fufka
| 2024-02-03T14:32:53Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"SanjiWatsuki/Kunoichi-DPO-v2-7B",
"Nondzu/zephyr-7b-beta-pl",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-03T14:27:16Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- SanjiWatsuki/Kunoichi-DPO-v2-7B
- Nondzu/zephyr-7b-beta-pl
---
# Kunoichi-zephyr-pl-7B
Kunoichi-zephyr-pl-7B is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B)
* [Nondzu/zephyr-7b-beta-pl](https://huggingface.co/Nondzu/zephyr-7b-beta-pl)
## 🧩 Configuration
```yaml
slices:
  - sources:
      - model: SanjiWatsuki/Kunoichi-DPO-v2-7B
        layer_range: [0, 32]
  - sources:
      - model: Nondzu/zephyr-7b-beta-pl
        layer_range: [24, 32]
merge_method: passthrough
dtype: bfloat16
```
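## 💻 Usage
The original card stops at the configuration above. The sketch below is an illustrative addition, assuming the merged checkpoint loads like any Mistral-based causal LM on the Hub; it is not code from the card.
```python
# Illustrative usage sketch (not from the original card); assumes the merge
# behaves like a standard transformers causal LM.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Fufka/Kunoichi-zephyr-pl-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

inputs = tokenizer("Opowiedz krótką historię o smoku.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```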
|
wcyat/whisper-small-cantomap
|
wcyat
| 2024-02-03T14:29:30Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-02-03T12:04:46Z |
---
license: apache-2.0
tags:
- generated_from_trainer
base_model: openai/whisper-small
model-index:
- name: whisper-small-cantomap
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-cantomap
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.3636
- eval_cer: 24.8193
- eval_runtime: 303.246
- eval_samples_per_second: 1.725
- eval_steps_per_second: 0.109
- epoch: 3.89
- step: 1143
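For reference, a minimal transcription sketch (an illustrative addition, not part of the original card; the audio path is a placeholder):
```python
# Illustrative sketch: transcribe a local audio file with the fine-tuned
# checkpoint via the ASR pipeline ("sample.wav" is a placeholder path).
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="wcyat/whisper-small-cantomap")
print(asr("sample.wav")["text"])
```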
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
Shymaa33/whisper-small-ar-translation
|
Shymaa33
| 2024-02-03T14:28:32Z | 0 | 0 | null |
[
"automatic-speech-recognition",
"dataset:mozilla-foundation/common_voice_11_0",
"region:us"
] |
automatic-speech-recognition
| 2024-02-03T08:26:47Z |
---
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
pipeline_tag: automatic-speech-recognition
---
|
avinasht/AugWordNet_BERT_FPB_finetuned
|
avinasht
| 2024-02-03T14:24:13Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-03T14:23:59Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: AugWordNet_BERT_FPB_finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# AugWordNet_BERT_FPB_finetuned
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3789
- Accuracy: 0.9097
- F1: 0.9100
- Precision: 0.9140
- Recall: 0.9097
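As an illustration (not part of the original card), the classifier can be queried with the `text-classification` pipeline; the label names returned depend on the checkpoint's config:
```python
# Illustrative sketch: score a sentence with the fine-tuned classifier.
from transformers import pipeline

clf = pipeline("text-classification", model="avinasht/AugWordNet_BERT_FPB_finetuned")
print(clf("Quarterly revenue rose sharply compared to last year."))
```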
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.8426 | 1.0 | 91 | 0.7693 | 0.6978 | 0.6777 | 0.6887 | 0.6978 |
| 0.4269 | 2.0 | 182 | 0.3264 | 0.8816 | 0.8803 | 0.8820 | 0.8816 |
| 0.3055 | 3.0 | 273 | 0.2990 | 0.8832 | 0.8838 | 0.8888 | 0.8832 |
| 0.2135 | 4.0 | 364 | 0.3049 | 0.9003 | 0.8998 | 0.9006 | 0.9003 |
| 0.1275 | 5.0 | 455 | 0.3764 | 0.8801 | 0.8786 | 0.8839 | 0.8801 |
| 0.1033 | 6.0 | 546 | 0.3393 | 0.9019 | 0.9007 | 0.9048 | 0.9019 |
| 0.0635 | 7.0 | 637 | 0.3829 | 0.9081 | 0.9079 | 0.9082 | 0.9081 |
| 0.0657 | 8.0 | 728 | 0.4759 | 0.8972 | 0.8958 | 0.8986 | 0.8972 |
| 0.0548 | 9.0 | 819 | 0.3789 | 0.9097 | 0.9100 | 0.9140 | 0.9097 |
| 0.0695 | 10.0 | 910 | 0.4797 | 0.8894 | 0.8876 | 0.8979 | 0.8894 |
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.1
|
laterano/my_awesome_billsum_model
|
laterano
| 2024-02-03T14:21:15Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-02-03T14:12:12Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: my_awesome_billsum_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5251
- Rouge1: 0.1377
- Rouge2: 0.049
- Rougel: 0.115
- Rougelsum: 0.1147
- Gen Len: 19.0
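As an illustration (not part of the original card), summaries can be generated with the `summarization` pipeline; T5 checkpoints typically expect the `summarize:` prefix:
```python
# Illustrative sketch: summarize a passage with the fine-tuned T5 checkpoint.
from transformers import pipeline

summarizer = pipeline("summarization", model="laterano/my_awesome_billsum_model")
text = "summarize: The bill establishes a grant program to help state agencies modernize aging water infrastructure and sets reporting requirements for recipients."
print(summarizer(text, max_length=60, min_length=10, do_sample=False)[0]["summary_text"])
```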
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.8191 | 0.1225 | 0.0361 | 0.1053 | 0.1053 | 19.0 |
| No log | 2.0 | 124 | 2.6058 | 0.134 | 0.0461 | 0.112 | 0.1118 | 19.0 |
| No log | 3.0 | 186 | 2.5421 | 0.1368 | 0.0499 | 0.1143 | 0.1141 | 19.0 |
| No log | 4.0 | 248 | 2.5251 | 0.1377 | 0.049 | 0.115 | 0.1147 | 19.0 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
WGlint/SD_UI
|
WGlint
| 2024-02-03T14:17:18Z | 0 | 0 | null |
[
"arxiv:2211.06679",
"region:us"
] | null | 2024-02-03T13:58:59Z |
# Stable Diffusion web UI
A browser interface based on Gradio library for Stable Diffusion.

## Features
[Detailed feature showcase with images](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features):
- Original txt2img and img2img modes
- One click install and run script (but you still must install python and git)
- Outpainting
- Inpainting
- Color Sketch
- Prompt Matrix
- Stable Diffusion Upscale
- Attention, specify parts of text that the model should pay more attention to
- a man in a `((tuxedo))` - will pay more attention to tuxedo
- a man in a `(tuxedo:1.21)` - alternative syntax
- select text and press `Ctrl+Up` or `Ctrl+Down` (or `Command+Up` or `Command+Down` if you're on macOS) to automatically adjust attention to selected text (code contributed by anonymous user)
- Loopback, run img2img processing multiple times
- X/Y/Z plot, a way to draw a 3 dimensional plot of images with different parameters
- Textual Inversion
- have as many embeddings as you want and use any names you like for them
- use multiple embeddings with different numbers of vectors per token
- works with half precision floating point numbers
- train embeddings on 8GB (also reports of 6GB working)
- Extras tab with:
- GFPGAN, neural network that fixes faces
- CodeFormer, face restoration tool as an alternative to GFPGAN
- RealESRGAN, neural network upscaler
- ESRGAN, neural network upscaler with a lot of third party models
- SwinIR and Swin2SR ([see here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/2092)), neural network upscalers
- LDSR, Latent diffusion super resolution upscaling
- Resizing aspect ratio options
- Sampling method selection
- Adjust sampler eta values (noise multiplier)
- More advanced noise setting options
- Interrupt processing at any time
- 4GB video card support (also reports of 2GB working)
- Correct seeds for batches
- Live prompt token length validation
- Generation parameters
- parameters you used to generate images are saved with that image
- in PNG chunks for PNG, in EXIF for JPEG
- can drag the image to PNG info tab to restore generation parameters and automatically copy them into UI
- can be disabled in settings
- drag and drop an image/text-parameters to promptbox
- Read Generation Parameters Button, loads parameters in promptbox to UI
- Settings page
- Running arbitrary python code from UI (must run with `--allow-code` to enable)
- Mouseover hints for most UI elements
- Possible to change defaults/min/max/step values for UI elements via text config
- Tiling support, a checkbox to create images that can be tiled like textures
- Progress bar and live image generation preview
- Can use a separate neural network to produce previews with almost no VRAM or compute requirement
- Negative prompt, an extra text field that allows you to list what you don't want to see in the generated image
- Styles, a way to save part of prompt and easily apply them via dropdown later
- Variations, a way to generate same image but with tiny differences
- Seed resizing, a way to generate same image but at slightly different resolution
- CLIP interrogator, a button that tries to guess prompt from an image
- Prompt Editing, a way to change prompt mid-generation, say to start making a watermelon and switch to anime girl midway
- Batch Processing, process a group of files using img2img
- Img2img Alternative, reverse Euler method of cross attention control
- Highres Fix, a convenience option to produce high resolution pictures in one click without usual distortions
- Reloading checkpoints on the fly
- Checkpoint Merger, a tab that allows you to merge up to 3 checkpoints into one
- [Custom scripts](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Custom-Scripts) with many extensions from community
- [Composable-Diffusion](https://energy-based-model.github.io/Compositional-Visual-Generation-with-Composable-Diffusion-Models/), a way to use multiple prompts at once
- separate prompts using uppercase `AND`
- also supports weights for prompts: `a cat :1.2 AND a dog AND a penguin :2.2`
- No token limit for prompts (original stable diffusion lets you use up to 75 tokens)
- DeepDanbooru integration, creates danbooru style tags for anime prompts
- [xformers](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Xformers), major speed increase for select cards: (add `--xformers` to commandline args)
- via extension: [History tab](https://github.com/yfszzx/stable-diffusion-webui-images-browser): view, direct and delete images conveniently within the UI
- Generate forever option
- Training tab
- hypernetworks and embeddings options
- Preprocessing images: cropping, mirroring, autotagging using BLIP or deepdanbooru (for anime)
- Clip skip
- Hypernetworks
- Loras (same as Hypernetworks but more pretty)
- A separate UI where you can choose, with preview, which embeddings, hypernetworks or Loras to add to your prompt
- Can select to load a different VAE from settings screen
- Estimated completion time in progress bar
- API (a minimal example request is shown after this feature list)
- Support for dedicated [inpainting model](https://github.com/runwayml/stable-diffusion#inpainting-with-stable-diffusion) by RunwayML
- via extension: [Aesthetic Gradients](https://github.com/AUTOMATIC1111/stable-diffusion-webui-aesthetic-gradients), a way to generate images with a specific aesthetic by using clip images embeds (implementation of [https://github.com/vicgalle/stable-diffusion-aesthetic-gradients](https://github.com/vicgalle/stable-diffusion-aesthetic-gradients))
- [Stable Diffusion 2.0](https://github.com/Stability-AI/stablediffusion) support - see [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#stable-diffusion-20) for instructions
- [Alt-Diffusion](https://arxiv.org/abs/2211.06679) support - see [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#alt-diffusion) for instructions
- Now without any bad letters!
- Load checkpoints in safetensors format
- Eased resolution restriction: generated image's dimension must be a multiple of 8 rather than 64
- Now with a license!
- Reorder elements in the UI from settings screen
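For the API item above, a minimal request sketch (an illustrative addition to this README copy, assuming the UI was launched with the `--api` flag and is listening on the default local address):
```python
# Illustrative sketch: call the txt2img endpoint of a locally running web UI
# started with --api and save the first returned image.
import base64
import requests

payload = {"prompt": "a red bicycle leaning against a brick wall", "steps": 20}
r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload, timeout=300)
r.raise_for_status()

with open("txt2img_out.png", "wb") as f:
    f.write(base64.b64decode(r.json()["images"][0]))
```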
## Installation and Running
Make sure the required [dependencies](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Dependencies) are met and follow the instructions available for:
- [NVidia](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-NVidia-GPUs) (recommended)
- [AMD](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs) GPUs.
- [Intel CPUs, Intel GPUs (both integrated and discrete)](https://github.com/openvinotoolkit/stable-diffusion-webui/wiki/Installation-on-Intel-Silicon) (external wiki page)
Alternatively, use online services (like Google Colab):
- [List of Online Services](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Online-Services)
### Installation on Windows 10/11 with NVidia-GPUs using release package
1. Download `sd.webui.zip` from [v1.0.0-pre](https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases/tag/v1.0.0-pre) and extract its contents.
2. Run `update.bat`.
3. Run `run.bat`.
> For more details see [Install-and-Run-on-NVidia-GPUs](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-NVidia-GPUs)
### Automatic Installation on Windows
1. Install [Python 3.10.6](https://www.python.org/downloads/release/python-3106/) (newer versions of Python do not support torch), checking "Add Python to PATH".
2. Install [git](https://git-scm.com/download/win).
3. Download the stable-diffusion-webui repository, for example by running `git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git`.
4. Run `webui-user.bat` from Windows Explorer as a normal, non-administrator user.
### Automatic Installation on Linux
1. Install the dependencies:
```bash
# Debian-based:
sudo apt install wget git python3 python3-venv libgl1 libglib2.0-0
# Red Hat-based:
sudo dnf install wget git python3
# Arch-based:
sudo pacman -S wget git python3
```
2. Navigate to the directory you would like the webui to be installed in and execute the following command:
```bash
wget -q https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/webui.sh
```
3. Run `webui.sh`.
4. Check `webui-user.sh` for options.
### Installation on Apple Silicon
Find the instructions [here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Installation-on-Apple-Silicon).
## Contributing
Here's how to add code to this repo: [Contributing](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing)
## Documentation
The documentation was moved from this README over to the project's [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki).
For the purposes of getting Google and other search engines to crawl the wiki, here's a link to the (not for humans) [crawlable wiki](https://github-wiki-see.page/m/AUTOMATIC1111/stable-diffusion-webui/wiki).
## Credits
Licenses for borrowed code can be found in `Settings -> Licenses` screen, and also in `html/licenses.html` file.
- Stable Diffusion - https://github.com/CompVis/stable-diffusion, https://github.com/CompVis/taming-transformers
- k-diffusion - https://github.com/crowsonkb/k-diffusion.git
- GFPGAN - https://github.com/TencentARC/GFPGAN.git
- CodeFormer - https://github.com/sczhou/CodeFormer
- ESRGAN - https://github.com/xinntao/ESRGAN
- SwinIR - https://github.com/JingyunLiang/SwinIR
- Swin2SR - https://github.com/mv-lab/swin2sr
- LDSR - https://github.com/Hafiidz/latent-diffusion
- MiDaS - https://github.com/isl-org/MiDaS
- Ideas for optimizations - https://github.com/basujindal/stable-diffusion
- Cross Attention layer optimization - Doggettx - https://github.com/Doggettx/stable-diffusion, original idea for prompt editing.
- Cross Attention layer optimization - InvokeAI, lstein - https://github.com/invoke-ai/InvokeAI (originally http://github.com/lstein/stable-diffusion)
- Sub-quadratic Cross Attention layer optimization - Alex Birch (https://github.com/Birch-san/diffusers/pull/1), Amin Rezaei (https://github.com/AminRezaei0x443/memory-efficient-attention)
- Textual Inversion - Rinon Gal - https://github.com/rinongal/textual_inversion (we're not using his code, but we are using his ideas).
- Idea for SD upscale - https://github.com/jquesnelle/txt2imghd
- Noise generation for outpainting mk2 - https://github.com/parlance-zz/g-diffuser-bot
- CLIP interrogator idea and borrowing some code - https://github.com/pharmapsychotic/clip-interrogator
- Idea for Composable Diffusion - https://github.com/energy-based-model/Compositional-Visual-Generation-with-Composable-Diffusion-Models-PyTorch
- xformers - https://github.com/facebookresearch/xformers
- DeepDanbooru - interrogator for anime diffusers https://github.com/KichangKim/DeepDanbooru
- Sampling in float32 precision from a float16 UNet - marunine for the idea, Birch-san for the example Diffusers implementation (https://github.com/Birch-san/diffusers-play/tree/92feee6)
- Instruct pix2pix - Tim Brooks (star), Aleksander Holynski (star), Alexei A. Efros (no star) - https://github.com/timothybrooks/instruct-pix2pix
- Security advice - RyotaK
- UniPC sampler - Wenliang Zhao - https://github.com/wl-zhao/UniPC
- TAESD - Ollin Boer Bohan - https://github.com/madebyollin/taesd
- LyCORIS - KohakuBlueleaf
- Restart sampling - lambertae - https://github.com/Newbeeer/diffusion_restart_sampling
- Initial Gradio script - posted on 4chan by an Anonymous user. Thank you Anonymous user.
- (You)
|
theZoo/Reinforce-1
|
theZoo
| 2024-02-03T14:14:19Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-03T14:14:10Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 404.30 +/- 191.40
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
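For readers unfamiliar with the algorithm, a generic REINFORCE training sketch on CartPole-v1 follows; it is illustrative only and is not this repository's training code:
```python
# Generic REINFORCE (policy gradient) sketch for CartPole-v1 -- illustrative,
# not the code used to train this checkpoint.
import gymnasium as gym
import torch
import torch.nn as nn

env = gym.make("CartPole-v1")
policy = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
gamma = 0.99

for episode in range(500):
    obs, _ = env.reset()
    log_probs, rewards, done = [], [], False
    while not done:
        dist = torch.distributions.Categorical(logits=policy(torch.as_tensor(obs, dtype=torch.float32)))
        action = dist.sample()
        obs, reward, terminated, truncated, _ = env.step(action.item())
        done = terminated or truncated
        log_probs.append(dist.log_prob(action))
        rewards.append(reward)
    returns, g = [], 0.0
    for r in reversed(rewards):  # discounted returns, computed backwards
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    loss = -(torch.stack(log_probs) * returns).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```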
|
imsanjoykb/mistral_7b_guanaco
|
imsanjoykb
| 2024-02-03T13:58:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-02T19:36:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
fia24/sentenec30kv2
|
fia24
| 2024-02-03T13:56:07Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:csebuetnlp/banglat5",
"base_model:finetune:csebuetnlp/banglat5",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-02-03T12:23:05Z |
---
base_model: csebuetnlp/banglat5
tags:
- generated_from_trainer
model-index:
- name: sentenec30kv2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentenec30kv2
This model is a fine-tuned version of [csebuetnlp/banglat5](https://huggingface.co/csebuetnlp/banglat5) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.1100
- eval_bleu: 88.2797
- eval_Val Accuracy: 0.683
- eval_Word_accuracy: 0.9845
- eval_gen_len: 14.3727
- eval_runtime: 141.8398
- eval_samples_per_second: 21.151
- eval_steps_per_second: 1.325
- epoch: 9.0
- step: 4005
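As an illustration (not part of the original card), inference follows the usual seq2seq pattern; the input sentence below is a placeholder:
```python
# Illustrative sketch: run the fine-tuned BanglaT5 checkpoint on a sentence.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "fia24/sentenec30kv2"
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)  # mirrors the base banglat5 example
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("<Bangla input sentence here>", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```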
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 54
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200
### Framework versions
- Transformers 4.35.2
- Pytorch 1.13.1+cu117
- Datasets 2.16.1
- Tokenizers 0.15.1
|
qilowoq/paraphrase-multilingual-mpnet-base-v2-en-ru
|
qilowoq
| 2024-02-03T13:49:55Z | 12 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"ru",
"en",
"arxiv:1908.10084",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-02-02T20:19:55Z |
---
language: ["ru", "en"]
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- feature-extraction
- sentence-similarity
- transformers
---
# Model for English and Russian
This is a truncated version of [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2).
Only English and Russian tokens are kept in the vocabulary, making the model roughly half the size of the original while producing the same embeddings.
The model maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('qilowoq/paraphrase-multilingual-mpnet-base-v2-en-ru')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('qilowoq/paraphrase-multilingual-mpnet-base-v2-en-ru')
model = AutoModel.from_pretrained('qilowoq/paraphrase-multilingual-mpnet-base-v2-en-ru')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, average pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
```
The model has been truncated in [this notebook](https://colab.research.google.com/drive/19IFjWpJpxQie1gtHSvDeoKk7CQtpy6bT?usp=sharing).
|
PhilEO-community/PhilEO-Bench
|
PhilEO-community
| 2024-02-03T13:47:15Z | 0 | 5 | null |
[
"arxiv:2401.04464",
"license:mit",
"region:us"
] | null | 2024-01-13T18:23:19Z |
---
license: mit
---
# Model: PhilEO Bench
A novel evaluation framework for EO Foundation Models.
## Model Details
### Model Description
The PhilEO Bench evaluation framework comprises a testbed that can be used to test any EO Foundation Model. The three downstream tasks are building density estimation, road segmentation, and land cover classification.
- **Developed by:** ESA, Phi-lab
- **Model type:** Evaluation Framework
- **License:** MIT
The aim of Foundation Models is to improve the performance on several diverse downstream tasks. However, these models are often evaluated on a range of datasets with different characteristics (size, resolution, locations, satellite sources, and capture dates). There is also a focus on evaluating classification downstream tasks, while omitting image-to-image downstream tasks (such as segmentation). Therefore, it is challenging to fairly compare the performance of these burgeoning EO FMs and draw meaningful conclusions. To evaluate FMs, we propose the PhilEO Bench, an evaluation framework with the aim of providing a flexible, consistent, and fair benchmark for EO Sentinel-2 FMs.
## Uses
The PhilEO Bench is used to evaluate EO Foundation Models. It introduces a new, flexible evaluation framework focused on generating comparable, fair, and reproducible results.
### Model Sources
The basic links for the model are:
- **Paper:** https://arxiv.org/pdf/2401.04464.pdf
- **Code:** http://github.com/ESA-PhiLab/PhilEO-Bench
- **Project Website:** http://phileo-bench.github.io
- **Repository:** http://huggingface.co/ESA-philab/PhilEO-Bench
- **arXiv:** https://arxiv.org/abs/2401.04464
- **Pre-trained models:** http://huggingface.co/ESA-philab/PhilEO-Bench/tree/main/pretrained_philab_models
- **Data in:** http://huggingface.co/datasets/ESA-philab/PhilEO-downstream
## Citation
Casper Fibaek, Luke Camilleri, Andreas Luyts, Nikolaos Dionelis, and Bertrand Le Saux, “PhilEO Bench: Evaluating Geo-Spatial Foundation Models,” arXiv:2401.04464, 2024.
|
philipp-zettl/qa-test
|
philipp-zettl
| 2024-02-03T13:37:22Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"question-answering",
"easybits",
"en",
"de",
"fr",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-10-01T07:16:34Z |
---
license: mit
language:
- en
- de
- fr
pipeline_tag: question-answering
tags:
- easybits
---
|
IndrasMirror/AmalgamationXL-V0.4
|
IndrasMirror
| 2024-02-03T13:28:29Z | 0 | 1 | null |
[
"region:us"
] | null | 2024-02-03T12:43:21Z |
AmalgamationXL-V0.4: A Recursive Merge Masterpiece
The AmalgamationXL-V0.4 model is the culmination of an intricate recursive merge process, meticulously crafted to incorporate the strengths and unique features of several leading-edge Stable Diffusion models. This model represents a harmonious blend of artistic flair, realism, and clarity, designed to deliver unparalleled image generation capabilities.
Creation Journey:
V0.1 Foundation: We began with the amalgamation of five distinct models: ColourfulXL2, FenrisXLV158, AlbedoXLV20, BetterThanWordsV10, and CrystalClearXL. This foundational merge laid the groundwork for a versatile model capable of producing vibrant, detailed, and expressive imagery.
V0.2 Enhancement: The next phase involved enhancing AmalgamationXL-V0.1 with three additional models: JuggernaugtXL_Vv8RunDiffusion, CopaxTimelessSDXL_v9, and RealismEngineSDXL_V3.0. This step aimed at bolstering the model's capabilities in generating robust, timelessly styled, and hyper-realistic images.
V0.3 Evolution: Progressing further, we merged AmalgamationXL-V0.2 with NewRealityXL_2.0 and ZavyChromaXL_v4.0, elevating the model to V0.3. This iteration introduced new dimensions of realism and chromatic finesse, pushing the boundaries of what our amalgamated model could achieve.
V0.4 Finalization: Finally, we arrived at AmalgamationXL-V0.4 by recursively merging V0.3 with SDXLYamersRealistic5_v5Rundiffusion and ProtovisionXLHighFidelity3D_beta0520. This ultimate version stands as a testament to high-fidelity 3D realism, blending the best of its predecessors into a single, powerful model.
This recursive merging process not only allowed us to incrementally integrate and balance the strengths of each contributing model but also to create a model that is greater than the sum of its parts. AmalgamationXL-V0.4 is designed for creators seeking unmatched versatility and quality in their generative art endeavors.
Check out the flowchart here: [https://i.imgur.com/RIc6hxW.png](https://i.imgur.com/RIc6hxW.png)
Model Available here: https://civitai.com/models/287016/amalgamationxl
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
If you like what I do, feel free to have a look at my ko-fi or patreon where I have a bunch of ComfyUI workflows and other Stable Diffusion related services.
https://ko-fi.com/indrasmirror
https://www.patreon.com/indrasmirror
|
LarryAIDraw/bagpipe_arknights
|
LarryAIDraw
| 2024-02-03T13:08:43Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-02-03T12:58:50Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/131831/bagpipe-arknights
|
LarryAIDraw/astesia_arknights
|
LarryAIDraw
| 2024-02-03T13:08:14Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-02-03T12:57:59Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/161329/astesia-arknights
|
sbulut/bert-finetuned-squad
|
sbulut
| 2024-02-03T12:53:46Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"question-answering",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-02-02T22:24:05Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
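As an illustration (not part of the original card), the checkpoint can be queried with the `question-answering` pipeline:
```python
# Illustrative sketch: extractive question answering with the fine-tuned model.
from transformers import pipeline

qa = pipeline("question-answering", model="sbulut/bert-finetuned-squad")
result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower located in Paris, France.",
)
print(result["answer"], result["score"])
```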
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
vdo/stable-video-diffusion-img2vid-fp16
|
vdo
| 2024-02-03T12:52:44Z | 0 | 5 | null |
[
"region:us"
] | null | 2023-11-24T08:03:18Z |
These are unofficial fp16 versions of
* https://huggingface.co/stabilityai/stable-video-diffusion-img2vid
* https://huggingface.co/stabilityai/stable-video-diffusion-img2vid-xt
They don't seem to reduce VRAM usage, but they can save you bandwidth and disk space.
I couldn't see any difference in generated results compared to the full models (in lowram mode).
--------
Follow me for AI tips & tricks and more:
* https://becausecurious.me/
* https://x.com/becausecurious/
|
GlycerinLOL/Bart_reddit_tifu
|
GlycerinLOL
| 2024-02-03T12:51:35Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:reddit_tifu",
"base_model:facebook/bart-large",
"base_model:finetune:facebook/bart-large",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-02-03T09:57:57Z |
---
license: apache-2.0
base_model: facebook/bart-large
tags:
- generated_from_trainer
datasets:
- reddit_tifu
metrics:
- rouge
- precision
- recall
- f1
model-index:
- name: Bart_reddit_tifu
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: reddit_tifu
type: reddit_tifu
config: long
split: train
args: long
metrics:
- name: Rouge1
type: rouge
value: 0.2709
- name: Precision
type: precision
value: 0.8768
- name: Recall
type: recall
value: 0.8648
- name: F1
type: f1
value: 0.8705
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Bart_reddit_tifu
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on the reddit_tifu dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5035
- Rouge1: 0.2709
- Rouge2: 0.0948
- Rougel: 0.2244
- Rougelsum: 0.2244
- Gen Len: 19.3555
- Precision: 0.8768
- Recall: 0.8648
- F1: 0.8705
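As an illustration (not part of the original card), a Reddit-style post can be summarized with the `summarization` pipeline:
```python
# Illustrative sketch: summarize a post with the fine-tuned BART checkpoint.
from transformers import pipeline

summarizer = pipeline("summarization", model="GlycerinLOL/Bart_reddit_tifu")
post = ("TIFU by setting my alarm to PM instead of AM, sleeping straight through "
        "my final exam, and then emailing the wrong professor to apologize.")
print(summarizer(post, max_length=30, min_length=5, do_sample=False)[0]["summary_text"])
```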
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|:---------:|:------:|:------:|
| 2.6968 | 1.0 | 2370 | 2.5385 | 0.2634 | 0.0907 | 0.218 | 0.2182 | 19.4438 | 0.8766 | 0.8641 | 0.8701 |
| 2.4746 | 2.0 | 4741 | 2.5077 | 0.273 | 0.0941 | 0.2238 | 0.2239 | 19.2572 | 0.8774 | 0.8655 | 0.8712 |
| 2.3066 | 3.0 | 7111 | 2.5012 | 0.2671 | 0.0936 | 0.221 | 0.2211 | 19.3071 | 0.8756 | 0.864 | 0.8696 |
| 2.2041 | 4.0 | 9480 | 2.5035 | 0.2709 | 0.0948 | 0.2244 | 0.2244 | 19.3555 | 0.8768 | 0.8648 | 0.8705 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.15.0
|
gayanin/pubmed-mixed-noise-v5-0.1-large
|
gayanin
| 2024-02-03T12:45:35Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-large",
"base_model:finetune:facebook/bart-large",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-02-03T02:08:49Z |
---
license: apache-2.0
base_model: facebook/bart-large
tags:
- generated_from_trainer
model-index:
- name: pubmed-mixed-noise-v5-0.1-large
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pubmed-mixed-noise-v5-0.1-large
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3064
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
- mixed_precision_training: Native AMP
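These settings map roughly onto the following `Seq2SeqTrainingArguments`; this is an illustrative reconstruction, not the author's actual training script:
```python
# Illustrative reconstruction of the listed hyperparameters -- not the
# author's training script.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="pubmed-mixed-noise-v5-0.1-large",
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=10,
    num_train_epochs=3,
    fp16=True,  # "Native AMP" mixed precision
)
```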
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.4762 | 0.11 | 500 | 0.4936 |
| 0.4174 | 0.21 | 1000 | 0.4293 |
| 0.3835 | 0.32 | 1500 | 0.4280 |
| 0.3628 | 0.43 | 2000 | 0.4472 |
| 0.3927 | 0.54 | 2500 | 0.3898 |
| 0.3012 | 0.64 | 3000 | 0.3744 |
| 0.3189 | 0.75 | 3500 | 0.3784 |
| 0.2986 | 0.86 | 4000 | 0.3624 |
| 0.2493 | 0.96 | 4500 | 0.3588 |
| 0.2438 | 1.07 | 5000 | 0.3439 |
| 0.2465 | 1.18 | 5500 | 0.3448 |
| 0.268 | 1.28 | 6000 | 0.3476 |
| 0.2298 | 1.39 | 6500 | 0.3411 |
| 0.2587 | 1.5 | 7000 | 0.3322 |
| 0.2499 | 1.61 | 7500 | 0.3253 |
| 0.2296 | 1.71 | 8000 | 0.3177 |
| 0.2184 | 1.82 | 8500 | 0.3175 |
| 0.2245 | 1.93 | 9000 | 0.3573 |
| 0.164 | 2.03 | 9500 | 0.3292 |
| 0.1784 | 2.14 | 10000 | 0.3224 |
| 0.1487 | 2.25 | 10500 | 0.3209 |
| 0.1818 | 2.35 | 11000 | 0.3175 |
| 0.1521 | 2.46 | 11500 | 0.3190 |
| 0.1663 | 2.57 | 12000 | 0.3137 |
| 0.1604 | 2.68 | 12500 | 0.3113 |
| 0.1447 | 2.78 | 13000 | 0.3080 |
| 0.162 | 2.89 | 13500 | 0.3068 |
| 0.1414 | 3.0 | 14000 | 0.3064 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
chathuranga-jayanath/codet5-small-v13
|
chathuranga-jayanath
| 2024-02-03T12:33:52Z | 8 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:Salesforce/codet5-small",
"base_model:finetune:Salesforce/codet5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-02-03T10:24:12Z |
---
license: apache-2.0
base_model: Salesforce/codet5-small
tags:
- generated_from_trainer
model-index:
- name: codet5-small-v13
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codet5-small-v13
This model is a fine-tuned version of [Salesforce/codet5-small](https://huggingface.co/Salesforce/codet5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1512
- Bleu Score: 0.0007
- Gen Len: 14.6798
## Model description
Trained on:
- dataset: chathuranga-jayanath/context-5-finmath-times4j-html-mavendoxia-wro4j-guava-supercsv-len-30000-prompt-1
- data sample count: 91.9k
- prompt format: [BUG]... [CONTEXT]...
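A generation sketch using that prompt format follows; it is illustrative only (the bug line and context are made-up placeholders), not code from the card:
```python
# Illustrative sketch: query the fine-tuned CodeT5 checkpoint with the
# "[BUG] ... [CONTEXT] ..." prompt format described above.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "chathuranga-jayanath/codet5-small-v13"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

prompt = "[BUG] int total = a - b; [CONTEXT] public int add(int a, int b) { int total = a - b; return total; }"
inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```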
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 30
- eval_batch_size: 30
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu Score | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:----------:|:-------:|
| 0.237 | 1.0 | 3064 | 0.1859 | 0.0007 | 14.6328 |
| 0.1928 | 2.0 | 6128 | 0.1572 | 0.0007 | 14.6804 |
| 0.1733 | 3.0 | 9192 | 0.1512 | 0.0007 | 14.6798 |
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
wahdan99/ppo-PyramidsTraining
|
wahdan99
| 2024-02-03T12:27:09Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2024-02-03T12:21:48Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: wahdan99/ppo-PyramidsTraining
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
Druvith/mistralmed-7b-v1.5.gguf
|
Druvith
| 2024-02-03T12:22:07Z | 5 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"gguf",
"llamacpp",
"medical",
"en",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-02-03T05:30:12Z |
---
license: mit
language:
- en
library_name: adapter-transformers
tags:
- llamacpp
- gguf
- medical
---
|
vlkn/models_adapter
|
vlkn
| 2024-02-03T12:15:16Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-13b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-13b-chat-hf",
"region:us"
] | null | 2024-02-02T10:58:42Z |
---
library_name: peft
base_model: meta-llama/Llama-2-13b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
jsfs11/HighdensityRPMerge-7B-GGUF
|
jsfs11
| 2024-02-03T12:12:53Z | 7 | 0 | null |
[
"gguf",
"merge",
"mergekit",
"lazymergekit",
"SanjiWatsuki/Silicon-Maid-7B",
"chargoddard/loyal-piano-m7-cdpo",
"jsfs11/RandomMergeNoNormWEIGHTED-7B-DARETIES",
"NeverSleep/Noromaid-7b-v0.2",
"athirdpath/NSFW_DPO_vmgb-7b",
"base_model:NeverSleep/Noromaid-7b-v0.2",
"base_model:merge:NeverSleep/Noromaid-7b-v0.2",
"base_model:SanjiWatsuki/Silicon-Maid-7B",
"base_model:merge:SanjiWatsuki/Silicon-Maid-7B",
"base_model:athirdpath/NSFW_DPO_vmgb-7b",
"base_model:merge:athirdpath/NSFW_DPO_vmgb-7b",
"base_model:chargoddard/loyal-piano-m7-cdpo",
"base_model:merge:chargoddard/loyal-piano-m7-cdpo",
"base_model:jsfs11/RandomMergeNoNormWEIGHTED-7B-DARETIES",
"base_model:merge:jsfs11/RandomMergeNoNormWEIGHTED-7B-DARETIES",
"endpoints_compatible",
"region:us"
] | null | 2024-02-03T12:06:53Z |
---
tags:
- merge
- mergekit
- lazymergekit
- SanjiWatsuki/Silicon-Maid-7B
- chargoddard/loyal-piano-m7-cdpo
- jsfs11/RandomMergeNoNormWEIGHTED-7B-DARETIES
- NeverSleep/Noromaid-7b-v0.2
- athirdpath/NSFW_DPO_vmgb-7b
base_model:
- SanjiWatsuki/Silicon-Maid-7B
- chargoddard/loyal-piano-m7-cdpo
- jsfs11/RandomMergeNoNormWEIGHTED-7B-DARETIES
- NeverSleep/Noromaid-7b-v0.2
- athirdpath/NSFW_DPO_vmgb-7b
---
# HighdensityRPMerge-7B
HighdensityRPMerge-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [SanjiWatsuki/Silicon-Maid-7B](https://huggingface.co/SanjiWatsuki/Silicon-Maid-7B)
* [chargoddard/loyal-piano-m7-cdpo](https://huggingface.co/chargoddard/loyal-piano-m7-cdpo)
* [jsfs11/RandomMergeNoNormWEIGHTED-7B-DARETIES](https://huggingface.co/jsfs11/RandomMergeNoNormWEIGHTED-7B-DARETIES)
* [NeverSleep/Noromaid-7b-v0.2](https://huggingface.co/NeverSleep/Noromaid-7b-v0.2)
* [athirdpath/NSFW_DPO_vmgb-7b](https://huggingface.co/athirdpath/NSFW_DPO_vmgb-7b)
## 🧩 Configuration
```yaml
models:
  - model: saishf/West-Hermes-7B
    # no parameters necessary for base model
  - model: SanjiWatsuki/Silicon-Maid-7B
    parameters:
      weight: 0.4
      density: 0.8
  - model: chargoddard/loyal-piano-m7-cdpo
    parameters:
      weight: 0.3
      density: 0.8
  - model: jsfs11/RandomMergeNoNormWEIGHTED-7B-DARETIES
    parameters:
      weight: 0.25
      density: 0.45
  - model: NeverSleep/Noromaid-7b-v0.2
    parameters:
      weight: 0.25
      density: 0.4
  - model: athirdpath/NSFW_DPO_vmgb-7b
    parameters:
      weight: 0.2
      density: 0.4
merge_method: dare_ties
base_model: saishf/West-Hermes-7B
parameters:
  int8_mask: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "jsfs11/HighdensityRPMerge-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
Dhanraj1503/poca-SoccerTwos
|
Dhanraj1503
| 2024-02-03T12:05:20Z | 2 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2024-02-03T12:04:11Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: Dhanraj1503/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
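To run the policy locally, here is a minimal download sketch, assuming the Hugging Face Hub integration bundled with recent ML-Agents releases is installed (the local directory name is just an example):
```bash
# pull the trained SoccerTwos policy from the Hub into a local folder
mlagents-load-from-hf --repo-id="Dhanraj1503/poca-SoccerTwos" --local-dir="./downloads/SoccerTwos"
```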
|
mini0/Model
|
mini0
| 2024-02-03T11:56:02Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-02-03T11:18:25Z |
---
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Model
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3208
- Wer: 0.2936
## Model description
More information needed
## Intended uses & limitations
More information needed
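A minimal transcription sketch with the 🤗 Transformers pipeline (the audio file name is a placeholder; any local speech recording should work):
```python
from transformers import pipeline

# load the fine-tuned checkpoint from the Hub
asr = pipeline("automatic-speech-recognition", model="mini0/Model")

# transcribe a local audio file (placeholder path)
print(asr("sample.wav")["text"])
```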
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 240
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1346 | 0.43 | 100 | 0.8999 | 196.8440 |
| 0.4533 | 0.85 | 200 | 0.3208 | 0.2936 |
### Framework versions
- Transformers 4.37.2
- Pytorch 1.13.1+cu117
- Datasets 2.16.1
- Tokenizers 0.15.1
|
weifeng1994/whisper-small-dv
|
weifeng1994
| 2024-02-03T11:42:01Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dv",
"dataset:mozilla-foundation/common_voice_13_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-02-03T10:12:22Z |
---
language:
- dv
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
model-index:
- name: Whisper Small Dv - Sanchit Gandhi
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 13
type: mozilla-foundation/common_voice_13_0
config: dv
split: test
args: dv
metrics:
- name: Wer
type: wer
value: 12.965538825329483
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Dv - Sanchit Gandhi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 13 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1694
- Wer Ortho: 62.9988
- Wer: 12.9655
## Model description
More information needed
## Intended uses & limitations
More information needed
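A minimal usage sketch with the 🤗 Transformers pipeline, assuming a local Dhivehi audio clip (placeholder file name):
```python
from transformers import pipeline

# load the fine-tuned Dhivehi checkpoint
asr = pipeline("automatic-speech-recognition", model="weifeng1994/whisper-small-dv")

# transcribe a (placeholder) audio file; longer clips are handled by chunking
print(asr("dhivehi_clip.wav", chunk_length_s=30)["text"])
```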
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| 0.124 | 1.63 | 500 | 0.1694 | 62.9988 | 12.9655 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
sankalpakc/nepali-sbert-175k-mpnet
|
sankalpakc
| 2024-02-03T11:34:05Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-02-03T11:21:18Z |
---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# sankalpakc/nepali-sbert-175k-mpnet
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sankalpakc/nepali-sbert-175k-mpnet')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sankalpakc/nepali-sbert-175k-mpnet')
model = AutoModel.from_pretrained('sankalpakc/nepali-sbert-175k-mpnet')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sankalpakc/nepali-sbert-175k-mpnet)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 9525 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MSELoss.MSELoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"eps": 1e-06,
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 952,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
Innerby/Reinforce-Pixelcopter-PLE-v0-noreplay
|
Innerby
| 2024-02-03T11:33:25Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-03T11:32:36Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0-noreplay
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 19.00 +/- 15.41
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
USER12345WWW/backpack-xzg
|
USER12345WWW
| 2024-02-03T11:24:25Z | 1 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-02-03T11:20:26Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### backpack-xzg Dreambooth model trained by USER12345WWW following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: GoX19932gfds
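A minimal inference sketch with 🤗 Diffusers (the prompt below simply reuses the concept name as a trigger word; adjust it to taste):
```python
import torch
from diffusers import StableDiffusionPipeline

# load the DreamBooth fine-tuned pipeline from the Hub
pipe = StableDiffusionPipeline.from_pretrained(
    "USER12345WWW/backpack-xzg", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of backpack-xzg backpack on a wooden desk").images[0]
image.save("backpack-xzg.png")
```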
Sample pictures of this concept:
|
thesergiu/roberta2roberta_daily_cnn_finetuned
|
thesergiu
| 2024-02-03T11:14:27Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-02-03T11:10:43Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: roberta2roberta_daily_cnn_finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
## Model description
More information needed
## Intended uses & limitations
More information needed
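A minimal generation sketch, assuming the checkpoint is a RoBERTa-to-RoBERTa encoder-decoder fine-tuned for CNN/DailyMail-style summarization (the repo name suggests this; the article text is a placeholder):
```python
from transformers import AutoTokenizer, EncoderDecoderModel

tokenizer = AutoTokenizer.from_pretrained("thesergiu/roberta2roberta_daily_cnn_finetuned")
model = EncoderDecoderModel.from_pretrained("thesergiu/roberta2roberta_daily_cnn_finetuned")

article = "Your news article goes here ..."  # placeholder input
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(inputs.input_ids, max_length=128, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```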
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.37.2
- Pytorch 1.13.0+cu116
- Datasets 2.16.1
- Tokenizers 0.15.0
|
CLMBR/full-transformer-1
|
CLMBR
| 2024-02-03T11:09:14Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-26T10:08:41Z |
---
tags:
- generated_from_trainer
model-index:
- name: full2-transformer-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# full2-transformer-1
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8575
## Model description
More information needed
## Intended uses & limitations
More information needed
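A minimal generation sketch with the 🤗 Transformers pipeline (the checkpoint is an OPT-style causal language model; the prompt is a placeholder):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="CLMBR/full-transformer-1")
print(generator("The children went to the park and", max_new_tokens=30)[0]["generated_text"])
```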
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.2215 | 0.03 | 76320 | 4.1935 |
| 4.0184 | 1.03 | 152640 | 4.0250 |
| 3.9089 | 0.03 | 228960 | 3.9513 |
| 3.8437 | 1.03 | 305280 | 3.9104 |
| 3.7912 | 0.03 | 381600 | 3.8856 |
| 3.7513 | 0.03 | 457920 | 3.8687 |
| 3.722 | 1.03 | 534240 | 3.8590 |
| 3.6915 | 0.03 | 610560 | 3.8514 |
| 3.6647 | 1.03 | 686880 | 3.8469 |
| 3.6384 | 0.03 | 763200 | 3.8437 |
| 3.6155 | 0.03 | 839520 | 3.8417 |
| 3.5932 | 1.03 | 915840 | 3.8412 |
| 3.5776 | 0.03 | 992160 | 3.8405 |
| 3.56 | 1.03 | 1068480 | 3.8412 |
| 3.5407 | 0.03 | 1144800 | 3.8419 |
| 3.5278 | 1.03 | 1221120 | 3.8412 |
| 3.509 | 0.03 | 1297440 | 3.8432 |
| 3.4952 | 1.03 | 1373760 | 3.8440 |
| 3.4796 | 0.03 | 1450080 | 3.8456 |
| 3.4729 | 0.03 | 1526400 | 3.8466 |
| 3.4662 | 1.03 | 1602720 | 3.8462 |
| 3.4547 | 0.03 | 1679040 | 3.8487 |
| 3.4501 | 1.03 | 1755360 | 3.8488 |
| 3.4412 | 0.03 | 1831680 | 3.8505 |
| 3.4263 | 0.03 | 1908000 | 3.8518 |
| 3.4148 | 1.03 | 1984320 | 3.8530 |
| 3.403 | 0.03 | 2060640 | 3.8536 |
| 3.3917 | 1.03 | 2136960 | 3.8543 |
| 3.3844 | 0.03 | 2213280 | 3.8566 |
| 3.374 | 1.03 | 2289600 | 3.8571 |
| 3.3612 | 0.03 | 2365920 | 3.8569 |
| 3.3518 | 0.03 | 2442240 | 3.8590 |
| 3.3394 | 1.03 | 2518560 | 3.8593 |
| 3.3304 | 0.03 | 2594880 | 3.8589 |
| 3.3182 | 1.03 | 2671200 | 3.8600 |
| 3.3153 | 0.03 | 2747520 | 3.8600 |
| 3.3098 | 1.03 | 2823840 | 3.8598 |
| 3.3034 | 0.03 | 2900160 | 3.8589 |
| 3.3014 | 1.03 | 2976480 | 3.8586 |
| 3.2952 | 0.02 | 3052726 | 3.8575 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
matteo1997/sdxl_controlnet
|
matteo1997
| 2024-02-03T10:57:08Z | 1 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"controlnet",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-02-03T09:39:42Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- controlnet
inference: true
---
# controlnet-matteo1997/sdxl_controlnet
These are ControlNet weights trained on stabilityai/stable-diffusion-xl-base-1.0 with a new type of conditioning.
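A minimal inference sketch with 🤗 Diffusers; since the conditioning type is not documented here, the conditioning image below is a placeholder that must be replaced with an image in the format these weights were trained on:
```python
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
from diffusers.utils import load_image

# load the ControlNet weights and attach them to the SDXL base model
controlnet = ControlNetModel.from_pretrained("matteo1997/sdxl_controlnet", torch_dtype=torch.float16)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

conditioning = load_image("conditioning.png")  # placeholder conditioning image
image = pipe("a prompt describing the target image", image=conditioning).images[0]
image.save("result.png")
```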
|
lordberre/globy_mistral_instructv0.2-model_v1.2
|
lordberre
| 2024-02-03T10:34:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-03T09:06:36Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
p1atdev/siglip-tagger-test-2
|
p1atdev
| 2024-02-03T10:02:31Z | 10 | 2 |
transformers
|
[
"transformers",
"safetensors",
"siglip_vision_model",
"image-classification",
"generated_from_trainer",
"siglip",
"custom_code",
"base_model:google/siglip-base-patch16-512",
"base_model:finetune:google/siglip-base-patch16-512",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-02-02T16:51:00Z |
---
license: apache-2.0
tags:
- generated_from_trainer
- siglip
metrics:
- accuracy
- f1
base_model: google/siglip-base-patch16-512
model-index:
- name: siglip-tagger-test-2
results: []
pipeline_tag: image-classification
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# siglip-tagger-test-2
This model is a fine-tuned version of [google/siglip-base-patch16-512](https://huggingface.co/google/siglip-base-patch16-512) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 364.7850
- Accuracy: 0.2539
- F1: 0.9967
## Model description
This is an experimental model that predicts Danbooru tags for images.
## Example
```py
from PIL import Image
import torch
from transformers import (
AutoModelForImageClassification,
AutoImageProcessor,
)
import numpy as np
MODEL_NAME = "p1atdev/siglip-tagger-test-2"
model = AutoModelForImageClassification.from_pretrained(
MODEL_NAME, torch_dtype=torch.bfloat16, trust_remote_code=True
)
model.eval()
processor = AutoImageProcessor.from_pretrained(MODEL_NAME)
image = Image.open("sample.jpg") # load your image
inputs = processor(image, return_tensors="pt").to(model.device, model.dtype)
logits = model(**inputs).logits.detach().cpu().float()[0]
logits = np.clip(logits, 0.0, 1.0)
results = {
model.config.id2label[i]: logit for i, logit in enumerate(logits) if logit > 0
}
results = sorted(results.items(), key=lambda x: x[1], reverse=True)
for tag, score in results:
print(f"{tag}: {score*100:.2f}%")
# 1girl: 100.00%
# outdoors: 100.00%
# sky: 100.00%
# solo: 100.00%
# school uniform: 96.88%
# skirt: 92.97%
# day: 89.06%
# ...
```
## Intended uses & limitations
This model is for research use only and is not recommended for production.
Please use the wd-v1-4-tagger series by SmilingWolf instead:
- [SmilingWolf/wd-v1-4-moat-tagger-v2](https://huggingface.co/SmilingWolf/wd-v1-4-moat-tagger-v2)
- [SmilingWolf/wd-v1-4-swinv2-tagger-v2](https://huggingface.co/SmilingWolf/wd-v1-4-swinv2-tagger-v2)
etc.
## Training and evaluation data
5,000 high-quality images from Danbooru, shuffled and split into train:eval at 4500:500.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1496.9876 | 1.0 | 141 | 691.3267 | 0.1242 | 0.9957 |
| 860.0218 | 2.0 | 282 | 433.5286 | 0.1626 | 0.9965 |
| 775.4277 | 3.0 | 423 | 409.0374 | 0.1827 | 0.9966 |
| 697.2465 | 4.0 | 564 | 396.5604 | 0.2025 | 0.9966 |
| 582.6023 | 5.0 | 705 | 388.3294 | 0.2065 | 0.9966 |
| 617.5087 | 6.0 | 846 | 382.2605 | 0.2213 | 0.9966 |
| 627.533 | 7.0 | 987 | 377.6726 | 0.2269 | 0.9967 |
| 595.4033 | 8.0 | 1128 | 374.3268 | 0.2327 | 0.9967 |
| 593.3854 | 9.0 | 1269 | 371.4181 | 0.2409 | 0.9967 |
| 537.9777 | 10.0 | 1410 | 369.5010 | 0.2421 | 0.9967 |
| 552.3083 | 11.0 | 1551 | 368.0743 | 0.2468 | 0.9967 |
| 570.5438 | 12.0 | 1692 | 366.8302 | 0.2498 | 0.9967 |
| 507.5343 | 13.0 | 1833 | 366.1787 | 0.2499 | 0.9967 |
| 515.5528 | 14.0 | 1974 | 365.5653 | 0.2525 | 0.9967 |
| 458.5096 | 15.0 | 2115 | 365.1838 | 0.2528 | 0.9967 |
| 515.6953 | 16.0 | 2256 | 364.9844 | 0.2535 | 0.9967 |
| 533.7929 | 17.0 | 2397 | 364.8577 | 0.2538 | 0.9967 |
| 520.3728 | 18.0 | 2538 | 364.8066 | 0.2537 | 0.9967 |
| 525.1097 | 19.0 | 2679 | 364.7850 | 0.2539 | 0.9967 |
| 482.0612 | 20.0 | 2820 | 364.7876 | 0.2539 | 0.9967 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
wahdan99/ppo-SnowballTarget
|
wahdan99
| 2024-02-03T09:59:22Z | 4 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2024-02-03T09:58:14Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: wahdan99/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Patcas/codet5-with-doc-new-v1
|
Patcas
| 2024-02-03T09:48:12Z | 11 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:Salesforce/codet5-base",
"base_model:finetune:Salesforce/codet5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-02-02T21:49:39Z |
---
license: apache-2.0
base_model: Salesforce/codet5-base
tags:
- generated_from_trainer
model-index:
- name: codet5-with-doc-new-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codet5-with-doc-new-v1
This model is a fine-tuned version of [Salesforce/codet5-base](https://huggingface.co/Salesforce/codet5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1073
## Model description
More information needed
## Intended uses & limitations
More information needed
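A minimal generation sketch with 🤗 Transformers, assuming the model maps a method (plus its documentation) to generated text; the input below is a placeholder:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Patcas/codet5-with-doc-new-v1")
model = AutoModelForSeq2SeqLM.from_pretrained("Patcas/codet5-with-doc-new-v1")

source = "public int add(int a, int b) { return a + b; }"  # placeholder input
inputs = tokenizer(source, return_tensors="pt", truncation=True)
outputs = model.generate(**inputs, max_length=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```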
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 230 | 1.6308 |
| No log | 2.0 | 460 | 1.3508 |
| 2.0598 | 3.0 | 690 | 1.2400 |
| 2.0598 | 4.0 | 920 | 1.1656 |
| 1.121 | 5.0 | 1150 | 1.1432 |
| 1.121 | 6.0 | 1380 | 1.1259 |
| 0.8281 | 7.0 | 1610 | 1.1214 |
| 0.8281 | 8.0 | 1840 | 1.1104 |
| 0.6739 | 9.0 | 2070 | 1.1037 |
| 0.6739 | 10.0 | 2300 | 1.1073 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
banhabang/Supernatural-Bert-Prod
|
banhabang
| 2024-02-03T09:46:39Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-03T09:46:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
FINETUNERMYSTRAL/mmistral-supervised-ft-1epochs
|
FINETUNERMYSTRAL
| 2024-02-03T09:44:25Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-02-03T09:41:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
s3nh/zephyr-speakleash-007-pl-8192-32-16-0.05-GGUF
|
s3nh
| 2024-02-03T09:42:00Z | 4 | 1 |
transformers
|
[
"transformers",
"gguf",
"text-generation",
"zh",
"en",
"license:openrail",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-02-03T09:12:31Z |
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
- en
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGUF Format model files for [This project](https://huggingface.co/Nondzu/zephyr-speakleash-007-pl-8192-32-16-0.05).
### GGUF Specs
GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired:
* Single-file deployment: they can be easily distributed and loaded, and do not require any external files for additional information.
* Extensible: new features can be added to GGML-based executors / new information can be added to GGUF models without breaking compatibility with existing models.
* mmap compatibility: models can be loaded using mmap for fast loading and saving.
* Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
* Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.
The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values.
This allows for new metadata to be added without breaking compatibility with existing models, and to annotate the model with additional information that may be useful for
inference or for identifying the model.
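A minimal loading sketch with `llama-cpp-python` (the GGUF file name below is a placeholder for whichever quantization you download from this repo):
```python
from llama_cpp import Llama

# load a downloaded GGUF file; n_ctx matches the model's 8192-token context
llm = Llama(model_path="zephyr-speakleash-007-pl.Q4_K_M.gguf", n_ctx=8192)

out = llm("Tell me a story about what quantization is.", max_tokens=128)
print(out["choices"][0]["text"])
```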
### inference
User: Tell me story about what is an quantization and what do we need to build.
I have a little bit of experience with the term “quantization” from physics, but not much. When I hear it, the first thing that comes to mind is some kind of measuring instrument, like a ruler or voltmeter. What does the phrase “quantized by 1024” mean?
It sounds more mathematical than physical.
The term quantization comes from quantum mechanics and refers to a process whereby a continuous function is approximated by discrete values, that is, it is “discretized”. In this sense, we can say that the “quanta” are the differences between adjacent
# Original model card
|
Adeptschneider/merged-fine-tuned-Llama2
|
Adeptschneider
| 2024-02-03T09:38:38Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-03T09:34:47Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
superlazycoder/vit-base-beans-demo-v5
|
superlazycoder
| 2024-02-03T09:35:02Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-02-03T09:34:46Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base-beans-demo-v5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-beans-demo-v5
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0367
- Accuracy: 0.9850
## Model description
More information needed
## Intended uses & limitations
More information needed
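A minimal classification sketch with the 🤗 Transformers pipeline (the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="superlazycoder/vit-base-beans-demo-v5")
print(classifier("bean_leaf.jpg"))  # placeholder image path
```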
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0475 | 1.54 | 100 | 0.0625 | 0.9850 |
| 0.0038 | 3.08 | 200 | 0.0367 | 0.9850 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
Adeptschneider/fine-tuned-Llama2
|
Adeptschneider
| 2024-02-03T09:15:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-03T09:15:32Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
saheedsanni/distilbert-base-uncased-finetuned-cola
|
saheedsanni
| 2024-02-03T09:02:10Z | 1 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-03T09:01:18Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: saheedsanni/distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# saheedsanni/distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5203
- Validation Loss: 0.4792
- Train Matthews Correlation: 0.4572
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
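A minimal usage sketch, assuming (as the repo name suggests) a CoLA-style acceptability classifier and the TensorFlow weights shipped in this repo; with an untouched config the labels come back as `LABEL_0`/`LABEL_1`, where `LABEL_1` normally maps to "acceptable" in CoLA:
```python
from transformers import pipeline

# Hedged sketch: grammatical-acceptability scoring with the TF checkpoint in this repo.
classifier = pipeline(
    "text-classification",
    model="saheedsanni/distilbert-base-uncased-finetuned-cola",
    framework="tf",
)

print(classifier("The book was written by a famous author."))  # likely judged acceptable
print(classifier("Book the famous author by written was."))    # likely judged unacceptable
```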
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Matthews Correlation | Epoch |
|:----------:|:---------------:|:--------------------------:|:-----:|
| 0.5203 | 0.4792 | 0.4572 | 0 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.10.1
- Datasets 2.16.1
- Tokenizers 0.13.3
|
dagbs/deepseek-coder-7b-base-v1.5-GGUF
|
dagbs
| 2024-02-03T08:58:36Z | 43 | 2 | null |
[
"gguf",
"base_model:deepseek-ai/deepseek-coder-7b-base-v1.5",
"base_model:quantized:deepseek-ai/deepseek-coder-7b-base-v1.5",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-02-03T04:07:00Z |
---
license: other
license_name: deepseek-license
license_link: >-
https://huggingface.co/deepseek-ai/deepseek-coder-7b-base-v1.5/blob/main/LICENSE
base_model: deepseek-ai/deepseek-coder-7b-base-v1.5
quantized_by: dagbs
---
# deepseek-coder-7b-base-v1.5 - GGUF
- Model organization: [DeepSeek](https://huggingface.co/deepseek-ai)
- Original model: [deepseek-ai/deepseek-coder-7b-base-v1.5](https://huggingface.co/deepseek-ai/deepseek-coder-7b-base-v1.5)
F16 converted using llama.cpp's `convert.py` with the following arguments (an example invocation is sketched below):
* --pad-vocab
* --vocab-type bpe
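A sketch of the conversion command, assuming a local llama.cpp checkout and the original checkpoint downloaded to `./deepseek-coder-7b-base-v1.5` (paths, `--outtype` and the output file name are illustrative):
```bash
# Run from the llama.cpp repository root; paths are hypothetical.
python convert.py ./deepseek-coder-7b-base-v1.5 \
    --pad-vocab \
    --vocab-type bpe \
    --outtype f16 \
    --outfile deepseek-coder-7b-base-v1.5.f16.gguf
```
Lower-bit quantizations are then typically produced from the F16 file with llama.cpp's `quantize` tool.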
|
zzzghttt/CodeLlama-7b-Test-Instruct-lora
|
zzzghttt
| 2024-02-03T08:30:57Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-12-30T18:34:36Z |
---
library_name: peft
---
# CodeLlama-7b-Test-Instruct-lora
## Description
This repo contains a low-rank adapter for [CodeLlama-7b-Instruct](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) fine-tuned on the [zzzghttt/code2test](https://huggingface.co/datasets/zzzghttt/code2test) dataset.
The LoRA adapter is primarily aimed at generating high-quality unit tests in Java.
### How to use
See [ChatUniTest Models](https://github.com/ZJU-ACES-ISE/chatunitest-models)
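Outside of ChatUniTest, the adapter can also be attached directly with 🤗 PEFT. A minimal sketch, assuming a GPU, fp16 loading and an illustrative prompt (generation settings are not taken from this repo):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "codellama/CodeLlama-7b-Instruct-hf"
adapter_id = "zzzghttt/CodeLlama-7b-Test-Instruct-lora"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the low-rank adapter

prompt = "Write a JUnit test for a Java method that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```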
## Training data
[zzzghttt/code2test](https://huggingface.co/datasets/zzzghttt/code2test)
## Training procedure
This version of the weights was trained with the following hyperparameters:
- batch_size: 128
- micro_batch_size: 4
- num_epochs: 3 (load from best epoch)
- learning_rate: 3e-4
- cutoff_len: 2048
- lora_r: 64
- lora_alpha: 16
- lora_dropout: 0.05
- lora_target_modules: ['q_proj', 'v_proj']
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0
|
CLMBR/full-transformer-3
|
CLMBR
| 2024-02-03T08:28:16Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-26T10:06:44Z |
---
tags:
- generated_from_trainer
model-index:
- name: full2-transformer-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# full2-transformer-3
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 3
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.2206 | 0.03 | 76320 | 4.1916 |
| 4.0169 | 1.03 | 152640 | 4.0236 |
| 3.9099 | 0.03 | 228960 | 3.9506 |
| 3.8437 | 1.03 | 305280 | 3.9106 |
| 3.7918 | 0.03 | 381600 | 3.8857 |
| 3.7519 | 1.03 | 457920 | 3.8689 |
| 3.7218 | 0.03 | 534240 | 3.8581 |
| 3.6904 | 1.03 | 610560 | 3.8518 |
| 3.6603 | 0.03 | 686880 | 3.8468 |
| 3.6377 | 1.03 | 763200 | 3.8447 |
| 3.6135 | 0.03 | 839520 | 3.8432 |
| 3.5916 | 1.03 | 915840 | 3.8415 |
| 3.5781 | 0.03 | 992160 | 3.8417 |
| 3.5586 | 1.03 | 1068480 | 3.8418 |
| 3.5407 | 0.03 | 1144800 | 3.8439 |
| 3.525 | 1.03 | 1221120 | 3.8447 |
| 3.5057 | 0.03 | 1297440 | 3.8447 |
| 3.4938 | 1.03 | 1373760 | 3.8463 |
| 3.4784 | 0.03 | 1450080 | 3.8474 |
| 3.4732 | 1.03 | 1526400 | 3.8485 |
| 3.4634 | 0.03 | 1602720 | 3.8501 |
| 3.4544 | 1.03 | 1679040 | 3.8525 |
| 3.448 | 0.03 | 1755360 | 3.8527 |
| 3.4382 | 0.03 | 1831680 | 3.8545 |
| 3.4259 | 0.03 | 1908000 | 3.8566 |
| 3.4159 | 1.03 | 1984320 | 3.8575 |
| 3.4029 | 0.03 | 2060640 | 3.8589 |
| 3.3911 | 0.03 | 2136960 | 3.8601 |
| 3.3832 | 0.03 | 2213280 | 3.8616 |
| 3.3725 | 0.03 | 2289600 | 3.8614 |
| 3.3585 | 1.03 | 2365920 | 3.8622 |
| 3.3487 | 0.03 | 2442240 | 3.8639 |
| 3.3357 | 1.03 | 2518560 | 3.8639 |
| 3.3261 | 0.03 | 2594880 | 3.8644 |
| 3.3146 | 0.03 | 2671200 | 3.8653 |
| 3.3102 | 1.03 | 2747520 | 3.8654 |
| 3.3041 | 0.03 | 2823840 | 3.8652 |
| 3.2998 | 1.03 | 2900160 | 3.8649 |
| 3.2998 | 0.03 | 2976480 | 3.8644 |
| 3.2926 | 1.02 | 3052726 | 3.8634 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
rhplus0831/maid-yuzu-v2-mid-exl2-6.0bpw-rpcal
|
rhplus0831
| 2024-02-03T08:27:05Z | 6 | 2 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"mergekit",
"merge",
"base_model:smelborp/MixtralOrochi8x7B",
"base_model:merge:smelborp/MixtralOrochi8x7B",
"base_model:ycros/BagelMIsteryTour-v2-8x7B",
"base_model:merge:ycros/BagelMIsteryTour-v2-8x7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-03T08:20:35Z |
---
base_model:
- smelborp/MixtralOrochi8x7B
- ycros/BagelMIsteryTour-v2-8x7B
library_name: transformers
tags:
- mergekit
- merge
---
# maid-yuzu-v2-mid
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [smelborp/MixtralOrochi8x7B](https://huggingface.co/smelborp/MixtralOrochi8x7B)
* [ycros/BagelMIsteryTour-v2-8x7B](https://huggingface.co/ycros/BagelMIsteryTour-v2-8x7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model:
model:
path: smelborp/MixtralOrochi8x7B
dtype: bfloat16
merge_method: slerp
parameters:
t:
- value: 0.375
slices:
- sources:
- layer_range: [0, 32]
model:
model:
path: smelborp/MixtralOrochi8x7B
- layer_range: [0, 32]
model:
model:
path: ycros/BagelMIsteryTour-v2-8x7B
```
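Assuming the configuration above is saved as `config.yml`, a merge like this can typically be reproduced with the mergekit command-line entry point (flags depend on the installed mergekit version):
```bash
pip install git+https://github.com/cg123/mergekit.git
mergekit-yaml config.yml ./maid-yuzu-v2-mid --copy-tokenizer --cuda  # --cuda is optional
```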
|
CLMBR/re-irr-sv-agr-transformer-1
|
CLMBR
| 2024-02-03T07:49:29Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-25T09:56:43Z |
---
tags:
- generated_from_trainer
model-index:
- name: re-irr-sv-agr-transformer-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# re-irr-sv-agr-transformer-1
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8917
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.2191 | 0.03 | 76320 | 4.2133 |
| 4.0124 | 1.03 | 152640 | 4.0446 |
| 3.9042 | 0.03 | 228960 | 3.9689 |
| 3.8378 | 1.03 | 305280 | 3.9282 |
| 3.7862 | 0.03 | 381600 | 3.9036 |
| 3.7465 | 1.03 | 457920 | 3.8875 |
| 3.7125 | 0.03 | 534240 | 3.8780 |
| 3.6811 | 1.03 | 610560 | 3.8712 |
| 3.6533 | 0.03 | 686880 | 3.8683 |
| 3.6278 | 1.03 | 763200 | 3.8661 |
| 3.604 | 0.03 | 839520 | 3.8653 |
| 3.5878 | 1.03 | 915840 | 3.8643 |
| 3.5705 | 0.03 | 992160 | 3.8659 |
| 3.5519 | 0.03 | 1068480 | 3.8674 |
| 3.5332 | 0.03 | 1144800 | 3.8693 |
| 3.516 | 1.03 | 1221120 | 3.8696 |
| 3.498 | 0.03 | 1297440 | 3.8707 |
| 3.4839 | 1.03 | 1373760 | 3.8720 |
| 3.4693 | 0.03 | 1450080 | 3.8750 |
| 3.4632 | 1.03 | 1526400 | 3.8761 |
| 3.4533 | 0.03 | 1602720 | 3.8784 |
| 3.4476 | 1.03 | 1679040 | 3.8794 |
| 3.4382 | 0.03 | 1755360 | 3.8807 |
| 3.4264 | 1.03 | 1831680 | 3.8814 |
| 3.4151 | 0.03 | 1908000 | 3.8848 |
| 3.4026 | 1.03 | 1984320 | 3.8861 |
| 3.3883 | 0.03 | 2060640 | 3.8874 |
| 3.3828 | 1.03 | 2136960 | 3.8885 |
| 3.376 | 0.03 | 2213280 | 3.8899 |
| 3.3616 | 1.03 | 2289600 | 3.8903 |
| 3.3522 | 0.03 | 2365920 | 3.8921 |
| 3.3376 | 0.03 | 2442240 | 3.8915 |
| 3.3228 | 0.03 | 2518560 | 3.8923 |
| 3.3132 | 1.03 | 2594880 | 3.8935 |
| 3.3038 | 0.03 | 2671200 | 3.8945 |
| 3.2999 | 0.03 | 2747520 | 3.8946 |
| 3.2939 | 0.03 | 2823840 | 3.8947 |
| 3.2922 | 1.03 | 2900160 | 3.8938 |
| 3.2867 | 0.03 | 2976480 | 3.8927 |
| 3.2797 | 1.02 | 3052726 | 3.8917 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
samitizerxu/segformer-b1-from-scratch-run1
|
samitizerxu
| 2024-02-03T07:29:08Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
image-segmentation
| 2024-02-02T21:12:54Z |
---
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: segformer-b1-from-scratch-run1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b1-from-scratch-run1
This model is a fine-tuned version of [](https://huggingface.co/) on the samitizerxu/kelp_data_rgbaa_swin_nir dataset.
It achieves the following results on the evaluation set:
- IoU Kelp: 0.0067
- Loss: 0.9872
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.1
- train_batch_size: 22
- eval_batch_size: 22
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | IoU Kelp | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:---------------:|
| 0.9999 | 0.15 | 30 | 0.0067 | 0.9872 |
| 1.0 | 0.29 | 60 | 0.0067 | 0.9872 |
| 0.9933 | 0.44 | 90 | 0.0067 | 0.9872 |
| 0.998 | 0.59 | 120 | 0.0067 | 0.9872 |
| 1.0 | 0.73 | 150 | 0.0067 | 0.9872 |
| 0.9998 | 0.88 | 180 | 0.0067 | 0.9872 |
| 0.9998 | 1.02 | 210 | 0.0067 | 0.9872 |
| 1.0 | 1.17 | 240 | 0.0082 | 0.9861 |
| 0.9976 | 1.32 | 270 | 0.0069 | 0.9869 |
| 0.9995 | 1.46 | 300 | 0.0070 | 0.9868 |
| 0.9967 | 1.61 | 330 | 0.0067 | 0.9872 |
| 0.9945 | 1.76 | 360 | 0.0067 | 0.9872 |
| 1.0 | 1.9 | 390 | 0.0067 | 0.9872 |
| 0.9992 | 2.05 | 420 | 0.0067 | 0.9872 |
| 0.9991 | 2.2 | 450 | 0.0067 | 0.9872 |
| 0.997 | 2.34 | 480 | 0.0067 | 0.9872 |
| 0.999 | 2.49 | 510 | 0.0067 | 0.9872 |
| 0.9999 | 2.63 | 540 | 0.0067 | 0.9872 |
| 0.9991 | 2.78 | 570 | 0.0067 | 0.9872 |
| 0.9987 | 2.93 | 600 | 0.0067 | 0.9872 |
| 0.9999 | 3.07 | 630 | 0.0067 | 0.9872 |
| 0.9983 | 3.22 | 660 | 0.0067 | 0.9872 |
| 0.9973 | 3.37 | 690 | 0.0067 | 0.9872 |
| 0.9987 | 3.51 | 720 | 0.0067 | 0.9872 |
| 0.9915 | 3.66 | 750 | 0.0067 | 0.9872 |
| 0.9984 | 3.8 | 780 | 0.0067 | 0.9872 |
| 0.9992 | 3.95 | 810 | 0.0067 | 0.9872 |
| 0.9993 | 4.1 | 840 | 0.0067 | 0.9872 |
| 1.0 | 4.24 | 870 | 0.0067 | 0.9872 |
| 0.9998 | 4.39 | 900 | 0.0067 | 0.9872 |
| 0.9999 | 4.54 | 930 | 0.0067 | 0.9872 |
| 0.9995 | 4.68 | 960 | 0.0067 | 0.9872 |
| 0.998 | 4.83 | 990 | 0.0067 | 0.9872 |
| 0.9989 | 4.98 | 1020 | 0.0067 | 0.9872 |
| 0.9975 | 5.12 | 1050 | 0.0067 | 0.9872 |
| 0.9993 | 5.27 | 1080 | 0.0067 | 0.9872 |
| 0.9971 | 5.41 | 1110 | 0.0067 | 0.9872 |
| 0.9944 | 5.56 | 1140 | 0.0067 | 0.9872 |
| 0.9967 | 5.71 | 1170 | 0.0067 | 0.9872 |
| 0.9986 | 5.85 | 1200 | 0.0067 | 0.9872 |
| 0.9994 | 6.0 | 1230 | 0.0067 | 0.9872 |
| 0.9997 | 6.15 | 1260 | 0.0067 | 0.9872 |
| 0.9998 | 6.29 | 1290 | 0.0067 | 0.9872 |
| 0.999 | 6.44 | 1320 | 0.0067 | 0.9872 |
| 0.9996 | 6.59 | 1350 | 0.0067 | 0.9872 |
| 1.0 | 6.73 | 1380 | 0.0067 | 0.9872 |
| 0.9999 | 6.88 | 1410 | 0.0067 | 0.9872 |
| 0.9933 | 7.02 | 1440 | 0.0067 | 0.9872 |
| 0.998 | 7.17 | 1470 | 0.0067 | 0.9872 |
| 0.9968 | 7.32 | 1500 | 0.0067 | 0.9872 |
| 0.997 | 7.46 | 1530 | 0.0067 | 0.9872 |
| 0.9981 | 7.61 | 1560 | 0.0067 | 0.9872 |
| 0.9992 | 7.76 | 1590 | 0.0067 | 0.9872 |
| 0.9999 | 7.9 | 1620 | 0.0067 | 0.9872 |
| 0.9964 | 8.05 | 1650 | 0.0067 | 0.9872 |
| 0.9999 | 8.2 | 1680 | 0.0067 | 0.9872 |
| 0.9941 | 8.34 | 1710 | 0.0067 | 0.9872 |
| 0.9963 | 8.49 | 1740 | 0.0067 | 0.9872 |
| 0.998 | 8.63 | 1770 | 0.0067 | 0.9872 |
| 0.9989 | 8.78 | 1800 | 0.0067 | 0.9872 |
| 1.0 | 8.93 | 1830 | 0.0067 | 0.9872 |
| 1.0 | 9.07 | 1860 | 0.0067 | 0.9872 |
| 0.9974 | 9.22 | 1890 | 0.0067 | 0.9872 |
| 0.9989 | 9.37 | 1920 | 0.0067 | 0.9872 |
| 0.9989 | 9.51 | 1950 | 0.0067 | 0.9872 |
| 0.996 | 9.66 | 1980 | 0.0067 | 0.9872 |
| 0.9995 | 9.8 | 2010 | 0.0067 | 0.9872 |
| 0.9973 | 9.95 | 2040 | 0.0067 | 0.9872 |
| 0.9957 | 10.1 | 2070 | 0.0067 | 0.9872 |
| 0.9996 | 10.24 | 2100 | 0.0067 | 0.9872 |
| 1.0 | 10.39 | 2130 | 0.0067 | 0.9872 |
| 0.9967 | 10.54 | 2160 | 0.0067 | 0.9872 |
| 0.9989 | 10.68 | 2190 | 0.0067 | 0.9872 |
| 0.9989 | 10.83 | 2220 | 0.0067 | 0.9872 |
| 0.9994 | 10.98 | 2250 | 0.0067 | 0.9872 |
| 0.9992 | 11.12 | 2280 | 0.0067 | 0.9872 |
| 0.9973 | 11.27 | 2310 | 0.0067 | 0.9872 |
| 0.9993 | 11.41 | 2340 | 0.0067 | 0.9872 |
| 0.9973 | 11.56 | 2370 | 0.0067 | 0.9872 |
| 0.9996 | 11.71 | 2400 | 0.0067 | 0.9872 |
| 1.0 | 11.85 | 2430 | 0.0067 | 0.9872 |
| 0.9989 | 12.0 | 2460 | 0.0067 | 0.9872 |
| 1.0 | 12.15 | 2490 | 0.0067 | 0.9872 |
| 0.9987 | 12.29 | 2520 | 0.0067 | 0.9872 |
| 0.9914 | 12.44 | 2550 | 0.0067 | 0.9872 |
| 0.9974 | 12.59 | 2580 | 0.0067 | 0.9872 |
| 1.0 | 12.73 | 2610 | 0.0067 | 0.9872 |
| 0.999 | 12.88 | 2640 | 0.0067 | 0.9872 |
| 1.0 | 13.02 | 2670 | 0.0067 | 0.9872 |
| 0.9991 | 13.17 | 2700 | 0.0067 | 0.9872 |
| 0.9979 | 13.32 | 2730 | 0.0067 | 0.9872 |
| 1.0 | 13.46 | 2760 | 0.0067 | 0.9872 |
| 0.9973 | 13.61 | 2790 | 0.0067 | 0.9872 |
| 0.9995 | 13.76 | 2820 | 0.0067 | 0.9872 |
| 0.9973 | 13.9 | 2850 | 0.0067 | 0.9872 |
| 0.9961 | 14.05 | 2880 | 0.0067 | 0.9872 |
| 0.9907 | 14.2 | 2910 | 0.0067 | 0.9872 |
| 0.9984 | 14.34 | 2940 | 0.0067 | 0.9872 |
| 0.9986 | 14.49 | 2970 | 0.0067 | 0.9872 |
| 0.9935 | 14.63 | 3000 | 0.0067 | 0.9872 |
| 0.998 | 14.78 | 3030 | 0.0067 | 0.9872 |
| 0.9982 | 14.93 | 3060 | 0.0067 | 0.9872 |
| 0.9956 | 15.07 | 3090 | 0.0067 | 0.9872 |
| 0.9991 | 15.22 | 3120 | 0.0067 | 0.9872 |
| 0.9985 | 15.37 | 3150 | 0.0067 | 0.9872 |
| 0.9958 | 15.51 | 3180 | 0.0067 | 0.9872 |
| 0.9998 | 15.66 | 3210 | 0.0067 | 0.9872 |
| 0.9972 | 15.8 | 3240 | 0.0067 | 0.9872 |
| 0.9996 | 15.95 | 3270 | 0.0067 | 0.9872 |
| 0.9965 | 16.1 | 3300 | 0.0067 | 0.9872 |
| 0.9983 | 16.24 | 3330 | 0.0067 | 0.9872 |
| 0.9993 | 16.39 | 3360 | 0.0067 | 0.9872 |
| 0.9962 | 16.54 | 3390 | 0.0067 | 0.9872 |
| 0.9985 | 16.68 | 3420 | 0.0067 | 0.9872 |
| 0.9998 | 16.83 | 3450 | 0.0067 | 0.9872 |
| 0.9993 | 16.98 | 3480 | 0.0067 | 0.9872 |
| 0.9993 | 17.12 | 3510 | 0.0067 | 0.9872 |
| 0.9998 | 17.27 | 3540 | 0.0067 | 0.9872 |
| 1.0 | 17.41 | 3570 | 0.0067 | 0.9872 |
| 0.9999 | 17.56 | 3600 | 0.0067 | 0.9872 |
| 0.9993 | 17.71 | 3630 | 0.0067 | 0.9872 |
| 0.999 | 17.85 | 3660 | 0.0067 | 0.9872 |
| 0.9975 | 18.0 | 3690 | 0.0067 | 0.9872 |
| 0.9993 | 18.15 | 3720 | 0.0067 | 0.9872 |
| 1.0 | 18.29 | 3750 | 0.0067 | 0.9872 |
| 0.9983 | 18.44 | 3780 | 0.0067 | 0.9872 |
| 0.9994 | 18.59 | 3810 | 0.0067 | 0.9872 |
| 0.9993 | 18.73 | 3840 | 0.0067 | 0.9872 |
| 0.9982 | 18.88 | 3870 | 0.0067 | 0.9872 |
| 0.9997 | 19.02 | 3900 | 0.0067 | 0.9872 |
| 0.9955 | 19.17 | 3930 | 0.0067 | 0.9872 |
| 0.9992 | 19.32 | 3960 | 0.0067 | 0.9872 |
| 0.9592 | 19.46 | 3990 | 0.0067 | 0.9872 |
| 0.9897 | 19.61 | 4020 | 0.0067 | 0.9872 |
| 0.9994 | 19.76 | 4050 | 0.0067 | 0.9872 |
| 0.9989 | 19.9 | 4080 | 0.0067 | 0.9872 |
| 0.9995 | 20.05 | 4110 | 0.0067 | 0.9872 |
| 0.9995 | 20.2 | 4140 | 0.0067 | 0.9872 |
| 0.9938 | 20.34 | 4170 | 0.0067 | 0.9872 |
| 0.9987 | 20.49 | 4200 | 0.0067 | 0.9872 |
| 0.9999 | 20.63 | 4230 | 0.0067 | 0.9872 |
| 0.9994 | 20.78 | 4260 | 0.0067 | 0.9872 |
| 0.9954 | 20.93 | 4290 | 0.0067 | 0.9872 |
| 0.9975 | 21.07 | 4320 | 0.0067 | 0.9872 |
| 0.9997 | 21.22 | 4350 | 0.0067 | 0.9872 |
| 0.9978 | 21.37 | 4380 | 0.0067 | 0.9872 |
| 0.9994 | 21.51 | 4410 | 0.0067 | 0.9872 |
| 0.9985 | 21.66 | 4440 | 0.0067 | 0.9872 |
| 0.9998 | 21.8 | 4470 | 0.0067 | 0.9872 |
| 0.998 | 21.95 | 4500 | 0.0067 | 0.9872 |
| 0.9983 | 22.1 | 4530 | 0.0067 | 0.9872 |
| 0.9989 | 22.24 | 4560 | 0.0067 | 0.9872 |
| 0.9973 | 22.39 | 4590 | 0.0067 | 0.9872 |
| 0.9961 | 22.54 | 4620 | 0.0067 | 0.9872 |
| 0.9984 | 22.68 | 4650 | 0.0067 | 0.9872 |
| 1.0 | 22.83 | 4680 | 0.0067 | 0.9872 |
| 0.9949 | 22.98 | 4710 | 0.0067 | 0.9872 |
| 0.9989 | 23.12 | 4740 | 0.0067 | 0.9872 |
| 0.9998 | 23.27 | 4770 | 0.0067 | 0.9872 |
| 0.9999 | 23.41 | 4800 | 0.0067 | 0.9872 |
| 0.9996 | 23.56 | 4830 | 0.0067 | 0.9872 |
| 0.9974 | 23.71 | 4860 | 0.0067 | 0.9872 |
| 0.9997 | 23.85 | 4890 | 0.0067 | 0.9872 |
| 0.9999 | 24.0 | 4920 | 0.0067 | 0.9872 |
| 0.9962 | 24.15 | 4950 | 0.0067 | 0.9872 |
| 0.9996 | 24.29 | 4980 | 0.0067 | 0.9872 |
| 0.9999 | 24.44 | 5010 | 0.0067 | 0.9872 |
| 0.9973 | 24.59 | 5040 | 0.0067 | 0.9872 |
| 0.9996 | 24.73 | 5070 | 0.0067 | 0.9872 |
| 0.9995 | 24.88 | 5100 | 0.0067 | 0.9872 |
| 0.9999 | 25.02 | 5130 | 0.0067 | 0.9872 |
| 0.9988 | 25.17 | 5160 | 0.0067 | 0.9872 |
| 1.0 | 25.32 | 5190 | 0.0067 | 0.9872 |
| 1.0 | 25.46 | 5220 | 0.0067 | 0.9872 |
| 0.9996 | 25.61 | 5250 | 0.0067 | 0.9872 |
| 0.9965 | 25.76 | 5280 | 0.0067 | 0.9872 |
| 0.9976 | 25.9 | 5310 | 0.0067 | 0.9872 |
| 1.0 | 26.05 | 5340 | 0.0067 | 0.9872 |
| 0.9989 | 26.2 | 5370 | 0.0067 | 0.9872 |
| 0.9996 | 26.34 | 5400 | 0.0067 | 0.9872 |
| 0.9998 | 26.49 | 5430 | 0.0067 | 0.9872 |
| 1.0 | 26.63 | 5460 | 0.0067 | 0.9872 |
| 0.9996 | 26.78 | 5490 | 0.0067 | 0.9872 |
| 0.9972 | 26.93 | 5520 | 0.0067 | 0.9872 |
| 0.9984 | 27.07 | 5550 | 0.0067 | 0.9872 |
| 0.9961 | 27.22 | 5580 | 0.0067 | 0.9872 |
| 1.0 | 27.37 | 5610 | 0.0067 | 0.9872 |
| 0.9977 | 27.51 | 5640 | 0.0067 | 0.9872 |
| 0.9969 | 27.66 | 5670 | 0.0067 | 0.9872 |
| 0.9971 | 27.8 | 5700 | 0.0067 | 0.9872 |
| 0.9986 | 27.95 | 5730 | 0.0067 | 0.9872 |
| 0.9995 | 28.1 | 5760 | 0.0067 | 0.9872 |
| 0.9992 | 28.24 | 5790 | 0.0067 | 0.9872 |
| 0.9976 | 28.39 | 5820 | 0.0067 | 0.9872 |
| 0.9994 | 28.54 | 5850 | 0.0067 | 0.9872 |
| 0.998 | 28.68 | 5880 | 0.0067 | 0.9872 |
| 0.9952 | 28.83 | 5910 | 0.0067 | 0.9872 |
| 0.9998 | 28.98 | 5940 | 0.0067 | 0.9872 |
| 0.9937 | 29.12 | 5970 | 0.0067 | 0.9872 |
| 0.9989 | 29.27 | 6000 | 0.0067 | 0.9872 |
| 0.9993 | 29.41 | 6030 | 0.0067 | 0.9872 |
| 0.9989 | 29.56 | 6060 | 0.0067 | 0.9872 |
| 0.999 | 29.71 | 6090 | 0.0067 | 0.9872 |
| 0.9939 | 29.85 | 6120 | 0.0067 | 0.9872 |
| 1.0 | 30.0 | 6150 | 0.0067 | 0.9872 |
| 0.9996 | 30.15 | 6180 | 0.0067 | 0.9872 |
| 0.9994 | 30.29 | 6210 | 0.0067 | 0.9872 |
| 0.999 | 30.44 | 6240 | 0.0067 | 0.9872 |
| 1.0 | 30.59 | 6270 | 0.0067 | 0.9872 |
| 0.9956 | 30.73 | 6300 | 0.0067 | 0.9872 |
| 0.9971 | 30.88 | 6330 | 0.0067 | 0.9872 |
| 0.9985 | 31.02 | 6360 | 0.0067 | 0.9872 |
| 1.0 | 31.17 | 6390 | 0.0067 | 0.9872 |
| 0.9987 | 31.32 | 6420 | 0.0067 | 0.9872 |
| 0.9992 | 31.46 | 6450 | 0.0067 | 0.9872 |
| 0.9996 | 31.61 | 6480 | 0.0067 | 0.9872 |
| 0.9998 | 31.76 | 6510 | 0.0067 | 0.9872 |
| 0.9989 | 31.9 | 6540 | 0.0067 | 0.9872 |
| 1.0 | 32.05 | 6570 | 0.0067 | 0.9872 |
| 0.9966 | 32.2 | 6600 | 0.0067 | 0.9872 |
| 0.9994 | 32.34 | 6630 | 0.0067 | 0.9872 |
| 0.9987 | 32.49 | 6660 | 0.0067 | 0.9872 |
| 0.9993 | 32.63 | 6690 | 0.0067 | 0.9872 |
| 0.9971 | 32.78 | 6720 | 0.0067 | 0.9872 |
| 0.9971 | 32.93 | 6750 | 0.0067 | 0.9872 |
| 0.9929 | 33.07 | 6780 | 0.0067 | 0.9872 |
| 0.9997 | 33.22 | 6810 | 0.0067 | 0.9872 |
| 0.9978 | 33.37 | 6840 | 0.0067 | 0.9872 |
| 1.0 | 33.51 | 6870 | 0.0067 | 0.9872 |
| 0.9991 | 33.66 | 6900 | 0.0067 | 0.9872 |
| 0.9971 | 33.8 | 6930 | 0.0067 | 0.9872 |
| 0.9999 | 33.95 | 6960 | 0.0067 | 0.9872 |
| 0.9999 | 34.1 | 6990 | 0.0067 | 0.9872 |
| 0.9997 | 34.24 | 7020 | 0.0067 | 0.9872 |
| 1.0 | 34.39 | 7050 | 0.0067 | 0.9872 |
| 0.9986 | 34.54 | 7080 | 0.0067 | 0.9872 |
| 0.9996 | 34.68 | 7110 | 0.0067 | 0.9872 |
| 0.9994 | 34.83 | 7140 | 0.0067 | 0.9872 |
| 0.9997 | 34.98 | 7170 | 0.0067 | 0.9872 |
| 0.9999 | 35.12 | 7200 | 0.0067 | 0.9872 |
| 0.9969 | 35.27 | 7230 | 0.0067 | 0.9872 |
| 1.0 | 35.41 | 7260 | 0.0067 | 0.9872 |
| 0.9984 | 35.56 | 7290 | 0.0067 | 0.9872 |
| 0.9961 | 35.71 | 7320 | 0.0067 | 0.9872 |
| 0.9988 | 35.85 | 7350 | 0.0067 | 0.9872 |
| 0.9985 | 36.0 | 7380 | 0.0067 | 0.9872 |
| 0.9997 | 36.15 | 7410 | 0.0067 | 0.9872 |
| 1.0 | 36.29 | 7440 | 0.0067 | 0.9872 |
| 0.9987 | 36.44 | 7470 | 0.0067 | 0.9872 |
| 0.9991 | 36.59 | 7500 | 0.0067 | 0.9872 |
| 0.9992 | 36.73 | 7530 | 0.0067 | 0.9872 |
| 0.9999 | 36.88 | 7560 | 0.0067 | 0.9872 |
| 0.9996 | 37.02 | 7590 | 0.0067 | 0.9872 |
| 0.9995 | 37.17 | 7620 | 0.0067 | 0.9872 |
| 0.9998 | 37.32 | 7650 | 0.0067 | 0.9872 |
| 0.9969 | 37.46 | 7680 | 0.0067 | 0.9872 |
| 0.9989 | 37.61 | 7710 | 0.0067 | 0.9872 |
| 0.9992 | 37.76 | 7740 | 0.0067 | 0.9872 |
| 0.9959 | 37.9 | 7770 | 0.0067 | 0.9872 |
| 0.9987 | 38.05 | 7800 | 0.0067 | 0.9872 |
| 0.998 | 38.2 | 7830 | 0.0067 | 0.9872 |
| 0.9992 | 38.34 | 7860 | 0.0067 | 0.9872 |
| 0.9992 | 38.49 | 7890 | 0.0067 | 0.9872 |
| 0.9993 | 38.63 | 7920 | 0.0067 | 0.9872 |
| 0.9997 | 38.78 | 7950 | 0.0067 | 0.9872 |
| 0.9976 | 38.93 | 7980 | 0.0067 | 0.9872 |
| 1.0 | 39.07 | 8010 | 0.0067 | 0.9872 |
| 0.9959 | 39.22 | 8040 | 0.0067 | 0.9872 |
| 0.9973 | 39.37 | 8070 | 0.0067 | 0.9872 |
| 0.9996 | 39.51 | 8100 | 0.0067 | 0.9872 |
| 1.0 | 39.66 | 8130 | 0.0067 | 0.9872 |
| 0.9986 | 39.8 | 8160 | 0.0067 | 0.9872 |
| 0.9999 | 39.95 | 8190 | 0.0067 | 0.9872 |
### Framework versions
- Transformers 4.37.1
- Pytorch 2.1.2
- Datasets 2.16.1
- Tokenizers 0.15.1
|
jcruna/bert-finetuned-mrpc
|
jcruna
| 2024-02-03T07:27:24Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-03T06:36:21Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: bert-finetuned-mrpc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-mrpc
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4634
- Accuracy: 0.8848
- F1: 0.9174
## Model description
More information needed
## Intended uses & limitations
More information needed
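In the absence of further details, a minimal usage sketch, assuming (as the name suggests) an MRPC-style paraphrase classifier; with an untouched config the labels come back as `LABEL_0`/`LABEL_1`, where `LABEL_1` corresponds to "equivalent" in the GLUE label order:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="jcruna/bert-finetuned-mrpc")

pair = {
    "text": "The company reported strong quarterly earnings.",
    "text_pair": "Profits at the firm were higher than expected this quarter.",
}
print(classifier(pair))  # e.g. [{'label': 'LABEL_1', 'score': ...}]
```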
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 459 | 0.3401 | 0.8554 | 0.8977 |
| 0.5006 | 2.0 | 918 | 0.4634 | 0.8848 | 0.9174 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cpu
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Ngoctho/Chigiri
|
Ngoctho
| 2024-02-03T07:25:30Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2024-02-03T07:25:27Z |
---
license: bigscience-openrail-m
---
|
fterry/FofoNet-CatDolphin-PPT-slerp
|
fterry
| 2024-02-03T07:22:25Z | 7 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"rishiraj/CatPPT-base",
"HenryJJ/dolphin-2.6-mistral-7b-dpo-orca-v2",
"base_model:HenryJJ/dolphin-2.6-mistral-7b-dpo-orca-v2",
"base_model:merge:HenryJJ/dolphin-2.6-mistral-7b-dpo-orca-v2",
"base_model:rishiraj/CatPPT-base",
"base_model:merge:rishiraj/CatPPT-base",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-03T07:17:20Z |
---
tags:
- merge
- mergekit
- lazymergekit
- rishiraj/CatPPT-base
- HenryJJ/dolphin-2.6-mistral-7b-dpo-orca-v2
base_model:
- rishiraj/CatPPT-base
- HenryJJ/dolphin-2.6-mistral-7b-dpo-orca-v2
---
# FofoNet-CatDolphin-PPT-slerp
FofoNet-CatDolphin-PPT-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [rishiraj/CatPPT-base](https://huggingface.co/rishiraj/CatPPT-base)
* [HenryJJ/dolphin-2.6-mistral-7b-dpo-orca-v2](https://huggingface.co/HenryJJ/dolphin-2.6-mistral-7b-dpo-orca-v2)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: rishiraj/CatPPT-base
layer_range: [0, 32]
- model: HenryJJ/dolphin-2.6-mistral-7b-dpo-orca-v2
layer_range: [0, 32]
merge_method: slerp
base_model: rishiraj/CatPPT-base
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "fterry/FofoNet-CatDolphin-PPT-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
jeiku/Soul_3B_GGUF
|
jeiku
| 2024-02-03T07:15:48Z | 5 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"arxiv:2212.04089",
"base_model:jeiku/Futa_Erotica_StableLM",
"base_model:merge:jeiku/Futa_Erotica_StableLM",
"base_model:jeiku/Gnosis_256_StableLM",
"base_model:merge:jeiku/Gnosis_256_StableLM",
"base_model:jeiku/Humiliation_StableLM",
"base_model:merge:jeiku/Humiliation_StableLM",
"base_model:jeiku/LimaRP_StableLM",
"base_model:merge:jeiku/LimaRP_StableLM",
"base_model:jeiku/Rosa_v1_3B",
"base_model:merge:jeiku/Rosa_v1_3B",
"base_model:jeiku/Theory_of_Mind_128_StableLM",
"base_model:merge:jeiku/Theory_of_Mind_128_StableLM",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-02-03T06:53:07Z |
---
base_model:
- jeiku/Rosa_v1_3B
- jeiku/LimaRP_StableLM
- jeiku/Rosa_v1_3B
- jeiku/Gnosis_256_StableLM
- jeiku/Rosa_v1_3B
- jeiku/Rosa_v1_3B
- jeiku/Humiliation_StableLM
- jeiku/Rosa_v1_3B
- jeiku/Futa_Erotica_StableLM
- jeiku/Rosa_v1_3B
- jeiku/Theory_of_Mind_128_StableLM
library_name: transformers
tags:
- mergekit
- merge
---
# fatality
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using [jeiku/Rosa_v1_3B](https://huggingface.co/jeiku/Rosa_v1_3B) as a base.
### Models Merged
The following models were included in the merge:
* [jeiku/Rosa_v1_3B](https://huggingface.co/jeiku/Rosa_v1_3B) + [jeiku/LimaRP_StableLM](https://huggingface.co/jeiku/LimaRP_StableLM)
* [jeiku/Rosa_v1_3B](https://huggingface.co/jeiku/Rosa_v1_3B) + [jeiku/Gnosis_256_StableLM](https://huggingface.co/jeiku/Gnosis_256_StableLM)
* [jeiku/Rosa_v1_3B](https://huggingface.co/jeiku/Rosa_v1_3B) + [jeiku/Humiliation_StableLM](https://huggingface.co/jeiku/Humiliation_StableLM)
* [jeiku/Rosa_v1_3B](https://huggingface.co/jeiku/Rosa_v1_3B) + [jeiku/Futa_Erotica_StableLM](https://huggingface.co/jeiku/Futa_Erotica_StableLM)
* [jeiku/Rosa_v1_3B](https://huggingface.co/jeiku/Rosa_v1_3B) + [jeiku/Theory_of_Mind_128_StableLM](https://huggingface.co/jeiku/Theory_of_Mind_128_StableLM)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: task_arithmetic
base_model: jeiku/Rosa_v1_3B
models:
- model: jeiku/Rosa_v1_3B+jeiku/Futa_Erotica_StableLM
parameters:
weight: 0.75
- model: jeiku/Rosa_v1_3B+jeiku/Gnosis_256_StableLM
parameters:
weight: 0.95
- model: jeiku/Rosa_v1_3B+jeiku/Humiliation_StableLM
parameters:
weight: 0.5
- model: jeiku/Rosa_v1_3B+jeiku/Theory_of_Mind_128_StableLM
parameters:
weight: 0.75
- model: jeiku/Rosa_v1_3B+jeiku/LimaRP_StableLM
parameters:
weight: 0.65
dtype: float16
```
|
ThuyNT03/FoRC_S1_BERT
|
ThuyNT03
| 2024-02-03T07:09:41Z | 9 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-01T17:36:00Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: FoRC_S1_BERT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# FoRC_S1_BERT
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3170
- Accuracy: 0.6476
- F1: 0.6317
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 2.0182 | 2.0 | 2596 | 1.4831 | 0.6049 | 0.5673 |
| 1.1498 | 4.0 | 5192 | 1.3217 | 0.6439 | 0.6224 |
| 0.8597 | 6.0 | 7788 | 1.3170 | 0.6476 | 0.6317 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.1
|
jeiku/Soul_3B
|
jeiku
| 2024-02-03T06:51:40Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"stablelm_epoch",
"text-generation",
"mergekit",
"merge",
"conversational",
"custom_code",
"arxiv:2212.04089",
"base_model:jeiku/Futa_Erotica_StableLM",
"base_model:merge:jeiku/Futa_Erotica_StableLM",
"base_model:jeiku/Gnosis_256_StableLM",
"base_model:merge:jeiku/Gnosis_256_StableLM",
"base_model:jeiku/Humiliation_StableLM",
"base_model:merge:jeiku/Humiliation_StableLM",
"base_model:jeiku/LimaRP_StableLM",
"base_model:merge:jeiku/LimaRP_StableLM",
"base_model:jeiku/Rosa_v1_3B",
"base_model:merge:jeiku/Rosa_v1_3B",
"base_model:jeiku/Theory_of_Mind_128_StableLM",
"base_model:merge:jeiku/Theory_of_Mind_128_StableLM",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2024-02-03T06:43:58Z |
---
base_model:
- jeiku/Rosa_v1_3B
- jeiku/LimaRP_StableLM
- jeiku/Rosa_v1_3B
- jeiku/Gnosis_256_StableLM
- jeiku/Rosa_v1_3B
- jeiku/Rosa_v1_3B
- jeiku/Humiliation_StableLM
- jeiku/Rosa_v1_3B
- jeiku/Futa_Erotica_StableLM
- jeiku/Rosa_v1_3B
- jeiku/Theory_of_Mind_128_StableLM
library_name: transformers
tags:
- mergekit
- merge
---
# fatality
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using [jeiku/Rosa_v1_3B](https://huggingface.co/jeiku/Rosa_v1_3B) as a base.
### Models Merged
The following models were included in the merge:
* [jeiku/Rosa_v1_3B](https://huggingface.co/jeiku/Rosa_v1_3B) + [jeiku/LimaRP_StableLM](https://huggingface.co/jeiku/LimaRP_StableLM)
* [jeiku/Rosa_v1_3B](https://huggingface.co/jeiku/Rosa_v1_3B) + [jeiku/Gnosis_256_StableLM](https://huggingface.co/jeiku/Gnosis_256_StableLM)
* [jeiku/Rosa_v1_3B](https://huggingface.co/jeiku/Rosa_v1_3B) + [jeiku/Humiliation_StableLM](https://huggingface.co/jeiku/Humiliation_StableLM)
* [jeiku/Rosa_v1_3B](https://huggingface.co/jeiku/Rosa_v1_3B) + [jeiku/Futa_Erotica_StableLM](https://huggingface.co/jeiku/Futa_Erotica_StableLM)
* [jeiku/Rosa_v1_3B](https://huggingface.co/jeiku/Rosa_v1_3B) + [jeiku/Theory_of_Mind_128_StableLM](https://huggingface.co/jeiku/Theory_of_Mind_128_StableLM)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: task_arithmetic
base_model: jeiku/Rosa_v1_3B
models:
- model: jeiku/Rosa_v1_3B+jeiku/Futa_Erotica_StableLM
parameters:
weight: 0.75
- model: jeiku/Rosa_v1_3B+jeiku/Gnosis_256_StableLM
parameters:
weight: 0.95
- model: jeiku/Rosa_v1_3B+jeiku/Humiliation_StableLM
parameters:
weight: 0.5
- model: jeiku/Rosa_v1_3B+jeiku/Theory_of_Mind_128_StableLM
parameters:
weight: 0.75
- model: jeiku/Rosa_v1_3B+jeiku/LimaRP_StableLM
parameters:
weight: 0.65
dtype: float16
```
|
boruyang/ppo-Pyramids
|
boruyang
| 2024-02-03T06:47:04Z | 14 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2024-02-03T06:46:04Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: boruyang/ppo-Pyramids
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
r0in/Reinforce-Pixelcopter-PLE-v0-c1
|
r0in
| 2024-02-03T06:46:20Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-03T06:45:32Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0-c1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 23.10 +/- 13.23
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
LoneStriker/Blue-Orchid-2x7b-GPTQ
|
LoneStriker
| 2024-02-03T06:30:23Z | 58 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-03T06:27:16Z |
---
license: apache-2.0
---
**Blue-Orchid-2x7b**
GGUF: https://huggingface.co/nakodanei/Blue-Orchid-2x7b_GGUF
Roleplaying-focused MoE Mistral model.
One expert is a merge of mostly RP models, the other of mostly storywriting models, so it should be good at both. The base model is SanjiWatsuki/Kunoichi-DPO-v2-7B.
- Expert 1 is a merge of LimaRP, Limamono, Noromaid 0.4 DPO and good-robot.
- Expert 2 is a merge of Erebus, Holodeck, Dans-AdventurousWinds-Mk2, Opus, Ashhwriter and good-robot.
## Prompt template (LimaRP):
```
### Instruction:
{system prompt}
### Input:
User: {prompt}
### Response:
Character:
```
The Alpaca prompt template should work fine too.
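A minimal loading sketch, assuming a transformers release with the GPTQ integration (`optimum` + `auto-gptq` installed); the prompt simply fills in the LimaRP template above with illustrative text:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LoneStriker/Blue-Orchid-2x7b-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = (
    "### Instruction:\n"
    "You are a creative storyteller.\n\n"
    "### Input:\n"
    "User: Describe the orchid garden at dusk.\n\n"
    "### Response:\n"
    "Character:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```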
|
Gigazinie/240203_QA_model
|
Gigazinie
| 2024-02-03T06:28:23Z | 23 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"question-answering",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-02-03T05:39:54Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: 240203_QA_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 240203_QA_model
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6866
## Model description
More information needed
## Intended uses & limitations
More information needed
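In the absence of further details, a minimal usage sketch for the `question-answering` pipeline (question and context are illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Gigazinie/240203_QA_model")

result = qa(
    question="Which base model was fine-tuned?",
    context="240203_QA_model is a fine-tuned version of bert-base-uncased "
            "for extractive question answering.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```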
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 250 | 3.2768 |
| 3.3591 | 2.0 | 500 | 2.7866 |
| 3.3591 | 3.0 | 750 | 2.6866 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
jeiku/Furry_Request_3B_GGUF
|
jeiku
| 2024-02-03T06:23:52Z | 134 | 1 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"arxiv:2203.05482",
"base_model:jeiku/Furry_Request_StableLM",
"base_model:merge:jeiku/Furry_Request_StableLM",
"base_model:jeiku/Rosa_v1_3B",
"base_model:merge:jeiku/Rosa_v1_3B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-02-03T06:06:49Z |
---
base_model:
- jeiku/Rosa_v1_3B
- jeiku/Furry_Request_StableLM
library_name: transformers
tags:
- mergekit
- merge
---
# Furry
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
### Models Merged
The following models were included in the merge:
* [jeiku/Rosa_v1_3B](https://huggingface.co/jeiku/Rosa_v1_3B) + [jeiku/Furry_Request_StableLM](https://huggingface.co/jeiku/Furry_Request_StableLM)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: linear
models:
- model: jeiku/Rosa_v1_3B+jeiku/Furry_Request_StableLM
parameters:
weight: 1
dtype: float16
```
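Since this repo ships GGUF files, the merge can typically be run locally with llama.cpp or its Python bindings. A minimal sketch with `llama-cpp-python`; the file name and prompt format are hypothetical, so substitute whichever quantization you download:
```python
from llama_cpp import Llama

# Hypothetical file name; use the GGUF file you actually downloaded from this repo.
llm = Llama(model_path="./Furry_Request_3B.Q4_K_M.gguf", n_ctx=4096)

output = llm("### Instruction:\nWrite a short scene.\n\n### Response:\n", max_tokens=200)
print(output["choices"][0]["text"])
```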
|
sarthakharne/Phi1_5-PreTrained-4-epoch
|
sarthakharne
| 2024-02-03T06:18:36Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-03T06:16:18Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Thala007Dhoni/facedeep
|
Thala007Dhoni
| 2024-02-03T06:16:21Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-02-03T05:03:32Z |
# deepfake-detection
Identify images as real or fake using state-of-the-art AI models.
|
yoinked/merges
|
yoinked
| 2024-02-03T06:11:00Z | 0 | 7 | null |
[
"art",
"text-to-image",
"en",
"license:other",
"region:us"
] |
text-to-image
| 2023-03-26T23:51:40Z |
---
license: other
language:
- en
pipeline_tag: text-to-image
tags:
- art
---
some merges and/or ggml conversions
img: booru tags; use the `/awoo/` models preferably, as they're the best
all non-ggml models are licensed under yodayno v2:
```
This license allows you to use the model, but only for non-commercial purposes. You cannot use the model or any part of it in a paid service or sell it.
If you use the model on any platform, you must provide a link or reference to the original model. You must give credit to the licensor whenever you use the model.
The licensor does not provide any warranty and is not liable for any damages caused by the use of the model.
If you break any of the terms, this license will be terminated.
This license is governed by the laws of the jurisdiction in which the licensor is located.
```
|
sarthakharne/Phi1_5-PreTrained-2-epoch
|
sarthakharne
| 2024-02-03T06:09:54Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-03T06:07:35Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kichan05/Novel-Kaguya-Merge
|
kichan05
| 2024-02-03T06:06:34Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:42dot/42dot_LLM-SFT-1.3B",
"base_model:adapter:42dot/42dot_LLM-SFT-1.3B",
"region:us"
] | null | 2024-01-30T13:38:04Z |
---
library_name: peft
base_model: 42dot/42dot_LLM-SFT-1.3B
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2
- PEFT 0.8.1
|
sarthakharne/Phi1_5-PreTrained-1-epoch
|
sarthakharne
| 2024-02-03T06:04:56Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-03T06:02:33Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AKILESH18/lamam
|
AKILESH18
| 2024-02-03T06:04:31Z | 2 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-02T17:04:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
TTNVXX/CartoonOrNotV2
|
TTNVXX
| 2024-02-03T06:02:34Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"swin",
"image-classification",
"autotrain",
"dataset:CartoonOrNotV2/autotrain-data",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-02-03T06:02:14Z |
---
tags:
- autotrain
- image-classification
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
datasets:
- CartoonOrNotV2/autotrain-data
---
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: 0.15279646217823029
f1: 0.9732620320855614
precision: 0.9891304347826086
recall: 0.9578947368421052
auc: 0.9932718393922951
accuracy: 0.9739583333333334
|
JahnaviKumar/FGL_DevEmotionAnalysis
|
JahnaviKumar
| 2024-02-03T06:00:52Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-03T05:26:38Z |
This model is trained on comments from fast-growing programming languages on GitHub. The corresponding paper has been accepted at ICPC'24; for further details on the dataset, methodology, and results, please refer to https://doi.org/10.1145/3643916.3644422.
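The card ships no example code; the sketch below is an assumption based on the model's tags (PyTorch, RoBERTa, text classification) and the standard 🤗 `pipeline` API. The emitted label names depend on how the classifier was trained.
```python
from transformers import pipeline

# Hypothetical usage: score the emotion of a developer comment with this checkpoint.
classifier = pipeline("text-classification", model="JahnaviKumar/FGL_DevEmotionAnalysis")
print(classifier("This workaround is ugly, but it finally unblocks the release."))
```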
|
karawalla/aqmodel_20240204
|
karawalla
| 2024-02-03T05:53:35Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-03T05:49:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
blueapple8259/TinyKo-v5-c
|
blueapple8259
| 2024-02-03T05:48:31Z | 64 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"ko",
"dataset:maywell/korean_textbooks",
"dataset:nlpai-lab/kullm-v2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-03T05:32:37Z |
---
license: mit
datasets:
- maywell/korean_textbooks
- nlpai-lab/kullm-v2
language:
- ko
---
This model is [TinyKo-v5-b](https://huggingface.co/blueapple8259/TinyKo-v5-b) fine-tuned on the [kullm-v2](https://huggingface.co/datasets/nlpai-lab/kullm-v2) dataset.
Note: performance is very poor and the model hallucinates heavily.
## Model Information
model type: llama
hidden size: 6
hidden size: 127
num attention heads: 16
num key value heads: 4
|
SilentSpeak/torchnet
|
SilentSpeak
| 2024-02-03T05:41:01Z | 0 | 0 | null |
[
"en",
"license:gpl-3.0",
"region:us"
] | null | 2023-11-22T11:01:35Z |
---
license: gpl-3.0
language:
- en
metrics:
- wer
---
# LipNet Phonemes Predictors
The project was developed using Python 3.8 on Ubuntu 24.04 (Linux).
Run `python -m pip install -r requirements.txt` to make sure your dependencies match the ones used here.
The lists of video files used for training and validation when training normal LipNet (not phonemes prediction)
are in unseen_train.txt and unseen_test.txt respectively.
The datasets are zipped in lip/*.zip; unzip them into the same location and run `python main.py` to start training.
Hyperparameters are found in options.py.
## Project Setup
1. pull this repo using `git pull https://huggingface.co/SilentSpeak/torchnet phonemes`
2. initialize a python virtualenv for this project using `python3.8 -m venv venv`
3. initialize the virtualenv using `source venv/bin/activate`
4. run `python -m pip install -r requirements.txt` to get dependencies
5. install git LFS using `git lfs install`
6. pull the GRID dataset and saved tensorboard runs using `git lfs pull`
Following the project setup, you can run training as follows:
To run training for the LipNet phonemes predictor, run `python main.py`
To run training for the LipNet phonemes to text transformer predictor, run `python TransformerTrainer.py`
To run training for the LipNet-to-BiGRU-to-text transformer predictor, run `python TranslatorTrainer.py`
To run evaluation for the lipnet phonemes predictor + phonemes-to-text transformer end-to-end pipeline,
run `cd tests && python lipnet-pipeline.py`. The model weights used in `lipnet-pipeline.py` are included in the repo as
LFS files in the `saved-weights` folder.
The LRS2 dataset was too large to include in the repo, and access to the LRS2 dataset is conditional on accepting
the non-commercial usage license. However, the config file for training on the LRS2 dataset can be found in `options_lrs2.py`,
and the preprocessing code for the LRS2 dataset can be found in `scripts/extract_crop_lips_v2.py` and `scripts/generate_lsr2_train.py`.
The LRS2 dataset itself can be found at [https://www.robots.ox.ac.uk/~vgg/data/lip_reading/lrs2.html](https://www.robots.ox.ac.uk/~vgg/data/lip_reading/lrs2.html)
|
Imran1/MedChat3.5
|
Imran1
| 2024-02-03T05:39:25Z | 5 | 2 |
transformers, Unsloth, Peft, trl, accelerate, bitsandbytes
|
[
"transformers, Unsloth, Peft, trl, accelerate, bitsandbytes",
"safetensors",
"mistral",
"medical",
"language model",
"NLP",
"license:mit",
"region:us"
] | null | 2024-01-17T05:55:41Z |
---
library_name: transformers, Unsloth, Peft, trl, accelerate, bitsandbytes
tags:
- medical
- language model
- NLP
license: mit
---
# Model Card for MedChat3.5
## Model Details
### Model Description
MedChat3.5 is a specialized language model based on the OpenChat 3.5 architecture, fine-tuned for biomedical natural language processing (NLP) tasks. The model has been tailored using the Llama2-MedTuned-Instructions dataset, which includes approximately 200,000 samples specifically designed for instruction-based learning in biomedical contexts. The model excels in tasks such as Named Entity Recognition (NER), Relation Extraction (RE), Medical Natural Language Inference (NLI), Document Classification, and Question Answering (QA).
- **Developed by:** Imran Ullah
- **Model type:** Language Model (LM), fine-tuned for medical NLP
- **Language(s) (NLP):** English (Biomedical Text)
- **License:** [MIT]
- **Finetuned from model [optional]:** OpenChat 3.5
## Dataset Information
### Dataset Name: Llama2-MedTuned-Instructions
#### Dataset Description
Llama2-MedTuned-Instructions is an instruction-based dataset developed for training language models in biomedical NLP tasks. Comprising approximately 200,000 samples, the dataset guides models through tasks like Named Entity Recognition (NER), Relation Extraction (RE), Medical Natural Language Inference (NLI), Document Classification, and Question Answering (QA). It consolidates subsets from well-known biomedical datasets, ensuring a diverse and comprehensive training experience.
#### Source Datasets and Composition
- Named Entity Recognition (NER): NCBI-disease, BC5CDR-disease, BC5CDR-chem, BC2GM, JNLPBA, i2b2-2012
- Relation Extraction (RE): i2b2-2010, GAD
- Natural Language Inference (NLI): MedNLI
- Document Classification: Hallmarks of cancer (HoC)
- Question Answering (QA): ChatDoctor, PMC-Llama-Instructions
#### Prompting Strategy
Each sample in the dataset follows a three-part structure: Instruction, Input, and Output, facilitating instruction-based learning.
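For illustration only (the wording below is a made-up example, not an actual dataset sample), an instruction-based sample following this structure might look like:
```
Instruction: Identify all disease mentions in the input text and list them.
Input: The patient was diagnosed with type 2 diabetes and later developed diabetic retinopathy.
Output: type 2 diabetes, diabetic retinopathy
```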
#### Usage and Application
Ideal for training and evaluating models on biomedical NLP tasks, MedChat3.5 serves as a benchmark for assessing model performance in domain-specific tasks, comparing against established models like BioBERT and BioClinicalBERT.
## Inference Instructions
To use MedChat3.5 for inference, follow the provided code snippet using the `transformers` library. Make sure to install the necessary packages and authenticate using a Hugging Face API token. Adjust parameters like temperature, top-p, and top-k for the desired generation behavior. The model is optimized for tasks such as question answering and generating responses in biomedical contexts.
```python
# Example Inference Code
!pip install -q --upgrade git+https://github.com/huggingface/transformers.git
!pip install -q accelerate datasets bitsandbytes peft
# use your own Hugging Face secret token
from google.colab import userdata
hf_token = userdata.get('HF_TOKEN')

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer
path = "Imran1/MedChat3.5"
# Load base LLM model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
path,
low_cpu_mem_usage=True,
torch_dtype=torch.float16,
load_in_4bit=True,
token=hf_token,
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(path, token=hf_token)
tokenizer.eos_token_id = model.config.eos_token_id
tokenizer.pad_token = tokenizer.eos_token
streamer = TextStreamer(tokenizer)
tx = '''
GPT4 Correct Assistant: you are a stomach specialist.<|end_of_turn|>
GPT4 Correct User: What role does gastric acid play in the process of digestion, and how does the stomach regulate its secretion to maintain a healthy digestive environment?<|end_of_turn|>
GPT4 Correct Assistant:
'''
import warnings
warnings.filterwarnings('ignore') # Ignore all warnings
inputs = tokenizer(tx, return_tensors="pt", return_attention_mask=False).to('cuda')
generation_params = {
'max_new_tokens': 500,
'use_cache': True,
'do_sample': True,
'temperature': 0.7,
'top_p': 0.9,
'top_k': 50
}
outputs = model.generate(**inputs, **generation_params, streamer=streamer)
decoded_outputs = tokenizer.batch_decode(outputs)
# output
'''
<s>
GPT4 Correct Assistant: you are stomach specialist.<|end_of_turn|>
GPT4 Correct User: What role does gastric acid play in the process of digestion, and how does the stomach regulate its secretion to maintain a healthy digestive environment?<|end_of_turn|>
GPT4 Correct Assistant:
Gastric acid plays a crucial role in the process of digestion by breaking down food into its basic components. It is secreted by the cells lining the stomach, known as parietal cells, in response to the presence of food in the stomach.
The stomach regulates the secretion of gastric acid through a series of mechanisms that maintain a healthy digestive environment. The primary mechanism is the release of gastrin, a hormone produced by the stomach's G-cells in response to the presence of food. Gastrin stimulates the parietal cells to secrete gastric acid, which in turn aids in the breakdown of food.
The stomach also regulates the secretion of gastric acid through the release of histamine, which is produced by the ECL cells in response to the presence of food. Histamine acts on the parietal cells to stimulate gastric acid secretion.
Another mechanism involves the production of intrinsic factor, a protein produced by the stomach's mucous cells. Intrinsic factor is essential for the absorption of vitamin B12 in the small intestine. The production of intrinsic factor is regulated by gastric acid, which helps maintain a healthy balance of this essential nutrient.
Additionally, the stomach regulates the secretion of gastric acid through the release of somatostatin, a hormone produced by the D-cells of the stomach. Somatostatin inhibits gastric acid secretion, helping to maintain a healthy balance between acid production and neutralization.
In summary, the stomach regulates the secretion of gastric acid through a series of mechanisms that maintain a healthy digestive environment. These mechanisms include the release of gastrin, histamine, and intrinsic factor, as well as the release of somatostatin. By maintaining a balance between acid production and neutralization, the stomach ensures that the digestive environment remains conducive to proper digestion and absorption of nutrients.<|end_of_turn|>
'''
```
|
Gigazinie/QA_model
|
Gigazinie
| 2024-02-03T05:34:55Z | 13 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"question-answering",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-01-31T08:45:59Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: QA_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# QA_model
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8730
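The card does not include usage code; a minimal sketch, assuming the standard extractive question-answering pipeline applies to this BERT checkpoint:
```python
from transformers import pipeline

# Assumed usage: extractive QA with the fine-tuned checkpoint from this repo.
qa = pipeline("question-answering", model="Gigazinie/QA_model")
result = qa(
    question="What is the capital of France?",
    context="Paris is the capital and most populous city of France.",
)
print(result["answer"], result["score"])
```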
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 50 | 3.8162 |
| No log | 2.0 | 100 | 3.8578 |
| No log | 3.0 | 150 | 3.8730 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
matteo1997/5_images_dreambooth_lora_step1000
|
matteo1997
| 2024-02-03T05:24:53Z | 1 | 2 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-02-03T04:27:23Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: 'a green car in the forest'
output:
url:
"image_0.png"
- text: 'a green car in the forest'
output:
url:
"image_1.png"
- text: 'a green car in the forest'
output:
url:
"image_2.png"
- text: 'a green car in the forest'
output:
url:
"image_3.png"
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a blue car
license: openrail++
---
# SDXL LoRA DreamBooth - matteo1997/5_images_dreambooth_lora_step1000
<Gallery />
## Model description
These are matteo1997/5_images_dreambooth_lora_step1000 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a blue car to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](matteo1997/5_images_dreambooth_lora_step1000/tree/main) them in the Files & versions tab.
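A minimal inference sketch (an assumption, not part of the original card: it uses the standard diffusers LoRA-loading API for SDXL):
```python
import torch
from diffusers import DiffusionPipeline

# Load the SDXL base model, then attach these LoRA adaptation weights (sketch only).
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("matteo1997/5_images_dreambooth_lora_step1000")
image = pipe("a blue car parked by the sea", num_inference_steps=30).images[0]
image.save("blue_car.png")
```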
|
LoneStriker/Blue-Orchid-2x7b-6.0bpw-h6-exl2
|
LoneStriker
| 2024-02-03T05:20:33Z | 8 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-01T19:56:14Z |
---
license: apache-2.0
---
**Blue-Orchid-2x7b**
GGUF: https://huggingface.co/nakodanei/Blue-Orchid-2x7b_GGUF
Roleplaying focused MoE Mistral model.
One expert is a merge of mostly RP models, the other is a merge of mostly storywriting models. So it should be good at both. The base model is SanjiWatsuki/Kunoichi-DPO-v2-7B.
- Expert 1 is a merge of LimaRP, Limamono, Noromaid 0.4 DPO and good-robot.
- Expert 2 is a merge of Erebus, Holodeck, Dans-AdventurousWinds-Mk2, Opus, Ashhwriter and good-robot.
## Prompt template (LimaRP):
```
### Instruction:
{system prompt}
### Input:
User: {prompt}
### Response:
Character:
```
Alpaca prompt template should work fine too.
|
LoneStriker/Blue-Orchid-2x7b-3.0bpw-h6-exl2
|
LoneStriker
| 2024-02-03T05:07:19Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-01T19:45:12Z |
---
license: apache-2.0
---
**Blue-Orchid-2x7b**
GGUF: https://huggingface.co/nakodanei/Blue-Orchid-2x7b_GGUF
Roleplaying focused MoE Mistral model.
One expert is a merge of mostly RP models, the other is a merge of mostly storywriting models. So it should be good at both. The base model is SanjiWatsuki/Kunoichi-DPO-v2-7B.
- Expert 1 is a merge of LimaRP, Limamono, Noromaid 0.4 DPO and good-robot.
- Expert 2 is a merge of Erebus, Holodeck, Dans-AdventurousWinds-Mk2, Opus, Ashhwriter and good-robot.
## Prompt template (LimaRP):
```
### Instruction:
{system prompt}
### Input:
User: {prompt}
### Response:
Character:
```
Alpaca prompt template should work fine too.
|
kanishka/smolm-autoreg-bpe-counterfactual-babylm-pipps_and_keys_to_it_all_removal-seed_211-1e-3
|
kanishka
| 2024-02-03T05:04:37Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"opt",
"text-generation",
"generated_from_trainer",
"dataset:kanishka/counterfactual-babylm-pipps_and_keys_to_it_all_removal",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-02T06:33:53Z |
---
tags:
- generated_from_trainer
datasets:
- kanishka/counterfactual-babylm-pipps_and_keys_to_it_all_removal
metrics:
- accuracy
model-index:
- name: smolm-autoreg-bpe-counterfactual-babylm-pipps_and_keys_to_it_all_removal-seed_211-1e-3
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: kanishka/counterfactual-babylm-pipps_and_keys_to_it_all_removal
type: kanishka/counterfactual-babylm-pipps_and_keys_to_it_all_removal
metrics:
- name: Accuracy
type: accuracy
value: 0.40997045687548256
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smolm-autoreg-bpe-counterfactual-babylm-pipps_and_keys_to_it_all_removal-seed_211-1e-3
This model was trained from scratch on the kanishka/counterfactual-babylm-pipps_and_keys_to_it_all_removal dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4342
- Accuracy: 0.4100
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 64
- seed: 211
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 32000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 3.5982 | 1.0 | 18594 | 3.7814 | 0.3600 |
| 3.3842 | 2.0 | 37188 | 3.5917 | 0.3792 |
| 3.2578 | 3.0 | 55782 | 3.4820 | 0.3923 |
| 3.181 | 4.0 | 74376 | 3.4444 | 0.3975 |
| 3.127 | 5.0 | 92970 | 3.4062 | 0.4023 |
| 3.0853 | 6.0 | 111564 | 3.3876 | 0.4042 |
| 3.0444 | 7.0 | 130158 | 3.3845 | 0.4051 |
| 3.0164 | 8.0 | 148752 | 3.3997 | 0.4067 |
| 2.9875 | 9.0 | 167346 | 3.3890 | 0.4077 |
| 2.9637 | 10.0 | 185940 | 3.3966 | 0.4072 |
| 2.9414 | 11.0 | 204534 | 3.3861 | 0.4084 |
| 2.9102 | 12.0 | 223128 | 3.3732 | 0.4095 |
| 2.8918 | 13.0 | 241722 | 3.3955 | 0.4091 |
| 2.8738 | 14.0 | 260316 | 3.3978 | 0.4096 |
| 2.8518 | 15.0 | 278910 | 3.3918 | 0.4102 |
| 2.8325 | 16.0 | 297504 | 3.4144 | 0.4098 |
| 2.8187 | 17.0 | 316098 | 3.4153 | 0.4102 |
| 2.7944 | 18.0 | 334692 | 3.4143 | 0.4103 |
| 2.7783 | 19.0 | 353286 | 3.4294 | 0.4100 |
| 2.7617 | 20.0 | 371880 | 3.4342 | 0.4100 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
MAZINO2/ppo-LunarLander-v2
|
MAZINO2
| 2024-02-03T04:57:43Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-03T04:57:27Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 257.99 +/- 26.25
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's Files tab for the actual name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed to follow the usual SB3 Hub convention for this repo.
checkpoint = load_from_hub(repo_id="MAZINO2/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
rhplus0831/maid-yuzu-v2-mid
|
rhplus0831
| 2024-02-03T04:17:12Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"mergekit",
"merge",
"base_model:smelborp/MixtralOrochi8x7B",
"base_model:merge:smelborp/MixtralOrochi8x7B",
"base_model:ycros/BagelMIsteryTour-v2-8x7B",
"base_model:merge:ycros/BagelMIsteryTour-v2-8x7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-03T03:43:41Z |
---
base_model:
- smelborp/MixtralOrochi8x7B
- ycros/BagelMIsteryTour-v2-8x7B
library_name: transformers
tags:
- mergekit
- merge
---
# maid-yuzu-v2-mid
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [smelborp/MixtralOrochi8x7B](https://huggingface.co/smelborp/MixtralOrochi8x7B)
* [ycros/BagelMIsteryTour-v2-8x7B](https://huggingface.co/ycros/BagelMIsteryTour-v2-8x7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model:
model:
path: smelborp/MixtralOrochi8x7B
dtype: bfloat16
merge_method: slerp
parameters:
t:
- value: 0.375
slices:
- sources:
- layer_range: [0, 32]
model:
model:
path: smelborp/MixtralOrochi8x7B
- layer_range: [0, 32]
model:
model:
path: ycros/BagelMIsteryTour-v2-8x7B
```
|
Crystalcareai/CrystalMiniCPM
|
Crystalcareai
| 2024-02-03T04:07:55Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"minicpm",
"text-generation",
"generated_from_trainer",
"conversational",
"custom_code",
"base_model:openbmb/MiniCPM-2B-sft-bf16",
"base_model:finetune:openbmb/MiniCPM-2B-sft-bf16",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2024-02-03T04:06:10Z |
---
base_model: openbmb/MiniCPM-2B-sft-bf16
tags:
- generated_from_trainer
model-index:
- name: qlora-out
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: openbmb/MiniCPM-2B-sft-bf16
load_in_8bit: false
load_in_4bit: false
strict: false
push_dataset_to_hub:
datasets:
- path: teknium/GPT4-LLM-Cleaned
type: alpaca
dataset_prepared_path:
val_set_size: 0.05
adapter:
lora_model_dir:
sequence_len: 4096
max_packed_sequence_len:
lora_r: 8
lora_alpha: 32
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
output_dir: ./qlora-out
gradient_accumulation_steps: 2
micro_batch_size: 2
num_epochs: 1.5
optimizer: paged_adamw_8bit
torchdistx_path:
lr_scheduler: cosine
learning_rate: 0.0001
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: true
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention: true
flash_attention:
gptq_groupsize:
gptq_model_v1:
warmup_steps: 10
evals_per_epoch: 2
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.1
fsdp:
fsdp_config:
special_tokens:
trust_remote_code: true
```
</details><br>
# qlora-out
This model is a fine-tuned version of [openbmb/MiniCPM-2B-sft-bf16](https://huggingface.co/openbmb/MiniCPM-2B-sft-bf16) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0525
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 1.5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0903 | 0.0 | 1 | 1.7199 |
| 0.8959 | 0.5 | 1620 | 1.1007 |
| 0.995 | 1.0 | 3240 | 1.0342 |
| 0.864 | 1.5 | 4860 | 1.0525 |
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
YoelCanaza/base-beans-classification-vit-model-yoel
|
YoelCanaza
| 2024-02-03T04:03:54Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-01-23T08:16:35Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
widget:
- src: https://huggingface.co/YoelCanaza/base-beans-classification-vit-model-yoel/resolve/main/healthy.jpeg
example_title: Healthy
- src: https://huggingface.co/YoelCanaza/base-beans-classification-vit-model-yoel/resolve/main/bean_rust.jpeg
example_title: Bean Rust
model-index:
- name: prueba-vit-model-yoel
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# prueba-vit-model-yoel
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0081
- Accuracy: 1.0
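A usage sketch (not part of the original card; it assumes the standard image-classification pipeline and uses one of the widget images from the metadata):
```python
from transformers import pipeline

# Hedged example: classify one of the widget images with this fine-tuned ViT checkpoint.
classifier = pipeline("image-classification", model="YoelCanaza/base-beans-classification-vit-model-yoel")
url = "https://huggingface.co/YoelCanaza/base-beans-classification-vit-model-yoel/resolve/main/healthy.jpeg"
print(classifier(url))
```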
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0212 | 3.85 | 500 | 0.0081 | 1.0 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.13.3
|
AdAstra1/q-Taxi-v1
|
AdAstra1
| 2024-02-03T04:01:29Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-03T04:01:26Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="AdAstra1/q-Taxi-v1", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
AdAstra1/q-FrozenLake-v1-4x4-noSlippery
|
AdAstra1
| 2024-02-03T04:00:53Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-03T03:45:45Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="AdAstra1/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
ruanwz/autotrain-image-classification-for-slides-240203
|
ruanwz
| 2024-02-03T03:52:46Z | 344 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"autotrain",
"dataset:autotrain-image-classification-for-slides-240203/autotrain-data",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-02-03T03:51:13Z |
---
tags:
- autotrain
- image-classification
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
datasets:
- autotrain-image-classification-for-slides-240203/autotrain-data
---
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: 0.31477582454681396
f1: 0.7499999999999999
precision: 1.0
recall: 0.6
auc: 0.915
accuracy: 0.8666666666666667
|
acrastt/Bean-3B
|
acrastt
| 2024-02-03T03:36:26Z | 1,522 | 2 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"dataset:64bits/lima_vicuna_format",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-02T00:06:46Z |
---
language:
- en
license: apache-2.0
library_name: transformers
datasets:
- 64bits/lima_vicuna_format
pipeline_tag: text-generation
model-index:
- name: Bean-3B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 40.36
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/Bean-3B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 72.0
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/Bean-3B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 26.43
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/Bean-3B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 36.11
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/Bean-3B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.67
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/Bean-3B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 0.53
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/Bean-3B
name: Open LLM Leaderboard
---
<a href="https://www.buymeacoffee.com/acrastt" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>
This is [OpenLLaMA 3B V2](https://huggingface.co/openlm-research/open_llama_3b_v2) finetuned on [LIMA(ShareGPT format)](https://huggingface.co/datasets/64bits/lima_vicuna_format) for 2 epochs.
Prompt template:
```
### HUMAN:
{prompt}
### RESPONSE:
<leave a newline for the model to answer>
```
GGUF quantizations available [here](https://huggingface.co/maddes8cht/acrastt-Bean-3B-gguf).
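For reference, a minimal generation sketch that applies this template with 🤗 transformers (an illustration, not from the original card):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hedged sketch: wrap a question in the ### HUMAN / ### RESPONSE template and generate a reply.
tokenizer = AutoTokenizer.from_pretrained("acrastt/Bean-3B")
model = AutoModelForCausalLM.from_pretrained("acrastt/Bean-3B")
prompt = "### HUMAN:\nWhat is instruction tuning?\n### RESPONSE:\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```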
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_acrastt__Bean-3B)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 40.18 |
| ARC (25-shot) | 40.36 |
| HellaSwag (10-shot) | 72.0 |
| MMLU (5-shot) | 26.43 |
| TruthfulQA (0-shot) | 36.11 |
| Winogrande (5-shot) | 65.67 |
| GSM8K (5-shot) | 0.53 |
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_acrastt__Bean-3B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |40.18|
|AI2 Reasoning Challenge (25-Shot)|40.36|
|HellaSwag (10-Shot) |72.00|
|MMLU (5-Shot) |26.43|
|TruthfulQA (0-shot) |36.11|
|Winogrande (5-shot) |65.67|
|GSM8k (5-shot) | 0.53|
|
acrastt/Marx-3B
|
acrastt
| 2024-02-03T03:34:32Z | 2,261 | 13 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"dataset:totally-not-an-llm/everything-sharegptformat-morecleaned",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-15T18:23:34Z |
---
language:
- en
license: apache-2.0
datasets:
- totally-not-an-llm/everything-sharegptformat-morecleaned
pipeline_tag: text-generation
model-index:
- name: Marx-3B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 43.17
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/Marx-3B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 72.68
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/Marx-3B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 28.46
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/Marx-3B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 39.09
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/Marx-3B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.59
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/Marx-3B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 1.29
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/Marx-3B
name: Open LLM Leaderboard
---
<a href="https://www.buymeacoffee.com/acrastt" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>
This is [OpenLLaMA 3B V2](https://huggingface.co/openlm-research/open_llama_3b_v2) finetuned on [EverythingLM Data (ShareGPT format, more cleaned)](https://huggingface.co/datasets/totally-not-an-llm/everything-sharegptformat-morecleaned) for 1 epoch.
Prompt template:
```
### HUMAN:
{prompt}
### RESPONSE:
<leave a newline for the model to answer>
```
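For convenience, here is a minimal sketch (not from the model author) of applying this template with 🤗 Transformers; the example question and sampling settings are illustrative assumptions:
```python
# Minimal sketch: build the HUMAN/RESPONSE prompt above and generate with Transformers.
# The question and sampling settings are illustrative, not author recommendations.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "acrastt/Marx-3B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

question = "What is the capital of France?"
prompt = f"### HUMAN:\n{question}\n\n### RESPONSE:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```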
GGML quants available [here](https://huggingface.co/TheBloke/Marx-3b-GGML).<br/>
GPTQ quants available [here](https://huggingface.co/TheBloke/Marx-3b-GPTQ).
Note: Don't expect this model to be good; I was just starting out with finetuning, so please don't roast me!
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_acrastt__Marx-3B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |41.71|
|AI2 Reasoning Challenge (25-Shot)|43.17|
|HellaSwag (10-Shot) |72.68|
|MMLU (5-Shot) |28.46|
|TruthfulQA (0-shot) |39.09|
|Winogrande (5-shot) |65.59|
|GSM8k (5-shot) | 1.29|
|
rashikadabas/t5-base-news-finetuned
|
rashikadabas
| 2024-02-03T03:10:50Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"summarization",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
summarization
| 2024-02-01T07:41:47Z |
---
license: apache-2.0
tags:
- summarization
---
|
saishf/Kuno-Lake-7B-GGUF
|
saishf
| 2024-02-03T03:09:47Z | 11 | 2 | null |
[
"gguf",
"mergekit",
"merge",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:SanjiWatsuki/Kunoichi-DPO-v2-7B",
"base_model:merge:SanjiWatsuki/Kunoichi-DPO-v2-7B",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:merge:mistralai/Mistral-7B-v0.1",
"base_model:senseable/WestLake-7B-v2",
"base_model:merge:senseable/WestLake-7B-v2",
"endpoints_compatible",
"region:us"
] | null | 2024-02-03T02:33:23Z |
---
base_model:
- mistralai/Mistral-7B-v0.1
- senseable/WestLake-7B-v2
- SanjiWatsuki/Kunoichi-DPO-v2-7B
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) as the base model.
### Models Merged
The following models were included in the merge:
* [senseable/WestLake-7B-v2](https://huggingface.co/senseable/WestLake-7B-v2)
* [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
# No parameters necessary for base model
- model: senseable/WestLake-7B-v2
parameters:
density: 0.53
weight: 0.65
- model: SanjiWatsuki/Kunoichi-DPO-v2-7B
parameters:
density: 0.53
weight: 0.35
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
int8_mask: true
dtype: bfloat16
```
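Since this repository ships GGUF quantizations, one way to run the merge locally is with llama-cpp-python. This is a hedged sketch; the `.gguf` filename below is a placeholder, so check the Files tab for the actual quant names:
```python
# Hedged sketch: download one GGUF quant of this merge and run it with llama-cpp-python.
# The filename is a placeholder assumption; pick a real .gguf file from the repo's Files tab.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="saishf/Kuno-Lake-7B-GGUF",
    filename="kuno-lake-7b.Q4_K_M.gguf",  # placeholder quant name
)
llm = Llama(model_path=gguf_path, n_ctx=4096, n_gpu_layers=-1)  # -1 offloads all layers if built with GPU support
out = llm("Write a one-sentence greeting.", max_tokens=64)
print(out["choices"][0]["text"])
```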
|
ND911/Franken-Maid-Slerp
|
ND911
| 2024-02-03T03:09:19Z | 5 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"SanjiWatsuki/Loyal-Toppy-Bruins-Maid-7B-DARE",
"ND911/EE-LMaid-7B-Slerp",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-03T03:02:48Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- SanjiWatsuki/Loyal-Toppy-Bruins-Maid-7B-DARE
- ND911/EE-LMaid-7B-Slerp
---

Experimental RP merge, tested in SillyTavern with Min-P sampling.
Based on SanjiWatsuki/Loyal-Macaroni-Maid-7B merged with ND911/EE-Maid-7B-Slerp, which is itself a merge of SanjiWatsuki/Silicon-Maid-7B and maywell/Synatra-7B-v0.3-RP.
EE-LMaid-7B-Slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [SanjiWatsuki/Loyal-Macaroni-Maid-7B](https://huggingface.co/SanjiWatsuki/Loyal-Macaroni-Maid-7B)
* [ND911/EE-Maid-7B-Slerp](https://huggingface.co/ND911/EE-Maid-7B-Slerp)
# Franken-Maid-Slerp
Franken-Maid-Slerp is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [SanjiWatsuki/Loyal-Toppy-Bruins-Maid-7B-DARE](https://huggingface.co/SanjiWatsuki/Loyal-Toppy-Bruins-Maid-7B-DARE)
* [ND911/EE-LMaid-7B-Slerp](https://huggingface.co/ND911/EE-LMaid-7B-Slerp)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: SanjiWatsuki/Loyal-Toppy-Bruins-Maid-7B-DARE
layer_range: [0, 32]
- model: ND911/EE-LMaid-7B-Slerp
layer_range: [0, 32]
merge_method: slerp
base_model: ND911/EE-LMaid-7B-Slerp
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
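The card does not include a usage section; below is a minimal Transformers sketch in the style of similar merge cards. It is an illustration rather than an author-provided snippet, and the prompt format should be adapted to your frontend (e.g. SillyTavern) as needed:
```python
# Minimal sketch (not author-provided): load the merge with a Transformers pipeline.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="ND911/Franken-Maid-Slerp",
    torch_dtype=torch.float16,
    device_map="auto",
)
out = pipe("Write a short in-character greeting.", max_new_tokens=128, do_sample=True, temperature=0.7)
print(out[0]["generated_text"])
```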
|
robbie0/vntl-7b-v0.3.1-hf-exl2
|
robbie0
| 2024-02-03T03:02:45Z | 14 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"translation",
"ja",
"en",
"dataset:lmg-anon/VNTL-v2.5-1k",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
translation
| 2024-02-02T18:30:25Z |
---
license: llama2
datasets:
- lmg-anon/VNTL-v2.5-1k
language:
- ja
- en
pipeline_tag: translation
---
# VNTL v0.3.1 EXL2 quantization branches
- main (4.0bpw)
- 5.6bpw
- 8.0bpw
original (unquantized): <https://huggingface.co/lmg-anon/vntl-7b-v0.3.1-hf>
---------
This is a merge of the [experimental VNTL v0.3.1 lora](https://huggingface.co/lmg-anon/vntl-7b-v0.3.1-lora) created using the [VNTL-v2.5-1k](https://huggingface.co/datasets/lmg-anon/VNTL-v2.5-1k) dataset.
This is a prompt example:
```
<<START>>
Name: Uryuu Shingo (瓜生 新吾) | Gender: Male | Aliases: Onii-chan (お兄ちゃん)
Name: Uryuu Sakuno (瓜生 桜乃) | Gender: Female
<<JAPANESE>>
[桜乃]: 『……ごめん』
<<ENGLISH>> (fidelity = absolute)
[Sakuno]: 『... Sorry.』</s>
<<JAPANESE>>
[新吾]: 「ううん、こう言っちゃなんだけど、迷子でよかったよ。桜乃は可愛いから、いろいろ心配しちゃってたんだぞ俺」
<<ENGLISH>> (fidelity = high)
```
The generated translation for that prompt, with temperature 0, is:
```
[Shingo]: 「No, don't apologize. I'm just glad you're safe. You're so cute, Sakuno, I was worried sick.」
```
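For reference, the prompt above can be reproduced with the unquantized HF checkpoint through Transformers (the EXL2 branches in this repo require an exllamav2-based loader instead). This is a hedged sketch, not an official snippet:
```python
# Hedged sketch: greedy decoding (≈ temperature 0) of the card's example prompt with the
# unquantized checkpoint. Use an exllamav2-based loader for the EXL2 branches instead.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lmg-anon/vntl-7b-v0.3.1-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = (
    "<<START>>\n"
    "Name: Uryuu Shingo (瓜生 新吾) | Gender: Male | Aliases: Onii-chan (お兄ちゃん)\n"
    "Name: Uryuu Sakuno (瓜生 桜乃) | Gender: Female\n"
    "<<JAPANESE>>\n"
    "[桜乃]: 『……ごめん』\n"
    "<<ENGLISH>> (fidelity = absolute)\n"
    "[Sakuno]: 『... Sorry.』</s>\n"
    "<<JAPANESE>>\n"
    "[新吾]: 「ううん、こう言っちゃなんだけど、迷子でよかったよ。桜乃は可愛いから、いろいろ心配しちゃってたんだぞ俺」\n"
    "<<ENGLISH>> (fidelity = high)\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=80, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```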
|
InfinityLai/NeuralPipe-7B-slerp
|
InfinityLai
| 2024-02-03T03:01:55Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"OpenPipe/mistral-ft-optimized-1218",
"mlabonne/NeuralHermes-2.5-Mistral-7B",
"base_model:OpenPipe/mistral-ft-optimized-1218",
"base_model:merge:OpenPipe/mistral-ft-optimized-1218",
"base_model:mlabonne/NeuralHermes-2.5-Mistral-7B",
"base_model:merge:mlabonne/NeuralHermes-2.5-Mistral-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-03T02:57:47Z |
---
tags:
- merge
- mergekit
- lazymergekit
- OpenPipe/mistral-ft-optimized-1218
- mlabonne/NeuralHermes-2.5-Mistral-7B
base_model:
- OpenPipe/mistral-ft-optimized-1218
- mlabonne/NeuralHermes-2.5-Mistral-7B
---
# NeuralPipe-7B-slerp
NeuralPipe-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218)
* [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: OpenPipe/mistral-ft-optimized-1218
layer_range: [0, 32]
- model: mlabonne/NeuralHermes-2.5-Mistral-7B
layer_range: [0, 32]
merge_method: slerp
base_model: OpenPipe/mistral-ft-optimized-1218
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "InfinityLai/NeuralPipe-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
matteo1997/lora-trained-xl
|
matteo1997
| 2024-02-03T02:51:03Z | 2 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-01-30T06:23:49Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: 'a pink car driven on the expressway'
output:
url:
"image_0.png"
- text: 'a pink car driven on the expressway'
output:
url:
"image_1.png"
- text: 'a pink car driven on the expressway'
output:
url:
"image_2.png"
- text: 'a pink car driven on the expressway'
output:
url:
"image_3.png"
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a blue car
license: openrail++
---
# SDXL LoRA DreamBooth - matteo1997/lora-trained-xl
<Gallery />
## Model description
These are matteo1997/lora-trained-xl LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a blue car` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](matteo1997/lora-trained-xl/tree/main) them in the Files & versions tab.
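A hedged diffusers sketch for trying these weights (the trigger phrase follows the card; the rest of the prompt and the settings are illustrative):
```python
# Hedged sketch: attach this LoRA to the SDXL base model and generate with the trigger phrase.
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

# The card notes madebyollin/sdxl-vae-fp16-fix was used during training; reusing it at
# inference time is optional but avoids fp16 VAE artifacts.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("matteo1997/lora-trained-xl")

# Include the trigger phrase "a blue car"; the rest of the prompt is an illustrative assumption.
image = pipe("a blue car parked by the sea", num_inference_steps=30).images[0]
image.save("blue_car.png")
```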
|
Askahoward/NeuralPipe-7B-slerp
|
Askahoward
| 2024-02-03T02:40:17Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"OpenPipe/mistral-ft-optimized-1218",
"mlabonne/NeuralHermes-2.5-Mistral-7B",
"base_model:OpenPipe/mistral-ft-optimized-1218",
"base_model:merge:OpenPipe/mistral-ft-optimized-1218",
"base_model:mlabonne/NeuralHermes-2.5-Mistral-7B",
"base_model:merge:mlabonne/NeuralHermes-2.5-Mistral-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-03T02:35:15Z |
---
tags:
- merge
- mergekit
- lazymergekit
- OpenPipe/mistral-ft-optimized-1218
- mlabonne/NeuralHermes-2.5-Mistral-7B
base_model:
- OpenPipe/mistral-ft-optimized-1218
- mlabonne/NeuralHermes-2.5-Mistral-7B
---
# NeuralPipe-7B-slerp
NeuralPipe-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218)
* [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: OpenPipe/mistral-ft-optimized-1218
layer_range: [0, 32]
- model: mlabonne/NeuralHermes-2.5-Mistral-7B
layer_range: [0, 32]
merge_method: slerp
base_model: OpenPipe/mistral-ft-optimized-1218
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Askahoward/NeuralPipe-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
bart-automation/sft_zephyr
|
bart-automation
| 2024-02-03T02:34:38Z | 2 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:HuggingFaceH4/zephyr-7b-alpha",
"base_model:adapter:HuggingFaceH4/zephyr-7b-alpha",
"license:mit",
"region:us"
] | null | 2024-02-03T02:34:23Z |
---
license: mit
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: HuggingFaceH4/zephyr-7b-alpha
model-index:
- name: sft_zephyr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft_zephyr
This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-alpha](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 5
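For orientation, here is a hedged sketch of the kind of TRL/PEFT setup these hyperparameters imply; the dataset, adapter settings, and sequence length are assumptions, since the card leaves them unspecified (newer TRL versions move some of these arguments into `SFTConfig`):
```python
# Hedged sketch of the training setup implied by the hyperparameters above.
# Dataset path, LoRA settings, and max_seq_length are assumptions, not from the card.
from datasets import load_dataset
from peft import LoraConfig
from transformers import TrainingArguments
from trl import SFTTrainer

dataset = load_dataset("json", data_files="train.jsonl", split="train")  # placeholder dataset

args = TrainingArguments(
    output_dir="sft_zephyr",
    learning_rate=2e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    lr_scheduler_type="constant",
    num_train_epochs=5,
    seed=42,
)

trainer = SFTTrainer(
    model="HuggingFaceH4/zephyr-7b-alpha",
    args=args,
    train_dataset=dataset,
    peft_config=LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"),  # assumed adapter settings
    dataset_text_field="text",   # assumed column name
    max_seq_length=1024,         # assumption
)
trainer.train()
```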
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
shuaigetw/NeuralPipe-7B-slerp
|
shuaigetw
| 2024-02-03T02:30:33Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"OpenPipe/mistral-ft-optimized-1218",
"mlabonne/NeuralHermes-2.5-Mistral-7B",
"base_model:OpenPipe/mistral-ft-optimized-1218",
"base_model:merge:OpenPipe/mistral-ft-optimized-1218",
"base_model:mlabonne/NeuralHermes-2.5-Mistral-7B",
"base_model:merge:mlabonne/NeuralHermes-2.5-Mistral-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-03T02:27:00Z |
---
tags:
- merge
- mergekit
- lazymergekit
- OpenPipe/mistral-ft-optimized-1218
- mlabonne/NeuralHermes-2.5-Mistral-7B
base_model:
- OpenPipe/mistral-ft-optimized-1218
- mlabonne/NeuralHermes-2.5-Mistral-7B
---
# NeuralPipe-7B-slerp
NeuralPipe-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218)
* [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: OpenPipe/mistral-ft-optimized-1218
layer_range: [0, 32]
- model: mlabonne/NeuralHermes-2.5-Mistral-7B
layer_range: [0, 32]
merge_method: slerp
base_model: OpenPipe/mistral-ft-optimized-1218
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "shuaigetw/NeuralPipe-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
frankc350/NeuralPipe-7B-slerp
|
frankc350
| 2024-02-03T02:28:03Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"OpenPipe/mistral-ft-optimized-1218",
"mlabonne/NeuralHermes-2.5-Mistral-7B",
"base_model:OpenPipe/mistral-ft-optimized-1218",
"base_model:merge:OpenPipe/mistral-ft-optimized-1218",
"base_model:mlabonne/NeuralHermes-2.5-Mistral-7B",
"base_model:merge:mlabonne/NeuralHermes-2.5-Mistral-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-03T02:23:45Z |
---
tags:
- merge
- mergekit
- lazymergekit
- OpenPipe/mistral-ft-optimized-1218
- mlabonne/NeuralHermes-2.5-Mistral-7B
base_model:
- OpenPipe/mistral-ft-optimized-1218
- mlabonne/NeuralHermes-2.5-Mistral-7B
---
# NeuralPipe-7B-slerp
NeuralPipe-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218)
* [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: OpenPipe/mistral-ft-optimized-1218
layer_range: [0, 32]
- model: mlabonne/NeuralHermes-2.5-Mistral-7B
layer_range: [0, 32]
merge_method: slerp
base_model: OpenPipe/mistral-ft-optimized-1218
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "frankc350/NeuralPipe-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
modelwizard/mink
|
modelwizard
| 2024-02-03T02:15:04Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-03T02:12:21Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
weimenglin/NeuralPipe-7B-slerp
|
weimenglin
| 2024-02-03T02:13:29Z | 7 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"OpenPipe/mistral-ft-optimized-1218",
"mlabonne/NeuralHermes-2.5-Mistral-7B",
"base_model:OpenPipe/mistral-ft-optimized-1218",
"base_model:merge:OpenPipe/mistral-ft-optimized-1218",
"base_model:mlabonne/NeuralHermes-2.5-Mistral-7B",
"base_model:merge:mlabonne/NeuralHermes-2.5-Mistral-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-03T02:09:14Z |
---
tags:
- merge
- mergekit
- lazymergekit
- OpenPipe/mistral-ft-optimized-1218
- mlabonne/NeuralHermes-2.5-Mistral-7B
base_model:
- OpenPipe/mistral-ft-optimized-1218
- mlabonne/NeuralHermes-2.5-Mistral-7B
---
# NeuralPipe-7B-slerp
NeuralPipe-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218)
* [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: OpenPipe/mistral-ft-optimized-1218
layer_range: [0, 32]
- model: mlabonne/NeuralHermes-2.5-Mistral-7B
layer_range: [0, 32]
merge_method: slerp
base_model: OpenPipe/mistral-ft-optimized-1218
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "weimenglin/NeuralPipe-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|