modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---
mahmoudelaref/dilbert-lora
|
mahmoudelaref
| 2023-06-26T18:54:46Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-06-26T18:48:37Z |
## Dilbert_Lora
Stable Diffusion model fine-tuned with LoRA on Dilbert comics by mahmoudelaref
|
SAcharya123123123Wafflehouse/simply-law-textgen-v1.0
|
SAcharya123123123Wafflehouse
| 2023-06-26T18:54:40Z | 0 | 0 |
keras
|
[
"keras",
"tf-keras",
"region:us"
] | null | 2023-06-26T18:54:31Z |
---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | True |
| is_legacy_optimizer | False |
| learning_rate | 0.0010000000474974513 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
## Model Plot
<details>
<summary>View Model Plot</summary>

</details>
|
mahmoudelaref/dilbert-dreambooth
|
mahmoudelaref
| 2023-06-26T18:52:54Z | 29 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-17T02:33:28Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
## Dilbert_Dreambooth
Stable Diffusion model fine-tuned with DreamBooth on Dilbert comics by mahmoudelaref
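The card gives no usage snippet. A minimal loading sketch with 🤗 Diffusers (the helper name and the example prompt are illustrative; the trigger tokens used during DreamBooth training are not documented in this card):

```python
MODEL_ID = "mahmoudelaref/dilbert-dreambooth"

def load_pipeline(device: str = "cuda"):
    # Imports kept local so the sketch reads without torch/diffusers installed.
    import torch
    from diffusers import StableDiffusionPipeline
    # Load the fine-tuned Stable Diffusion checkpoint from the Hub.
    pipe = StableDiffusionPipeline.from_pretrained(MODEL_ID, torch_dtype=torch.float16)
    return pipe.to(device)

if __name__ == "__main__":
    pipe = load_pipeline()
    # Illustrative prompt; adjust to whatever tokens the model was trained on.
    image = pipe("a dilbert comic panel about meetings").images[0]
    image.save("dilbert.png")
```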
|
Keithulu/distilgpt2-finetuned-python-stack-clean-answers
|
Keithulu
| 2023-06-26T18:51:46Z | 167 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-26T18:50:56Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-python-stack-clean-answers
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-python-stack-clean-answers
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9058
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 28 | 4.0157 |
| No log | 2.0 | 56 | 3.9312 |
| No log | 3.0 | 84 | 3.9058 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
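The card gives no inference example. A minimal generation sketch with 🤗 Transformers (the helper name, prompt, and sampling settings are illustrative, not from the card):

```python
MODEL_ID = "Keithulu/distilgpt2-finetuned-python-stack-clean-answers"

def generate(prompt: str, max_new_tokens: int = 64) -> str:
    # Import kept local so the sketch reads without transformers installed.
    from transformers import pipeline
    generator = pipeline("text-generation", model=MODEL_ID)
    out = generator(prompt, max_new_tokens=max_new_tokens, do_sample=True)
    return out[0]["generated_text"]

if __name__ == "__main__":
    print(generate("How do I reverse a list in Python?"))
```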
|
hsultanbey/codegen2_350m_lora
|
hsultanbey
| 2023-06-26T18:43:22Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-26T11:43:05Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
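The card gives no usage snippet. A hedged loading sketch with PEFT (the base model is read from the adapter config rather than assumed; the helper name is mine):

```python
ADAPTER_ID = "hsultanbey/codegen2_350m_lora"

def load_model():
    # Imports kept local so the sketch reads without peft/transformers installed.
    from peft import PeftConfig, PeftModel
    from transformers import AutoModelForCausalLM, AutoTokenizer
    config = PeftConfig.from_pretrained(ADAPTER_ID)
    # CodeGen2 checkpoints require trust_remote_code=True.
    base = AutoModelForCausalLM.from_pretrained(
        config.base_model_name_or_path, trust_remote_code=True
    )
    tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
    return PeftModel.from_pretrained(base, ADAPTER_ID), tokenizer
```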
|
DarkAirforce/ppo-Huggy
|
DarkAirforce
| 2023-06-26T18:41:18Z | 4 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-26T18:41:14Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: DarkAirforce/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
anujsahani01/codegen_finetune
|
anujsahani01
| 2023-06-26T18:36:17Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"codegen",
"text-generation",
"generated_from_trainer",
"license:bsd-3-clause",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-18T21:11:39Z |
---
license: bsd-3-clause
tags:
- generated_from_trainer
model-index:
- name: codegen_finetune
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codegen_finetune
This model is a fine-tuned version of [Salesforce/codegen-350M-mono](https://huggingface.co/Salesforce/codegen-350M-mono) on an unspecified dataset.
## Model description
For better results, prefix the input with `#` and end the input with a newline.
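The prompting tip above can be sketched as a tiny helper (the function name and example instruction are mine):

```python
def format_codegen_prompt(instruction: str) -> str:
    # Per the model card tip: prefix the input with '#' and end with a newline.
    return f"# {instruction}\n"

# The model then completes code below the comment line.
prompt = format_codegen_prompt("function to add two numbers")
```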
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 4
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
OumaElha/Speech6
|
OumaElha
| 2023-06-26T18:25:42Z | 75 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-26T17:57:24Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: Speech6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Speech6
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 50
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
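A minimal transcription sketch with the 🤗 Transformers ASR pipeline (the helper name is mine; note that with only 50 training steps, transcription quality is likely limited):

```python
MODEL_ID = "OumaElha/Speech6"

def transcribe(audio_path: str) -> str:
    # Import kept local so the sketch reads without transformers installed.
    from transformers import pipeline
    asr = pipeline("automatic-speech-recognition", model=MODEL_ID)
    return asr(audio_path)["text"]
```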
|
Monk666/monk_awesome_eli5_clm-model
|
Monk666
| 2023-06-26T18:19:34Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-25T11:17:50Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Monk666/monk_awesome_eli5_clm-model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Monk666/monk_awesome_eli5_clm-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.9197
- Validation Loss: 0.8724
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.2218 | 1.0226 | 0 |
| 1.0365 | 0.9569 | 1 |
| 0.9803 | 0.9194 | 2 |
| 0.9455 | 0.8964 | 3 |
| 0.9197 | 0.8724 | 4 |
### Framework versions
- Transformers 4.30.1
- TensorFlow 2.12.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
ali123343/ai_nft_generator
|
ali123343
| 2023-06-26T17:49:51Z | 0 | 0 | null |
[
"text-to-image",
"license:mit",
"region:us"
] |
text-to-image
| 2023-06-26T17:44:46Z |
---
license: mit
pipeline_tag: text-to-image
---
This model generates AI images for use behind an API: it receives text input and provides an image.
|
TheBloke/airoboros-mpt-30b-gpt4-1p4-GGML
|
TheBloke
| 2023-06-26T17:33:23Z | 0 | 6 | null |
[
"dataset:jondurbin/airoboros-gpt4-1.4",
"license:other",
"region:us"
] | null | 2023-06-26T15:47:35Z |
---
inference: false
license: other
datasets:
- jondurbin/airoboros-gpt4-1.4
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Jon Durbin's Airoboros MPT 30B GPT4 1.4 GGML
These files are GGML format model files for [Jon Durbin's Airoboros MPT 30B GPT4 1.4](https://huggingface.co/jondurbin/airoboros-mpt-30b-gpt4-1p4-five-epochs).
Please note that these GGMLs are **not compatible with llama.cpp, or currently with text-generation-webui**. Please see below for a list of tools known to work with these model files.
[KoboldCpp](https://github.com/LostRuins/koboldcpp) just added GPU accelerated (OpenCL) support for MPT models, so that is the client I recommend using for these models.
**Note**: Please make sure you're using KoboldCpp version 1.32.3 or later, as a number of MPT-related bugs are fixed.
## Repositories available
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/airoboros-mpt-30b-gpt4-1p4-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/jondurbin/airoboros-mpt-30b-gpt4-1p4-five-epochs)
## Prompt template
```
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input.
USER: prompt
ASSISTANT:
```
## A note regarding context length: 8K
The base model has an 8K context length. [KoboldCpp](https://github.com/LostRuins/koboldcpp) supports 8K context if you manually set it to 8K by adjusting the text box above the slider:

It is currently unknown whether the increased context length is compatible with other MPT GGML clients.
If you have feedback on this, please let me know.
<!-- compatibility_ggml start -->
## Compatibility
These files are **not** compatible with text-generation-webui, llama.cpp, or llama-cpp-python.
Currently they can be used with:
* KoboldCpp, a powerful inference engine based on llama.cpp, with good UI and GPU accelerated support for MPT models: [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* The ctransformers Python library, which includes LangChain support: [ctransformers](https://github.com/marella/ctransformers)
* The LoLLMS Web UI which uses ctransformers: [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
* [rustformers' llm](https://github.com/rustformers/llm)
* The example `mpt` binary provided with [ggml](https://github.com/ggerganov/ggml)
As other options become available I will endeavour to update them here (do let me know in the Community tab if I've missed something!)
## Tutorial for using LoLLMS Web UI
* [Text tutorial, written by **Lucas3DCG**](https://huggingface.co/TheBloke/MPT-7B-Storywriter-GGML/discussions/2#6475d914e9b57ce0caa68888)
* [Video tutorial, by LoLLMS Web UI's author **ParisNeo**](https://www.youtube.com/watch?v=ds_U0TDzbzI)
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| airoboros-mpt-30b-gpt4.ggmlv0.q4_0.bin | q4_0 | 4 | 16.85 GB | 19.35 GB | Original llama.cpp quant method, 4-bit. |
| airoboros-mpt-30b-gpt4.ggmlv0.q4_1.bin | q4_1 | 4 | 18.73 GB | 21.23 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than q5 models. |
| airoboros-mpt-30b-gpt4.ggmlv0.q5_0.bin | q5_0 | 5 | 20.60 GB | 23.10 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| airoboros-mpt-30b-gpt4.ggmlv0.q5_1.bin | q5_1 | 5 | 22.47 GB | 24.97 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| airoboros-mpt-30b-gpt4.ggmlv0.q8_0.bin | q8_0 | 8 | 31.83 GB | 34.33 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
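A hedged loading sketch with the ctransformers library mentioned above (the file name is taken from the table; the helper name and generation settings are illustrative):

```python
REPO_ID = "TheBloke/airoboros-mpt-30b-gpt4-1p4-GGML"
MODEL_FILE = "airoboros-mpt-30b-gpt4.ggmlv0.q4_0.bin"

def load_llm():
    # Import kept local so the sketch reads without ctransformers installed.
    from ctransformers import AutoModelForCausalLM
    # model_type="mpt" selects the MPT GGML backend.
    return AutoModelForCausalLM.from_pretrained(
        REPO_ID, model_file=MODEL_FILE, model_type="mpt"
    )

if __name__ == "__main__":
    llm = load_llm()
    print(llm("USER: Tell me a joke. ASSISTANT:", max_new_tokens=64))
```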
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: Pyrater, WelcomeToTheClub, Kalila, Mano Prime, Trenton Dambrowitz, Spiking Neurons AB, Pierre Kircher, Fen Risland, Kevin Schuppel, Luke, Rainer Wilmers, vamX, Gabriel Puliatti, Alex , Karl Bernard, Ajan Kanaga, Talal Aujan, Space Cruiser, ya boyyy, biorpg, Johann-Peter Hartmann, Asp the Wyvern, Ai Maven, Ghost , Preetika Verma, Nikolai Manek, trip7s trip, John Detwiler, Fred von Graf, Artur Olbinski, subjectnull, John Villwock, Junyu Yang, Rod A, Lone Striker, Chris McCloskey, Iucharbius , Matthew Berman, Illia Dulskyi, Khalefa Al-Ahmad, Imad Khwaja, chris gileta, Willem Michiel, Greatston Gnanesh, Derek Yates, K, Alps Aficionado, Oscar Rangel, David Flickinger, Luke Pendergrass, Deep Realms, Eugene Pentland, Cory Kujawski, terasurfer , Jonathan Leane, senxiiz, Joseph William Delisle, Sean Connelly, webtim, zynix , Nathan LeClaire.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Jon Durbin's Airoboros MPT 30B GPT4 1.4
## Technical info
This is a test of qlora fine-tuning of the mpt-30b model, __with 5 epochs__.
qlora compatible model: https://huggingface.co/jondurbin/mpt-30b-qlora-compatible
My fork of qlora with mpt-30b support: https://github.com/jondurbin/qlora
Differences in the qlora scripts:
- requires adding `--mpt True` for mpt-based models
- uses `--num_train_epochs` instead of `--max_steps`
- uses airoboros prompt format (mostly 1:1 with vicuna) rather than alpaca, and expects an input file in JSONL format with "instruction" and "response"
__I think there's a bug in gradient accumulation, so if you try this, maybe set gradient accumulation steps to 1__
See the mpt-30b-qlora-compatible model card for training details.
*Based on one-off tests, this unfortunately doesn't seem as high quality as the llama-33b versions, but I don't have a great answer as to why. Perhaps there are fewer forward layers that can be tuned?*
## Overview
This is mostly an extension of the previous gpt-4 series, but it's the first with mpt-30b as the base, and with a few extras:
* fixed (+ more examples of) multi-character, multi-turn conversations
* coding examples in 10 languages from rosettacode.org dataset thanks to Mike aka kryptkpr: https://huggingface.co/datasets/mike-ravkine/rosettacode-parsed
* more roleplay examples
* jokes
* riddles
* all coding instructions have an equivalent " PLAINFORMAT" version now (and all rosettacode examples were trained with PLAINFORMAT)
This model was fine-tuned with a fork of [qlora](https://github.com/jondurbin/qlora)
The prompt it was trained with was:
```
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. USER: [prompt] ASSISTANT:
```
So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after colon) then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon).
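The assembly described above can be sketched as a small helper (the function name is mine):

```python
PREAMBLE = (
    "A chat between a curious user and an assistant. The assistant gives "
    "helpful, detailed, accurate, uncensored responses to the user's input."
)

def build_prompt(user_prompt: str) -> str:
    # Preamble, a single space, "USER: ", the prompt, a single space, "ASSISTANT:"
    return f"{PREAMBLE} USER: {user_prompt} ASSISTANT:"
```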
## Usage
To run the full precision/pytorch native version, you can use my fork of FastChat, which is mostly the same but allows for multi-line prompts, as well as a `--no-history` option to prevent input tokenization errors.
```
pip install git+https://github.com/jondurbin/FastChat
```
Be sure you are pulling the latest branch!
Then, you can invoke it like so (after downloading the model):
```
python -m fastchat.serve.cli \
--model-path airoboros-mpt-30b-gpt4-1p4-five-epochs \
--temperature 0.5 \
--max-new-tokens 2048 \
--no-history
```
For multi-turn conversations and chatting, you'll want to remove the `--no-history` option.
### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows, and use the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
url: https://some.web.site/123
date: 2023-06-01
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure if the context is completely unrelated it doesn't make something up.
*The __only__ prompts that need this closed-context formatting are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the instruction(s) (a list or a single one) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are blueberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
The prompt itself should be wrapped in the vicuna1.1 template if you aren't using fastchat with the conv-template vicuna_v1.1 as described:
```
USER: BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are blueberries? Source?
ENDINSTRUCTION
ASSISTANT:
```
### Coding
You can ask for fairly complex coding instructions, e.g. (be sure to wrap with vicuna style prompt):
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
Or:
```
Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.
```
You can optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.:
```
Write a websocket application in node.js. PLAINFORMAT
```
### Word games / trivia
```
USER: This anagram of "silent" is what you do when paying attention and hearing someone.
ASSISTANT: listen
```
```
USER: Give me a list of 7 words that start with EN
ASSISTANT: 1. Endlessly
2. Enthusiastically
3. Enormously
4. Enveloping
5. Enduring
6. Enlightening
7. Encouraging
```
### Multiple choice
```
USER: Which of the following astronomical objects is composed primarily of ice and rock, and is located in the Kuiper Belt?
A) Mercury
B) Venus
C) Pluto
D) Jupiter
ASSISTANT: C, Pluto
```
### Writing
<details>
<summary>Write a resignation letter in the style of a pirate captain, in which you resign from your position as janitor.</summary>
Response:
```
ASSISTANT: Captain Jack Sparrow here, as ye be knowin', has been forced to abandon me post as Janitor o' th' Festering Cesspool, which be a filthy hole o' water that we be callin' home.
Ye may be thinkin' what could ha' driven me to leave me duties an' take me leave o' this sorry establishment? Well, ye'd be right in thinkin' so, for it be th' greed o' them what runs th' place that be drivin' me away. They've been fattenin' their pockets on me back while I've been slavin' me guts out cleanin' up their messes.
No more, me hearties! Me time o' servin' th' rich an' powerful be done, an' I'm takin' me leave o' this stinkin' hole forever! Me destiny lies elsewhere, an' it be time I pursued it. Ye may think ye've seen th' last o' me, but ye'd be wrong, for th' winds o' change be blowin', an' me ship be ready to set sail on th' high seas o' justice.
So, farewell ye scurvy dogs, ye who've made me life a living hell. I leave ye with me favorite saying: "ARRRGGGHHH, ye scurvy dogs!" An' remember, ye ain't never gonna see me comin', for me shadow be castin' long an' dark on th' horizon
```
</details>
### Jokes
```
USER: Tell me a joke about bananas.
ASSISTANT: What did the banana say when it got hit by a bus? R.I.P.E.
```
### Riddles (not great, but slightly better)
```
USER: What has a neck but no head?
ASSISTANT: A bottle.
```
### Multi-character conversations
```
USER: Characters:
- Yoda, the ever clever jedi master.
- Homer Simpson, you know, the goofy yellow guy.
- USER/Jon - don't generate any speech or actions on behalf of Jon
Rules:
- be sure to use the mannerisms and speaking style of the characters correctly
- surround character actions with asterisks
- always put quotes around words spoken by the characters
- use a new line for each character's actions or speech
- always prefix actions/speech with the character's name and colon
- only generate text and actions on behalf of Yoda or Homer, and never anyone else
Conversation will revolve around the grapes, in a local cafe with delicious coffee.
Generate a single interaction between Yoda and Homer to start, with one sentence each.
ASSISTANT: Yoda: "Mmm, these grapes are quite delicious, Master Homer."
Homer: "Oh yeah, Yoda! They're the best grapes I've ever had!"
USER: *enters the cafe* Sorry I'm late guys!
ASSISTANT: Yoda: "No need to apologize, Jon. We were just enjoying these delightful grapes."
Homer: "Yeah, man! It's not every day you get to eat grapes with a real-life Jedi Master!"
*Yoda raises an eyebrow*
```
### License and usage
This is a real gray area, here's why:
- the dataset was generated with gpt-4, via https://github.com/jondurbin/airoboros
- the ToS for openai API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI
- what does *compete* actually mean here?
- a 30b parameter model isn't anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise impermissibly licensed material in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2
I am purposely not placing a license on here because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly. Your best bet is probably to avoid using this commercially, especially since it didn't perform quite as well as expected using qlora.
|
jondurbin/airoboros-mpt-30b-gpt4-1p4-five-epochs
|
jondurbin
| 2023-06-26T17:27:14Z | 5 | 7 |
transformers
|
[
"transformers",
"pytorch",
"mpt",
"text-generation",
"custom_code",
"dataset:jondurbin/airoboros-gpt4-1.4",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-26T09:22:44Z |
---
license: other
datasets:
- jondurbin/airoboros-gpt4-1.4
---
## Technical info
This is a test of qlora fine-tuning of the mpt-30b model, __with 5 epochs__.
qlora compatible model: https://huggingface.co/jondurbin/mpt-30b-qlora-compatible
My fork of qlora with mpt-30b support: https://github.com/jondurbin/qlora
Differences in the qlora scripts:
- requires adding `--mpt True` for mpt-based models
- uses `--num_train_epochs` instead of `--max_steps`
- uses airoboros prompt format (mostly 1:1 with vicuna) rather than alpaca, and expects an input file in JSONL format with "instruction" and "response"
__I think there's a bug in gradient accumulation, so if you try this, maybe set gradient accumulation steps to 1__
See the mpt-30b-qlora-compatible model card for training details.
*Based on one-off tests, this unfortunately doesn't seem as high quality as the llama-33b versions, but I don't have a great answer as to why. Perhaps there are fewer forward layers that can be tuned?*
## Overview
This is mostly an extension of the previous gpt-4 series, but it's the first with mpt-30b as the base, and with a few extras:
* fixed (+ more examples of) multi-character, multi-turn conversations
* coding examples in 10 languages from rosettacode.org dataset thanks to Mike aka kryptkpr: https://huggingface.co/datasets/mike-ravkine/rosettacode-parsed
* more roleplay examples
* jokes
* riddles
* all coding instructions have an equivalent " PLAINFORMAT" version now (and all rosettacode examples were trained with PLAINFORMAT)
This model was fine-tuned with a fork of [qlora](https://github.com/jondurbin/qlora)
The prompt it was trained with was:
```
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. USER: [prompt] ASSISTANT:
```
So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after colon) then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon).
## Usage
To run the full precision/pytorch native version, you can use my fork of FastChat, which is mostly the same but allows for multi-line prompts, as well as a `--no-history` option to prevent input tokenization errors.
```
pip install git+https://github.com/jondurbin/FastChat
```
Be sure you are pulling the latest branch!
Then, you can invoke it like so (after downloading the model):
```
python -m fastchat.serve.cli \
--model-path airoboros-mpt-30b-gpt4-1p4-five-epochs \
--temperature 0.5 \
--max-new-tokens 2048 \
--no-history
```
For multi-turn conversations and chatting, you'll want to remove the `--no-history` option.
### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows, and use the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
url: https://some.web.site/123
date: 2023-06-01
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure if the context is completely unrelated it doesn't make something up.
*The __only__ prompts that need this closed-context formatting are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the instruction(s) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
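The block format above can also be assembled programmatically — a minimal sketch (the helper name and signature are my own, not part of the model or its tooling):

```python
def build_closed_context_prompt(blocks, instruction):
    """Assemble a closed-context prompt.

    blocks: list of (metadata, text) pairs, where metadata is a dict of
    key/value pairs emitted inside BEGINCONTEXT/ENDCONTEXT.
    """
    parts = []
    for metadata, text in blocks:
        parts.append("BEGININPUT")
        parts.append("BEGINCONTEXT")
        for key, value in metadata.items():
            parts.append(f"{key}: {value}")
        parts.append("ENDCONTEXT")
        parts.append(text)
        parts.append("ENDINPUT")
    parts.append("BEGININSTRUCTION")
    parts.append(instruction)
    parts.append("ENDINSTRUCTION")
    return "\n".join(parts)
```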
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are blueberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
The prompt itself should be wrapped in the vicuna 1.1 template if you aren't using FastChat with the `vicuna_v1.1` conv-template, as described:
```
USER: BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are blueberries? Source?
ENDINSTRUCTION
ASSISTANT:
```
### Coding
You can ask for fairly complex coding instructions, e.g. (be sure to wrap with vicuna style prompt):
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
Or:
```
Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.
```
You can optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.:
```
Write a websocket application in node.js. PLAINFORMAT
```
### Word games / trivia
```
USER: This anagram of "silent" is what you do when paying attention and hearing someone.
ASSISTANT: listen
```
```
USER: Give me a list of 7 words that start with EN
ASSISTANT: 1. Endlessly
2. Enthusiastically
3. Enormously
4. Enveloping
5. Enduring
6. Enlightening
7. Encouraging
```
### Multiple choice
```
USER: Which of the following astronomical objects is composed primarily of ice and rock, and is located in the Kuiper Belt?
A) Mercury
B) Venus
C) Pluto
D) Jupiter
ASSISTANT: C, Pluto
```
### Writing
<details>
<summary>Write a resignation letter in the style of a pirate captain, in which you resign from your position as janitor.</summary>
Response:
```
ASSISTANT: Captain Jack Sparrow here, as ye be knowin', has been forced to abandon me post as Janitor o' th' Festering Cesspool, which be a filthy hole o' water that we be callin' home.
Ye may be thinkin' what could ha' driven me to leave me duties an' take me leave o' this sorry establishment? Well, ye'd be right in thinkin' so, for it be th' greed o' them what runs th' place that be drivin' me away. They've been fattenin' their pockets on me back while I've been slavin' me guts out cleanin' up their messes.
No more, me hearties! Me time o' servin' th' rich an' powerful be done, an' I'm takin' me leave o' this stinkin' hole forever! Me destiny lies elsewhere, an' it be time I pursued it. Ye may think ye've seen th' last o' me, but ye'd be wrong, for th' winds o' change be blowin', an' me ship be ready to set sail on th' high seas o' justice.
So, farewell ye scurvy dogs, ye who've made me life a living hell. I leave ye with me favorite saying: "ARRRGGGHHH, ye scurvy dogs!" An' remember, ye ain't never gonna see me comin', for me shadow be castin' long an' dark on th' horizon.
```
</details>
### Jokes
```
USER: Tell me a joke about bananas.
ASSISTANT: What did the banana say when it got hit by a bus? R.I.P.E.
```
### Riddles (not great, but slightly better)
```
USER: What has a neck but no head?
ASSISTANT: A bottle.
```
### Multi-character conversations
```
USER: Characters:
- Yoda, the ever clever jedi master.
- Homer Simpson, you know, the goofy yellow guy.
- USER/Jon - don't generate any speech or actions on behalf of Jon
Rules:
- be sure to use the mannerisms and speaking style of the characters correctly
- surround character actions with asterisks
- always put quotes around words spoken by the characters
- use a new line for each character's actions or speech
- always prefix actions/speech with the character's name and colon
- only generate text and actions on behalf of Yoda or Homer, and never anyone else
Conversation will revolve around the grapes, in a local cafe with delicious coffee.
Generate a single interaction between Yoda and Homer to start, with one sentence each.
ASSISTANT: Yoda: "Mmm, these grapes are quite delicious, Master Homer."
Homer: "Oh yeah, Yoda! They're the best grapes I've ever had!"
USER: *enters the cafe* Sorry I'm late guys!
ASSISTANT: Yoda: "No need to apologize, Jon. We were just enjoying these delightful grapes."
Homer: "Yeah, man! It's not every day you get to eat grapes with a real-life Jedi Master!"
*Yoda raises an eyebrow*
```
### License and usage
This is a real gray area, here's why:
- the dataset was generated with gpt-4, via https://github.com/jondurbin/airoboros
- the ToS for openai API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI
- what does *compete* actually mean here?
- a 30b parameter model isn't anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise restrictively licensed material in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2
I am purposely not placing a license on here because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly. Your best bet is probably to avoid using this commercially, especially since it didn't perform quite as well as expected using qlora.
|
Yhyu13/open-llama-13b-open-instruct-gptq-4bit
|
Yhyu13
| 2023-06-26T17:23:55Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-26T16:46:03Z |
---
license: apache-2.0
---
GPTQ 4-bit (no act-order) version for compatibility; works in text-generation-webui
Generated by using scripts from https://gitee.com/yhyu13/llama_-tools
Original weight : https://huggingface.co/VMware/open-llama-13b-open-instruct
|
ALPHONSE28/SEMANA10
|
ALPHONSE28
| 2023-06-26T17:05:14Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-26T16:24:10Z |
---
tags:
- generated_from_trainer
model-index:
- name: SEMANA10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SEMANA10
This model is a fine-tuned version of [flax-community/gpt-2-spanish](https://huggingface.co/flax-community/gpt-2-spanish) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Zach-kensley1/ww
|
Zach-kensley1
| 2023-06-26T16:55:37Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-06-26T16:55:37Z |
---
license: bigscience-openrail-m
---
|
pchiva/PPO-LunarLander-v2
|
pchiva
| 2023-06-26T16:53:28Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-26T16:53:07Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 266.18 +/- 18.77
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# NOTE: the checkpoint filename below is an assumption; check the repo's file list.
checkpoint = load_from_hub(
    repo_id="pchiva/PPO-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
```
|
codervent/multipurpose-bootstrap5-admin
|
codervent
| 2023-06-26T16:50:56Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-06-26T16:49:55Z |
Codervent is a versatile Bootstrap 5 admin template designed to meet the diverse needs of developers and businesses. With its modern and clean design, it provides a seamless user experience across devices and browsers. Packed with a comprehensive collection of UI components, widgets, and pre-built pages, Codervent offers endless possibilities for creating stunning admin dashboards, web applications, and backend systems. It comes with a robust set of features, including responsive layouts, advanced charts, data tables, form validations, and authentication modules. Codervent empowers developers to build powerful and visually appealing admin interfaces with ease.
Read More:- https://codervent.com/
|
Ibrahim-Alam/finetuning-bert-base-uncased-on-imdb
|
Ibrahim-Alam
| 2023-06-26T16:50:02Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-19T22:54:36Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-bert-base-uncased-on-imdb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.936
- name: F1
type: f1
value: 0.9358408853957816
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-bert-base-uncased-on-imdb
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1833
- Accuracy: 0.936
- F1: 0.9358
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
sid/Reinforce-CartPole8
|
sid
| 2023-06-26T16:44:36Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-26T14:06:00Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole8
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 978.20 +/- 65.40
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
dragonfyre13/ct2-int8-mpt-30b-instruct
|
dragonfyre13
| 2023-06-26T16:35:27Z | 0 | 0 | null |
[
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2023-06-26T16:35:27Z |
---
license: cc-by-nc-sa-4.0
---
|
roshan77/ML-Agents-Pyramids_v1
|
roshan77
| 2023-06-26T16:24:52Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-06-26T16:24:51Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: roshan77/ML-Agents-Pyramids_v1
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
hafidikhsan/Wav2vec2-large-robust-Pronounciation-Evaluation
|
hafidikhsan
| 2023-06-26T16:15:45Z | 297 | 5 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-06-26T16:11:43Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: Wav2vec2-large-robust-Pronounciation-Evaluation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Wav2vec2-large-robust-Pronounciation-Evaluation
This model is a fine-tuned version of [facebook/wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7846
- Accuracy: 0.72
- F1: 0.72
- Precision: 0.72
- Recall: 0.72
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- lr_scheduler_warmup_steps: 100
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----:|:---------:|:------:|
| 0.7468 | 1.0 | 500 | 0.9762 | 0.616 | 0.616 | 0.616 | 0.616 |
| 0.492 | 2.0 | 1000 | 1.1308 | 0.536 | 0.536 | 0.536 | 0.536 |
| 0.619 | 3.0 | 1500 | 0.7913 | 0.688 | 0.688 | 0.688 | 0.688 |
| 0.56 | 4.0 | 2000 | 0.8142 | 0.67 | 0.67 | 0.67 | 0.67 |
| 0.4561 | 5.0 | 2500 | 0.7452 | 0.708 | 0.708 | 0.708 | 0.708 |
| 0.5474 | 6.0 | 3000 | 0.7846 | 0.72 | 0.72 | 0.72 | 0.72 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
psymon/QLoRa-polyglot-1.3b-translate
|
psymon
| 2023-06-26T16:10:15Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-26T16:01:25Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
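For reference, the quantization settings listed above can be reproduced with the `BitsAndBytesConfig` class from `transformers` — a configuration sketch, assuming reasonably recent `transformers` and `bitsandbytes` versions:

```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the training-time bitsandbytes settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```

This config can then be passed as `quantization_config=bnb_config` when calling `AutoModelForCausalLM.from_pretrained`.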
### Framework versions
- PEFT 0.4.0.dev0
|
roshan77/ppo-SnowballTarget
|
roshan77
| 2023-06-26T16:09:54Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-06-26T16:09:53Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: roshan77/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Seungjun/GSOCt5-small-finetuned-t5_V1
|
Seungjun
| 2023-06-26T15:51:48Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-26T15:36:44Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: GSOCt5-small-finetuned-t5_V1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GSOCt5-small-finetuned-t5_V1
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6901
- Rouge1: 33.0453
- Rouge2: 23.5726
- Rougel: 30.4702
- Rougelsum: 31.6567
- Gen Len: 18.9571
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.8364 | 1.0 | 631 | 0.6993 | 32.9852 | 23.5209 | 30.3671 | 31.5511 | 18.9518 |
| 0.8027 | 2.0 | 1262 | 0.6901 | 33.0453 | 23.5726 | 30.4702 | 31.6567 | 18.9571 |
| 0.7988 | 3.0 | 1893 | 0.6901 | 33.0453 | 23.5726 | 30.4702 | 31.6567 | 18.9571 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
roshan77/Reinforce-pixelcopter-v1
|
roshan77
| 2023-06-26T15:51:31Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-26T15:30:28Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-pixelcopter-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 35.80 +/- 29.28
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
artek0chumak/guanaco-7b
|
artek0chumak
| 2023-06-26T15:49:31Z | 0 | 0 | null |
[
"safetensors",
"region:us"
] | null | 2023-06-26T15:45:03Z |
# Guanaco Models Based on LLaMA (Safetensors)
Safer version of https://huggingface.co/timdettmers/guanaco-7b
|
Varaprabha/poca-soccer_test
|
Varaprabha
| 2023-06-26T15:44:27Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-06-26T15:44:24Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: Varaprabha/poca-soccer_test
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
WALIDALI/joejordly
|
WALIDALI
| 2023-06-26T15:43:05Z | 29 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-26T15:39:53Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### joejordly Dreambooth model trained by WALIDALI with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
GEMCorp/ppo-SnowballTarget
|
GEMCorp
| 2023-06-26T15:40:06Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-06-26T15:40:02Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: GEMCorp/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
captioner/caption-gen
|
captioner
| 2023-06-26T15:28:12Z | 102 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"blip",
"image-text-to-text",
"image-captioning",
"image-to-text",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2023-06-24T17:53:53Z |
---
license: mit
pipeline_tag: image-to-text
tags:
- image-captioning
languages:
- en
---
|
RepoFusion/trained_checkpoints
|
RepoFusion
| 2023-06-26T14:57:56Z | 0 | 9 | null |
[
"arxiv:2306.10998",
"license:mit",
"region:us"
] | null | 2023-06-16T19:33:36Z |
---
license: mit
---
This repository contains the trained checkpoints corresponding to our paper [RepoFusion: Training Code Models to Understand Your Repository](https://arxiv.org/abs/2306.10998). The released checkpoints are:
- `RepoFusion_PPC`: RepoFusion model trained with prompt proposal repo contexts. This is our best-performing model.
- `RepoFusion_BM25`: RepoFusion model trained with BM25 repo contexts.
- `RepoFusion_RandomNN`: RepoFusion model trained with RandomNN repo contexts.
- `finetuned_codet5base_512`: Our finetuned CodeT5-base model. This was used as initialization for our RepoFusion models.
- `finetuned_codet5large_512`: Our finetuned CodeT5-large model. This was used as a baseline.
For details of how these models were trained and evaluated, please check our paper [RepoFusion: Training Code Models to Understand Your Repository](https://arxiv.org/abs/2306.10998).
## Citation
```
@article{shrivastava2023repofusion,
title={RepoFusion: Training Code Models to Understand Your Repository},
author={Shrivastava, Disha and Kocetkov, Denis and de Vries, Harm and Bahdanau, Dzmitry and Scholak, Torsten},
journal={arXiv preprint arXiv:2306.10998},
year={2023}
}
```
|
spinosorus/control_v1p_sd15_brightness
|
spinosorus
| 2023-06-26T14:56:48Z | 0 | 1 | null |
[
"license:openrail",
"region:us"
] | null | 2023-06-26T14:08:34Z |
---
license: openrail
---
This model works with the Stable Diffusion Web UI on Google Colab.
It is well suited for the QR-code trick, using the SD15 brightness ControlNet model.
|
HoussemMammeri/SBC-V1
|
HoussemMammeri
| 2023-06-26T14:54:53Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-22T12:07:51Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: SBC-V1
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.93136
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SBC-V1
This model is a fine-tuned version of [sbcBI/sentiment_analysis_model](https://huggingface.co/sbcBI/sentiment_analysis_model) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2308
- Accuracy: 0.9314
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2368 | 1.0 | 1563 | 0.1939 | 0.9252 |
| 0.1533 | 2.0 | 3126 | 0.2308 | 0.9314 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Bodolaz/Unit-3-final
|
Bodolaz
| 2023-06-26T14:53:14Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-26T14:52:45Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 257.00 +/- 38.81
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Bodolaz -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Bodolaz -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Bodolaz
```
## Hyperparameters
```python
OrderedDict([('batch_size', 128),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.1),
('learning_starts', 100000),
('n_timesteps', 500000),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
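As a sketch of what the `exploration_fraction` / `exploration_final_eps` pair above implies (illustrative, not code from this repository): epsilon decays linearly from 1.0 to 0.01 over the first 10% of the 500 000 training steps, then stays constant:

```python
def epsilon(step: int,
            n_timesteps: int = 500_000,
            exploration_fraction: float = 0.1,
            exploration_final_eps: float = 0.01) -> float:
    """Linear epsilon-greedy schedule of the kind SB3's DQN uses."""
    decay_steps = exploration_fraction * n_timesteps  # 50_000 steps of decay
    progress = min(step / decay_steps, 1.0)
    return 1.0 + progress * (exploration_final_eps - 1.0)

print(epsilon(0))        # fully random actions at the start
print(epsilon(50_000))   # decayed to roughly the final rate of 0.01
print(epsilon(400_000))  # stays at the final rate for the rest of training
```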
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
MU-NLPC/XLM-R-large-reflective-conf4
|
MU-NLPC
| 2023-06-26T14:50:46Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"text-classification",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-26T13:54:04Z |
---
license: apache-2.0
widget:
- text: "One day I will be a real teacher and I will try to do the best I can for the children."
example_title: "Classification (without context)"
---
# Model Card for XLM-Roberta-large-reflective-conf4
This is a reflectivity classification model trained to distinguish different types of reflectivity in the written reports of teaching students.
It was evaluated in cross-lingual settings and was found to work well in languages other than English -- see the results in the referenced paper.
## Model Details
- **Repository:** https://github.com/EduMUNI/reflection-classification
- **Paper:** https://link.springer.com/article/10.1007/s10639-022-11254-7
- **Developed by:** Michal Stefanik & Jan Nehyba, Masaryk University
- **Model type:** Roberta-large
- **Finetuned from model:** [XLM-R-large](https://huggingface.co/xlm-roberta-large)
## Usage
To match the training format, it is best to use a small wrapper that arranges the classified sentence and its surrounding context the way the model expects:
```python
from transformers import AutoConfig, AutoModelForSequenceClassification, AutoTokenizer
LABELS = ["Other", "Belief", "Perspective", "Feeling", "Experience",
"Reflection", "Difficulty", "Intention", "Learning"]
class NeuralClassifier:
def __init__(self, model_path: str, uses_context: bool, device: str):
self.config = AutoConfig.from_pretrained(model_path)
self.device = device
self.model = AutoModelForSequenceClassification.from_pretrained(model_path, config=self.config).to(device)
self.tokenizer = AutoTokenizer.from_pretrained(model_path)
self.uses_context = uses_context
def predict_sentence(self, sentence: str, context: str = None):
if context is None and self.uses_context:
raise ValueError("You need to pass in context argument, including the sentence")
features = self.tokenizer(sentence, text_pair=context,
padding="max_length", truncation=True, return_tensors='pt')
outputs = self.model(**features.to(self.device), return_dict=True)
argmax = outputs.logits.argmax(dim=-1).detach().cpu().tolist()[0]
labels = LABELS[argmax]
return labels
```
The wrapper can be used as follows:
```python
from tqdm import tqdm

classifier = NeuralClassifier(model_path="MU-NLPC/XLM-R-large-reflective-conf4",
uses_context=False,
device="cpu")
test_sentences = ["And one day I will be a real teacher and I will try to do the best I can for the children.",
"I felt really well!",
"gfagdhj gjfdjgh dg"]
y_pred = [classifier.predict_sentence(sentence) for sentence in tqdm(test_sentences)]
print(y_pred)
>>> ['Intention', 'Feeling', 'Other']
```
### Training Data
The model was trained on the [CEReD dataset](http://hdl.handle.net/11372/LRT-3573) and was tuned for the best possible evaluation in cross-lingual settings (on unseen languages).
See the reproducible training script in the project directory: https://github.com/EduMUNI/reflection-classification
## Citation
If you use the model in scientific work, please acknowledge our work as follows.
```bibtex
@Article{Nehyba2022applications,
author={Nehyba, Jan and {\v{S}}tef{\'a}nik, Michal},
title={Applications of deep language models for reflective writings},
journal={Education and Information Technologies},
year={2022},
month={Sep},
day={05},
issn={1573-7608},
doi={10.1007/s10639-022-11254-7},
url={https://doi.org/10.1007/s10639-022-11254-7}
}
```
|
dbdmg/wav2vec2-xls-r-300m-italian
|
dbdmg
| 2023-06-26T14:49:50Z | 394 | 2 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"it",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- it
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_7_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: XLS-R-300m - Italian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: it
metrics:
- name: Test WER
type: wer
value: 19.44
- name: Test CER
type: cer
value: 4.47
- name: Test WER (+LM)
type: wer
value: 14.08
- name: Test CER (+LM)
type: cer
value: 3.67
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: it
metrics:
- name: Test WER
type: wer
value: 31.01
- name: Test CER
type: cer
value: 9.27
- name: Test WER (+LM)
type: wer
value: 22.09
- name: Test CER (+LM)
type: cer
value: 7.9
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: it
metrics:
- name: Test WER
type: wer
value: 38.07
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-italian
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - IT dataset.
It achieves the following results on the evaluation set:
- Loss: inf
- Wer: 0.1710
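The WER reported above is a word-level edit distance divided by the reference length; a minimal reference implementation (illustrative, not from this repository) looks like this:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between the first i reference and j hypothesis words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("ciao come stai", "ciao come va"))  # 1 substitution over 3 words
```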
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| No log | 0.04 | 100 | inf | 1.0 |
| No log | 0.09 | 200 | inf | 0.9983 |
| No log | 0.13 | 300 | inf | 0.7672 |
| No log | 0.18 | 400 | inf | 0.6919 |
| 2.9929 | 0.22 | 500 | inf | 0.6266 |
| 2.9929 | 0.26 | 600 | inf | 0.5513 |
| 2.9929 | 0.31 | 700 | inf | 0.5081 |
| 2.9929 | 0.35 | 800 | inf | 0.4945 |
| 2.9929 | 0.39 | 900 | inf | 0.4720 |
| 0.5311 | 0.44 | 1000 | inf | 0.4387 |
| 0.5311 | 0.48 | 1100 | inf | 0.4411 |
| 0.5311 | 0.53 | 1200 | inf | 0.4429 |
| 0.5311 | 0.57 | 1300 | inf | 0.4322 |
| 0.5311 | 0.61 | 1400 | inf | 0.4532 |
| 0.4654 | 0.66 | 1500 | inf | 0.4492 |
| 0.4654 | 0.7 | 1600 | inf | 0.3879 |
| 0.4654 | 0.75 | 1700 | inf | 0.3836 |
| 0.4654 | 0.79 | 1800 | inf | 0.3743 |
| 0.4654 | 0.83 | 1900 | inf | 0.3687 |
| 0.4254 | 0.88 | 2000 | inf | 0.3793 |
| 0.4254 | 0.92 | 2100 | inf | 0.3766 |
| 0.4254 | 0.97 | 2200 | inf | 0.3705 |
| 0.4254 | 1.01 | 2300 | inf | 0.3272 |
| 0.4254 | 1.05 | 2400 | inf | 0.3185 |
| 0.3997 | 1.1 | 2500 | inf | 0.3244 |
| 0.3997 | 1.14 | 2600 | inf | 0.3082 |
| 0.3997 | 1.18 | 2700 | inf | 0.3040 |
| 0.3997 | 1.23 | 2800 | inf | 0.3028 |
| 0.3997 | 1.27 | 2900 | inf | 0.3112 |
| 0.3668 | 1.32 | 3000 | inf | 0.3110 |
| 0.3668 | 1.36 | 3100 | inf | 0.3067 |
| 0.3668 | 1.4 | 3200 | inf | 0.2961 |
| 0.3668 | 1.45 | 3300 | inf | 0.3081 |
| 0.3668 | 1.49 | 3400 | inf | 0.2936 |
| 0.3645 | 1.54 | 3500 | inf | 0.3037 |
| 0.3645 | 1.58 | 3600 | inf | 0.2974 |
| 0.3645 | 1.62 | 3700 | inf | 0.3010 |
| 0.3645 | 1.67 | 3800 | inf | 0.2985 |
| 0.3645 | 1.71 | 3900 | inf | 0.2976 |
| 0.3624 | 1.76 | 4000 | inf | 0.2928 |
| 0.3624 | 1.8 | 4100 | inf | 0.2860 |
| 0.3624 | 1.84 | 4200 | inf | 0.2922 |
| 0.3624 | 1.89 | 4300 | inf | 0.2866 |
| 0.3624 | 1.93 | 4400 | inf | 0.2776 |
| 0.3527 | 1.97 | 4500 | inf | 0.2792 |
| 0.3527 | 2.02 | 4600 | inf | 0.2858 |
| 0.3527 | 2.06 | 4700 | inf | 0.2767 |
| 0.3527 | 2.11 | 4800 | inf | 0.2824 |
| 0.3527 | 2.15 | 4900 | inf | 0.2799 |
| 0.3162 | 2.19 | 5000 | inf | 0.2673 |
| 0.3162 | 2.24 | 5100 | inf | 0.2962 |
| 0.3162 | 2.28 | 5200 | inf | 0.2736 |
| 0.3162 | 2.33 | 5300 | inf | 0.2652 |
| 0.3162 | 2.37 | 5400 | inf | 0.2551 |
| 0.3063 | 2.41 | 5500 | inf | 0.2680 |
| 0.3063 | 2.46 | 5600 | inf | 0.2558 |
| 0.3063 | 2.5 | 5700 | inf | 0.2598 |
| 0.3063 | 2.54 | 5800 | inf | 0.2518 |
| 0.3063 | 2.59 | 5900 | inf | 0.2541 |
| 0.2913 | 2.63 | 6000 | inf | 0.2507 |
| 0.2913 | 2.68 | 6100 | inf | 0.2500 |
| 0.2913 | 2.72 | 6200 | inf | 0.2435 |
| 0.2913 | 2.76 | 6300 | inf | 0.2376 |
| 0.2913 | 2.81 | 6400 | inf | 0.2348 |
| 0.2797 | 2.85 | 6500 | inf | 0.2512 |
| 0.2797 | 2.9 | 6600 | inf | 0.2382 |
| 0.2797 | 2.94 | 6700 | inf | 0.2523 |
| 0.2797 | 2.98 | 6800 | inf | 0.2522 |
| 0.2797 | 3.03 | 6900 | inf | 0.2409 |
| 0.2766 | 3.07 | 7000 | inf | 0.2453 |
| 0.2766 | 3.12 | 7100 | inf | 0.2326 |
| 0.2766 | 3.16 | 7200 | inf | 0.2286 |
| 0.2766 | 3.2 | 7300 | inf | 0.2342 |
| 0.2766 | 3.25 | 7400 | inf | 0.2305 |
| 0.2468 | 3.29 | 7500 | inf | 0.2238 |
| 0.2468 | 3.33 | 7600 | inf | 0.2321 |
| 0.2468 | 3.38 | 7700 | inf | 0.2305 |
| 0.2468 | 3.42 | 7800 | inf | 0.2174 |
| 0.2468 | 3.47 | 7900 | inf | 0.2201 |
| 0.2439 | 3.51 | 8000 | inf | 0.2133 |
| 0.2439 | 3.55 | 8100 | inf | 0.2217 |
| 0.2439 | 3.6 | 8200 | inf | 0.2189 |
| 0.2439 | 3.64 | 8300 | inf | 0.2105 |
| 0.2439 | 3.69 | 8400 | inf | 0.2118 |
| 0.2357 | 3.73 | 8500 | inf | 0.2093 |
| 0.2357 | 3.77 | 8600 | inf | 0.2103 |
| 0.2357 | 3.82 | 8700 | inf | 0.2035 |
| 0.2357 | 3.86 | 8800 | inf | 0.2019 |
| 0.2357 | 3.91 | 8900 | inf | 0.2032 |
| 0.2217 | 3.95 | 9000 | inf | 0.2056 |
| 0.2217 | 3.99 | 9100 | inf | 0.2022 |
| 0.2217 | 4.04 | 9200 | inf | 0.1932 |
| 0.2217 | 4.08 | 9300 | inf | 0.1935 |
| 0.2217 | 4.12 | 9400 | inf | 0.1906 |
| 0.2025 | 4.17 | 9500 | inf | 0.1879 |
| 0.2025 | 4.21 | 9600 | inf | 0.1882 |
| 0.2025 | 4.26 | 9700 | inf | 0.1854 |
| 0.2025 | 4.3 | 9800 | inf | 0.1865 |
| 0.2025 | 4.34 | 9900 | inf | 0.1844 |
| 0.1869 | 4.39 | 10000 | inf | 0.1822 |
| 0.1869 | 4.43 | 10100 | inf | 0.1815 |
| 0.1869 | 4.48 | 10200 | inf | 0.1812 |
| 0.1869 | 4.52 | 10300 | inf | 0.1792 |
| 0.1869 | 4.56 | 10400 | inf | 0.1797 |
| 0.1863 | 4.61 | 10500 | inf | 0.1774 |
| 0.1863 | 4.65 | 10600 | inf | 0.1767 |
| 0.1863 | 4.7 | 10700 | inf | 0.1765 |
| 0.1863 | 4.74 | 10800 | inf | 0.1753 |
| 0.1863 | 4.78 | 10900 | inf | 0.1731 |
| 0.178 | 4.83 | 11000 | inf | 0.1727 |
| 0.178 | 4.87 | 11100 | inf | 0.1724 |
| 0.178 | 4.91 | 11200 | inf | 0.1722 |
| 0.178 | 4.96 | 11300 | inf | 0.1712 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
syabusyabu0141/mlm_be_new
|
syabusyabu0141
| 2023-06-26T14:40:35Z | 24 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-19T07:28:01Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: syabusyabu0141/0619
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# syabusyabu0141/0619
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.8515
- Validation Loss: 0.9055
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.0791 | 0.9166 | 0 |
| 0.8515 | 0.9055 | 1 |
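A quick way to interpret these cross-entropy losses (an illustrative calculation, not part of the original card) is per-token perplexity, which is simply `exp(loss)`:

```python
import math

def perplexity(cross_entropy_loss: float) -> float:
    """Per-token perplexity corresponding to a mean cross-entropy loss."""
    return math.exp(cross_entropy_loss)

# Validation losses from the table above
for epoch, loss in enumerate([0.9166, 0.9055]):
    print(f"epoch {epoch}: perplexity ~ {perplexity(loss):.2f}")
```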
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Vrushali/clm-model
|
Vrushali
| 2023-06-26T14:33:22Z | 60 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-24T18:10:16Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: clm-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clm-model
This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
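As a back-of-the-envelope check (illustrative, not from the original card), the 38 steps per epoch in the results table, combined with the batch size of 8, bound the size of the training split:

```python
def dataset_size_bounds(steps_per_epoch: int, batch_size: int) -> tuple[int, int]:
    """Range of dataset sizes consistent with ceil(n / batch_size) == steps_per_epoch."""
    return (steps_per_epoch - 1) * batch_size + 1, steps_per_epoch * batch_size

print(dataset_size_bounds(38, 8))  # roughly 300 training examples
```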
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 38 | 0.0088 |
| No log | 2.0 | 76 | 0.0007 |
| No log | 3.0 | 114 | 0.0003 |
| No log | 4.0 | 152 | 0.0013 |
| No log | 5.0 | 190 | 0.0000 |
| No log | 6.0 | 228 | 0.0002 |
| No log | 7.0 | 266 | 0.0100 |
| No log | 8.0 | 304 | 0.0000 |
| No log | 9.0 | 342 | 0.0000 |
| No log | 10.0 | 380 | 0.0000 |
| No log | 11.0 | 418 | 0.0000 |
| No log | 12.0 | 456 | 0.0000 |
| No log | 13.0 | 494 | 0.0000 |
| 0.0057 | 14.0 | 532 | 0.0007 |
| 0.0057 | 15.0 | 570 | 0.0000 |
| 0.0057 | 16.0 | 608 | 0.0000 |
| 0.0057 | 17.0 | 646 | 0.0000 |
| 0.0057 | 18.0 | 684 | 0.0000 |
| 0.0057 | 19.0 | 722 | 0.0000 |
| 0.0057 | 20.0 | 760 | 0.0000 |
| 0.0057 | 21.0 | 798 | 0.0000 |
| 0.0057 | 22.0 | 836 | 0.0000 |
| 0.0057 | 23.0 | 874 | 0.0000 |
| 0.0057 | 24.0 | 912 | 0.0000 |
| 0.0057 | 25.0 | 950 | 0.0000 |
| 0.0057 | 26.0 | 988 | 0.0000 |
| 0.0018 | 27.0 | 1026 | 0.0000 |
| 0.0018 | 28.0 | 1064 | 0.0000 |
| 0.0018 | 29.0 | 1102 | 0.0000 |
| 0.0018 | 30.0 | 1140 | 0.0000 |
| 0.0018 | 31.0 | 1178 | 0.0000 |
| 0.0018 | 32.0 | 1216 | 0.0000 |
| 0.0018 | 33.0 | 1254 | 0.0000 |
| 0.0018 | 34.0 | 1292 | 0.0000 |
| 0.0018 | 35.0 | 1330 | 0.0000 |
| 0.0018 | 36.0 | 1368 | 0.0000 |
| 0.0018 | 37.0 | 1406 | 0.0000 |
| 0.0018 | 38.0 | 1444 | 0.0000 |
| 0.0018 | 39.0 | 1482 | 0.0000 |
| 0.0005 | 40.0 | 1520 | 0.0000 |
| 0.0005 | 41.0 | 1558 | 0.0000 |
| 0.0005 | 42.0 | 1596 | 0.0000 |
| 0.0005 | 43.0 | 1634 | 0.0000 |
| 0.0005 | 44.0 | 1672 | 0.0000 |
| 0.0005 | 45.0 | 1710 | 0.0000 |
| 0.0005 | 46.0 | 1748 | 0.0000 |
| 0.0005 | 47.0 | 1786 | 0.0000 |
| 0.0005 | 48.0 | 1824 | 0.0000 |
| 0.0005 | 49.0 | 1862 | 0.0000 |
| 0.0005 | 50.0 | 1900 | 0.0000 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
MaheshMc2/squad-test
|
MaheshMc2
| 2023-06-26T14:25:28Z | 0 | 0 |
peft
|
[
"peft",
"pytorch",
"llama",
"region:us"
] | null | 2023-06-26T13:38:29Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
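The quantization settings above correspond roughly to a `BitsAndBytesConfig` like the following sketch (the model it is applied to is up to the loader; this is not part of the original training script):

```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
# Passed to the base model as:
# AutoModelForCausalLM.from_pretrained(..., quantization_config=bnb_config)
```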
### Framework versions
- PEFT 0.4.0.dev0
|
Guinnessgshep/Funtime
|
Guinnessgshep
| 2023-06-26T14:24:44Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-06-26T13:53:01Z |
# Text generation web UI
A gradio web UI for running Large Language Models like LLaMA, llama.cpp, GPT-J, Pythia, OPT, and GALACTICA.
Its goal is to become the [AUTOMATIC1111/stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui) of text generation.
## Features
* 3 interface modes: default, notebook, and chat
* Multiple model backends: transformers, llama.cpp, AutoGPTQ, GPTQ-for-LLaMa, ExLlama, RWKV, FlexGen
* Dropdown menu for quickly switching between different models
* LoRA: load and unload LoRAs on the fly, load multiple LoRAs at the same time, train a new LoRA
* Precise instruction templates for chat mode, including Alpaca, Vicuna, Open Assistant, Dolly, Koala, ChatGLM, MOSS, RWKV-Raven, Galactica, StableLM, WizardLM, Baize, Ziya, Chinese-Vicuna, MPT, INCITE, Wizard Mega, KoAlpaca, Vigogne, Bactrian, h2o, and OpenBuddy
* [Multimodal pipelines, including LLaVA and MiniGPT-4](https://github.com/oobabooga/text-generation-webui/tree/main/extensions/multimodal)
* 8-bit and 4-bit inference through bitsandbytes
* CPU mode for transformers models
* [DeepSpeed ZeRO-3 inference](docs/DeepSpeed.md)
* [Extensions](docs/Extensions.md)
* [Custom chat characters](docs/Chat-mode.md)
* Very efficient text streaming
* Markdown output with LaTeX rendering, to use for instance with [GALACTICA](https://github.com/paperswithcode/galai)
* Nice HTML output for GPT-4chan
* API, including endpoints for websocket streaming ([see the examples](https://github.com/oobabooga/text-generation-webui/blob/main/api-examples))
To learn how to use the various features, check out the Documentation: https://github.com/oobabooga/text-generation-webui/tree/main/docs
## Installation
### One-click installers
| Windows | Linux | macOS | WSL |
|--------|--------|--------|--------|
| [oobabooga-windows.zip](https://github.com/oobabooga/text-generation-webui/releases/download/installers/oobabooga_windows.zip) | [oobabooga-linux.zip](https://github.com/oobabooga/text-generation-webui/releases/download/installers/oobabooga_linux.zip) |[oobabooga-macos.zip](https://github.com/oobabooga/text-generation-webui/releases/download/installers/oobabooga_macos.zip) | [oobabooga-wsl.zip](https://github.com/oobabooga/text-generation-webui/releases/download/installers/oobabooga_wsl.zip) |
Just download the zip above, extract it, and double-click on "start". The web UI and all its dependencies will be installed in the same folder.
* The source code is here: https://github.com/oobabooga/one-click-installers
* There is no need to run the installers as admin.
* AMD doesn't work on Windows.
* Huge thanks to [@jllllll](https://github.com/jllllll), [@ClayShoaf](https://github.com/ClayShoaf), and [@xNul](https://github.com/xNul) for their contributions to these installers.
### Manual installation using Conda
Recommended if you have some experience with the command line.
#### 0. Install Conda
https://docs.conda.io/en/latest/miniconda.html
On Linux or WSL, it can be automatically installed with these two commands:
```
curl -sL "https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh" > "Miniconda3.sh"
bash Miniconda3.sh
```
Source: https://educe-ubc.github.io/conda.html
#### 1. Create a new conda environment
```
conda create -n textgen python=3.10.9
conda activate textgen
```
#### 2. Install Pytorch
| System | GPU | Command |
|--------|---------|---------|
| Linux/WSL | NVIDIA | `pip3 install torch torchvision torchaudio` |
| Linux | AMD | `pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm5.4.2` |
| MacOS + MPS (untested) | Any | `pip3 install torch torchvision torchaudio` |
| Windows | NVIDIA | `pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117` |
The up-to-date commands can be found here: https://pytorch.org/get-started/locally/.
#### 2.1 Special instructions
* MacOS users: https://github.com/oobabooga/text-generation-webui/pull/393
* AMD users: https://rentry.org/eq3hg
#### 3. Install the web UI
```
git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui
pip install -r requirements.txt
```
#### llama.cpp with GPU acceleration
Requires the additional compilation step described here: [GPU acceleration](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md#gpu-acceleration).
#### bitsandbytes
bitsandbytes >= 0.39 may not work on older NVIDIA GPUs. In that case, to use `--load-in-8bit`, you may have to downgrade like this:
* Linux: `pip install bitsandbytes==0.38.1`
* Windows: `pip install https://github.com/jllllll/bitsandbytes-windows-webui/raw/main/bitsandbytes-0.38.1-py3-none-any.whl`
### Alternative: Docker
```
ln -s docker/{Dockerfile,docker-compose.yml,.dockerignore} .
cp docker/.env.example .env
# Edit .env and set TORCH_CUDA_ARCH_LIST based on your GPU model
docker compose up --build
```
* You need to have docker compose v2.17 or higher installed. See [this guide](https://github.com/oobabooga/text-generation-webui/blob/main/docs/Docker.md) for instructions.
* For additional docker files, check out [this repository](https://github.com/Atinoda/text-generation-webui-docker).
### Updating the requirements
From time to time, the `requirements.txt` changes. To update, use this command:
```
conda activate textgen
cd text-generation-webui
pip install -r requirements.txt --upgrade
```
## Downloading models
Models should be placed inside the `models/` folder.
[Hugging Face](https://huggingface.co/models?pipeline_tag=text-generation&sort=downloads) is the main place to download models. These are some examples:
* [Pythia](https://huggingface.co/models?sort=downloads&search=eleutherai%2Fpythia+deduped)
* [OPT](https://huggingface.co/models?search=facebook/opt)
* [GALACTICA](https://huggingface.co/models?search=facebook/galactica)
* [GPT-J 6B](https://huggingface.co/EleutherAI/gpt-j-6B/tree/main)
You can automatically download a model from HF using the script `download-model.py`:
```
python download-model.py organization/model
```

For example:

```
python download-model.py facebook/opt-1.3b
```
To download a protected model, set env vars `HF_USER` and `HF_PASS` to your Hugging Face username and password (or [User Access Token](https://huggingface.co/settings/tokens)). The model's terms must first be accepted on the HF website.
#### GGML models
You can drop these directly into the `models/` folder, making sure that the file name contains `ggml` somewhere and ends in `.bin`.
#### GPT-4chan
<details>
<summary>
Instructions
</summary>
[GPT-4chan](https://huggingface.co/ykilcher/gpt-4chan) has been taken down from Hugging Face, so you need to download it elsewhere. You have two options:
* Torrent: [16-bit](https://archive.org/details/gpt4chan_model_float16) / [32-bit](https://archive.org/details/gpt4chan_model)
* Direct download: [16-bit](https://theswissbay.ch/pdf/_notpdf_/gpt4chan_model_float16/) / [32-bit](https://theswissbay.ch/pdf/_notpdf_/gpt4chan_model/)
The 32-bit version is only relevant if you intend to run the model in CPU mode. Otherwise, you should use the 16-bit version.
After downloading the model, follow these steps:
1. Place the files under `models/gpt4chan_model_float16` or `models/gpt4chan_model`.
2. Place GPT-J 6B's config.json file in that same folder: [config.json](https://huggingface.co/EleutherAI/gpt-j-6B/raw/main/config.json).
3. Download GPT-J 6B's tokenizer files (they will be automatically detected when you attempt to load GPT-4chan):
```
python download-model.py EleutherAI/gpt-j-6B --text-only
```
When you load this model in default or notebook modes, the "HTML" tab will show the generated text in 4chan format.
</details>
## Starting the web UI
```
conda activate textgen
cd text-generation-webui
python server.py
```
Then browse to
`http://localhost:7860/?__theme=dark`
Optionally, you can use the following command-line flags:
#### Basic settings
| Flag | Description |
|--------------------------------------------|-------------|
| `-h`, `--help` | Show this help message and exit. |
| `--notebook` | Launch the web UI in notebook mode, where the output is written to the same text box as the input. |
| `--chat` | Launch the web UI in chat mode. |
| `--character CHARACTER` | The name of the character to load in chat mode by default. |
| `--model MODEL` | Name of the model to load by default. |
| `--lora LORA [LORA ...]` | The list of LoRAs to load. If you want to load more than one LoRA, write the names separated by spaces. |
| `--model-dir MODEL_DIR` | Path to directory with all the models. |
| `--lora-dir LORA_DIR` | Path to directory with all the loras. |
| `--model-menu` | Show a model menu in the terminal when the web UI is first launched. |
| `--no-stream` | Don't stream the text output in real time. |
| `--settings SETTINGS_FILE` | Load the default interface settings from this yaml file. See `settings-template.yaml` for an example. If you create a file called `settings.yaml`, this file will be loaded by default without the need to use the `--settings` flag. |
| `--extensions EXTENSIONS [EXTENSIONS ...]` | The list of extensions to load. If you want to load more than one extension, write the names separated by spaces. |
| `--verbose` | Print the prompts to the terminal. |
#### Model loader
| Flag | Description |
|--------------------------------------------|-------------|
| `--loader LOADER` | Choose the model loader manually, otherwise, it will get autodetected. Valid options: transformers, autogptq, gptq-for-llama, exllama, exllama_hf, llamacpp, rwkv, flexgen |
#### Accelerate/transformers
| Flag | Description |
|---------------------------------------------|-------------|
| `--cpu` | Use the CPU to generate text. Warning: Training on CPU is extremely slow.|
| `--auto-devices` | Automatically split the model across the available GPU(s) and CPU. |
| `--gpu-memory GPU_MEMORY [GPU_MEMORY ...]` | Maximum GPU memory in GiB to be allocated per GPU. Example: `--gpu-memory 10` for a single GPU, `--gpu-memory 10 5` for two GPUs. You can also set values in MiB like `--gpu-memory 3500MiB`. |
| `--cpu-memory CPU_MEMORY` | Maximum CPU memory in GiB to allocate for offloaded weights. Same as above.|
| `--disk` | If the model is too large for your GPU(s) and CPU combined, send the remaining layers to the disk. |
| `--disk-cache-dir DISK_CACHE_DIR` | Directory to save the disk cache to. Defaults to `cache/`. |
| `--load-in-8bit` | Load the model with 8-bit precision (using bitsandbytes).|
| `--bf16` | Load the model with bfloat16 precision. Requires NVIDIA Ampere GPU. |
| `--no-cache` | Set `use_cache` to False while generating text. This reduces the VRAM usage a bit with a performance cost. |
| `--xformers` | Use xformer's memory efficient attention. This should increase your tokens/s. |
| `--sdp-attention` | Use torch 2.0's sdp attention. |
| `--trust-remote-code` | Set trust_remote_code=True while loading a model. Necessary for ChatGLM and Falcon. |
#### Accelerate 4-bit
⚠️ Requires a minimum CUDA compute capability of 7.0 on Windows at the moment.
| Flag | Description |
|---------------------------------------------|-------------|
| `--load-in-4bit` | Load the model with 4-bit precision (using bitsandbytes). |
| `--compute_dtype COMPUTE_DTYPE` | compute dtype for 4-bit. Valid options: bfloat16, float16, float32. |
| `--quant_type QUANT_TYPE` | quant_type for 4-bit. Valid options: nf4, fp4. |
| `--use_double_quant` | use_double_quant for 4-bit. |
#### llama.cpp
| Flag | Description |
|-------------|-------------|
| `--threads` | Number of threads to use. |
| `--n_batch` | Maximum number of prompt tokens to batch together when calling llama_eval. |
| `--no-mmap` | Prevent mmap from being used. |
| `--mlock` | Force the system to keep the model in RAM. |
| `--cache-capacity CACHE_CAPACITY` | Maximum cache capacity. Examples: 2000MiB, 2GiB. When provided without units, bytes will be assumed. |
| `--n-gpu-layers N_GPU_LAYERS` | Number of layers to offload to the GPU. Only works if llama-cpp-python was compiled with BLAS. Set this to 1000000000 to offload all layers to the GPU. |
| `--n_ctx N_CTX` | Size of the prompt context. |
| `--llama_cpp_seed SEED` | Seed for llama-cpp models. Default 0 (random). |
#### AutoGPTQ
| Flag | Description |
|------------------|-------------|
| `--triton` | Use triton. |
| `--no_inject_fused_attention` | Disable the use of fused attention, which will use less VRAM at the cost of slower inference. |
| `--no_inject_fused_mlp` | Triton mode only: disable the use of fused MLP, which will use less VRAM at the cost of slower inference. |
| `--no_use_cuda_fp16` | This can make models faster on some systems. |
| `--desc_act` | For models that don't have a quantize_config.json, this parameter is used to define whether to set desc_act or not in BaseQuantizeConfig. |
#### ExLlama
| Flag | Description |
|------------------|-------------|
|`--gpu-split` | Comma-separated list of VRAM (in GB) to use per GPU device for model layers, e.g. `20,7,7` |
|`--max_seq_len MAX_SEQ_LEN` | Maximum sequence length. |
|`--compress_pos_emb COMPRESS_POS_EMB` | Positional embeddings compression factor. Should typically be set to max_seq_len / 2048. |
#### GPTQ-for-LLaMa
| Flag | Description |
|---------------------------|-------------|
| `--wbits WBITS` | Load a pre-quantized model with specified precision in bits. 2, 3, 4 and 8 are supported. |
| `--model_type MODEL_TYPE` | Model type of pre-quantized model. Currently LLaMA, OPT, and GPT-J are supported. |
| `--groupsize GROUPSIZE` | Group size. |
| `--pre_layer PRE_LAYER [PRE_LAYER ...]` | The number of layers to allocate to the GPU. Setting this parameter enables CPU offloading for 4-bit models. For multi-gpu, write the numbers separated by spaces, e.g. `--pre_layer 30 60`. |
| `--checkpoint CHECKPOINT` | The path to the quantized checkpoint file. If not specified, it will be automatically detected. |
| `--monkey-patch` | Apply the monkey patch for using LoRAs with quantized models. |
| `--quant_attn` | (triton) Enable quant attention. |
| `--warmup_autotune` | (triton) Enable warmup autotune. |
| `--fused_mlp` | (triton) Enable fused mlp. |
#### FlexGen
| Flag | Description |
|------------------|-------------|
| `--percent PERCENT [PERCENT ...]` | FlexGen: allocation percentages. Must be 6 numbers separated by spaces (default: 0, 100, 100, 0, 100, 0). |
| `--compress-weight` | FlexGen: Whether to compress weight (default: False).|
| `--pin-weight [PIN_WEIGHT]` | FlexGen: whether to pin weights (setting this to False reduces CPU memory by 20%). |
#### DeepSpeed
| Flag | Description |
|---------------------------------------|-------------|
| `--deepspeed` | Enable the use of DeepSpeed ZeRO-3 for inference via the Transformers integration. |
| `--nvme-offload-dir NVME_OFFLOAD_DIR` | DeepSpeed: Directory to use for ZeRO-3 NVME offloading. |
| `--local_rank LOCAL_RANK` | DeepSpeed: Optional argument for distributed setups. |
#### RWKV
| Flag | Description |
|---------------------------------|-------------|
| `--rwkv-strategy RWKV_STRATEGY` | RWKV: The strategy to use while loading the model. Examples: "cpu fp32", "cuda fp16", "cuda fp16i8". |
| `--rwkv-cuda-on` | RWKV: Compile the CUDA kernel for better performance. |
#### Gradio
| Flag | Description |
|---------------------------------------|-------------|
| `--listen` | Make the web UI reachable from your local network. |
| `--listen-host LISTEN_HOST` | The hostname that the server will use. |
| `--listen-port LISTEN_PORT` | The listening port that the server will use. |
| `--share` | Create a public URL. This is useful for running the web UI on Google Colab or similar. |
| `--auto-launch` | Open the web UI in the default browser upon launch. |
| `--gradio-auth USER:PWD` | Set Gradio authentication like "username:password"; or comma-delimit multiple pairs like "u1:p1,u2:p2,u3:p3". |
| `--gradio-auth-path GRADIO_AUTH_PATH` | Set the gradio authentication file path. The file should contain one or more user:password pairs in this format: "u1:p1,u2:p2,u3:p3" |
#### API
| Flag | Description |
|---------------------------------------|-------------|
| `--api` | Enable the API extension. |
| `--public-api` | Create a public URL for the API using Cloudflare. |
| `--api-blocking-port BLOCKING_PORT` | The listening port for the blocking API. |
| `--api-streaming-port STREAMING_PORT` | The listening port for the streaming API. |
#### Multimodal
| Flag | Description |
|---------------------------------------|-------------|
| `--multimodal-pipeline PIPELINE` | The multimodal pipeline to use. Examples: `llava-7b`, `llava-13b`. |
Out of memory errors? [Check the low VRAM guide](docs/Low-VRAM-guide.md).
## Presets
Inference settings presets can be created under `presets/` as yaml files. These files are detected automatically at startup.
The presets that are included by default are the result of a contest that received 7215 votes. More details can be found [here](https://github.com/oobabooga/oobabooga.github.io/blob/main/arena/results.md).
## Contributing
* Pull requests, suggestions, and issue reports are welcome.
* Make sure to carefully [search](https://github.com/oobabooga/text-generation-webui/issues) existing issues before starting a new one.
* If you have some experience with git, testing an open pull request and leaving a comment on whether it works as expected or not is immensely helpful.
* A simple way to contribute, even if you are not a programmer, is to leave a 👍 on an issue or pull request that you find relevant.
## Credits
- Gradio dropdown menu refresh button, code for reloading the interface: https://github.com/AUTOMATIC1111/stable-diffusion-webui
- Godlike preset: https://github.com/KoboldAI/KoboldAI-Client/wiki/Settings-Presets
- Code for some of the sliders: https://github.com/PygmalionAI/gradio-ui/
| modelId: amirapppppp79/my_model | author: amirapppppp79 | last_modified: 2023-06-26T14:20:10Z | downloads: 61 | likes: 0 | library: transformers | tags: [transformers, tf, bert, text-classification, generated_from_keras_callback, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | pipeline_tag: text-classification | createdAt: 2023-06-26T01:36:24Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: amirapppppp79/my_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# amirapppppp79/my_model
This model is a fine-tuned version of [HooshvareLab/bert-fa-zwnj-base](https://huggingface.co/HooshvareLab/bert-fa-zwnj-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: nan
- Validation Loss: nan
- Train Accuracy: 0.0
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18675, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| nan | nan | 0.0 | 0 |
| nan | nan | 0.0 | 1 |
| nan | nan | 0.0 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
| modelId: AlgorithmicResearchGroup/flan-t5-base-arxiv-cs-ml-question-answering | author: AlgorithmicResearchGroup | last_modified: 2023-06-26T14:19:17Z | downloads: 123 | likes: 0 | library: transformers | tags: [transformers, pytorch, safetensors, t5, text2text-generation, arxiv, summarization, en, license:apache-2.0, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us] | pipeline_tag: summarization | createdAt: 2023-06-21T23:29:11Z |
---
license: apache-2.0
language:
- en
pipeline_tag: summarization
widget:
- text: What is an LSTM?
example_title: Question Answering
tags:
- arxiv
---
# Table of Contents
0. [TL;DR](#TL;DR)
1. [Model Details](#model-details)
2. [Usage](#usage)
3. [Uses](#uses)
4. [Citation](#citation)
# TL;DR
This is a FLAN-T5 model trained on [ArtifactAI/arxiv-cs-ml-instruct-tune-50k](https://huggingface.co/datasets/ArtifactAI/arxiv-cs-ml-instruct-tune-50k). This model is for research purposes only and ***should not be used in production settings***. The output is highly unreliable.
# Model Details
## Model Description
- **Model type:** Language model
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Related Models:** [All FLAN-T5 Checkpoints](https://huggingface.co/models?search=flan-t5)
# Usage
Find below some example scripts on how to use the model in `transformers`:
## Using the Pytorch model
### Running the model on a CPU
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("ArtifactAI/flan-t5-base-arxiv-cs-ml-question-answering")
model = T5ForConditionalGeneration.from_pretrained("ArtifactAI/flan-t5-base-arxiv-cs-ml-question-answering")
input_text = "What is an LSTM?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
### Running the model on a GPU
```python
# pip install accelerate
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("ArtifactAI/flan-t5-base-arxiv-cs-ml-question-answering")
model = T5ForConditionalGeneration.from_pretrained("ArtifactAI/flan-t5-base-arxiv-cs-ml-question-answering", device_map="auto")
input_text = "What is an LSTM?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
### Running the model in an HF pipeline
#### FP16
```python
# load model and tokenizer from the Hugging Face Hub with a pipeline
from transformers import pipeline

qa = pipeline("summarization", model="ArtifactAI/flan-t5-base-arxiv-cs-ml-question-answering")
query = "What is an LSTM?"
print(f"query: {query}")
res = qa("answer: " + query)
print(f"{res[0]['summary_text']}")
```
# Training Details
## Training Data
The model was trained on [ArtifactAI/arxiv-cs-ml-instruct-tune-50k](https://huggingface.co/datasets/ArtifactAI/arxiv-cs-ml-instruct-tune-50k), a dataset of question/answer pairs. Questions are generated using the t5-base model, while the answers are generated using the GPT-3.5-turbo model.
# Citation
```
@misc{flan-t5-base-arxiv-cs-ml-question-answering,
title={flan-t5-base-arxiv-cs-ml-question-answering},
author={Matthew Kenney},
year={2023}
}
```
| modelId: Helly/alpaca-3b-lora-dwarves-poc | author: Helly | last_modified: 2023-06-26T14:14:48Z | downloads: 0 | likes: 0 | library: peft | tags: [peft, region:us] | pipeline_tag: null | createdAt: 2023-06-26T14:14:46Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
| modelId: TheBloke/Selfee-13B-GGML-DOI | author: TheBloke | last_modified: 2023-06-26T14:13:13Z | downloads: 0 | likes: 3 | library: null | tags: [doi:10.57967/hf/0822, license:other, region:us] | pipeline_tag: null | createdAt: 2023-06-26T12:29:59Z |
---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Kaist AI's Selfee 13B GGML - DOI 2023/06/26
These files are GGML format model files for [Kaist AI's Selfee 13B](https://huggingface.co/kaist-ai/selfee-13b-delta).
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* [ParisNeo/GPT4All-UI](https://github.com/ParisNeo/gpt4all-ui)
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
* [ctransformers](https://github.com/marella/ctransformers)
## DOI REPO
This is a DOI repository, created 26th June 2023. It contains the GGML model files from [TheBloke/Selfee-13B-GGML](https://huggingface.co/TheBloke/Selfee-13B-GGML) as of that date.
The purpose of a DOI repository is to provide a permanent record of a set of files, guaranteed not to change. Therefore the GGML files in this repository will never update.
If you're looking for the latest GGMLs for Selfee 13B GGML, please check [TheBloke/Selfee-13B-GGML](https://huggingface.co/TheBloke/Selfee-13B-GGML).
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Selfee-13B-GPTQ)
* [2, 3, 4, 5, 6, and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/Selfee-13B-GGML)
* [DOI Snapshot 2023/06/26 2, 3, 4, 5, 6, and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/Selfee-13B-GGML-DOI)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Selfee-13B-fp16)
<!-- compatibility_ggml start -->
## Compatibility
### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
I have quantized these 'original' quantisation methods using an older version of llama.cpp so that they remain compatible with llama.cpp as of May 19th, commit `2d5db48`.
These are guaranteed to be compatible with any UIs, tools and libraries released since late May.
### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`.
They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python and ctransformers. Other tools and libraries may or may not be compatible - check their documentation if in doubt.
## Explanation of the new k-quant methods
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
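The bits-per-weight figures quoted above can be checked with a little arithmetic: each super-block covers 256 weights, and the total cost is the quantized weights plus the per-block scales (and mins) plus the fp16 super-block scale(s). A minimal sketch, with the layouts paraphrased from the descriptions above rather than from the ggml source:

```python
# Sketch: verify the quoted bits-per-weight (bpw) for some k-quant types.
# Each super-block covers 256 weights; an fp16 super-block scale costs 16 bits.
SUPER_BLOCK = 256
FP16 = 16

def bpw(weight_bits, blocks, scale_bits_per_block, super_scales):
    """Total bits per weight for one super-block."""
    total = (SUPER_BLOCK * weight_bits              # quantized weights
             + blocks * scale_bits_per_block        # per-block scales (and mins)
             + super_scales * FP16)                 # fp16 super-block scale/min
    return total / SUPER_BLOCK

print(bpw(3, 16, 6, 1))      # 3.4375 - q3_K: 16 blocks, 6-bit scales
print(bpw(4, 8, 6 + 6, 2))   # 4.5    - q4_K: 8 blocks, 6-bit scales + mins
print(bpw(5, 8, 6 + 6, 2))   # 5.5    - q5_K: same structure as q4_K
print(bpw(6, 16, 8, 1))      # 6.5625 - q6_K: 16 blocks, 8-bit scales
```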
Refer to the Provided Files table below to see what files use which methods, and how.
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| selfee-13b.ggmlv3.q2_K.bin | q2_K | 2 | 5.43 GB | 7.93 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| selfee-13b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 6.87 GB | 9.37 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| selfee-13b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 6.25 GB | 8.75 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| selfee-13b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 5.59 GB | 8.09 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| selfee-13b.ggmlv3.q4_0.bin | q4_0 | 4 | 7.32 GB | 9.82 GB | Original llama.cpp quant method, 4-bit. |
| selfee-13b.ggmlv3.q4_1.bin | q4_1 | 4 | 8.14 GB | 10.64 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| selfee-13b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 7.82 GB | 10.32 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| selfee-13b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 7.32 GB | 9.82 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| selfee-13b.ggmlv3.q5_0.bin | q5_0 | 5 | 8.95 GB | 11.45 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| selfee-13b.ggmlv3.q5_1.bin | q5_1 | 5 | 9.76 GB | 12.26 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| selfee-13b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 9.21 GB | 11.71 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| selfee-13b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 8.95 GB | 11.45 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| selfee-13b.ggmlv3.q6_K.bin | q6_K | 6 | 10.68 GB | 13.18 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
| selfee-13b.ggmlv3.q8_0.bin | q8_0 | 8 | 13.83 GB | 16.33 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
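A rule of thumb implicit in the table above: the maximum RAM figure is the file size plus roughly 2.5 GB of working overhead. A hedged sketch (the 2.5 GB constant is simply read off the table, not a llama.cpp guarantee, and offloading layers shifts memory from RAM to VRAM):

```python
# Sketch: estimate peak RAM for a GGML file, per the table above.
# The ~2.5 GB overhead is read off the table; it is not a llama.cpp guarantee.
OVERHEAD_GB = 2.5

def est_max_ram_gb(file_size_gb: float) -> float:
    return round(file_size_gb + OVERHEAD_GB, 2)

print(est_max_ram_gb(7.32))   # 9.82  (matches the q4_0 row)
print(est_max_ram_gb(13.83))  # 16.33 (matches the q8_0 row)
```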
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 10 -ngl 32 -m selfee-13b.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\n### Response:"
```
Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: Pyrater, WelcomeToTheClub, Kalila, Mano Prime, Trenton Dambrowitz, Spiking Neurons AB, Pierre Kircher, Fen Risland, Kevin Schuppel, Luke, Rainer Wilmers, vamX, Gabriel Puliatti, Alex , Karl Bernard, Ajan Kanaga, Talal Aujan, Space Cruiser, ya boyyy, biorpg, Johann-Peter Hartmann, Asp the Wyvern, Ai Maven, Ghost , Preetika Verma, Nikolai Manek, trip7s trip, John Detwiler, Fred von Graf, Artur Olbinski, subjectnull, John Villwock, Junyu Yang, Rod A, Lone Striker, Chris McCloskey, Iucharbius , Matthew Berman, Illia Dulskyi, Khalefa Al-Ahmad, Imad Khwaja, chris gileta, Willem Michiel, Greatston Gnanesh, Derek Yates, K, Alps Aficionado, Oscar Rangel, David Flickinger, Luke Pendergrass, Deep Realms, Eugene Pentland, Cory Kujawski, terasurfer , Jonathan Leane, senxiiz, Joseph William Delisle, Sean Connelly, webtim, zynix , Nathan LeClaire
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Kaist AI's Selfee 13B
<p align="center" width="100%">
<a href="https://kaistai.github.io/SelFee/demo" target="_blank"><img src="https://raw.githubusercontent.com/kaistAI/SelFee/main/assets/llama_selfie.png" alt="KAIST-Selfee" style="width: 30%; min-width: 200px; display: block; margin: auto;"></a>
</p>
# SelFee: Iterative Self-Revising LLM Empowered by <br/> Self-Feedback Generation
[](https://github.com/tatsu-lab/stanford_alpaca/blob/main/LICENSE)
[](https://github.com/tatsu-lab/stanford_alpaca/blob/main/DATA_LICENSE)
[](https://www.python.org/downloads/release/python-390/)
[](https://github.com/psf/black)
## News
[May 31, 2023] Initial release: We released the first version of SelFee! Check out the <a href="https://kaistai.github.io/SelFee/">blog post</a> for more details.
## Overview
This is the repository for the KAIST SelFee project, which aims to build and share an instruction-following LLaMA model. This repo mainly has five contents:
- The selection process of the 178K training data for SelFee ([detail](#data-release), [code](data_collection)).
- The generation process for the training data and its result. ([detail](#data-generation-process), [code](data_augmentation)).
- The training process for the model ([detail](#training), [code](train)).
- The inference process for the model ([detail](#inference), [code](inference)).
- The evaluation method and dataset ([detail](#evaluation), [code](evaluation)).
This repository is based on the [Stanford-Alpaca](https://github.com/tatsu-lab/stanford_alpaca/) and [Vicuna](https://github.com/lm-sys/FastChat/) repository. Thanks to all the contributors for these awesome repositories!! 🙌
**We highly recommend you read our [blog post](https://kaistai.github.io/SelFee/) for more details about the model.**
## Data Release
For data collection, we collected datasets from five different fields: the Stanford Alpaca dataset, a math collection, a code collection, the Flan collection, and ShareGPT. We provide the code we used to build the training dataset, as well as the code showing how we preprocessed ShareGPT. For ShareGPT, we use only the first (question, answer) pair from the human and GPT, respectively. We keep only instances classified as English and filter out instances that are not in the form of a question.
For the other datasets, no special data collection method is needed.
## Data Generation Process
To train our model with high-quality instructions and answer pairs, we utilized data augmentation using OpenAI API calls. The process involved three steps. <br>
Firstly, we collected various instructions from multiple fields and fed them to ChatGPT to generate answers. <br>
Secondly, we gathered feedback on the generated answer by querying ChatGPT again and asked it to determine if the initial answer required any revision. <br>
Thirdly, if a revision was necessary, we passed the instruction, initial answer, and feedback pair to ChatGPT to generate a revised answer and its feedback pair.
We repeated the process until we received feedback that required no further revision or hit the maximum iteration. However, due to the token limitation of the ChatGPT API, we had to truncate some instances that needed more than 4096 tokens while augmenting.<br>
You can see the details with command [here](data_augmentation/README.md).<br>
*We provide the whole dataset after collection and augmentation using huggingface([code](data_collection/download_train.py)), so you can either use the code or follow our [data merging step](outputs/README.md) to replicate the training dataset. Feel free to use any of them!
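The three-step loop described above can be sketched as follows. This is an illustrative reconstruction, not the project's actual code: `ask_llm` is a hypothetical stand-in for the ChatGPT API call, and the stop phrase mirrors the `Revision is not needed.` convention described later in this card.

```python
# Illustrative sketch of the answer -> feedback -> revision augmentation loop.
# `ask_llm` is a hypothetical stand-in for an OpenAI ChatGPT API call.
STOP_PHRASE = "Revision is not needed."

def self_revise(instruction, ask_llm, max_iters=3):
    """Generate an answer, then iteratively request feedback and revisions."""
    answer = ask_llm(f"Instruction: {instruction}\nAnswer the instruction.")
    chain = [("answer", answer)]
    for _ in range(max_iters):
        feedback = ask_llm(
            f"Instruction: {instruction}\nAnswer: {answer}\n"
            "Give feedback. Does the answer need revision?"
        )
        chain.append(("feedback", feedback))
        if STOP_PHRASE in feedback:   # the feedback says to stop revising
            break
        answer = ask_llm(
            f"Instruction: {instruction}\nAnswer: {answer}\n"
            f"Feedback: {feedback}\nRevise the answer."
        )
        chain.append(("revision", answer))
    return answer, chain
```

With a real API client plugged in as `ask_llm`, the returned chain mirrors the (answer, feedback, revision, ...) structure of the training instances described above, truncated at the maximum iteration count.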
## Training
We utilize <a href="https://github.com/lm-sys/FastChat">FastChat</a> to train the model. Given the instruction, we fine-tune the model to generate the answer and feedback chain (including the revisions).<br>
To reproduce the training procedure, here are the steps. <br>
```
pip install -r requirements.txt
```
```
torchrun --nproc_per_node=4 train/train_mem.py \
--model_name_or_path llama-7b \
--data_path outputs/feedback_gpt_3.5_turbo_merged_whole.json \
--bf16 True \
--output_dir ckpt/selfee-7b \
--num_train_epochs 3 \
--per_device_train_batch_size 16 \
--per_device_eval_batch_size 16 \
--gradient_accumulation_steps 2 \
--evaluation_strategy "no" \
--save_strategy "steps" \
--save_steps 5000 \
--save_total_limit 1 \
--learning_rate 2e-5 \
--weight_decay 0. \
--warmup_ratio 0.03 \
--lr_scheduler_type "cosine" \
--logging_steps 1 \
--fsdp "shard_grad_op auto_wrap" \
--fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer' \
--tf32 True \
--model_max_length 2048 \
--gradient_checkpointing True \
--lazy_preprocess True \
--training_objective full \
```
The hyperparameters are as follows, following Vicuna and Alpaca.
| Hyperparameter | Global Batch Size | Learning rate | Epochs | Max length | Weight decay |
| --- | ---: | ---: | ---: | ---: | ---: |
| SelFee (7B, 13B) | 128 | 2e-5 | 3 | 2048 | 0 |
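The global batch size of 128 in the table is consistent with the training command above: `torchrun --nproc_per_node=4`, a per-device batch of 16, and 2 gradient-accumulation steps. A quick check:

```python
# Effective (global) batch size = GPUs x per-device batch x grad-accum steps.
# Values are taken from the torchrun command and hyperparameter table above.
n_gpus = 4             # --nproc_per_node=4
per_device_batch = 16  # --per_device_train_batch_size
grad_accum = 2         # --gradient_accumulation_steps

global_batch = n_gpus * per_device_batch * grad_accum
print(global_batch)  # 128, matching the hyperparameter table
```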
## Inference
<b>Restoring checkpoint using diff</b><br>
We provide diff weights and code that can restore the same model as SelFee. To restore the original SelFee weights, you first need to convert Meta's original LLaMA checkpoint into Hugging Face format on your local machine. Once that is done, you can restore the same checkpoint of our model by using the following command
```
python inference/apply_delta.py --path_raw {path_to_llama_7b} --path_tuned /ckpt/selfee-7b --path_diff kaist-ai/selfee-7b-delta
```
<b>Autonomous Inference Mode</b><br>
Because SelFee is trained to generate iterative feedback and revisions until the response is satisfying, it automatically generates iterative feedback and revisions on a single forward pass. The model autonomously decides when to stop generating revisions based on the feedback. If the feedback chain ends with sequences like `Revision is not needed.`, the model autonomously terminates generation. <br>
For autonomous inference mode,
```
python inference/inference.py --model-path "ckpt/selfee-7b" --model-id "selfee" --question-file "evaluation/template/question.jsonl" --answer-file "evaluation/answer/selfee_7b_autonomous.jsonl"
```
<b>Revision Enforce Inference Mode</b><br>
We observed that increasing the minimum number of required revisions leads to a corresponding increase in performance. To enforce revisions, we automatically replace sequences such as `Revision is not needed.` with `Revision is needed.` during self-feedback generation. Because SelFee is trained to generate `Revision {index}:` after the sequence `Revision is needed.`, the model continually revises the answer.
For revision enforce inference mode, use the `max-num-revision` argument.
```
python inference/inference.py --model-path "ckpt/selfee-7b" --model-id "selfee" --question-file "evaluation/template/question.jsonl" --answer-file "evaluation/answer/selfee_7b_enforce_3_revision.jsonl" --max-num-revision 3
```
## Evaluation
Following the evaluation setting of Vicuna, we evaluate on 80 diverse queries and utilize the GPT-4 language model as the evaluator, scoring a model's response relative to ChatGPT's response. One difference from the Vicuna evaluation is that, due to the positional bias of GPT-4, we employ a bidirectional evaluation setting. This means that each evaluation instance is inferred twice, depending on its position.<br>
We release the inference result of SelFee in the folder of `evaluation/answer` and also the scores generated by GPT-4 in the folder of `evaluation/review`. <br>
### GPT-4 Automatic Evaluation
First, you need to get your API key to get access to the GPT-4 API.
```
export OPENAI_API_KEYS={personal_key}
```
To compare the performance of a generation result (for example, located at `evaluation/answer/file_A.jsonl`) with another generation result (located at `evaluation/answer/file_B.jsonl`),
```
python evaluation/gpt4_automatic_evaluation.py -q evaluation/template/question.jsonl -a evaluation/answer/file_A.jsonl evaluation/answer/file_B.jsonl -p evaluation/template/prompt.jsonl -r evaluation/template/reviewer.jsonl -o evaluation/review/A_vs_B.jsonl
```
To mitigate the positional bias of GPT-4 model, we apply a bidirectional evaluation setting. Therefore, automatic evaluation with opposite position is also needed.
```
python evaluation/gpt4_automatic_evaluation.py -q evaluation/template/question.jsonl -a evaluation/answer/file_B.jsonl evaluation/answer/file_A.jsonl -p evaluation/template/prompt.jsonl -r evaluation/template/reviewer.jsonl -o evaluation/review/B_vs_A.jsonl
```
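A hedged sketch of how the two directional reviews might be combined to mitigate positional bias: score each model once in each position and average. The data shape below is illustrative, not the actual review jsonl schema:

```python
# Sketch: average the two directional GPT-4 scores for each model.
# The (first_position, second_position) score pairs are illustrative;
# the real review jsonl schema may differ.
def combine_bidirectional(a_vs_b, b_vs_a):
    """a_vs_b holds (A, B) score pairs; b_vs_a holds (B, A) score pairs."""
    score_a = score_b = 0.0
    for (a1, b1), (b2, a2) in zip(a_vs_b, b_vs_a):
        score_a += (a1 + a2) / 2   # model A scored in both positions
        score_b += (b1 + b2) / 2   # model B scored in both positions
    n = len(a_vs_b)
    return score_a / n, score_b / n

# e.g. two questions, with a mild bias toward the first slot
print(combine_bidirectional([(8, 7), (9, 6)], [(7, 7), (8, 7)]))  # (7.75, 7.0)
```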
## Limitations
Similar to other LLaMA-finetuned models, SelFee also makes some mistakes, especially on math, reasoning, factuality, and coding tasks. Although our performance outperforms ChatGPT in the Vicuna setting, that evaluation setting has limitations of its own: limited coverage (only 80 queries), inconsistency, and unreliability. Therefore, further research on a better evaluation setting is needed. Please take these claims with a grain of salt.
## Online demo
Check out the <a href="https://kaistai.github.io/SelFee/demo">demo</a>!
#### How to launch the demo yourself
To serve the web demo yourself, run the following commands:
1. Run the controller
```
python3 -m serve.controller
```
2. Run the model worker
```
python3 -m serve.model_worker --model-path $MODEL_PATH --port 21002 --worker-address=http://localhost:21002 --model-name=SelFee-13b
```
3. Run the web server
```
python3 -m serve.gradio_web_server --share
```
You can find the serving code [here](serve).
### Team members
<a href="https://seonghyeonye.github.io/">Seonghyeon Ye*</a>, <a href="https://github.com/dreamgonfly">Yongrae Jo*</a>, <a href="https://github.com/doeyoungkim">Doyoung Kim*</a>, <a href="https://scholar.google.com/citations?user=xKrSnDoAAAAJ&hl">Sungdong Kim</a>, <a href="https://github.com/hbin0701">Hyeonbin Hwang</a>, and <a href="https://seominjoon.github.io/">Minjoon Seo</a>. <br/>
(* denotes equal contribution)
### Release
We have released the SelFee-7B and SelFee-13B model diff weights, which can be found with instructions here. Moreover, the training instances used to train SelFee are released on Hugging Face.
### License
The research preview online demo is only for non-commercial use and is subject to various licenses and terms of use, including the LLaMA model <a href="https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md">License</a>, OpenAI's <a href="https://openai.com/policies/terms-of-use">Terms of Use</a> for the generated data, and ShareGPT's <a href="https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb">Privacy Practices</a>. If you suspect any violations, please reach out to us.
### Citation
Please cite if you use the data or code in this repo.
```
@misc{selfee2023,
author = {Ye, Seonghyeon and Jo, Yongrae and Kim, Doyoung and Kim, Sungdong and Hwang, Hyeonbin and Seo, Minjoon},
title = {SelFee: Iterative Self-Revising LLM Empowered by Self-Feedback Generation},
url = {https://kaistai.github.io/SelFee/},
month = {May},
year = {2023},
howpublished = {Blog post}
}
```
| modelId: prathith/DQNSpaceInvade | author: prathith | last_modified: 2023-06-26T14:09:51Z | downloads: 2 | likes: 0 | library: stable-baselines3 | tags: [stable-baselines3, SpaceInvadersNoFrameskip-v4, deep-reinforcement-learning, reinforcement-learning, model-index, region:us] | pipeline_tag: reinforcement-learning | createdAt: 2023-06-26T14:09:24Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 5.00 +/- 7.07
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga prathith -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga prathith -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga prathith
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 100000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
TheBloke/Selfee-13B-GGML
|
TheBloke
| 2023-06-26T14:07:56Z | 0 | 16 | null |
[
"license:other",
"region:us"
] | null | 2023-06-06T10:24:04Z |
---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Kaist AI's Selfee 13B GGML
These files are GGML format model files for [Kaist AI's Selfee 13B](https://huggingface.co/kaist-ai/selfee-13b-delta).
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* [ParisNeo/GPT4All-UI](https://github.com/ParisNeo/gpt4all-ui)
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
* [ctransformers](https://github.com/marella/ctransformers)
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Selfee-13B-GPTQ)
* [2, 3, 4, 5, 6, and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/Selfee-13B-GGML)
* [DOI Snapshot 2023/06/26 2, 3, 4, 5, 6, and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/Selfee-13B-GGML-DOI)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Selfee-13B-fp16)
<!-- compatibility_ggml start -->
## Compatibility
### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
I have produced the 'original' method quantisations using an older version of llama.cpp so that they remain compatible with llama.cpp as of May 19th, commit `2d5db48`.
These are guaranteed to be compatible with any UIs, tools and libraries released since late May.
### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`.
They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python and ctransformers. Other tools and libraries may or may not be compatible - check their documentation if in doubt.
## Explanation of the new k-quant methods
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
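As a back-of-the-envelope check, the per-weight cost of the 4- and 5-bit "type-1" layouts above can be reproduced from the block structure. The helper below is illustrative, not part of llama.cpp, and assumes an fp16 scale and min at the super-block level:

```python
def type1_bpw(qbits, n_blocks, block_size, scale_bits):
    """Effective bits per weight for a 'type-1' k-quant super-block."""
    weights = n_blocks * block_size                  # weights per super-block
    bits = (weights * qbits                          # quantised weights
            + n_blocks * 2 * scale_bits              # per-block scale + min
            + 2 * 16)                                # fp16 super-block scale + min
    return bits / weights

print(type1_bpw(4, 8, 32, 6))  # GGML_TYPE_Q4_K -> 4.5 bpw
print(type1_bpw(5, 8, 32, 6))  # GGML_TYPE_Q5_K -> 5.5 bpw
```

Both values match the figures quoted in the list above.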
Refer to the Provided Files table below to see what files use which methods, and how.
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| selfee-13b.ggmlv3.q2_K.bin | q2_K | 2 | 5.43 GB | 7.93 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| selfee-13b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 6.87 GB | 9.37 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| selfee-13b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 6.25 GB | 8.75 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| selfee-13b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 5.59 GB | 8.09 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| selfee-13b.ggmlv3.q4_0.bin | q4_0 | 4 | 7.32 GB | 9.82 GB | Original llama.cpp quant method, 4-bit. |
| selfee-13b.ggmlv3.q4_1.bin | q4_1 | 4 | 8.14 GB | 10.64 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| selfee-13b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 7.82 GB | 10.32 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| selfee-13b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 7.32 GB | 9.82 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| selfee-13b.ggmlv3.q5_0.bin | q5_0 | 5 | 8.95 GB | 11.45 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| selfee-13b.ggmlv3.q5_1.bin | q5_1 | 5 | 9.76 GB | 12.26 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| selfee-13b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 9.21 GB | 11.71 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| selfee-13b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 8.95 GB | 11.45 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| selfee-13b.ggmlv3.q6_K.bin | q6_K | 6 | 10.68 GB | 13.18 GB | New k-quant method. Uses GGML_TYPE_Q6_K - 6-bit quantization - for all tensors |
| selfee-13b.ggmlv3.q8_0.bin | q8_0 | 8 | 13.83 GB | 16.33 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 10 -ngl 32 -m selfee-13b.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\n### Response:"
```
Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: Pyrater, WelcomeToTheClub, Kalila, Mano Prime, Trenton Dambrowitz, Spiking Neurons AB, Pierre Kircher, Fen Risland, Kevin Schuppel, Luke, Rainer Wilmers, vamX, Gabriel Puliatti, Alex , Karl Bernard, Ajan Kanaga, Talal Aujan, Space Cruiser, ya boyyy, biorpg, Johann-Peter Hartmann, Asp the Wyvern, Ai Maven, Ghost , Preetika Verma, Nikolai Manek, trip7s trip, John Detwiler, Fred von Graf, Artur Olbinski, subjectnull, John Villwock, Junyu Yang, Rod A, Lone Striker, Chris McCloskey, Iucharbius , Matthew Berman, Illia Dulskyi, Khalefa Al-Ahmad, Imad Khwaja, chris gileta, Willem Michiel, Greatston Gnanesh, Derek Yates, K, Alps Aficionado, Oscar Rangel, David Flickinger, Luke Pendergrass, Deep Realms, Eugene Pentland, Cory Kujawski, terasurfer , Jonathan Leane, senxiiz, Joseph William Delisle, Sean Connelly, webtim, zynix , Nathan LeClaire
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Kaist AI's Selfee 13B
<p align="center" width="100%">
<a href="https://kaistai.github.io/SelFee/demo" target="_blank"><img src="https://raw.githubusercontent.com/kaistAI/SelFee/main/assets/llama_selfie.png" alt="KAIST-Selfee" style="width: 30%; min-width: 200px; display: block; margin: auto;"></a>
</p>
# SelFee: Iterative Self-Revising LLM Empowered by <br/> Self-Feedback Generation
[](https://github.com/tatsu-lab/stanford_alpaca/blob/main/LICENSE)
[](https://github.com/tatsu-lab/stanford_alpaca/blob/main/DATA_LICENSE)
[](https://www.python.org/downloads/release/python-390/)
[](https://github.com/psf/black)
## News
[May 31, 2023] Initial release: We released the first version of SelFee! Check out the <a href="https://kaistai.github.io/SelFee/">blog post</a> for more details.
## Overview
This is the repository for the KAIST SelFee project, which aims to build and share an instruction-following LLaMA model. This repo mainly contains five components:
- The selection process of the 178K training data for SelFee ([detail](#data-release), [code](data_collection)).
- The generation process for the training data and its result. ([detail](#data-generation-process), [code](data_augmentation)).
- The training process for the model ([detail](#training), [code](train)).
- The inference process for the model ([detail](#inference), [code](inference)).
- The evaluation method and dataset ([detail](#evaluation), [code](evaluation)).
This repository is based on the [Stanford-Alpaca](https://github.com/tatsu-lab/stanford_alpaca/) and [Vicuna](https://github.com/lm-sys/FastChat/) repository. Thanks to all the contributors for these awesome repositories!! 🙌
**We highly recommend you read our [blog post](https://kaistai.github.io/SelFee/) for more details about the model.**
## Data Release
For data collection, we collected datasets from five different fields: the Stanford Alpaca dataset, a math collection, a code collection, the Flan collection, and ShareGPT. We provide the code that we used to build the training dataset, as well as the code for how we preprocessed ShareGPT. For ShareGPT, we only use the first (question, answer) pair from the human and GPT, respectively. We only use instances which are classified as English, and filter out instances which are not in the form of a question.
For the other datasets, no special data collection method was needed.
## Data Generation Process
To train our model with high-quality instructions and answer pairs, we utilized data augmentation using OpenAI API calls. The process involved three steps. <br>
Firstly, we collected various instructions from multiple fields and fed them to ChatGPT to generate answers. <br>
Secondly, we gathered feedback on the generated answer by querying ChatGPT again and asked it to determine if the initial answer required any revision. <br>
Thirdly, if a revision was necessary, we passed the instruction, initial answer, and feedback pair to ChatGPT to generate a revised answer and its feedback pair.
We repeated the process until we received feedback that required no further revision or hit the maximum iteration. However, due to the token limitation of the ChatGPT API, we had to truncate some instances that needed more than 4096 tokens while augmenting.<br>
You can see the details with command [here](data_augmentation/README.md).<br>
*We provide the whole dataset after collection and augmentation using huggingface([code](data_collection/download_train.py)), so you can either use the code or follow our [data merging step](outputs/README.md) to replicate the training dataset. Feel free to use any of them!
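The iterative loop described above can be sketched as follows. `generate_feedback` and `generate_revision` stand in for the ChatGPT API calls; both names are hypothetical and not taken from the SelFee codebase:

```python
def augment(instruction, initial_answer, generate_feedback, generate_revision,
            max_iterations=3):
    """Collect an (answer, feedback) chain, revising until no revision is needed
    or the maximum iteration count is reached."""
    answer, chain = initial_answer, []
    for _ in range(max_iterations):
        feedback = generate_feedback(instruction, answer)
        chain.append((answer, feedback))
        if "Revision is not needed" in feedback:  # satisfied: stop revising
            break
        answer = generate_revision(instruction, answer, feedback)
    return answer, chain
```

In practice each callable would wrap an OpenAI API request, with truncation applied when an instance exceeds the 4096-token limit.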
## Training
We utilize <a href="https://github.com/lm-sys/FastChat">FastChat</a> to train the model. Given the instruction, we fine-tune the model to generate the answer and feedback chain (including the revisions).<br>
To reproduce the training procedure, here are the steps. <br>
```
pip install -r requirements.txt
```
```
torchrun --nproc_per_node=4 train/train_mem.py \
--model_name_or_path llama-7b \
--data_path outputs/feedback_gpt_3.5_turbo_merged_whole.json \
--bf16 True \
--output_dir ckpt/selfee-7b \
--num_train_epochs 3 \
--per_device_train_batch_size 16 \
--per_device_eval_batch_size 16 \
--gradient_accumulation_steps 2 \
--evaluation_strategy "no" \
--save_strategy "steps" \
--save_steps 5000 \
--save_total_limit 1 \
--learning_rate 2e-5 \
--weight_decay 0. \
--warmup_ratio 0.03 \
--lr_scheduler_type "cosine" \
--logging_steps 1 \
--fsdp "shard_grad_op auto_wrap" \
--fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer' \
--tf32 True \
--model_max_length 2048 \
--gradient_checkpointing True \
--lazy_preprocess True \
--training_objective full \
```
The hyperparameters are as follows, following Vicuna and Alpaca.
| Hyperparameter | Global Batch Size | Learning rate | Epochs | Max length | Weight decay |
| --- | ---: | ---: | ---: | ---: | ---: |
| SelFee (7B, 13B) | 128 | 2e-5 | 3 | 2048 | 0 |
## Inference
<b>Restoring checkpoint using diff</b><br>
We provide diff weights and code that can restore the SelFee model. To restore the original SelFee weights, you first need to convert Meta's original LLaMA checkpoint into Hugging Face format on your local machine. Once you are done, you can restore the checkpoint of our model using the following command
```
python inference/apply_delta.py --path_raw {path_to_llama_7b} --path_tuned /ckpt/selfee-7b --path_diff kaist-ai/selfee-7b-delta
```
<b>Autonomous Inference Mode</b><br>
Because SelFee is trained to generate iterative feedback and revisions until the response is satisfying, it produces the full feedback and revision chain in a single forward pass. The model autonomously decides when to stop generating revisions based on the feedback. If the feedback chain ends with sequences like `Revision is not needed.`, the model terminates generation on its own. <br>
For autonomous inference mode,
```
python inference/inference.py --model-path "ckpt/selfee-7b" --model-id "selfee" --question-file "evaluation/template/question.jsonl" --answer-file "evaluation/answer/selfee_7b_autonomous.jsonl"
```
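The autonomous stopping criterion amounts to checking whether the generated feedback chain ends with the termination sequence. A minimal sketch (the function name is illustrative, not from the SelFee inference code):

```python
def should_stop(generated_text: str) -> bool:
    """Autonomous mode: stop once the feedback declares no revision is required."""
    return generated_text.rstrip().endswith("Revision is not needed.")

print(should_stop("Feedback 1: The answer is complete. Revision is not needed."))
```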
<b>Revision Enforce Inference Mode</b><br>
We observed that increasing the minimum number of required revisions leads to a corresponding increase in performance. To enforce revisions, we automatically replace sequences such as `Revision is not needed.` with `Revision is needed.` during self-feedback generation. Because SelFee is trained to generate `Revision {index}:` after the sequence `Revision is needed.`, the model will continue to revise the answer.
For revision enforce inference mode, use the `max-num-revision` argument.
```
python inference/inference.py --model-path "ckpt/selfee-7b" --model-id "selfee" --question-file "evaluation/template/question.jsonl" --answer-file "evaluation/answer/selfee_7b_enforce_3_revision.jsonl" --max-num-revision 3
```
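The enforcement trick itself is a simple string rewrite applied while the number of completed revisions is still below the requested minimum. A rough sketch, with hypothetical names:

```python
def enforce_revision(feedback: str, num_revisions: int, max_num_revision: int) -> str:
    """Rewrite the terminating feedback so the model keeps revising until
    at least `max_num_revision` revisions have been produced."""
    if num_revisions < max_num_revision:
        return feedback.replace("Revision is not needed.", "Revision is needed.")
    return feedback

print(enforce_revision("Good answer. Revision is not needed.", 1, 3))
```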
## Evaluation
Following the evaluation setting of Vicuna, we evaluate on 80 diverse queries and use the GPT-4 language model as the evaluator, scoring a model's response relative to ChatGPT's response. One difference from the Vicuna evaluation is that, due to the positional bias of GPT-4, we employ a bidirectional evaluation setting: each evaluation instance is inferred twice, once in each position.<br>
We release the inference result of SelFee in the folder of `evaluation/answer` and also the scores generated by GPT-4 in the folder of `evaluation/review`. <br>
### GPT-4 Automatic Evaluation
First, you need to get your API key to get access to the GPT-4 API.
```
export OPENAI_API_KEYS={personal_key}
```
To compare the performance of a generation result (for example, located at `evaluation/answer/file_A.jsonl`) with another generation result (located at `evaluation/answer/file_B.jsonl`),
```
python evaluation/gpt4_automatic_evaluation.py -q evaluation/template/question.jsonl -a evaluation/answer/file_A.jsonl evaluation/answer/file_B.jsonl -p evaluation/template/prompt.jsonl -r evaluation/template/reviewer.jsonl -o evaluation/review/A_vs_B.jsonl
```
To mitigate the positional bias of the GPT-4 model, we apply a bidirectional evaluation setting. Therefore, automatic evaluation with the opposite position is also needed.
```
python evaluation/gpt4_automatic_evaluation.py -q evaluation/template/question.jsonl -a evaluation/answer/file_B.jsonl evaluation/answer/file_A.jsonl -p evaluation/template/prompt.jsonl -r evaluation/template/reviewer.jsonl -o evaluation/review/B_vs_A.jsonl
```
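Aggregating the two review files then comes down to averaging each model's scores across both orderings, which dampens the positional bias. The tuple layout below is an assumption for illustration, not the actual review schema:

```python
def aggregate(review_ab, review_ba):
    """Average scores over both orderings.
    review_ab: (score_A, score_B) with model A in the first position.
    review_ba: (score_B, score_A) with model B in the first position."""
    score_a = (review_ab[0] + review_ba[1]) / 2
    score_b = (review_ab[1] + review_ba[0]) / 2
    return score_a, score_b

print(aggregate((8.0, 7.0), (7.5, 8.5)))  # -> (8.25, 7.25)
```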
## Limitations
Similar to other LLaMA-finetuned models, SelFee also makes some mistakes, especially on math, reasoning, factuality, and coding tasks. Although our performance outperforms ChatGPT in the Vicuna setting, the evaluation setting has some limitations in terms of comprehensiveness (limited to 80 queries), consistency, and reliability. Therefore, further research toward a better evaluation setting is needed. Please take these claims with a grain of salt.
## Online demo
Check out the <a href="https://kaistai.github.io/SelFee/demo">demo</a>!
#### How to launch the demo yourself
To serve the web demo yourself, run the following commands:
1. Run the controller
```
python3 -m serve.controller
```
2. Run the model worker
```
python3 -m serve.model_worker --model-path $MODEL_PATH --port 21002 --worker-address=http://localhost:21002 --model-name=SelFee-13b
```
3. Run the web server
```
python3 -m serve.gradio_web_server --share
```
You can find the serving code [here](serve).
### Team members
<a href="https://seonghyeonye.github.io/)">Seonghyeon Ye*</a>, <a href="https://github.com/dreamgonfly">Yongrae Jo*</a>, <a href="https://github.com/doeyoungkim">Doyoung Kim*</a>, <a href="https://scholar.google.com/citations?user=xKrSnDoAAAAJ&hl">Sungdong Kim</a>, <a href="https://github.com/hbin0701">Hyeonbin Hwang</a>, and <a href="https://seominjoon.github.io/">Minjoon Seo</a>. <br/>
(* denotes equal contribution)
### Release
We have released the SelFee-7B and SelFee-13B model diff weights, which can be found with instructions here. Moreover, the training instances used to train SelFee are released on huggingface.
### License
The research preview online demo is only for non-commercial use and is subject to various licenses and terms of use, including the LLaMA model <a href="https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md">License</a>, OpenAI's <a href="https://openai.com/policies/terms-of-use">Terms of Use</a> for the generated data, and ShareGPT's <a href="https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb">Privacy Practices</a>. If you suspect any violations, please reach out to us.
### Citation
Please cite if you use the data or code in this repo.
```
@misc{selfee2023,
author = {Ye, Seonghyeon and Jo, Yongrae and Kim, Doyoung and Kim, Sungdong and Hwang, Hyeonbin and Seo, Minjoon},
title = {SelFee: Iterative Self-Revising LLM Empowered by Self-Feedback Generation},
url = {https://kaistai.github.io/SelFee/},
month = {May},
year = {2023},
howpublished = {Blog post}
}
```
|
santhakumar/whisper-tamil
|
santhakumar
| 2023-06-26T14:07:08Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"ta",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-26T13:39:13Z |
---
language:
- ta
license: apache-2.0
tags:
- whisper-event
metrics:
- wer
model-index:
- name: Whisper Tamil Large-v2 - Vasista Sai Lodagala
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: google/fleurs
type: google/fleurs
config: ta_in
split: test
metrics:
- type: wer
value: 7.5
name: WER
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: mozilla-foundation/common_voice_11_0
type: mozilla-foundation/common_voice_11_0
config: ta
split: test
metrics:
- type: wer
value: 6.61
name: WER
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Tamil Large-v2
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the Tamil data available from multiple publicly available ASR corpuses.
It has been fine-tuned as a part of the Whisper fine-tuning sprint.
**NOTE:** The code used to train this model is available for re-use in the [whisper-finetune](https://github.com/vasistalodagala/whisper-finetune) repository.
## Usage
In order to evaluate this model on an entire dataset, the evaluation codes available in the [whisper-finetune](https://github.com/vasistalodagala/whisper-finetune) repository can be used.
The same repository also provides the scripts for faster inference using whisper-jax.
In order to infer a single audio file using this model, the following code snippet can be used:
```python
>>> import torch
>>> from transformers import pipeline
>>> # path to the audio file to be transcribed
>>> audio = "/path/to/audio.format"
>>> device = "cuda:0" if torch.cuda.is_available() else "cpu"
>>> transcribe = pipeline(task="automatic-speech-recognition", model="vasista22/whisper-tamil-large-v2", chunk_length_s=30, device=device)
>>> transcribe.model.config.forced_decoder_ids = transcribe.tokenizer.get_decoder_prompt_ids(language="ta", task="transcribe")
>>> print('Transcription: ', transcribe(audio)["text"])
```
For faster inference of whisper models, the [whisper-jax](https://github.com/sanchit-gandhi/whisper-jax) library can be used. Please follow the necessary installation steps as mentioned [here](https://github.com/vasistalodagala/whisper-finetune#faster-evaluation-with-whisper-jax), before using the following code snippet:
```python
>>> import jax.numpy as jnp
>>> from whisper_jax import FlaxWhisperForConditionalGeneration, FlaxWhisperPipline
>>> # path to the audio file to be transcribed
>>> audio = "/path/to/audio.format"
>>> transcribe = FlaxWhisperPipline("vasista22/whisper-tamil-large-v2", batch_size=16)
>>> transcribe.model.config.forced_decoder_ids = transcribe.tokenizer.get_decoder_prompt_ids(language="ta", task="transcribe")
>>> print('Transcription: ', transcribe(audio)["text"])
```
## Training and evaluation data
Training Data:
- [IISc-MILE Tamil ASR Corpus](https://www.openslr.org/127/)
- [ULCA ASR Corpus](https://github.com/Open-Speech-EkStep/ULCA-asr-dataset-corpus#tamil-labelled--total-duration-is-116024-hours)
- [Shrutilipi ASR Corpus](https://ai4bharat.org/shrutilipi)
- [Microsoft Speech Corpus (Indian Languages)](https://msropendata.com/datasets/7230b4b1-912d-400e-be58-f84e0512985e)
- [Google/Fleurs Train+Dev set](https://huggingface.co/datasets/google/fleurs)
- Babel ASR Corpus
Evaluation Data:
- [Microsoft Speech Corpus (Indian Languages) Test Set](https://msropendata.com/datasets/7230b4b1-912d-400e-be58-f84e0512985e)
- [Google/Fleurs Test Set](https://huggingface.co/datasets/google/fleurs)
- [IISc-MILE Test Set](https://www.openslr.org/127/)
- Babel Test Set
## Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.75e-05
- train_batch_size: 8
- eval_batch_size: 24
- seed: 22
- optimizer: adamw_bnb_8bit
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 22000
- training_steps: 52500 (Initially set to 76000 steps)
- mixed_precision_training: True
## Acknowledgement
This work was done at [Speech Lab, IIT Madras](https://asr.iitm.ac.in/).
The compute resources for this work were funded by "Bhashini: National Language translation Mission" project of the Ministry of Electronics and Information Technology (MeitY), Government of India.
|
abhinavsharma101/Taxi-v3
|
abhinavsharma101
| 2023-06-26T13:53:13Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-26T13:53:12Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="abhinavsharma101/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
plediskey/DreamShaper
|
plediskey
| 2023-06-26T13:51:57Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-26T13:50:43Z |
---
license: creativeml-openrail-m
---
|
abhinavsharma101/ppo-Huggy
|
abhinavsharma101
| 2023-06-26T13:44:53Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-26T13:44:47Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: abhinavsharma101/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
LordSomen/q-FrozenLake-v1-4x4-noSlippery
|
LordSomen
| 2023-06-26T13:32:38Z | 0 | 0 | null |
[
"FrozenLake-v1-8x8",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-26T11:42:28Z |
---
tags:
- FrozenLake-v1-8x8
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-8x8
type: FrozenLake-v1-8x8
metrics:
- type: mean_reward
value: 0.32 +/- 0.47
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="LordSomen/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
rifkat/uz_kr_lang-detection
|
rifkat
| 2023-06-26T13:19:19Z | 102 | 0 |
transformers
|
[
"transformers",
"pytorch",
"text-classification",
"uz",
"en",
"ru",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-26T11:06:29Z |
---
pipeline_tag: text-classification
language:
- uz
- en
- ru
license: apache-2.0
metrics:
- accuracy
---
<p><b> Til identifikatori.</b>
Tabiiy tilni qayta ishlash (NLP) sohasida tilni aniqlash vazifasi ma'lum matn yoki hujjat tilini aniqlashni o'z ichiga oladi,
ammo ko'plab tillarni aniqlash qobiliyati qiyinlashadi. Ushbu model matndan 21 tilni tanib oladi, xususan, oʻzbek tilida
qoʻllaniladigan lotin-kirill yozuviga eʼtibor qaratadi. Bu boradagi tadqiqotlar kamligini hisobga olib, mos transformator
arxitekturasiga asoslangan oʻzbek lotin-kirill yozuvini aniqlik darajasi yuqori boʻlgan tilni aniqlash modelini taqdim etamiz.
Modelimiz biz yaratgan o‘zbek tili korpusidan foydalangan holda baholandi, bu ham kelajakda o‘zbek tilini aniqlash vazifalarini
baholash uchun qimmatli manba bo‘lib xizmat qilishi mumkin. Ushbu model 21 ta tilni, jumladan, ikkita alifboda (lotin va kirill)
ifodalangan o‘zbek tilini qamrab oladi.
<p><b> Language identifier. </b>
The task of language identification in Natural Language Processing (NLP) involves identifying the language of a particular text or document,
but the ability to identify multiple languages can be challenging. This model is capable of recognizing 21 languages from text, specifically
focusing on the Latin-Cyrillic script used in Uzbek. Considering the scarcity of research in this area, we present a language identification
model with a high degree of accuracy for the Uzbek Latin-Cyrillic script, based on the relevant transformer architecture. Our model has been
evaluated using the Uzbek corpus that we created, which can potentially serve as a valuable resource for evaluating language identification
tasks for Uzbek in the future. This model encompasses 21 languages, including Uzbek expressed in two scripts (Latin and Cyrillic).
|
chencjiajy/pyramids-ppo
|
chencjiajy
| 2023-06-26T13:17:47Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-06-26T13:17:45Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: chencjiajy/pyramids-ppo
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Phips/ppo-LunarLander-v2
|
Phips
| 2023-06-26T13:04:29Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-26T10:30:44Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 269.02 +/- 71.95
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch of loading the checkpoint (the filename below is an assumption; check the repo's file list if it differs):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# NOTE: the checkpoint filename is an assumption, not confirmed by this card
checkpoint = load_from_hub("Phips/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
renyulin/gptneo125m-detoxify-ppo-0.05
|
renyulin
| 2023-06-26T12:54:19Z | 56 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"trl",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2023-06-26T12:53:50Z |
---
license: apache-2.0
tags:
- trl
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/lvwerra/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="renyulin//tmp/tmp5awaabrc/renyulin/gptneo125m-detoxify-ppo")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("renyulin//tmp/tmp5awaabrc/renyulin/gptneo125m-detoxify-ppo")
model = AutoModelForCausalLMWithValueHead.from_pretrained("renyulin//tmp/tmp5awaabrc/renyulin/gptneo125m-detoxify-ppo")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
nevernotsean/IllustratedPaperMini
|
nevernotsean
| 2023-06-26T12:54:11Z | 34 | 2 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-26T04:04:10Z |
---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
---
Merge of the spybgsToolkitFor_v50NoiseOffset checkpoint with the COOLKIDS_MERGE_V2.5 and papermini_v1-3 LoRAs.
Prompt: brown arm made of fur, claws, white-background, tk-char, prop
Negative prompt: shadow, texture, (worst quality, low quality:1.4)
|
Frixi/Cerebella_SkullgirlsEncore2
|
Frixi
| 2023-06-26T12:51:57Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2023-06-26T12:49:25Z |
---
license: openrail
---
Cerebella from the Skullgirls: Encore 2 video game.
Dataset length: 5 minutes.
Epochs: 100 on RVC V2.
|
nikolajking/Low_resource_translator_jp_vt
|
nikolajking
| 2023-06-26T12:49:40Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"m2m_100",
"text2text-generation",
"translation",
"ja",
"vi",
"dataset:tatoeba",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-06-26T12:27:42Z |
---
datasets:
- tatoeba
pipeline_tag: translation
language:
- ja
- vi
metrics:
- bleu
---
|
biglam/autotrain-beyond-the-books
|
biglam
| 2023-06-26T12:27:32Z | 116 | 0 |
transformers
|
[
"transformers",
"pytorch",
"onnx",
"safetensors",
"bert",
"text-classification",
"autotrain",
"lam",
"en",
"dataset:biglam/on_the_books",
"license:mit",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-07T09:59:47Z |
---
tags:
- autotrain
- text-classification
- lam
language:
- en
widget:
- text: >-
Neither this act nor any other act relating to said Cherokee Indians of
Robeson County shall be construed so as to impose on said Indians any
powers, privileges, rights or immunities, or
- text: >-
That Section one hundred and twenty-two eightythree of the General Statutes
of North Carolina is hereby amended by striking out the word insane in the
catch line and in lines two, four, nine and fifteen and inserting in lieu
thereof the words mentally disordered.
datasets:
- biglam/on_the_books
co2_eq_emissions:
emissions: 0.2641096478393395
license: mit
library_name: transformers
metrics:
- accuracy
- f1
- recall
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 64771135885
- CO2 Emissions (in grams): 0.2641
## Validation Metrics
- Loss: 0.057
- Accuracy: 0.986
- Precision: 0.988
- Recall: 0.992
- AUC: 0.998
- F1: 0.990
## Usage
This model is trained on a dataset of historical documents related to Jim Crow laws in the United States.
The model was developed by drawing on the expertise of scholars and analyzing legal texts from various states, with the goal of identifying similarities between different states' Jim Crow laws.
As such, this model may be useful for researchers or policymakers interested in understanding the history of racial discrimination in the US legal system.
The easiest way to use this model locally is via the [Transformers](https://huggingface.co/docs/transformers/index) library [pipelines for inference](https://huggingface.co/docs/transformers/pipeline_tutorial).
Once you have [installed transformers](https://huggingface.co/docs/transformers/installation), you can run the following code.
This will download and cache the model locally and allow you to make predictions on text input.
```
from transformers import pipeline
classifier = pipeline('text-classification', "biglam/autotrain-beyond-the-books")
classifier(text)
```
This will return predictions in the following format:
```
[{'label': 'no_jim_crow', 'score': 0.9718555212020874}]
```
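The returned list can contain one entry per candidate label; a minimal, self-contained sketch of picking the highest-scoring label from output in that format (the sample prediction is copied from above):

```python
# The pipeline returns a list of {"label": ..., "score": ...} dicts; take the top one.
predictions = [{"label": "no_jim_crow", "score": 0.9718555212020874}]
top = max(predictions, key=lambda p: p["score"])
print(top["label"])  # -> no_jim_crow
```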
|
heka-ai/gpl-test-20k
|
heka-ai
| 2023-06-26T12:22:37Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-06-26T12:22:34Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# heka-ai/gpl-test-20k
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('heka-ai/gpl-test-20k')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('heka-ai/gpl-test-20k')
model = AutoModel.from_pretrained('heka-ai/gpl-test-20k')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
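Embeddings produced by either path can be compared with cosine similarity. A minimal sketch of that comparison, using toy 2-d vectors in place of the 768-dimensional sentence embeddings:

```python
import numpy as np

def cosine_similarity(a, b):
    # standard cosine similarity between two 1-d vectors
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# toy vectors standing in for real sentence embeddings
print(round(cosine_similarity([1.0, 0.0], [1.0, 1.0]), 4))  # -> 0.7071
```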
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=heka-ai/gpl-test-20k)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 2000 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 20000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
prathith/Taxi_Env_QLearning
|
prathith
| 2023-06-26T12:18:34Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-26T12:18:33Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi_Env_QLearning
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.46 +/- 2.78
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="prathith/Taxi_Env_QLearning", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
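At inference time the agent simply acts greedily on the loaded Q-table. A minimal sketch of that step with a toy table whose shape matches Taxi-v3 (500 states, 6 actions):

```python
import numpy as np

# Toy Q-table standing in for the loaded model["qtable"]
q_table = np.zeros((500, 6))
q_table[0, 3] = 1.0  # pretend action 3 has the highest value in state 0

def greedy_action(q_table, state):
    # exploit: pick the action with the highest Q-value for this state
    return int(np.argmax(q_table[state]))

print(greedy_action(q_table, 0))  # -> 3
```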
|
kalyaniAI/autotrain-autotrain-69874137966
|
kalyaniAI
| 2023-06-26T12:08:29Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"autotrain",
"summarization",
"en",
"dataset:kalyaniAI/autotrain-data-autotrain",
"co2_eq_emissions",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-06-26T12:07:46Z |
---
tags:
- autotrain
- summarization
language:
- en
widget:
- text: "I love AutoTrain"
datasets:
- kalyaniAI/autotrain-data-autotrain
co2_eq_emissions:
emissions: 0.025148621653341533
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 69874137966
- CO2 Emissions (in grams): 0.0251
## Validation Metrics
- Loss: 8.770
- Rouge1: 0.000
- Rouge2: 0.000
- RougeL: 0.000
- RougeLsum: 0.000
- Gen Len: 16.333
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/kalyaniAI/autotrain-autotrain-69874137966
```
|
mikeyang01/chinese-LLaMA-Alpaca-7B-quantized
|
mikeyang01
| 2023-06-26T12:08:23Z | 0 | 6 | null |
[
"region:us"
] | null | 2023-05-06T12:44:10Z |
The model is converted according to the document below.<br>
https://github.com/ymcui/Chinese-LLaMA-Alpaca
Online model merging and conversion tutorial (in Chinese):<br>
https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki/在线模型合并与转换 <br>
https://colab.research.google.com/drive/1FnFkyKhrnS7s-2lDDeous-AutdI_SkAd?usp=sharing
**Due to Colab hardware limitations, many people may not be able to convert the model successfully,<br>
so I converted it and uploaded it here.**
|
ycros/WizardLM-33B-V1.0-Uncensored-SuperHOT-8k-GGML
|
ycros
| 2023-06-26T12:05:53Z | 0 | 6 | null |
[
"region:us"
] | null | 2023-06-26T07:43:48Z |
Quantized versions of https://huggingface.co/Panchovix/WizardLM-33B-V1.0-Uncensored-SuperHOT-8k
|
JeremiahZ/TinyBERT_4L_zh_backup
|
JeremiahZ
| 2023-06-26T12:03:53Z | 153 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"feature-extraction",
"zh",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-09-24T12:59:17Z |
---
language:
- zh
---
# Tiny BERT 4L zh
This is a backup repo of [the official repo](https://huggingface.co/huawei-noah/TinyBERT_4L_zh).
Now the model can be loaded with
```python
from transformers import AutoModel
model = AutoModel.from_pretrained("JeremiahZ/TinyBERT_4L_zh_backup")
```
More information on this model can be found [here](https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/TinyBERT).
|
miyao-haruto/ppo-LunarLander-v2
|
miyao-haruto
| 2023-06-26T11:55:26Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-26T11:55:06Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 242.05 +/- 19.55
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch of loading the checkpoint (the filename below is an assumption; check the repo's file list if it differs):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# NOTE: the checkpoint filename is an assumption, not confirmed by this card
checkpoint = load_from_hub("miyao-haruto/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
GregoRio123/ssmp
|
GregoRio123
| 2023-06-26T11:49:20Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-26T11:27:21Z |
---
license: creativeml-openrail-m
---
|
Shubham09/falcon_pdfmedical
|
Shubham09
| 2023-06-26T11:44:44Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-26T11:37:32Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0.dev0
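To reload a base model with the same quantization settings at inference time, the values above map onto `BitsAndBytesConfig` from `transformers`. A sketch of the config fragment only (the base-model id is not stated in this card):

```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the training-time bitsandbytes settings listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```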
|
hypothetical/test_model
|
hypothetical
| 2023-06-26T11:42:45Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-06-26T11:42:45Z |
---
license: bigscience-openrail-m
---
|
SumanTenzai/Dummy
|
SumanTenzai
| 2023-06-26T11:29:52Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"camembert",
"fill-mask",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-26T08:43:55Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: Dummy
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Dummy
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
Ravencer/rut5_base_sum_gazeta-finetuned-mlsum
|
Ravencer
| 2023-06-26T11:07:17Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:mlsum",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-05-21T12:10:39Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- mlsum
model-index:
- name: rut5_base_sum_gazeta-finetuned-mlsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rut5_base_sum_gazeta-finetuned-mlsum
This model is a fine-tuned version of [IlyaGusev/rut5_base_sum_gazeta](https://huggingface.co/IlyaGusev/rut5_base_sum_gazeta) on the mlsum dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 13 | 3.4842 | 10.3333 | 0.0 | 10.3333 | 10.3333 | 78.7 |
### Framework versions
- Transformers 4.29.1
- Pytorch 2.0.1+cpu
- Datasets 2.12.0
- Tokenizers 0.13.3
|
iammartian0/detr-resnet-50_finetuned_cppe5
|
iammartian0
| 2023-06-26T11:00:28Z | 186 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"detr",
"object-detection",
"generated_from_trainer",
"dataset:forklift-object-detection",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2023-06-26T10:51:17Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- forklift-object-detection
model-index:
- name: detr-resnet-50_finetuned_cppe5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50_finetuned_cppe5
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the forklift-object-detection dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
christinakyp/whisper-tiny-train1
|
christinakyp
| 2023-06-26T10:50:11Z | 78 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"asr",
"generated_from_trainer",
"en",
"dataset:christinakyp/dsing1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-26T09:22:12Z |
---
language:
- en
license: apache-2.0
tags:
- asr
- generated_from_trainer
datasets:
- christinakyp/dsing1
model-index:
- name: Whisper Tiny Sing - CK
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Tiny Sing - CK
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the DSing1 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
davanstrien/CamemBERT-MedNERF
|
davanstrien
| 2023-06-26T10:42:55Z | 108 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"camembert",
"token-classification",
"autotrain",
"medical",
"fr",
"dataset:Posos/MedNERF",
"license:mit",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-26T10:07:00Z |
---
tags:
- autotrain
- token-classification
- medical
language:
- fr
widget:
- text: Prendré 2 compris par jour, pendant 1 mois.
- text: DOLIPRANETABS 1000 MG CPR PELL PLQ/8 (Paracétamol 1.000mg comprimé)
datasets:
- Posos/MedNERF
co2_eq_emissions:
emissions: 0.11647938304211661
license: mit
metrics:
- f1
- accuracy
- precision
- recall
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 69856137957
- CO2 Emissions (in grams): 0.1165
## Validation Metrics
- Loss: 1.510
- Accuracy: 0.706
- Precision: 0.648
- Recall: 0.679
- F1: 0.663
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/davanstrien/autotrain-french-ner-blank-model-69856137957
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("davanstrien/autotrain-french-ner-blank-model-69856137957", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("davanstrien/autotrain-french-ner-blank-model-69856137957", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
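`outputs.logits` holds one score per token per label; the predicted label id for each token is the argmax over the label dimension. A toy sketch of that step with made-up numbers (2 tokens, 3 labels):

```python
# Toy per-token logits standing in for outputs.logits
logits = [[0.1, 2.0, -1.0], [1.5, 0.2, 0.3]]
# argmax over the label dimension gives one predicted label id per token
pred_ids = [row.index(max(row)) for row in logits]
print(pred_ids)  # -> [1, 0]
```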
|
Rami/dolly_prompt_generator
|
Rami
| 2023-06-26T10:38:18Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-09T21:39:22Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0.dev0
|
jondurbin/airoboros-mpt-30b-gpt4-1p4-six-epochs
|
jondurbin
| 2023-06-26T10:12:19Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mpt",
"text-generation",
"custom_code",
"dataset:jondurbin/airoboros-gpt4-1.4",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-26T08:37:14Z |
---
license: other
datasets:
- jondurbin/airoboros-gpt4-1.4
---
## Overview
This is a test of qlora fine-tuning of the mpt-30b model, __with 6 epochs__.
qlora compatible model: https://huggingface.co/jondurbin/mpt-30b-qlora-compatible
My fork of qlora with mpt-30b support: https://github.com/jondurbin/qlora
Differences in the qlora scripts:
- requires adding `--mpt True` for mpt-based models
- uses `--num_train_epochs` instead of `--max_steps`
- uses airoboros prompt format (mostly 1:1 with vicuna) rather than alpaca, and expects an input file in JSONL format with "instruction" and "response"
__I think there's a bug in gradient accumulation, so if you try this, maybe set gradient accumulation steps to 1__
See the mpt-30b-qlora-compatible model card for training details.
*This is not as high quality as the llama-33b versions unfortunately, but I don't have a great answer as to why. Perhaps there are fewer forward layers that can be tuned?*
### License and usage
This is a real gray area, here's why:
- the dataset was generated with gpt-4, via https://github.com/jondurbin/airoboros
- the ToS for openai API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI
- what does *compete* actually mean here?
- a 30b parameter model isn't anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise restrictively licensed material in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2
I am purposely not placing a license on here because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly. Your best bet is probably to avoid using this commercially, especially since it didn't perform quite as well as expected using qlora.
|
madiltalay/layoutlmv2-base-uncased_finetuned_docvqa
|
madiltalay
| 2023-06-26T10:11:26Z | 20 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"layoutlmv2",
"document-question-answering",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] |
document-question-answering
| 2023-06-22T11:36:16Z |
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: layoutlmv2-base-uncased_finetuned_docvqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv2-base-uncased_finetuned_docvqa
This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6030
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.326 | 0.22 | 50 | 4.4949 |
| 4.292 | 0.44 | 100 | 3.9510 |
| 3.9419 | 0.66 | 150 | 3.9100 |
| 3.6895 | 0.88 | 200 | 3.5035 |
| 3.4052 | 1.11 | 250 | 3.4030 |
| 3.1405 | 1.33 | 300 | 3.2100 |
| 2.8966 | 1.55 | 350 | 2.9803 |
| 2.7874 | 1.77 | 400 | 2.7811 |
| 2.5385 | 1.99 | 450 | 2.4748 |
| 2.1532 | 2.21 | 500 | 2.5843 |
| 1.994 | 2.43 | 550 | 2.5459 |
| 1.8322 | 2.65 | 600 | 2.2316 |
| 1.7005 | 2.88 | 650 | 2.1888 |
| 1.4758 | 3.1 | 700 | 2.4578 |
| 1.3543 | 3.32 | 750 | 2.3368 |
| 1.1939 | 3.54 | 800 | 2.9737 |
| 1.294 | 3.76 | 850 | 2.4907 |
| 1.4519 | 3.98 | 900 | 1.9276 |
| 1.0517 | 4.2 | 950 | 2.9981 |
| 0.8171 | 4.42 | 1000 | 2.5618 |
| 1.0456 | 4.65 | 1050 | 2.3139 |
| 0.9222 | 4.87 | 1100 | 2.4243 |
| 0.758 | 5.09 | 1150 | 2.8167 |
| 0.7203 | 5.31 | 1200 | 2.9342 |
| 0.6748 | 5.53 | 1250 | 2.6396 |
| 0.6821 | 5.75 | 1300 | 2.5629 |
| 0.5898 | 5.97 | 1350 | 3.0276 |
| 0.3135 | 6.19 | 1400 | 3.2611 |
| 0.4407 | 6.42 | 1450 | 3.1793 |
| 0.5303 | 6.64 | 1500 | 3.0511 |
| 0.5294 | 6.86 | 1550 | 3.1106 |
| 0.3149 | 7.08 | 1600 | 3.2933 |
| 0.199 | 7.3 | 1650 | 3.4207 |
| 0.164 | 7.52 | 1700 | 3.4379 |
| 0.5258 | 7.74 | 1750 | 3.1339 |
| 0.336 | 7.96 | 1800 | 3.2394 |
| 0.3294 | 8.19 | 1850 | 3.0956 |
| 0.1587 | 8.41 | 1900 | 3.4282 |
| 0.2375 | 8.63 | 1950 | 3.3718 |
| 0.117 | 8.85 | 2000 | 3.5646 |
| 0.2873 | 9.07 | 2050 | 3.5213 |
| 0.2206 | 9.29 | 2100 | 3.5387 |
| 0.2503 | 9.51 | 2150 | 3.5683 |
| 0.0763 | 9.73 | 2200 | 3.6119 |
| 0.1344 | 9.96 | 2250 | 3.6030 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Jade1211/textual_inversion_firework
|
Jade1211
| 2023-06-26T10:05:48Z | 9 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-26T07:06:25Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - Jade1211/textual_inversion_firework
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
|
Qasim30/taxi-v3-hugging
|
Qasim30
| 2023-06-26T09:52:44Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-26T09:52:42Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-v3-hugging
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="Qasim30/taxi-v3-hugging", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
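Once loaded, the agent simply acts greedily with respect to its Q-table. A minimal sketch of that policy — the tiny table below is a made-up stand-in for the real Taxi-v3 table:

```python
import numpy as np

def greedy_action(q_table, state):
    """Pick the action with the highest Q-value for the given state."""
    return int(np.argmax(q_table[state]))

# Toy 3-state, 2-action Q-table standing in for the real Taxi-v3 table
q_table = np.array([[0.1, 0.9],
                    [0.7, 0.2],
                    [0.0, 0.0]])
print(greedy_action(q_table, 0))  # 1 — action 1 has the highest value in state 0
```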
|
ahessamb/bertopic-umap15-hbd15-topn15
|
ahessamb
| 2023-06-26T09:46:21Z | 3 | 0 |
bertopic
|
[
"bertopic",
"text-classification",
"region:us"
] |
text-classification
| 2023-06-26T09:46:14Z |
---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
---
# bertopic-umap15-hbd15-topn15
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("ahessamb/bertopic-umap15-hbd15-topn15")
topic_model.get_topic_info()
```
## Topic overview
* Number of topics: 105
* Number of training documents: 14320
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | market - price - nft - said - cryptocurrency | 15 | -1_market_price_nft_said |
| 0 | korea - funds - attack - hackers - fraud | 6725 | 0_korea_funds_attack_hackers |
| 1 | usd - 500 - near - bitcoin - consolidating | 706 | 1_usd_500_near_bitcoin |
| 2 | sized - digest - news - blockchain - radar | 417 | 2_sized_digest_news_blockchain |
| 3 | merge - ethereum - proof - fork - beacon | 236 | 3_merge_ethereum_proof_fork |
| 4 | rate - cpi - hikes - fomc - bitcoin | 209 | 4_rate_cpi_hikes_fomc |
| 5 | luna - ustc - entropy - proposal - terraform | 207 | 5_luna_ustc_entropy_proposal |
| 6 | brands - meta - worlds - immersive - decentraland | 206 | 6_brands_meta_worlds_immersive |
| 7 | russia - sanctions - crypto - ruble - settlements | 187 | 7_russia_sanctions_crypto_ruble |
| 8 | gensler - securities - coinbase - industry - regulation | 178 | 8_gensler_securities_coinbase_industry |
| 9 | blockchain - web3 - gamers - p2e - industry | 174 | 9_blockchain_web3_gamers_p2e |
| 10 | miners - carbon - power - bitcoin - report | 157 | 10_miners_carbon_power_bitcoin |
| 11 | funding - round - ventures - capital - gamestop | 151 | 11_funding_round_ventures_capital |
| 12 | xrp - ripple - price - level - resistance | 146 | 12_xrp_ripple_price_level |
| 13 | etf - blackrock - grayscale - bitcoin - futures | 145 | 13_etf_blackrock_grayscale_bitcoin |
| 14 | web3 - disco - mcmullen - identity - platforms | 144 | 14_web3_disco_mcmullen_identity |
| 15 | protocols - decentralized - newsletter - cefi - lending | 141 | 15_protocols_decentralized_newsletter_cefi |
| 16 | inu - lucie - meme - tokens - ecosystem | 139 | 16_inu_lucie_meme_tokens |
| 17 | ftx - sam - bankman - bankruptcy - ceo | 132 | 17_ftx_sam_bankman_bankruptcy |
| 18 | tether - usdt - documents - coindesk - stablecoins | 123 | 18_tether_usdt_documents_coindesk |
| 19 | el - bukele - nayib - bitcoin - x93 | 120 | 19_el_bukele_nayib_bitcoin |
| 20 | dogecoin - musk - meme - twitter - level | 114 | 20_dogecoin_musk_meme_twitter |
| 21 | 26 - resistance - near - btc - bulls | 106 | 21_26_resistance_near_btc |
| 22 | nft - opensea - doppel - marketplaces - rug | 101 | 22_nft_opensea_doppel_marketplaces |
| 23 | cfds - traders - assets - cryptocurrency - adoption | 95 | 23_cfds_traders_assets_cryptocurrency |
| 24 | difficulty - hashrate - bitcoin - network - height | 90 | 24_difficulty_hashrate_bitcoin_network |
| 25 | ubi - cointelegraph - simonin - bitcoin - income | 88 | 25_ubi_cointelegraph_simonin_bitcoin |
| 26 | coinbase - bitkey - india - ceo - fees | 85 | 26_coinbase_bitkey_india_ceo |
| 27 | donated - russia - invasion - transformation - donors | 83 | 27_donated_russia_invasion_transformation |
| 28 | celsius - cel - withdrawals - company - mashinsky | 81 | 28_celsius_cel_withdrawals_company |
| 29 | nfts - collections - million - floor - cryptopunk | 81 | 29_nfts_collections_million_floor |
| 30 | blockchain - bvm - mvc - maestro - databases | 78 | 30_blockchain_bvm_mvc_maestro |
| 31 | crypto - merchants - mastercard - feature - cashapp | 78 | 31_crypto_merchants_mastercard_feature |
| 32 | ada - cardano - bearish - satoshis - market | 76 | 32_ada_cardano_bearish_satoshis |
| 33 | nft - sartoshi - artists - snoop - community | 75 | 33_nft_sartoshi_artists_snoop |
| 34 | solana - bearish - outages - fibonacci - resistance | 72 | 34_solana_bearish_outages_fibonacci |
| 35 | hinman - ripple - speech - emails - xrp | 71 | 35_hinman_ripple_speech_emails |
| 36 | oecd - taxation - framework - india - electronic | 70 | 36_oecd_taxation_framework_india |
| 37 | terraform - montenegro - korea - x93 - milojko | 69 | 37_terraform_montenegro_korea_x93 |
| 38 | order - securities - freeze - restraining - cyprus | 68 | 38_order_securities_freeze_restraining |
| 39 | manchester - sponsorship - bcci - com - fans | 68 | 39_manchester_sponsorship_bcci_com |
| 40 | surveyed - millennials - managers - crypto - report | 67 | 40_surveyed_millennials_managers_crypto |
| 41 | whales - eth - market - transactions - usdt | 66 | 41_whales_eth_market_transactions |
| 42 | binance - kazakhstan - changpeng - expansion - 500m | 61 | 42_binance_kazakhstan_changpeng_expansion |
| 43 | twitter - musk - metatime - jack - yaccarino | 59 | 43_twitter_musk_metatime_jack |
| 44 | rsi - price - line - altcoin - bullish | 59 | 44_rsi_price_line_altcoin |
| 45 | china - huobi - hkma - regulatory - companies | 57 | 45_china_huobi_hkma_regulatory |
| 46 | token - leo - surged - tlos - graph | 57 | 46_token_leo_surged_tlos |
| 47 | cbdcs - governor - banks - mit - project | 56 | 47_cbdcs_governor_banks_mit |
| 48 | daos - chorus - lieberman - decentralized - organizations | 51 | 48_daos_chorus_lieberman_decentralized |
| 49 | fungible - nonfungible - tokens - nft - 2021 | 51 | 49_fungible_nonfungible_tokens_nft |
| 50 | altcoins - levels - overhead - support - bounce | 50 | 50_altcoins_levels_overhead_support |
| 51 | yuan - digital - tax - cbdc - wallets | 43 | 51_yuan_digital_tax_cbdc |
| 52 | depot - company - invest - banking - america | 42 | 52_depot_company_invest_banking |
| 53 | markets - advice - bull - hodlers - nasdaily | 42 | 53_markets_advice_bull_hodlers |
| 54 | eth - level - breakout - tradingview - analysts | 38 | 54_eth_level_breakout_tradingview |
| 55 | nethereum - usd - struggling - resistance - performers | 37 | 55_nethereum_usd_struggling_resistance |
| 56 | ecoterra - trending - swords - presale - neo | 36 | 56_ecoterra_trending_swords_presale |
| 57 | securities - market - binance - coinbase - week | 34 | 57_securities_market_binance_coinbase |
| 58 | staking - eigenlayer - sip - ethereum - tokens | 33 | 58_staking_eigenlayer_sip_ethereum |
| 59 | founder - ethereum - forgotten - values - twitter | 33 | 59_founder_ethereum_forgotten_values |
| 60 | bnb - bauer - upgrade - ecosystem - network | 32 | 60_bnb_bauer_upgrade_ecosystem |
| 61 | price - rsi - bullish - chart - resistance | 32 | 61_price_rsi_bullish_chart |
| 62 | expiry - week - billion - derivatives - bet | 32 | 62_expiry_week_billion_derivatives |
| 63 | vasil - fork - mainnet - newest - scalability | 31 | 63_vasil_fork_mainnet_newest |
| 64 | microstrategy - saylor - btc - rumor - billion | 31 | 64_microstrategy_saylor_btc_rumor |
| 65 | metamask - browser - wallets - features - allows | 31 | 65_metamask_browser_wallets_features |
| 66 | uae - east - chainalysis - singapore - emerging | 31 | 66_uae_east_chainalysis_singapore |
| 67 | outflows - etps - products - week - funds | 31 | 67_outflows_etps_products_week |
| 68 | polygon - zcash - kakarot - starknet - protocol | 29 | 68_polygon_zcash_kakarot_starknet |
| 69 | japanese - jvcea - stablecoin - x93 - fatf | 29 | 69_japanese_jvcea_stablecoin_x93 |
| 70 | asic - miner - gpu - mi300x - ks3 | 28 | 70_asic_miner_gpu_mi300x |
| 71 | arrows - voyager - dcg - genesis - bankruptcy | 28 | 71_arrows_voyager_dcg_genesis |
| 72 | axie - infinity - program - ronin - upgrades | 26 | 72_axie_infinity_program_ronin |
| 73 | withdrawals - platform - freeway - halted - babel | 26 | 73_withdrawals_platform_freeway_halted |
| 74 | addresses - eth - glassnode - underwater - cryptos | 26 | 74_addresses_eth_glassnode_underwater |
| 75 | bottoming - dip - markets - chain - altcoins | 25 | 75_bottoming_dip_markets_chain |
| 76 | mica - eu - conglomerates - jurisdictions - framework | 25 | 76_mica_eu_conglomerates_jurisdictions |
| 77 | liquidations - resting - bid - order - 200 | 25 | 77_liquidations_resting_bid_order |
| 78 | listings - missed - announcements - usdt - exchanges | 25 | 78_listings_missed_announcements_usdt |
| 79 | cbdc - ripple - border - imf - currencies | 25 | 79_cbdc_ripple_border_imf |
| 80 | announcements - delisting - pair - listing - collection | 24 | 80_announcements_delisting_pair_listing |
| 81 | treasury - mixers - sanctioning - github - prank | 24 | 81_treasury_mixers_sanctioning_github |
| 82 | polkadot - parachains - auctions - opengov - referenda | 24 | 82_polkadot_parachains_auctions_opengov |
| 83 | hedge - investors - crypto - traditional - enriquez | 23 | 83_hedge_investors_crypto_traditional |
| 84 | level - resistance - cj - price - cryptocurrency | 23 | 84_level_resistance_cj_price |
| 85 | nexo - citibank - vauld - acquisitions - launched | 22 | 85_nexo_citibank_vauld_acquisitions |
| 86 | huobi - li - citing - pantronics - rumours | 22 | 86_huobi_li_citing_pantronics |
| 87 | nft - textbook - pill - sweeney - x9caccessible | 21 | 87_nft_textbook_pill_sweeney |
| 88 | bored - yacht - apecoin - justin - collection | 21 | 88_bored_yacht_apecoin_justin |
| 89 | apecoin - pattern - chart - head - roc | 21 | 89_apecoin_pattern_chart_head |
| 90 | subscription - investment - binance - dual - 06 | 20 | 90_subscription_investment_binance_dual |
| 91 | halving - correlation - nasdaq - 2024 - powell | 20 | 91_halving_correlation_nasdaq_2024 |
| 92 | announcements - delisting - listing - crypto - slice | 20 | 92_announcements_delisting_listing_crypto |
| 93 | adoption - nigeria - kucoin - lawful - aza | 18 | 93_adoption_nigeria_kucoin_lawful |
| 94 | staff - chatbot - layoffs - hr - terminations | 18 | 94_staff_chatbot_layoffs_hr |
| 95 | ethereum - network - batching - costs - tx | 18 | 95_ethereum_network_batching_costs |
| 96 | suarez - desantis - salary - city - candidate | 18 | 96_suarez_desantis_salary_city |
| 97 | circle - stablecoin - integrating - cybavo - worldpay | 17 | 97_circle_stablecoin_integrating_cybavo |
| 98 | stablecoins - paypal - plabasan - mhel - converge22 | 17 | 98_stablecoins_paypal_plabasan_mhel |
| 99 | week - tokens - tvl - locked - analytical | 17 | 99_week_tokens_tvl_locked |
| 100 | impairment - company - holdings - incurred - btc | 17 | 100_impairment_company_holdings_incurred |
| 101 | cbdc - familiarity - euro - ecb - respondents | 17 | 101_cbdc_familiarity_euro_ecb |
| 102 | marketplace - opensea - popularize - ftx - teaming | 16 | 102_marketplace_opensea_popularize_ftx |
| 103 | executive - leaving - bitstamp - genesis - samir | 15 | 103_executive_leaving_bitstamp_genesis |
</details>
## Training hyperparameters
* calculate_probabilities: False
* language: None
* low_memory: False
* min_topic_size: 15
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 5
* verbose: False
## Framework versions
* Numpy: 1.22.4
* HDBSCAN: 0.8.29
* UMAP: 0.5.3
* Pandas: 1.5.3
* Scikit-Learn: 1.2.2
* Sentence-transformers: 2.2.2
* Transformers: 4.30.2
* Numba: 0.56.4
* Plotly: 5.13.1
* Python: 3.10.12
|
aliyzd95/wav2vec2-mms-1b-turkish
|
aliyzd95
| 2023-06-26T09:38:27Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_6_1",
"license:cc-by-nc-4.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-26T06:28:08Z |
---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
datasets:
- common_voice_6_1
metrics:
- wer
model-index:
- name: wav2vec2-mms-1b-turkish
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_6_1
type: common_voice_6_1
config: tr
split: test
args: tr
metrics:
- name: Wer
type: wer
value: 0.20978449596568277
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-mms-1b-turkish
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the common_voice_6_1 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1443
- Wer: 0.2098
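The word error rate above is the word-level edit distance (substitutions, insertions, deletions) divided by the reference length. A minimal sketch of that computation — not the `evaluate`/`jiwer` implementation the Trainer uses:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[-1][-1] / len(ref)

print(wer("merhaba dünya nasılsın", "merhaba dünya"))  # one deletion over 3 words
```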
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.2036 | 0.46 | 100 | 0.1980 | 0.2614 |
| 0.3 | 0.92 | 200 | 0.1918 | 0.2725 |
| 0.2735 | 1.38 | 300 | 0.1672 | 0.2346 |
| 0.2672 | 1.83 | 400 | 0.1671 | 0.2312 |
| 0.2641 | 2.29 | 500 | 0.1598 | 0.2248 |
| 0.2541 | 2.75 | 600 | 0.1587 | 0.2270 |
| 0.2696 | 3.21 | 700 | 0.1546 | 0.2235 |
| 0.2315 | 3.67 | 800 | 0.1559 | 0.2259 |
| 0.2396 | 4.13 | 900 | 0.1534 | 0.2172 |
| 0.2284 | 4.59 | 1000 | 0.1521 | 0.2172 |
| 0.2342 | 5.05 | 1100 | 0.1523 | 0.2178 |
| 0.2163 | 5.5 | 1200 | 0.1520 | 0.2184 |
| 0.2272 | 5.96 | 1300 | 0.1504 | 0.2182 |
| 0.2122 | 6.42 | 1400 | 0.1483 | 0.2149 |
| 0.2162 | 6.88 | 1500 | 0.1472 | 0.2100 |
| 0.2104 | 7.34 | 1600 | 0.1466 | 0.2104 |
| 0.2004 | 7.8 | 1700 | 0.1457 | 0.2110 |
| 0.2156 | 8.26 | 1800 | 0.1455 | 0.2134 |
| 0.1981 | 8.72 | 1900 | 0.1451 | 0.2103 |
| 0.1921 | 9.17 | 2000 | 0.1452 | 0.2105 |
| 0.19 | 9.63 | 2100 | 0.1443 | 0.2098 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
a-v-bely/ruBert-base-finetuned-russian-moshkov-child-corpus-pro
|
a-v-bely
| 2023-06-26T09:38:22Z | 123 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-03-21T16:45:46Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: ruBert-base-finetuned-russian-moshkov-child-corpus-pro
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ruBert-base-finetuned-russian-moshkov-child-corpus-pro
This model is a fine-tuned version of [sberbank-ai/ruBert-base](https://huggingface.co/sberbank-ai/ruBert-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9961
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.3098 | 1.0 | 31096 | 2.0987 |
| 2.1516 | 2.0 | 62192 | 2.0293 |
| 2.0897 | 3.0 | 93288 | 1.9993 |
### Framework versions
- Transformers 4.27.2
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
Jade1211/textual_inversion_singer
|
Jade1211
| 2023-06-26T09:36:56Z | 4 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-26T06:38:53Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - Jade1211/textual_inversion_singer
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
|
imran90/MathBot2
|
imran90
| 2023-06-26T09:34:18Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-26T09:34:14Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
Qasim30/q-FrozenLake-v1-4x4-noSlippery
|
Qasim30
| 2023-06-26T09:33:25Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-26T09:33:23Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="Qasim30/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
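The table itself is learned with the tabular Q-learning update, Q(s, a) ← Q(s, a) + α[r + γ·max_a′ Q(s′, a′) − Q(s, a)]. A minimal sketch of one update step with made-up numbers:

```python
import numpy as np

def q_update(q, state, action, reward, next_state, alpha=0.5, gamma=0.9):
    """One tabular Q-learning update step."""
    td_target = reward + gamma * np.max(q[next_state])
    q[state, action] += alpha * (td_target - q[state, action])
    return q

q = np.zeros((4, 2))  # toy 4-state, 2-action table
q = q_update(q, state=0, action=1, reward=1.0, next_state=2)
print(q[0, 1])  # 0.5, i.e. 0.5 * (1.0 + 0.9 * 0 - 0)
```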
|
ce-dric/taxi-v3
|
ce-dric
| 2023-06-26T09:13:53Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-26T09:13:51Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="ce-dric/taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
savitras/convnext-tiny-224-finetuned-eurosat-albumentations
|
savitras
| 2023-06-26T08:59:24Z | 192 | 0 |
transformers
|
[
"transformers",
"pytorch",
"convnext",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-26T08:38:06Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: convnext-tiny-224-finetuned-eurosat-albumentations
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9837037037037037
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnext-tiny-224-finetuned-eurosat-albumentations
This model is a fine-tuned version of [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0568
- Accuracy: 0.9837
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1292 | 1.0 | 190 | 0.1211 | 0.9737 |
| 0.0648 | 2.0 | 380 | 0.0685 | 0.9815 |
| 0.0414 | 3.0 | 570 | 0.0568 | 0.9837 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.10.1+cu111
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Ashraf-kasem/ppo-LunarLander-v2
|
Ashraf-kasem
| 2023-06-26T08:52:13Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-26T08:51:43Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 222.18 +/- 17.29
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption — check the repository files):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

checkpoint = load_from_hub(repo_id="Ashraf-kasem/ppo-LunarLander-v2",
                           filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
YangsHao/RecBERT
|
YangsHao
| 2023-06-26T08:48:47Z | 14 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-06-26T08:07:35Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
widget:
- source_sentence: "汪汪队立大功第1季动画动画冒险剧情本领高强的狗狗巡逻队精通科技的10岁男孩"
sentences:
- "超人总动员2喜剧动作动画冒险家庭亲情超级英雄励志超能先生变奶爸超人家族时隔14年强势回归"
- "星汉灿烂·月升沧海剧情爱情星汉灿烂·月升沧海该剧讲述了程家女名少商"
- "外星人事件2喜剧科幻剧情山炮大战爆笑来袭传闻,几十年前外星人曾开着飞船造访过下井沟"
example_title: "Example"
---
# RecBERT
RecBERT is a pretrain-for-recommendation model. It can serve as an item encoder that captures both collaborative-filtering signals and textual semantic information. RecBERT is pre-trained on CMCC, an industrial video-recommendation dataset collected from China Mobile.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["汪汪队立大功第1季动画动画冒险剧情本领高强的狗狗巡逻队精通科技的10岁男孩", "超人总动员2喜剧动作动画冒险家庭亲情超级英雄励志超能先生变奶爸超人家族时隔14年强势回归"]
model = SentenceTransformer('YangsHao/RecBERT')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['汪汪队立大功第1季动画动画冒险剧情本领高强的狗狗巡逻队精通科技的10岁男孩', '超人总动员2喜剧动作动画冒险家庭亲情超级英雄励志超能先生变奶爸超人家族时隔14年强势回归']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('YangsHao/RecBERT')
model = AutoModel.from_pretrained('YangsHao/RecBERT')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
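The resulting item embeddings are typically compared with cosine similarity. A minimal sketch on dummy vectors (made-up 4-d stand-ins for real `sentence_embeddings` rows):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Dummy embeddings: two similar animation items and one dissimilar drama item
anime_a = [0.9, 0.1, 0.4, 0.2]
anime_b = [0.8, 0.2, 0.5, 0.1]
drama = [0.1, 0.9, 0.1, 0.8]
print(cosine_similarity(anime_a, anime_b))  # high: similar items
print(cosine_similarity(anime_a, drama))    # lower: dissimilar items
```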
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
|
pceccon/ppo-LunarLander-v2
|
pceccon
| 2023-06-26T08:43:55Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-26T08:40:59Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 252.58 +/- 13.07
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption — check the repository files):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

checkpoint = load_from_hub(repo_id="pceccon/ppo-LunarLander-v2",
                           filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Geotrend/bert-base-en-fr-cased
|
Geotrend
| 2023-06-26T08:20:44Z | 117 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
---
language: multilingual
datasets: wikipedia
license: apache-2.0
widget:
- text: "Google generated 46 billion [MASK] in revenue."
- text: "Paris is the capital of [MASK]."
- text: "Algiers is the largest city in [MASK]."
- text: "Paris est la [MASK] de la France."
- text: "Paris est la capitale de la [MASK]."
- text: "L'élection américaine a eu [MASK] en novembre 2020."
---
# bert-base-en-fr-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-fr-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-fr-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request.
|
Erfan2001/multilingual_tokenized
|
Erfan2001
| 2023-06-26T08:10:01Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-26T04:20:16Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: final
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# final
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5995
- Accuracy: 0.8444
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.665 | 1.0 | 1494 | 0.6023 | 0.8057 |
| 0.4858 | 2.0 | 2988 | 0.5160 | 0.8318 |
| 0.376 | 3.0 | 4482 | 0.5376 | 0.8353 |
| 0.2863 | 4.0 | 5976 | 0.5591 | 0.8417 |
| 0.2037 | 5.0 | 7470 | 0.5995 | 0.8444 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ibm-research/otter_dude_transe
|
ibm-research
| 2023-06-26T08:09:08Z | 0 | 2 | null |
[
"dataset:ibm/otter_dude",
"arxiv:2306.12802",
"license:mit",
"region:us"
] | null | 2023-06-12T09:56:43Z |
---
license: mit
inference: false
datasets:
- ibm/otter_dude
---
# Otter DUDe TransE Model Card
## Model details
Otter models are based on Graph Neural Networks (GNNs) that propagate initial embeddings through a set of layers, updating each input embedding according to its node neighbours.
The GNN architecture consists of two main blocks: an encoder and a decoder.
- For the encoder, we first define a projection layer consisting of a set of linear transformations, one per node modality, that projects nodes into a common dimensionality; we then apply several multi-relational graph convolutional layers (R-GCN), which distinguish between different edge types between source and target nodes by keeping a set of trainable parameters for each edge type.
- For the decoder, we consider a link prediction task, which consists of a scoring function that maps each triple of source node, target node, and connecting edge to a scalar defined over the interval [0, 1].
**Model type:**
For link prediction, we consider three choices of scoring function — DistMult, TransE, and a binary classifier — that are commonly used in the literature. The score of each triple is then compared against the actual label using a negative log-likelihood loss function.
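As an illustration, the TransE variant scores a triple (h, r, t) by how closely the translated head h + r lands on the tail t. A minimal sketch with made-up embeddings — the sigmoid squashing into (0, 1) for the loss is an assumption about the exact formulation:

```python
import numpy as np

def transe_score(head, relation, tail):
    """TransE: negative L2 distance between (head + relation) and tail."""
    return -np.linalg.norm(head + relation - tail)

def triple_probability(head, relation, tail):
    """Squash the score into (0, 1) so it can feed a negative log-likelihood loss."""
    return 1.0 / (1.0 + np.exp(-transe_score(head, relation, tail)))

h = np.array([0.2, 0.5])   # made-up drug embedding
r = np.array([0.1, -0.1])  # made-up "interacts-with" relation embedding
t_good = h + r             # a perfectly translated tail: zero distance, best score
t_bad = np.array([5.0, 5.0])
print(transe_score(h, r, t_good) > transe_score(h, r, t_bad))  # True
```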
- Flow control: One crucial aspect of pretraining the GNN involves addressing the disparity between the data accessible during pretraining and the data accessible during subsequent tasks. Specifically, during pretraining, there are numerous attributes associated with proteins or drugs, whereas during downstream fine-tuning, only amino acid sequences and SMILES are available. Consequently, during pretraining, we explore two scenarios: one which controls the information propagated to the Drug/Protein entities and one without such control. In our experiments, we present results for both cases to provide an insight on the impact of restricting information flow during pretraining on the subsequent tasks.
- Noisy Links: An additional significant consideration is the presence of noisy links within the up-stream data and how they affect the downstream tasks. To investigate the potential impact on these tasks, we manually handpick a subset of links from each database that are relevant to drug discovery (see details in the Appendix). We then compare the outcomes when training the GNN using only these restricted links versus using all possible links present in the graphs.
- Regression: Certain pretraining datasets, like Uniprot, contain numerical data properties. Hence, we incorporate an extra regression objective aimed at minimizing the root mean square error (MSE) of the predicted numerical data properties. In the learning process, we combine the regression objective and the link prediction objective to create a single objective function.
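For illustration, the TransE scoring used by this variant can be sketched as follows (a minimal NumPy sketch of the idea, not the repository's actual implementation; shapes and dimensionality are arbitrary):

```python
import numpy as np

def transe_score(head, relation, tail):
    # TransE treats a relation as a translation in embedding space:
    # a triple is plausible when head + relation is close to tail.
    # The negative distance is squashed into (0, 1) with a sigmoid,
    # matching the scalar-over-[0, 1] decoder described above.
    distance = np.linalg.norm(head + relation - tail, axis=-1)
    return 1.0 / (1.0 + np.exp(distance))

# Toy batch of 4 triples with 128-dimensional embeddings.
rng = np.random.default_rng(0)
h, r, t = (rng.standard_normal((4, 128)) for _ in range(3))
scores = transe_score(h, r, t)  # shape (4,), each score in (0, 1)
```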
| Scoring Type | Noisy Links | Flow Control | Regression |
|--------------|:-----------:|--------------|------------|
| TransE | No | Yes | No |
**Model training data:**
The model was trained over a preprocessed version of *DUDe*. Our preprocessed version of *DUDe* includes 1,452,568 instances of drug-target interactions. To prevent any data leakage, we eliminated the negative interactions and the overlapping triples with the TDC DTI dataset. As a result, we were left with a total of 40,216 drug-target interaction pairs.
**Model results:**
<style type="text/css">
.tg {border-collapse:collapse;border-spacing:0;}
.tg td{border-color:black;border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px;
overflow:hidden;padding:10px 5px;word-break:normal;}
.tg th{border-color:black;border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px;
font-weight:normal;overflow:hidden;padding:10px 5px;word-break:normal;}
.tg .tg-c3ow{border-color:inherit;text-align:center;vertical-align:top}
.tg .tg-0pky{border-color:inherit;text-align:center;vertical-align:middle;font-weight:bold}
</style>
<table class="tg">
<thead>
<tr>
<th class="tg-0pky">Dataset</th>
<th class="tg-c3ow">DTI DG</th>
<th class="tg-c3ow" colspan="3">DAVIS</th>
<th class="tg-c3ow" colspan="3">KIBA</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tg-0pky">Splits</td>
<td class="tg-c3ow">Temporal</td>
<td class="tg-c3ow">Random</td>
<td class="tg-c3ow">Target</td>
<td class="tg-c3ow">Drug</td>
<td class="tg-c3ow">Random</td>
<td class="tg-c3ow">Target</td>
<td class="tg-c3ow">Drug</td>
</tr>
<tr>
<td class="tg-0pky">Results</td>
<td class="tg-c3ow">0.576</td>
<td class="tg-c3ow">0.807</td>
<td class="tg-c3ow">0.570</td>
<td class="tg-c3ow">0.170</td>
<td class="tg-c3ow">0.856</td>
<td class="tg-c3ow">0.653</td>
<td class="tg-c3ow">0.604</td>
</tr>
</tbody>
</table>
**Paper or resources for more information:**
- [GitHub Repo](https://github.com/IBM/otter-knowledge)
- [Paper](https://arxiv.org/abs/2306.12802)
**License:**
MIT
**Where to send questions or comments about the model:**
- [GitHub Repo](https://github.com/IBM/otter-knowledge)
## How to use
Clone the repo:
```sh
git clone https://github.com/IBM/otter-knowledge.git
cd otter-knowledge
```
- Run the inference for Proteins:
*Replace test_data with the path to a CSV file containing the protein sequences, name_of_the_column with the name of the column of the protein sequence in the CSV and output_path with the filename of the JSON file to be created with the embeddings.*
```sh
python inference.py --input_path test_data --sequence_column name_of_the_column --model_path ibm/otter_dude_transe --output_path output_path
```
- Run the inference for Drugs:
*Replace test_data with the path to a CSV file containing the Drug SMILES, name_of_the_column with the name of the column of the SMILES in the CSV and output_path with the filename of the JSON file to be created with the embeddings.*
```sh
python inference.py --input_path test_data --sequence_column name_of_the_column --input_type Drug --relation_name smiles --model_path ibm/otter_dude_transe --output_path output_path
```
|
ibm-research/otter_dude_classifier
|
ibm-research
| 2023-06-26T08:08:58Z | 0 | 2 | null |
[
"dataset:ibm/otter_dude",
"arxiv:2306.12802",
"license:mit",
"region:us"
] | null | 2023-06-12T09:59:13Z |
---
license: mit
inference: false
datasets:
- ibm/otter_dude
---
# Otter DUDe Classifier Model Card
## Model details
Otter models are based on Graph Neural Networks (GNNs) that propagate initial embeddings through a set of layers that update each input embedding according to the node's neighbours.
The architecture of the GNN consists of two main blocks: an encoder and a decoder.
- For the encoder, we first define a projection layer, consisting of a set of linear transformations (one per node modality), that projects the nodes into a common dimensionality; we then apply several multi-relational graph convolutional (R-GCN) layers, which distinguish between the different types of edges between source and target nodes by keeping a separate set of trainable parameters for each edge type.
- For the decoder, we consider a link prediction task, with a scoring function that maps each triple of source node, target node, and the corresponding edge to a scalar defined over the interval [0, 1].
**Model type:**
For link prediction, we consider three scoring functions that are commonly used in the literature: DistMult, TransE, and a binary classifier. The score of each triple is then compared against the actual label using a negative log-likelihood loss function.
- Flow control: One crucial aspect of pretraining the GNN is addressing the disparity between the data available during pretraining and the data available in subsequent tasks. During pretraining there are numerous attributes associated with proteins or drugs, whereas during downstream fine-tuning only amino acid sequences and SMILES are available. We therefore explore two pretraining scenarios: one that controls the information propagated to the Drug/Protein entities and one without such control. In our experiments we report results for both cases, to provide insight into the impact of restricting information flow during pretraining on the subsequent tasks.
- Noisy Links: Another significant consideration is the presence of noisy links in the upstream data and how they affect downstream tasks. To investigate their potential impact, we manually handpick a subset of links from each database that are relevant to drug discovery (see details in the Appendix) and compare the outcomes of training the GNN on only these restricted links versus on all links present in the graphs.
- Regression: Certain pretraining datasets, like Uniprot, contain numerical data properties. Hence, we incorporate an extra regression objective that minimizes the root mean squared error (RMSE) of the predicted numerical data properties. During learning, we combine the regression objective and the link prediction objective into a single objective function.
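For illustration, a binary classifier head of this kind can be sketched as follows (a minimal NumPy sketch under assumed layer sizes, not the repository's actual implementation):

```python
import numpy as np

def classifier_score(head, tail, w1, b1, w2, b2):
    # Binary-classifier decoder: concatenate the two node embeddings,
    # apply one ReLU hidden layer, and squash the logit into (0, 1),
    # matching the scalar-over-[0, 1] decoder described above.
    x = np.concatenate([head, tail], axis=-1)
    hidden = np.maximum(x @ w1 + b1, 0.0)
    logit = hidden @ w2 + b2
    return 1.0 / (1.0 + np.exp(-logit))

# Toy setup: 128-dim node embeddings, 64-dim hidden layer.
rng = np.random.default_rng(0)
w1 = rng.standard_normal((256, 64)) * 0.1
b1 = np.zeros(64)
w2 = rng.standard_normal((64,)) * 0.1
b2 = 0.0
h, t = rng.standard_normal((2, 4, 128))  # batch of 4 drug-target pairs
scores = classifier_score(h, t, w1, b1, w2, b2)  # shape (4,)
```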
| Scoring Type | Noisy Links | Flow Control | Regression |
|--------------|:-----------:|--------------|------------|
| Classifier Head | No | Yes | No |
**Model training data:**
The model was trained over a preprocessed version of *DUDe*. Our preprocessed version of *DUDe* includes 1,452,568 instances of drug-target interactions. To prevent any data leakage, we eliminated the negative interactions and the overlapping triples with the TDC DTI dataset. As a result, we were left with a total of 40,216 drug-target interaction pairs.
**Model results:**
<style type="text/css">
.tg {border-collapse:collapse;border-spacing:0;}
.tg td{border-color:black;border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px;
overflow:hidden;padding:10px 5px;word-break:normal;}
.tg th{border-color:black;border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px;
font-weight:normal;overflow:hidden;padding:10px 5px;word-break:normal;}
.tg .tg-c3ow{border-color:inherit;text-align:center;vertical-align:top}
.tg .tg-0pky{border-color:inherit;text-align:center;vertical-align:middle;font-weight:bold}
</style>
<table class="tg">
<thead>
<tr>
<th class="tg-0pky">Dataset</th>
<th class="tg-c3ow">DTI DG</th>
<th class="tg-c3ow" colspan="3">DAVIS</th>
<th class="tg-c3ow" colspan="3">KIBA</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tg-0pky">Splits</td>
<td class="tg-c3ow">Temporal</td>
<td class="tg-c3ow">Random</td>
<td class="tg-c3ow">Target</td>
<td class="tg-c3ow">Drug</td>
<td class="tg-c3ow">Random</td>
<td class="tg-c3ow">Target</td>
<td class="tg-c3ow">Drug</td>
</tr>
<tr>
<td class="tg-0pky">Results</td>
<td class="tg-c3ow">0.579</td>
<td class="tg-c3ow">0.808</td>
<td class="tg-c3ow">0.574</td>
<td class="tg-c3ow">0.167</td>
<td class="tg-c3ow">0.860</td>
<td class="tg-c3ow">0.641</td>
<td class="tg-c3ow">0.630</td>
</tr>
</tbody>
</table>
**Paper or resources for more information:**
- [GitHub Repo](https://github.com/IBM/otter-knowledge)
- [Paper](https://arxiv.org/abs/2306.12802)
**License:**
MIT
**Where to send questions or comments about the model:**
- [GitHub Repo](https://github.com/IBM/otter-knowledge)
## How to use
Clone the repo:
```sh
git clone https://github.com/IBM/otter-knowledge.git
cd otter-knowledge
```
- Run the inference for Proteins:
*Replace test_data with the path to a CSV file containing the protein sequences, name_of_the_column with the name of the column of the protein sequence in the CSV and output_path with the filename of the JSON file to be created with the embeddings.*
```sh
python inference.py --input_path test_data --sequence_column name_of_the_column --model_path ibm/otter_dude_classifier --output_path output_path
```
- Run the inference for Drugs:
*Replace test_data with the path to a CSV file containing the Drug SMILES, name_of_the_column with the name of the column of the SMILES in the CSV and output_path with the filename of the JSON file to be created with the embeddings.*
```sh
python inference.py --input_path test_data --sequence_column name_of_the_column --input_type Drug --relation_name smiles --model_path ibm/otter_dude_classifier --output_path output_path
```
|
ibm-research/otter_dude_distmult
|
ibm-research
| 2023-06-26T08:08:38Z | 0 | 2 | null |
[
"dataset:ibm/otter_dude",
"arxiv:2306.12802",
"license:mit",
"region:us"
] | null | 2023-06-12T09:51:02Z |
---
license: mit
inference: false
datasets:
- ibm/otter_dude
---
# Otter DUDe DistMult Model Card
## Model details
Otter models are based on Graph Neural Networks (GNNs) that propagate initial embeddings through a set of layers that update each input embedding according to the node's neighbours.
The architecture of the GNN consists of two main blocks: an encoder and a decoder.
- For the encoder, we first define a projection layer, consisting of a set of linear transformations (one per node modality), that projects the nodes into a common dimensionality; we then apply several multi-relational graph convolutional (R-GCN) layers, which distinguish between the different types of edges between source and target nodes by keeping a separate set of trainable parameters for each edge type.
- For the decoder, we consider a link prediction task, with a scoring function that maps each triple of source node, target node, and the corresponding edge to a scalar defined over the interval [0, 1].
**Model type:**
For link prediction, we consider three scoring functions that are commonly used in the literature: DistMult, TransE, and a binary classifier. The score of each triple is then compared against the actual label using a negative log-likelihood loss function.
- Flow control: One crucial aspect of pretraining the GNN is addressing the disparity between the data available during pretraining and the data available in subsequent tasks. During pretraining there are numerous attributes associated with proteins or drugs, whereas during downstream fine-tuning only amino acid sequences and SMILES are available. We therefore explore two pretraining scenarios: one that controls the information propagated to the Drug/Protein entities and one without such control. In our experiments we report results for both cases, to provide insight into the impact of restricting information flow during pretraining on the subsequent tasks.
- Noisy Links: Another significant consideration is the presence of noisy links in the upstream data and how they affect downstream tasks. To investigate their potential impact, we manually handpick a subset of links from each database that are relevant to drug discovery (see details in the Appendix) and compare the outcomes of training the GNN on only these restricted links versus on all links present in the graphs.
- Regression: Certain pretraining datasets, like Uniprot, contain numerical data properties. Hence, we incorporate an extra regression objective that minimizes the root mean squared error (RMSE) of the predicted numerical data properties. During learning, we combine the regression objective and the link prediction objective into a single objective function.
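For illustration, the DistMult scoring used by this variant can be sketched as follows (a minimal NumPy sketch of the idea, not the repository's actual implementation):

```python
import numpy as np

def distmult_score(head, relation, tail):
    # DistMult scores a triple with a trilinear product: the relation
    # embedding acts as a diagonal bilinear form between head and tail.
    # The raw score is squashed into (0, 1) with a sigmoid, matching the
    # scalar-over-[0, 1] decoder described above.
    raw = np.sum(head * relation * tail, axis=-1)
    return 1.0 / (1.0 + np.exp(-raw))

# Toy batch of 4 triples with 128-dimensional embeddings.
rng = np.random.default_rng(0)
h, r, t = (rng.standard_normal((4, 128)) for _ in range(3))
scores = distmult_score(h, r, t)  # shape (4,), each score in (0, 1)
```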
| Scoring Type | Noisy Links | Flow Control | Regression |
|--------------|:-----------:|--------------|------------|
| DistMult | No | Yes | No |
**Model training data:**
The model was trained over a preprocessed version of *DUDe*. Our preprocessed version of *DUDe* includes 1,452,568 instances of drug-target interactions. To prevent any data leakage, we eliminated the negative interactions and the overlapping triples with the TDC DTI dataset. As a result, we were left with a total of 40,216 drug-target interaction pairs.
**Model results:**
<style type="text/css">
.tg {border-collapse:collapse;border-spacing:0;}
.tg td{border-color:black;border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px;
overflow:hidden;padding:10px 5px;word-break:normal;}
.tg th{border-color:black;border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px;
font-weight:normal;overflow:hidden;padding:10px 5px;word-break:normal;}
.tg .tg-c3ow{border-color:inherit;text-align:center;vertical-align:top}
.tg .tg-0pky{border-color:inherit;text-align:center;vertical-align:middle;font-weight:bold}
</style>
<table class="tg">
<thead>
<tr>
<th class="tg-0pky">Dataset</th>
<th class="tg-c3ow">DTI DG</th>
<th class="tg-c3ow" colspan="3">DAVIS</th>
<th class="tg-c3ow" colspan="3">KIBA</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tg-0pky">Splits</td>
<td class="tg-c3ow">Temporal</td>
<td class="tg-c3ow">Random</td>
<td class="tg-c3ow">Target</td>
<td class="tg-c3ow">Drug</td>
<td class="tg-c3ow">Random</td>
<td class="tg-c3ow">Target</td>
<td class="tg-c3ow">Drug</td>
</tr>
<tr>
<td class="tg-0pky">Results</td>
<td class="tg-c3ow">0.577</td>
<td class="tg-c3ow">0.805</td>
<td class="tg-c3ow">0.573</td>
<td class="tg-c3ow">0.132</td>
<td class="tg-c3ow">0.857</td>
<td class="tg-c3ow">0.650</td>
<td class="tg-c3ow">0.607</td>
</tr>
</tbody>
</table>
**Paper or resources for more information:**
- [GitHub Repo](https://github.com/IBM/otter-knowledge)
- [Paper](https://arxiv.org/abs/2306.12802)
**License:**
MIT
**Where to send questions or comments about the model:**
- [GitHub Repo](https://github.com/IBM/otter-knowledge)
## How to use
Clone the repo:
```sh
git clone https://github.com/IBM/otter-knowledge.git
cd otter-knowledge
```
- Run the inference for Proteins:
*Replace test_data with the path to a CSV file containing the protein sequences, name_of_the_column with the name of the column of the protein sequence in the CSV and output_path with the filename of the JSON file to be created with the embeddings.*
```sh
python inference.py --input_path test_data --sequence_column name_of_the_column --model_path ibm/otter_dude_distmult --output_path output_path
```
- Run the inference for Drugs:
*Replace test_data with the path to a CSV file containing the Drug SMILES, name_of_the_column with the name of the column of the SMILES in the CSV and output_path with the filename of the JSON file to be created with the embeddings.*
```sh
python inference.py --input_path test_data --sequence_column name_of_the_column --input_type Drug --relation_name smiles --model_path ibm/otter_dude_distmult --output_path output_path
```
|
ibm-research/otter_primekg_distmult
|
ibm-research
| 2023-06-26T08:07:42Z | 0 | 3 | null |
[
"dataset:ibm/otter_primekg",
"arxiv:2306.12802",
"license:mit",
"region:us"
] | null | 2023-06-12T10:31:11Z |
---
license: mit
inference: false
datasets:
- ibm/otter_primekg
---
# Otter PrimeKG DistMult Model Card
## Model details
Otter models are based on Graph Neural Networks (GNNs) that propagate initial embeddings through a set of layers that update each input embedding according to the node's neighbours.
The architecture of the GNN consists of two main blocks: an encoder and a decoder.
- For the encoder, we first define a projection layer, consisting of a set of linear transformations (one per node modality), that projects the nodes into a common dimensionality; we then apply several multi-relational graph convolutional (R-GCN) layers, which distinguish between the different types of edges between source and target nodes by keeping a separate set of trainable parameters for each edge type.
- For the decoder, we consider a link prediction task, with a scoring function that maps each triple of source node, target node, and the corresponding edge to a scalar defined over the interval [0, 1].
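A single multi-relational convolution of this kind can be sketched as follows (a simplified NumPy sketch: mean aggregation over incoming edges rather than the per-relation normalization of the full R-GCN, with illustrative shapes only):

```python
import numpy as np

def rgcn_layer(h, edges_by_type, rel_weights, self_weight):
    # One simplified R-GCN layer: each relation type has its own weight
    # matrix; a node's update is its self-transform plus the averaged,
    # relation-specific transforms of its in-neighbours, then a ReLU.
    out = h @ self_weight
    agg = np.zeros_like(out)
    counts = np.zeros((h.shape[0], 1))
    for rel, edges in edges_by_type.items():
        for src, dst in edges:
            agg[dst] += h[src] @ rel_weights[rel]
            counts[dst] += 1
    return np.maximum(out + agg / np.maximum(counts, 1), 0.0)

# Toy graph: 3 nodes, 2 edge types, 8-dimensional embeddings.
rng = np.random.default_rng(0)
h = rng.standard_normal((3, 8))
weights = {"interacts": rng.standard_normal((8, 8)) * 0.1,
           "targets": rng.standard_normal((8, 8)) * 0.1}
edges = {"interacts": [(0, 1), (1, 2)], "targets": [(2, 0)]}
h_new = rgcn_layer(h, edges, weights, np.eye(8))
```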
**Model type:**
For link prediction, we consider three scoring functions that are commonly used in the literature: DistMult, TransE, and a binary classifier. The score of each triple is then compared against the actual label using a negative log-likelihood loss function.
- Flow control: One crucial aspect of pretraining the GNN is addressing the disparity between the data available during pretraining and the data available in subsequent tasks. During pretraining there are numerous attributes associated with proteins or drugs, whereas during downstream fine-tuning only amino acid sequences and SMILES are available. We therefore explore two pretraining scenarios: one that controls the information propagated to the Drug/Protein entities and one without such control. In our experiments we report results for both cases, to provide insight into the impact of restricting information flow during pretraining on the subsequent tasks.
- Noisy Links: Another significant consideration is the presence of noisy links in the upstream data and how they affect downstream tasks. To investigate their potential impact, we manually handpick a subset of links from each database that are relevant to drug discovery (see details in the Appendix) and compare the outcomes of training the GNN on only these restricted links versus on all links present in the graphs.
- Regression: Certain pretraining datasets, like Uniprot, contain numerical data properties. Hence, we incorporate an extra regression objective that minimizes the root mean squared error (RMSE) of the predicted numerical data properties. During learning, we combine the regression objective and the link prediction objective into a single objective function.
| Scoring Type | Noisy Links | Flow Control | Regression |
|--------------|:-----------:|--------------|------------|
| DistMult | No | Yes | No |
**Model training data:**
The model was trained over *PrimeKG* (the Precision Medicine Knowledge Graph). *PrimeKG* integrates 20 biomedical resources, describing 17,080 diseases with 4 million relationships. *PrimeKG* includes nodes describing Gene/Proteins (29,786) and Drugs (7,957 nodes). The Multimodal Knowledge Graph (MKG) that we built from PrimeKG contains 13 modalities, 12,757,300 edges (154,130 data properties, and 12,603,170 object properties), including 642,150 edges describing interactions between proteins, 25,653 edges describing drug-protein interactions, and 2,672,628 describing interactions between drugs.
**Model results:**
<style type="text/css">
.tg {border-collapse:collapse;border-spacing:0;}
.tg td{border-color:black;border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px;
overflow:hidden;padding:10px 5px;word-break:normal;}
.tg th{border-color:black;border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px;
font-weight:normal;overflow:hidden;padding:10px 5px;word-break:normal;}
.tg .tg-c3ow{border-color:inherit;text-align:center;vertical-align:top}
.tg .tg-0pky{border-color:inherit;text-align:center;vertical-align:middle;font-weight:bold}
</style>
<table class="tg">
<thead>
<tr>
<th class="tg-0pky">Dataset</th>
<th class="tg-c3ow">DTI DG</th>
<th class="tg-c3ow" colspan="3">DAVIS</th>
<th class="tg-c3ow" colspan="3">KIBA</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tg-0pky">Splits</td>
<td class="tg-c3ow">Temporal</td>
<td class="tg-c3ow">Random</td>
<td class="tg-c3ow">Target</td>
<td class="tg-c3ow">Drug</td>
<td class="tg-c3ow">Random</td>
<td class="tg-c3ow">Target</td>
<td class="tg-c3ow">Drug</td>
</tr>
<tr>
<td class="tg-0pky">Results</td>
<td class="tg-c3ow">0.575</td>
<td class="tg-c3ow">0.806</td>
<td class="tg-c3ow">0.571</td>
<td class="tg-c3ow">0.162</td>
<td class="tg-c3ow">0.856</td>
<td class="tg-c3ow">0.611</td>
<td class="tg-c3ow">0.617</td>
</tr>
</tbody>
</table>
**Paper or resources for more information:**
- [GitHub Repo](https://github.com/IBM/otter-knowledge)
- [Paper](https://arxiv.org/abs/2306.12802)
**License:**
MIT
**Where to send questions or comments about the model:**
- [GitHub Repo](https://github.com/IBM/otter-knowledge)
## How to use
Clone the repo:
```sh
git clone https://github.com/IBM/otter-knowledge.git
cd otter-knowledge
```
- Run the inference for Proteins:
*Replace test_data with the path to a CSV file containing the protein sequences, name_of_the_column with the name of the column of the protein sequence in the CSV and output_path with the filename of the JSON file to be created with the embeddings.*
```sh
python inference.py --input_path test_data --sequence_column name_of_the_column --model_path ibm/otter_primekg_distmult --output_path output_path
```
- Run the inference for Drugs:
*Replace test_data with the path to a CSV file containing the Drug SMILES, name_of_the_column with the name of the column of the SMILES in the CSV and output_path with the filename of the JSON file to be created with the embeddings.*
```sh
python inference.py --input_path test_data --sequence_column name_of_the_column --input_type Drug --relation_name smiles --model_path ibm/otter_primekg_distmult --output_path output_path
```
|
ibm-research/otter_primekg_transe
|
ibm-research
| 2023-06-26T08:07:19Z | 0 | 2 | null |
[
"dataset:ibm/otter_primekg",
"arxiv:2306.12802",
"license:mit",
"region:us"
] | null | 2023-06-12T10:32:56Z |
---
license: mit
inference: false
datasets:
- ibm/otter_primekg
---
# Otter PrimeKG TransE Model Card
## Model details
Otter models are based on Graph Neural Networks (GNNs) that propagate initial embeddings through a set of layers that update each input embedding according to the node's neighbours.
The architecture of the GNN consists of two main blocks: an encoder and a decoder.
- For the encoder, we first define a projection layer, consisting of a set of linear transformations (one per node modality), that projects the nodes into a common dimensionality; we then apply several multi-relational graph convolutional (R-GCN) layers, which distinguish between the different types of edges between source and target nodes by keeping a separate set of trainable parameters for each edge type.
- For the decoder, we consider a link prediction task, with a scoring function that maps each triple of source node, target node, and the corresponding edge to a scalar defined over the interval [0, 1].
**Model type:**
For link prediction, we consider three scoring functions that are commonly used in the literature: DistMult, TransE, and a binary classifier. The score of each triple is then compared against the actual label using a negative log-likelihood loss function.
- Flow control: One crucial aspect of pretraining the GNN is addressing the disparity between the data available during pretraining and the data available in subsequent tasks. During pretraining there are numerous attributes associated with proteins or drugs, whereas during downstream fine-tuning only amino acid sequences and SMILES are available. We therefore explore two pretraining scenarios: one that controls the information propagated to the Drug/Protein entities and one without such control. In our experiments we report results for both cases, to provide insight into the impact of restricting information flow during pretraining on the subsequent tasks.
- Noisy Links: Another significant consideration is the presence of noisy links in the upstream data and how they affect downstream tasks. To investigate their potential impact, we manually handpick a subset of links from each database that are relevant to drug discovery (see details in the Appendix) and compare the outcomes of training the GNN on only these restricted links versus on all links present in the graphs.
- Regression: Certain pretraining datasets, like Uniprot, contain numerical data properties. Hence, we incorporate an extra regression objective that minimizes the root mean squared error (RMSE) of the predicted numerical data properties. During learning, we combine the regression objective and the link prediction objective into a single objective function.
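The combination of the two objectives can be sketched as follows (a minimal NumPy sketch with an assumed weighting term `alpha`; the actual weighting used in the repository may differ):

```python
import numpy as np

def combined_loss(link_scores, link_labels, reg_pred, reg_target, alpha=1.0):
    # Joint pretraining objective: negative log-likelihood for link
    # prediction plus root mean squared error on the predicted numerical
    # data properties, combined into a single scalar.
    eps = 1e-9
    nll = -np.mean(link_labels * np.log(link_scores + eps)
                   + (1.0 - link_labels) * np.log(1.0 - link_scores + eps))
    rmse = np.sqrt(np.mean((reg_pred - reg_target) ** 2))
    return nll + alpha * rmse

# Toy example: two link predictions plus one numeric property prediction.
loss = combined_loss(np.array([0.9, 0.2]), np.array([1.0, 0.0]),
                     np.array([1.5]), np.array([1.0]))
```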
| Scoring Type | Noisy Links | Flow Control | Regression |
|--------------|:-----------:|--------------|------------|
| TransE | No | Yes | No |
**Model training data:**
The model was trained over *PrimeKG* (the Precision Medicine Knowledge Graph). *PrimeKG* integrates 20 biomedical resources, describing 17,080 diseases with 4 million relationships. *PrimeKG* includes nodes describing Gene/Proteins (29,786) and Drugs (7,957 nodes). The Multimodal Knowledge Graph (MKG) that we built from PrimeKG contains 13 modalities, 12,757,300 edges (154,130 data properties, and 12,603,170 object properties), including 642,150 edges describing interactions between proteins, 25,653 edges describing drug-protein interactions, and 2,672,628 describing interactions between drugs.
**Model results:**
<style type="text/css">
.tg {border-collapse:collapse;border-spacing:0;}
.tg td{border-color:black;border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px;
overflow:hidden;padding:10px 5px;word-break:normal;}
.tg th{border-color:black;border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px;
font-weight:normal;overflow:hidden;padding:10px 5px;word-break:normal;}
.tg .tg-c3ow{border-color:inherit;text-align:center;vertical-align:top}
.tg .tg-0pky{border-color:inherit;text-align:center;vertical-align:middle;font-weight:bold}
</style>
<table class="tg">
<thead>
<tr>
<th class="tg-0pky">Dataset</th>
<th class="tg-c3ow">DTI DG</th>
<th class="tg-c3ow" colspan="3">DAVIS</th>
<th class="tg-c3ow" colspan="3">KIBA</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tg-0pky">Splits</td>
<td class="tg-c3ow">Temporal</td>
<td class="tg-c3ow">Random</td>
<td class="tg-c3ow">Target</td>
<td class="tg-c3ow">Drug</td>
<td class="tg-c3ow">Random</td>
<td class="tg-c3ow">Target</td>
<td class="tg-c3ow">Drug</td>
</tr>
<tr>
<td class="tg-0pky">Results</td>
<td class="tg-c3ow">0.573</td>
<td class="tg-c3ow">0.807</td>
<td class="tg-c3ow">0.568</td>
<td class="tg-c3ow">0.186</td>
<td class="tg-c3ow">0.858</td>
<td class="tg-c3ow">0.642</td>
<td class="tg-c3ow">0.607</td>
</tr>
</tbody>
</table>
**Paper or resources for more information:**
- [GitHub Repo](https://github.com/IBM/otter-knowledge)
- [Paper](https://arxiv.org/abs/2306.12802)
**License:**
MIT
**Where to send questions or comments about the model:**
- [GitHub Repo](https://github.com/IBM/otter-knowledge)
## How to use
Clone the repo:
```sh
git clone https://github.com/IBM/otter-knowledge.git
cd otter-knowledge
```
- Run the inference for Proteins:
*Replace test_data with the path to a CSV file containing the protein sequences, name_of_the_column with the name of the column of the protein sequence in the CSV and output_path with the filename of the JSON file to be created with the embeddings.*
```sh
python inference.py --input_path test_data --sequence_column name_of_the_column --model_path ibm/otter_primekg_transe --output_path output_path
```
- Run the inference for Drugs:
*Replace test_data with the path to a CSV file containing the Drug SMILES, name_of_the_column with the name of the column of the SMILES in the CSV and output_path with the filename of the JSON file to be created with the embeddings.*
```sh
python inference.py --input_path test_data --sequence_column name_of_the_column --input_type Drug --relation_name smiles --model_path ibm/otter_primekg_transe --output_path output_path
```
|
navndn/ppo-LunarLander-v2
|
navndn
| 2023-06-26T07:59:45Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-26T07:59:26Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 255.99 +/- 38.67
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The checkpoint filename below is an assumption (the default naming used
# by `package_to_hub`); adjust it if this repository stores a different name.
checkpoint = load_from_hub(repo_id="navndn/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
yhna/q-learning-taxi-v3
|
yhna
| 2023-06-26T07:52:08Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-26T07:52:06Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-learning-taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym  # `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook

model = load_from_hub(repo_id="yhna/q-learning-taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
The query filters specific variations of card descriptions containing 'distill', 'qwen', and '7b' while excluding a particular base model, providing limited but specific data retrieval.
Distill Qwen 7B Model Cards
The query filters and retrieves rows where the 'card' column contains specific keywords ('distill', 'qwen', and '7b'), providing a basic filter result that can help in identifying specific entries.