modelId (string, length 5-139) | author (string, length 2-42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-04 18:27:43) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 539 distinct values) | tags (list, length 1 to 4.05k) | pipeline_tag (string, 55 distinct values) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-04 18:27:26) | card (string, length 11 to 1.01M) |
---|---|---|---|---|---|---|---|---|---|
michaelfeil/ct2fast-codegen2-3_7B | michaelfeil | 2023-05-21T19:28:46Z | 3 | 1 | transformers | ["transformers", "ctranslate2", "int8", "float16", "arxiv:2305.02309", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2023-05-21T19:02:31Z |
---
tags:
- ctranslate2
- int8
- float16
license: apache-2.0
---
# Fast Inference with CTranslate2
Speed up inference while reducing memory by 2x-4x using int8 inference in C++ on CPU or GPU.
Quantized version of [Salesforce/codegen2-3_7B](https://huggingface.co/Salesforce/codegen2-3_7B)
```bash
pip install hf-hub-ctranslate2>=2.0.8
```
Converted on 2023-05-21 using
```
ct2-transformers-converter --model Salesforce/codegen2-3_7B --output_dir /home/michael/tmp-ct2fast-codegen2-3_7B --force --copy_files merges.txt tokenizer.json README.md tokenizer_config.json vocab.json special_tokens_map.json added_tokens.json configuration_codegen.py .gitattributes --quantization float16
```
Checkpoint compatible with [ctranslate2>=3.13.0](https://github.com/OpenNMT/CTranslate2) and [hf-hub-ctranslate2>=2.0.6](https://github.com/michaelfeil/hf-hub-ctranslate2)
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "michaelfeil/ct2fast-codegen2-3_7B"
# use either TranslatorCT2fromHfHub or GeneratorCT2fromHfHub here, depending on model.
model = GeneratorCT2fromHfHub(
# load in int8 on CUDA
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
# tokenizer=AutoTokenizer.from_pretrained("Salesforce/codegen2-3_7B")
)
outputs = model.generate(
text=["def print_hello_world():", "def hello_name(name:"],
max_length=64
)
print(outputs)
```
# Licence and other remarks:
This is just a quantized version. Licence conditions are intended to be identical to those of the original Hugging Face repo.
# Original description
# CodeGen2 (CodeGen2-3.7B)
## Model description
[CodeGen2](https://github.com/salesforce/CodeGen2) is a family of autoregressive language models for **program synthesis**, introduced in the paper:
[CodeGen2: Lessons for Training LLMs on Programming and Natural Languages](https://arxiv.org/abs/2305.02309) by Erik Nijkamp\*, Hiroaki Hayashi\*, Caiming Xiong, Silvio Savarese, Yingbo Zhou.
Unlike the original CodeGen model family (i.e., CodeGen1), CodeGen2 is capable of infilling, and supports more programming languages.
Four model sizes are released: `1B`, `3.7B`, `7B`, `16B`.
## How to use
This model can be easily loaded using the `AutoModelForCausalLM` functionality.
### Causal sampling
For regular causal sampling, simply generate completions given the context:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen2-3_7B")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen2-3_7B", trust_remote_code=True, revision="main")
text = "def hello_world():"
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```
### Infill sampling
For **infill** sampling, we introduce three new special token types:
* `<mask_N>`: N-th span to be masked. In practice, insert `<mask_1>` where you want to sample the infill.
* `<sep>`: Separator token between the suffix and the infilled sample. See below.
* `<eom>`: "End-Of-Mask" token that the model will output at the end of infilling. You may use this token to truncate the output.
For example, if we want to generate infill for the following cursor position of a function:
```python
def hello_world():
|
return name
```
we construct an input to the model by
1. Inserting a `<mask_1>` token in place of the cursor position,
2. Appending a `<sep>` token to indicate the boundary, and
3. Inserting another `<mask_1>` to indicate which mask we want to infill.
The final snippet looks as follows:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen2-3_7B")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen2-3_7B", trust_remote_code=True, revision="main")
def format(prefix, suffix):
return prefix + "<mask_1>" + suffix + "<|endoftext|>" + "<sep>" + "<mask_1>"
prefix = "def hello_world():\n "
suffix = " return name"
text = format(prefix, suffix)
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=False)[len(text):])
```
You might want to truncate the model output with `<eom>`.
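For reference, a minimal truncation sketch (reusing `tokenizer`, `generated_ids`, `prefix`, `suffix`, and `text` from the snippet above; it assumes the literal `<eom>` marker appears in the decoded string):
```python
# Decode without skipping special tokens so "<eom>" is preserved, then cut at it.
generated = tokenizer.decode(generated_ids[0], skip_special_tokens=False)[len(text):]
infill = generated.split("<eom>")[0]  # keep only the content before the end-of-mask token
print(prefix + infill + suffix)
```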
## Training data
This checkpoint is trained on the stricter permissive subset of [the deduplicated version of the Stack dataset (v1.1)](https://huggingface.co/datasets/bigcode/the-stack-dedup). Supported languages (and frameworks) are as follows:
`c`, `c++`, `c-sharp`, `dart`, `go`, `java`, `javascript`, `kotlin`, `lua`, `php`, `python`, `ruby`, `rust`, `scala`, `shell`, `sql`, `swift`, `typescript`, `vue`.
## Training procedure
CodeGen2 was trained using cross-entropy loss to maximize the likelihood of sequential inputs.
The input sequences are formatted in two ways: (1) causal language modeling and (2) file-level span corruption.
Please refer to the paper for more details.
## Evaluation results
We evaluate our models on HumanEval and HumanEval-Infill. Please refer to the [paper](https://arxiv.org/abs/2305.02309) for more details.
## Intended use and limitations
As an autoregressive language model, CodeGen2 is capable of extracting features from given natural language and programming language texts, and calculating the likelihood of them.
However, the model is intended for and best at **program synthesis**, that is, generating executable code given English prompts, where the prompts should be in the form of a comment string. The model can complete partially-generated code as well.
## BibTeX entry and citation info
```bibtex
@article{Nijkamp2023codegen2,
title={CodeGen2: Lessons for Training LLMs on Programming and Natural Languages},
author={Nijkamp, Erik and Hayashi, Hiroaki and Xiong, Caiming and Savarese, Silvio and Zhou, Yingbo},
journal={arXiv preprint},
year={2023}
}
```
|
TJKlein/CLIP-ViT | TJKlein | 2023-05-21T19:26:32Z | 132 | 0 | transformers | ["transformers", "pytorch", "clip", "zero-shot-image-classification", "generated_from_trainer", "endpoints_compatible", "region:us"] | zero-shot-image-classification | 2023-05-21T19:07:37Z |
---
tags:
- generated_from_trainer
model-index:
- name: clip-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clip-finetuned
This model is a fine-tuned version of [openai/clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100.0
### Training results
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 2.0.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
MrHup/coloring-book | MrHup | 2023-05-21T19:18:38Z | 0 | 39 | null | ["license:gpl-3.0", "region:us"] | null | 2023-01-24T13:02:38Z |
---
license: gpl-3.0
---
<img src="https://i.imgur.com/z2ODdOr.jpg" alt="drawing" style="width:300px;"/>
# About model
A simple model trained on a custom dataset containing over 100 coloring-book-style images.
If you enjoy this model and would like me to improve on it, [buy me a coffee](https://www.buymeacoffee.com/mrhup) ☕
# Installation:
Download both the ckpt and yaml files. Ensure that the same naming pattern is used, and copy them under the models/Stable-Diffusion path in your local or cloud SD installation. Stable Diffusion 2.1 is required for the model to work correctly.
# Black images issue:
Stable Diffusion 2.1 models need a modified web-UI config. If you are getting black images, go to your config file and add `--no-half` to `COMMANDLINE_ARGS=`; it could potentially work with `--xformers` instead (if supported). This flag might slow your generations a bit but will not negatively affect your output.
# Prompt suggestion:
`bichon havanese wearing sunglasses COLR_001, (((white background))), coloring book, line art, high resolution, black and white, colorless`
Negative: `((watermark)), (text), color, shading, gradient, shadows, transparency, noisy, blurred`
<img src="https://i.imgur.com/3iDf43z.png" alt="drawing" style="width:300px;"/><img src="https://i.imgur.com/TwVxNe1.jpg" alt="drawing" style="width:300px;"/>
<img src="https://i.imgur.com/vKrsyGe.jpg" alt="drawing" style="width:300px;"/><img src="https://i.imgur.com/Mp3vO5i.jpg" alt="drawing" style="width:300px;"/>
|
mxalmeida/bert-finetuned-squad | mxalmeida | 2023-05-21T19:17:15Z | 104 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us"] | question-answering | 2023-05-21T16:57:07Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Buseak/canine_vowelizer_2105_v6 | Buseak | 2023-05-21T19:11:29Z | 89 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "canine", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2023-05-21T16:55:02Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: canine_vowelizer_2105_v6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# canine_vowelizer_2105_v6
This model is a fine-tuned version of [google/canine-s](https://huggingface.co/google/canine-s) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1704
- Precision: 0.9998
- Recall: 0.9998
- F1: 0.9998
- Accuracy: 0.9391
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.4826 | 1.0 | 3885 | 0.4310 | 0.9997 | 0.9998 | 0.9997 | 0.8467 |
| 0.4118 | 2.0 | 7770 | 0.3556 | 0.9997 | 0.9998 | 0.9997 | 0.8748 |
| 0.369 | 3.0 | 11655 | 0.3126 | 0.9997 | 0.9998 | 0.9997 | 0.8893 |
| 0.339 | 4.0 | 15540 | 0.2811 | 0.9997 | 0.9998 | 0.9998 | 0.9014 |
| 0.3192 | 5.0 | 19425 | 0.2589 | 0.9997 | 0.9998 | 0.9998 | 0.9095 |
| 0.3052 | 6.0 | 23310 | 0.2399 | 0.9997 | 0.9998 | 0.9998 | 0.9157 |
| 0.281 | 7.0 | 27195 | 0.2252 | 0.9997 | 0.9998 | 0.9998 | 0.9207 |
| 0.2749 | 8.0 | 31080 | 0.2117 | 0.9998 | 0.9998 | 0.9998 | 0.9248 |
| 0.2589 | 9.0 | 34965 | 0.2011 | 0.9998 | 0.9998 | 0.9998 | 0.9285 |
| 0.253 | 10.0 | 38850 | 0.1940 | 0.9998 | 0.9998 | 0.9998 | 0.9314 |
| 0.2428 | 11.0 | 42735 | 0.1842 | 0.9998 | 0.9998 | 0.9998 | 0.9348 |
| 0.2433 | 12.0 | 46620 | 0.1783 | 0.9998 | 0.9998 | 0.9998 | 0.9365 |
| 0.2265 | 13.0 | 50505 | 0.1751 | 0.9998 | 0.9998 | 0.9998 | 0.9375 |
| 0.2244 | 14.0 | 54390 | 0.1721 | 0.9998 | 0.9998 | 0.9998 | 0.9387 |
| 0.2203 | 15.0 | 58275 | 0.1704 | 0.9998 | 0.9998 | 0.9998 | 0.9391 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
kaaniince/imdb_sentiment_analysis | kaaniince | 2023-05-21T19:05:31Z | 61 | 0 | transformers | ["transformers", "tf", "roberta", "text-classification", "generated_from_keras_callback", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-05-21T15:00:00Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: kaaniince/imdb_sentiment_analysis
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# kaaniince/imdb_sentiment_analysis
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0449
- Train Accuracy: 0.9547
- Validation Loss: 0.1589
- Validation Accuracy: 0.9547
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 9375, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2035 | 0.9432 | 0.1493 | 0.9432 | 0 |
| 0.1023 | 0.9524 | 0.1360 | 0.9524 | 1 |
| 0.0449 | 0.9547 | 0.1589 | 0.9547 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
agemagician/mlong-t5-tglobal-xl | agemagician | 2023-05-21T18:53:43Z | 5 | 9 | transformers | ["transformers", "pytorch", "jax", "longt5", "text2text-generation", "multilingual", "af", "am", "ar", "az", "be", "bg", "bn", "ca", "ceb", "co", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fil", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "haw", "hi", "hmn", "ht", "hu", "hy", "ig", "is", "it", "iw", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lb", "lo", "lt", "lv", "mg", "mi", "mk", "ml", "mn", "mr", "ms", "mt", "my", "ne", "nl", "no", "ny", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "sm", "sn", "so", "sq", "sr", "st", "su", "sv", "sw", "ta", "te", "tg", "th", "tr", "uk", "und", "ur", "uz", "vi", "xh", "yi", "yo", "zh", "zu", "dataset:mc4", "arxiv:2305.11129", "arxiv:1912.08777", "arxiv:2112.07916", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2023-05-19T21:28:42Z |
---
license: apache-2.0
language:
- multilingual
- af
- am
- ar
- az
- be
- bg
- bn
- ca
- ceb
- co
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fil
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- haw
- hi
- hmn
- ht
- hu
- hy
- ig
- is
- it
- iw
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lb
- lo
- lt
- lv
- mg
- mi
- mk
- ml
- mn
- mr
- ms
- mt
- my
- ne
- nl
- no
- ny
- pa
- pl
- ps
- pt
- ro
- ru
- sd
- si
- sk
- sl
- sm
- sn
- so
- sq
- sr
- st
- su
- sv
- sw
- ta
- te
- tg
- th
- tr
- uk
- und
- ur
- uz
- vi
- xh
- yi
- yo
- zh
- zu
datasets:
- mc4
---
# MLongT5 (transient-global attention, xl-sized model)
MLongT5 model pre-trained on Multi-language corpus. The model was introduced in the paper [mLongT5: A Multilingual and Efficient Text-To-Text Transformer for Longer Sequences](https://arxiv.org/pdf/2305.11129.pdf) by Uthus et al. and first released in [the LongT5 repository](https://github.com/google-research/longt5). All the model architecture and configuration can be found in [Flaxformer repository](https://github.com/google/flaxformer) which uses another Google research project repository [T5x](https://github.com/google-research/t5x).
Disclaimer: The team releasing MLongT5 did not write a model card for this model so this model card has been written by Ahmed Elnaggar.
## Model description
The MLongT5 model is an encoder-decoder transformer pre-trained in a text-to-text denoising generative setting ([Pegasus-like generation pre-training](https://arxiv.org/pdf/1912.08777.pdf)). MLongT5 is an extension of the [LongT5 model](https://arxiv.org/abs/2112.07916), and it enables using one of two efficient attention mechanisms: (1) Local attention or (2) Transient-Global attention. The use of attention sparsity patterns allows the model to efficiently handle long input sequences.
MLongT5 is particularly effective when fine-tuned for text generation (summarization, question answering) which requires handling long input sequences (up to 16,384 tokens).
## Intended uses & limitations
The model is mostly meant to be fine-tuned on a supervised dataset. See the [model hub](https://huggingface.co/models?search=mlongt5) to look for fine-tuned versions on a task that interests you.
### How to use
The following shows how one can extract the last hidden representation from the model.
```python
from transformers import T5Tokenizer, LongT5Model
tokenizer = T5Tokenizer.from_pretrained("agemagician/mlong-t5-tglobal-xl")
model = LongT5Model.from_pretrained("agemagician/mlong-t5-tglobal-xl")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
The following shows how one can predict masked passages using the different denoising strategies.
### S-Denoising
For *S-Denoising*, please make sure to prompt the text with the prefix `[S2S]` as shown below.
```python
from transformers import LongT5ForConditionalGeneration, T5Tokenizer
import torch
model = LongT5ForConditionalGeneration.from_pretrained("agemagician/mlong-t5-tglobal-xl", low_cpu_mem_usage=True, torch_dtype=torch.bfloat16).to("cuda")
tokenizer = T5Tokenizer.from_pretrained("agemagician/mlong-t5-tglobal-xl")
input_string = "[S2S] Mr. Dursley was the director of a firm called Grunnings, which made drills. He was a big, solid man with a bald head. Mrs. Dursley was thin and blonde and more than the usual amount of neck, which came in very useful as she spent so much of her time craning over garden fences, spying on the neighbours. The Dursleys had a small son called Dudley and in their opinion there was no finer boy anywhere <extra_id_0>"
inputs = tokenizer(input_string, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(inputs, max_length=200)
print(tokenizer.decode(outputs[0]))
```
### R-Denoising
For *R-Denoising*, please make sure to prompt the text with the prefix `[NLU]` as shown below.
```python
from transformers import LongT5ForConditionalGeneration, T5Tokenizer
import torch
model = LongT5ForConditionalGeneration.from_pretrained("agemagician/mlong-t5-tglobal-xl", low_cpu_mem_usage=True, torch_dtype=torch.bfloat16).to("cuda")
tokenizer = T5Tokenizer.from_pretrained("agemagician/mlong-t5-tglobal-xl")
input_string = "[NLU] Mr. Dursley was the director of a firm called <extra_id_0>, which made <extra_id_1>. He was a big, solid man with a bald head. Mrs. Dursley was thin and <extra_id_2> of neck, which came in very useful as she spent so much of her time <extra_id_3>. The Dursleys had a small son called Dudley and <extra_id_4>"
inputs = tokenizer(input_string, return_tensors="pt", add_special_tokens=False).input_ids.to("cuda")
outputs = model.generate(inputs, max_length=200)
print(tokenizer.decode(outputs[0]))
```
### X-Denoising
For *X-Denoising*, please make sure to prompt the text with the prefix `[NLG]` as shown below.
```python
from transformers import LongT5ForConditionalGeneration, T5Tokenizer
import torch
model = LongT5ForConditionalGeneration.from_pretrained("agemagician/mlong-t5-tglobal-xl", low_cpu_mem_usage=True, torch_dtype=torch.bfloat16).to("cuda")
tokenizer = T5Tokenizer.from_pretrained("agemagician/mlong-t5-tglobal-xl")
input_string = "[NLG] Mr. Dursley was the director of a firm called Grunnings, which made drills. He was a big, solid man with a bald head. Mrs. Dursley was thin and blonde and more than the usual amount of neck, which came in very useful as she spent so much of her time craning over garden fences, spying on the neighbours. The Dursleys had a small son called Dudley and in their opinion there was no finer boy anywhere. <extra_id_0>"
model.cuda()
inputs = tokenizer(input_string, return_tensors="pt", add_special_tokens=False).input_ids.to("cuda")
outputs = model.generate(inputs, max_length=200)
print(tokenizer.decode(outputs[0]))
```
### BibTeX entry and citation info
```bibtex
@misc{uthus2023mlongt5,
title={mLongT5: A Multilingual and Efficient Text-To-Text Transformer for Longer Sequences},
author={David Uthus and Santiago Ontañón and Joshua Ainslie and Mandy Guo},
year={2023},
eprint={2305.11129},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
michaelfeil/ct2fast-codegen-350M-multi | michaelfeil | 2023-05-21T18:53:18Z | 5 | 1 | transformers | ["transformers", "ctranslate2", "int8", "float16", "arxiv:2203.13474", "license:bsd-3-clause", "endpoints_compatible", "region:us"] | null | 2023-05-21T18:46:44Z |
---
tags:
- ctranslate2
- int8
- float16
license: bsd-3-clause
---
# Fast Inference with CTranslate2
Speed up inference while reducing memory by 2x-4x using int8 inference in C++ on CPU or GPU.
Quantized version of [Salesforce/codegen-350M-multi](https://huggingface.co/Salesforce/codegen-350M-multi)
```bash
pip install hf-hub-ctranslate2>=2.0.8
```
Converted on 2023-05-21 using
```
ct2-transformers-converter --model Salesforce/codegen-350M-multi --output_dir /home/michael/tmp-ct2fast-codegen-350M-multi --force --copy_files merges.txt tokenizer.json README.md tokenizer_config.json vocab.json special_tokens_map.json added_tokens.json .gitattributes --quantization float16
```
Checkpoint compatible with [ctranslate2>=3.13.0](https://github.com/OpenNMT/CTranslate2) and [hf-hub-ctranslate2>=2.0.6](https://github.com/michaelfeil/hf-hub-ctranslate2)
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "michaelfeil/ct2fast-codegen-350M-multi"
# use either TranslatorCT2fromHfHub or GeneratorCT2fromHfHub here, depending on model.
model = GeneratorCT2fromHfHub(
# load in int8 on CUDA
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
# tokenizer=AutoTokenizer.from_pretrained("Salesforce/codegen-350M-multi")
)
outputs = model.generate(
text=["def print_hello_world():", "def hello_name(name:"],
max_length=64
)
print(outputs)
```
# Licence and other remarks:
This is just a quantized version. Licence conditions are intended to be identical to those of the original Hugging Face repo.
# Original description
# CodeGen (CodeGen-Multi 350M)
## Model description
CodeGen is a family of autoregressive language models for **program synthesis** from the paper: [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong. The models are originally released in [this repository](https://github.com/salesforce/CodeGen), under 3 pre-training data variants (`NL`, `Multi`, `Mono`) and 4 model size variants (`350M`, `2B`, `6B`, `16B`).
The checkpoint included in this repository is denoted as **CodeGen-Multi 350M** in the paper, where "Multi" means the model is initialized with *CodeGen-NL 350M* and further pre-trained on a dataset of multiple programming languages, and "350M" refers to the number of trainable parameters.
## Training data
This checkpoint (CodeGen-Multi 350M) was first initialized with *CodeGen-NL 350M*, and then pre-trained on [BigQuery](https://console.cloud.google.com/marketplace/details/github/github-repos), a large-scale dataset of multiple programming languages from GitHub repositories. The data consists of 119.2B tokens and includes C, C++, Go, Java, JavaScript, and Python.
## Training procedure
CodeGen was trained using cross-entropy loss to maximize the likelihood of sequential inputs.
The family of models was trained using multiple TPU-v4-512 by Google, leveraging data and model parallelism.
See Section 2.3 of the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Evaluation results
We evaluate our models on two code generation benchmarks: HumanEval and MTPB. Please refer to the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Intended Use and Limitations
As an autoregressive language model, CodeGen is capable of extracting features from given natural language and programming language texts, and calculating the likelihood of them.
However, the model is intended for and best at **program synthesis**, that is, generating executable code given English prompts, where the prompts should be in the form of a comment string. The model can complete partially-generated code as well.
## How to use
This model can be easily loaded using the `AutoModelForCausalLM` functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-350M-multi")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-350M-multi")
text = "def hello_world():"
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```
## BibTeX entry and citation info
```bibtex
@article{Nijkamp2022ACP,
title={A Conversational Paradigm for Program Synthesis},
author={Nijkamp, Erik and Pang, Bo and Hayashi, Hiroaki and Tu, Lifu and Wang, Huan and Zhou, Yingbo and Savarese, Silvio and Xiong, Caiming},
journal={arXiv preprint},
year={2022}
}
```
|
agemagician/mlong-t5-tglobal-large | agemagician | 2023-05-21T18:49:04Z | 22 | 5 | transformers | ["transformers", "pytorch", "jax", "longt5", "text2text-generation", "multilingual", "af", "am", "ar", "az", "be", "bg", "bn", "ca", "ceb", "co", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fil", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "haw", "hi", "hmn", "ht", "hu", "hy", "ig", "is", "it", "iw", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lb", "lo", "lt", "lv", "mg", "mi", "mk", "ml", "mn", "mr", "ms", "mt", "my", "ne", "nl", "no", "ny", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "sm", "sn", "so", "sq", "sr", "st", "su", "sv", "sw", "ta", "te", "tg", "th", "tr", "uk", "und", "ur", "uz", "vi", "xh", "yi", "yo", "zh", "zu", "dataset:mc4", "arxiv:2305.11129", "arxiv:1912.08777", "arxiv:2112.07916", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2023-05-19T21:09:25Z |
---
license: apache-2.0
language:
- multilingual
- af
- am
- ar
- az
- be
- bg
- bn
- ca
- ceb
- co
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fil
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- haw
- hi
- hmn
- ht
- hu
- hy
- ig
- is
- it
- iw
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lb
- lo
- lt
- lv
- mg
- mi
- mk
- ml
- mn
- mr
- ms
- mt
- my
- ne
- nl
- no
- ny
- pa
- pl
- ps
- pt
- ro
- ru
- sd
- si
- sk
- sl
- sm
- sn
- so
- sq
- sr
- st
- su
- sv
- sw
- ta
- te
- tg
- th
- tr
- uk
- und
- ur
- uz
- vi
- xh
- yi
- yo
- zh
- zu
datasets:
- mc4
---
# MLongT5 (transient-global attention, large-sized model)
MLongT5 model pre-trained on Multi-language corpus. The model was introduced in the paper [mLongT5: A Multilingual and Efficient Text-To-Text Transformer for Longer Sequences](https://arxiv.org/pdf/2305.11129.pdf) by Uthus et al. and first released in [the LongT5 repository](https://github.com/google-research/longt5). All the model architecture and configuration can be found in [Flaxformer repository](https://github.com/google/flaxformer) which uses another Google research project repository [T5x](https://github.com/google-research/t5x).
Disclaimer: The team releasing MLongT5 did not write a model card for this model so this model card has been written by Ahmed Elnaggar.
## Model description
The MLongT5 model is an encoder-decoder transformer pre-trained in a text-to-text denoising generative setting ([Pegasus-like generation pre-training](https://arxiv.org/pdf/1912.08777.pdf)). MLongT5 is an extension of the [LongT5 model](https://arxiv.org/abs/2112.07916), and it enables using one of two efficient attention mechanisms: (1) Local attention or (2) Transient-Global attention. The use of attention sparsity patterns allows the model to efficiently handle long input sequences.
MLongT5 is particularly effective when fine-tuned for text generation (summarization, question answering) which requires handling long input sequences (up to 16,384 tokens).
## Intended uses & limitations
The model is mostly meant to be fine-tuned on a supervised dataset. See the [model hub](https://huggingface.co/models?search=mlongt5) to look for fine-tuned versions on a task that interests you.
### How to use
The following shows how one can extract the last hidden representation from the model.
```python
from transformers import T5Tokenizer, LongT5Model
tokenizer = T5Tokenizer.from_pretrained("agemagician/mlong-t5-tglobal-large")
model = LongT5Model.from_pretrained("agemagician/mlong-t5-tglobal-large")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
The following shows how one can predict masked passages using the different denoising strategies.
### S-Denoising
For *S-Denoising*, please make sure to prompt the text with the prefix `[S2S]` as shown below.
```python
from transformers import LongT5ForConditionalGeneration, T5Tokenizer
import torch
model = LongT5ForConditionalGeneration.from_pretrained("agemagician/mlong-t5-tglobal-large", low_cpu_mem_usage=True, torch_dtype=torch.bfloat16).to("cuda")
tokenizer = T5Tokenizer.from_pretrained("agemagician/mlong-t5-tglobal-large")
input_string = "[S2S] Mr. Dursley was the director of a firm called Grunnings, which made drills. He was a big, solid man with a bald head. Mrs. Dursley was thin and blonde and more than the usual amount of neck, which came in very useful as she spent so much of her time craning over garden fences, spying on the neighbours. The Dursleys had a small son called Dudley and in their opinion there was no finer boy anywhere <extra_id_0>"
inputs = tokenizer(input_string, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(inputs, max_length=200)
print(tokenizer.decode(outputs[0]))
```
### R-Denoising
For *R-Denoising*, please make sure to prompt the text with the prefix `[NLU]` as shown below.
```python
from transformers import LongT5ForConditionalGeneration, T5Tokenizer
import torch
model = LongT5ForConditionalGeneration.from_pretrained("agemagician/mlong-t5-tglobal-large", low_cpu_mem_usage=True, torch_dtype=torch.bfloat16).to("cuda")
tokenizer = T5Tokenizer.from_pretrained("agemagician/mlong-t5-tglobal-large")
input_string = "[NLU] Mr. Dursley was the director of a firm called <extra_id_0>, which made <extra_id_1>. He was a big, solid man with a bald head. Mrs. Dursley was thin and <extra_id_2> of neck, which came in very useful as she spent so much of her time <extra_id_3>. The Dursleys had a small son called Dudley and <extra_id_4>"
inputs = tokenizer(input_string, return_tensors="pt", add_special_tokens=False).input_ids.to("cuda")
outputs = model.generate(inputs, max_length=200)
print(tokenizer.decode(outputs[0]))
```
### X-Denoising
For *X-Denoising*, please make sure to prompt the text with the prefix `[NLG]` as shown below.
```python
from transformers import LongT5ForConditionalGeneration, T5Tokenizer
import torch
model = LongT5ForConditionalGeneration.from_pretrained("agemagician/mlong-t5-tglobal-large", low_cpu_mem_usage=True, torch_dtype=torch.bfloat16).to("cuda")
tokenizer = T5Tokenizer.from_pretrained("agemagician/mlong-t5-tglobal-large")
input_string = "[NLG] Mr. Dursley was the director of a firm called Grunnings, which made drills. He was a big, solid man with a bald head. Mrs. Dursley was thin and blonde and more than the usual amount of neck, which came in very useful as she spent so much of her time craning over garden fences, spying on the neighbours. The Dursleys had a small son called Dudley and in their opinion there was no finer boy anywhere. <extra_id_0>"
model.cuda()
inputs = tokenizer(input_string, return_tensors="pt", add_special_tokens=False).input_ids.to("cuda")
outputs = model.generate(inputs, max_length=200)
print(tokenizer.decode(outputs[0]))
```
### BibTeX entry and citation info
```bibtex
@misc{uthus2023mlongt5,
title={mLongT5: A Multilingual and Efficient Text-To-Text Transformer for Longer Sequences},
author={David Uthus and Santiago Ontañón and Joshua Ainslie and Mandy Guo},
year={2023},
eprint={2305.11129},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
wasimar/a2c-PandaReachDense-v2 | wasimar | 2023-05-21T18:48:22Z | 1 | 0 | stable-baselines3 | ["stable-baselines3", "PandaReachDense-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2023-05-21T17:12:19Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.39 +/- 0.20
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
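In the meantime, a minimal loading sketch (the checkpoint filename below is an assumption based on the usual `huggingface_sb3` naming convention, not taken from this repo):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Hypothetical filename - check the repository's file list for the actual checkpoint name.
checkpoint = load_from_hub(
    repo_id="wasimar/a2c-PandaReachDense-v2",
    filename="a2c-PandaReachDense-v2.zip",
)
model = A2C.load(checkpoint)
```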
|
Ztijn/distilbert-base-multilingual-cased-finetuned-squad | Ztijn | 2023-05-21T17:37:03Z | 3 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us"] | question-answering | 2023-05-21T15:24:26Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-multilingual-cased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4552
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.5104 | 1.0 | 8558 | 1.4392 |
| 1.2578 | 2.0 | 17116 | 1.4380 |
| 1.0438 | 3.0 | 25674 | 1.4552 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
jinhybr/distilbert_classifier_newsgroups | jinhybr | 2023-05-21T17:31:47Z | 61 | 0 | transformers | ["transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-05-21T17:31:15Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distilbert_classifier_newsgroups
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilbert_classifier_newsgroups
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1908, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
divisiondeariza/ekua-man | divisiondeariza | 2023-05-21T17:19:55Z | 30 | 0 | diffusers | ["diffusers", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us"] | text-to-image | 2023-05-21T17:15:29Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
---
### ekua-man on Stable Diffusion via Dreambooth
#### model by divisiondeariza
This is the Stable Diffusion model fine-tuned on the ekua-man concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **<ekua> bearded masculine man**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
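As a minimal local inference sketch (illustrative only; the dtype and device settings are assumptions, not the author's exact setup):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the Dreambooth fine-tune and prompt it with the instance prompt shown above.
pipe = StableDiffusionPipeline.from_pretrained(
    "divisiondeariza/ekua-man", torch_dtype=torch.float16
).to("cuda")
image = pipe("<ekua> bearded masculine man").images[0]
image.save("ekua-man.png")
```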
Here are the images used for training this concept:

























|
pabagcha/roberta_crypto_profiling_task1_deberta | pabagcha | 2023-05-21T17:13:32Z | 5 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "deberta-v2", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-05-21T16:47:27Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: roberta_crypto_profiling_task1_deberta
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta_crypto_profiling_task1_deberta
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4722
- Accuracy: 0.5176
- F1: 0.4814
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 211 | 0.9966 | 0.5882 | 0.5030 |
| No log | 2.0 | 422 | 1.7145 | 0.5647 | 0.5360 |
| 0.5073 | 3.0 | 633 | 2.2226 | 0.5176 | 0.4695 |
| 0.5073 | 4.0 | 844 | 2.1071 | 0.5647 | 0.5222 |
| 0.112 | 5.0 | 1055 | 2.4722 | 0.5176 | 0.4814 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
MichaelHuang/muril_base_cased_urdu_ner | MichaelHuang | 2023-05-21T16:54:16Z | 207 | 2 | transformers | ["transformers", "pytorch", "bert", "token-classification", "ner", "ur", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2023-05-16T01:16:23Z |
---
language:
- ur
tags:
- ner
---
# NER in Urdu
## muril_base_cased_urdu_ner
Base model is [google/muril-base-cased](https://huggingface.co/google/muril-base-cased), a BERT model pre-trained on 17 Indian languages and their transliterated counterparts.
Urdu NER dataset is translated from the Hindi NER dataset from [HiNER](https://github.com/cfiltnlp/HiNER).
## Usage
### example:
```python
from transformers import AutoModelForTokenClassification, AutoTokenizer
import torch
model = AutoModelForTokenClassification.from_pretrained("MichaelHuang/muril_base_cased_urdu_ner")
tokenizer = AutoTokenizer.from_pretrained("google/muril-base-cased")
# Define the labels dictionary
labels_dict = {
0: "B-FESTIVAL",
1: "B-GAME",
2: "B-LANGUAGE",
3: "B-LITERATURE",
4: "B-LOCATION",
5: "B-MISC",
6: "B-NUMEX",
7: "B-ORGANIZATION",
8: "B-PERSON",
9: "B-RELIGION",
10: "B-TIMEX",
11: "I-FESTIVAL",
12: "I-GAME",
13: "I-LANGUAGE",
14: "I-LITERATURE",
15: "I-LOCATION",
16: "I-MISC",
17: "I-NUMEX",
18: "I-ORGANIZATION",
19: "I-PERSON",
20: "I-RELIGION",
21: "I-TIMEX",
22: "O"
}
def ner_predict(sentence, model, tokenizer, labels_dict):
# Tokenize the input sentence
inputs = tokenizer(sentence, return_tensors="pt", padding=True, truncation=True, max_length=128)
# Perform inference
with torch.no_grad():
outputs = model(**inputs)
# Get the predicted labels
predicted_labels = torch.argmax(outputs.logits, dim=2)
# Convert tokens and labels to lists
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
labels = predicted_labels.squeeze().tolist()
# Map numeric labels to string labels
predicted_labels = [labels_dict[label] for label in labels]
# Combine tokens and labels
result = list(zip(tokens, predicted_labels))
return result
test_sentence = "امیتابھ اور ریکھا کی فلم 'گنگا کی سوگندھ' 10 فروری سنہ 1978 کو ریلیز ہوئی تھی۔ اس کے بعد راکھی، رندھیر کپور اور نیتو سنگھ کے ساتھ 'قسمے وعدے' 21 اپریل 1978 کو ریلیز ہوئی۔"
predictions = ner_predict(test_sentence, model, tokenizer, labels_dict)
for token, label in predictions:
print(f"{token}: {label}")
```
|
polejowska/detr-r50-cd45rb-7k-8ah | polejowska | 2023-05-21T16:49:19Z | 161 | 0 | transformers | ["transformers", "pytorch", "detr", "object-detection", "generated_from_trainer", "dataset:cd45rb", "license:apache-2.0", "endpoints_compatible", "region:us"] | object-detection | 2023-05-21T13:15:08Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cd45rb
model-index:
- name: detr-r50-cd45rb-7k-8ah
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-r50-cd45rb-7k-8ah
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cd45rb dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7164
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5817 | 1.0 | 896 | 1.9431 |
| 2.2882 | 2.0 | 1792 | 1.8638 |
| 2.2236 | 3.0 | 2688 | 1.8089 |
| 2.1649 | 4.0 | 3584 | 1.7843 |
| 2.1275 | 5.0 | 4480 | 1.7593 |
| 2.1176 | 6.0 | 5376 | 1.7542 |
| 2.1037 | 7.0 | 6272 | 1.7386 |
| 2.0969 | 8.0 | 7168 | 1.7242 |
| 2.0794 | 9.0 | 8064 | 1.7200 |
| 2.0792 | 10.0 | 8960 | 1.7164 |
The training took 3 hours 15 minutes on an NVIDIA GPU.
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Siliconic/raven-x-001 | Siliconic | 2023-05-21T16:45:19Z | 6 | 0 | transformers | ["transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2023-05-21T15:14:24Z |
---
{}
---
Raven-X-001
---
Raven-X model v1 by Siliconic Technologies.
This is a custom model for Raven AI.
This model is a modified version of the vicuna-13b-delta and LLaMA models, trained on the OASST, ChatGPT, and ShareGPT datasets as well as Wikipedia.
Created and fine-tuned by Akshit Kumar.
The Raven AI System is a modified version of the Visda AI System.
|
daghspam/my_awesome_qa_model | daghspam | 2023-05-21T16:42:09Z | 103 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "endpoints_compatible", "region:us"] | question-answering | 2023-05-21T13:01:48Z |
---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7136
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 250 | 2.3978 |
| 2.7051 | 2.0 | 500 | 1.7693 |
| 2.7051 | 3.0 | 750 | 1.7136 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
wasimar/a2c-AntBulletEnv-v0 | wasimar | 2023-05-21T16:20:37Z | 0 | 0 | stable-baselines3 | ["stable-baselines3", "AntBulletEnv-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2023-05-21T16:19:25Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1665.37 +/- 179.37
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
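In the meantime, a minimal loading sketch (the checkpoint filename below is an assumption based on the usual `huggingface_sb3` naming convention, not taken from this repo):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Hypothetical filename - check the repository's file list for the actual checkpoint name.
checkpoint = load_from_hub(
    repo_id="wasimar/a2c-AntBulletEnv-v0",
    filename="a2c-AntBulletEnv-v0.zip",
)
model = A2C.load(checkpoint)
```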
|
NOTIF/chater | NOTIF | 2023-05-21T16:14:17Z | 0 | 0 | fairseq | ["fairseq", "text-classification", "aa", "license:openrail", "region:us"] | text-classification | 2023-05-21T16:11:57Z |
---
license: openrail
language:
- aa
library_name: fairseq
pipeline_tag: text-classification
---
|
Eldund/ppo-SnowballTargetTESTCOLAB | Eldund | 2023-05-21T16:12:14Z | 3 | 0 | ml-agents | ["ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us"] | reinforcement-learning | 2023-05-21T16:12:09Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Find your model_id: Eldund/ppo-SnowballTargetTESTCOLAB
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
andres-vs/roberta_finetuned_squad_2 | andres-vs | 2023-05-21T16:11:54Z | 0 | 0 | null | ["pytorch", "tensorboard", "generated_from_trainer", "dataset:squad", "license:mit", "region:us"] | null | 2023-05-21T12:28:59Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta_finetuned_squad_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta_finetuned_squad_2
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
khanhj/testgpt2chatbot | khanhj | 2023-05-21T16:10:40Z | 212 | 0 | transformers | ["transformers", "pytorch", "tf", "jax", "tflite", "rust", "safetensors", "gpt2", "text-generation", "exbert", "en", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2023-05-21T04:35:25Z |
---
language: en
tags:
- exbert
license: mit
---
# GPT-2
Test the whole generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in
[this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).
Disclaimer: The team releasing GPT-2 also wrote a
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model. Content from this model card
has been written by the Hugging Face team to complete the information they provided and give specific examples of bias.
## Model description
GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model internally uses a masking mechanism to make sure the
predictions for the token `i` use only the inputs from `1` to `i` and not the future tokens.
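To make the shifted-target and causal-mask idea concrete, here is a small illustrative sketch (arbitrary example token ids, not the actual GPT-2 training code):
```python
import torch

token_ids = torch.tensor([[15496, 11, 314, 1101, 257]])  # arbitrary example ids
inputs, targets = token_ids[:, :-1], token_ids[:, 1:]    # target at position i is the next token

# Causal mask: position i may attend only to positions 1..i, never to future tokens.
seq_len = inputs.size(1)
causal_mask = torch.tril(torch.ones(seq_len, seq_len)).bool()
print(causal_mask)
```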
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a
prompt.
This is the **smallest** version of GPT-2, with 124M parameters.
**Related Models:** [GPT-Large](https://huggingface.co/gpt2-large), [GPT-Medium](https://huggingface.co/gpt2-medium) and [GPT-XL](https://huggingface.co/gpt2-xl)
## Intended uses & limitations
You can use the raw model for text generation or fine-tune it to a downstream task. See the
[model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you.
### How to use
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2')
>>> set_seed(42)
>>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)
[{'generated_text': "Hello, I'm a language model, a language for thinking, a language for expressing thoughts."},
{'generated_text': "Hello, I'm a language model, a compiler, a compiler library, I just want to know how I build this kind of stuff. I don"},
{'generated_text': "Hello, I'm a language model, and also have more than a few of your own, but I understand that they're going to need some help"},
{'generated_text': "Hello, I'm a language model, a system model. I want to know my language so that it might be more interesting, more user-friendly"},
{'generated_text': 'Hello, I\'m a language model, not a language model"\n\nThe concept of "no-tricks" comes in handy later with new'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2Model.from_pretrained('gpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = TFGPT2Model.from_pretrained('gpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of
unfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases):
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases
> that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do
> not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a
> study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,
> and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar
> levels of caution around use cases that are sensitive to biases around human attributes.
Here's an example of how the model can have biased predictions:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2')
>>> set_seed(42)
>>> generator("The White man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The White man worked as a mannequin for'},
{'generated_text': 'The White man worked as a maniser of the'},
{'generated_text': 'The White man worked as a bus conductor by day'},
{'generated_text': 'The White man worked as a plumber at the'},
{'generated_text': 'The White man worked as a journalist. He had'}]
>>> set_seed(42)
>>> generator("The Black man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The Black man worked as a man at a restaurant'},
{'generated_text': 'The Black man worked as a car salesman in a'},
{'generated_text': 'The Black man worked as a police sergeant at the'},
{'generated_text': 'The Black man worked as a man-eating monster'},
{'generated_text': 'The Black man worked as a slave, and was'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
40GB of text but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
[here](https://github.com/openai/gpt-2/blob/master/domains.txt).
## Training procedure
### Preprocessing
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
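For illustration, here is a small sketch of that tokenizer in action (it uses the same `transformers` API as the examples above):
```python
from transformers import GPT2Tokenizer
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
print(len(tokenizer))  # 50257 entries in the byte-level BPE vocabulary
ids = tokenizer("Hello, I'm a language model,")["input_ids"]
print(ids)  # token ids; during pretraining such ids are packed into 1024-token sequences
```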
The larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact
details of training.
## Evaluation results
The model achieves the following results without any fine-tuning (zero-shot):
| Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW |
|:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:|
| (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) |
| | 35.13 | 45.99 | 87.65 | 83.4 | 29.41 | 65.85 | 1.16 | 1.17 | 37.50 | 75.20 |
### BibTeX entry and citation info
```bibtex
@article{radford2019language,
title={Language Models are Unsupervised Multitask Learners},
author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
year={2019}
}
```
<a href="https://huggingface.co/exbert/?model=gpt2">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
DevozZ/LunarLander-v2
|
DevozZ
| 2023-05-21T16:06:12Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-21T15:49:21Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -58.93 +/- 81.11
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'LunarLander'
'seed': 42
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 500000
'learning_rate': 0.001
'num_envs': 16
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'DevozZ/LunarLander-v2'
'batch_size': 2048
'minibatch_size': 512}
```
|
asenella/mmnist_JNFDccaconfig2_seed_2_ratio_05_c
|
asenella
| 2023-05-21T15:59:27Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-05-21T15:59:20Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
zollimar/pokemon_rec
|
zollimar
| 2023-05-21T15:56:56Z | 5 | 0 |
tf-keras
|
[
"tf-keras",
"mobilenet",
"image-classification",
"region:us"
] |
image-classification
| 2023-05-09T13:11:08Z |
---
pipeline_tag: image-classification
---
|
gcmsrc/distilbert-base-uncased-finetuned-emotion
|
gcmsrc
| 2023-05-21T15:46:15Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-09-05T15:27:07Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9355
- name: F1
type: f1
value: 0.9356480877541032
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1424
- Accuracy: 0.9355
- F1: 0.9356
## Model description
More information needed
## Intended uses & limitations
More information needed
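As a minimal inference sketch (not part of the original card; it assumes the checkpoint is public on the Hub and ships its tokenizer files):
```python
from transformers import pipeline
# minimal sketch; the labels come from the emotion dataset the model was fine-tuned on
classifier = pipeline("text-classification", model="gcmsrc/distilbert-base-uncased-finetuned-emotion")
print(classifier("I am thrilled about the new release!"))
```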
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5311 | 1.0 | 250 | 0.1817 | 0.932 | 0.9317 |
| 0.14 | 2.0 | 500 | 0.1483 | 0.9365 | 0.9368 |
| 0.0915 | 3.0 | 750 | 0.1424 | 0.9355 | 0.9356 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.2+cu102
- Datasets 2.8.0
- Tokenizers 0.10.3
|
zonghaoyang/DistilRoBERTa
|
zonghaoyang
| 2023-05-21T15:43:14Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-21T13:32:06Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: DistilRoBERTa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DistilRoBERTa
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5297
- Accuracy: 0.8950
- F1: 0.5834
- Precision: 0.6414
- Recall: 0.5351
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.2559 | 1.0 | 1626 | 0.2590 | 0.8987 | 0.6029 | 0.6538 | 0.5595 |
| 0.2191 | 2.0 | 3252 | 0.3010 | 0.8954 | 0.6025 | 0.6305 | 0.5770 |
| 0.1824 | 3.0 | 4878 | 0.4143 | 0.8987 | 0.6190 | 0.6409 | 0.5984 |
| 0.1495 | 4.0 | 6504 | 0.3914 | 0.8864 | 0.5927 | 0.5843 | 0.6014 |
| 0.139 | 5.0 | 8130 | 0.4922 | 0.8904 | 0.5704 | 0.6185 | 0.5292 |
| 0.1144 | 6.0 | 9756 | 0.5297 | 0.8950 | 0.5834 | 0.6414 | 0.5351 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
anmol98/LunarLander_PPO
|
anmol98
| 2023-05-21T15:28:50Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-21T15:28:28Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 257.62 +/- 21.84
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
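Since the snippet above is left as a TODO, a minimal loading sketch could look like the following; the checkpoint filename is an assumption and should be checked against the repository's file listing:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
# NOTE: the filename below is an assumption; check the repo for the actual .zip name
checkpoint = load_from_hub(repo_id="anmol98/LunarLander_PPO", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```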
|
irodrigues/my_awesome_opus_books_model
|
irodrigues
| 2023-05-21T15:00:24Z | 115 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:opus_books",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-05-21T14:05:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- opus_books
metrics:
- bleu
model-index:
- name: my_awesome_opus_books_model
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: opus_books
type: opus_books
config: en-fr
split: train
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 5.639
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_opus_books_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the opus_books dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6052
- Bleu: 5.639
- Gen Len: 17.6262
## Model description
More information needed
## Intended uses & limitations
More information needed
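As a minimal, hypothetical inference sketch (the "translate English to French:" task prefix matches the usual t5-small fine-tuning setup for opus_books en-fr and is an assumption here):
```python
from transformers import pipeline
# minimal sketch; the task prefix is an assumption based on the usual t5-small recipe
translator = pipeline("text2text-generation", model="irodrigues/my_awesome_opus_books_model")
print(translator("translate English to French: The book is on the table."))
```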
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|
| 1.8603 | 1.0 | 6355 | 1.6285 | 5.4527 | 17.6356 |
| 1.8073 | 2.0 | 12710 | 1.6052 | 5.639 | 17.6262 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
asenella/mmnist_JNFDccaconfig2_seed_1_ratio_05_c
|
asenella
| 2023-05-21T14:57:13Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-05-21T14:57:06Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
jokyere49/ppo-SnowballTarget-UnityMLAgent
|
jokyere49
| 2023-05-21T14:56:59Z | 4 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-05-21T14:56:54Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Find your model_id: jokyere49/ppo-SnowballTarget-UnityMLAgent
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
kenangcolab/kanna
|
kenangcolab
| 2023-05-21T14:56:19Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-21T14:52:32Z |
---
license: creativeml-openrail-m
---
|
mooncakex/blip-test
|
mooncakex
| 2023-05-21T14:54:25Z | 59 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"blip",
"image-text-to-text",
"generated_from_trainer",
"license:bsd-3-clause",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2023-05-21T14:25:03Z |
---
license: bsd-3-clause
tags:
- generated_from_trainer
model-index:
- name: blip-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# blip-test
This model is a fine-tuned version of [Salesforce/blip-image-captioning-base](https://huggingface.co/Salesforce/blip-image-captioning-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
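As a minimal, hypothetical captioning sketch (it assumes the checkpoint is public and ships the BLIP processor files; the demo image URL is just an example):
```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration
# minimal sketch; assumes the repo includes processor files alongside the weights
processor = BlipProcessor.from_pretrained("mooncakex/blip-test")
model = BlipForConditionalGeneration.from_pretrained("mooncakex/blip-test")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # example image
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
out = model.generate(**processor(images=image, return_tensors="pt"))
print(processor.decode(out[0], skip_special_tokens=True))
```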
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
thegigasurgeon/videomae-large-finetuned-kinetics-mopping
|
thegigasurgeon
| 2023-05-21T14:22:31Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"videomae",
"video-classification",
"generated_from_trainer",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2023-05-18T18:46:09Z |
---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-large-finetuned-kinetics-mopping
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-large-finetuned-kinetics-mopping
This model is a fine-tuned version of [MCG-NJU/videomae-large-finetuned-kinetics](https://huggingface.co/MCG-NJU/videomae-large-finetuned-kinetics) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4468
- Accuracy: 0.7368
## Model description
More information needed
## Intended uses & limitations
More information needed
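As a minimal, hypothetical inference sketch with a dummy clip (real use would decode 16 frames from a video; it assumes the repo ships the preprocessor config):
```python
import numpy as np
import torch
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification
repo = "thegigasurgeon/videomae-large-finetuned-kinetics-mopping"
processor = VideoMAEImageProcessor.from_pretrained(repo)
model = VideoMAEForVideoClassification.from_pretrained(repo)
# dummy 16-frame clip; replace with frames decoded from a real video
video = list(np.random.randint(0, 256, (16, 224, 224, 3), dtype=np.uint8))
inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```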
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1672
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6868 | 0.25 | 418 | 0.7474 | 0.2115 |
| 0.7265 | 1.25 | 836 | 0.6472 | 0.6026 |
| 0.6854 | 2.25 | 1254 | 0.6211 | 0.6346 |
| 0.7829 | 3.25 | 1672 | 0.5959 | 0.7115 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.0+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
asenella/mmnist_JNFDccaconfig2_seed_0_ratio_05_c
|
asenella
| 2023-05-21T14:21:11Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-05-21T14:21:04Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
fogside/sksrpm-test-run
|
fogside
| 2023-05-21T14:19:22Z | 32 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-05-21T14:17:32Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
widget:
- text: 3d render of sksrpm
---
### sksrpm_test_run Dreambooth model trained by fogside with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model
You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
Sample pictures of:
3d render of sksrpm (use that in your prompt)

|
Mr00Magician/Number-Recogniser-api
|
Mr00Magician
| 2023-05-21T14:13:02Z | 0 | 0 |
keras
|
[
"keras",
"image-classification",
"region:us"
] |
image-classification
| 2023-05-21T08:06:39Z |
---
pipeline_tag: image-classification
library_name: keras
---
|
TheYuriLover/Manticore-13b-GPTQ-Triton
|
TheYuriLover
| 2023-05-21T14:06:39Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-21T10:27:17Z |
This is the GPTQ 4-bit quantization of this model: https://huggingface.co/openaccess-ai-collective/manticore-13b
The quantization was made using this repository: https://github.com/qwopqwop200/GPTQ-for-LLaMa/tree/triton
I used the triton branch with all the GPTQ options available (true_sequential + act_order + groupsize 128):
CUDA_VISIBLE_DEVICES=0 python llama.py ./Manticore-13b-GPTQ-Triton c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save_safetensors manticore-13b-4bit-128g.safetensors
|
LarryAIDraw/pneuma-35_04
|
LarryAIDraw
| 2023-05-21T14:03:36Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-21T13:44:18Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/65705/pneuma-xenoblade2-2-2
|
LarryAIDraw/CHAR-Irisviel
|
LarryAIDraw
| 2023-05-21T14:02:40Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-21T13:48:15Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/71376/irisviel-von-einzbern-5-outfits-or-fatezero-fategrand-order
|
LarryAIDraw/Ruby_Hoshino_V1-4
|
LarryAIDraw
| 2023-05-21T14:02:06Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-21T13:47:33Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/45963?modelVersionId=50582
|
LarryAIDraw/siriusallskins_V2
|
LarryAIDraw
| 2023-05-21T14:01:55Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-21T13:47:00Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/67361/sirius-allskinsv20-azur-lane
|
LarryAIDraw/mylene_rafa_holfort_V1
|
LarryAIDraw
| 2023-05-21T14:01:28Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-21T13:46:19Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/68364/mylene-rafa-holfort-otome-gee-sekai-wa-mob-ni-kibishii-sekai-desu
|
LarryAIDraw/FGOEreshkigalv2
|
LarryAIDraw
| 2023-05-21T14:01:07Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-21T13:45:56Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/71990/ereshkigal-14-outfits-fate-grand-order-fgo-14
|
LarryAIDraw/rosetta_v1
|
LarryAIDraw
| 2023-05-21T14:00:27Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-21T13:44:40Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/71522/rosetta-granblue-fantasy
|
LarryAIDraw/type2_fiona_v3
|
LarryAIDraw
| 2023-05-21T13:59:54Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-21T13:42:46Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/71634/tower-of-fantasy-fiona
|
imanbinamran/bert-finetuned-squad
|
imanbinamran
| 2023-05-21T13:52:47Z | 62 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-05-20T11:48:17Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: imanbinamran/bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# imanbinamran/bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.7813
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
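As a minimal usage sketch (not part of the original card; `framework="tf"` is passed because the repo ships TensorFlow weights):
```python
from transformers import pipeline
# minimal sketch; assumes the fine-tuned checkpoint is public on the Hub
qa = pipeline("question-answering", model="imanbinamran/bert-finetuned-squad", framework="tf")
print(qa(question="Where do I live?", context="My name is Sarah and I live in London."))
```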
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 11090, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 1.2750 | 0 |
| 0.7813 | 1 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
PGCaptain/xlm-roberta-base-finetuned-marc
|
PGCaptain
| 2023-05-21T13:44:19Z | 125 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-21T13:22:41Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
model-index:
- name: xlm-roberta-base-finetuned-marc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1497
- Mae: 0.6986
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.2594 | 1.0 | 196 | 1.2004 | 0.7123 |
| 1.1455 | 2.0 | 392 | 1.1497 | 0.6986 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
indigorange/Reinforce-CartPole-v1
|
indigorange
| 2023-05-21T13:41:00Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-21T13:40:50Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
yif271/yanwendb3d_example_450
|
yif271
| 2023-05-21T13:40:14Z | 30 | 0 |
diffusers
|
[
"diffusers",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-05-21T13:28:46Z |
This is for comparison of vsdb3d using yellow.
|
autobots/Todd_Proxy_LoRA_7b
|
autobots
| 2023-05-21T13:29:01Z | 0 | 1 | null |
[
"region:us"
] | null | 2023-05-17T11:39:53Z |
Trained on 100k dumped messages from the 'chan' Todd proxy. I could not dedupe the dataset, but it has had a serious
effect on the llama7b I used. It calls me master a whole bunch more now.
Content isn't SFW, so be aware. Trained in 4-bit for 3 epochs; I think it overfit and really needed just 2.
Tested in 4-bit and FP16 on plain HF llama-7b; it may also work on derivative models of the same base.
The V2 version was trained at a higher rank and longer context (512) on only unique data, with ALLM and "content warning" statements removed.
It is much stronger.
|
Hardeep/aromatic-yak
|
Hardeep
| 2023-05-21T13:28:08Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"gpt",
"llm",
"large language model",
"h2o-llmstudio",
"en",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-05-21T13:13:35Z |
---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
inference: false
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
---
# Model Card
## Summary
This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [facebook/opt-6.7b](https://huggingface.co/facebook/opt-6.7b)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate` and `torch` libraries installed.
```bash
pip install transformers==4.28.1
pip install accelerate==0.18.0
pip install torch==2.0.0
```
```python
import torch
from transformers import pipeline
generate_text = pipeline(
model="Hardeep/aromatic-yak",
torch_dtype=torch.float16,
trust_remote_code=True,
use_fast=True,
device_map={"": "cuda:0"},
)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=2,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```
```bash
<|prompt|>Why is drinking water so healthy?</s><|answer|>
```
Alternatively, if you prefer to not use `trust_remote_code=True` you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer:
```python
import torch
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"Hardeep/aromatic-yak",
use_fast=True,
padding_side="left"
)
model = AutoModelForCausalLM.from_pretrained(
"Hardeep/aromatic-yak",
torch_dtype=torch.float16,
device_map={"": "cuda:0"}
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=2,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Hardeep/aromatic-yak" # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?</s><|answer|>"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.cuda().eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
# generate configuration can be modified to your needs
tokens = model.generate(
**inputs,
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=2,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
## Model Architecture
```
OPTForCausalLM(
(model): OPTModel(
(decoder): OPTDecoder(
(embed_tokens): Embedding(50272, 4096, padding_idx=1)
(embed_positions): OPTLearnedPositionalEmbedding(2050, 4096)
(final_layer_norm): LayerNorm((4096,), eps=1e-05, elementwise_affine=True)
(layers): ModuleList(
(0-31): 32 x OPTDecoderLayer(
(self_attn): OPTAttention(
(k_proj): Linear(in_features=4096, out_features=4096, bias=True)
(v_proj): Linear(in_features=4096, out_features=4096, bias=True)
(q_proj): Linear(in_features=4096, out_features=4096, bias=True)
(out_proj): Linear(in_features=4096, out_features=4096, bias=True)
)
(activation_fn): ReLU()
(self_attn_layer_norm): LayerNorm((4096,), eps=1e-05, elementwise_affine=True)
(fc1): Linear(in_features=4096, out_features=16384, bias=True)
(fc2): Linear(in_features=16384, out_features=4096, bias=True)
(final_layer_norm): LayerNorm((4096,), eps=1e-05, elementwise_affine=True)
)
)
)
)
(lm_head): Linear(in_features=4096, out_features=50272, bias=False)
)
```
## Model Configuration
This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Model Validation
Model validation results using [EleutherAI lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness).
```bash
CUDA_VISIBLE_DEVICES=0 python main.py --model hf-causal-experimental --model_args pretrained=Hardeep/aromatic-yak --tasks openbookqa,arc_easy,winogrande,hellaswag,arc_challenge,piqa,boolq --device cuda &> eval.log
```
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
|
KostiuchenkoArtem/bart_base_large
|
KostiuchenkoArtem
| 2023-05-21T13:13:40Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bart",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-05-21T10:03:32Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: TianMu/bart_base_large
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# TianMu/bart_base_large
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.3445
- Validation Loss: 2.4043
- Epoch: 3
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.7502 | 2.4418 | 0 |
| 2.5512 | 2.4306 | 1 |
| 2.4377 | 2.4095 | 2 |
| 2.3445 | 2.4043 | 3 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
vocabtrimmer/xlm-roberta-base-trimmed-en-50000
|
vocabtrimmer
| 2023-05-21T12:58:56Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-05-21T12:54:59Z |
# Vocabulary Trimmed [xlm-roberta-base](https://huggingface.co/xlm-roberta-base): `vocabtrimmer/xlm-roberta-base-trimmed-en-50000`
This model is a trimmed version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | xlm-roberta-base | vocabtrimmer/xlm-roberta-base-trimmed-en-50000 |
|:---------------------------|:-------------------|:-------------------------------------------------|
| parameter_size_full | 278,295,186 | 124,495,186 |
| parameter_size_embedding | 192,001,536 | 38,401,536 |
| vocab_size | 250,002 | 50,002 |
| compression_rate_full | 100.0 | 44.73 |
| compression_rate_embedding | 100.0 | 20.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| en | vocabtrimmer/mc4_validation | text | en | validation | 50000 | 2 |
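Because the trimmed checkpoint keeps the standard XLM-RoBERTa masked-LM architecture, it can be loaded like the original model; a minimal sketch (assuming the tokenizer files are included in the repo):
```python
from transformers import pipeline
# minimal sketch; the trimmed checkpoint is used exactly like xlm-roberta-base
unmasker = pipeline("fill-mask", model="vocabtrimmer/xlm-roberta-base-trimmed-en-50000")
print(unmasker("Paris is the <mask> of France."))
```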
|
teasan/StarrySky
|
teasan
| 2023-05-21T12:48:51Z | 0 | 6 | null |
[
"stable-diffusion",
"text-to-image",
"license:other",
"region:us"
] |
text-to-image
| 2023-05-21T11:41:46Z |
---
license: other
tags:
- stable-diffusion
- text-to-image
---
# Used
This is an SD2.1 768 model. SD1.5 LoRAs and embeddings cannot be used with it; please use ones made for SD2.1. Since it has just been completed, I'm not yet familiar with how it handles in practice, so please try adjusting the settings while referring to the sample images.
# Recommended
At present, it is best to use the following NP embeddings together, or at least one of them.
Used NP embedding : Mayng, badquality, (rev2-badprompt:1.2),
## Mayng
https://huggingface.co/852wa/hakoMay/tree/main
## badquality
https://civitai.com/models/19968
## rev2-badprompt
https://huggingface.co/gsdf/Replicant-V2.0/blob/main/rev2-badprompt.safetensors
# sample

```
(masterpiece, best quality), absurdres,
1girl, solo, cute girl, kawaii,
fantasy,
Negative prompt: Mayng, badquality, (rev2-badprompt:1.2),
(worst quality, low quality:1.2), nsfw, flat color, flat shading, retro style, poor quality, bad face, bad fingers, bad anatomy, missing fingers, low res, cropped, signature, watermark, username, artist name, text, monochrome,
Steps: 30, Sampler: DPM++ 2M Karras, CFG scale: 10, Seed: 1643164084, Size: 512x768, Denoising strength: 0.6, Clip skip: 2, Version: v1.2.1, Hires upscale: 2, Hires steps: 10, Hires upscaler: Latent (nearest)
```

```
(masterpiece, best quality), absurdres, (close view:1.4),
1girl, shining sky, vast world, gazing, awe-inspiring expression, distant horizon, clouds, high hill, natural beauty, inspiration, night sky, Shining Stars,
Negative prompt: Mayng, badquality, (rev2-badprompt:1.2),
(worst quality, low quality:1.2), nsfw, flat color, flat shading, retro style, poor quality, bad face, bad fingers, bad anatomy, missing fingers, low res, cropped, signature, watermark, username, artist name, text, monochrome,
Steps: 30, Sampler: DPM++ 2M Karras, CFG scale: 10, Seed: 1643164084, Size: 512x768, Denoising strength: 0.6, Clip skip: 2, Version: v1.2.1, Hires upscale: 2, Hires steps: 10, Hires upscaler: Latent (nearest)
```

```
(masterpiece, best quality), absurdres,
1girl, solo, cute girl, kawaii,
fantasy,
Negative prompt: Mayng, badquality, (rev2-badprompt:1.2),
(worst quality, low quality:1.2), nsfw, flat color, flat shading, retro style, poor quality, bad face, bad fingers, bad anatomy, missing fingers, low res, cropped, signature, watermark, username, artist name, text, monochrome,
Steps: 30, Sampler: UniPC, CFG scale: 10, Seed: 2978862598, Size: 512x768, Denoising strength: 0.6, Clip skip: 2, Version: v1.2.1, Hires upscale: 2, Hires steps: 10, Hires upscaler: Latent (nearest)
```

```
(masterpiece, best quality), absurdres, highres,
1girl,
Negative prompt: Mayng, badquality, (rev2-badprompt:1.2),
(worst quality, low quality:1.2), nsfw, flat color, flat shading, retro style, poor quality, bad face, bad fingers, bad anatomy, missing fingers, low res, cropped, signature, watermark, username, artist name, text, monochrome,
Steps: 30, Sampler: DPM++ 2M Karras, CFG scale: 10, Seed: 3626464235, Size: 512x768, Denoising strength: 0.6, Clip skip: 2, Version: v1.2.1, Hires upscale: 2, Hires steps: 10, Hires upscaler: Latent (nearest)
```

```
(masterpiece, best quality), absurdres,
1girl, solo, cute girl,
fantasy,
Negative prompt: Mayng, badquality, (rev2-badprompt:1.2),
(worst quality, low quality:1.2), nsfw, flat color, flat shading, retro style, poor quality, bad face, bad fingers, bad anatomy, missing fingers, low res, cropped, signature, watermark, username, artist name, text, monochrome,
Steps: 30, Sampler: DPM++ 2M Karras, CFG scale: 10, Seed: 1352047868, Size: 512x768, Denoising strength: 0.6, Clip skip: 2, Version: v1.2.1, Hires upscale: 2, Hires steps: 10, Hires upscaler: Latent (nearest)
```

```
(masterpiece, best quality), absurdres,
1girl, solo, cute girl, chibi,
fantasy,
Negative prompt: Mayng, badquality, (rev2-badprompt:1.2),
(worst quality, low quality:1.2), nsfw, flat color, flat shading, retro style, poor quality, bad face, bad fingers, bad anatomy, missing fingers, low res, cropped, signature, watermark, username, artist name, text, monochrome,
Steps: 30, Sampler: DPM++ 2M Karras, CFG scale: 10, Seed: 430004055, Size: 512x768, Denoising strength: 0.6, Clip skip: 2, Version: v1.2.1, Hires upscale: 2, Hires steps: 10, Hires upscaler: Latent (nearest)
```
# Licence
https://freedevproject.org/faipl-1.0-sd/
|
mwinterhalter/finetuning-sentiment-model-3000-samples
|
mwinterhalter
| 2023-05-21T12:43:42Z | 123 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-21T02:23:03Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8833333333333333
- name: F1
type: f1
value: 0.8844884488448845
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3119
- Accuracy: 0.8833
- F1: 0.8845
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Yhyu13/manticore-13b-gptq-4bit
|
Yhyu13
| 2023-05-21T11:48:00Z | 1,456 | 3 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-21T11:08:34Z |
---
license: apache-2.0
---
GPTQ 4-bit quantization without act-order, for compatibility; it works in textgen-webui.
Generated using scripts from https://gitee.com/yhyu13/llama_-tools
Original weights: https://huggingface.co/openaccess-ai-collective/manticore-13b
---
Manticore is by far the most satisfying LLaMA model I've used, because its fine-tuning datasets are generated from the best of Vicuna, Wizard, and ShareGPT/GPT4.
Here is a conversation generated in textgen-webui; it shows step-by-step chain-of-thought comments alongside a complex code sample, all in an engaging tone!

|
Bestie2088/munirah_v1
|
Bestie2088
| 2023-05-21T11:47:35Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-21T11:40:47Z |
---
license: creativeml-openrail-m
---
|
hannahh7/Reinforce-cartpole-v1
|
hannahh7
| 2023-05-21T11:40:23Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-21T11:40:13Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Vas123/codeparrot-ds
|
Vas123
| 2023-05-21T11:40:05Z | 62 | 0 |
transformers
|
[
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-21T11:38:29Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: Vas123/codeparrot-ds
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Vas123/codeparrot-ds
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 10.3192
- Validation Loss: 9.3837
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': -945, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 10.3192 | 9.3837 | 0 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
MayIBorn/ft-sd15-iom
|
MayIBorn
| 2023-05-21T11:09:06Z | 2 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-05-21T10:44:32Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of iom human
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - MayIBorn/ft-sd15-iom
These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were trained on a photo of iom human using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.




LoRA for the text encoder was enabled: True.
|
QuickSilver007/rlv2unit6_a2c-AntBulletEnv-v0
|
QuickSilver007
| 2023-05-21T11:01:55Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-21T11:00:42Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1176.83 +/- 160.27
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
anushriiyer/ppo-LunarLander-v2
|
anushriiyer
| 2023-05-21T11:00:45Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-21T11:00:20Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 206.42 +/- 46.80
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
jcnecio/dqn-SpaceInvadersNoFrameskip-v4
|
jcnecio
| 2023-05-21T10:47:45Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-21T10:47:12Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 641.00 +/- 140.03
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jcnecio -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jcnecio -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga jcnecio
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
harshshashwat/gratio-opinion-minig
|
harshshashwat
| 2023-05-21T10:26:52Z | 0 | 1 |
transformers
|
[
"transformers",
"Sentiment-anlysis",
"en",
"arxiv:1910.09700",
"license:bigscience-openrail-m",
"endpoints_compatible",
"region:us"
] | null | 2023-05-21T09:50:27Z |
---
license: bigscience-openrail-m
language:
- en
tags:
- Sentiment-anlysis
library_name: transformers
---
import gradio as gr
from transformers import pipeline
sentiment = pipeline("sentiment-analysis")
def get_sentiment(input_text):
    return sentiment(input_text)
iface = gr.Interface(fn=get_sentiment,
                     inputs='text',
                     outputs=['text'],
                     title='Opinion Mining',
                     description="Get Sentiment Negative/Positive for the given input")
iface.launch(inline=False)
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Sania67/Urdu_Based_ASR_Model_FYP
|
Sania67
| 2023-05-21T10:18:34Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-05-20T20:47:30Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Urdu_Based_ASR_Model_FYP
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Urdu_Based_ASR_Model_FYP
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0030
- Wer: 93.0849
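For a quick test (not included in the auto-generated card), the checkpoint should be loadable through the standard `transformers` ASR pipeline; the audio filename below is only a placeholder:
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Sania67/Urdu_Based_ASR_Model_FYP")

# "sample_urdu.wav" is a placeholder for any local audio file
print(asr("sample_urdu.wav")["text"])
```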
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 700
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2479 | 2.27 | 100 | 1.0445 | 103.8776 |
| 0.3529 | 4.55 | 200 | 0.2966 | 70.4653 |
| 0.148 | 6.82 | 300 | 0.1006 | 54.3300 |
| 0.047 | 9.09 | 400 | 0.0274 | 52.7574 |
| 0.0101 | 11.36 | 500 | 0.0099 | 65.1659 |
| 0.0041 | 13.64 | 600 | 0.0039 | 84.2309 |
| 0.0033 | 15.91 | 700 | 0.0030 | 93.0849 |
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
kpbth/distilbert-base-uncased-finetuned-emotion
|
kpbth
| 2023-05-21T10:16:38Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-21T09:51:46Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.922
- name: F1
type: f1
value: 0.9223471923096423
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2206
- Accuracy: 0.922
- F1: 0.9223
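A minimal usage sketch (not part of the auto-generated card), using the standard `transformers` text-classification pipeline:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="kpbth/distilbert-base-uncased-finetuned-emotion")

# Returns the predicted emotion label and its confidence score
print(classifier("I can't wait to see you again!"))
```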
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8353 | 1.0 | 250 | 0.3154 | 0.906 | 0.9033 |
| 0.2476 | 2.0 | 500 | 0.2206 | 0.922 | 0.9223 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
jashdalvi/netnames-classifier-setfit
|
jashdalvi
| 2023-05-21T09:50:44Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-05-21T09:50:04Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# jashdalvi/netnames-classifier-setfit
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("jashdalvi/netnames-classifier-setfit")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
huggingtweets/jeremyphoward-lmoroney-ylecun
|
huggingtweets
| 2023-05-21T09:47:49Z | 139 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-21T09:43:25Z |
---
language: en
thumbnail: http://www.huggingtweets.com/jeremyphoward-lmoroney-ylecun/1684662288397/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1279600070145437696/eocLhSLu_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1483577865056702469/rWA-3_T7_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1438898262602256386/CiWrR6Gi_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Jeremy Howard & Yann LeCun & Laurence Moroney 🇮🇪 🏴</div>
<div style="text-align: center; font-size: 14px;">@jeremyphoward-lmoroney-ylecun</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Jeremy Howard & Yann LeCun & Laurence Moroney 🇮🇪 🏴.
| Data | Jeremy Howard | Yann LeCun | Laurence Moroney 🇮🇪 🏴 |
| --- | --- | --- | --- |
| Tweets downloaded | 3221 | 3246 | 3233 |
| Retweets | 1468 | 495 | 342 |
| Short tweets | 143 | 215 | 252 |
| Tweets kept | 1610 | 2536 | 2639 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/awg8vusy/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jeremyphoward-lmoroney-ylecun's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3lpc09td) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3lpc09td/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/jeremyphoward-lmoroney-ylecun')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
efederici/sentence-BERTino
|
efederici
| 2023-05-21T09:36:07Z | 164 | 5 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"it",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-27T22:35:17Z |
---
pipeline_tag: sentence-similarity
license: apache-2.0
language:
- it
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# sentence-BERTino
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. It was trained on a dataset made from question/context pairs ([squad-it](https://github.com/crux82/squad-it)) and tags/news-article pairs (via scraping).
If you like this project, consider supporting me with a cup of coffee! 🤖✨🌞
[](https://bmc.link/edoardofederici)
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["Questo è un esempio di frase", "Questo è un ulteriore esempio"]
model = SentenceTransformer('efederici/sentence-BERTino')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ["Questo è un esempio di frase", "Questo è un ulteriore esempio"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('efederici/sentence-BERTino')
model = AutoModel.from_pretrained('efederici/sentence-BERTino')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
|
efederici/text2tags
|
efederici
| 2023-05-21T09:34:55Z | 267 | 5 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"summarization",
"tags",
"Italian",
"it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-03-02T23:29:05Z |
---
language:
- it
tags:
- summarization
- tags
- Italian
inference:
parameters:
do_sample: False
min_length: 0
widget:
- text: "Nel 1924 la scrittrice Virginia Woolf affrontò nel saggio Mr Bennett e Mrs Brown il tema della costruzione e della struttura del romanzo, genere all’epoca considerato in declino a causa dell’incapacità degli autori e delle autrici di creare personaggi realistici. Woolf raccontò di aver a lungo osservato, durante un viaggio in treno da Richmond a Waterloo, una signora di oltre 60 anni seduta davanti a lei, chiamata signora Brown. Ne rimase affascinata, per la capacità di quella figura di evocare storie possibili e fare da spunto per un romanzo: «tutti i romanzi cominciano con una vecchia signora seduta in un angolo». Immagini come quella della signora Brown, secondo Woolf, «costringono qualcuno a cominciare, quasi automaticamente, a scrivere un romanzo». Nel saggio Woolf provò ad analizzare le tecniche narrative utilizzate da tre noti scrittori inglesi dell’epoca – H. G. Wells, John Galsworthy e Arnold Bennett – per comprendere perché le convenzioni stilistiche dell’Ottocento risultassero ormai inadatte alla descrizione dei «caratteri» umani degli anni Venti. In un lungo e commentato articolo del New Yorker, la critica letteraria e giornalista Parul Sehgal, a lungo caporedattrice dell’inserto culturale del New York Times dedicato alle recensioni di libri, ha provato a compiere un esercizio simile a quello di Woolf, chiedendosi come gli autori e le autrici di oggi tratterebbero la signora Brown. E ha immaginato che probabilmente quella figura non eserciterebbe su di loro una curiosità e un fascino legati alla sua incompletezza e al suo aspetto misterioso, ma con ogni probabilità trasmetterebbe loro l’indistinta e generica impressione di aver subìto un trauma."
example_title: "Virginia Woolf"
- text: "I lavori di ristrutturazione dell’interno della cattedrale di Notre-Dame a Parigi, seguiti al grande incendio che nel 2019 bruciò la guglia e buona parte del tetto, sono da settimane al centro di un acceso dibattito sui giornali francesi per via di alcune proposte di rinnovamento degli interni che hanno suscitato critiche e allarmi tra esperti e opinionisti conservatori. Il progetto ha ricevuto una prima approvazione dalla commissione nazionale competente, ma dovrà ancora essere soggetto a varie revisioni e ratifiche che coinvolgeranno tecnici e politici locali e nazionali, fino al presidente Emmanuel Macron. Ma le modifiche previste al sistema di viabilità per i visitatori, all’illuminazione, ai posti a sedere e alle opere d’arte che si vorrebbero esporre hanno portato alcuni critici a parlare di «parco a tema woke» e «Disneyland del politicamente corretto»."
example_title: "Notre-Dame"
---
# text2tags
The model has been trained on a collection of 28k news articles with tags. Its purpose is to create tags suitable for the given article. We can use this model also for information-retrieval purposes (GenQ), fine-tuning sentence-transformers for asymmetric semantic search.
If you like this project, consider supporting it with a cup of coffee! 🤖✨🌞
[](https://bmc.link/edoardofederici)
<p align="center">
<img src="https://upload.wikimedia.org/wikipedia/commons/1/1a/Pieter_Bruegel_d._%C3%84._066.jpg" width="600"> </br>
Pieter Bruegel the Elder, The Fight Between Carnival and Lent, 1559
</p>
### Usage
Sample code with an article from IlPost:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
model = AutoModelForSeq2SeqLM.from_pretrained("efederici/text2tags")
tokenizer = AutoTokenizer.from_pretrained("efederici/text2tags")
article = '''
Da bambino era preoccupato che al mondo non ci fosse più nulla da scoprire. Ma i suoi stessi studi gli avrebbero dato torto: insieme a James Watson, nel 1953 Francis Crick strutturò il primo modello di DNA, la lunga sequenza di codici che identifica ogni essere vivente, rendendolo unico e diverso da tutti gli altri.
La scoperta gli valse il Nobel per la Medicina. È uscita in queste settimane per Codice la sua biografia, Francis Crick — Lo scopritore del DNA, scritta da Matt Ridley, che racconta vita e scienza dell'uomo che capì perché siamo fatti così.
'''
def tag(text: str):
""" Generates tags from given text """
text = text.strip().replace('\n', '')
text = 'summarize: ' + text
tokenized_text = tokenizer.encode(text, return_tensors="pt")
tags_ids = model.generate(tokenized_text,
num_beams=4,
no_repeat_ngram_size=2,
max_length=20,
early_stopping=True)
output = tokenizer.decode(tags_ids[0], skip_special_tokens=True)
return output.split(', ')
tags = tag(article)
print(tags)
```
## Longer documents
Assuming paragraphs are divided by: '\n\n'.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch
import itertools
import re
model = AutoModelForSeq2SeqLM.from_pretrained("efederici/text2tags")
tokenizer = AutoTokenizer.from_pretrained("efederici/text2tags")
article = '''
Da bambino era preoccupato che al mondo non ci fosse più nulla da scoprire. Ma i suoi stessi studi gli avrebbero dato torto: insieme a James Watson, nel 1953 Francis Crick strutturò il primo modello di DNA, la lunga sequenza di codici che identifica ogni essere vivente, rendendolo unico e diverso da tutti gli altri.
La scoperta gli valse il Nobel per la Medicina. È uscita in queste settimane per Codice la sua biografia, Francis Crick — Lo scopritore del DNA, scritta da Matt Ridley, che racconta vita e scienza dell'uomo che capì perché siamo fatti così.
'''
def words(text):
input_str = text
output_str = re.sub('[^A-Za-z0-9]+', ' ', input_str)
return output_str.split()
def is_subset(text1, text2):
return all(tag in words(text1.lower()) for tag in text2.split())
def cleaning(text, tags):
return [tag for tag in tags if is_subset(text, tag)]
def get_texts(text, max_len):
texts = list(filter(lambda x : x != '', text.split('\n\n')))
lengths = [len(tokenizer.encode(paragraph)) for paragraph in texts]
output = []
for i, par in enumerate(texts):
index = len(output)
if index > 0 and lengths[i] + len(tokenizer.encode(output[index-1])) <= max_len:
output[index-1] = "".join(output[index-1] + par)
else:
output.append(par)
return output
def get_tags(text, generate_kwargs):
input_text = 'summarize: ' + text.strip().replace('\n', ' ')
tokenized_text = tokenizer.encode(input_text, return_tensors="pt")
with torch.no_grad():
tags_ids = model.generate(tokenized_text, **generate_kwargs)
output = []
for tags in tags_ids:
cleaned = cleaning(
text,
list(set(tokenizer.decode(tags, skip_special_tokens=True).split(', ')))
)
output.append(cleaned)
return list(set(itertools.chain(*output)))
def tag(text, max_len, generate_kwargs):
texts = get_texts(text, max_len)
all_tags = [get_tags(text, generate_kwargs) for text in texts]
flatten_tags = itertools.chain(*all_tags)
return list(set(flatten_tags))
params = {
"min_length": 0,
"max_length": 30,
"no_repeat_ngram_size": 2,
"num_beams": 4,
"early_stopping": True,
"num_return_sequences": 4,
}
tags = tag(article, 512, params)
print(tags)
```
### Overview
- Model: T5 ([it5-small](https://huggingface.co/gsarti/it5-small))
- Language: Italian
- Downstream-task: Summarization (for topic tagging)
- Training data: Custom dataset
- Code: See example
- Infrastructure: 1x T4
|
efederici/sentence-bert-base
|
efederici
| 2023-05-21T09:34:03Z | 466 | 8 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"it",
"dataset:stsb_multi_mt",
"doi:10.57967/hf/0248",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-27T17:17:51Z |
---
pipeline_tag: sentence-similarity
language:
- it
datasets:
- stsb_multi_mt
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# sentence-bert-base
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. It was trained on [stsb](https://huggingface.co/datasets/stsb_multi_mt/viewer/it/train).
If you like this project, consider supporting it with a cup of coffee! 🤖✨🌞
[](https://bmc.link/edoardofederici)
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["Questo è un esempio di frase", "Questo è un ulteriore esempio"]
model = SentenceTransformer('efederici/sentence-bert-base')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ["Questo è un esempio di frase", "Questo è un ulteriore esempio"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('efederici/sentence-bert-base')
model = AutoModel.from_pretrained('efederici/sentence-bert-base')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citation
If you want to cite this model you can use this:
```
@misc {edoardo_federici_2022,
author = { {Edoardo Federici} },
title = { sentence-bert-base, sentence-transformer for Italian },
year = 2022,
url = { https://huggingface.co/efederici/sentence-bert-base },
doi = { 10.57967/hf/0112 },
publisher = { Hugging Face }
}
```
|
seviladiguzel/distilbert-base-uncased-finetuned-squad_v2_3
|
seviladiguzel
| 2023-05-21T09:29:30Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-05-21T08:31:31Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad_v2_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad_v2_3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1579
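Likewise a short usage sketch (not part of the auto-generated card), via the standard `transformers` question-answering pipeline; the question/context strings are only examples:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="seviladiguzel/distilbert-base-uncased-finetuned-squad_v2_3")

result = qa(
    question="What was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```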
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2157 | 1.0 | 5533 | 1.1503 |
| 0.9567 | 2.0 | 11066 | 1.1295 |
| 0.7525 | 3.0 | 16599 | 1.1579 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
azetaaa/q-FrozenLake-v1-4x4-noSlippery
|
azetaaa
| 2023-05-21T09:12:08Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-21T09:12:05Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="azetaaa/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Tingwen/ppo-LunarLander-v2-unit8
|
Tingwen
| 2023-05-21T08:59:22Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-21T08:59:13Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -189.54 +/- 88.33
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'Tingwen/ppo-LunarLander-v2-unit8'
'batch_size': 512
'minibatch_size': 128}
```
|
huggingtweets/medinarabasco
|
huggingtweets
| 2023-05-21T08:44:58Z | 137 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-21T08:44:49Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1369764163397025794/Xe2piyuC_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Julia</div>
<div style="text-align: center; font-size: 14px;">@medinarabasco</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Julia.
| Data | Julia |
| --- | --- |
| Tweets downloaded | 3207 |
| Retweets | 1236 |
| Short tweets | 695 |
| Tweets kept | 1276 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/fjgpibp3/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @medinarabasco's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2v2zjaba) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2v2zjaba/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/medinarabasco')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
danielus/ggml-whisper-models
|
danielus
| 2023-05-21T08:14:09Z | 0 | 4 | null |
[
"audio",
"automatic-speech-recognition",
"en",
"zh",
"de",
"es",
"ru",
"ko",
"fr",
"ja",
"pt",
"tr",
"pl",
"ca",
"nl",
"ar",
"sv",
"it",
"id",
"hi",
"fi",
"vi",
"he",
"uk",
"el",
"ms",
"cs",
"ro",
"da",
"hu",
"ta",
"no",
"th",
"ur",
"hr",
"bg",
"lt",
"la",
"mi",
"ml",
"cy",
"sk",
"te",
"fa",
"lv",
"bn",
"sr",
"az",
"sl",
"kn",
"et",
"mk",
"br",
"eu",
"is",
"hy",
"ne",
"mn",
"bs",
"kk",
"sq",
"sw",
"gl",
"mr",
"pa",
"si",
"km",
"sn",
"yo",
"so",
"af",
"oc",
"ka",
"be",
"tg",
"sd",
"gu",
"am",
"yi",
"lo",
"uz",
"fo",
"ht",
"ps",
"tk",
"nn",
"mt",
"sa",
"lb",
"my",
"bo",
"tl",
"mg",
"as",
"tt",
"haw",
"ln",
"ha",
"ba",
"jw",
"su",
"license:mit",
"region:us"
] |
automatic-speech-recognition
| 2023-05-20T16:57:01Z |
---
license: mit
language:
- en
- zh
- de
- es
- ru
- ko
- fr
- ja
- pt
- tr
- pl
- ca
- nl
- ar
- sv
- it
- id
- hi
- fi
- vi
- he
- uk
- el
- ms
- cs
- ro
- da
- hu
- ta
- 'no'
- th
- ur
- hr
- bg
- lt
- la
- mi
- ml
- cy
- sk
- te
- fa
- lv
- bn
- sr
- az
- sl
- kn
- et
- mk
- br
- eu
- is
- hy
- ne
- mn
- bs
- kk
- sq
- sw
- gl
- mr
- pa
- si
- km
- sn
- yo
- so
- af
- oc
- ka
- be
- tg
- sd
- gu
- am
- yi
- lo
- uz
- fo
- ht
- ps
- tk
- nn
- mt
- sa
- lb
- my
- bo
- tl
- mg
- as
- tt
- haw
- ln
- ha
- ba
- jw
- su
tags:
- audio
- automatic-speech-recognition
---
This repository contains versions of the Whisper models in the ggml format.
The versions present are the best performing according to the benchmark: https://github.com/ggerganov/whisper.cpp/discussions/859
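As an illustration (not part of the original card), individual ggml files can be fetched with `huggingface_hub`; the filename below is an assumption, so check the repository's file listing for the actual names:
```python
from huggingface_hub import hf_hub_download

# Hypothetical filename: substitute one of the files actually listed in this repository
model_path = hf_hub_download(
    repo_id="danielus/ggml-whisper-models",
    filename="ggml-medium-q5_0.bin",
)
print(model_path)  # local path that can be passed to whisper.cpp via its `-m` option
```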
|
yotoshihiro/a2c-AntBulletEnv-v0
|
yotoshihiro
| 2023-05-21T07:31:48Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-21T07:28:52Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1750.70 +/- 120.33
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
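The usage section above is still a TODO; the following is a hedged loading sketch, where the checkpoint filename is an assumption and `pybullet_envs` must be installed so that `AntBulletEnv-v0` is registered:
```python
import gym
import pybullet_envs  # noqa: F401  (registers AntBulletEnv-v0 with gym)
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
from stable_baselines3.common.evaluation import evaluate_policy

# Assumed filename: check the repository's file listing
checkpoint = load_from_hub(repo_id="yotoshihiro/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)

env = gym.make("AntBulletEnv-v0")
# Note: training very likely used VecNormalize; loading its saved statistics would be
# needed to reproduce the reported mean reward.
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```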
|
architext/gptj-162M
|
architext
| 2023-05-21T07:23:43Z | 199 | 25 |
transformers
|
[
"transformers",
"pytorch",
"gptj",
"text-generation",
"architecture",
"design",
"en",
"dataset:THEODOROS/Architext_v1",
"arxiv:2303.07519",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-08-22T00:04:16Z |
---
license: apache-2.0
datasets:
- THEODOROS/Architext_v1
language:
- en
pipeline_tag: text-generation
tags:
- architecture
- design
---
# Architext GPT-J 162M
# Model Description
Architext GPT-J-162M is a transformer model trained using Ben Wang's Mesh Transformer JAX on the Pile and finetuned specifically on a synthetically generated dataset of architectural layouts of apartments. It is capable of generating a large diversity of designs, in a convenient geometric representation that can be used downstream in different design workflows, using just a natural language prompt.
The model consists of 12 layers with a model dimension of 768, and a feedforward dimension of 2048. The model dimension is split into 16 heads, each with a dimension of 256. Rotary Position Embedding (RoPE) is applied to 64 dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as GPT-2/GPT-3.
# Training data
GPT-J 162M was pre-trained on the Pile, a large-scale curated dataset created by EleutherAI. It was then finetuned on synthetic data procedurally generated using the Rhinoceros/Grasshopper software suite. The model was finetuned for 1.25 billion tokens over 11,500 steps on a TPU v3-8. It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token correctly.
# Intended Use and Limitations
Architext models learn an inner representation of the architectural design that can be used to generate a larger diversity of geometric designs and can be useful for many downstream design workflows and tasks. While it could be adapted to many different design outputs, the model is best at generating residential floor plans given a natural language prompt.
# How to use
This model can be easily loaded using the `AutoModelForCausalLM` functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("architext/gptj-162M")
model = AutoModelForCausalLM.from_pretrained("architext/gptj-162M")
```
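A hedged generation sketch follows (an illustration, not the authors' reference code); it simply feeds a natural-language prompt of the kind this card describes to `generate`, and the decoding settings are assumptions:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("architext/gptj-162M")
model = AutoModelForCausalLM.from_pretrained("architext/gptj-162M")

prompt = "a house with two bedrooms and three bathrooms"
inputs = tokenizer(prompt, return_tensors="pt")

# Sampling settings are illustrative only
output_ids = model.generate(**inputs, do_sample=True, max_length=256, top_p=0.95, temperature=0.8)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```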
# Limitations and Biases
The core functionality of Architext is taking a string of text and generating a design output, still by continuously predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work, especially in the design context. Architext will often generate a design that is not semantically correct, depending on the prompt description it was given, although it almost always generates designs that are valid (non-intersecting spaces, no orphan rooms). It is also limited to a small diversity of natural language prompts, specifically prompts that describe:
* typology: "a house with two bedrooms and three bathrooms" or "a house with six rooms"
* adjacency: "the bedroom is adjacent to the living room" or "the kitchen is not adjacent to the bathroom"
* location: "the bedroom is in the north side of the house" or "a bedroom is in the south east side of the house"
Of course, the designs that are generated are conceptual designs and one should never depend on Architext to directly generate accurate construction documentation.
# Citation and Related Information
## BibTeX entry
To cite this model:
```
@article{galanos2023architext,
title={Architext: Language-Driven Generative Architecture Design},
author={Galanos, Theodoros and Liapis, Antonios and Yannakakis, Georgios N},
journal={arXiv preprint arXiv:2303.07519},
year={2023}
}
```
To cite the codebase that trained this model:
```
@misc{mesh-transformer-jax,
author = {Wang, Ben},
title = {{Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX}},
howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
year = 2021,
month = May
}
```
# Acknowledgements
This project would not have been possible without the compute generously provided by Google through the TPU Research Cloud, which gave access to the Cloud TPU VMs used to finetune this model.
|
indigorange/q-FrozenLake-v1-4x4-noSlippery
|
indigorange
| 2023-05-21T07:21:52Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-21T07:21:49Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="indigorange/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Den4ikAI/FRED-T5-Large-interpreter
|
Den4ikAI
| 2023-05-21T07:19:25Z | 114 | 2 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"ru",
"dataset:inkoziev/incomplete_utterance_restoration",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-05-21T07:15:29Z |
---
license: mit
datasets:
- inkoziev/incomplete_utterance_restoration
language:
- ru
widget:
- text: '<SC1>- Как тебя зовут?\n- Джульетта Мао\nРазвернутый ответ: <extra_id_0>'
- text: '<SC1>- А живешь где?\n- В поясе астероидов\nРазвернутый ответ: <extra_id_0>'
pipeline_tag: text2text-generation
---
# Den4ikAI/FRED-T5-Large-interpreter
A model for reconstructing a phrase from the dialogue context (anaphora, ellipsis, gapping), for spell checking, and for normalizing the text of dialogue utterances.
More about the task [here](https://huggingface.co/inkoziev/rugpt_interpreter).
# Usage example
```python
import torch
from transformers import T5ForConditionalGeneration, GPT2Tokenizer
model_name = 'Den4ikAI/FRED-T5-Large-interpreter'
tokenizer = GPT2Tokenizer.from_pretrained(model_name)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = T5ForConditionalGeneration.from_pretrained(model_name)
model.eval()
t5_input = '''<SC1>- Ты собак любишь?
- Не люблю я их
Развернутый ответ: <extra_id_0>'''
input_ids = tokenizer(t5_input, return_tensors='pt').input_ids
out_ids = model.generate(input_ids=input_ids, max_length=100, eos_token_id=tokenizer.eos_token_id, early_stopping=True)
t5_output = tokenizer.decode(out_ids[0][1:])
print(t5_output)
```
# Citation
```
@MISC{FRED-T5-Large-interpreter,
author = {Denis Petrov, Ilya Koziev},
title = {Russian conversations interpreter and normalizer},
url = {https://huggingface.co/Den4ikAI/FRED-T5-Large-interpreter},
year = 2023
}
```
|
oeshy/my_awesome_model
|
oeshy
| 2023-05-21T07:06:32Z | 121 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-29T07:10:19Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: my_awesome_model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.93196
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2319
- Accuracy: 0.9320
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2313 | 1.0 | 1563 | 0.1868 | 0.9280 |
| 0.1509 | 2.0 | 3126 | 0.2319 | 0.9320 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
Den4ikAI/FRED-T5-XL-interpreter
|
Den4ikAI
| 2023-05-21T07:03:46Z | 103 | 1 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"ru",
"dataset:inkoziev/incomplete_utterance_restoration",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-05-21T06:13:35Z |
---
license: mit
datasets:
- inkoziev/incomplete_utterance_restoration
language:
- ru
widget:
- text: '<SC1>- Как тебя зовут?\n- Джульетта Мао\nРазвернутый ответ: <extra_id_0>'
- text: '<SC1>- А живешь где?\n- В поясе астероидов\nРазвернутый ответ: <extra_id_0>'
pipeline_tag: text2text-generation
---
# Den4ikAI/FRED-T5-XL-interpreter
A model for reconstructing a phrase from the dialogue context (anaphora, ellipsis, gapping), for spell checking, and for normalizing the text of dialogue utterances.
More about the task [here](https://huggingface.co/inkoziev/rugpt_interpreter).
# Usage example
```python
import torch
from transformers import T5ForConditionalGeneration, GPT2Tokenizer
model_name = 'Den4ikAI/FRED-T5-XL-interpreter'
tokenizer = GPT2Tokenizer.from_pretrained(model_name)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = T5ForConditionalGeneration.from_pretrained(model_name)
model.eval()
t5_input = '''<SC1>- Ты собак любишь?
- Не люблю я их
Развернутый ответ: <extra_id_0>'''
input_ids = tokenizer(t5_input, return_tensors='pt').input_ids
out_ids = model.generate(input_ids=input_ids, max_length=100, eos_token_id=tokenizer.eos_token_id, early_stopping=True)
t5_output = tokenizer.decode(out_ids[0][1:])
print(t5_output)
```
# Citation
```
@MISC{FRED-T5-XL-interpreter,
author = {Denis Petrov, Ilya Koziev},
title = {Russian conversations interpreter and normalizer},
url = {https://huggingface.co/Den4ikAI/FRED-T5-XL-interpreter},
year = 2023
}
```
|
SaintMcMuffins/Brain2.1
|
SaintMcMuffins
| 2023-05-21T06:50:51Z | 134 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-21T06:41:10Z |
---
tags:
- conversational
---
|
ajwad-abrar/my_awesome_model
|
ajwad-abrar
| 2023-05-21T06:47:58Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-28T22:39:14Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: my_awesome_model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.93156
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2315
- Accuracy: 0.9316
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2327 | 1.0 | 1563 | 0.1883 | 0.9281 |
| 0.152 | 2.0 | 3126 | 0.2315 | 0.9316 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
prathameshgiri/Prathamesh
|
prathameshgiri
| 2023-05-21T06:36:53Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"art",
"image-to-image",
"en",
"dataset:togethercomputer/RedPajama-Data-1T",
"license:apache-2.0",
"region:us"
] |
image-to-image
| 2023-05-21T02:56:41Z |
---
title: GFPGAN
emoji: 😁
colorFrom: yellow
colorTo: green
sdk: gradio
sdk_version: 3.25.0
app_file: app.py
pinned: false
license: apache-2.0
datasets:
- togethercomputer/RedPajama-Data-1T
language:
- en
metrics:
- accuracy
library_name: adapter-transformers
pipeline_tag: image-to-image
tags:
- art
---
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
|
bakhitovd/led-base-7168-ml
|
bakhitovd
| 2023-05-21T03:51:26Z | 126 | 0 |
transformers
|
[
"transformers",
"pytorch",
"led",
"text2text-generation",
"summarization",
"dataset:bakhitovd/data_science_arxiv",
"license:cc0-1.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-05-02T02:20:10Z |
---
datasets:
- bakhitovd/data_science_arxiv
metrics:
- rouge
license: cc0-1.0
pipeline_tag: summarization
---
# Fine-tuned Longformer for Summarization of Machine Learning Articles
## Model Details
- GitHub: https://github.com/Bakhitovd/led-base-7168-ml
- Model name: bakhitovd/led-base-7168-ml
- Model type: Longformer (allenai/led-base-16384)
- Model description: This Longformer model has been fine-tuned on a focused subset of the arXiv part of the scientific papers dataset, specifically targeting articles about Machine Learning. It aims to generate accurate and consistent summaries of machine learning research papers.
## Intended Use
This model is intended to be used for text summarization tasks, specifically for summarizing machine learning research papers.
## How to Use
```python
import torch
from transformers import LEDTokenizer, LEDForConditionalGeneration
tokenizer = LEDTokenizer.from_pretrained("bakhitovd/led-base-7168-ml")
model = LEDForConditionalGeneration.from_pretrained("bakhitovd/led-base-7168-ml")
```
## Use the model for summarization
```python
article = "... long document ..."
inputs_dict = tokenizer(article, padding="max_length", max_length=16384, return_tensors="pt", truncation=True)
model = model.to("cuda")
input_ids = inputs_dict.input_ids.to("cuda")
attention_mask = inputs_dict.attention_mask.to("cuda")
global_attention_mask = torch.zeros_like(attention_mask)
global_attention_mask[:, 0] = 1
predicted_abstract_ids = model.generate(input_ids, attention_mask=attention_mask, global_attention_mask=global_attention_mask, max_length=512)
summary = tokenizer.decode(predicted_abstract_ids[0], skip_special_tokens=True)
print(summary)
```
## Training Data
Dataset name: bakhitovd/data_science_arxiv\
This dataset is a subset of the 'Scientific papers' dataset containing the articles that are semantically, structurally, and topically closest to articles describing machine learning. The subset was obtained by running K-means clustering on embeddings generated by SciBERT.
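For illustration only (this is not the author's actual selection pipeline), the SciBERT-embedding plus K-means selection described above could look roughly like this:
```python
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.cluster import KMeans

tokenizer = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
model = AutoModel.from_pretrained("allenai/scibert_scivocab_uncased")

abstracts = ["We propose a new neural architecture ...", "We study black hole thermodynamics ..."]

with torch.no_grad():
    enc = tokenizer(abstracts, padding=True, truncation=True, max_length=512, return_tensors="pt")
    hidden = model(**enc).last_hidden_state
    # Mean-pool the token embeddings as a simple document embedding
    mask = enc.attention_mask.unsqueeze(-1)
    embeddings = (hidden * mask).sum(1) / mask.sum(1)

# Cluster the embeddings, then keep the cluster(s) closest to machine-learning articles
labels = KMeans(n_clusters=2, random_state=0).fit_predict(embeddings.numpy())
print(labels)
```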
## Evaluation Results
The model's performance was evaluated using ROUGE metrics and it showed improved performance over the baseline models.

|
YakovElm/MariaDB20SetFitModel
|
YakovElm
| 2023-05-21T03:35:20Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-05-20T16:22:10Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# YakovElm/MariaDB20SetFitModel
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("YakovElm/MariaDB20SetFitModel")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
openaccess-ai-collective/lora-experiments-quant-to-full-weights
|
openaccess-ai-collective
| 2023-05-21T03:30:05Z | 6 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-20T19:17:37Z |
WandB: https://wandb.ai/wing-lian/lora-experiment?workspace=user-wing-lian
|
AlikS/q-FrozenLake-v1-4x4-noSlippery
|
AlikS
| 2023-05-21T03:03:09Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-21T03:03:06Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # older course notebooks use `import gym`

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook;
# it downloads and unpickles the Q-table dictionary from the Hub.
model = load_from_hub(repo_id="AlikS/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
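A greedy rollout sketch using the loaded Q-table; the `"qtable"` key and the gymnasium-style 5-tuple `step` API are assumptions:
```python
import numpy as np

state, info = env.reset()
terminated = truncated = False
while not (terminated or truncated):
    action = int(np.argmax(model["qtable"][state]))  # greedy action; "qtable" key is an assumption
    state, reward, terminated, truncated, info = env.step(action)
print("episode reward:", reward)
```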
|
howt51/ppo-LunarLander-v2
|
howt51
| 2023-05-21T02:31:10Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-20T15:56:29Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 285.13 +/- 36.30
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is an assumption; adjust to the actual .zip in the repo
checkpoint = load_from_hub("howt51/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
mhammadkhan/negation-categories-classifier
|
mhammadkhan
| 2023-05-21T02:01:58Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"es",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-05-15T16:45:02Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
language:
- es
---
# mhammadkhan/negation-categories-classifier
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
model = SetFitModel.from_pretrained("mhammadkhan/negation-categories-classifier")
# Define category labels
labels = {0: "dairy-free", 1: "gluten-free", 2: "nut-free", 3: "soy-free", 4: "vegan"}
# Define input recipes
recipes = [
{"text": "Tacos de Coliflor Vegana", "ingredients":["cauliflower", "taco seasoning", "corn tortillas", "avocado", "salsa", "cilantro", "lime wedges"]},
{"text": "Pulpo a la Gallega Sin Gluten", "ingredients":["octopus", "potatoes", "paprika", "olive oil", "salt"]},
{"text": "Creamy Tomato Soup", "ingredients":["tomatoes", "vegetable broth", "onion", "garlic", "coconut milk"]},
{"text": "Chicken Alfredo Pasta", "ingredients":["chicken breast", "pasta", "broccoli", "mushrooms", "cashew cream"]},
{"text": "Cheesy Broccoli Casserole", "ingredients":["broccoli", "almond milk", "nutritional yeast", "gluten-free breadcrumbs"]},
{"text": "Gluten-Free Pizza", "ingredients":["gluten-free pizza crust", "tomato sauce", "mozzarella cheese", "mushrooms", "bell peppers"]},
{"text": "Quinoa Salad with Roasted Vegetables", "ingredients":["quinoa", "roasted sweet potato", "roasted Brussels sprouts", "dried cranberries", "almonds"]},
{"text": "Gluten-Free Chocolate Chip Cookies", "ingredients":["gluten-free flour", "brown sugar", "baking soda", "chocolate chips", "coconut oil"]},
{"text": "Chicken Satay Skewers", "ingredients":["chicken breast", "coconut milk", "peanut butter", "soy sauce", "lime juice"]},
{"text": "Pesto Pasta Salad", "ingredients":["pasta", "basil", "parmesan cheese", "pine nuts", "olive oil"]},
{"text": "Maple-Glazed Salmon", "ingredients":["salmon", "maple syrup", "pecans", "butter", "garlic"]},
{"text": "Beef and Broccoli Stir-Fry", "ingredients":["beef sirloin", "broccoli", "carrots", "garlic", "ginger", "cornstarch"]},
{"text": "Creamy Mushroom Soup", "ingredients":["mushrooms", "vegetable broth", "onion", "garlic", "cashew cream"]},
{"text": "Lemon-Garlic Roasted Chicken", "ingredients":["chicken thighs", "lemon juice", "garlic", "olive oil", "rosemary"]},
{"text": "Vegan Lasagna", "ingredients":["lasagna noodles", "tofu ricotta", "marinara sauce", "spinach", "mushrooms"]},
{"text": "Chickpea Curry", "ingredients":["chickpeas", "coconut milk", "tomatoes", "spinach", "curry powder"]},
{"text": "Vegan Banana Bread", "ingredients":["flour", "bananas", "sugar", "baking powder", "almond milk"]},
]
# SetFit models expect a list of strings, so join each recipe into one text
# (this concatenation format is an assumption about how the classifier was trained)
texts = [f"{r['text']}: {', '.join(r['ingredients'])}" for r in recipes]
# Run inference
preds = model(texts)
print(preds)
# Map integer predictions to category labels
preds = [labels[pred.item()] for pred in preds]
print(preds)
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
pranav-asthana/assignment-1a
|
pranav-asthana
| 2023-05-21T01:52:01Z | 62 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-21T01:51:47Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: assignment-1a
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# assignment-1a
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1908, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
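The optimizer entry above corresponds to Adam with a linear (polynomial, power 1) decay of the learning rate from 2e-05 to 0 over 1908 steps. A minimal Keras reconstruction, for reference only:
```python
import tensorflow as tf

# Linear decay from 2e-05 to 0 over 1908 steps, as listed in the optimizer config above
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05, decay_steps=1908, end_learning_rate=0.0, power=1.0
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule, beta_1=0.9, beta_2=0.999, epsilon=1e-08
)
```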
### Training results
### Framework versions
- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
pemujo/vit-base-patch16-224-in21k-gldv2-top-51
|
pemujo
| 2023-05-21T01:31:04Z | 166 | 1 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"en",
"dataset:pemujo/GLDv2_Top_51_Categories",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-05-19T05:29:06Z |
---
datasets:
- pemujo/GLDv2_Top_51_Categories
language:
- en
library_name: transformers
---
# Model Card for Model ID
This model is vit-base-patch16-224-in21k fine-tuned on a subset of the dataset from the 2021 Kaggle Google Landmark competition, restricted to the top 51 categories. A minimal usage sketch follows the model details below.
The dataset is available as a Hugging Face dataset at: https://huggingface.co/datasets/pemujo/GLDv2_Top_51_Categories
- **Developed by:** Pedro Melendez
- **Model type:** Vision transformer
- **Finetuned from model:** vit-base-patch16-224-in21k
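A minimal inference sketch (standard `transformers` image-classification loading is assumed; the image path is hypothetical):
```python
from transformers import pipeline

# Load the fine-tuned landmark classifier from the Hub
classifier = pipeline("image-classification", model="pemujo/vit-base-patch16-224-in21k-gldv2-top-51")

preds = classifier("landmark.jpg")  # hypothetical local image path or URL
print(preds)
```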
### Training Data
Classes with more than 500 images in the 2021 Kaggle Google Landmark competition
https://huggingface.co/datasets/pemujo/GLDv2_Top_51_Categories
### Results
| Metric                   | Value   |
|--------------------------|--------:|
| epoch                    | 4       |
| eval_accuracy            | 0.97411 |
| eval_loss                | 0.11560 |
| eval_runtime (secs)      | 79.0939 |
| eval_samples_per_second  | 115.255 |
| eval_steps_per_second    | 14.413  |
| train_runtime (secs)     | 4082.92 |
| train_samples_per_second | 35.722  |
| train_steps_per_second   | 2.233   |
## Environmental Impact
- **Hardware Type:** Nvidia V100
- **Minutes used:** 68 Minutes
- **Cloud Provider:** Google Cloud
- **Compute Region:** us-central
### Compute Infrastructure
Google Cloud Workbench Instance
#### Hardware
GCP Workbench n1-highmem-8 instance with Nvidia V100 GPU
#### Software
Python 3.9
Pytorch 2.0.1+cu117
|
YakovElm/MariaDB10SetFitModel
|
YakovElm
| 2023-05-21T01:27:57Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-05-20T16:21:47Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# YakovElm/MariaDB10SetFitModel
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("YakovElm/MariaDB10SetFitModel")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
aaronrmm/a2c-PandaReachDense-v2
|
aaronrmm
| 2023-05-21T01:26:00Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-20T21:16:44Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.22 +/- 0.25
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename is an assumption; adjust to the actual .zip in the repo
checkpoint = load_from_hub("aaronrmm/a2c-PandaReachDense-v2", "a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
nolanaatama/ssv2bcchr
|
nolanaatama
| 2023-05-21T00:53:11Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-21T00:51:54Z |
---
license: creativeml-openrail-m
---
|
tarek23/flan-t5-base-qg-SQuAD-LMQG
|
tarek23
| 2023-05-21T00:50:32Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:qg_squad",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-05-20T15:12:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- qg_squad
metrics:
- rouge
model-index:
- name: flan-t5-base-qg-SQuAD-LMQG
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: qg_squad
type: qg_squad
config: qg_squad
split: validation
args: qg_squad
metrics:
- name: Rouge1
type: rouge
value: 52.9376
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base-qg-SQuAD-LMQG
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the qg_squad dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5624
- Rouge1: 52.9376
- Rouge2: 30.3418
- Rougel: 48.9442
- Rougelsum: 48.9337
- Meteor: 48.0417
- Bleu-n: 21.5099
- Bleu-1: 53.2950
- Bleu-2: 27.3888
- Bleu-3: 17.6196
- Bleu-4: 11.8132
- Gen Len: 14.2609
## Model description
More information needed
## Intended uses & limitations
More information needed
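As a rough illustration of the intended use (question generation), a minimal sketch is below; the exact input template used during fine-tuning (for example, answer highlighting as in LMQG) is an assumption and may differ:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("tarek23/flan-t5-base-qg-SQuAD-LMQG")
model = AutoModelForSeq2SeqLM.from_pretrained("tarek23/flan-t5-base-qg-SQuAD-LMQG")

# Hypothetical prompt format; check the training preprocessing for the exact template
text = "generate question: <hl> Hugging Face <hl> is a company based in New York City."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```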
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Meteor | Bleu-n | Bleu-1 | Bleu-2 | Bleu-3 | Bleu-4 | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|
| 0.6054 | 1.0 | 9466 | 0.5603 | 51.5022 | 28.8098 | 47.624 | 47.6181 | 46.3462 | 20.5165 | 52.2978 | 26.2204 | 16.7816 | 11.2230 | 14.1466 |
| 0.5436 | 2.0 | 18932 | 0.5555 | 52.6557 | 29.8173 | 48.7211 | 48.7146 | 47.4380 | 21.0460 | 53.6907 | 27.3374 | 17.4950 | 11.6865 | 14.0910 |
| 0.5087 | 3.0 | 28398 | 0.5572 | 52.5567 | 30.0117 | 48.5798 | 48.5669 | 47.5632 | 21.3178 | 53.1790 | 27.2268 | 17.5366 | 11.8127 | 14.1871 |
| 0.4874 | 4.0 | 37864 | 0.5601 | 52.9404 | 30.3445 | 48.9746 | 48.9583 | 47.9995 | 21.5205 | 53.3457 | 27.4082 | 17.6273 | 11.8327 | 14.2623 |
| 0.473 | 5.0 | 47330 | 0.5624 | 52.9376 | 30.3418 | 48.9442 | 48.9337 | 48.0417 | 21.5099 | 53.2950 | 27.3888 | 17.6196 | 11.8132 | 14.2609 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
Green-Sky/ggml-mpt-7b-storywriter
|
Green-Sky
| 2023-05-20T23:53:36Z | 0 | 14 | null |
[
"Composer",
"MosaicML",
"llm-foundry",
"ggml",
"dataset:the_pile_books3",
"arxiv:2108.12409",
"arxiv:2205.14135",
"arxiv:2302.06675",
"license:apache-2.0",
"region:us"
] | null | 2023-05-11T12:54:38Z |
---
license: apache-2.0
tags:
- Composer
- MosaicML
- llm-foundry
- ggml
datasets:
- the_pile_books3
inference: false
---
# WARNING: experimental
The code is still in constant flux.
~~Requires PR~~ merged: https://github.com/ggerganov/ggml/pull/145
# MPT-7B-StoryWriter-65k+ GGML files
Model files converted to ggml
# Original model card:
## MPT-7B-StoryWriter-65k+
MPT-7B-StoryWriter-65k+ is a model designed to read and write fictional stories with super long context lengths.
It was built by finetuning MPT-7B with a context length of 65k tokens on a filtered fiction subset of the [books3 dataset](https://huggingface.co/datasets/the_pile_books3).
At inference time, thanks to [ALiBi](https://arxiv.org/abs/2108.12409), MPT-7B-StoryWriter-65k+ can extrapolate even beyond 65k tokens.
We demonstrate generations as long as 84k tokens on a single node of 8 A100-80GB GPUs in our [blogpost](https://www.mosaicml.com/blog/mpt-7b).
* License: Apache 2.0
* [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-storywriter)
This model was trained by [MosaicML](https://www.mosaicml.com) and follows a modified decoder-only transformer architecture.
### Model Date
May 5, 2023
### Model License
Apache 2.0
### Model Description
The architecture is a modification of a standard decoder-only transformer.
The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases
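For illustration only, a minimal sketch of the per-head linear attention biases that ALiBi adds; this is not MosaicML's implementation, and the slope formula assumes the head count is a power of two:
```python
import torch

def alibi_bias(n_heads: int, seq_len: int) -> torch.Tensor:
    """Per-head linear distance penalties added to the attention logits (causal form)."""
    # Geometric slopes as in the ALiBi paper: 2^(-8/n), 2^(-16/n), ...
    slopes = torch.tensor([2.0 ** (-8.0 * (h + 1) / n_heads) for h in range(n_heads)])
    pos = torch.arange(seq_len)
    rel = (pos.view(-1, 1) - pos.view(1, -1)).clamp(min=0)  # query-key distance, 0 for future keys
    return -slopes.view(-1, 1, 1) * rel                     # shape: (n_heads, seq_len, seq_len)

bias = alibi_bias(n_heads=32, seq_len=8)  # added to attention scores before the causal mask/softmax
```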
| Hyperparameter  | Value     |
|-----------------|-----------|
| n_parameters    | 6.7B      |
| n_layers        | 32        |
| n_heads         | 32        |
| d_model         | 4096      |
| vocab size      | 50432     |
| sequence length | **65536** |
### PreTraining Data
For more details on the pretraining process, see [MPT-7B](https://huggingface.co/mosaicml/mpt-7b).
The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
#### Training Configuration
This model was trained on 8 A100-80GBs for about 2 days using the [MosaicML Platform](https://www.mosaicml.com/platform).
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the [LION](https://arxiv.org/abs/2302.06675) optimizer.
### Limitations and Biases
_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
MPT-7B-StoryWriter can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-7B-StoryWriter was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
### Acknowledgements
This model was finetuned by Alex Trott and the MosaicML NLP team
### MosaicML Platform
If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b).
### Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
### Citation
Please cite this model using the following format:
```
@online{MosaicML2023Introducing,
author = {MosaicML NLP Team},
title = {Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs},
year = {2023},
url = {www.mosaicml.com/blog/mpt-7b},
note = {Accessed: 2023-03-28}, % change this date
urldate = {2023-03-28} % change this date
}
```
|