modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-08-29 18:27:06) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 526 classes) | tags (list, 1–4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-08-29 18:26:56) | card (string, 11–1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
WENGSYX/MedCPT
|
WENGSYX
| 2022-07-15T05:14:21Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"feature-extraction",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-06-17T07:21:31Z |
# MedCPT
###### A pre-trained medical model for the LingYi system
###### Please load the model with the `modeling_cpt.py` implementation from [**CPT**](https://huggingface.co/fnlp/cpt-large)
## Usage
```python
>>> from modeling_cpt import CPTForConditionalGeneration
>>> from transformers import BertTokenizer
>>> tokenizer = BertTokenizer.from_pretrained("WENGSYX/MedCPT")
>>> model = CPTForConditionalGeneration.from_pretrained("WENGSYX/MedCPT")
>>> inputs = tokenizer.encode("医生你好,腹泻难受应该怎么办?", return_tensors='pt')
>>> pred_ids = model.generate(inputs, num_beams=4, max_length=20)
>>> print(tokenizer.convert_ids_to_tokens(pred_ids[0]))
```
|
Hardik1313X/mt5-small-finetuned-amazon-en-es
|
Hardik1313X
| 2022-07-15T05:09:05Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"mt5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-07-15T04:16:10Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Hardik1313X/mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Hardik1313X/mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.0749
- Validation Loss: 3.3854
- Epoch: 7
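The usage sections below are still placeholders, so here is a minimal, hedged sketch of loading this TensorFlow mT5 checkpoint for summarization; the example review text and generation settings are assumptions rather than details from this card.
```python
# Hedged usage sketch: standard Transformers loading for a TF seq2seq checkpoint.
# The input text and generation parameters below are illustrative assumptions.
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

model_id = "Hardik1313X/mt5-small-finetuned-amazon-en-es"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)

review = "I loved this book: the characters were great and the plot kept me hooked."
inputs = tokenizer(review, return_tensors="tf")
summary_ids = model.generate(**inputs, max_length=30, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```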
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 9672, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 10.0060 | 4.3897 | 0 |
| 5.9039 | 3.8382 | 1 |
| 5.1623 | 3.6476 | 2 |
| 4.7477 | 3.5488 | 3 |
| 4.4688 | 3.4721 | 4 |
| 4.2706 | 3.4173 | 5 |
| 4.1381 | 3.3921 | 6 |
| 4.0749 | 3.3854 | 7 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Team-PIXEL/pixel-base-finetuned-wnli
|
Team-PIXEL
| 2022-07-15T03:09:09Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"pixel",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-15T03:06:10Z |
---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
model-index:
- name: pixel-base-finetuned-wnli
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pixel-base-finetuned-wnli
This model is a fine-tuned version of [Team-PIXEL/pixel-base](https://huggingface.co/Team-PIXEL/pixel-base) on the GLUE WNLI dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 3
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 400
- mixed_precision_training: Apex, opt level O1
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.12.1
|
Team-PIXEL/pixel-base-finetuned-mnli
|
Team-PIXEL
| 2022-07-15T02:42:53Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"pixel",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-15T02:39:38Z |
---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
model-index:
- name: pixel-base-finetuned-mnli
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pixel-base-finetuned-mnli
This model is a fine-tuned version of [Team-PIXEL/pixel-base](https://huggingface.co/Team-PIXEL/pixel-base) on the GLUE MNLI dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 15000
- mixed_precision_training: Apex, opt level O1
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
kuttersn/test-clm
|
kuttersn
| 2022-07-15T02:04:32Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-07-13T16:51:06Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: test-clm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-clm
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5311
- Accuracy: 0.3946
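Since the usage sections below only say "More information needed", here is a minimal, hedged sketch of generating text with this GPT-2 fine-tune; the prompt and sampling settings are assumptions, not taken from the card.
```python
# Hedged usage sketch: load the checkpoint as a causal LM and sample a continuation.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "kuttersn/test-clm"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("The weather today is", return_tensors="pt")
output_ids = model.generate(
    **inputs,
    max_new_tokens=30,
    do_sample=True,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token by default
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```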
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
BigSalmon/InformalToFormalLincoln55
|
BigSalmon
| 2022-07-15T01:50:08Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-07-15T01:41:00Z |
```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln55")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln55")
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
infill: chrome extensions [MASK] accomplish everyday tasks.
Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill:
```
```
Essay Intro (Warriors vs. Rockets in Game 7):
text: eagerly anticipated by fans, game 7's are the highlight of the post-season.
text: ever-building in suspense, game 7's have the crowd captivated.
***
Essay Intro (South Korean TV Is Becoming Popular):
text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ).
text: increasingly held in critical esteem, south korean television continues to impress.
text: at the forefront of quality content, south korea is quickly achieving celebrity status.
***
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
```
```
- nebraska
- unicameral legislature
- different from federal house and senate
text: featuring a unicameral legislature, nebraska's political system stands in stark contrast to the federal model, comprised of a house and senate.
***
-
```
```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
```
```
wordy: classical music is becoming less popular more and more.
Translate into Concise Text: interest in classic music is fading.
***
wordy:
```
```
sweet: savvy voters ousted him.
longer: voters who were informed delivered his defeat.
***
sweet:
```
```
1: commercial space company spacex plans to launch a whopping 52 flights in 2022.
2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022.
3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights.
4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company.
5: a commercial space company, spacex aims to conduct 52 flights in 2022.
***
1:
```
Keywords to sentences or sentence.
```
ngos are characterized by:
□ voluntary citizens' group that is organized on a local, national or international level
□ encourage political participation
□ often serve humanitarian functions
□ work for social, economic, or environmental change
***
what are the drawbacks of living near an airbnb?
□ noise
□ parking
□ traffic
□ security
□ strangers
***
```
```
original: musicals generally use spoken dialogue as well as songs to convey the story. operas are usually fully sung.
adapted: musicals generally use spoken dialogue as well as songs to convey the story. ( in a stark departure / on the other hand / in contrast / by comparison / at odds with this practice / far from being alike / in defiance of this standard / running counter to this convention ), operas are usually fully sung.
***
original: akoya and tahitian are types of pearls. akoya pearls are mostly white, and tahitian pearls are naturally dark.
adapted: akoya and tahitian are types of pearls. ( a far cry from being indistinguishable / easily distinguished / on closer inspection / setting them apart / not to be mistaken for one another / hardly an instance of mere synonymy / differentiating the two ), akoya pearls are mostly white, and tahitian pearls are naturally dark.
***
original:
```
```
original: had trouble deciding.
translated into journalism speak: wrestled with the question, agonized over the matter, furrowed their brows in contemplation.
***
original:
```
```
input: not loyal
1800s english: ( two-faced / inimical / perfidious / duplicitous / mendacious / double-dealing / shifty ).
***
input:
```
```
original: big businesses ).
translated into journalism speak: corporate ( behemoths / heavyweights / titans / steamrollers / powerhouses / bigwigs / kahunas / brutes / honchos / barons / kingpins / rainmakers / headliners ).
***
original: environmental movement ).
translated into journalism speak: ( green lobby / conservationist camp / tree-huggers / ecology-obsessed / sustainability crusaders / preservation-crazed / ecological campaigners ).
***
original:
```
```
first: ( was complicit in / was involved in ).
antonym: ( was blameless / was not an accomplice to / had no hand in / was uninvolved in ).
***
first: ( have no qualms about / see no issue with ).
antonym: ( are deeply troubled by / harbor grave reservations about / have a visceral aversion to / take ( umbrage at / exception to ) / are wary of ).
***
first: ( do not see eye to eye / disagree often ).
antonym: ( are in sync / are united / have excellent rapport / are like-minded / are in step / are of one mind / are in lockstep / operate in perfect harmony / march in lockstep ).
***
first:
```
|
CennetOguz/bert-large-uncased-finetuned-youcook_4
|
CennetOguz
| 2022-07-15T00:43:32Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-15T00:34:49Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-large-uncased-finetuned-youcook_4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-finetuned-youcook_4
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9929
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.3915 | 1.0 | 206 | 2.1036 |
| 2.0412 | 2.0 | 412 | 2.2207 |
| 1.9062 | 3.0 | 618 | 1.7281 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0a0+17540c5
- Datasets 2.3.2
- Tokenizers 0.12.1
|
CennetOguz/bert-large-uncased-finetuned-youcook_2
|
CennetOguz
| 2022-07-15T00:16:54Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-15T00:08:47Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-large-uncased-finetuned-youcook_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-finetuned-youcook_2
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9929
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.3915 | 1.0 | 206 | 2.1036 |
| 2.0412 | 2.0 | 412 | 2.2207 |
| 1.9062 | 3.0 | 618 | 1.7281 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0a0+17540c5
- Datasets 2.3.2
- Tokenizers 0.12.1
|
cannlytics/skunkfx
|
cannlytics
| 2022-07-14T21:01:54Z | 0 | 2 | null |
[
"license:mit",
"region:us"
] | null | 2022-07-14T20:54:17Z |
---
license: mit
---
# Predicting Effects and Aromas
<div align="center" style="text-align:center; margin-top:1rem; margin-bottom: 1rem;">
<img width="240px" alt="" src="https://firebasestorage.googleapis.com/v0/b/cannlytics.appspot.com/o/public%2Fimages%2Flogos%2Fskunkfx_logo.png?alt=media&token=1a75b3cc-3230-446c-be7d-5c06012c8e30">
</div>
> "It's been hard to breathe and the smell's been just horrendous... [It's] like you've literally been sprayed by a
**skunk**." - Resident of Prague, Oklahoma in
[*'It's nasty': Prague neighbors push back on area cannabis facility*](https://kfor.com/news/local/its-nasty-prague-neighbors-push-back-on-area-cannabis-facility/), Oklahoma News 4 (2022).
## Objective
Can we build a model to **predict** if someone may *report* specific **effects** or **aromas** given a cannabis product’s **lab results**?
## Literature
[Over eight hundred cannabis strains characterized by the relationship between their psychoactive effects,
perceptual profiles, and chemical compositions](https://www.biorxiv.org/content/10.1101/759696v1.abstract) by Laura Alethia de la Fuente, Federico Zamberlan, Andres Sanchez, Facundo Carrillo, Enzo Tagliazucchi, Carla Pallavicini (2019).
* **Claim**: *"While cannabinoid content was variable even within individual strains, terpene profiles matched the perceptual characterizations made by the users and could be used to predict associations between different psychoactive effects."*
## Data
A panel of strain reviews was curated from the data published by [Alethia et al. (2019)](https://data.mendeley.com/datasets/6zwcgrttkp/1). First, we downloaded the authors' strain review and lab result datasets. We then curated terpene and cannabinoid data from the raw text files in the lab result dataset. Average cannabinoid and terpene concentrations were calculated for each of the 184 strains in the dataset from 431 lab results. Reviews are for purported strains, and the lab results may or may not be representative of the concentration of the product that the reviewer is referencing. However, without the actual lab results of the product that the reviewer is referencing, the average concentrations for similarly named products can serve as an estimate. The following processing and assumptions were applied.
- Field names were transformed to `snake_case`.
- The fields `total_terpenes` and `total_cannabinoids` were calculated as the simple sum of all terpenes and cannabinoids respectively.
- The fields `total_thc`, `total_cbd`, and `total_cbg` were calculated using the decarboxylation rate (87.7%) for THCA, CBDA, and CBGA (see the sketch after this list).
- Observations with `total_cannabinoids` greater than 35% or `total_terpenes` greater than 6% were presumed to be outliers and were excluded.
- The field `classification` was determined by the original authors from natural language processing (NLP) and can take a value of `sativa`, `indica`, or `hybrid` depending on the language in the reviewer's description.
- Fields for each reported aroma and effect were created and assigned a value of 1 if the reviewer reported the aroma or effect and 0 otherwise.
- Terpenes of similar names were combined on missing values: `p_cymene` with `pcymene`, `beta_caryophyllene` with `caryophyllene`, and `humulene` with `alpha_humulene`.
- Certain terpenes were summed into an encompassing field: `ocimene`, `beta_ocimene`, and `trans_ocimene` to `ocimene`, and `trans_nerolidol`, `cis_nerolidol`, `transnerolidol_1`, and `transnerolidol_2` to `nerolidol`.
- A new field, `terpinenes`, was created as the sum of `alpha_terpinene`, `gamma_terpinene`, `terpinolene`, and `terpinene`.
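The decarboxylation adjustment described in the list above can be illustrated with a short sketch; the column names follow the variate lists later in this card, and the input values are made up for illustration.
```python
# Sketch of the decarboxylation-adjusted totals (87.7% rate); values are illustrative.
import pandas as pd

results = pd.DataFrame([
    {"delta_9_thc": 18.2, "thca": 2.1, "cbd": 0.3, "cbda": 0.1, "cbg": 0.4, "cbga": 0.2}
])

DECARB_RATE = 0.877  # applied to the acidic cannabinoids THCA, CBDA, and CBGA

results["total_thc"] = results["delta_9_thc"] + DECARB_RATE * results["thca"]
results["total_cbd"] = results["cbd"] + DECARB_RATE * results["cbda"]
results["total_cbg"] = results["cbg"] + DECARB_RATE * results["cbga"]
print(results[["total_thc", "total_cbd", "total_cbg"]])
```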
| Datasets | URL |
|----------|-----|
| Raw data | <https://data.mendeley.com/datasets/6zwcgrttkp/1> |
| Curated panel data | <https://cannlytics.page.link/reported-effects> |
| Potential strain effects data | <https://cannlytics.page.link/strain-effects> |
<!-- TODO: Add WA and CT (OH?) datasets :) -->
## Methodology
A [multivariate probit model](https://en.wikipedia.org/wiki/Multivariate_probit_model) is used to predict the probability of all potential effects and aromas simultaneously given lab results for a sample or samples. Specific effects and aromas are predicted to be reported when the estimated probability of an effect or aroma crosses a threshold. The thresholds are set to best fit the observed occurrence of each effect and aroma. Below are the variates used in the models estimated.
```json
{
"full": [
"cbc",
"cbd",
"cbda",
"cbg",
"cbga",
"cbn",
"delta_8_thc",
"delta_9_thc",
"thca",
"thcv",
"alpha_bisabolol",
"alpha_pinene",
"alpha_terpinene",
"beta_caryophyllene",
"beta_myrcene",
"beta_pinene",
"camphene",
"carene",
"caryophyllene_oxide",
"d_limonene",
"eucalyptol",
"gamma_terpinene",
"geraniol",
"guaiol",
"humulene",
"isopulegol",
"linalool",
"nerolidol",
"ocimene",
"p_cymene",
"terpinene",
"terpinolene"
],
"terpene_only": [
"alpha_bisabolol",
"alpha_pinene",
"alpha_terpinene",
"beta_caryophyllene",
"beta_myrcene",
"beta_pinene",
"camphene",
"carene",
"caryophyllene_oxide",
"d_limonene",
"eucalyptol",
"gamma_terpinene",
"geraniol",
"guaiol",
"humulene",
"isopulegol",
"linalool",
"nerolidol",
"ocimene",
"p_cymene",
"terpinene",
"terpinolene"
],
"cannabinoid_only": [
"cbc",
"cbd",
"cbda",
"cbg",
"cbga",
"cbn",
"delta_8_thc",
"delta_9_thc",
"thca",
"thcv"
],
"totals": ["total_cbd", "total_thc", "total_terpenes"],
"simple": ["total_cbd", "total_thc"]
}
```
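As a toy illustration of the thresholding step described above, the sketch below turns predicted probabilities into reported / not-reported flags; the probabilities and thresholds are made-up numbers, not the fitted values.
```python
# Toy thresholding step: an effect or aroma is predicted as "reported" when its
# estimated probability crosses the calibrated threshold. Numbers are illustrative.
predicted_probabilities = {"happy": 0.62, "sleepy": 0.18, "citrus": 0.44}
thresholds = {"happy": 0.50, "sleepy": 0.30, "citrus": 0.40}

predicted_reports = {
    outcome: probability >= thresholds[outcome]
    for outcome, probability in predicted_probabilities.items()
}
print(predicted_reports)  # {'happy': True, 'sleepy': False, 'citrus': True}
```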
## Results
An implementation of the prediction model can be found at <https://cannlytics.com/effects> and utilized through the API endpoint <https://cannlytics.com/api/stats/effects>. In general, there are 3 main actions:
1. You can use the model to predict potentially reported effects and aromas for any cannabis flower for which you have lab results. Simply post your lab results to the `/stats/effects` endpoint, specifying your model if you desire, and you will receive effect and aroma predictions.
2. You can get the model statistics by making a `GET` request to `/stats/effects`. Currently, the model statistics include `false_positive_rate`, `false_negative_rate`, `true_positive_rate`, `true_negative_rate`, `accuracy`, and `informedness`.
3. Finally, you can post the actual effects and aromas that you may observe with the `/stats/effects/actual` endpoint.
You can substitute training data, for strain reviews or lab results, as you see fit. Please see the API documentation for more information about using this API endpoint.
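As a hedged sketch of the first two actions above, the snippet below posts lab results and fetches model statistics; the payload field names are assumptions based on the `totals` variate list, so defer to the API documentation for the authoritative schema.
```python
# Hedged API sketch; the request payload schema is assumed, not documented here.
import requests

endpoint = "https://cannlytics.com/api/stats/effects"

# 1. Post lab results to receive effect and aroma predictions.
lab_results = {
    "model": "totals",  # assumed way to select a model variate list
    "samples": [{"total_cbd": 0.3, "total_thc": 20.1, "total_terpenes": 2.2}],
}
prediction = requests.post(endpoint, json=lab_results)
print(prediction.json())

# 2. Get the current model statistics (accuracy, informedness, etc.).
statistics = requests.get(endpoint)
print(statistics.json())
```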
## Insights and future work
The more training data the better. If you want to [contribute lab results or reviews](https://cannlytics.com/stats/effects), then you are welcome! You can also use your own training data. Using the model to predict out-of-sample helps make the model robust. Please feel free to report your use of the model and its accuracy in the wild to <dev@cannlytics.com>. Lastly, but most importantly, remember that the predictions are for the probability of effects and aromas being reported by the observed sample given observed lab results. Extrapolations beyond the ranges of observed values aren't valid and all statistics should be taken at face value. Thank you and good fortune!
## Disclaimer
```
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
|
JoonJoon/bert-base-cased-wikitext2
|
JoonJoon
| 2022-07-14T20:57:50Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-14T20:46:53Z |
---
tags:
- generated_from_trainer
model-index:
- name: bert-base-cased-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-wikitext2
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.9846
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.7422 | 1.0 | 782 | 7.1373 |
| 7.0302 | 2.0 | 1564 | 6.9972 |
| 6.9788 | 3.0 | 2346 | 7.0087 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.0+cu102
- Datasets 1.14.0
- Tokenizers 0.10.3
|
nakamura196/roberta-small-hi-char
|
nakamura196
| 2022-07-14T20:32:40Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"japanese",
"masked-lm",
"ja",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-11T06:35:00Z |
---
language:
- "ja"
tags:
- "japanese"
- "masked-lm"
license: "cc-by-sa-4.0"
pipeline_tag: "fill-mask"
mask_token: "[MASK]"
widget:
- text: "入[MASK]外無之候江戸大水又ハ大地震なと"
- text: "日向[MASK]御望之由可令披露候"
---
# roberta-small-hi-char
## Model Description
This is a RoBERTa model pre-trained on HI texts with a character tokenizer.
## How to Use
```py
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("nakamura196/roberta-small-hi-char")
model = AutoModelForMaskedLM.from_pretrained("nakamura196/roberta-small-hi-char")
```
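A hedged follow-up sketch (not part of the original card): running the checkpoint through the fill-mask pipeline with one of the widget examples above.
```py
# Fill-mask usage sketch; the input sentence is taken from the widget examples above.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="nakamura196/roberta-small-hi-char")
for prediction in fill_mask("入[MASK]外無之候江戸大水又ハ大地震なと"):
    print(prediction["token_str"], prediction["score"])
```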
|
aatmasidha/newsmodelclassification
|
aatmasidha
| 2022-07-14T20:16:34Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-12T08:59:25Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: newsmodelclassification
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.927
- name: F1
type: f1
value: 0.9271124951673986
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# newsmodelclassification
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2065
- Accuracy: 0.927
- F1: 0.9271
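The sections below are placeholders, so here is a minimal, hedged sketch of classifying a sentence with this emotion checkpoint; the example sentence is an assumption.
```python
# Hedged usage sketch: emotion classification via the text-classification pipeline.
from transformers import pipeline

classifier = pipeline("text-classification", model="aatmasidha/newsmodelclassification")
print(classifier("I am thrilled with how this project turned out!"))
```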
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8011 | 1.0 | 250 | 0.2902 | 0.911 | 0.9090 |
| 0.2316 | 2.0 | 500 | 0.2065 | 0.927 | 0.9271 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.10.3
|
budgekins/ae-classification
|
budgekins
| 2022-07-14T19:57:38Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-07-14T17:43:33Z |
This is a modified adverse event classifier using binary classification.
|
kuttersn/gpt2_chatbot
|
kuttersn
| 2022-07-14T19:04:01Z | 35 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-07-13T03:00:29Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: gpt2_chatbot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2_chatbot
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5732
- Accuracy: 0.3909
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Marissa/model-card-testing
|
Marissa
| 2022-07-14T18:39:01Z | 0 | 0 | null |
[
"en",
"fr",
"multilingual",
"arxiv:1910.09700",
"license:mit",
"region:us"
] | null | 2022-06-06T22:16:21Z |
---
language:
- en
- fr
- multilingual
license: mit
---
# Model Card for model-card-testing
<!-- Provide a quick summary of what the model is/does. [Optional] -->
This is a placeholder summary.
<details>
<summary> Click to expand policymaker version of model card </summary>
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Model Examination](#model-examination)
5. [Environmental Impact](#environmental-impact)
6. [Citation](#citation)
7. [Glossary](#glossary-optional)
8. [More Information](#more-information-optional)
9. [Model Card Authors](#model-card-authors-optional)
10. [Model Card Contact](#model-card-contact)
</details>
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Training Details](#training-details)
5. [Evaluation](#evaluation)
6. [Model Examination](#model-examination)
7. [Environmental Impact](#environmental-impact)
8. [Technical Specifications](#technical-specifications-optional)
9. [Citation](#citation)
10. [Glossary](#glossary-optional)
11. [More Information](#more-information-optional)
12. [Model Card Authors](#model-card-authors-optional)
13. [Model Card Contact](#model-card-contact)
14. [How To Get Started With the Model](#how-to-get-started-with-the-model)
# Model Details
## Model Description
<!-- Provide a longer summary of what this model is/does. -->
- **Developed by:** More information needed
- **Shared by [Optional]:** More information needed
- **Model type:** Language model
- **Language(s) (NLP):** More information needed
- **License:** More information needed
- **Related Models:** fake_model1, fake_model2
- **Parent Model:** More information needed
- **Resources for more information:** More information needed
- [Associated Paper](https://huggingface.co)
- [Blog Post](https://huggingface.co)
# Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
## Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
<!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
The model can be used for text generation.
## Downstream Use [Optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
<!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
To learn more about this task and potential downstream uses, see the Hugging Face [text generation docs](https://huggingface.co/tasks/text-generation)
## Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
<!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
The model should not be used to intentionally create hostile or alienating environments for people. The model was not trained to be factual or true representations of people or events, and therefore using the models to generate such content is out-of-scope for the abilities of this model.
# Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
# Training Details
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
More information on training data needed
## Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
### Preprocessing
More information needed
### Speeds, Sizes, Times
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
More information needed
# Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
## Testing Data, Factors & Metrics
### Testing Data
<!-- This should link to a Data Card if possible. -->
More information needed
### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
More information needed
### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
More information needed
## Results
More information needed
# Model Examination
More information needed
# Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed
## Compute Infrastructure
More information needed
### Hardware
More information needed
### Software
More information needed
# Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
More information needed
**APA:**
More information needed
# Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
More information needed
# More Information [optional]
More information needed
# Model Card Authors [optional]
<!-- This section provides another layer of transparency and accountability. Whose views is this model card representing? How many voices were included in its construction? Etc. -->
More information needed
# Model Card Contact
More information needed
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
More information needed
</details>
|
liyijing024/swin-base-patch4-window7-224-in22k-Chinese-finetuned
|
liyijing024
| 2022-07-14T18:04:48Z | 75 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-07-14T17:28:55Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-base-patch4-window7-224-in22k-Chinese-finetuned
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
args: default
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-base-patch4-window7-224-in22k-Chinese-finetuned
This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224-in22k](https://huggingface.co/microsoft/swin-base-patch4-window7-224-in22k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Accuracy: 1.0
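Usage is not documented in this card, so below is a minimal, hedged sketch of running the checkpoint through the image-classification pipeline; the image path is a placeholder.
```python
# Hedged usage sketch: image classification with the fine-tuned Swin checkpoint.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="liyijing024/swin-base-patch4-window7-224-in22k-Chinese-finetuned",
)
print(classifier("example.jpg"))  # placeholder path to a local image
```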
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0121 | 0.99 | 140 | 0.0001 | 1.0 |
| 0.0103 | 1.99 | 280 | 0.0001 | 1.0 |
| 0.0049 | 2.99 | 420 | 0.0000 | 1.0 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.8.0+cu111
- Datasets 2.3.3.dev0
- Tokenizers 0.12.1
|
ericklerouge123/xlm-roberta-base-finetuned-panx-all
|
ericklerouge123
| 2022-07-14T16:46:09Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-07-14T16:18:06Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1348
- F1: 0.8844
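As the usage sections below are placeholders, here is a minimal, hedged sketch of tagging entities with this checkpoint; the example sentence is an assumption.
```python
# Hedged usage sketch: NER with the token-classification pipeline.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ericklerouge123/xlm-roberta-base-finetuned-panx-all",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Angela Merkel besuchte Paris im Juli."))
```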
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3055 | 1.0 | 835 | 0.1755 | 0.8272 |
| 0.1561 | 2.0 | 1670 | 0.1441 | 0.8727 |
| 0.1016 | 3.0 | 2505 | 0.1348 | 0.8844 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Team-PIXEL/pixel-base-finetuned-jaquad
|
Team-PIXEL
| 2022-07-14T16:07:40Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"pixel",
"question-answering",
"generated_from_trainer",
"dataset:SkelterLabsInc/JaQuAD",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-07-14T16:05:07Z |
---
tags:
- generated_from_trainer
datasets:
- SkelterLabsInc/JaQuAD
model-index:
- name: pixel-base-finetuned-jaquad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pixel-base-finetuned-jaquad
This model is a fine-tuned version of [Team-PIXEL/pixel-base](https://huggingface.co/Team-PIXEL/pixel-base) on the SkelterLabsInc/JaQuAD dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 45
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 20000
- mixed_precision_training: Apex, opt level O1
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.12.1
|
dbarbedillo/testpyramidsrnd
|
dbarbedillo
| 2022-07-14T16:04:51Z | 9 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2022-07-14T16:04:46Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: dbarbedillo/testpyramidsrnd
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Team-PIXEL/pixel-base-finetuned-korquadv1
|
Team-PIXEL
| 2022-07-14T15:58:12Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"pixel",
"question-answering",
"generated_from_trainer",
"dataset:squad_kor_v1",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-07-14T15:55:25Z |
---
tags:
- generated_from_trainer
datasets:
- squad_kor_v1
model-index:
- name: pixel-base-finetuned-korquadv1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pixel-base-finetuned-korquadv1
This model is a fine-tuned version of [Team-PIXEL/pixel-base](https://huggingface.co/Team-PIXEL/pixel-base) on the squad_kor_v1 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 45
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 20000
- mixed_precision_training: Apex, opt level O1
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.12.1
|
neulab/distilgpt2-finetuned-wikitext103
|
neulab
| 2022-07-14T15:38:33Z | 54 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"arxiv:2201.12431",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-07-12T16:42:14Z |
This is a `distilgpt2` model, finetuned on the Wikitext-103 dataset.
It achieves a perplexity of **18.25** using a "sliding window" context, using the `run_clm.py` script at [https://github.com/neulab/knn-transformers](https://github.com/neulab/knn-transformers).
| Base LM: | `distilgpt2` | `gpt2` |
| :--- | ----: | ---: |
| base perplexity | 18.25 | 14.84 |
| + kNN-LM | 15.03 | 12.57 |
| + RetoMaton | **14.70** | **12.46** |
This model was released as part of the paper ["Neuro-Symbolic Language Modeling with Automaton-augmented Retrieval"](https://arxiv.org/pdf/2201.12431.pdf) (ICML'2022).
For more information, see: [https://github.com/neulab/knn-transformers](https://github.com/neulab/knn-transformers)
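A minimal, hedged sketch of loading the checkpoint and computing perplexity on a short passage (this is not the run_clm.py sliding-window evaluation, and the sample text is an arbitrary assumption):
```python
# Hedged sketch: load the fine-tuned causal LM and compute perplexity on one passage.
import math
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "neulab/distilgpt2-finetuned-wikitext103"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id).eval()

text = "Wikitext-103 is a collection of articles extracted from Wikipedia."
input_ids = tokenizer(text, return_tensors="pt").input_ids
with torch.no_grad():
    loss = model(input_ids, labels=input_ids).loss  # mean token cross-entropy
print("perplexity on this passage:", math.exp(loss.item()))
```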
If you use this model, please cite:
```
@inproceedings{alon2022neuro,
title={Neuro-Symbolic Language Modeling with Automaton-augmented Retrieval},
author={Alon, Uri and Xu, Frank and He, Junxian and Sengupta, Sudipta and Roth, Dan and Neubig, Graham},
booktitle={International Conference on Machine Learning},
pages={468--485},
year={2022},
organization={PMLR}
}
```
|
neulab/gpt2-finetuned-wikitext103
|
neulab
| 2022-07-14T15:38:21Z | 323 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"arxiv:2201.12431",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-07-12T14:37:59Z |
This is a `gpt2` model, finetuned on the Wikitext-103 dataset.
It achieves a perplexity of **14.84** using a "sliding window" context, using the `run_clm.py` script at [https://github.com/neulab/knn-transformers](https://github.com/neulab/knn-transformers).
| Base LM: | `distilgpt2` | `gpt2` |
| :--- | ----: | ---: |
| base perplexity | 18.25 | 14.84 |
| +kNN-LM | 15.03 | 12.57 |
| +RetoMaton | **14.70** | **12.46** |
This model was released as part of the paper ["Neuro-Symbolic Language Modeling with Automaton-augmented Retrieval"](https://arxiv.org/pdf/2201.12431.pdf) (ICML'2022).
For more information, see: [https://github.com/neulab/knn-transformers](https://github.com/neulab/knn-transformers)
If you use this model, please cite:
```
@inproceedings{alon2022neuro,
title={Neuro-Symbolic Language Modeling with Automaton-augmented Retrieval},
author={Alon, Uri and Xu, Frank and He, Junxian and Sengupta, Sudipta and Roth, Dan and Neubig, Graham},
booktitle={International Conference on Machine Learning},
pages={468--485},
year={2022},
organization={PMLR}
}
```
|
neulab/gpt2-med-finetuned-wikitext103
|
neulab
| 2022-07-14T15:38:04Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"arxiv:2201.12431",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-07-12T15:40:48Z |
This is a `gpt2-medium` model, finetuned on the Wikitext-103 dataset.
It achieves a perplexity of **11.55** using a "sliding window" context, using the `run_clm.py` script at [https://github.com/neulab/knn-transformers](https://github.com/neulab/knn-transformers).
| Base LM: | `distilgpt2` | `gpt2` |
| :--- | ----: | ---: |
| base perplexity | 18.25 | 14.84 |
| + kNN-LM | 15.03 | 12.57 |
| + RetoMaton | **14.70** | **12.46** |
This model was released as part of the paper ["Neuro-Symbolic Language Modeling with Automaton-augmented Retrieval"](https://arxiv.org/pdf/2201.12431.pdf) (ICML'2022).
For more information, see: [https://github.com/neulab/knn-transformers](https://github.com/neulab/knn-transformers)
If you use this model, please cite:
```
@inproceedings{alon2022neuro,
title={Neuro-Symbolic Language Modeling with Automaton-augmented Retrieval},
author={Alon, Uri and Xu, Frank and He, Junxian and Sengupta, Sudipta and Roth, Dan and Neubig, Graham},
booktitle={International Conference on Machine Learning},
pages={468--485},
year={2022},
organization={PMLR}
}
```
|
jslowik/distilbert-base-uncased-finetuned-emotion
|
jslowik
| 2022-07-14T15:05:25Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-14T15:01:13Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9265
- name: F1
type: f1
value: 0.9262423473736914
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2156
- Accuracy: 0.9265
- F1: 0.9262
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.814 | 1.0 | 250 | 0.3075 | 0.907 | 0.9048 |
| 0.2481 | 2.0 | 500 | 0.2156 | 0.9265 | 0.9262 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
cybertelx/DialoGPT-small-drunkic0n
|
cybertelx
| 2022-07-14T14:45:10Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-07-14T14:16:48Z |
---
tags:
- conversational
---
# Drunk IC-0n
IC-0n (or Icon) is the murderous AI protagonist of the Internecion Cube series. This is an attempt to build her in real life (haha, it failed, and gladly so).
This model uses Microsoft's DialoGPT-small and is trained on all of Icon's lines from episodes 1-3 (only about 50 lines, so the training data is limited).
It's "drunk" because its output is very incoherent.
|
Datasaur/distilbert-base-uncased-finetuned-conll2003
|
Datasaur
| 2022-07-14T14:18:28Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"token-classification",
"en",
"dataset:conll2003",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-05-17T05:21:06Z |
---
language: en
license: apache-2.0
datasets:
- conll2003
---
|
gossminn/predict-perception-bertino-cause-object
|
gossminn
| 2022-07-14T14:14:55Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-14T14:06:37Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: predict-perception-bertino-cause-object
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predict-perception-bertino-cause-object
This model is a fine-tuned version of [indigo-ai/BERTino](https://huggingface.co/indigo-ai/BERTino) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0766
- R2: 0.8216
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 47
### Training results
| Training Loss | Epoch | Step | Validation Loss | R2 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6807 | 1.0 | 14 | 0.4011 | 0.0652 |
| 0.3529 | 2.0 | 28 | 0.2304 | 0.4631 |
| 0.1539 | 3.0 | 42 | 0.0596 | 0.8611 |
| 0.0853 | 4.0 | 56 | 0.1600 | 0.6272 |
| 0.066 | 5.0 | 70 | 0.1596 | 0.6280 |
| 0.0563 | 6.0 | 84 | 0.1146 | 0.7330 |
| 0.0777 | 7.0 | 98 | 0.1010 | 0.7646 |
| 0.0299 | 8.0 | 112 | 0.0897 | 0.7910 |
| 0.0311 | 9.0 | 126 | 0.0832 | 0.8061 |
| 0.0274 | 10.0 | 140 | 0.0988 | 0.7697 |
| 0.0262 | 11.0 | 154 | 0.1048 | 0.7557 |
| 0.0204 | 12.0 | 168 | 0.0615 | 0.8566 |
| 0.0254 | 13.0 | 182 | 0.0742 | 0.8270 |
| 0.0251 | 14.0 | 196 | 0.0923 | 0.7850 |
| 0.0149 | 15.0 | 210 | 0.0663 | 0.8456 |
| 0.0141 | 16.0 | 224 | 0.0755 | 0.8241 |
| 0.0112 | 17.0 | 238 | 0.0905 | 0.7891 |
| 0.0108 | 18.0 | 252 | 0.0834 | 0.8057 |
| 0.0096 | 19.0 | 266 | 0.0823 | 0.8082 |
| 0.0073 | 20.0 | 280 | 0.0825 | 0.8078 |
| 0.0092 | 21.0 | 294 | 0.0869 | 0.7974 |
| 0.0075 | 22.0 | 308 | 0.0744 | 0.8266 |
| 0.0075 | 23.0 | 322 | 0.0825 | 0.8078 |
| 0.0062 | 24.0 | 336 | 0.0797 | 0.8144 |
| 0.0065 | 25.0 | 350 | 0.0793 | 0.8152 |
| 0.007 | 26.0 | 364 | 0.0840 | 0.8043 |
| 0.0067 | 27.0 | 378 | 0.0964 | 0.7753 |
| 0.0064 | 28.0 | 392 | 0.0869 | 0.7976 |
| 0.0063 | 29.0 | 406 | 0.0766 | 0.8215 |
| 0.0057 | 30.0 | 420 | 0.0764 | 0.8219 |
| 0.0057 | 31.0 | 434 | 0.0796 | 0.8145 |
| 0.0054 | 32.0 | 448 | 0.0853 | 0.8012 |
| 0.0044 | 33.0 | 462 | 0.0750 | 0.8253 |
| 0.0072 | 34.0 | 476 | 0.0782 | 0.8179 |
| 0.006 | 35.0 | 490 | 0.0867 | 0.7979 |
| 0.0054 | 36.0 | 504 | 0.0819 | 0.8092 |
| 0.0047 | 37.0 | 518 | 0.0839 | 0.8045 |
| 0.0043 | 38.0 | 532 | 0.0764 | 0.8221 |
| 0.0039 | 39.0 | 546 | 0.0728 | 0.8303 |
| 0.0041 | 40.0 | 560 | 0.0755 | 0.8241 |
| 0.0038 | 41.0 | 574 | 0.0729 | 0.8301 |
| 0.0034 | 42.0 | 588 | 0.0781 | 0.8180 |
| 0.0038 | 43.0 | 602 | 0.0762 | 0.8224 |
| 0.0032 | 44.0 | 616 | 0.0777 | 0.8189 |
| 0.0035 | 45.0 | 630 | 0.0776 | 0.8191 |
| 0.0037 | 46.0 | 644 | 0.0765 | 0.8217 |
| 0.0036 | 47.0 | 658 | 0.0766 | 0.8216 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
ericklerouge123/xlm-roberta-base-finetuned-panx-de
|
ericklerouge123
| 2022-07-14T14:05:25Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-17T20:42:35Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8648740833380706
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1365
- F1: 0.8649
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2553 | 1.0 | 525 | 0.1575 | 0.8279 |
| 0.1284 | 2.0 | 1050 | 0.1386 | 0.8463 |
| 0.0813 | 3.0 | 1575 | 0.1365 | 0.8649 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
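### Usage example (sketch)
A minimal sketch for German NER with this checkpoint (the entity labels follow the PAN-X/WikiANN scheme used by the xtreme dataset):
```python
from transformers import pipeline

# aggregation_strategy groups word pieces back into whole entities.
ner = pipeline(
    "token-classification",
    model="ericklerouge123/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```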
|
jgriffi/bart_abstract_summarization
|
jgriffi
| 2022-07-14T12:28:07Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-07-14T09:13:23Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: bart_abstract_summarization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart_abstract_summarization
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1852
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0559 | 0.25 | 500 | 0.1601 |
| 0.0068 | 0.49 | 1000 | 0.2571 |
| 0.0016 | 0.74 | 1500 | 0.4330 |
| 0.0001 | 0.99 | 2000 | 0.1852 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
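### Usage example (sketch)
A minimal summarization sketch with this checkpoint (the generation parameters below are illustrative, not tuned):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="jgriffi/bart_abstract_summarization")
text = "Replace this placeholder with the abstract or passage you want to summarize."
print(summarizer(text, max_length=128, min_length=30, do_sample=False)[0]["summary_text"])
```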
|
stokic/ppo-LunarLander-v2
|
stokic
| 2022-07-14T12:22:36Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-14T12:21:59Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 109.33 +/- 78.20
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal usage sketch (the checkpoint filename below is assumed; adjust it to the actual `.zip` in this repo):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it (the filename is an assumption).
checkpoint = load_from_hub(repo_id="stokic/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
ClassCat/roberta-base-catalan
|
ClassCat
| 2022-07-14T11:36:43Z | 6 | 2 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"ca",
"dataset:wikipedia",
"dataset:cc100",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-06-30T14:32:46Z |
---
language: ca
license: cc-by-sa-4.0
datasets:
- wikipedia
- cc100
widget:
- text: "És molt <mask> per a mi."
- text: "Vas jugar a <mask>."
- text: "Ell està una mica <mask>."
- text: "És un bon <mask>."
- text: "M'agradaria menjar una <mask>."
---
## RoBERTa Catalan base model (Uncased)
### Prerequisites
transformers==4.19.2
### Model architecture
This model uses RoBERTa base settings except for the vocabulary size.
### Tokenizer
Uses a BPE tokenizer with a vocabulary size of 50,000.
### Training Data
* [wiki40b/ca](https://www.tensorflow.org/datasets/catalog/wiki40b#wiki40bca) (Catalan Wikipedia)
* Subset of [CC-100/ca](https://data.statmt.org/cc-100/) : Monolingual Datasets from Web Crawl Data
### Usage
```python
from transformers import pipeline
unmasker = pipeline('fill-mask', model='ClassCat/roberta-base-catalan')
unmasker("Jo <mask> japonès.")
```
|
Siyong/MC
|
Siyong
| 2022-07-14T10:48:35Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-14T08:44:08Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec-base-All
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec-base-All
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0545
- Wer: 0.8861
- Cer: 0.5014
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 120
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:------:|:-----:|:---------------:|:------:|:------:|
| No log | 3.33 | 500 | 4.0654 | 1.0 | 0.9823 |
| No log | 6.67 | 1000 | 3.4532 | 1.0 | 0.9823 |
| No log | 10.0 | 1500 | 3.0707 | 0.9992 | 0.9781 |
| No log | 13.33 | 2000 | 2.7335 | 1.0017 | 0.9027 |
| No log | 16.67 | 2500 | 2.5896 | 1.0690 | 0.7302 |
| No log | 20.0 | 3000 | 2.3315 | 1.0690 | 0.6677 |
| No log | 23.33 | 3500 | 2.2217 | 1.0150 | 0.5966 |
| No log | 26.67 | 4000 | 2.3802 | 1.0549 | 0.5948 |
| No log | 30.0 | 4500 | 2.2208 | 0.9975 | 0.5681 |
| 2.4224 | 33.33 | 5000 | 2.2687 | 0.9800 | 0.5537 |
| 2.4224 | 36.67 | 5500 | 2.3169 | 0.9476 | 0.5493 |
| 2.4224 | 40.0 | 6000 | 2.5196 | 0.9900 | 0.5509 |
| 2.4224 | 43.33 | 6500 | 2.4816 | 0.9501 | 0.5272 |
| 2.4224 | 46.67 | 7000 | 2.4894 | 0.9485 | 0.5276 |
| 2.4224 | 50.0 | 7500 | 2.4555 | 0.9418 | 0.5305 |
| 2.4224 | 53.33 | 8000 | 2.7326 | 0.9559 | 0.5255 |
| 2.4224 | 56.67 | 8500 | 2.5514 | 0.9227 | 0.5209 |
| 2.4224 | 60.0 | 9000 | 2.9135 | 0.9717 | 0.5455 |
| 2.4224 | 63.33 | 9500 | 3.0465 | 0.8346 | 0.5002 |
| 0.8569 | 66.67 | 10000 | 2.8177 | 0.9302 | 0.5216 |
| 0.8569 | 70.0 | 10500 | 2.9908 | 0.9310 | 0.5128 |
| 0.8569 | 73.33 | 11000 | 3.1752 | 0.9235 | 0.5284 |
| 0.8569 | 76.67 | 11500 | 2.7412 | 0.8886 | 0.5 |
| 0.8569 | 80.0 | 12000 | 2.7362 | 0.9127 | 0.5040 |
| 0.8569 | 83.33 | 12500 | 2.9636 | 0.9152 | 0.5093 |
| 0.8569 | 86.67 | 13000 | 3.0139 | 0.9011 | 0.5097 |
| 0.8569 | 90.0 | 13500 | 2.8325 | 0.8853 | 0.5032 |
| 0.8569 | 93.33 | 14000 | 3.0383 | 0.8845 | 0.5056 |
| 0.8569 | 96.67 | 14500 | 2.7931 | 0.8795 | 0.4965 |
| 0.3881 | 100.0 | 15000 | 2.8972 | 0.8928 | 0.5012 |
| 0.3881 | 103.33 | 15500 | 2.7780 | 0.8736 | 0.4947 |
| 0.3881 | 106.67 | 16000 | 3.1081 | 0.9036 | 0.5109 |
| 0.3881 | 110.0 | 16500 | 3.0078 | 0.8928 | 0.5032 |
| 0.3881 | 113.33 | 17000 | 3.0245 | 0.8886 | 0.5009 |
| 0.3881 | 116.67 | 17500 | 3.0739 | 0.8928 | 0.5065 |
| 0.3881 | 120.0 | 18000 | 3.0545 | 0.8861 | 0.5014 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
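### Usage example (sketch)
A minimal transcription sketch with this checkpoint (assumes a 16 kHz mono audio file; ffmpeg is needed for decoding):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Siyong/MC")
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder path
```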
|
google/tapas-medium-finetuned-wtq
|
google
| 2022-07-14T10:14:59Z | 31 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"tapas",
"table-question-answering",
"en",
"dataset:wikitablequestions",
"arxiv:2004.02349",
"arxiv:2010.00571",
"arxiv:1508.00305",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
table-question-answering
| 2022-03-02T23:29:05Z |
---
language: en
tags:
- tapas
- table-question-answering
license: apache-2.0
datasets:
- wikitablequestions
---
# TAPAS medium model fine-tuned on WikiTable Questions (WTQ)
This model has 2 versions which can be used. The default version corresponds to the `tapas_wtq_wikisql_sqa_inter_masklm_medium_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas).
This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training, and then fine-tuned in a chain on [SQA](https://www.microsoft.com/en-us/download/details.aspx?id=54253), [WikiSQL](https://github.com/salesforce/WikiSQL) and finally [WTQ](https://github.com/ppasupat/WikiTableQuestions). It uses relative position embeddings (i.e. resetting the position index at every cell of the table).
The other (non-default) version which can be used is:
- `no_reset`, which corresponds to `tapas_wtq_wikisql_sqa_inter_masklm_medium` (intermediate pre-training, absolute position embeddings).
Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by
the Hugging Face team and contributors.
## Results
Size | Reset | Dev Accuracy | Link
-------- | --------| -------- | ----
LARGE | noreset | 0.5062 | [tapas-large-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-large-finetuned-wtq/tree/no_reset)
LARGE | reset | 0.5097 | [tapas-large-finetuned-wtq](https://huggingface.co/google/tapas-large-finetuned-wtq/tree/main)
BASE | noreset | 0.4525 | [tapas-base-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-base-finetuned-wtq/tree/no_reset)
BASE | reset | 0.4638 | [tapas-base-finetuned-wtq](https://huggingface.co/google/tapas-base-finetuned-wtq/tree/main)
**MEDIUM** | **noreset** | **0.4324** | [tapas-medium-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-medium-finetuned-wtq/tree/no_reset)
**MEDIUM** | **reset** | **0.4324** | [tapas-medium-finetuned-wtq](https://huggingface.co/google/tapas-medium-finetuned-wtq/tree/main)
SMALL | noreset | 0.3681 | [tapas-small-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-small-finetuned-wtq/tree/no_reset)
SMALL | reset | 0.3762 | [tapas-small-finetuned-wtq](https://huggingface.co/google/tapas-small-finetuned-wtq/tree/main)
MINI | noreset | 0.2783 | [tapas-mini-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-mini-finetuned-wtq/tree/no_reset)
MINI | reset | 0.2854 | [tapas-mini-finetuned-wtq](https://huggingface.co/google/tapas-mini-finetuned-wtq/tree/main)
TINY | noreset | 0.0823 | [tapas-tiny-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-tiny-finetuned-wtq/tree/no_reset)
TINY | reset | 0.1039 | [tapas-tiny-finetuned-wtq](https://huggingface.co/google/tapas-tiny-finetuned-wtq/tree/main)
## Model description
TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion.
This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it
can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in
the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words.
This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other,
or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional
representation of a table and associated text.
- Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating
a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence
is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements.
This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used
to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed
or refuted by the contents of a table. Fine-tuning is done by adding a cell selection head and aggregation head on top of the pre-trained model, and then jointly training these randomly initialized classification heads with the base model on SQA, WikiSQL and finally WTQ.
## Intended uses & limitations
You can use this model for answering questions related to a table.
For code examples, we refer to the documentation of TAPAS on the HuggingFace website.
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Question [SEP] Flattened table [SEP]
```
The authors first converted the WTQ dataset into the SQA format using automatic conversion scripts.
### Fine-tuning
The model was fine-tuned on 32 Cloud TPU v3 cores for 50,000 steps with maximum sequence length 512 and batch size of 512.
In this setup, fine-tuning takes around 10 hours. The optimizer used is Adam with a learning rate of 1.93581e-5, and a warmup
ratio of 0.128960. An inductive bias is added such that the model only selects cells of the same column. This is reflected by the
`select_one_column` parameter of `TapasConfig`. See the [paper](https://arxiv.org/abs/2004.02349) for more details (tables 11 and
12).
### BibTeX entry and citation info
```bibtex
@misc{herzig2020tapas,
title={TAPAS: Weakly Supervised Table Parsing via Pre-training},
author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos},
year={2020},
eprint={2004.02349},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
```bibtex
@misc{eisenschlos2020understanding,
title={Understanding tables with intermediate pre-training},
author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller},
year={2020},
eprint={2010.00571},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@article{DBLP:journals/corr/PasupatL15,
author = {Panupong Pasupat and
Percy Liang},
title = {Compositional Semantic Parsing on Semi-Structured Tables},
journal = {CoRR},
volume = {abs/1508.00305},
year = {2015},
url = {http://arxiv.org/abs/1508.00305},
archivePrefix = {arXiv},
eprint = {1508.00305},
timestamp = {Mon, 13 Aug 2018 16:47:37 +0200},
biburl = {https://dblp.org/rec/journals/corr/PasupatL15.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
google/tapas-mini-finetuned-wtq
|
google
| 2022-07-14T10:14:00Z | 365 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"tapas",
"table-question-answering",
"en",
"dataset:wikitablequestions",
"arxiv:2004.02349",
"arxiv:2010.00571",
"arxiv:1508.00305",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
table-question-answering
| 2022-03-02T23:29:05Z |
---
language: en
tags:
- tapas
- table-question-answering
license: apache-2.0
datasets:
- wikitablequestions
---
# TAPAS mini model fine-tuned on WikiTable Questions (WTQ)
This model has 2 versions which can be used. The default version corresponds to the `tapas_wtq_wikisql_sqa_inter_masklm_mini_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas).
This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training, and then fine-tuned in a chain on [SQA](https://www.microsoft.com/en-us/download/details.aspx?id=54253), [WikiSQL](https://github.com/salesforce/WikiSQL) and finally [WTQ](https://github.com/ppasupat/WikiTableQuestions). It uses relative position embeddings (i.e. resetting the position index at every cell of the table).
The other (non-default) version which can be used is:
- `no_reset`, which corresponds to `tapas_wtq_wikisql_sqa_inter_masklm_mini` (intermediate pre-training, absolute position embeddings).
Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by
the Hugging Face team and contributors.
## Results
Size | Reset | Dev Accuracy | Link
-------- | --------| -------- | ----
LARGE | noreset | 0.5062 | [tapas-large-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-large-finetuned-wtq/tree/no_reset)
LARGE | reset | 0.5097 | [tapas-large-finetuned-wtq](https://huggingface.co/google/tapas-large-finetuned-wtq/tree/main)
BASE | noreset | 0.4525 | [tapas-base-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-base-finetuned-wtq/tree/no_reset)
BASE | reset | 0.4638 | [tapas-base-finetuned-wtq](https://huggingface.co/google/tapas-base-finetuned-wtq/tree/main)
MEDIUM | noreset | 0.4324 | [tapas-medium-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-medium-finetuned-wtq/tree/no_reset)
MEDIUM | reset | 0.4324 | [tapas-medium-finetuned-wtq](https://huggingface.co/google/tapas-medium-finetuned-wtq/tree/main)
SMALL | noreset | 0.3681 | [tapas-small-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-small-finetuned-wtq/tree/no_reset)
SMALL | reset | 0.3762 | [tapas-small-finetuned-wtq](https://huggingface.co/google/tapas-small-finetuned-wtq/tree/main)
**MINI** | **noreset** | **0.2783** | [tapas-mini-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-mini-finetuned-wtq/tree/no_reset)
**MINI** | **reset** | **0.2854** | [tapas-mini-finetuned-wtq](https://huggingface.co/google/tapas-mini-finetuned-wtq/tree/main)
TINY | noreset | 0.0823 | [tapas-tiny-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-tiny-finetuned-wtq/tree/no_reset)
TINY | reset | 0.1039 | [tapas-tiny-finetuned-wtq](https://huggingface.co/google/tapas-tiny-finetuned-wtq/tree/main)
## Model description
TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion.
This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it
can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in
the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words.
This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other,
or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional
representation of a table and associated text.
- Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating
a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence
is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements.
This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used
to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed
or refuted by the contents of a table. Fine-tuning is done by adding a cell selection head and aggregation head on top of the pre-trained model, and then jointly training these randomly initialized classification heads with the base model on SQA, WikiSQL and finally WTQ.
## Intended uses & limitations
You can use this model for answering questions related to a table.
For code examples, we refer to the documentation of TAPAS on the HuggingFace website.
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Question [SEP] Flattened table [SEP]
```
The authors first converted the WTQ dataset into the SQA format using automatic conversion scripts.
### Fine-tuning
The model was fine-tuned on 32 Cloud TPU v3 cores for 50,000 steps with maximum sequence length 512 and batch size of 512.
In this setup, fine-tuning takes around 10 hours. The optimizer used is Adam with a learning rate of 1.93581e-5, and a warmup
ratio of 0.128960. An inductive bias is added such that the model only selects cells of the same column. This is reflected by the
`select_one_column` parameter of `TapasConfig`. See the [paper](https://arxiv.org/abs/2004.02349) for more details (tables 11 and
12).
### BibTeX entry and citation info
```bibtex
@misc{herzig2020tapas,
title={TAPAS: Weakly Supervised Table Parsing via Pre-training},
author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos},
year={2020},
eprint={2004.02349},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
```bibtex
@misc{eisenschlos2020understanding,
title={Understanding tables with intermediate pre-training},
author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller},
year={2020},
eprint={2010.00571},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@article{DBLP:journals/corr/PasupatL15,
author = {Panupong Pasupat and
Percy Liang},
title = {Compositional Semantic Parsing on Semi-Structured Tables},
journal = {CoRR},
volume = {abs/1508.00305},
year = {2015},
url = {http://arxiv.org/abs/1508.00305},
archivePrefix = {arXiv},
eprint = {1508.00305},
timestamp = {Mon, 13 Aug 2018 16:47:37 +0200},
biburl = {https://dblp.org/rec/journals/corr/PasupatL15.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
google/tapas-base-finetuned-wtq
|
google
| 2022-07-14T10:12:59Z | 18,559 | 209 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"tapas",
"table-question-answering",
"en",
"dataset:wikitablequestions",
"arxiv:2004.02349",
"arxiv:2010.00571",
"arxiv:1508.00305",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
table-question-answering
| 2022-03-02T23:29:05Z |
---
language: en
tags:
- tapas
license: apache-2.0
datasets:
- wikitablequestions
---
# TAPAS base model fine-tuned on WikiTable Questions (WTQ)
This model has 2 versions which can be used. The default version corresponds to the `tapas_wtq_wikisql_sqa_inter_masklm_base_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas).
This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training, and then fine-tuned in a chain on [SQA](https://www.microsoft.com/en-us/download/details.aspx?id=54253), [WikiSQL](https://github.com/salesforce/WikiSQL) and finally [WTQ](https://github.com/ppasupat/WikiTableQuestions). It uses relative position embeddings (i.e. resetting the position index at every cell of the table).
The other (non-default) version which can be used is:
- `no_reset`, which corresponds to `tapas_wtq_wikisql_sqa_inter_masklm_base` (intermediate pre-training, absolute position embeddings).
Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by
the Hugging Face team and contributors.
## Results
Size | Reset | Dev Accuracy | Link
-------- | --------| -------- | ----
LARGE | noreset | 0.5062 | [tapas-large-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-large-finetuned-wtq/tree/no_reset)
LARGE | reset | 0.5097 | [tapas-large-finetuned-wtq](https://huggingface.co/google/tapas-large-finetuned-wtq/tree/main)
**BASE** | **noreset** | **0.4525** | [tapas-base-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-base-finetuned-wtq/tree/no_reset)
**BASE** | **reset** | **0.4638** | [tapas-base-finetuned-wtq](https://huggingface.co/google/tapas-base-finetuned-wtq/tree/main)
MEDIUM | noreset | 0.4324 | [tapas-medium-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-medium-finetuned-wtq/tree/no_reset)
MEDIUM | reset | 0.4324 | [tapas-medium-finetuned-wtq](https://huggingface.co/google/tapas-medium-finetuned-wtq/tree/main)
SMALL | noreset | 0.3681 | [tapas-small-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-small-finetuned-wtq/tree/no_reset)
SMALL | reset | 0.3762 | [tapas-small-finetuned-wtq](https://huggingface.co/google/tapas-small-finetuned-wtq/tree/main)
MINI | noreset | 0.2783 | [tapas-mini-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-mini-finetuned-wtq/tree/no_reset)
MINI | reset | 0.2854 | [tapas-mini-finetuned-wtq](https://huggingface.co/google/tapas-mini-finetuned-wtq/tree/main)
TINY | noreset | 0.0823 | [tapas-tiny-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-tiny-finetuned-wtq/tree/no_reset)
TINY | reset | 0.1039 | [tapas-tiny-finetuned-wtq](https://huggingface.co/google/tapas-tiny-finetuned-wtq/tree/main)
## Model description
TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion.
This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it
can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in
the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words.
This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other,
or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional
representation of a table and associated text.
- Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating
a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence
is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements.
This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used
to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed
or refuted by the contents of a table. Fine-tuning is done by adding a cell selection head and aggregation head on top of the pre-trained model, and then jointly training these randomly initialized classification heads with the base model on SQA, WikiSQL and finally WTQ.
## Intended uses & limitations
You can use this model for answering questions related to a table.
For code examples, we refer to the documentation of TAPAS on the HuggingFace website.
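As a quick start, the `table-question-answering` pipeline can load this checkpoint directly; a minimal sketch follows (the table below is made up for illustration, and TAPAS additionally requires PyTorch with the `torch-scatter` dependency):
```python
from transformers import pipeline
import pandas as pd

tqa = pipeline("table-question-answering", model="google/tapas-base-finetuned-wtq")
# The pipeline expects every table cell as a string.
table = pd.DataFrame({"city": ["athens", "paris", "beijing"], "year": ["1896", "1900", "2008"]})
print(tqa(table=table, query="which city hosted the games in 2008?"))
```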
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Question [SEP] Flattened table [SEP]
```
The authors first converted the WTQ dataset into the SQA format using automatic conversion scripts.
### Fine-tuning
The model was fine-tuned on 32 Cloud TPU v3 cores for 50,000 steps with maximum sequence length 512 and batch size of 512.
In this setup, fine-tuning takes around 10 hours. The optimizer used is Adam with a learning rate of 1.93581e-5, and a warmup
ratio of 0.128960. An inductive bias is added such that the model only selects cells of the same column. This is reflected by the
`select_one_column` parameter of `TapasConfig`. See the [paper](https://arxiv.org/abs/2004.02349) for more details (tables 11 and
12).
### BibTeX entry and citation info
```bibtex
@misc{herzig2020tapas,
title={TAPAS: Weakly Supervised Table Parsing via Pre-training},
author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos},
year={2020},
eprint={2004.02349},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
```bibtex
@misc{eisenschlos2020understanding,
title={Understanding tables with intermediate pre-training},
author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller},
year={2020},
eprint={2010.00571},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@article{DBLP:journals/corr/PasupatL15,
author = {Panupong Pasupat and
Percy Liang},
title = {Compositional Semantic Parsing on Semi-Structured Tables},
journal = {CoRR},
volume = {abs/1508.00305},
year = {2015},
url = {http://arxiv.org/abs/1508.00305},
archivePrefix = {arXiv},
eprint = {1508.00305},
timestamp = {Mon, 13 Aug 2018 16:47:37 +0200},
biburl = {https://dblp.org/rec/journals/corr/PasupatL15.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
microsoft/tapex-large-finetuned-tabfact
|
microsoft
| 2022-07-14T10:10:10Z | 136 | 8 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text-classification",
"tapex",
"table-question-answering",
"en",
"dataset:tab_fact",
"arxiv:2107.07653",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
table-question-answering
| 2022-03-02T23:29:05Z |
---
language: en
tags:
- tapex
- table-question-answering
datasets:
- tab_fact
license: mit
---
# TAPEX (large-sized model)
TAPEX was proposed in [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. The original repo can be found [here](https://github.com/microsoft/Table-Pretraining).
## Model description
TAPEX (**Ta**ble **P**re-training via **Ex**ecution) is a conceptually simple and empirically powerful pre-training approach to empower existing models with *table reasoning* skills. TAPEX realizes table pre-training by learning a neural SQL executor over a synthetic corpus, which is obtained by automatically synthesizing executable SQL queries.
TAPEX is based on the BART architecture, the transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder.
This model is the `tapex-large` model fine-tuned on the [Tabfact](https://huggingface.co/datasets/tab_fact) dataset.
## Intended Uses
You can use the model for table fact verification.
### How to Use
Here is how to use this model in transformers:
```python
from transformers import TapexTokenizer, BartForSequenceClassification
import pandas as pd
tokenizer = TapexTokenizer.from_pretrained("microsoft/tapex-large-finetuned-tabfact")
model = BartForSequenceClassification.from_pretrained("microsoft/tapex-large-finetuned-tabfact")
data = {
"year": [1896, 1900, 1904, 2004, 2008, 2012],
"city": ["athens", "paris", "st. louis", "athens", "beijing", "london"]
}
table = pd.DataFrame.from_dict(data)
# tapex accepts uncased input since it is pre-trained on the uncased corpus
query = "beijing hosts the olympic games in 2012"
encoding = tokenizer(table=table, query=query, return_tensors="pt")
outputs = model(**encoding)
output_id = int(outputs.logits[0].argmax(dim=0))
print(model.config.id2label[output_id])
# Refused (i.e. the claim is refuted by the table)
```
### How to Eval
Please find the eval script [here](https://github.com/SivilTaram/transformers/tree/add_tapex_bis/examples/research_projects/tapex).
### BibTeX entry and citation info
```bibtex
@inproceedings{
liu2022tapex,
title={{TAPEX}: Table Pre-training via Learning a Neural {SQL} Executor},
author={Qian Liu and Bei Chen and Jiaqi Guo and Morteza Ziyadi and Zeqi Lin and Weizhu Chen and Jian-Guang Lou},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=O50443AsCP}
}
```
|
sam34738/xlm-roberta-hindi-nisha
|
sam34738
| 2022-07-14T09:40:30Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-14T09:20:57Z |
---
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-hindi-nisha
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-hindi-nisha
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-emotion](https://huggingface.co/cardiffnlp/twitter-roberta-base-emotion) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5305
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1429 | 1.0 | 460 | 0.7002 |
| 0.5404 | 2.0 | 920 | 0.5305 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Tokenizers 0.12.1
|
NinaXiao/distilroberta-base-wiki-mark
|
NinaXiao
| 2022-07-14T09:05:03Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-14T08:42:17Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-wiki-mark
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-wiki-mark
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0062
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2841 | 1.0 | 1265 | 2.0553 |
| 2.1536 | 2.0 | 2530 | 1.9840 |
| 2.1067 | 3.0 | 3795 | 1.9731 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
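### Usage example (sketch)
A minimal fill-mask sketch with this checkpoint (it uses the RoBERTa-style `<mask>` token):
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="NinaXiao/distilroberta-base-wiki-mark")
print(unmasker("The capital of France is <mask>."))
```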
|
sun1638650145/Reinforce-CartPole-v1
|
sun1638650145
| 2022-07-14T07:13:42Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-14T07:13:09Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train your own, check out Unit 5 of the Deep Reinforcement Learning Course: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
sayakpaul/mit-b0-finetuned-sidewalk-semantic
|
sayakpaul
| 2022-07-14T03:29:57Z | 4 | 2 |
transformers
|
[
"transformers",
"tf",
"segformer",
"generated_from_keras_callback",
"vision",
"image-segmentation",
"dataset:segments/sidewalk-semantic",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-segmentation
| 2022-07-13T17:45:40Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
- vision
- image-segmentation
model-index:
- name: mit-b0-finetuned-sidewalk-semantic
results: []
datasets:
- segments/sidewalk-semantic
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# mit-b0-finetuned-sidewalk-semantic
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the [segments/sidewalk-semantic](https://huggingface.co/datasets/segments/sidewalk-semantic) dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2125
- Validation Loss: 0.5151
- Epoch: 49
## Model description
The model was fine-tuned from [this model](https://huggingface.co/nvidia/mit-b0). More information about the model is available
[here](https://huggingface.co/docs/transformers/model_doc/segformer).
## Intended uses & limitations
This fine-tuned model is just for demonstration purposes. Before using it in production, it should be thoroughly inspected and adjusted
if needed.
## Training and evaluation data
[`segments/sidewalk-semantic`](https://huggingface.co/datasets/segments/sidewalk-semantic)
## Training procedure
More information is available here: [deep-diver/segformer-tf-transformers](https://github.com/deep-diver/segformer-tf-transformers).
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 6e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.0785 | 1.1753 | 0 |
| 1.1312 | 0.8807 | 1 |
| 0.9315 | 0.7585 | 2 |
| 0.7952 | 0.7261 | 3 |
| 0.7273 | 0.6701 | 4 |
| 0.6603 | 0.6396 | 5 |
| 0.6198 | 0.6238 | 6 |
| 0.5958 | 0.5925 | 7 |
| 0.5378 | 0.5714 | 8 |
| 0.5236 | 0.5786 | 9 |
| 0.4960 | 0.5588 | 10 |
| 0.4633 | 0.5624 | 11 |
| 0.4562 | 0.5450 | 12 |
| 0.4167 | 0.5438 | 13 |
| 0.4100 | 0.5248 | 14 |
| 0.3947 | 0.5354 | 15 |
| 0.3867 | 0.5069 | 16 |
| 0.3803 | 0.5285 | 17 |
| 0.3696 | 0.5318 | 18 |
| 0.3386 | 0.5162 | 19 |
| 0.3349 | 0.5312 | 20 |
| 0.3233 | 0.5304 | 21 |
| 0.3328 | 0.5178 | 22 |
| 0.3140 | 0.5131 | 23 |
| 0.3081 | 0.5049 | 24 |
| 0.3046 | 0.5011 | 25 |
| 0.3209 | 0.5197 | 26 |
| 0.2966 | 0.5151 | 27 |
| 0.2829 | 0.5166 | 28 |
| 0.2968 | 0.5210 | 29 |
| 0.2818 | 0.5300 | 30 |
| 0.2739 | 0.5221 | 31 |
| 0.2602 | 0.5340 | 32 |
| 0.2570 | 0.5124 | 33 |
| 0.2557 | 0.5234 | 34 |
| 0.2593 | 0.5098 | 35 |
| 0.2582 | 0.5329 | 36 |
| 0.2439 | 0.5373 | 37 |
| 0.2413 | 0.5141 | 38 |
| 0.2423 | 0.5210 | 39 |
| 0.2340 | 0.5043 | 40 |
| 0.2244 | 0.5300 | 41 |
| 0.2246 | 0.4978 | 42 |
| 0.2270 | 0.5385 | 43 |
| 0.2254 | 0.5125 | 44 |
| 0.2176 | 0.5510 | 45 |
| 0.2194 | 0.5384 | 46 |
| 0.2136 | 0.5186 | 47 |
| 0.2121 | 0.5356 | 48 |
| 0.2125 | 0.5151 | 49 |
### Framework versions
- Transformers 4.21.0.dev0
- TensorFlow 2.8.0
- Datasets 2.3.2
- Tokenizers 0.12.1
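### Usage example (sketch)
A minimal TensorFlow inference sketch, assuming a `transformers` version that ships the TensorFlow SegFormer port and an arbitrary local street-scene image:
```python
import tensorflow as tf
from PIL import Image
from transformers import SegformerFeatureExtractor, TFSegformerForSemanticSegmentation

ckpt = "sayakpaul/mit-b0-finetuned-sidewalk-semantic"
feature_extractor = SegformerFeatureExtractor.from_pretrained(ckpt)
model = TFSegformerForSemanticSegmentation.from_pretrained(ckpt)

image = Image.open("sidewalk.jpg")  # placeholder path
inputs = feature_extractor(images=image, return_tensors="tf")
logits = model(**inputs).logits           # (batch, num_labels, height/4, width/4)
segmentation = tf.argmax(logits, axis=1)  # per-pixel class ids at reduced resolution
```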
|
Billwzl/20split_dataset
|
Billwzl
| 2022-07-14T03:21:48Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-09T08:34:41Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: 20split_dataset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20split_dataset
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0446
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.5971 | 1.0 | 11851 | 2.3479 |
| 2.3773 | 2.0 | 23702 | 2.2446 |
| 2.2663 | 3.0 | 35553 | 2.1630 |
| 2.1842 | 4.0 | 47404 | 2.1059 |
| 2.1145 | 5.0 | 59255 | 2.0626 |
| 2.0652 | 6.0 | 71106 | 2.0446 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
kuttersn/gpt2-finetuned-redditComments
|
kuttersn
| 2022-07-14T01:38:25Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-07-07T14:15:03Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: gpt2-finetuned-redditComments
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-finetuned-redditComments
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8418
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.9535 | 1.0 | 4320 | 3.8888 |
| 3.8832 | 2.0 | 8640 | 3.8523 |
| 3.8708 | 3.0 | 12960 | 3.8418 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
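### Usage example (sketch)
A minimal generation sketch with this checkpoint (prompt and decoding settings are illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="kuttersn/gpt2-finetuned-redditComments")
print(generator("I think the best part about this game is", max_length=50)[0]["generated_text"])
```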
|
ClassCat/roberta-base-latin-v2
|
ClassCat
| 2022-07-14T00:20:13Z | 162 | 2 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"la",
"dataset:cc100",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-01T18:45:18Z |
---
language: la
license: cc-by-sa-4.0
datasets:
- cc100
widget:
- text: quod est tibi <mask>?
- text: vita brevis, ars <mask>.
- text: errare <mask> est.
- text: usus est magister <mask>.
---
## RoBERTa Latin base model Version 2 (Uncased)
### Prerequisites
transformers==4.19.2
### Model architecture
This model uses RoBERTa base settings except for the vocabulary size.
### Tokenizer
Uses a BPE tokenizer with a vocabulary size of 50,000.
### Training Data
* Subset of [CC-100/la](https://data.statmt.org/cc-100/) : Monolingual Datasets from Web Crawl Data
### Usage
```python
from transformers import pipeline
unmasker = pipeline('fill-mask', model='ClassCat/roberta-base-latin-v2')
unmasker("vita brevis, ars <mask>")
```
|
joaoalvarenga/bloom-8bit
|
joaoalvarenga
| 2022-07-14T00:12:48Z | 26 | 75 |
transformers
|
[
"transformers",
"pytorch",
"bloom",
"text-generation",
"ak",
"ar",
"as",
"bm",
"bn",
"ca",
"en",
"es",
"eu",
"fon",
"fr",
"gu",
"hi",
"id",
"ig",
"ki",
"kn",
"lg",
"ln",
"ml",
"mr",
"ne",
"nso",
"ny",
"or",
"pa",
"pt",
"rn",
"rw",
"sn",
"st",
"sw",
"ta",
"te",
"tn",
"ts",
"tum",
"tw",
"ur",
"vi",
"wo",
"xh",
"yo",
"zh",
"zu",
"arxiv:2106.09685",
"license:bigscience-bloom-rail-1.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2022-07-11T11:06:46Z |
---
inference: false
license: bigscience-bloom-rail-1.0
language:
- ak
- ar
- as
- bm
- bn
- ca
- en
- es
- eu
- fon
- fr
- gu
- hi
- id
- ig
- ki
- kn
- lg
- ln
- ml
- mr
- ne
- nso
- ny
- or
- pa
- pt
- rn
- rw
- sn
- st
- sw
- ta
- te
- tn
- ts
- tum
- tw
- ur
- vi
- wo
- xh
- yo
- zh
- zu
pipeline_tag: text-generation
---
### Quantized bigscience/bloom with 8-bit weights
Heavily inspired by [Hivemind's GPT-J-6B with 8-bit weights](https://huggingface.co/hivemind/gpt-j-6B-8bit), this is a version of [bigscience/bloom](https://huggingface.co/bigscience/bloom), a ~176-billion-parameter language model that you can run and fine-tune with less memory.
Here, we also apply [LoRA (Low Rank Adaptation)](https://arxiv.org/abs/2106.09685) to reduce the model size. The original version takes \~353GB of memory, while this version takes **\~180GB**.
Our main goal is to generate a model compressed enough to be deployed in a traditional Kubernetes cluster.
### How to fine-tune
In this [notebook](https://nbviewer.org/urls/huggingface.co/joaoalvarenga/bloom-8bit/raw/main/fine-tuning-example.ipynb) you can find an adaptation from [Hivemind's GPT-J 8-bit fine-tuning notebook](https://colab.research.google.com/drive/1ft6wQU0BhqG5PRlwgaZJv2VukKKjU4Es) to fine-tune Bloom 8-bit with a 3x NVIDIA A100 instance.
### How to use
This model can be used by adapting Bloom's original implementation. This is an adaptation from [Hivemind's GPT-J 8-bit](https://nbviewer.org/urls/huggingface.co/hivemind/gpt-j-6B-8bit/raw/main/convert-gpt-j.ipynb):
```python
import transformers
import torch
import torch.nn as nn
import torch.nn.functional as F
from bitsandbytes.functional import quantize_blockwise, dequantize_blockwise
from typing import Tuple
from torch.cuda.amp import custom_fwd, custom_bwd
class FrozenBNBLinear(nn.Module):
def __init__(self, weight, absmax, code, bias=None):
assert isinstance(bias, nn.Parameter) or bias is None
super().__init__()
self.out_features, self.in_features = weight.shape
self.register_buffer("weight", weight.requires_grad_(False))
self.register_buffer("absmax", absmax.requires_grad_(False))
self.register_buffer("code", code.requires_grad_(False))
self.adapter = None
self.bias = bias
def forward(self, input):
output = DequantizeAndLinear.apply(input, self.weight, self.absmax, self.code, self.bias)
if self.adapter:
output += self.adapter(input)
return output
@classmethod
def from_linear(cls, linear: nn.Linear) -> "FrozenBNBLinear":
weights_int8, state = quantize_blockise_lowmemory(linear.weight)
return cls(weights_int8, *state, linear.bias)
def __repr__(self):
return f"{self.__class__.__name__}({self.in_features}, {self.out_features})"
class DequantizeAndLinear(torch.autograd.Function):
@staticmethod
@custom_fwd
def forward(ctx, input: torch.Tensor, weights_quantized: torch.ByteTensor,
absmax: torch.FloatTensor, code: torch.FloatTensor, bias: torch.FloatTensor):
weights_deq = dequantize_blockwise(weights_quantized, absmax=absmax, code=code)
ctx.save_for_backward(input, weights_quantized, absmax, code)
ctx._has_bias = bias is not None
return F.linear(input, weights_deq, bias)
@staticmethod
@custom_bwd
def backward(ctx, grad_output: torch.Tensor):
assert not ctx.needs_input_grad[1] and not ctx.needs_input_grad[2] and not ctx.needs_input_grad[3]
input, weights_quantized, absmax, code = ctx.saved_tensors
# grad_output: [*batch, out_features]
weights_deq = dequantize_blockwise(weights_quantized, absmax=absmax, code=code)
grad_input = grad_output @ weights_deq
grad_bias = grad_output.flatten(0, -2).sum(dim=0) if ctx._has_bias else None
return grad_input, None, None, None, grad_bias
class FrozenBNBEmbedding(nn.Module):
def __init__(self, weight, absmax, code):
super().__init__()
self.num_embeddings, self.embedding_dim = weight.shape
self.register_buffer("weight", weight.requires_grad_(False))
self.register_buffer("absmax", absmax.requires_grad_(False))
self.register_buffer("code", code.requires_grad_(False))
self.adapter = None
def forward(self, input, **kwargs):
with torch.no_grad():
# note: both quantized weights and input indices are *not* differentiable
weight_deq = dequantize_blockwise(self.weight, absmax=self.absmax, code=self.code)
output = F.embedding(input, weight_deq, **kwargs)
if self.adapter:
output += self.adapter(input)
return output
@classmethod
def from_embedding(cls, embedding: nn.Embedding) -> "FrozenBNBEmbedding":
weights_int8, state = quantize_blockise_lowmemory(embedding.weight)
return cls(weights_int8, *state)
def __repr__(self):
return f"{self.__class__.__name__}({self.num_embeddings}, {self.embedding_dim})"
def quantize_blockise_lowmemory(matrix: torch.Tensor, chunk_size: int = 2 ** 20):
assert chunk_size % 4096 == 0
code = None
chunks = []
absmaxes = []
flat_tensor = matrix.view(-1)
for i in range((matrix.numel() - 1) // chunk_size + 1):
input_chunk = flat_tensor[i * chunk_size: (i + 1) * chunk_size].clone()
quantized_chunk, (absmax_chunk, code) = quantize_blockwise(input_chunk, code=code)
chunks.append(quantized_chunk)
absmaxes.append(absmax_chunk)
matrix_i8 = torch.cat(chunks).reshape_as(matrix)
absmax = torch.cat(absmaxes)
return matrix_i8, (absmax, code)
def convert_to_int8(model):
"""Convert linear and embedding modules to 8-bit with optional adapters"""
for module in list(model.modules()):
for name, child in module.named_children():
if isinstance(child, nn.Linear):
print(name, child)
setattr(
module,
name,
FrozenBNBLinear(
weight=torch.zeros(child.out_features, child.in_features, dtype=torch.uint8),
absmax=torch.zeros((child.weight.numel() - 1) // 4096 + 1),
code=torch.zeros(256),
bias=child.bias,
),
)
elif isinstance(child, nn.Embedding):
setattr(
module,
name,
FrozenBNBEmbedding(
weight=torch.zeros(child.num_embeddings, child.embedding_dim, dtype=torch.uint8),
absmax=torch.zeros((child.weight.numel() - 1) // 4096 + 1),
code=torch.zeros(256),
)
)
class BloomBlock(transformers.models.bloom.modeling_bloom.BloomBlock):
def __init__(self, config, layer_number=None):
super().__init__(config, layer_number)
convert_to_int8(self.self_attention)
convert_to_int8(self.mlp)
class BloomModel(transformers.models.bloom.modeling_bloom.BloomModel):
def __init__(self, config):
super().__init__(config)
convert_to_int8(self)
class BloomForCausalLM(transformers.models.bloom.modeling_bloom.BloomForCausalLM):
def __init__(self, config):
super().__init__(config)
convert_to_int8(self)
transformers.models.bloom.modeling_bloom.BloomBlock = BloomBlock
model = BloomForCausalLM.from_pretrained('joaoalvarenga/bloom-8bit', low_cpu_mem_usage=True)
tokenizer = BloomTokenizerFast.from_pretrained('joaoalvarenga/bloom-8bit')
prompt = tokenizer("Given a table named salaries and columns id, created_at, salary, age. Creates a SQL to answer What is the average salary for 22 years old:", return_tensors='pt')
out = model.generate(**prompt, min_length=10, do_sample=True)
tokenizer.decode(out[0])
```
|
benjamin/gpt2-wechsel-sundanese
|
benjamin
| 2022-07-13T23:45:18Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"su",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-05-05T13:32:26Z |
---
language: su
license: mit
---
# gpt2-wechsel-sundanese
Model trained with WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models.
See the code here: https://github.com/CPJKU/wechsel
And the paper here: https://aclanthology.org/2022.naacl-main.293/
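## Usage
A minimal sketch for trying the checkpoint with the `transformers` text-generation pipeline; the Sundanese prompt is only an illustration and is not taken from the training data:
```python
from transformers import pipeline

# Standard GPT-2 causal LM, so the text-generation pipeline works out of the box.
generator = pipeline("text-generation", model="benjamin/gpt2-wechsel-sundanese")
print(generator("Wilujeng énjing,", max_new_tokens=30)[0]["generated_text"])
```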
## Performance
| Model | PPL |
|---|---|
| `gpt2-wechsel-sundanese` | **111.72** |
| `gpt2` (retrained from scratch) | 149.46 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-scottish-gaelic` | **16.43** |
| `gpt2` (retrained from scratch) | 19.53 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-uyghur` | **34.33** |
| `gpt2` (retrained from scratch) | 42.82 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-malagasy` | **14.01** |
| `gpt2` (retrained from scratch) | 15.93 |
See our paper for details.
## Citation
Please cite WECHSEL as
```
@inproceedings{minixhofer-etal-2022-wechsel,
title = "{WECHSEL}: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models",
author = "Minixhofer, Benjamin and
Paischer, Fabian and
Rekabsaz, Navid",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.293",
pages = "3992--4006",
abstract = "Large pretrained language models (LMs) have become the central building block of many NLP applications. Training these models requires ever more computational resources and most of the existing models are trained on English text only. It is exceedingly expensive to train these models in other languages. To alleviate this problem, we introduce a novel method {--} called WECHSEL {--} to efficiently and effectively transfer pretrained LMs to new languages. WECHSEL can be applied to any model which uses subword-based tokenization and learns an embedding for each subword. The tokenizer of the source model (in English) is replaced with a tokenizer in the target language and token embeddings are initialized such that they are semantically similar to the English tokens by utilizing multilingual static word embeddings covering English and the target language. We use WECHSEL to transfer the English RoBERTa and GPT-2 models to four languages (French, German, Chinese and Swahili). We also study the benefits of our method on very low-resource languages. WECHSEL improves over proposed methods for cross-lingual parameter transfer and outperforms models of comparable size trained from scratch with up to 64x less training effort. Our method makes training large language models for new languages more accessible and less damaging to the environment. We make our code and models publicly available.",
}
```
|
benjamin/roberta-base-wechsel-chinese
|
benjamin
| 2022-07-13T23:44:31Z | 6 | 2 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"zh",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language: zh
license: mit
---
# roberta-base-wechsel-chinese
Model trained with WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models.
See the code here: https://github.com/CPJKU/wechsel
And the paper here: https://aclanthology.org/2022.naacl-main.293/
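## Usage
A minimal sketch for querying the checkpoint with the fill-mask pipeline; the Chinese example sentence is only an illustration:
```python
from transformers import pipeline

# RoBERTa-style masked LM, so the fill-mask pipeline applies directly.
unmasker = pipeline("fill-mask", model="benjamin/roberta-base-wechsel-chinese")
mask = unmasker.tokenizer.mask_token  # use the model's own mask token string
for pred in unmasker(f"今天天气很{mask}。"):
    print(pred["token_str"], round(pred["score"], 3))
```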
## Performance
### RoBERTa
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-french` | **82.43** | **90.88** | **86.65** |
| `camembert-base` | 80.88 | 90.26 | 85.57 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-german` | **81.79** | **89.72** | **85.76** |
| `deepset/gbert-base` | 78.64 | 89.46 | 84.05 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-chinese` | **78.32** | 80.55 | **79.44** |
| `bert-base-chinese` | 76.55 | **82.05** | 79.30 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-swahili` | **75.05** | **87.39** | **81.22** |
| `xlm-roberta-base` | 69.18 | 87.37 | 78.28 |
### GPT2
| Model | PPL |
|---|---|
| `gpt2-wechsel-french` | **19.71** |
| `gpt2` (retrained from scratch) | 20.47 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-german` | **26.8** |
| `gpt2` (retrained from scratch) | 27.63 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-chinese` | **51.97** |
| `gpt2` (retrained from scratch) | 52.98 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-swahili` | **10.14** |
| `gpt2` (retrained from scratch) | 10.58 |
See our paper for details.
## Citation
Please cite WECHSEL as
```
@inproceedings{minixhofer-etal-2022-wechsel,
title = "{WECHSEL}: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models",
author = "Minixhofer, Benjamin and
Paischer, Fabian and
Rekabsaz, Navid",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.293",
pages = "3992--4006",
abstract = "Large pretrained language models (LMs) have become the central building block of many NLP applications. Training these models requires ever more computational resources and most of the existing models are trained on English text only. It is exceedingly expensive to train these models in other languages. To alleviate this problem, we introduce a novel method {--} called WECHSEL {--} to efficiently and effectively transfer pretrained LMs to new languages. WECHSEL can be applied to any model which uses subword-based tokenization and learns an embedding for each subword. The tokenizer of the source model (in English) is replaced with a tokenizer in the target language and token embeddings are initialized such that they are semantically similar to the English tokens by utilizing multilingual static word embeddings covering English and the target language. We use WECHSEL to transfer the English RoBERTa and GPT-2 models to four languages (French, German, Chinese and Swahili). We also study the benefits of our method on very low-resource languages. WECHSEL improves over proposed methods for cross-lingual parameter transfer and outperforms models of comparable size trained from scratch with up to 64x less training effort. Our method makes training large language models for new languages more accessible and less damaging to the environment. We make our code and models publicly available.",
}
```
|
benjamin/gpt2-wechsel-german
|
benjamin
| 2022-07-13T23:44:00Z | 48 | 4 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"de",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: de
license: mit
---
# gpt2-wechsel-german
Model trained with WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models.
See the code here: https://github.com/CPJKU/wechsel
And the paper here: https://aclanthology.org/2022.naacl-main.293/
## Performance
### RoBERTa
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-french` | **82.43** | **90.88** | **86.65** |
| `camembert-base` | 80.88 | 90.26 | 85.57 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-german` | **81.79** | **89.72** | **85.76** |
| `deepset/gbert-base` | 78.64 | 89.46 | 84.05 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-chinese` | **78.32** | 80.55 | **79.44** |
| `bert-base-chinese` | 76.55 | **82.05** | 79.30 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-swahili` | **75.05** | **87.39** | **81.22** |
| `xlm-roberta-base` | 69.18 | 87.37 | 78.28 |
### GPT2
| Model | PPL |
|---|---|
| `gpt2-wechsel-french` | **19.71** |
| `gpt2` (retrained from scratch) | 20.47 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-german` | **26.8** |
| `gpt2` (retrained from scratch) | 27.63 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-chinese` | **51.97** |
| `gpt2` (retrained from scratch) | 52.98 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-swahili` | **10.14** |
| `gpt2` (retrained from scratch) | 10.58 |
See our paper for details.
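The perplexity figures above come from the paper's evaluation. The snippet below is only a rough sketch of how perplexity could be computed for this checkpoint on your own German text; the paper's exact corpus and setup may differ:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "benjamin/gpt2-wechsel-german"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

text = "Der schnelle braune Fuchs springt über den faulen Hund."  # illustrative sentence
enc = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    # passing labels=input_ids makes the model return the mean token-level cross-entropy as .loss
    loss = model(**enc, labels=enc["input_ids"]).loss

print("perplexity:", torch.exp(loss).item())
```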
## Citation
Please cite WECHSEL as
```
@inproceedings{minixhofer-etal-2022-wechsel,
title = "{WECHSEL}: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models",
author = "Minixhofer, Benjamin and
Paischer, Fabian and
Rekabsaz, Navid",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.293",
pages = "3992--4006",
abstract = "Large pretrained language models (LMs) have become the central building block of many NLP applications. Training these models requires ever more computational resources and most of the existing models are trained on English text only. It is exceedingly expensive to train these models in other languages. To alleviate this problem, we introduce a novel method {--} called WECHSEL {--} to efficiently and effectively transfer pretrained LMs to new languages. WECHSEL can be applied to any model which uses subword-based tokenization and learns an embedding for each subword. The tokenizer of the source model (in English) is replaced with a tokenizer in the target language and token embeddings are initialized such that they are semantically similar to the English tokens by utilizing multilingual static word embeddings covering English and the target language. We use WECHSEL to transfer the English RoBERTa and GPT-2 models to four languages (French, German, Chinese and Swahili). We also study the benefits of our method on very low-resource languages. WECHSEL improves over proposed methods for cross-lingual parameter transfer and outperforms models of comparable size trained from scratch with up to 64x less training effort. Our method makes training large language models for new languages more accessible and less damaging to the environment. We make our code and models publicly available.",
}
```
|
rajistics/testpyramidsrnd
|
rajistics
| 2022-07-13T22:19:35Z | 4 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2022-07-13T22:19:29Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: rajistics/testpyramidsrnd
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
noorkgill/Tone
|
noorkgill
| 2022-07-13T22:08:03Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-07-13T22:02:05Z |
Promoting empathy among Twitter users in order to reduce offensive content that harms their wellbeing.
|
mackseem/distilbert-base-uncased-finetuned-ner
|
mackseem
| 2022-07-13T21:52:51Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9244616234124793
- name: Recall
type: recall
value: 0.9364582168027744
- name: F1
type: f1
value: 0.9304212515282871
- name: Accuracy
type: accuracy
value: 0.9833987322668276
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0623
- Precision: 0.9245
- Recall: 0.9365
- F1: 0.9304
- Accuracy: 0.9834
## Model description
More information needed
## Intended uses & limitations
More information needed
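The card does not include a usage snippet; below is a minimal inference sketch for the fine-tuned checkpoint (the example sentence is arbitrary):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="mackseem/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Hugging Face is based in New York City."))
```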
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2377 | 1.0 | 878 | 0.0711 | 0.9176 | 0.9254 | 0.9215 | 0.9813 |
| 0.0514 | 2.0 | 1756 | 0.0637 | 0.9213 | 0.9346 | 0.9279 | 0.9831 |
| 0.031 | 3.0 | 2634 | 0.0623 | 0.9245 | 0.9365 | 0.9304 | 0.9834 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
Evelyn18/distilbert-base-uncased-prueba2
|
Evelyn18
| 2022-07-13T21:14:13Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:becasv2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-07-13T21:05:45Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- becasv2
model-index:
- name: distilbert-base-uncased-prueba2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-prueba2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the becasv2 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6356
## Model description
More information needed
## Intended uses & limitations
More information needed
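The card does not include a usage snippet; below is a minimal extractive question-answering sketch. The Spanish question and context are placeholders, not taken from the becasv2 dataset:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Evelyn18/distilbert-base-uncased-prueba2")
result = qa(
    question="¿Quién puede aplicar a la beca?",
    context="La beca está dirigida a estudiantes de último año con un promedio superior a 85.",
)
print(result["answer"], result["score"])
```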
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 9 | 3.9054 |
| No log | 2.0 | 18 | 3.1893 |
| No log | 3.0 | 27 | 2.9748 |
| No log | 4.0 | 36 | 3.1541 |
| No log | 5.0 | 45 | 3.2887 |
| No log | 6.0 | 54 | 3.5055 |
| No log | 7.0 | 63 | 3.5902 |
| No log | 8.0 | 72 | 3.6356 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
domenicrosati/SPECTER-finetuned-DAGPap22
|
domenicrosati
| 2022-07-13T18:53:10Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-13T17:26:06Z |
---
license: apache-2.0
tags:
- text-classification
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: SPECTER-finetuned-DAGPap22
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SPECTER-finetuned-DAGPap22
This model is a fine-tuned version of [allenai/specter](https://huggingface.co/allenai/specter) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0023
- Accuracy: 0.9993
- F1: 0.9995
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 20
- mixed_precision_training: Native AMP
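A rough `Trainer` sketch that mirrors the hyperparameters above. The original training script and dataset are not published with this card, so the tiny dummy dataset below exists only to make the sketch runnable; replace it with the real data:
```python
import torch
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model = AutoModelForSequenceClassification.from_pretrained("allenai/specter", num_labels=2)
tokenizer = AutoTokenizer.from_pretrained("allenai/specter")

# Placeholder data: two made-up abstracts with binary labels.
ds = Dataset.from_dict({
    "text": ["a human-written abstract", "an automatically generated abstract"],
    "label": [0, 1],
})
ds = ds.map(lambda x: tokenizer(x["text"], truncation=True, padding="max_length", max_length=64))

args = TrainingArguments(
    output_dir="specter-finetuned-dagpap22",
    learning_rate=6e-6,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=20,
    warmup_steps=50,
    lr_scheduler_type="linear",
    seed=42,
    fp16=torch.cuda.is_available(),  # "Native AMP" mixed precision, GPU only
)

trainer = Trainer(model=model, args=args, train_dataset=ds, eval_dataset=ds, tokenizer=tokenizer)
trainer.train()
```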
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 0.3422 | 1.0 | 669 | 0.4135 | 0.8914 | 0.9140 |
| 0.1074 | 2.0 | 1338 | 0.1216 | 0.9746 | 0.9811 |
| 0.0329 | 3.0 | 2007 | 0.0064 | 0.9989 | 0.9992 |
| 0.0097 | 4.0 | 2676 | 0.0132 | 0.9972 | 0.9980 |
| 0.0123 | 5.0 | 3345 | 0.0231 | 0.9961 | 0.9971 |
| 0.0114 | 6.0 | 4014 | 0.0080 | 0.9985 | 0.9989 |
| 0.0029 | 7.0 | 4683 | 0.2207 | 0.9727 | 0.9797 |
| 0.0075 | 8.0 | 5352 | 0.0145 | 0.9974 | 0.9981 |
| 0.0098 | 9.0 | 6021 | 0.0047 | 0.9994 | 0.9996 |
| 0.0025 | 10.0 | 6690 | 0.0000 | 1.0 | 1.0 |
| 0.0044 | 11.0 | 7359 | 0.0035 | 0.9993 | 0.9995 |
| 0.0 | 12.0 | 8028 | 0.0027 | 0.9996 | 0.9997 |
| 0.0027 | 13.0 | 8697 | 0.0036 | 0.9993 | 0.9995 |
| 0.0055 | 14.0 | 9366 | 0.0017 | 0.9998 | 0.9999 |
| 0.0 | 15.0 | 10035 | 0.0000 | 1.0 | 1.0 |
| 0.0 | 16.0 | 10704 | 0.0000 | 1.0 | 1.0 |
| 0.0022 | 17.0 | 11373 | 0.0111 | 0.9981 | 0.9986 |
| 0.0004 | 18.0 | 12042 | 0.0011 | 0.9994 | 0.9996 |
| 0.0 | 19.0 | 12711 | 0.0020 | 0.9994 | 0.9996 |
| 0.0 | 20.0 | 13380 | 0.0023 | 0.9993 | 0.9995 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
jhonparra18/bert-base-uncased-cv-position-classifier
|
jhonparra18
| 2022-07-13T18:10:30Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-13T17:39:26Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
model-index:
- name: bert-base-uncased-cv-position-classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-cv-position-classifier
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6924
- Accuracy: 0.5780703216130645
- F1: 0.5780703216130645
- Precision: 0.5780703216130645
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------------------------------:|:--------------------------:|:---------------------------------:|
| 2.0336 | 1.14 | 1000 | 1.8856 | 0.5259123479420097 | 0.5259123479420097 | 0.5259123479420097 |
| 1.5348 | 2.28 | 2000 | 1.6924 | 0.5780703216130645 | 0.5780703216130645 | 0.5780703216130645 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.8.1+cu111
- Datasets 1.6.2
- Tokenizers 0.12.1
|
ghadeermobasher/Modifiedbiobert-v1.1-BioRED-CD-128-32-30
|
ghadeermobasher
| 2022-07-13T17:48:37Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-07-13T17:07:02Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: Modifiedbiobert-v1.1-BioRED-CD-128-32-30
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Modifiedbiobert-v1.1-BioRED-CD-128-32-30
This model is a fine-tuned version of [dmis-lab/biobert-v1.1](https://huggingface.co/dmis-lab/biobert-v1.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Precision: 1.0
- Recall: 1.0
- F1: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30.0
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.10.3
|
bothrajat/testpyramidsrnd
|
bothrajat
| 2022-07-13T17:05:25Z | 6 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2022-07-13T15:57:34Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: bothrajat/testpyramidsrnd
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
public-data/AnimeGANv3-portrait-sketch
|
public-data
| 2022-07-13T17:02:13Z | 0 | 2 | null |
[
"onnx",
"region:us"
] | null | 2022-07-13T16:59:59Z |
# AnimeGANv3 portrait sketch
- https://github.com/TachibanaYoshino/AnimeGANv3
- https://docs.google.com/uc?export=download&id=1F6BSJY3HibzQ08kE_al6pkXd1evxS40s
|
birgermoell/q-Taxi-v3
|
birgermoell
| 2022-07-13T16:49:02Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-13T16:48:54Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="birgermoell/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
IlyaGusev/rugpt3medium_sum_gazeta
|
IlyaGusev
| 2022-07-13T15:36:49Z | 565 | 4 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"causal-lm",
"summarization",
"ru",
"dataset:IlyaGusev/gazeta",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
summarization
| 2022-03-02T23:29:04Z |
---
language:
- ru
tags:
- causal-lm
- summarization
datasets:
- IlyaGusev/gazeta
license:
- apache-2.0
inference: false
widget:
- text: "Высота башни составляет 324 метра (1063 фута), примерно такая же высота, как у 81-этажного здания, и самое высокое сооружение в Париже. Его основание квадратно, размером 125 метров (410 футов) с любой стороны. Во время строительства Эйфелева башня превзошла монумент Вашингтона, став самым высоким искусственным сооружением в мире, и этот титул она удерживала в течение 41 года до завершения строительство здания Крайслер в Нью-Йорке в 1930 году. Это первое сооружение которое достигло высоты 300 метров. Из-за добавления вещательной антенны на вершине башни в 1957 году она сейчас выше здания Крайслер на 5,2 метра (17 футов). За исключением передатчиков, Эйфелева башня является второй самой высокой отдельно стоящей структурой во Франции после виадука Мийо.<s>"
example_title: "Википедия"
---
# RuGPT3MediumSumGazeta
## Model description
This is the model for abstractive summarization for Russian based on [rugpt3medium_based_on_gpt2](https://huggingface.co/sberbank-ai/rugpt3medium_based_on_gpt2).
## Intended uses & limitations
#### How to use
Colab: [link](https://colab.research.google.com/drive/1eR-ev0Y5ISWIwGnzYYoHyGMaSIUz8GTN)
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "IlyaGusev/rugpt3medium_sum_gazeta"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
article_text = "..."
text_tokens = tokenizer(
article_text,
max_length=600,
add_special_tokens=False,
padding=False,
truncation=True
)["input_ids"]
input_ids = text_tokens + [tokenizer.sep_token_id]
input_ids = torch.LongTensor([input_ids])
output_ids = model.generate(
input_ids=input_ids,
no_repeat_ngram_size=4
)
summary = tokenizer.decode(output_ids[0], skip_special_tokens=False)
summary = summary.split(tokenizer.sep_token)[1]
summary = summary.split(tokenizer.eos_token)[0]
print(summary)
```
## Training data
- Dataset: [Gazeta](https://huggingface.co/datasets/IlyaGusev/gazeta)
## Training procedure
- Training script: [train.py](https://github.com/IlyaGusev/summarus/blob/master/external/hf_scripts/train.py)
- Config: [gpt_training_config.json](https://github.com/IlyaGusev/summarus/blob/master/external/hf_scripts/configs/gpt_training_config.json)
## Eval results
* Train dataset: **Gazeta v1 train**
* Test dataset: **Gazeta v1 test**
* Source max_length: **600**
* Target max_length: **200**
* no_repeat_ngram_size: **4**
* num_beams: **5**
| Model | R-1-f | R-2-f | R-L-f | chrF | METEOR | BLEU | Avg char length |
|:--------------------------|:------|:------|:------|:-------|:-------|:-----|:-----|
| [mbart_ru_sum_gazeta](https://huggingface.co/IlyaGusev/mbart_ru_sum_gazeta) | **32.4** | 14.3 | 28.0 | 39.7 | **26.4** | 12.1 | 371 |
| [rut5_base_sum_gazeta](https://huggingface.co/IlyaGusev/rut5_base_sum_gazeta) | 32.2 | **14.4** | **28.1** | **39.8** | 25.7 | **12.3** | 330 |
| [rugpt3medium_sum_gazeta](https://huggingface.co/IlyaGusev/rugpt3medium_sum_gazeta) | 26.2 | 7.7 | 21.7 | 33.8 | 18.2 | 4.3 | 244 |
* Train dataset: **Gazeta v1 train**
* Test dataset: **Gazeta v2 test**
* Source max_length: **600**
* Target max_length: **200**
* no_repeat_ngram_size: **4**
* num_beams: **5**
| Model | R-1-f | R-2-f | R-L-f | chrF | METEOR | BLEU | Avg char length |
|:--------------------------|:------|:------|:------|:-------|:-------|:-----|:-----|
| [mbart_ru_sum_gazeta](https://huggingface.co/IlyaGusev/mbart_ru_sum_gazeta) | **28.7** | **11.1** | 24.4 | **37.3** | **22.7** | **9.4** | 373 |
| [rut5_base_sum_gazeta](https://huggingface.co/IlyaGusev/rut5_base_sum_gazeta) | 28.6 | **11.1** | **24.5** | 37.2 | 22.0 | **9.4** | 331 |
| [rugpt3medium_sum_gazeta](https://huggingface.co/IlyaGusev/rugpt3medium_sum_gazeta) | 24.1 | 6.5 | 19.8 | 32.1 | 16.3 | 3.6 | 242 |
Evaluation script: [evaluate.py](https://github.com/IlyaGusev/summarus/blob/master/evaluate.py)
Flags: --language ru --tokenize-after --lower
|
IlyaGusev/rubert_telegram_headlines
|
IlyaGusev
| 2022-07-13T15:36:18Z | 86 | 17 |
transformers
|
[
"transformers",
"pytorch",
"encoder-decoder",
"text2text-generation",
"summarization",
"ru",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-03-02T23:29:04Z |
---
language:
- ru
tags:
- summarization
license: apache-2.0
inference:
parameters:
no_repeat_ngram_size: 4
---
# RuBertTelegramHeadlines
## Model description
Example model for [Headline generation competition](https://competitions.codalab.org/competitions/29905)
Based on [RuBERT](http://docs.deeppavlov.ai/en/master/features/models/bert.html) model
## Intended uses & limitations
#### How to use
```python
from transformers import AutoTokenizer, EncoderDecoderModel
model_name = "IlyaGusev/rubert_telegram_headlines"
tokenizer = AutoTokenizer.from_pretrained(model_name, do_lower_case=False, do_basic_tokenize=False, strip_accents=False)
model = EncoderDecoderModel.from_pretrained(model_name)
article_text = "..."
input_ids = tokenizer(
[article_text],
add_special_tokens=True,
max_length=256,
padding="max_length",
truncation=True,
return_tensors="pt",
)["input_ids"]
output_ids = model.generate(
input_ids=input_ids,
max_length=64,
no_repeat_ngram_size=3,
num_beams=10,
top_p=0.95
)[0]
headline = tokenizer.decode(output_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True)
print(headline)
```
## Training data
- Dataset: [ru_all_split.tar.gz](https://www.dropbox.com/s/ykqk49a8avlmnaf/ru_all_split.tar.gz)
## Training procedure
```python
import random
import torch
from torch.utils.data import Dataset
from tqdm.notebook import tqdm
from transformers import BertTokenizer, EncoderDecoderModel, Trainer, TrainingArguments, logging
def convert_to_tensors(
tokenizer,
text,
max_text_tokens_count,
max_title_tokens_count = None,
title = None
):
inputs = tokenizer(
text,
add_special_tokens=True,
max_length=max_text_tokens_count,
padding="max_length",
truncation=True
)
result = {
"input_ids": torch.tensor(inputs["input_ids"]),
"attention_mask": torch.tensor(inputs["attention_mask"]),
}
if title is not None:
outputs = tokenizer(
title,
add_special_tokens=True,
max_length=max_title_tokens_count,
padding="max_length",
truncation=True
)
decoder_input_ids = torch.tensor(outputs["input_ids"])
decoder_attention_mask = torch.tensor(outputs["attention_mask"])
labels = decoder_input_ids.clone()
labels[decoder_attention_mask == 0] = -100
result.update({
"labels": labels,
"decoder_input_ids": decoder_input_ids,
"decoder_attention_mask": decoder_attention_mask
})
return result
class GetTitleDataset(Dataset):
def __init__(
self,
original_records,
sample_rate,
tokenizer,
max_text_tokens_count,
max_title_tokens_count
):
self.original_records = original_records
self.sample_rate = sample_rate
self.tokenizer = tokenizer
self.max_text_tokens_count = max_text_tokens_count
self.max_title_tokens_count = max_title_tokens_count
self.records = []
for record in tqdm(original_records):
if random.random() > self.sample_rate:
continue
tensors = convert_to_tensors(
tokenizer=tokenizer,
title=record["title"],
text=record["text"],
max_title_tokens_count=self.max_title_tokens_count,
max_text_tokens_count=self.max_text_tokens_count
)
self.records.append(tensors)
def __len__(self):
return len(self.records)
def __getitem__(self, index):
return self.records[index]
def train(
train_records,
val_records,
pretrained_model_path,
train_sample_rate=1.0,
val_sample_rate=1.0,
output_model_path="models",
checkpoint=None,
max_text_tokens_count=256,
max_title_tokens_count=64,
batch_size=8,
logging_steps=1000,
eval_steps=10000,
save_steps=10000,
learning_rate=0.00003,
warmup_steps=2000,
num_train_epochs=3
):
logging.set_verbosity_info()
tokenizer = BertTokenizer.from_pretrained(
pretrained_model_path,
do_lower_case=False,
do_basic_tokenize=False,
strip_accents=False
)
train_dataset = GetTitleDataset(
train_records,
train_sample_rate,
tokenizer,
max_text_tokens_count=max_text_tokens_count,
max_title_tokens_count=max_title_tokens_count
)
val_dataset = GetTitleDataset(
val_records,
val_sample_rate,
tokenizer,
max_text_tokens_count=max_text_tokens_count,
max_title_tokens_count=max_title_tokens_count
)
model = EncoderDecoderModel.from_encoder_decoder_pretrained(pretrained_model_path, pretrained_model_path)
training_args = TrainingArguments(
output_dir=output_model_path,
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
do_train=True,
do_eval=True,
overwrite_output_dir=False,
logging_steps=logging_steps,
eval_steps=eval_steps,
evaluation_strategy="steps",
save_steps=save_steps,
learning_rate=learning_rate,
warmup_steps=warmup_steps,
num_train_epochs=num_train_epochs,
max_steps=-1,
save_total_limit=1,
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_dataset,
eval_dataset=val_dataset
)
trainer.train(checkpoint)
model.save_pretrained(output_model_path)
```
|
IlyaGusev/xlm_roberta_large_headline_cause_full
|
IlyaGusev
| 2022-07-13T15:35:52Z | 154 | 3 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"xlm-roberta-large",
"ru",
"en",
"dataset:IlyaGusev/headline_cause",
"arxiv:2108.12626",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
language:
- ru
- en
tags:
- xlm-roberta-large
datasets:
- IlyaGusev/headline_cause
license: apache-2.0
widget:
- text: "Песков опроверг свой перевод на удаленку</s>Дмитрий Песков перешел на удаленку"
---
# XLM-RoBERTa HeadlineCause Full
## Model description
This model was trained to predict the presence of causal relations between two headlines. This model is for the Full task with 7 possible labels: titles are almost the same, A causes B, B causes A, A refutes B, B refutes A, A linked with B in another way, A is not linked to B. English and Russian languages are supported.
You can use hosted inference API to infer a label for a headline pair. To do this, you shoud seperate headlines with ```</s>``` token.
For example:
```
Песков опроверг свой перевод на удаленку</s>Дмитрий Песков перешел на удаленку
```
## Intended uses & limitations
#### How to use
```python
from tqdm.notebook import tqdm
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
def get_batch(data, batch_size):
start_index = 0
while start_index < len(data):
end_index = start_index + batch_size
batch = data[start_index:end_index]
yield batch
start_index = end_index
def pipe_predict(data, pipe, batch_size=64):
raw_preds = []
for batch in tqdm(get_batch(data, batch_size)):
raw_preds += pipe(batch)
return raw_preds
MODEL_NAME = TOKENIZER_NAME = "IlyaGusev/xlm_roberta_large_headline_cause_full"
tokenizer = AutoTokenizer.from_pretrained(TOKENIZER_NAME, do_lower_case=False)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()
pipe = pipeline("text-classification", model=model, tokenizer=tokenizer, framework="pt", return_all_scores=True)
texts = [
(
"Judge issues order to allow indoor worship in NC churches",
"Some local churches resume indoor services after judge lifted NC governor’s restriction"
),
(
"Gov. Kevin Stitt defends $2 million purchase of malaria drug touted by Trump",
"Oklahoma spent $2 million on malaria drug touted by Trump"
),
(
"Песков опроверг свой перевод на удаленку",
"Дмитрий Песков перешел на удаленку"
)
]
pipe_predict(texts, pipe)
```
#### Limitations and bias
The models are intended to be used on news headlines. No other limitations are known.
## Training data
* HuggingFace dataset: [IlyaGusev/headline_cause](https://huggingface.co/datasets/IlyaGusev/headline_cause)
* GitHub: [IlyaGusev/HeadlineCause](https://github.com/IlyaGusev/HeadlineCause)
## Training procedure
* Notebook: [HeadlineCause](https://colab.research.google.com/drive/1NAnD0OJ0TnYCJRsHpYUyYkjr_yi8ObcA)
* Stand-alone script: [train.py](https://github.com/IlyaGusev/HeadlineCause/blob/main/headline_cause/train.py)
## Eval results
Evaluation results can be found in the [arxiv paper](https://arxiv.org/pdf/2108.12626.pdf).
### BibTeX entry and citation info
```bibtex
@misc{gusev2021headlinecause,
title={HeadlineCause: A Dataset of News Headlines for Detecting Causalities},
author={Ilya Gusev and Alexey Tikhonov},
year={2021},
eprint={2108.12626},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
IlyaGusev/rubertconv_toxic_clf
|
IlyaGusev
| 2022-07-13T15:34:11Z | 14,240 | 13 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"ru",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
language:
- ru
tags:
- text-classification
license: apache-2.0
---
# RuBERTConv Toxic Classifier
## Model description
Based on [rubert-base-cased-conversational](https://huggingface.co/DeepPavlov/rubert-base-cased-conversational) model
## Intended uses & limitations
#### How to use
Colab: [link](https://colab.research.google.com/drive/1veKO9hke7myxKigZtZho_F-UM2fD9kp8)
```python
from transformers import pipeline
model_name = "IlyaGusev/rubertconv_toxic_clf"
pipe = pipeline("text-classification", model=model_name, tokenizer=model_name, framework="pt")
text = "Ты придурок из интернета"
pipe([text])
```
## Training data
Datasets:
- [2ch]( https://www.kaggle.com/blackmoon/russian-language-toxic-comments)
- [Odnoklassniki](https://www.kaggle.com/alexandersemiletov/toxic-russian-comments)
- [Toloka Persona Chat Rus](https://toloka.ai/ru/datasets)
- [Koziev's Conversations](https://github.com/Koziev/NLP_Datasets/blob/master/Conversations/Data) with [toxic words vocabulary](https://www.dropbox.com/s/ou6lx03b10yhrfl/bad_vocab.txt.tar.gz)
Augmentations:
- ё -> е
- Remove or add "?" or "!"
- Fix CAPS
- Concatenate toxic and non-toxic texts
- Concatenate two non-toxic texts
- Add toxic words from vocabulary
- Add typos
- Mask toxic words with "*", "@", "$"
## Training procedure
TBA
|
IlyaGusev/rubertconv_toxic_editor
|
IlyaGusev
| 2022-07-13T15:33:55Z | 157 | 13 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"ru",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:04Z |
---
language:
- ru
tags:
- token-classification
license: apache-2.0
widget:
- text: Ёпта, меня зовут придурок и я живу в жопе
---
# RuBERTConv Toxic Editor
## Model description
Tagging model for detoxification based on [rubert-base-cased-conversational](https://huggingface.co/DeepPavlov/rubert-base-cased-conversational).
4 possible classes:
- Equal = save tokens
- Replace = replace tokens with mask
- Delete = remove tokens
- Insert = insert mask before tokens
Use it together with the [mask filler](https://huggingface.co/IlyaGusev/sber_rut5_filler).
## Intended uses & limitations
#### How to use
Colab: [link](https://colab.research.google.com/drive/1NUSO1QGlDgD-IWXa2SpeND089eVxrCJW)
```python
import torch
from transformers import AutoTokenizer, pipeline
tagger_model_name = "IlyaGusev/rubertconv_toxic_editor"
device = "cuda" if torch.cuda.is_available() else "cpu"
device_num = 0 if device == "cuda" else -1
tagger_pipe = pipeline(
"token-classification",
model=tagger_model_name,
tokenizer=tagger_model_name,
framework="pt",
device=device_num,
aggregation_strategy="max"
)
text = "..."
tagger_predictions = tagger_pipe([text], batch_size=1)
sample_predictions = tagger_predictions[0]
print(sample_predictions)
```
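The pipeline only returns tagged spans. The sketch below shows one way the four classes could be applied to produce an edited string; it continues the snippet above (`text`, `sample_predictions`). The `<mask>` placeholder is an assumption, the exact label strings should be checked against the model's `config.json`, and the resulting masks are meant to be filled by the companion mask filler:
```python
def apply_tags(text, predictions, mask_token="<mask>"):
    # predictions: aggregated pipeline output with "entity_group", "start", "end" character offsets
    edited, cursor = [], 0
    for pred in sorted(predictions, key=lambda p: p["start"]):
        tag, start, end = pred["entity_group"], pred["start"], pred["end"]
        edited.append(text[cursor:start])  # keep any text between tagged spans
        if tag == "Replace":
            edited.append(mask_token)  # replace the span with a mask
        elif tag == "Delete":
            pass  # drop the span entirely
        elif tag == "Insert":
            edited.append(mask_token + " " + text[start:end])  # mask inserted before the span
        else:  # "Equal" and anything unrecognised: keep the span unchanged
            edited.append(text[start:end])
        cursor = end
    edited.append(text[cursor:])
    return "".join(edited)

print(apply_tags(text, sample_predictions))
```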
## Training data
- Dataset: [russe_detox_2022](https://github.com/skoltech-nlp/russe_detox_2022/tree/main/data)
## Training procedure
- Parallel corpus convertion: [compute_tags.py](https://github.com/IlyaGusev/rudetox/blob/main/rudetox/marker/compute_tags.py)
- Training script: [train.py](https://github.com/IlyaGusev/rudetox/blob/main/rudetox/marker/train.py)
- Pipeline step: [dvc.yaml, train_marker](https://github.com/IlyaGusev/rudetox/blob/main/dvc.yaml#L367)
## Eval results
TBA
|
jpalojarvi/finetuning-sentiment-model-3000-samples
|
jpalojarvi
| 2022-07-13T14:48:18Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-13T14:14:45Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.86
- name: F1
type: f1
value: 0.8590604026845637
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3239
- Accuracy: 0.86
- F1: 0.8591
## Model description
More information needed
## Intended uses & limitations
More information needed
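The card does not include a usage snippet; below is a minimal inference sketch (the review text is just an illustration):
```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="jpalojarvi/finetuning-sentiment-model-3000-samples",
)
print(clf("A pleasant surprise: great pacing and a clever ending."))
```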
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
nawta/wav2vec2-onomatopoeia-finetune_smalldata_ESC50pretrained_5
|
nawta
| 2022-07-13T14:43:29Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-13T14:30:32Z |
---
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-onomatopoeia-finetune_smalldata_ESC50pretrained_5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-onomatopoeia-finetune_smalldata_ESC50pretrained_5
This model is a fine-tuned version of [/root/workspace/wav2vec2-pretrained_with_ESC50_10000epochs_32batch_2022-07-09_22-16-46/pytorch_model.bin](https://huggingface.co//root/workspace/wav2vec2-pretrained_with_ESC50_10000epochs_32batch_2022-07-09_22-16-46/pytorch_model.bin) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
bothrajat/q-FrozenLake-v1-8x8-Slippery
|
bothrajat
| 2022-07-13T14:07:21Z | 0 | 0 | null |
[
"FrozenLake-v1-8x8",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-13T10:03:29Z |
---
tags:
- FrozenLake-v1-8x8
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-8x8-Slippery
results:
- metrics:
- type: mean_reward
value: 0.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-8x8
type: FrozenLake-v1-8x8
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="bothrajat/q-FrozenLake-v1-8x8-Slippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
nawta/wav2vec2-onomatopoeia-finetune_smalldata_ESC50pretrained_3
|
nawta
| 2022-07-13T14:03:36Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-13T11:47:57Z |
---
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-onomatopoeia-finetune_smalldata_ESC50pretrained_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-onomatopoeia-finetune_smalldata_ESC50pretrained_3
This model is a fine-tuned version of [/root/workspace/wav2vec2-pretrained_with_ESC50_10000epochs_32batch_2022-07-09_22-16-46/pytorch_model.bin](https://huggingface.co//root/workspace/wav2vec2-pretrained_with_ESC50_10000epochs_32batch_2022-07-09_22-16-46/pytorch_model.bin) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5350
- Cer: 1.2730
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.4243 | 4.67 | 500 | 2.6901 | 1.1259 |
| 2.4282 | 9.35 | 1000 | 2.7495 | 1.1563 |
| 2.3377 | 14.02 | 1500 | 2.2475 | 0.9617 |
| 2.2434 | 18.69 | 2000 | 2.2765 | 1.1908 |
| 2.2731 | 23.36 | 2500 | 2.2574 | 1.1669 |
| 2.3436 | 28.04 | 3000 | 2.5350 | 1.2730 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
johntang/finetuning-sentiment-model-3000-samples
|
johntang
| 2022-07-13T14:02:11Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-17T18:54:34Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8766666666666667
- name: F1
type: f1
value: 0.8786885245901639
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3426
- Accuracy: 0.8767
- F1: 0.8787
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0
- Datasets 2.3.2
- Tokenizers 0.12.1
|
yuekai/icefall-asr-aishell2-pruned-transducer-stateless5-B-2022-07-12
|
yuekai
| 2022-07-13T13:51:59Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2022-07-12T01:54:35Z |
---
license: apache-2.0
---
### How to clone this repo
```
sudo apt-get install git-lfs
git clone https://huggingface.co/yuekai/icefall-asr-aishell2-pruned-transducer-stateless5-B-2022-07-12
cd icefall-asr-aishell2-pruned-transducer-stateless5-B-2022-07-12
git lfs pull
```
|
yuekai/icefall-asr-aishell2-pruned-transducer-stateless5-A-2022-07-12
|
yuekai
| 2022-07-13T13:49:43Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2022-07-13T02:19:09Z |
---
license: apache-2.0
---
### How to clone this repo
```
sudo apt-get install git-lfs
git clone https://huggingface.co/yuekai/icefall-asr-aishell2-pruned-transducer-stateless5-A-2022-07-12
cd icefall-asr-aishell2-pruned-transducer-stateless5-A-2022-07-12
git lfs pull
```
|
csukuangfj/icefall-asr-gigaspeech-pruned-transducer-stateless2-bak
|
csukuangfj
| 2022-07-13T13:33:22Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-07-13T13:30:25Z |
# Introduction
torchscript models for https://huggingface.co/wgb14/icefall-asr-gigaspeech-pruned-transducer-stateless2
See also
https://github.com/k2-fsa/icefall/pull/364
and
https://github.com/k2-fsa/icefall/pull/361
|
fxmarty/20220713-h13m33s02_example_conll2003
|
fxmarty
| 2022-07-13T13:33:09Z | 0 | 0 | null |
[
"tensorboard",
"distilbert",
"token-classification",
"dataset:conll2003",
"region:us"
] |
token-classification
| 2022-07-13T13:33:02Z |
---
pipeline_tag: token-classification
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
tags:
- distilbert
---
**task**: `token-classification`
**Backend:** `sagemaker-training`
**Backend args:** `{'instance_type': 'ml.g4dn.2xlarge', 'supported_instructions': None}`
**Number of evaluation samples:** `All dataset`
Fixed parameters:
* **model_name_or_path**: `elastic/distilbert-base-uncased-finetuned-conll03-english`
* **dataset**:
* **path**: `conll2003`
* **eval_split**: `validation`
* **data_keys**: `{'primary': 'tokens'}`
* **ref_keys**: `['ner_tags']`
* **calibration_split**: `train`
* **quantization_approach**: `static`
* **operators_to_quantize**: `['Add', 'MatMul']`
* **per_channel**: `False`
* **calibration**:
* **method**: `minmax`
* **num_calibration_samples**: `100`
* **framework**: `onnxruntime`
* **framework_args**:
* **opset**: `11`
* **optimization_level**: `1`
* **aware_training**: `False`
Benchmarked parameters:
* **node_exclusion**: `[]`, `['layernorm', 'gelu', 'residual', 'gather', 'softmax']`
# Evaluation
## Non-time metrics
| node_exclusion | | precision (original) | precision (optimized) | | recall (original) | recall (optimized) | | f1 (original) | f1 (optimized) | | accuracy (original) | accuracy (optimized) |
| :------------------------------------------------------: | :-: | :------------------: | :-------------------: | :-: | :---------------: | :----------------: | :-: | :-----------: | :------------: | :-: | :-----------------: | :------------------: |
| `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 0.936 | 0.904 | \| | 0.944 | 0.921 | \| | 0.940 | 0.912 | \| | 0.988 | 0.984 |
| `[]` | \| | 0.936 | 0.065 | \| | 0.944 | 0.243 | \| | 0.940 | 0.103 | \| | 0.988 | 0.357 |
## Time metrics
Time benchmarks were run for 15 seconds per config.
Below, time metrics for batch size = 4, input length = 64.
| node_exclusion | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) |
| :------------------------------------------------------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: |
| `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 103.46 | 53.77 | \| | 9.67 | 18.60 |
| `[]` | \| | 90.62 | 65.86 | \| | 11.07 | 15.20 |
|
hossay/distilbert-base-uncased-finetuned-ner
|
hossay
| 2022-07-13T13:32:51Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-10T00:51:00Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9263064854712186
- name: Recall
type: recall
value: 0.9379125181787672
- name: F1
type: f1
value: 0.9320733740967203
- name: Accuracy
type: accuracy
value: 0.9838117781625813
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0614
- Precision: 0.9263
- Recall: 0.9379
- F1: 0.9321
- Accuracy: 0.9838
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2418 | 1.0 | 878 | 0.0709 | 0.9168 | 0.9242 | 0.9204 | 0.9806 |
| 0.0514 | 2.0 | 1756 | 0.0622 | 0.9175 | 0.9338 | 0.9255 | 0.9826 |
| 0.0306 | 3.0 | 2634 | 0.0614 | 0.9263 | 0.9379 | 0.9321 | 0.9838 |
### Framework versions
- Transformers 4.19.3
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
andreaschandra/distilbert-base-uncased-finetuned-emotion
|
andreaschandra
| 2022-07-13T13:16:46Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-02T07:02:31Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.924
- name: F1
type: f1
value: 0.9240890586429673
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2186
- Accuracy: 0.924
- F1: 0.9241
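A minimal usage sketch, assuming the checkpoint id above and the standard `transformers` pipeline API:

```python
from transformers import pipeline

# Minimal sketch: classify the emotion expressed in a sentence.
classifier = pipeline(
    "text-classification",
    model="andreaschandra/distilbert-base-uncased-finetuned-emotion",
)

print(classifier("I am so happy that this finally works!"))
```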
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8218 | 1.0 | 250 | 0.3165 | 0.9025 | 0.9001 |
| 0.2494 | 2.0 | 500 | 0.2186 | 0.924 | 0.9241 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
xliu128/distilbert-base-uncased-finetuned-emotion
|
xliu128
| 2022-07-13T13:16:07Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-28T13:51:01Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.925
- name: F1
type: f1
value: 0.924714869006902
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2168
- Accuracy: 0.925
- F1: 0.9247
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8435 | 1.0 | 250 | 0.3160 | 0.9065 | 0.9045 |
| 0.2457 | 2.0 | 500 | 0.2168 | 0.925 | 0.9247 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
xichenn/distilbert-base-uncased-finetuned-emotion
|
xichenn
| 2022-07-13T12:59:22Z | 16 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-19T13:16:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.924
- name: F1
type: f1
value: 0.924047984825329
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2294
- Accuracy: 0.924
- F1: 0.9240
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 0.3316 | 0.9025 | 0.8985 |
| No log | 2.0 | 500 | 0.2294 | 0.924 | 0.9240 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
michauhl/distilbert-base-uncased-finetuned-emotion
|
michauhl
| 2022-07-13T12:57:33Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-05T14:17:20Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9405
- name: F1
type: f1
value: 0.9404976918144629
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1891
- Accuracy: 0.9405
- F1: 0.9405
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.1344 | 1.0 | 1000 | 0.1760 | 0.933 | 0.9331 |
| 0.0823 | 2.0 | 2000 | 0.1891 | 0.9405 | 0.9405 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0.post202
- Datasets 2.3.2
- Tokenizers 0.11.0
|
jordyvl/udpos28-sm-first-POS
|
jordyvl
| 2022-07-13T12:53:00Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:udpos28",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-07-13T12:33:01Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- udpos28
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: udpos28-sm-first-POS
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: udpos28
type: udpos28
args: en
metrics:
- name: Precision
type: precision
value: 0.9511089206505667
- name: Recall
type: recall
value: 0.9546093116207286
- name: F1
type: f1
value: 0.9528559014062253
- name: Accuracy
type: accuracy
value: 0.9559133601686793
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# udpos28-sm-first-POS
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the udpos28 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1896
- Precision: 0.9511
- Recall: 0.9546
- F1: 0.9529
- Accuracy: 0.9559
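A minimal inference sketch, assuming the checkpoint id above and the standard `transformers` auto classes (special tokens such as [CLS]/[SEP] also receive a label and can be filtered out):

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Minimal sketch: tag each token of a sentence with its predicted POS label.
checkpoint = "jordyvl/udpos28-sm-first-POS"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForTokenClassification.from_pretrained(checkpoint)

inputs = tokenizer("The quick brown fox jumps over the lazy dog", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predictions = logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, pred in zip(tokens, predictions):
    print(token, model.config.id2label[pred.item()])
```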
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1696 | 1.0 | 4978 | 0.1700 | 0.9440 | 0.9464 | 0.9452 | 0.9472 |
| 0.0973 | 2.0 | 9956 | 0.1705 | 0.9487 | 0.9533 | 0.9510 | 0.9543 |
| 0.0508 | 3.0 | 14934 | 0.1896 | 0.9511 | 0.9546 | 0.9529 | 0.9559 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.2+cu102
- Datasets 2.2.2
- Tokenizers 0.12.1
|
jgriffi/distilbert-base-uncased-finetuned-emotion
|
jgriffi
| 2022-07-13T12:52:36Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-04T10:34:19Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9225
- name: F1
type: f1
value: 0.9224581940083942
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2204
- Accuracy: 0.9225
- F1: 0.9225
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8094 | 1.0 | 250 | 0.3034 | 0.905 | 0.9031 |
| 0.2416 | 2.0 | 500 | 0.2204 | 0.9225 | 0.9225 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Sreevishnu/funnel-transformer-small-imdb
|
Sreevishnu
| 2022-07-13T12:17:17Z | 6 | 1 |
transformers
|
[
"transformers",
"pytorch",
"funnel",
"text-classification",
"sentiment-analysis",
"en",
"dataset:imdb",
"arxiv:2006.03236",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-15T18:48:18Z |
---
license: apache-2.0
language: en
widget:
- text: "In the garden of wonderment that is the body of work by the animation master Hayao Miyazaki, his 2001 gem 'Spirited Away' is at once one of his most accessible films to a Western audience and the one most distinctly rooted in Japanese culture and lore. The tale of Chihiro, a 10 year old girl who resents being moved away from all her friends, only to find herself working in a bathhouse for the gods, doesn't just use its home country's fraught relationship with deities as a backdrop. Never remotely didactic, the film is ultimately a self-fulfilment drama that touches on religious, ethical, ecological and psychological issues.
It's also a fine children's film, the kind that elicits a deepening bond across repeat viewings and the passage of time, mostly because Miyazaki refuses to talk down to younger viewers. That's been a constant in all of his filmography, but it's particularly conspicuous here because the stakes for its young protagonist are bigger than in most of his previous features aimed at younger viewers. It involves conquering fears and finding oneself in situations where safety is not a given.
There are so many moving parts in Spirited Away, from both a thematic and technical point of view, that pinpointing what makes Spirited Away stand out from an already outstanding body of work becomes as challenging as a meeting with Yubaba. But I think it comes down to an ability to deal with heady, complex subject matter from a young girl's perspective without diluting or lessening its resonance. Miyazaki has made a loopy, demanding work of art that asks your inner child to come out and play. There are few high-wire acts in all of movie-dom as satisfying as that."
datasets:
- imdb
tags:
- sentiment-analysis
---
# Funnel Transformer small (B4-4-4 with decoder) fine-tuned on IMDB for Sentiment Analysis
These are the model weights for the Funnel Transformer small model fine-tuned on the IMDB dataset for performing Sentiment Analysis with `max_position_embeddings=1024`.
The original English-language model weights are from [funnel-transformer/small](https://huggingface.co/funnel-transformer/small), and the model uses a similar pretraining objective to [ELECTRA](https://huggingface.co/transformers/model_doc/electra.html). It was introduced in [this paper](https://arxiv.org/pdf/2006.03236.pdf) and first released in [this repository](https://github.com/laiguokun/Funnel-Transformer). This model is uncased: it does not make a difference between english and English.
## Fine-tuning Results
| | Accuracy | Precision | Recall | F1 |
|-------------------------------|----------|-----------|----------|----------|
| funnel-transformer-small-imdb | 0.956530 | 0.952286 | 0.961075 | 0.956661 |
## Model description (from [funnel-transformer/small](https://huggingface.co/funnel-transformer/small))
Funnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and the pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.
This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the Funnel Transformer model as inputs.
# How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained(
"Sreevishnu/funnel-transformer-small-imdb",
use_fast=True)
model = AutoModelForSequenceClassification.from_pretrained(
"Sreevishnu/funnel-transformer-small-imdb",
num_labels=2,
max_position_embeddings=1024)
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
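Continuing from the snippet above, a sketch of turning the logits into a sentiment label; the id-to-label mapping used here is an assumption and should be checked against `model.config.id2label`:

```python
import torch

# Sketch: convert the classification logits into a label and a probability.
# The id-to-label mapping below is assumed; confirm it via model.config.id2label.
probs = torch.softmax(output.logits, dim=-1)
predicted_id = probs.argmax(dim=-1).item()
labels = {0: "negative", 1: "positive"}  # assumed mapping
print(labels[predicted_id], probs[0, predicted_id].item())
```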
# Example App
https://lazy-film-reviews-7gif2bz4sa-ew.a.run.app/
Project repo: https://github.com/akshaydevml/lazy-film-reviews
|
facebook/deit-tiny-distilled-patch16-224
|
facebook
| 2022-07-13T11:41:55Z | 2,674 | 4 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"deit",
"image-classification",
"vision",
"dataset:imagenet",
"arxiv:2012.12877",
"arxiv:2006.03677",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- image-classification
- vision
datasets:
- imagenet
---
# Distilled Data-efficient Image Transformer (tiny-sized model)
Distilled data-efficient Image Transformer (DeiT) model pre-trained and fine-tuned on ImageNet-1k (1 million images, 1,000 classes) at resolution 224x224. It was first introduced in the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Touvron et al. and first released in [this repository](https://github.com/facebookresearch/deit). However, the weights were converted from the [timm repository](https://github.com/rwightman/pytorch-image-models) by Ross Wightman.
Disclaimer: The team releasing DeiT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
This model is a distilled Vision Transformer (ViT). It uses a distillation token, besides the class token, to effectively learn from a teacher (CNN) during both pre-training and fine-tuning. The distillation token is learned through backpropagation, by interacting with the class ([CLS]) and patch tokens through the self-attention layers.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=facebook/deit) to look for
fine-tuned versions on a task that interests you.
### How to use
Since this model is a distilled ViT model, you can plug it into DeiTModel, DeiTForImageClassification or DeiTForImageClassificationWithTeacher. Note that the model expects the data to be prepared using DeiTFeatureExtractor. Here we use AutoFeatureExtractor, which will automatically use the appropriate feature extractor given the model name.
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import AutoFeatureExtractor, DeiTForImageClassificationWithTeacher
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = AutoFeatureExtractor.from_pretrained('facebook/deit-tiny-distilled-patch16-224')
model = DeiTForImageClassificationWithTeacher.from_pretrained('facebook/deit-tiny-distilled-patch16-224')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
Currently, both the feature extractor and model support PyTorch. Tensorflow and JAX/FLAX are coming soon.
## Training data
This model was pretrained and fine-tuned with distillation on [ImageNet-1k](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes.
## Training procedure
### Preprocessing
The exact details of preprocessing of images during training/validation can be found [here](https://github.com/facebookresearch/deit/blob/ab5715372db8c6cad5740714b2216d55aeae052e/datasets.py#L78).
At inference time, images are resized/rescaled to the same resolution (256x256), center-cropped at 224x224 and normalized across the RGB channels with the ImageNet mean and standard deviation.
### Pretraining
The model was trained on a single 8-GPU node for 3 days. Training resolution is 224. For all hyperparameters (such as batch size and learning rate) we refer to table 9 of the original paper.
## Evaluation results
| Model | ImageNet top-1 accuracy | ImageNet top-5 accuracy | # params | URL |
|---------------------------------------|-------------------------|-------------------------|----------|------------------------------------------------------------------|
| DeiT-tiny | 72.2 | 91.1 | 5M | https://huggingface.co/facebook/deit-tiny-patch16-224 |
| DeiT-small | 79.9 | 95.0 | 22M | https://huggingface.co/facebook/deit-small-patch16-224 |
| DeiT-base | 81.8 | 95.6 | 86M | https://huggingface.co/facebook/deit-base-patch16-224 |
| **DeiT-tiny distilled** | **74.5** | **91.9** | **6M** | **https://huggingface.co/facebook/deit-tiny-distilled-patch16-224** |
| DeiT-small distilled | 81.2 | 95.4 | 22M | https://huggingface.co/facebook/deit-small-distilled-patch16-224 |
| DeiT-base distilled | 83.4 | 96.5 | 87M | https://huggingface.co/facebook/deit-base-distilled-patch16-224 |
| DeiT-base 384 | 82.9 | 96.2 | 87M | https://huggingface.co/facebook/deit-base-patch16-384 |
| DeiT-base distilled 384 (1000 epochs) | 85.2 | 97.2 | 88M | https://huggingface.co/facebook/deit-base-distilled-patch16-384 |
Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance.
### BibTeX entry and citation info
```bibtex
@misc{touvron2021training,
title={Training data-efficient image transformers & distillation through attention},
author={Hugo Touvron and Matthieu Cord and Matthijs Douze and Francisco Massa and Alexandre Sablayrolles and Hervé Jégou},
year={2021},
eprint={2012.12877},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
```bibtex
@misc{wu2020visual,
title={Visual Transformers: Token-based Image Representation and Processing for Computer Vision},
author={Bichen Wu and Chenfeng Xu and Xiaoliang Dai and Alvin Wan and Peizhao Zhang and Zhicheng Yan and Masayoshi Tomizuka and Joseph Gonzalez and Kurt Keutzer and Peter Vajda},
year={2020},
eprint={2006.03677},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
```bibtex
@inproceedings{deng2009imagenet,
title={Imagenet: A large-scale hierarchical image database},
author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li},
booktitle={2009 IEEE conference on computer vision and pattern recognition},
pages={248--255},
year={2009},
organization={Ieee}
}
```
|
facebook/deit-small-distilled-patch16-224
|
facebook
| 2022-07-13T11:41:21Z | 4,247 | 6 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"deit",
"image-classification",
"vision",
"dataset:imagenet",
"arxiv:2012.12877",
"arxiv:2006.03677",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- image-classification
- vision
datasets:
- imagenet
---
# Distilled Data-efficient Image Transformer (small-sized model)
Distilled data-efficient Image Transformer (DeiT) model pre-trained and fine-tuned on ImageNet-1k (1 million images, 1,000 classes) at resolution 224x224. It was first introduced in the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Touvron et al. and first released in [this repository](https://github.com/facebookresearch/deit). However, the weights were converted from the [timm repository](https://github.com/rwightman/pytorch-image-models) by Ross Wightman.
Disclaimer: The team releasing DeiT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
This model is a distilled Vision Transformer (ViT). It uses a distillation token, besides the class token, to effectively learn from a teacher (CNN) during both pre-training and fine-tuning. The distillation token is learned through backpropagation, by interacting with the class ([CLS]) and patch tokens through the self-attention layers.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=facebook/deit) to look for
fine-tuned versions on a task that interests you.
### How to use
Since this model is a distilled ViT model, you can plug it into DeiTModel, DeiTForImageClassification or DeiTForImageClassificationWithTeacher. Note that the model expects the data to be prepared using DeiTFeatureExtractor. Here we use AutoFeatureExtractor, which will automatically use the appropriate feature extractor given the model name.
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import AutoFeatureExtractor, DeiTForImageClassificationWithTeacher
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = AutoFeatureExtractor.from_pretrained('facebook/deit-small-distilled-patch16-224')
model = DeiTForImageClassificationWithTeacher.from_pretrained('facebook/deit-small-distilled-patch16-224')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
Currently, both the feature extractor and model support PyTorch. Tensorflow and JAX/FLAX are coming soon.
## Training data
This model was pretrained and fine-tuned with distillation on [ImageNet-1k](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes.
## Training procedure
### Preprocessing
The exact details of preprocessing of images during training/validation can be found [here](https://github.com/facebookresearch/deit/blob/ab5715372db8c6cad5740714b2216d55aeae052e/datasets.py#L78).
At inference time, images are resized/rescaled to the same resolution (256x256), center-cropped at 224x224 and normalized across the RGB channels with the ImageNet mean and standard deviation.
### Pretraining
The model was trained on a single 8-GPU node for 3 days. Training resolution is 224. For all hyperparameters (such as batch size and learning rate) we refer to table 9 of the original paper.
## Evaluation results
| Model | ImageNet top-1 accuracy | ImageNet top-5 accuracy | # params | URL |
|---------------------------------------|-------------------------|-------------------------|----------|------------------------------------------------------------------|
| DeiT-tiny | 72.2 | 91.1 | 5M | https://huggingface.co/facebook/deit-tiny-patch16-224 |
| DeiT-small | 79.9 | 95.0 | 22M | https://huggingface.co/facebook/deit-small-patch16-224 |
| DeiT-base | 81.8 | 95.6 | 86M | https://huggingface.co/facebook/deit-base-patch16-224 |
| DeiT-tiny distilled | 74.5 | 91.9 | 6M | https://huggingface.co/facebook/deit-tiny-distilled-patch16-224 |
| **DeiT-small distilled** | **81.2** | **95.4** | **22M** | **https://huggingface.co/facebook/deit-small-distilled-patch16-224** |
| DeiT-base distilled | 83.4 | 96.5 | 87M | https://huggingface.co/facebook/deit-base-distilled-patch16-224 |
| DeiT-base 384 | 82.9 | 96.2 | 87M | https://huggingface.co/facebook/deit-base-patch16-384 |
| DeiT-base distilled 384 (1000 epochs) | 85.2 | 97.2 | 88M | https://huggingface.co/facebook/deit-base-distilled-patch16-384 |
Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance.
### BibTeX entry and citation info
```bibtex
@misc{touvron2021training,
title={Training data-efficient image transformers & distillation through attention},
author={Hugo Touvron and Matthieu Cord and Matthijs Douze and Francisco Massa and Alexandre Sablayrolles and Hervé Jégou},
year={2021},
eprint={2012.12877},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
```bibtex
@misc{wu2020visual,
title={Visual Transformers: Token-based Image Representation and Processing for Computer Vision},
author={Bichen Wu and Chenfeng Xu and Xiaoliang Dai and Alvin Wan and Peizhao Zhang and Zhicheng Yan and Masayoshi Tomizuka and Joseph Gonzalez and Kurt Keutzer and Peter Vajda},
year={2020},
eprint={2006.03677},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
```bibtex
@inproceedings{deng2009imagenet,
title={Imagenet: A large-scale hierarchical image database},
author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li},
booktitle={2009 IEEE conference on computer vision and pattern recognition},
pages={248--255},
year={2009},
organization={Ieee}
}
```
|
facebook/deit-base-patch16-224
|
facebook
| 2022-07-13T11:40:44Z | 144,060 | 13 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"vit",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2012.12877",
"arxiv:2006.03677",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- image-classification
datasets:
- imagenet-1k
---
# Data-efficient Image Transformer (base-sized model)
Data-efficient Image Transformer (DeiT) model pre-trained and fine-tuned on ImageNet-1k (1 million images, 1,000 classes) at resolution 224x224. It was first introduced in the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Touvron et al. and first released in [this repository](https://github.com/facebookresearch/deit). However, the weights were converted from the [timm repository](https://github.com/rwightman/pytorch-image-models) by Ross Wightman.
Disclaimer: The team releasing DeiT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
This model is actually a more efficiently trained Vision Transformer (ViT).
The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pre-trained and fine-tuned on a large collection of images in a supervised fashion, namely ImageNet-1k, at a resolution of 224x224 pixels.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=facebook/deit) to look for
fine-tuned versions on a task that interests you.
### How to use
Since this model is a more efficiently trained ViT model, you can plug it into ViTModel or ViTForImageClassification. Note that the model expects the data to be prepared using DeiTFeatureExtractor. Here we use AutoFeatureExtractor, which will automatically use the appropriate feature extractor given the model name.
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import AutoFeatureExtractor, ViTForImageClassification
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = AutoFeatureExtractor.from_pretrained('facebook/deit-base-patch16-224')
model = ViTForImageClassification.from_pretrained('facebook/deit-base-patch16-224')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
Currently, both the feature extractor and model support PyTorch. Tensorflow and JAX/FLAX are coming soon.
## Training data
The ViT model was pretrained on [ImageNet-1k](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes.
## Training procedure
### Preprocessing
The exact details of preprocessing of images during training/validation can be found [here](https://github.com/facebookresearch/deit/blob/ab5715372db8c6cad5740714b2216d55aeae052e/datasets.py#L78).
At inference time, images are resized/rescaled to the same resolution (256x256), center-cropped at 224x224 and normalized across the RGB channels with the ImageNet mean and standard deviation.
### Pretraining
The model was trained on a single 8-GPU node for 3 days. Training resolution is 224. For all hyperparameters (such as batch size and learning rate) we refer to table 9 of the original paper.
## Evaluation results
| Model | ImageNet top-1 accuracy | ImageNet top-5 accuracy | # params | URL |
|---------------------------------------|-------------------------|-------------------------|----------|------------------------------------------------------------------|
| DeiT-tiny | 72.2 | 91.1 | 5M | https://huggingface.co/facebook/deit-tiny-patch16-224 |
| DeiT-small | 79.9 | 95.0 | 22M | https://huggingface.co/facebook/deit-small-patch16-224 |
| **DeiT-base** | **81.8** | **95.6** | **86M** | **https://huggingface.co/facebook/deit-base-patch16-224** |
| DeiT-tiny distilled | 74.5 | 91.9 | 6M | https://huggingface.co/facebook/deit-tiny-distilled-patch16-224 |
| DeiT-small distilled | 81.2 | 95.4 | 22M | https://huggingface.co/facebook/deit-small-distilled-patch16-224 |
| DeiT-base distilled | 83.4 | 96.5 | 87M | https://huggingface.co/facebook/deit-base-distilled-patch16-224 |
| DeiT-base 384 | 82.9 | 96.2 | 87M | https://huggingface.co/facebook/deit-base-patch16-384 |
| DeiT-base distilled 384 (1000 epochs) | 85.2 | 97.2 | 88M | https://huggingface.co/facebook/deit-base-distilled-patch16-384 |
Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance.
### BibTeX entry and citation info
```bibtex
@misc{touvron2021training,
title={Training data-efficient image transformers & distillation through attention},
author={Hugo Touvron and Matthieu Cord and Matthijs Douze and Francisco Massa and Alexandre Sablayrolles and Hervé Jégou},
year={2021},
eprint={2012.12877},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
```bibtex
@misc{wu2020visual,
title={Visual Transformers: Token-based Image Representation and Processing for Computer Vision},
author={Bichen Wu and Chenfeng Xu and Xiaoliang Dai and Alvin Wan and Peizhao Zhang and Zhicheng Yan and Masayoshi Tomizuka and Joseph Gonzalez and Kurt Keutzer and Peter Vajda},
year={2020},
eprint={2006.03677},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
```bibtex
@inproceedings{deng2009imagenet,
title={Imagenet: A large-scale hierarchical image database},
author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li},
booktitle={2009 IEEE conference on computer vision and pattern recognition},
pages={248--255},
year={2009},
organization={Ieee}
}
```
|
facebook/deit-base-distilled-patch16-224
|
facebook
| 2022-07-13T11:39:38Z | 16,934 | 23 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"deit",
"image-classification",
"vision",
"dataset:imagenet",
"arxiv:2012.12877",
"arxiv:2006.03677",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- image-classification
- vision
datasets:
- imagenet
---
# Distilled Data-efficient Image Transformer (base-sized model)
Distilled data-efficient Image Transformer (DeiT) model pre-trained and fine-tuned on ImageNet-1k (1 million images, 1,000 classes) at resolution 224x224. It was first introduced in the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Touvron et al. and first released in [this repository](https://github.com/facebookresearch/deit). However, the weights were converted from the [timm repository](https://github.com/rwightman/pytorch-image-models) by Ross Wightman.
Disclaimer: The team releasing DeiT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
This model is a distilled Vision Transformer (ViT). It uses a distillation token, besides the class token, to effectively learn from a teacher (CNN) during both pre-training and fine-tuning. The distillation token is learned through backpropagation, by interacting with the class ([CLS]) and patch tokens through the self-attention layers.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=facebook/deit) to look for
fine-tuned versions on a task that interests you.
### How to use
Since this model is a distilled ViT model, you can plug it into DeiTModel, DeiTForImageClassification or DeiTForImageClassificationWithTeacher. Note that the model expects the data to be prepared using DeiTFeatureExtractor. Here we use AutoFeatureExtractor, which will automatically use the appropriate feature extractor given the model name.
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import AutoFeatureExtractor, DeiTForImageClassificationWithTeacher
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = AutoFeatureExtractor.from_pretrained('facebook/deit-base-distilled-patch16-224')
model = DeiTForImageClassificationWithTeacher.from_pretrained('facebook/deit-base-distilled-patch16-224')
inputs = feature_extractor(images=image, return_tensors="pt")
# forward pass
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
Currently, both the feature extractor and model support PyTorch. Tensorflow and JAX/FLAX are coming soon.
## Training data
This model was pretrained and fine-tuned with distillation on [ImageNet-1k](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes.
## Training procedure
### Preprocessing
The exact details of preprocessing of images during training/validation can be found [here](https://github.com/facebookresearch/deit/blob/ab5715372db8c6cad5740714b2216d55aeae052e/datasets.py#L78).
At inference time, images are resized/rescaled to the same resolution (256x256), center-cropped at 224x224 and normalized across the RGB channels with the ImageNet mean and standard deviation.
### Pretraining
The model was trained on a single 8-GPU node for 3 days. Training resolution is 224. For all hyperparameters (such as batch size and learning rate) we refer to table 9 of the original paper.
## Evaluation results
| Model | ImageNet top-1 accuracy | ImageNet top-5 accuracy | # params | URL |
|---------------------------------------|-------------------------|-------------------------|----------|------------------------------------------------------------------|
| DeiT-tiny | 72.2 | 91.1 | 5M | https://huggingface.co/facebook/deit-tiny-patch16-224 |
| DeiT-small | 79.9 | 95.0 | 22M | https://huggingface.co/facebook/deit-small-patch16-224 |
| DeiT-base | 81.8 | 95.6 | 86M | https://huggingface.co/facebook/deit-base-patch16-224 |
| DeiT-tiny distilled | 74.5 | 91.9 | 6M | https://huggingface.co/facebook/deit-tiny-distilled-patch16-224 |
| DeiT-small distilled | 81.2 | 95.4 | 22M | https://huggingface.co/facebook/deit-small-distilled-patch16-224 |
| **DeiT-base distilled** | **83.4** | **96.5** | **87M** | **https://huggingface.co/facebook/deit-base-distilled-patch16-224** |
| DeiT-base 384 | 82.9 | 96.2 | 87M | https://huggingface.co/facebook/deit-base-patch16-384 |
| DeiT-base distilled 384 (1000 epochs) | 85.2 | 97.2 | 88M | https://huggingface.co/facebook/deit-base-distilled-patch16-384 |
Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance.
### BibTeX entry and citation info
```bibtex
@misc{touvron2021training,
title={Training data-efficient image transformers & distillation through attention},
author={Hugo Touvron and Matthieu Cord and Matthijs Douze and Francisco Massa and Alexandre Sablayrolles and Hervé Jégou},
year={2021},
eprint={2012.12877},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
```bibtex
@misc{wu2020visual,
title={Visual Transformers: Token-based Image Representation and Processing for Computer Vision},
author={Bichen Wu and Chenfeng Xu and Xiaoliang Dai and Alvin Wan and Peizhao Zhang and Zhicheng Yan and Masayoshi Tomizuka and Joseph Gonzalez and Kurt Keutzer and Peter Vajda},
year={2020},
eprint={2006.03677},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
```bibtex
@inproceedings{deng2009imagenet,
title={Imagenet: A large-scale hierarchical image database},
author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li},
booktitle={2009 IEEE conference on computer vision and pattern recognition},
pages={248--255},
year={2009},
organization={Ieee}
}
```
|
fxmarty/20220713-h10m20s05_example_conll2003
|
fxmarty
| 2022-07-13T10:20:11Z | 0 | 0 | null |
[
"tensorboard",
"distilbert",
"token-classification",
"dataset:conll2003",
"region:us"
] |
token-classification
| 2022-07-13T10:20:05Z |
---
pipeline_tag: token-classification
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
tags:
- distilbert
---
**task**: `token-classification`
**Backend:** `sagemaker-training`
**Backend args:** `{'instance_type': 'ml.g4dn.2xlarge', 'supported_instructions': None}`
**Number of evaluation samples:** `All dataset`
Fixed parameters:
* **model_name_or_path**: `elastic/distilbert-base-uncased-finetuned-conll03-english`
* **dataset**:
* **path**: `conll2003`
* **eval_split**: `validation`
* **data_keys**: `{'primary': 'tokens'}`
* **ref_keys**: `['ner_tags']`
* **calibration_split**: `train`
* **quantization_approach**: `static`
* **operators_to_quantize**: `['Add', 'MatMul']`
* **per_channel**: `False`
* **calibration**:
* **method**: `minmax`
* **num_calibration_samples**: `100`
* **framework**: `onnxruntime`
* **framework_args**:
* **opset**: `11`
* **optimization_level**: `1`
* **aware_training**: `False`
Benchmarked parameters:
* **node_exclusion**: `[]`, `['layernorm', 'gelu', 'residual', 'gather', 'softmax']`
# Evaluation
## Non-time metrics
| node_exclusion | | precision (original) | precision (optimized) | | recall (original) | recall (optimized) | | f1 (original) | f1 (optimized) | | accuracy (original) | accuracy (optimized) |
| :------------------------------------------------------: | :-: | :------------------: | :-------------------: | :-: | :---------------: | :----------------: | :-: | :-----------: | :------------: | :-: | :-----------------: | :------------------: |
| `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 0.936 | 0.904 | \| | 0.944 | 0.921 | \| | 0.940 | 0.912 | \| | 0.988 | 0.984 |
| `[]` | \| | 0.936 | 0.065 | \| | 0.944 | 0.243 | \| | 0.940 | 0.103 | \| | 0.988 | 0.357 |
## Time metrics
Time benchmarks were run for 15 seconds per config.
Below, time metrics for batch size = 4, input length = 64.
| node_exclusion | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) |
| :------------------------------------------------------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: |
| `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 120.53 | 46.41 | \| | 8.33 | 21.60 |
| `[]` | \| | 119.97 | 59.50 | \| | 8.40 | 16.87 |
|
hugginglearners/flowers_101_convnext_model
|
hugginglearners
| 2022-07-13T09:58:32Z | 0 | 3 |
fastai
|
[
"fastai",
"image-classification",
"region:us"
] |
image-classification
| 2022-07-04T00:50:48Z |
---
tags:
- fastai
- image-classification
---
# Model card
## Model description
This model has been trained by fine-tuning convnext_tiny_in22k on the [Flowers-101 dataset from Kaggle](https://www.kaggle.com/competitions/tpu-getting-started).
**Useful graphs logged with wandb**


## Intended uses & limitations
- The model can be used for classifying flowers only.
**Limitations**
- Even if the uploaded picture is not of a flower, you will notice that [it is still predicted as a flower](https://www.kaggle.com/competitions/tpu-getting-started).
- The model has an accuracy of 94.23% on the validation dataset.

## Training and evaluation data
- The model has been trained and evaluated on the [Flowers-101 dataset from Kaggle](https://www.kaggle.com/competitions/tpu-getting-started).
- A random splitter was used to create the training and validation sets.
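A minimal loading sketch, assuming the `huggingface_hub` fastai integration; the image path is a placeholder:

```python
from huggingface_hub import from_pretrained_fastai

# Sketch: download the exported fastai learner from the Hub and run a prediction.
learn = from_pretrained_fastai("hugginglearners/flowers_101_convnext_model")

# "flower.jpg" is a placeholder path to any local image of a flower.
prediction, _, probabilities = learn.predict("flower.jpg")
print(prediction, probabilities.max().item())
```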
|
fxmarty/20220713-h08m45s49_example_squad
|
fxmarty
| 2022-07-13T08:46:02Z | 0 | 0 | null |
[
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad",
"region:us"
] |
question-answering
| 2022-07-13T08:45:49Z |
---
pipeline_tag: question-answering
datasets:
- squad
metrics:
- exact_match
- f1
tags:
- distilbert
---
**task**: `question-answering`
**Backend:** `sagemaker-training`
**Backend args:** `{'instance_type': 'ml.g4dn.2xlarge', 'supported_instructions': None}`
**Number of evaluation samples:** `1000`
Fixed parameters:
* **model_name_or_path**: `distilbert-base-uncased-distilled-squad`
* **dataset**:
* **path**: `squad`
* **eval_split**: `validation`
* **data_keys**: `{'question': 'question', 'context': 'context'}`
* **ref_keys**: `['answers']`
* **calibration_split**: `train`
* **per_channel**: `False`
* **calibration**:
* **method**: `minmax`
* **num_calibration_samples**: `100`
* **framework**: `onnxruntime`
* **framework_args**:
* **opset**: `11`
* **optimization_level**: `1`
* **aware_training**: `False`
Benchmarked parameters:
* **quantization_approach**: `dynamic`, `static`
* **operators_to_quantize**: `['Add']`, `['Add', 'MatMul']`
* **node_exclusion**: `[]`, `['layernorm', 'gelu', 'residual', 'gather', 'softmax']`
# Evaluation
## Non-time metrics
| quantization_approach | operators_to_quantize | node_exclusion | | exact_match (original) | exact_match (optimized) | | f1 (original) | f1 (optimized) |
| :-------------------: | :-------------------: | :------------------------------------------------------: | :-: | :--------------------: | :---------------------: | :-: | :-----------: | :------------: |
| `dynamic` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 82.300 | 80.600 | \| | 87.232 | 86.097 |
| `dynamic` | `['Add', 'MatMul']` | `[]` | \| | 82.300 | 80.600 | \| | 87.232 | 86.097 |
| `dynamic` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 82.300 | 82.300 | \| | 87.232 | 87.232 |
| `dynamic` | `['Add']` | `[]` | \| | 82.300 | 82.300 | \| | 87.232 | 87.232 |
| `static` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 82.300 | 72.900 | \| | 87.232 | 79.964 |
| `static` | `['Add', 'MatMul']` | `[]` | \| | 82.300 | 54.500 | \| | 87.232 | 64.292 |
| `static` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 82.300 | 76.900 | \| | 87.232 | 83.014 |
| `static` | `['Add']` | `[]` | \| | 82.300 | 59.800 | \| | 87.232 | 69.217 |
## Time metrics
Time benchmarks were run for 15 seconds per config.
Below, time metrics for batch size = 1, input length = 32.
| quantization_approach | operators_to_quantize | node_exclusion | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) |
| :-------------------: | :-------------------: | :------------------------------------------------------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: |
| `dynamic` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 47.87 | 7.23 | \| | 20.93 | 138.40 |
| `dynamic` | `['Add', 'MatMul']` | `[]` | \| | 48.10 | 7.14 | \| | 20.80 | 140.13 |
| `dynamic` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 43.83 | 17.16 | \| | 22.87 | 58.33 |
| `dynamic` | `['Add']` | `[]` | \| | 34.13 | 17.02 | \| | 29.33 | 58.80 |
| `static` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 35.07 | 9.21 | \| | 28.53 | 108.53 |
| `static` | `['Add', 'MatMul']` | `[]` | \| | 48.27 | 11.62 | \| | 20.73 | 86.13 |
| `static` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 34.11 | 19.23 | \| | 29.33 | 52.00 |
| `static` | `['Add']` | `[]` | \| | 48.54 | 21.18 | \| | 20.67 | 47.27 |
Below, time metrics for batch size = 1, input length = 64.
| quantization_approach | operators_to_quantize | node_exclusion | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) |
| :-------------------: | :-------------------: | :------------------------------------------------------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: |
| `dynamic` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 59.92 | 12.60 | \| | 16.73 | 79.40 |
| `dynamic` | `['Add', 'MatMul']` | `[]` | \| | 59.64 | 13.25 | \| | 16.80 | 75.47 |
| `dynamic` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 60.13 | 29.65 | \| | 16.67 | 33.73 |
| `dynamic` | `['Add']` | `[]` | \| | 59.62 | 29.51 | \| | 16.80 | 33.93 |
| `static` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 58.94 | 15.13 | \| | 17.00 | 66.13 |
| `static` | `['Add', 'MatMul']` | `[]` | \| | 60.49 | 18.62 | \| | 16.53 | 53.73 |
| `static` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 43.32 | 28.00 | \| | 23.13 | 35.73 |
| `static` | `['Add']` | `[]` | \| | 44.19 | 32.72 | \| | 22.67 | 30.60 |
Below, time metrics for batch size = 1, input length = 128.
| quantization_approach | operators_to_quantize | node_exclusion | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) |
| :-------------------: | :-------------------: | :------------------------------------------------------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: |
| `dynamic` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 73.39 | 26.56 | \| | 13.67 | 37.67 |
| `dynamic` | `['Add', 'MatMul']` | `[]` | \| | 57.64 | 23.42 | \| | 17.40 | 42.73 |
| `dynamic` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 64.04 | 50.14 | \| | 15.67 | 20.00 |
| `dynamic` | `['Add']` | `[]` | \| | 72.81 | 57.05 | \| | 13.80 | 17.53 |
| `static` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 70.57 | 27.59 | \| | 14.20 | 36.27 |
| `static` | `['Add', 'MatMul']` | `[]` | \| | 71.04 | 37.94 | \| | 14.13 | 26.40 |
| `static` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 57.65 | 57.95 | \| | 17.40 | 17.27 |
| `static` | `['Add']` | `[]` | \| | 71.66 | 58.67 | \| | 14.00 | 17.07 |
Below, time metrics for batch size = 4, input length = 32.
| quantization_approach | operators_to_quantize | node_exclusion | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) |
| :-------------------: | :-------------------: | :------------------------------------------------------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: |
| `dynamic` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 72.11 | 21.80 | \| | 13.93 | 45.93 |
| `dynamic` | `['Add', 'MatMul']` | `[]` | \| | 73.15 | 20.70 | \| | 13.73 | 48.33 |
| `dynamic` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 72.05 | 53.68 | \| | 13.93 | 18.67 |
| `dynamic` | `['Add']` | `[]` | \| | 55.97 | 53.60 | \| | 17.87 | 18.67 |
| `static` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 70.46 | 24.88 | \| | 14.20 | 40.20 |
| `static` | `['Add', 'MatMul']` | `[]` | \| | 56.57 | 30.90 | \| | 17.73 | 32.40 |
| `static` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 62.38 | 53.64 | \| | 16.07 | 18.67 |
| `static` | `['Add']` | `[]` | \| | 60.19 | 67.29 | \| | 16.67 | 14.87 |
Below, time metrics for batch size = 4, input length = 64.
| quantization_approach | operators_to_quantize | node_exclusion | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) |
| :-------------------: | :-------------------: | :------------------------------------------------------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: |
| `dynamic` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 121.20 | 40.12 | \| | 8.27 | 24.93 |
| `dynamic` | `['Add', 'MatMul']` | `[]` | \| | 90.97 | 41.51 | \| | 11.00 | 24.13 |
| `dynamic` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 120.85 | 106.50 | \| | 8.33 | 9.40 |
| `dynamic` | `['Add']` | `[]` | \| | 118.58 | 106.55 | \| | 8.47 | 9.40 |
| `static` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 120.57 | 54.25 | \| | 8.33 | 18.47 |
| `static` | `['Add', 'MatMul']` | `[]` | \| | 104.93 | 57.90 | \| | 9.60 | 17.33 |
| `static` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 90.85 | 110.46 | \| | 11.07 | 9.07 |
| `static` | `['Add']` | `[]` | \| | 120.57 | 103.62 | \| | 8.33 | 9.67 |
Below, time metrics for batch size = 4, input length = 128.
| quantization_approach | operators_to_quantize | node_exclusion | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) |
| :-------------------: | :-------------------: | :------------------------------------------------------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: |
| `dynamic` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 172.14 | 94.78 | \| | 5.87 | 10.60 |
| `dynamic` | `['Add', 'MatMul']` | `[]` | \| | 220.38 | 84.18 | \| | 4.60 | 11.93 |
| `dynamic` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 221.22 | 221.37 | \| | 4.53 | 4.53 |
| `dynamic` | `['Add']` | `[]` | \| | 203.90 | 175.16 | \| | 4.93 | 5.73 |
| `static` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 192.63 | 113.82 | \| | 5.20 | 8.80 |
| `static` | `['Add', 'MatMul']` | `[]` | \| | 220.32 | 122.36 | \| | 4.60 | 8.20 |
| `static` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 220.58 | 207.51 | \| | 4.60 | 4.87 |
| `static` | `['Add']` | `[]` | \| | 221.94 | 246.87 | \| | 4.53 | 4.07 |
Below, time metrics for batch size = 8, input length = 32.
| quantization_approach | operators_to_quantize | node_exclusion | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) |
| :-------------------: | :-------------------: | :------------------------------------------------------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: |
| `dynamic` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 112.67 | 43.26 | \| | 8.93 | 23.13 |
| `dynamic` | `['Add', 'MatMul']` | `[]` | \| | 95.78 | 40.66 | \| | 10.47 | 24.60 |
| `dynamic` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 117.38 | 104.28 | \| | 8.53 | 9.60 |
| `dynamic` | `['Add']` | `[]` | \| | 89.81 | 91.00 | \| | 11.20 | 11.00 |
| `static` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 89.14 | 52.09 | \| | 11.27 | 19.20 |
| `static` | `['Add', 'MatMul']` | `[]` | \| | 92.77 | 64.21 | \| | 10.80 | 15.60 |
| `static` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 119.10 | 114.43 | \| | 8.40 | 8.80 |
| `static` | `['Add']` | `[]` | \| | 119.28 | 127.79 | \| | 8.40 | 7.87 |
Below, time metrics for batch size = 8, input length = 64.
| quantization_approach | operators_to_quantize | node_exclusion | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) |
| :-------------------: | :-------------------: | :------------------------------------------------------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: |
| `dynamic` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 215.03 | 78.03 | \| | 4.67 | 12.87 |
| `dynamic` | `['Add', 'MatMul']` | `[]` | \| | 214.76 | 87.19 | \| | 4.67 | 11.53 |
| `dynamic` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 216.48 | 162.64 | \| | 4.67 | 6.20 |
| `dynamic` | `['Add']` | `[]` | \| | 204.29 | 212.33 | \| | 4.93 | 4.73 |
| `static` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 215.47 | 104.45 | \| | 4.67 | 9.60 |
| `static` | `['Add', 'MatMul']` | `[]` | \| | 209.66 | 106.43 | \| | 4.80 | 9.40 |
| `static` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 166.13 | 220.92 | \| | 6.07 | 4.53 |
| `static` | `['Add']` | `[]` | \| | 214.69 | 209.01 | \| | 4.67 | 4.80 |
Below, time metrics for batch size = 8, input length = 128.
| quantization_approach | operators_to_quantize | node_exclusion | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) |
| :-------------------: | :-------------------: | :------------------------------------------------------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: |
| `dynamic` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 407.90 | 151.49 | \| | 2.47 | 6.67 |
| `dynamic` | `['Add', 'MatMul']` | `[]` | \| | 407.34 | 154.55 | \| | 2.47 | 6.53 |
| `dynamic` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 406.51 | 394.85 | \| | 2.47 | 2.60 |
| `dynamic` | `['Add']` | `[]` | \| | 309.53 | 445.24 | \| | 3.27 | 2.27 |
| `static` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 407.54 | 224.46 | \| | 2.47 | 4.47 |
| `static` | `['Add', 'MatMul']` | `[]` | \| | 408.14 | 236.94 | \| | 2.47 | 4.27 |
| `static` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 309.91 | 357.87 | \| | 3.27 | 2.80 |
| `static` | `['Add']` | `[]` | \| | 310.00 | 406.54 | \| | 3.27 | 2.47 |
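For reference, latency and throughput figures like those above are typically collected with a simple timing loop over an ONNX Runtime session. The sketch below is illustrative only; the model file name, input names, and warmup/measurement counts are placeholders, not taken from the original benchmark setup:
```python
# Illustrative sketch only: measuring mean latency / throughput of an ONNX model.
# "model.onnx" and the input names are placeholders, not this card's artifacts.
import time
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx")
batch_size, seq_len = 8, 128
feed = {
    "input_ids": np.ones((batch_size, seq_len), dtype=np.int64),
    "attention_mask": np.ones((batch_size, seq_len), dtype=np.int64),
}

for _ in range(10):          # warmup runs, excluded from timing
    session.run(None, feed)

latencies_ms = []
for _ in range(30):          # measured runs
    start = time.perf_counter()
    session.run(None, feed)
    latencies_ms.append((time.perf_counter() - start) * 1000)

latency_mean = float(np.mean(latencies_ms))
throughput = 1000.0 / latency_mean   # batches per second
print(f"latency_mean: {latency_mean:.2f} ms | throughput: {throughput:.2f} /s")
```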
|
dsivakumar/text2sql
|
dsivakumar
| 2022-07-13T07:27:17Z | 28 | 2 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:wikisql",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-07-10T07:43:23Z |
---
language:
- en
datasets:
- wikisql
widget:
- text: "English to SQL: Show me the average age of of wines in Italy by provinces"
- text: "English to SQL: What is the current series where the new series began in June 2011?"
---
```python
# Requires: transformers, torch, sentencepiece
import torch
from transformers import (
    T5ForConditionalGeneration,
    T5Tokenizer,
)

# Load model and tokenizer
model = T5ForConditionalGeneration.from_pretrained('dsivakumar/text2sql')
tokenizer = T5Tokenizer.from_pretrained('dsivakumar/text2sql')

# Predict function
def get_sql(query, tokenizer, model):
    source_text = "English to SQL: " + query
    source_text = ' '.join(source_text.split())
    source = tokenizer.batch_encode_plus(
        [source_text],
        max_length=128,
        truncation=True,
        padding="max_length",
        return_tensors='pt',
    )
    source_ids = source['input_ids']
    source_mask = source['attention_mask']
    generated_ids = model.generate(
        input_ids=source_ids.to(dtype=torch.long),
        attention_mask=source_mask.to(dtype=torch.long),
        max_length=150,
        num_beams=2,
        repetition_penalty=2.5,
        length_penalty=1.0,
        early_stopping=True,
    )
    preds = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=True)
             for g in generated_ids]
    return preds

# Test
query = "Show me the average age of wines in Italy by provinces"
sql = get_sql(query, tokenizer, model)
print(sql)

# Alternative helper, adapted from https://huggingface.co/mrm8488/t5-small-finetuned-wikiSQL
def get_sql_v2(query):
    input_text = "translate English to SQL: %s </s>" % query
    features = tokenizer([input_text], return_tensors='pt')
    output = model.generate(input_ids=features['input_ids'],
                            attention_mask=features['attention_mask'])
    return tokenizer.decode(output[0], skip_special_tokens=True)

query = "How many models were finetuned using BERT as base model?"
print(get_sql_v2(query))
```
|
Loc/lucky-model
|
Loc
| 2022-07-13T07:06:05Z | 53 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"vit",
"image-classification",
"vision",
"dataset:imagenet-1k",
"dataset:imagenet-21k",
"arxiv:2010.11929",
"arxiv:2006.03677",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-07-13T03:43:48Z |
---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
- imagenet-21k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# Vision Transformer (base-sized model)
Vision Transformer (ViT) model pre-trained on ImageNet-21k (14 million images, 21,843 classes) at resolution 224x224, and fine-tuned on ImageNet 2012 (1 million images, 1,000 classes) at resolution 224x224. It was introduced in the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Dosovitskiy et al. and first released in [this repository](https://github.com/google-research/vision_transformer). However, the weights were converted from the [timm repository](https://github.com/rwightman/pytorch-image-models) by Ross Wightman, who already converted the weights from JAX to PyTorch. Credits go to him.
Disclaimer: The team releasing ViT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. Next, the model was fine-tuned on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, also at resolution 224x224.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. A [CLS] token is prepended to the sequence for use in classification tasks, and absolute position embeddings are added before the sequence is fed to the layers of the Transformer encoder.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image.
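As a concrete (unofficial) illustration of that linear-probing setup, one could extract the [CLS] hidden state with the plain encoder and place a linear layer on top; `num_labels`, the input images, and the training loop are placeholders you would supply yourself:
```python
# Unofficial sketch: linear probe on top of the frozen ViT encoder.
# num_labels is a placeholder; supply your own images and training loop.
import torch
from transformers import ViTFeatureExtractor, ViTModel

feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-base-patch16-224')
encoder = ViTModel.from_pretrained('google/vit-base-patch16-224')
num_labels = 10  # placeholder: number of classes in your dataset
classifier = torch.nn.Linear(encoder.config.hidden_size, num_labels)

def cls_features(images):
    # images: a PIL image or list of PIL images
    inputs = feature_extractor(images=images, return_tensors="pt")
    with torch.no_grad():                    # keep the encoder frozen
        outputs = encoder(**inputs)
    return outputs.last_hidden_state[:, 0]   # hidden state of the [CLS] token

# Class logits for a batch of images: classifier(cls_features(images))
```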
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=google/vit) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import ViTFeatureExtractor, ViTForImageClassification
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-base-patch16-224')
model = ViTForImageClassification.from_pretrained('google/vit-base-patch16-224')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/vit.html#).
## Training data
The ViT model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes, and fine-tuned on [ImageNet](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes.
## Training procedure
### Preprocessing
The exact details of preprocessing of images during training/validation can be found [here](https://github.com/google-research/vision_transformer/blob/master/vit_jax/input_pipeline.py).
Images are resized/rescaled to the same resolution (224x224) and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5).
### Pretraining
The model was trained on TPUv3 hardware (8 cores). All model variants are trained with a batch size of 4096 and learning rate warmup of 10k steps. For ImageNet, the authors found it beneficial to additionally apply gradient clipping at global norm 1. Training resolution is 224.
## Evaluation results
For evaluation results on several image classification benchmarks, we refer to tables 2 and 5 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance.
### BibTeX entry and citation info
```bibtex
@misc{wu2020visual,
title={Visual Transformers: Token-based Image Representation and Processing for Computer Vision},
author={Bichen Wu and Chenfeng Xu and Xiaoliang Dai and Alvin Wan and Peizhao Zhang and Zhicheng Yan and Masayoshi Tomizuka and Joseph Gonzalez and Kurt Keutzer and Peter Vajda},
year={2020},
eprint={2006.03677},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
```bibtex
@inproceedings{deng2009imagenet,
title={Imagenet: A large-scale hierarchical image database},
author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li},
booktitle={2009 IEEE conference on computer vision and pattern recognition},
pages={248--255},
year={2009},
organization={Ieee}
}
```
|
huggingartists/queen
|
huggingartists
| 2022-07-13T06:52:09Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/queen",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
datasets:
- huggingartists/queen
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/97bcb5755cb9780d76b37726a0ce4bef.1000x1000x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Queen</div>
<a href="https://genius.com/artists/queen">
<div style="text-align: center; font-size: 14px;">@queen</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Queen.
The dataset is available [here](https://huggingface.co/datasets/huggingartists/queen) and can be loaded with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/queen")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/1jdprwq2/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Queen's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/2lvkoamo) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/2lvkoamo/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/queen')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/queen")
model = AutoModelWithLMHead.from_pretrained("huggingartists/queen")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
FelipeAD/mt5-small-SENTENCE_COMPRESSION
|
FelipeAD
| 2022-07-13T06:44:19Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"mt5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-07-12T21:29:25Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: FelipeAD/mt5-small-SENTENCE_COMPRESSION
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# FelipeAD/mt5-small-SENTENCE_COMPRESSION
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.1433
- Validation Loss: 0.9768
- Epoch: 3
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 179848, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.6046 | 1.1992 | 0 |
| 1.3586 | 1.0826 | 1 |
| 1.2178 | 1.0241 | 2 |
| 1.1433 | 0.9768 | 3 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.6.0
- Datasets 2.3.2
- Tokenizers 0.12.1
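The card does not include a usage snippet. Below is a minimal inference sketch, assuming the standard TF seq2seq generation API; the input format is our guess, since the card does not document how training examples were framed:
```python
# Sketch only: inference with the TF checkpoint. The raw-sentence input format
# is an assumption; the training prompt format is not documented in the card.
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("FelipeAD/mt5-small-SENTENCE_COMPRESSION")
model = TFAutoModelForSeq2SeqLM.from_pretrained("FelipeAD/mt5-small-SENTENCE_COMPRESSION")

text = "The quick brown fox, which was very quick indeed, jumped over the lazy dog."
inputs = tokenizer(text, return_tensors="tf")
outputs = model.generate(**inputs, max_length=32, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```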
|
NimaBoscarino/STPushToHub-test2
|
NimaBoscarino
| 2022-07-13T05:57:37Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-07-13T05:49:12Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# NimaBoscarino/STPushToHub-test2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('NimaBoscarino/STPushToHub-test2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('NimaBoscarino/STPushToHub-test2')
model = AutoModel.from_pretrained('NimaBoscarino/STPushToHub-test2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=NimaBoscarino/STPushToHub-test2)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 360 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 4,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 144,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
jason9693/soongsil-bert-small
|
jason9693
| 2022-07-13T05:33:10Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"roberta",
"fill-mask",
"ko",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language: ko
widget:
- text: "숭실대학교 글로벌<mask>학부"
---
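A minimal usage sketch (not part of the original card): fill-mask inference with the widget example above.
```python
from transformers import pipeline

# Sketch only: masked-token prediction using the Korean widget example.
unmasker = pipeline("fill-mask", model="jason9693/soongsil-bert-small")
print(unmasker("숭실대학교 글로벌<mask>학부"))
```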
|
huggingtweets/burdeevt
|
huggingtweets
| 2022-07-13T04:15:34Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-07-13T03:51:48Z |
---
language: en
thumbnail: http://www.huggingtweets.com/burdeevt/1657685656540/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1542316332972228608/Hs2WAuIA_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Burdee 🐣💖</div>
<div style="text-align: center; font-size: 14px;">@burdeevt</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Burdee 🐣💖.
| Data | Burdee 🐣💖 |
| --- | --- |
| Tweets downloaded | 2715 |
| Retweets | 1903 |
| Short tweets | 252 |
| Tweets kept | 560 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/37eoz4i5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @burdeevt's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2t35juo3) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2t35juo3/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/burdeevt')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/kitsune__spirit
|
huggingtweets
| 2022-07-13T02:51:17Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/kitsune__spirit/1657680673292/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1523268231833739266/foV-CaZh_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">KitsuneSpirit Mei 💝🦊「 YOKOMESHI 」</div>
<div style="text-align: center; font-size: 14px;">@kitsune__spirit</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from KitsuneSpirit Mei 💝🦊「 YOKOMESHI 」.
| Data | KitsuneSpirit Mei 💝🦊「 YOKOMESHI 」 |
| --- | --- |
| Tweets downloaded | 3248 |
| Retweets | 67 |
| Short tweets | 820 |
| Tweets kept | 2361 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3uiy3sjw/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @kitsune__spirit's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1hdne87l) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1hdne87l/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/kitsune__spirit')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
xliu128/distilbert-base-uncased-finetuned-clinc
|
xliu128
| 2022-07-13T02:30:34Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-13T01:44:36Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9183870967741935
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7720
- Accuracy: 0.9184
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 3.2891 | 0.7429 |
| 3.7868 | 2.0 | 636 | 1.8755 | 0.8374 |
| 3.7868 | 3.0 | 954 | 1.1570 | 0.8961 |
| 1.6928 | 4.0 | 1272 | 0.8573 | 0.9132 |
| 0.9056 | 5.0 | 1590 | 0.7720 | 0.9184 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
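The card gives no inference example. A minimal sketch (ours, not the author's) for intent classification with this checkpoint; the example utterance is illustrative:
```python
from transformers import pipeline

# Sketch only: intent classification on a CLINC150-style utterance.
classifier = pipeline("text-classification",
                      model="xliu128/distilbert-base-uncased-finetuned-clinc")
print(classifier("Please transfer 100 dollars from savings to checking."))
```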
|
ariesutiono/scibert-lm-const-finetuned-20
|
ariesutiono
| 2022-07-13T00:15:55Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"dataset:conll2003",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-12T23:32:22Z |
---
tags:
- generated_from_trainer
datasets:
- conll2003
model-index:
- name: scibert-lm-const-finetuned-20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# scibert-lm-const-finetuned-20
This model is a fine-tuned version of [allenai/scibert_scivocab_cased](https://huggingface.co/allenai/scibert_scivocab_cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0099
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.6081 | 1.0 | 118 | 2.9156 |
| 2.7954 | 2.0 | 236 | 2.5940 |
| 2.5762 | 3.0 | 354 | 2.5017 |
| 2.4384 | 4.0 | 472 | 2.3923 |
| 2.3391 | 5.0 | 590 | 2.2996 |
| 2.2417 | 6.0 | 708 | 2.3180 |
| 2.2161 | 7.0 | 826 | 2.2336 |
| 2.1918 | 8.0 | 944 | 2.2465 |
| 2.1494 | 9.0 | 1062 | 2.1871 |
| 2.1215 | 10.0 | 1180 | 2.1566 |
| 2.1015 | 11.0 | 1298 | 2.1849 |
| 2.05 | 12.0 | 1416 | 2.1092 |
| 2.0653 | 13.0 | 1534 | 2.2221 |
| 2.0261 | 14.0 | 1652 | 2.1572 |
| 2.0117 | 15.0 | 1770 | 2.1452 |
| 1.9845 | 16.0 | 1888 | 2.1433 |
| 1.9791 | 17.0 | 2006 | 2.1225 |
| 1.9979 | 18.0 | 2124 | 2.0777 |
| 1.9688 | 19.0 | 2242 | 2.1765 |
| 1.9873 | 20.0 | 2360 | 2.0099 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
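The card gives no usage snippet. A minimal fill-mask sketch (ours); since the base model is BERT-style, the mask token is `[MASK]`, and the example sentence is illustrative:
```python
from transformers import pipeline

# Sketch only: masked-token prediction with the fine-tuned checkpoint.
unmasker = pipeline("fill-mask", model="ariesutiono/scibert-lm-const-finetuned-20")
print(unmasker("The protein was expressed in [MASK] cells."))
```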
|
andrewzhang505/quad-swarm-rl-1
|
andrewzhang505
| 2022-07-13T00:02:06Z | 5 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"region:us"
] |
reinforcement-learning
| 2022-07-12T21:09:52Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
---
An **APPO** model trained on the **quadrotor_multi** environment.
This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
|
AntiSquid/Reinforce-pix-5
|
AntiSquid
| 2022-07-12T23:21:37Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-12T23:21:12Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-pix-5
results:
- metrics:
- type: mean_reward
value: 20.30 +/- 17.44
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
AntiSquid/Reinforce-model-666
|
AntiSquid
| 2022-07-12T21:52:02Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-12T21:51:51Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-model-666
results:
- metrics:
- type: mean_reward
value: 117.10 +/- 4.85
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
Shaier/medqa_fine_tuned_generic_bert
|
Shaier
| 2022-07-12T20:33:17Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"multiple-choice",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
multiple-choice
| 2022-07-12T19:49:52Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: medqa_fine_tuned_generic_bert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# medqa_fine_tuned_generic_bert
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4239
- Accuracy: 0.2869
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 1.3851 | 0.2594 |
| 1.3896 | 2.0 | 636 | 1.3805 | 0.2807 |
| 1.3896 | 3.0 | 954 | 1.3852 | 0.2948 |
| 1.3629 | 4.0 | 1272 | 1.3996 | 0.2980 |
| 1.3068 | 5.0 | 1590 | 1.4239 | 0.2869 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.3.2
- Tokenizers 0.11.0
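The card gives no inference example. Since multiple-choice usage is less obvious than other pipelines, here is a minimal sketch (ours); the question and answer choices are illustrative placeholders, not taken from the MedQA dataset:
```python
# Sketch only: scoring answer choices with the multiple-choice head.
# The question/choices below are illustrative, not from MedQA itself.
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

tokenizer = AutoTokenizer.from_pretrained("Shaier/medqa_fine_tuned_generic_bert")
model = AutoModelForMultipleChoice.from_pretrained("Shaier/medqa_fine_tuned_generic_bert")

question = "Which vitamin deficiency causes scurvy?"
choices = ["Vitamin A", "Vitamin B12", "Vitamin C", "Vitamin D"]

# Encode (question, choice) pairs, then reshape to (batch, num_choices, seq_len).
encoding = tokenizer([question] * len(choices), choices,
                     return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in encoding.items()}
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_choices)
print(choices[logits.argmax(-1).item()])
```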
|