| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-09-01 06:29:04) | downloads (int64, 0 – 223M) | likes (int64, 0 – 11.7k) | library_name (string, 530 classes) | tags (list, 1 – 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-09-01 06:28:51) | card (string, 11 – 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
ashraq/dv-electra-small-news-classification | ashraq | 2021-11-03T22:31:07Z | 5 | 0 | transformers | ["transformers", "pytorch", "electra", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-02T23:29:05Z |
---
widget:
- text: 'ގޫގަލް ޕިކްސަލް 6 ގެ ކެމެރާ، އޭއައި ގެ ޖާދޫއިން ފުރިފައި'
---
# The [ELECTRA-small](https://huggingface.co/ashraq/dv-electra-small) fine-tuned for news classification in Dhivehi
|
facebook/ic_gan | facebook | 2021-11-03T20:47:05Z | 0 | 5 | null | ["image-generation", "conditional-image-generation", "generative-model", "arxiv:2109.05070", "license:cc-by-nc-4.0", "region:us"] | null | 2022-03-02T23:29:05Z |
---
tags:
- image-generation
- conditional-image-generation
- generative-model
license: cc-by-nc-4.0
library: pytorch
---
# <p align="center"> IC-GAN: Instance-Conditioned GAN </p>
Official Pytorch code of [Instance-Conditioned GAN](https://arxiv.org/abs/2109.05070) by Arantxa Casanova, Marlène Careil, Jakob Verbeek, Michał Drożdżal, Adriana Romero-Soriano.

## Generate images with IC-GAN in a Colab Notebook
We provide a [Google Colab notebook](https://colab.research.google.com/github/facebookresearch/ic_gan/blob/main/inference/icgan_colab.ipynb) to generate images with IC-GAN and its class-conditional counterpart. We also invite users to check out the [demo on Replicate](https://replicate.ai/arantxacasanova/ic_gan), courtesy of [Replicate](https://replicate.ai/home).
The figure below depicts two instances, unseen during training and downloaded from [Creative Commons search](https://search.creativecommons.org), and the generated images with IC-GAN and class-conditional IC-GAN when conditioning on the class "castle":

Additionally, and inspired by [this Colab](https://colab.research.google.com/github/eyaler/clip_biggan/blob/main/ClipBigGAN.ipynb), we provide functionality in the same Colab notebook to guide generations with text captions, using the [CLIP model](https://github.com/openai/CLIP).
As an example, the following Figure shows three instance conditionings and a text caption (top), followed by the resulting generated images with IC-GAN (bottom), when optimizing the noise vector following CLIP's gradient for 100 iterations.

*Credit for the three instance conditionings, from left to right, that were modified with a resize and central crop:* [1: "Landscape in Bavaria" by shining.darkness, licensed under CC BY 2.0](https://search.creativecommons.org/photos/92ef279c-4469-49a5-aa4b-48ad746f2dc4), [2: "Fantasy Landscape - slolsss" by Douglas Tofoli is marked with CC PDM 1.0](https://search.creativecommons.org/photos/13646adc-f1df-437a-a0dd-8223452ee46c), [3: "How to Draw Landscapes Simply" by Kuwagata Keisai is marked with CC0 1.0](https://search.creativecommons.org/photos/2ab9c3b7-de99-4536-81ed-604ee988bd5f)
## Requirements
* Python 3.8
* Cuda v10.2 / Cudnn v7.6.5
* gcc v7.3.0
* Pytorch 1.8.0
* A conda environment containing the aforementioned version of Pytorch and the other required packages can be created from `environment.yml` with the command `conda env create -f environment.yml`.
* Faiss: follow the instructions in the [original repository](https://github.com/facebookresearch/faiss).
## Overview
This repository consists of four main folders:
* `data_utils`: A common folder to obtain and format the data needed to train and test IC-GAN, agnostic of the specific backbone.
* `inference`: Scripts to test the models both qualitatively and quantitatively.
* `BigGAN_PyTorch`: Provides the training, evaluation and sampling scripts for IC-GAN with a BigGAN backbone. The code base comes from the [Pytorch BigGAN repository](https://github.com/ajbrock/BigGAN-PyTorch), made available under the MIT License. It has been modified to [add additional utilities](#biggan-changelog) and to enable IC-GAN training on top of it.
* `stylegan2_ada_pytorch`: Provides the training, evaluation and sampling scripts for IC-GAN with a StyleGAN2 backbone. The code base comes from [StyleGAN2 Pytorch](https://github.com/NVlabs/stylegan2-ada-pytorch), made available under the [Nvidia Source Code License](https://nvlabs.github.io/stylegan2-ada-pytorch/license.html). It has been modified to [add additional utilities](#stylegan-changelog) and to enable IC-GAN training on top of it.
## (Python script) Generate images with IC-GAN
Alternatively, we can <b>generate images with IC-GAN models</b> directly from a Python script by following these steps:
1) Download the desired pretrained models (links below) and the [pre-computed 1000 instance features from ImageNet](https://dl.fbaipublicfiles.com/ic_gan/stored_instances.tar.gz) and extract them into a folder `pretrained_models_path`.
| model | backbone | class-conditional? | training dataset | resolution | url |
|-------------------|-------------------|-------------------|---------------------|--------------------|--------------------|
| IC-GAN | BigGAN | No | ImageNet | 256x256 | [model](https://dl.fbaipublicfiles.com/ic_gan/icgan_biggan_imagenet_res256.tar.gz) |
| IC-GAN (half capacity) | BigGAN | No | ImageNet | 256x256 | [model](https://dl.fbaipublicfiles.com/ic_gan/icgan_biggan_imagenet_res256_halfcap.tar.gz) |
| IC-GAN | BigGAN | No | ImageNet | 128x128 | [model](https://dl.fbaipublicfiles.com/ic_gan/icgan_biggan_imagenet_res128.tar.gz) |
| IC-GAN | BigGAN | No | ImageNet | 64x64 | [model](https://dl.fbaipublicfiles.com/ic_gan/icgan_biggan_imagenet_res64.tar.gz) |
| IC-GAN | BigGAN | Yes | ImageNet | 256x256 | [model](https://dl.fbaipublicfiles.com/ic_gan/cc_icgan_biggan_imagenet_res256.tar.gz) |
| IC-GAN (half capacity) | BigGAN | Yes | ImageNet | 256x256 | [model](https://dl.fbaipublicfiles.com/ic_gan/cc_icgan_biggan_imagenet_res256_halfcap.tar.gz) |
| IC-GAN | BigGAN | Yes | ImageNet | 128x128 | [model](https://dl.fbaipublicfiles.com/ic_gan/cc_icgan_biggan_imagenet_res128.tar.gz) |
| IC-GAN | BigGAN | Yes | ImageNet | 64x64 | [model](https://dl.fbaipublicfiles.com/ic_gan/cc_icgan_biggan_imagenet_res64.tar.gz) |
| IC-GAN | BigGAN | Yes | ImageNet-LT | 256x256 | [model](https://dl.fbaipublicfiles.com/ic_gan/cc_icgan_biggan_imagenetlt_res256.tar.gz) |
| IC-GAN | BigGAN | Yes | ImageNet-LT | 128x128 | [model](https://dl.fbaipublicfiles.com/ic_gan/cc_icgan_biggan_imagenetlt_res128.tar.gz) |
| IC-GAN | BigGAN | Yes | ImageNet-LT | 64x64 | [model](https://dl.fbaipublicfiles.com/ic_gan/cc_icgan_biggan_imagenetlt_res64.tar.gz) |
| IC-GAN | BigGAN | No | COCO-Stuff | 256x256 | [model](https://dl.fbaipublicfiles.com/ic_gan/icgan_biggan_coco_res256.tar.gz) |
| IC-GAN | BigGAN | No | COCO-Stuff | 128x128 | [model](https://dl.fbaipublicfiles.com/ic_gan/icgan_biggan_coco_res128.tar.gz) |
| IC-GAN | StyleGAN2 | No | COCO-Stuff | 256x256 | [model](https://dl.fbaipublicfiles.com/ic_gan/icgan_stylegan2_coco_res256.tar.gz) |
| IC-GAN | StyleGAN2 | No | COCO-Stuff | 128x128 | [model](https://dl.fbaipublicfiles.com/ic_gan/icgan_stylegan2_coco_res128.tar.gz) |
2) Execute:
```
python inference/generate_images.py --root_path [pretrained_models_path] --model [model] --model_backbone [backbone] --resolution [res]
```
* `model` can be chosen from `["icgan", "cc_icgan"]` to use the IC-GAN or the class-conditional IC-GAN model respectively.
* `backbone` can be chosen from `["biggan", "stylegan2"]`.
* `res` indicates the resolution at which the model has been trained. For ImageNet, choose one in `[64, 128, 256]`, and for COCO-Stuff, one in `[128, 256]`.
This script results in a .PNG file where several generated images are shown, given an instance feature (each row), and a sampled noise vector (each grid position).
<b>Additional and optional parameters</b>:
* `index`: (None by default) is an integer from 0 to 999 that chooses a specific instance feature vector out of the 1000 instances that have been selected with k-means on the ImageNet dataset and stored in `pretrained_models_path/stored_instances`.
* `swap_target`: (None by default) is an integer from 0 to 999 indicating an ImageNet class label. This label will be used to condition the class-conditional IC-GAN, regardless of which instance features are being used.
* `which_dataset`: (ImageNet by default) can be chosen from `["imagenet", "coco"]` to indicate which dataset (training split) to sample the instances from.
* `trained_dataset`: (ImageNet by default) can be chosen from `["imagenet", "coco"]` to indicate the dataset on which the IC-GAN model has been trained.
* `num_imgs_gen`: (5 by default) changes the number of noise vectors to sample per conditioning. Increasing this number results in a bigger .PNG file to save and load.
* `num_conditionings_gen`: (5 by default) changes the number of conditionings to sample. Increasing this number results in a bigger .PNG file to save and load.
* `z_var`: (1.0 by default) controls the truncation factor for the generation.
* Optionally, the script can be run with the following additional options `--visualize_instance_images --dataset_path [dataset_path]` to visualize the ground-truth images corresponding to the conditioning instance features, given a path to the dataset's ground-truth images `dataset_path`. Ground-truth instances will be plotted as the leftmost image for each row.
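For example, assuming the pretrained checkpoints and the stored instance features have been extracted into a folder named `pretrained_models` (an illustrative path), one possible invocation combining the options above is:
```
python inference/generate_images.py --root_path pretrained_models --model icgan --model_backbone biggan --resolution 256 --index 0 --num_imgs_gen 5 --num_conditionings_gen 5 --z_var 1.0
```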
## Data preparation
<div id="data-preparation">
<details>
<summary>ImageNet</summary>
<br>
<ol>
<li>Download dataset from <a href="https://image-net.org/download.php"> here </a>.
</li>
<li>Download <a href="https://github.com/facebookresearch/swav"> SwAV </a> feature extractor weights from <a href="https://dl.fbaipublicfiles.com/deepcluster/swav_800ep_pretrain.pth.tar"> here </a>. </li>
<li> Replace the paths in data_utils/prepare_data.sh: <code>out_path</code> by the path where hdf5 files will be stored, <code>path_imnet</code> by the path where ImageNet dataset is downloaded, and <code>path_swav</code> by the path where SwAV weights are stored. </li>
<li> Execute <code>./data_utils/prepare_data.sh imagenet [resolution]</code>, where <code>[resolution]</code> can be an integer in {64,128,256}. This script will create several hdf5 files:
<ul> <li> <code>ILSVRC[resolution]_xy.hdf5</code> and <code>ILSVRC[resolution]_val_xy.hdf5</code>, where images and labels are stored for the training and validation set respectively. </li>
<li> <code>ILSVRC[resolution]_feats_[feature_extractor]_resnet50.hdf5</code> that contains the instance features for each image. </li>
<li> <code>ILSVRC[resolution]_feats_[feature_extractor]_resnet50_nn_k[k_nn].hdf5</code> that contains the list of [k_nn] neighbors for each of the instance features. </li> </ul> </li>
</ol> </br>
</details>
<details>
<summary>ImageNet-LT</summary>
<br>
<ol>
<li>Download ImageNet dataset from <a href="https://image-net.org/download.php"> here </a>. Following <a href="https://github.com/zhmiao/OpenLongTailRecognition-OLTR"> ImageNet-LT </a>, the file <code>ImageNet_LT_train.txt</code> can be downloaded from <a href="https://drive.google.com/drive/u/1/folders/1j7Nkfe6ZhzKFXePHdsseeeGI877Xu1yf" > this link </a> and later stored in the folder <code>./BigGAN_PyTorch/imagenet_lt</code>.
</li>
<li>Download the pre-trained weights of the ResNet on ImageNet-LT from <a href="https://dl.fbaipublicfiles.com/classifier-balancing/ImageNet_LT/models/resnet50_uniform_e90.pth"> this link</a>, provided by the <a href="https://github.com/facebookresearch/classifier-balancing"> classifier-balancing repository </a>. </li>
<li> Replace the paths in data_utils/prepare_data.sh: <code>out_path</code> by the path where hdf5 files will be stored, <code>path_imnet</code> by the path where ImageNet dataset is downloaded, and <code>path_classifier_lt</code> by the path where the pre-trained ResNet50 weights are stored. </li>
<li> Execute <code>./data_utils/prepare_data.sh imagenet_lt [resolution]</code>, where <code>[resolution]</code> can be an integer in {64,128,256}. This script will create several hdf5 files:
<ul> <li> <code>ILSVRC[resolution]longtail_xy.hdf5</code>, where images and labels are stored for the training and validation set respectively. </li>
<li> <code>ILSVRC[resolution]longtail_feats_[feature_extractor]_resnet50.hdf5</code> that contains the instance features for each image. </li>
<li> <code>ILSVRC[resolution]longtail_feats_[feature_extractor]_resnet50_nn_k[k_nn].hdf5</code> that contains the list of [k_nn] neighbors for each of the instance features. </li> </ul> </li>
</ol> </br>
</details>
<details>
<summary>COCO-Stuff</summary>
<br>
<ol>
<li>Download the dataset following the <a href="https://github.com/WillSuen/LostGANs/blob/master/INSTALL.md"> LostGANs' repository instructions </a>.
</li>
<li>Download <a href="https://github.com/facebookresearch/swav"> SwAV </a> feature extractor weights from <a href="https://dl.fbaipublicfiles.com/deepcluster/swav_800ep_pretrain.pth.tar"> here </a>. </li>
<li> Replace the paths in data_utils/prepare_data.sh: <code>out_path</code> by the path where hdf5 files will be stored, <code>path_imnet</code> by the path where ImageNet dataset is downloaded, and <code>path_swav</code> by the path where SwAV weights are stored. </li>
<li> Execute <code>./data_utils/prepare_data.sh coco [resolution]</code>, where <code>[resolution]</code> can be an integer in {128,256}. This script will create several hdf5 files:
<ul> <li> <code>COCO[resolution]_xy.hdf5</code> and <code>COCO[resolution]_val_test_xy.hdf5</code>, where images and labels are stored for the training and evaluation set respectively. </li>
<li> <code>COCO[resolution]_feats_[feature_extractor]_resnet50.hdf5</code> that contains the instance features for each image. </li>
<li> <code>COCO[resolution]_feats_[feature_extractor]_resnet50_nn_k[k_nn].hdf5</code> that contains the list of [k_nn] neighbors for each of the instance features. </li> </ul> </li>
</ol> </br>
</details>
<details>
<summary>Other datasets</summary>
<br>
<ol>
<li>Download the corresponding dataset and store in a folder <code>dataset_path</code>.
</li>
<li>Download <a href="https://github.com/facebookresearch/swav"> SwAV </a> feature extractor weights from <a href="https://dl.fbaipublicfiles.com/deepcluster/swav_800ep_pretrain.pth.tar"> here </a>. </li>
<li> Replace the paths in data_utils/prepare_data.sh: <code>out_path</code> by the path where hdf5 files will be stored and <code>path_swav</code> by the path where SwAV weights are stored. </li>
<li> Execute <code>./data_utils/prepare_data.sh [dataset_name] [resolution] [dataset_path]</code>, where <code>[dataset_name]</code> will be the dataset name, <code>[resolution]</code> can be an integer, for example 128 or 256, and <code>dataset_path</code> contains the dataset images. This script will create several hdf5 files:
<ul> <li> <code>[dataset_name][resolution]_xy.hdf5</code>, where images and labels are stored for the training set. </li>
<li> <code>[dataset_name][resolution]_feats_[feature_extractor]_resnet50.hdf5</code> that contains the instance features for each image. </li>
<li> <code>[dataset_name][resolution]_feats_[feature_extractor]_resnet50_nn_k[k_nn].hdf5</code> that contains the list of <code>k_nn</code> neighbors for each of the instance features. </li> </ul> </li>
</ol> </br>
</details>
<details>
<summary>How to subsample an instance feature dataset with k-means</summary>
<br>
To downsample the instance feature vector dataset, after we have prepared the data, we can use the k-means algorithm:
<code>
python data_utils/store_kmeans_indexes.py --resolution [resolution] --which_dataset [dataset_name] --data_root [data_path]
</code>
<ul> <li> Adding <code>--gpu</code> allows the faiss library to compute k-means leveraging GPUs, resulting in faster execution. </li>
<li> Adding the parameter <code>--feature_extractor [feature_extractor]</code> chooses which feature extractor to use, with <code>feature_extractor</code> in <code>['selfsupervised', 'classification']</code>, depending on whether we use SwAV as feature extractor or the ResNet pretrained on the ImageNet classification task, respectively. </li>
<li> The number of k-means clusters can be set with <code>--kmeans_subsampled [centers]</code>, where <code>centers</code> is an integer. </li> </ul>
</br>
</details>
</div>
## How to train the models
#### BigGAN or StyleGAN2 backbone
Training parameters are stored in JSON files in `[backbone_folder]/config_files/[dataset]/*.json`, where `[backbone_folder]` is either `BigGAN_PyTorch` or `stylegan2_ada_pytorch` and `[dataset]` can be ImageNet, ImageNet-LT or COCO_Stuff.
```
cd BigGAN_PyTorch
python run.py --json_config config_files/<dataset>/<selected_config>.json --data_root [data_root] --base_root [base_root]
```
or
```
cd stylegan2_ada_pytorch
python run.py --json_config config_files/<dataset>/<selected_config>.json --data_root [data_root] --base_root [base_root]
```
where:
* `data_root` path where the data has been prepared and stored, following the previous section (<a href="./README.md#data-preparation">Data preparation</a>).
* `base_root` path where to store the model weights and logs.
Note that one can create other JSON files to modify the training parameters.
#### Other backbones
To run IC-GAN with other backbones, we provide some guiding steps (see the illustrative sketch after this list):
* Place the new backbone code in a new folder under `ic_gan` (`ic_gan/new_backbone`).
* Modify the relevant piece of code in the GAN architecture to allow instance features as conditionings (for both generator and discriminator).
* Create a `trainer.py` file with the training loop to train an IC-GAN with the new backbone. The `data_utils` folder provides the tools to prepare the dataset, load the data and conditioning sampling to train an IC-GAN. The IC-GAN with BigGAN backbone [`trainer.py`](BigGAN_PyTorch/trainer.py) file can be used as an inspiration.
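As a purely illustrative sketch of the kind of architectural change involved (this is not code from this repository), a generator can be conditioned on instance features by projecting the feature vector and feeding it together with the noise vector; all module and dimension names below are invented for the example:
```python
import torch
import torch.nn as nn

class InstanceConditionedGenerator(nn.Module):
    """Toy sketch: condition a generator on an instance feature vector h."""
    def __init__(self, z_dim=128, feat_dim=2048, cond_dim=512, out_pixels=64 * 64 * 3):
        super().__init__()
        self.project_h = nn.Linear(feat_dim, cond_dim)  # embed the instance feature
        self.net = nn.Sequential(
            nn.Linear(z_dim + cond_dim, 1024),
            nn.ReLU(),
            nn.Linear(1024, out_pixels),
            nn.Tanh(),
        )

    def forward(self, z, h):
        cond = self.project_h(h)                      # instance conditioning
        return self.net(torch.cat([z, cond], dim=1))  # image generated from (z, h)

# z ~ N(0, I); h would come from a feature extractor (e.g., SwAV) applied to a conditioning instance
z = torch.randn(4, 128)
h = torch.randn(4, 2048)
fake = InstanceConditionedGenerator()(z, h)           # shape: (4, 64*64*3)
```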
## How to test the models
<b>To obtain the FID and IS metrics on ImageNet and ImageNet-LT</b>:
1) Execute:
```
python inference/test.py --json_config [BigGAN-PyTorch or stylegan-ada-pytorch]/config_files/<dataset>/<selected_config>.json --num_inception_images [num_imgs] --sample_num_npz [num_imgs] --eval_reference_set [ref_set] --sample_npz --base_root [base_root] --data_root [data_root] --kmeans_subsampled [kmeans_centers] --model_backbone [backbone]
```
To obtain the TensorFlow IS and FID metrics, use an environment with Python < 3.7 and TensorFlow 1.15. Then:
2) Obtain Inception Scores and pre-computed FID moments:
```
python ../data_utils/inception_tf13.py --experiment_name [exp_name] --experiment_root [base_root] --kmeans_subsampled [kmeans_centers]
```
For stratified FIDs in the ImageNet-LT dataset, the following parameters can be added: `--which_dataset 'imagenet_lt' --split 'val' --strat_name [stratified_split]`, where `stratified_split` can be in `[few, low, many]`.
3) (Only needed once) Pre-compute reference moments with tensorflow code:
```
python ../data_utils/inception_tf13.py --use_ground_truth_data --data_root [data_root] --split [ref_set] --resolution [res] --which_dataset [dataset]
```
4) (Using this [repository](https://github.com/bioinf-jku/TTUR)) FID can be computed using the pre-computed statistics obtained in 2) and the pre-computed ground-truth statistics obtained in 3). For example, to compute the FID with the ImageNet validation set as reference:
```python TTUR/fid.py [base_root]/[exp_name]/TF_pool_.npz [data_root]/imagenet_val_res[res]_tf_inception_moments_ground_truth.npz ```
<b>To obtain the FID metric on COCO-Stuff</b>:
1) Obtain ground-truth jpeg images: ```python data_utils/store_coco_jpeg_images.py --resolution [res] --split [ref_set] --data_root [data_root] --out_path [gt_coco_images] --filter_hd [filter_hd] ```
2) Store generated images as jpeg images: ```python sample.py --json_config ../[BigGAN-PyTorch or stylegan-ada-pytorch]/config_files/<dataset>/<selected_config>.json --data_root [data_root] --base_root [base_root] --sample_num_npz [num_imgs] --which_dataset 'coco' --eval_instance_set [ref_set] --eval_reference_set [ref_set] --filter_hd [filter_hd] --model_backbone [backbone] ```
3) Using this [repository](https://github.com/bioinf-jku/TTUR), compute FID on the two folders of ground-truth and generated images.
where:
* `dataset`: option to select the dataset in `['imagenet', 'imagenet_lt', 'coco']`.
* `exp_name`: name of the experiment folder.
* `data_root`: path where the data has been prepared and stored, following the previous section ["Data preparation"](#data-preparation).
* `base_root`: path where to find the model (for example, where the pretrained models have been downloaded).
* `num_imgs`: needs to be set to 50000 for ImageNet and ImageNet-LT (with validation set as reference) and set to 11500 for ImageNet-LT (with training set as reference). For COCO-Stuff, set to 75777, 2050, 675, 1375 if using the training, evaluation, evaluation seen or evaluation unseen set as reference.
* `ref_set`: set to `'val'` for ImageNet, ImageNet-LT (and COCO) to obtain metrics with the validation (evaluation) set as reference, or set to `'train'` for ImageNet-LT or COCO to obtain metrics with the training set as reference.
* `kmeans_centers`: set to 1000 for ImageNet and to -1 for ImageNet-LT.
* `backbone`: model backbone architecture in `['biggan','stylegan2']`.
* `res`: integer indicating the resolution of the images (64,128,256).
* `gt_coco_images`: folder to store the ground-truth JPEG images of that specific split.
* `filter_hd`: only valid for `ref_set=val`. If -1, use the entire evaluation set; if 0, use only conditionings and their ground-truth images with seen class combinations during training (eval seen); if 1, use only conditionings and their ground-truth images with unseen class combinations during training (eval unseen).
## Utilities for GAN backbones
We modify both the BigGAN and StyleGAN2 base repositories and provide extra utilities to facilitate training.
### BigGAN change log
The following changes were made:
* BigGAN architecture:
* In `train_fns.py`: option to either have the optimizers inside the generator and discriminator class, or directly in the `G_D` wrapper module. Additionally, added an option to augment both generated and real images with augmentations from [DiffAugment](https://github.com/mit-han-lab/data-efficient-gans).
* In `BigGAN.py`: added a function `get_condition_embeddings` to handle the conditioning separately.
* Small modifications to `layers.py` to adapt the batchnorm function calls to the pytorch 1.8 version.
* Training utilities:
* Added `trainer.py` file (replacing train.py):
* Training now allows the usage of DDP for faster single-node and multi-node training.
* Training is performed by epochs instead of by iterations.
* Option to stop the training by using early stopping or when experiments diverge.
* In `utils.py`:
* Replaced `MultiEpochSampler` with `CheckpointedSampler` to allow experiments to be resumed when using epochs, and fixed a bug where `MultiEpochSampler` would take a long time to fetch data permutations as the number of epochs increased.
* ImageNet-LT: Added option to use different class distributions when sampling a class label for the generator.
* ImageNet-LT: Added class balancing (uniform and temperature annealed).
* Added data augmentations from [DiffAugment](https://github.com/mit-han-lab/data-efficient-gans).
* Testing utilities:
* In `calculate_inception_moments.py`: added option to obtain moments for ImageNet-LT dataset, as well as stratified moments for many, medium and few-shot classes (stratified FID computation).
* In `inception_utils.py`: added option to compute [Precision, Recall, Density, Coverage](https://github.com/clovaai/generative-evaluation-prdc) and stratified FID.
* Data utilities:
* In `datasets.py`, added option to load ImageNet-LT dataset.
* Added ImageNet-LT.txt files with image indexes for training and validation split.
* In `utils.py`:
* Separate functions to obtain the data from hdf5 files (`get_dataset_hdf5`) or from directory (`get_dataset_images`), as well as a function to obtain only the data loader (`get_dataloader`).
* Added the function `sample_conditionings` to handle possible different conditionings to train G with.
* Experiment utilities:
* Added JSON files to launch experiments with the proposed hyper-parameter configuration.
* Script to launch experiments with either the [submitit tool](https://github.com/facebookincubator/submitit) or locally in the same machine (run.py).
### StyleGAN2 change log
<div id="stylegan-changelog">
<ul>
<li> Multi-node DistributedDataParallel training. </li>
<li> Added early stopping based on the training FID metric. </li>
<li> Automatic checkpointing when jobs are automatically rescheduled on a cluster. </li>
<li> Option to load dataset from hdf5 file. </li>
<li> Replaced the Click Python package with an `ArgumentParser`. </li>
<li> Only saving best and last model weights. </li>
</ul>
</div>
## Acknowledgements
We would like to thank the authors of the [Pytorch BigGAN repository](https://github.com/ajbrock/BigGAN-PyTorch) and [StyleGAN2 Pytorch](https://github.com/NVlabs/stylegan2-ada-pytorch), as our model requires their repositories to train IC-GAN with a BigGAN or StyleGAN2 backbone, respectively.
Moreover, we would like to further thank the authors of [generative-evaluation-prdc](https://github.com/clovaai/generative-evaluation-prdc), [data-efficient-gans](https://github.com/mit-han-lab/data-efficient-gans), [faiss](https://github.com/facebookresearch/faiss) and [sg2im](https://github.com/google/sg2im) as some components were borrowed and modified from their code bases. Finally, we thank the author of [WanderCLIP](https://colab.research.google.com/github/eyaler/clip_biggan/blob/main/WanderCLIP.ipynb) as well as the following repositories, that we use in our Colab notebook: [pytorch-pretrained-BigGAN](https://github.com/huggingface/pytorch-pretrained-BigGAN) and [CLIP](https://github.com/openai/CLIP).
## License
The majority of IC-GAN is licensed under CC-BY-NC, however portions of the project are available under separate license terms: BigGAN and [PRDC](https://github.com/facebookresearch/ic_gan/blob/main/data_utils/compute_pdrc.py) are licensed under the MIT license; [COCO-Stuff loader](https://github.com/facebookresearch/ic_gan/blob/main/data_utils/cocostuff_dataset.py) is licensed under Apache License 2.0; [DiffAugment](https://github.com/facebookresearch/ic_gan/blob/main/BigGAN_PyTorch/diffaugment_utils.py) is licensed under BSD 2-Clause Simplified license; StyleGAN2 is licensed under a NVIDIA license, available here: https://github.com/NVlabs/stylegan2-ada-pytorch/blob/main/LICENSE.txt. In the Colab notebook, [CLIP](https://github.com/openai/CLIP) and [pytorch-pretrained-BigGAN](https://github.com/huggingface/pytorch-pretrained-BigGAN) code is used, both licensed under the MIT license.
## Disclaimers
THE DIFFAUGMENT SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
THE CLIP SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
THE PYTORCH-PRETRAINED-BIGGAN SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
## Cite the paper
If this repository, the paper or any of its content is useful for your research, please cite:
```
@inproceedings{casanova2021instanceconditioned,
title={Instance-Conditioned GAN},
author={Arantxa Casanova and Marlène Careil and Jakob Verbeek and Michal Drozdzal and Adriana Romero-Soriano},
booktitle={Advances in Neural Information Processing Systems (NeurIPS)},
year={2021}
}
```
|
whaleloops/phrase-bert | whaleloops | 2021-11-03T15:04:02Z | 7,172 | 20 | sentence-transformers | ["sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "arxiv:2109.06304", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us"] | sentence-similarity | 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# whaleloops/phrase-bert
This is the official repository for the EMNLP 2021 long paper [Phrase-BERT: Improved Phrase Embeddings from BERT with an Application to Corpus Exploration](https://arxiv.org/abs/2109.06304). We provide [code](https://github.com/sf-wa-326/phrase-bert-topic-model) for training and evaluating Phrase-BERT in addition to the datasets used in the paper.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Our model is tested on pytorch=1.9.0, transformers=4.8.1, sentence-transformers=2.1.0.
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
phrase_list = [ 'play an active role', 'participate actively', 'active lifestyle']
model = SentenceTransformer('whaleloops/phrase-bert')
phrase_embs = model.encode( phrase_list )
[p1, p2, p3] = phrase_embs
```
As in sentence-BERT, the default output is a list of numpy arrays:
````
for phrase, embedding in zip(phrase_list, phrase_embs):
print("Phrase:", phrase)
print("Embedding:", embedding)
print("")
````
An example of computing the dot product of phrase embeddings:
````
import numpy as np
print(f'The dot product between phrase 1 and 2 is: {np.dot(p1, p2)}')
print(f'The dot product between phrase 1 and 3 is: {np.dot(p1, p3)}')
print(f'The dot product between phrase 2 and 3 is: {np.dot(p2, p3)}')
````
An example of computing cosine similarity of phrase embeddings:
````
import torch
from torch import nn
cos_sim = nn.CosineSimilarity(dim=0)
print(f'The cosine similarity between phrase 1 and 2 is: {cos_sim( torch.tensor(p1), torch.tensor(p2))}')
print(f'The cosine similarity between phrase 1 and 3 is: {cos_sim( torch.tensor(p1), torch.tensor(p3))}')
print(f'The cosine similarity between phrase 2 and 3 is: {cos_sim( torch.tensor(p2), torch.tensor(p3))}')
````
The output should look like:
````
The dot product between phrase 1 and 2 is: 218.43600463867188
The dot product between phrase 1 and 3 is: 165.48483276367188
The dot product between phrase 2 and 3 is: 160.51708984375
The cosine similarity between phrase 1 and 2 is: 0.8142536282539368
The cosine similarity between phrase 1 and 3 is: 0.6130303144454956
The cosine similarity between phrase 2 and 3 is: 0.584893524646759
````
## Evaluation
Given the lack of a unified phrase embedding evaluation benchmark, we collect the following five phrase semantics evaluation tasks, which are described further in our paper:
* Turney [[Download](https://storage.googleapis.com/phrase-bert/turney/data.txt) ]
* BiRD [[Download](https://storage.googleapis.com/phrase-bert/bird/data.txt)]
* PPDB [[Download](https://storage.googleapis.com/phrase-bert/ppdb/examples.json)]
* PPDB-filtered [[Download](https://storage.googleapis.com/phrase-bert/ppdb_exact/examples.json)]
* PAWS-short [[Download Train-split](https://storage.googleapis.com/phrase-bert/paws_short/train_examples.json) ] [[Download Dev-split](https://storage.googleapis.com/phrase-bert/paws_short/dev_examples.json) ] [[Download Test-split](https://storage.googleapis.com/phrase-bert/paws_short/test_examples.json) ]
Update `config/model_path.py` with the model path according to your directories, and then:
* For evaluation on Turney, run `python eval_turney.py`
* For evaluation on BiRD, run `python eval_bird.py`
* For evaluation on PPDB / PPDB-filtered / PAWS-short, run `eval_ppdb_paws.py` with:
````
nohup python -u eval_ppdb_paws.py \
--full_run_mode \
--task <task-name> \
--data_dir <input-data-dir> \
--result_dir <result-storage-dr> \
>./output.txt 2>&1 &
````
## Train your own Phrase-BERT
If you would like to go beyond using the pre-trained Phrase-BERT model, you may train your own Phrase-BERT using data from the domain you are interested in. Please refer to
`phrase-bert/phrase_bert_finetune.py`
The datasets we used to fine-tune Phrase-BERT are here: [training data csv file](https://storage.googleapis.com/phrase-bert/phrase-bert-ft-data/pooled_context_para_triples_p%3D0.8_train.csv) and [validation data csv file](https://storage.googleapis.com/phrase-bert/phrase-bert-ft-data/pooled_context_para_triples_p%3D0.8_valid.csv).
To reproduce the trained Phrase-BERT, please run:
````
export INPUT_DATA_PATH=<directory-of-phrasebert-finetuning-data>
export TRAIN_DATA_FILE=<training-data-filename.csv>
export VALID_DATA_FILE=<validation-data-filename.csv>
export INPUT_MODEL_PATH=bert-base-nli-stsb-mean-tokens
export OUTPUT_MODEL_PATH=<directory-of-saved-model>

python -u phrase_bert_finetune.py \
    --input_data_path $INPUT_DATA_PATH \
    --train_data_file $TRAIN_DATA_FILE \
    --valid_data_file $VALID_DATA_FILE \
    --input_model_path $INPUT_MODEL_PATH \
    --output_model_path $OUTPUT_MODEL_PATH
````
## Citation:
Please cite us if you find this useful:
````
@inproceedings{phrasebertwang2021,
author={Shufan Wang and Laure Thompson and Mohit Iyyer},
Booktitle = {Empirical Methods in Natural Language Processing},
Year = "2021",
Title={Phrase-BERT: Improved Phrase Embeddings from BERT with an Application to Corpus Exploration}
}
````
|
KoichiYasuoka/roberta-small-japanese-aozora | KoichiYasuoka | 2021-11-03T14:44:50Z | 5 | 0 | transformers | ["transformers", "pytorch", "roberta", "fill-mask", "japanese", "masked-lm", "ja", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2022-03-02T23:29:04Z |
---
language:
- "ja"
tags:
- "japanese"
- "masked-lm"
license: "cc-by-sa-4.0"
pipeline_tag: "fill-mask"
mask_token: "[MASK]"
widget:
- text: "日本に着いたら[MASK]を訪ねなさい。"
---
# roberta-small-japanese-aozora
## Model Description
This is a RoBERTa model pre-trained on 青空文庫 texts with [Japanese-LUW-Tokenizer](https://github.com/KoichiYasuoka/Japanese-LUW-Tokenizer). You can fine-tune `roberta-small-japanese-aozora` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-small-japanese-luw-upos), dependency-parsing, and so on.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-small-japanese-aozora")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-small-japanese-aozora")
```
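As a further illustration, the checkpoint can also be queried through the standard `fill-mask` pipeline, here with the example sentence from the widget metadata above:
```py
from transformers import pipeline
unmasker = pipeline("fill-mask", model="KoichiYasuoka/roberta-small-japanese-aozora")
# Predict candidates for the masked token in the widget example
print(unmasker("日本に着いたら[MASK]を訪ねなさい。"))
```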
|
drab/Infrastructures | drab | 2021-11-03T14:30:24Z | 71 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | image-classification | 2022-03-02T23:29:05Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: Infrastructures
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9253731369972229
---
# Infrastructures
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Cooling tower

#### Transmission grid

#### Wind turbines

|
Ahmad/parsT5-base | Ahmad | 2021-11-03T13:47:07Z | 168 | 4 | transformers | ["transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text2text-generation | 2022-03-02T23:29:04Z |
A monolingual T5 model for Persian trained on the OSCAR 21.09 corpus (https://oscar-corpus.com/) with a self-supervised method. A 35 GB deduplicated version of the Persian data was used for pre-training the model.
It's similar to the English T5 model but just for Persian. You may need to fine-tune it on your specific task.
Example code:
```
from transformers import T5ForConditionalGeneration,AutoTokenizer
import torch
model_name = "Ahmad/parsT5-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
input_ids = tokenizer.encode('دانش آموزان به <extra_id_0> میروند و <extra_id_1> میخوانند.', return_tensors='pt')
with torch.no_grad():
hypotheses = model.generate(input_ids)
for h in hypotheses:
print(tokenizer.decode(h))
```
Steps: 725000
Accuracy: 0.66
Training More?
========
To train the model further please refer to its github repository at:
https://github.com/puraminy/parsT5
|
josmunpen/mt5-small-spanish-summarization | josmunpen | 2021-11-03T09:47:51Z | 150 | 2 | transformers | ["transformers", "pytorch", "mt5", "text2text-generation", "summarization", "spanish", "es", "dataset:larazonpublico", "dataset:es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | summarization | 2022-03-02T23:29:05Z |
---
language:
- es
thumbnail:
tags:
- summarization
- mt5
- spanish
license: apache-2.0
datasets:
- larazonpublico
- es
metrics:
- rouge
widget:
- text: "La Guardia Civil ha desarticulado un grupo organizado dedicado a copiar en los examenes teoricos para la obtencion del permiso de conducir. Para ello, empleaban receptores y camaras de alta tecnologia y operaban desde la misma sede del Centro de examenes de la Direccion General de Trafico (DGT) en Mostoles. Es lo que han llamado la Operacion pinga.
El grupo desarticulado ofrecia el servicio de transporte y tecnologia para copiar y poder aprobar. Por dicho servicio cobraban 1.000 euros. Los investigadores sorprendieron in fraganti a una mujer intentando copiar en el examen. Portaba una chaqueta con dispositivos electronicos ocultos, concretamente un telefono movil al que estaba conectada una camara que habia sido insertada en la parte frontal de la chaqueta para transmitir online el examen y que orientada al ordenador del Centro de Examenes en el que aparecen las preguntas, permitia visualizar las imagenes en otro ordenador alojado en el interior de un vehiculo estacionado en las inmediaciones del centro. En este vehiculo, se encontraban el resto del grupo desarticulado con varios ordenadores portatiles y tablets abiertos y conectados a paginas de test de la DGT para consultar las respuestas. Estos, comunicaban con la mujer que estaba en el aula haciendo el examen a traves de un diminuto receptor bluetooth que portaba en el interior de su oido.
Luis de Lama, portavoz de la Guardia Civil de Trafico destaca que los ciudadanos, eran de origen chino, y copiaban en el examen utilizando la tecnologia facilitada por una organizacion. Destaca que, ademas de parte del fraude que supone copiar en un examen muchos de estos ciudadanos desconocian el idioma, no hablan ni entienden el español lo que supone un grave riesgo para la seguridad vial por desconocer las señales y letreros que avisan en carretera de muchas incidencias.
"
---
# mt5-small-spanish-summarization
## Model description
This is a mt5-small model finetuned for generating headlines from the body of the news in Spanish.
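A minimal usage sketch, assuming the standard `summarization` pipeline (the card's pipeline tag) is used to generate a headline; the input string and generation settings are illustrative:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="josmunpen/mt5-small-spanish-summarization")
article = ("La Guardia Civil ha desarticulado un grupo organizado dedicado a copiar en los "
           "examenes teoricos para la obtencion del permiso de conducir...")
# Generates a headline-style summary for the body of a Spanish news article
print(summarizer(article, max_length=48)[0]["summary_text"])
```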
## Training data
The model was trained on 58,425 news articles extracted from the La Razón (31,477) and Público (26,948) newspapers. These articles belong to the following categories: "España", "Cultura", "Economía", "Igualdad" and "Política".
## Training procedure
It was trained with Google Colab's GPU Tesla P100-PCIE-16GB for 2 epochs.
### Hyperparameters
```
{evaluation_strategy = "epoch",
 learning_rate = 2e-4,
 per_device_train_batch_size = 6,
 per_device_eval_batch_size = 6,
 weight_decay = 0.01,
 save_total_limit = 3,
 num_train_epochs = 2,
 predict_with_generate = True,
 fp16 = False}
```
## Eval results
| metric | score |
| --- | ----- |
| rouge1 | 44.03 |
| rouge2 | 28.2900 |
| rougeL | 40.54 |
| rougeLsum | 40.5587 |
### BibTeX entry and citation info
```bibtex
@inproceedings{ mt5lrpjosmunpen,
year={2020},
}
```
|
Zichuu/spert | Zichuu | 2021-11-03T04:45:41Z | 68 | 2 | transformers | ["transformers", "pytorch", "bert", "endpoints_compatible", "region:us"] | null | 2022-03-02T23:29:05Z |
# SpERT
SpERT is a span-based joint entity and relation extraction model ([SpERT: Span-based Entity and Relation Transformer](https://github.com/lavis-nlp/spert)). This checkpoint was trained on the CoNLL04 dataset.
## Use
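A minimal loading sketch, assuming the hosted weights are a standard BERT checkpoint (see the tags above); the SpERT-specific span and relation classification heads live in the original repository linked above, which is required for actual relation extraction:
```python
from transformers import AutoTokenizer, AutoModel

# Loads the underlying BERT encoder only; SpERT's span/relation heads
# are implemented in the lavis-nlp/spert codebase.
tokenizer = AutoTokenizer.from_pretrained("Zichuu/spert")
model = AutoModel.from_pretrained("Zichuu/spert")
```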
## References
```
Markus Eberts, Adrian Ulges. Span-based Joint Entity and Relation Extraction with Transformer Pre-training. 24th European Conference on Artificial Intelligence, 2020.
```
|
chisadi/nice-distilbert-v2 | chisadi | 2021-11-02T19:21:06Z | 3 | 0 | transformers | ["transformers", "pytorch", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-02T23:29:05Z |
### DistilBERT model fine-tuned on the task of classifying product descriptions into one of 45 broad [NICE classifications](https://www.wipo.int/classifications/nice/en/)
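A minimal sketch, assuming the standard `text-classification` pipeline applies to this checkpoint; the mapping from predicted labels to NICE classes is not documented here, and the example description is a hypothetical placeholder:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="chisadi/nice-distilbert-v2")
# Classify a product description into one of the 45 broad NICE classes
print(classifier("Handheld electric drill with a rechargeable battery"))
```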
|
huggingartists/marillion | huggingartists | 2021-11-02T18:03:06Z | 4 | 0 | transformers | ["transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/marillion", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-03-02T23:29:05Z |
---
language: en
datasets:
- huggingartists/marillion
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/669cf880ceff9d1b5d31537747c26378.495x495x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Marillion</div>
<a href="https://genius.com/artists/marillion">
<div style="text-align: center; font-size: 14px;">@marillion</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Marillion.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/marillion).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/marillion")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/bajnt52i/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Marillion's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/wi2lgudb) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/wi2lgudb/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/marillion')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/marillion")
model = AutoModelWithLMHead.from_pretrained("huggingartists/marillion")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
s-nlp/t5-paraphrase-paws-msrp-opinosis-paranmt | s-nlp | 2021-11-02T17:58:47Z | 46 | 0 | transformers | ["transformers", "pytorch", "t5", "text2text-generation", "arxiv:1711.05732", "arxiv:1911.00536", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text2text-generation | 2022-03-02T23:29:05Z |
This is a paraphraser based on [ceshine/t5-paraphrase-paws-msrp-opinosis](https://huggingface.co/ceshine/t5-paraphrase-paws-msrp-opinosis)
and additionally fine-tuned on [ParaNMT](https://arxiv.org/abs/1711.05732).
The model was trained for the paper [Text Detoxification using Large Pre-trained Neural Models](https://arxiv.org/abs/1911.00536).
An example of its use is given in https://github.com/skoltech-nlp/detox
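A minimal loading-and-generation sketch; the exact input format the paraphraser expects (for example, whether a task prefix is required) is not specified here, so consult the linked detox repository for the intended usage. The example sentence and generation settings are illustrative:
```python
from transformers import T5ForConditionalGeneration, AutoTokenizer

model_name = "s-nlp/t5-paraphrase-paws-msrp-opinosis-paranmt"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

# Paraphrase a single sentence with beam search
inputs = tokenizer("The weather today is absolutely terrible.", return_tensors="pt")
outputs = model.generate(**inputs, num_beams=5, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```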
|
jambo/marker-associations-binary-base | jambo | 2021-11-02T12:52:24Z | 6 | 0 | transformers | ["transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "dataset:marker-associations-binary-base", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- marker-associations-binary-base
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: marker-associations-binary-base
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: marker-associations-binary-base
type: marker-associations-binary-base
metrics:
- name: Precision
type: precision
value: 0.7981651376146789
- name: Recall
type: recall
value: 0.9560439560439561
- name: F1
type: f1
value: 0.87
- name: Accuracy
type: accuracy
value: 0.8884120171673819
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marker-associations-binary-base
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the marker-associations-binary-base dataset.
It achieves the following results on the evaluation set:
### Gene Results
- Precision = 0.808
- Recall = 0.940
- F1 = 0.869
- Accuracy = 0.862
- AUC = 0.944
### Chemical Results
- Precision = 0.774
- Recall = 1.0
- F1 = 0.873
- Accuracy = 0.926
- AUC = 0.964
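As an illustrative sketch, the checkpoint can presumably be queried with the standard `text-classification` pipeline, although the expected input format and the meaning of the binary labels are not documented in this card; the example sentence below is a hypothetical placeholder:
```python
from transformers import pipeline

# Input format and label semantics are undocumented in this card
classifier = pipeline("text-classification", model="jambo/marker-associations-binary-base")
print(classifier("BRCA1 mutations are associated with an increased risk of breast cancer."))
```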
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | Auc |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|:------:|
| No log | 1.0 | 88 | 0.3266 | 0.8191 | 0.8462 | 0.8324 | 0.8670 | 0.9313 |
| No log | 2.0 | 176 | 0.3335 | 0.7870 | 0.9341 | 0.8543 | 0.8755 | 0.9465 |
| No log | 3.0 | 264 | 0.4243 | 0.7982 | 0.9560 | 0.87 | 0.8884 | 0.9516 |
| No log | 4.0 | 352 | 0.5388 | 0.825 | 0.7253 | 0.7719 | 0.8326 | 0.9384 |
| No log | 5.0 | 440 | 0.7101 | 0.8537 | 0.7692 | 0.8092 | 0.8584 | 0.9416 |
| 0.1824 | 6.0 | 528 | 0.6175 | 0.8242 | 0.8242 | 0.8242 | 0.8627 | 0.9478 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Tokenizers 0.10.3
|
abhijithneilabraham/stsb_multi_mt_distilbert-base-uncased | abhijithneilabraham | 2021-11-02T12:23:53Z | 162 | 0 | sentence-transformers | ["sentence-transformers", "pytorch", "distilbert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us"] | sentence-similarity | 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# abhijithneilabraham/stsb_multi_mt_distilbert-base-uncased
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('abhijithneilabraham/stsb_multi_mt_distilbert-base-uncased')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('abhijithneilabraham/stsb_multi_mt_distilbert-base-uncased')
model = AutoModel.from_pretrained('abhijithneilabraham/stsb_multi_mt_distilbert-base-uncased')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=abhijithneilabraham/stsb_multi_mt_distilbert-base-uncased)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 360 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 25,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 900,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
huggingtweets/cummilkshake-miraiwillsaveus-technobaphomet | huggingtweets | 2021-11-02T02:39:40Z | 5 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://www.huggingtweets.com/cummilkshake-miraiwillsaveus-technobaphomet/1635820776478/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1445592317314748423/Y3vOt6Xq_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1374721840472526851/kzKWx1OS_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1448723012514041865/ydq1VOBm_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">jumb & isaac & jay z</div>
<div style="text-align: center; font-size: 14px;">@cummilkshake-miraiwillsaveus-technobaphomet</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from jumb & isaac & jay z.
| Data | jumb | isaac | jay z |
| --- | --- | --- | --- |
| Tweets downloaded | 3232 | 3153 | 3061 |
| Retweets | 736 | 362 | 83 |
| Short tweets | 594 | 977 | 1230 |
| Tweets kept | 1902 | 1814 | 1748 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3tmpkkja/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @cummilkshake-miraiwillsaveus-technobaphomet's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/39yato7e) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/39yato7e/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/cummilkshake-miraiwillsaveus-technobaphomet')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/bronzeswords
|
huggingtweets
| 2021-11-02T01:49:23Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://www.huggingtweets.com/bronzeswords/1635817760027/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1243738424118358017/AQdB0Ze0_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Conrad Golden</div>
<div style="text-align: center; font-size: 14px;">@bronzeswords</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Conrad Golden.
| Data | Conrad Golden |
| --- | --- |
| Tweets downloaded | 3190 |
| Retweets | 602 |
| Short tweets | 171 |
| Tweets kept | 2417 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/10m933b8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @bronzeswords's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1yj6hliq) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1yj6hliq/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/bronzeswords')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
mvonwyl/roberta-base-finetuned-squad2
|
mvonwyl
| 2021-11-01T17:51:41Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: roberta-base-finetuned-squad2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-squad2
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9325
## Model description
More information needed
## Intended uses & limitations
More information needed
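The checkpoint targets extractive question answering over English text in the SQuAD 2.0 style, which includes unanswerable questions. A minimal usage sketch; the question and context below are made up for illustration:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="mvonwyl/roberta-base-finetuned-squad2")

result = qa(
    question="What was the model fine-tuned on?",
    context="This checkpoint is a RoBERTa base model fine-tuned on the SQuAD 2.0 dataset.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```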
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.88 | 1.0 | 8160 | 0.8129 |
| 0.6643 | 2.0 | 16320 | 0.8567 |
| 0.5096 | 3.0 | 24480 | 0.9325 |
### Framework versions
- Transformers 4.12.2
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
tiennvcs/layoutlmv2-base-uncased-finetuned-infovqa
|
tiennvcs
| 2021-11-01T16:13:10Z | 12 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"layoutlmv2",
"document-question-answering",
"generated_from_trainer",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] |
document-question-answering
| 2022-03-02T23:29:05Z |
---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: layoutlmv2-base-uncased-finetuned-infovqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv2-base-uncased-finetuned-infovqa
This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0870
## Model description
More information needed
## Intended uses & limitations
More information needed
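Given the model name, the target task is presumably document/visual question answering on infographics (InfographicVQA). A rough usage sketch with the `document-question-answering` pipeline, assuming a recent `transformers` release; LayoutLMv2 additionally needs `detectron2` for its visual backbone and `pytesseract` for OCR, and the image path is a placeholder:
```python
from transformers import pipeline

doc_qa = pipeline(
    "document-question-answering",
    model="tiennvcs/layoutlmv2-base-uncased-finetuned-infovqa",
)

# Placeholder image; any rendered infographic or scanned page works.
print(doc_qa(image="infographic.png", question="What is the total population shown?"))
```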
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 250500
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.8677 | 0.16 | 500 | 3.2829 |
| 3.0395 | 0.33 | 1000 | 2.8431 |
| 2.561 | 0.49 | 1500 | 2.5633 |
| 2.41 | 0.65 | 2000 | 2.3548 |
| 2.247 | 0.82 | 2500 | 2.2983 |
| 2.1538 | 0.98 | 3000 | 2.2059 |
| 1.7 | 1.14 | 3500 | 2.2006 |
| 1.5705 | 1.31 | 4000 | 2.2736 |
| 1.604 | 1.47 | 4500 | 2.1415 |
| 1.5509 | 1.63 | 5000 | 2.0853 |
| 1.5053 | 1.79 | 5500 | 2.1389 |
| 1.4787 | 1.96 | 6000 | 2.0870 |
### Framework versions
- Transformers 4.12.2
- Pytorch 1.8.0+cu101
- Datasets 1.14.0
- Tokenizers 0.10.3
|
pooyaphoenix/distilbert-base-uncased-finetuned-cola
|
pooyaphoenix
| 2021-11-01T10:54:03Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5226700639354173
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7904
- Matthews Correlation: 0.5227
## Model description
More information needed
## Intended uses & limitations
More information needed
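CoLA is a grammatical-acceptability task, so this checkpoint judges whether an English sentence is linguistically acceptable. A minimal usage sketch; note that the LABEL_0 = unacceptable / LABEL_1 = acceptable mapping is the usual CoLA convention and is an assumption here, not something stated in this card:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="pooyaphoenix/distilbert-base-uncased-finetuned-cola",
)

print(classifier("The book was read by the student."))
print(classifier("The book was read the student by."))
# Outputs use LABEL_0 / LABEL_1; LABEL_1 = acceptable is assumed, not documented here.
```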
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.528 | 1.0 | 535 | 0.5180 | 0.4003 |
| 0.3508 | 2.0 | 1070 | 0.5120 | 0.5019 |
| 0.2409 | 3.0 | 1605 | 0.6374 | 0.5128 |
| 0.1806 | 4.0 | 2140 | 0.7904 | 0.5227 |
| 0.1311 | 5.0 | 2675 | 0.8824 | 0.5227 |
### Framework versions
- Transformers 4.12.2
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
NhatPham/wav2vec2-base-finetuned-ks
|
NhatPham
| 2021-11-01T04:32:59Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:superb",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- superb
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-ks
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-ks
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1258
- Accuracy: 0.9793
## Model description
More information needed
## Intended uses & limitations
More information needed
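The `superb` keyword-spotting task (based on Speech Commands) classifies short 16 kHz utterances into a small command vocabulary. A minimal usage sketch; the audio path is a placeholder:
```python
from transformers import pipeline

keyword_spotter = pipeline(
    "audio-classification",
    model="NhatPham/wav2vec2-base-finetuned-ks",
)

# Expects a short clip sampled at 16 kHz (placeholder path).
print(keyword_spotter("speech_command.wav", top_k=3))
```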
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1561 | 1.0 | 399 | 1.1127 | 0.6643 |
| 0.4803 | 2.0 | 798 | 0.3547 | 0.9687 |
| 0.2855 | 3.0 | 1197 | 0.1663 | 0.9763 |
| 0.1987 | 4.0 | 1596 | 0.1258 | 0.9793 |
| 0.2097 | 5.0 | 1995 | 0.1171 | 0.9791 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
huggingtweets/staticmeganito
|
huggingtweets
| 2021-11-01T01:13:35Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://www.huggingtweets.com/staticmeganito/1635729212511/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1453022424610525186/0AbfRVqP_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">megan ito</div>
<div style="text-align: center; font-size: 14px;">@staticmeganito</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from megan ito.
| Data | megan ito |
| --- | --- |
| Tweets downloaded | 3248 |
| Retweets | 137 |
| Short tweets | 416 |
| Tweets kept | 2695 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2w99u9jm/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @staticmeganito's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3ss7y2ip) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3ss7y2ip/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/staticmeganito')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/2wyatt2mason
|
huggingtweets
| 2021-10-31T23:45:40Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://www.huggingtweets.com/2wyatt2mason/1635723936956/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1441261735004966923/Slec8aEM_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">di!!! 🎮🕹️🎤</div>
<div style="text-align: center; font-size: 14px;">@2wyatt2mason</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from di!!! 🎮🕹️🎤.
| Data | di!!! 🎮🕹️🎤 |
| --- | --- |
| Tweets downloaded | 389 |
| Retweets | 11 |
| Short tweets | 49 |
| Tweets kept | 329 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/26ny09im/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @2wyatt2mason's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1rslzcw9) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1rslzcw9/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/2wyatt2mason')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/dril-kanyewest-ph4370n
|
huggingtweets
| 2021-10-31T21:42:34Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://www.huggingtweets.com/dril-kanyewest-ph4370n/1635716550756/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1404915829427212289/9npX2HXW_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/847818629840228354/VXyQHfn0_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1276461929934942210/cqNhNk6v_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">lexi & wint & ye</div>
<div style="text-align: center; font-size: 14px;">@dril-kanyewest-ph4370n</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from lexi & wint & ye.
| Data | lexi | wint | ye |
| --- | --- | --- | --- |
| Tweets downloaded | 2679 | 3226 | 1856 |
| Retweets | 1274 | 468 | 186 |
| Short tweets | 199 | 319 | 573 |
| Tweets kept | 1206 | 2439 | 1097 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3g14a01v/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dril-kanyewest-ph4370n's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1gh1q6ja) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1gh1q6ja/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dril-kanyewest-ph4370n')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Trixzy/rickai-v1
|
Trixzy
| 2021-10-31T20:17:36Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
tags:
- conversational
---
A Rick chatbot built with GPT-2, based on the character from the show Rick and Morty. A Discord bot is available now:
https://discord.com/oauth2/authorize?client_id=894569097818431519&permissions=1074113536&scope=bot
(v1 is no longer supported by RickBot)
|
huggingtweets/ph4370n
|
huggingtweets
| 2021-10-31T18:55:07Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://www.huggingtweets.com/ph4370n/1635706503727/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1404915829427212289/9npX2HXW_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">lexi</div>
<div style="text-align: center; font-size: 14px;">@ph4370n</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from lexi.
| Data | lexi |
| --- | --- |
| Tweets downloaded | 2674 |
| Retweets | 1269 |
| Short tweets | 199 |
| Tweets kept | 1206 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2oj3ctzo/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ph4370n's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/yjm8doqr) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/yjm8doqr/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ph4370n')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
ielab/TILDEv2-TILDE200-exp
|
ielab
| 2021-10-31T13:50:55Z | 26 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
TILDEv2 trained with passages expanded by TILDE (m=200).
|
ielab/unicoil-tilde200-msmarco-passage
|
ielab
| 2021-10-31T13:50:01Z | 20 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
uniCOIL trained with passages expanded by TILDE (m=200).
|
JazibEijaz/bert-base-uncased-finetuned-swag-e1-b16-l5e5
|
JazibEijaz
| 2021-10-30T15:50:27Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"multiple-choice",
"generated_from_trainer",
"dataset:swag",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
multiple-choice
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- swag
metrics:
- accuracy
model-index:
- name: bert-base-uncased-finetuned-swag-e1-b16-l5e5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-swag-e1-b16-l5e5
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the swag dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5202
- Accuracy: 0.7997
## Model description
More information needed
## Intended uses & limitations
More information needed
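SWAG is a multiple-choice sentence-completion task: given a context, the model scores several candidate endings. A minimal usage sketch; the context and endings are made up for illustration:
```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

model_id = "JazibEijaz/bert-base-uncased-finetuned-swag-e1-b16-l5e5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

context = "She put the kettle on the stove."
endings = [
    "She waited for the water to boil.",
    "She drove the kettle to work.",
    "She planted the kettle in the garden.",
    "She read the kettle a story.",
]

# Pair the context with every ending, then add a batch dimension: (1, num_choices, seq_len).
enc = tokenizer([context] * len(endings), endings, padding=True, return_tensors="pt")
enc = {k: v.unsqueeze(0) for k, v in enc.items()}

with torch.no_grad():
    logits = model(**enc).logits  # shape (1, num_choices)

print(endings[logits.argmax(dim=-1).item()])
```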
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.701 | 1.0 | 4597 | 0.5202 | 0.7997 |
### Framework versions
- Transformers 4.12.2
- Pytorch 1.9.1
- Datasets 1.12.1
- Tokenizers 0.10.3
|
huggingartists/linkin-park
|
huggingartists
| 2021-10-30T14:56:26Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/linkin-park",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
datasets:
- huggingartists/linkin-park
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/a865aac7693c39977b9b402dc364908e.1000x1000x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Linkin Park</div>
<a href="https://genius.com/artists/linkin-park">
<div style="text-align: center; font-size: 14px;">@linkin-park</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Linkin Park.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/linkin-park).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/linkin-park")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/3mtr0u4z/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Linkin Park's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/fxn4brd6) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/fxn4brd6/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/linkin-park')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/linkin-park")
model = AutoModelWithLMHead.from_pretrained("huggingartists/linkin-park")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
huggingtweets/phaggotthefrog
|
huggingtweets
| 2021-10-30T10:52:42Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://www.huggingtweets.com/phaggotthefrog/1635591158850/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1444194494430081025/FVUA149U_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Anti-Soap Frög 🐀</div>
<div style="text-align: center; font-size: 14px;">@phaggotthefrog</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Anti-Soap Frög 🐀.
| Data | Anti-Soap Frög 🐀 |
| --- | --- |
| Tweets downloaded | 3226 |
| Retweets | 629 |
| Short tweets | 738 |
| Tweets kept | 1859 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3el8bjuf/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @phaggotthefrog's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2qjb6app) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2qjb6app/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/phaggotthefrog')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
dexhrestha/Nepali-DistilBERT
|
dexhrestha
| 2021-10-30T08:31:53Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
A DistilBERT model trained on the OSCAR Nepali corpus from Hugging Face Datasets.
We pretrained the DistilBERT language model on the OSCAR Nepali corpus and then fine-tuned it for a downstream sentiment analysis task. The sentiment dataset was extracted from Twitter by filtering for Devanagari text and was labelled as positive, negative, and neutral. However, since neutral tweets outnumbered the positive and negative ones, we kept only the positive and negative tweets for ease of training.
LABEL_1 = negative
LABEL_0 = positive
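A minimal usage sketch showing the label mapping above in practice; the example sentence is a placeholder:
```python
from transformers import pipeline

sentiment = pipeline("text-classification", model="dexhrestha/Nepali-DistilBERT")

label_map = {"LABEL_0": "positive", "LABEL_1": "negative"}  # mapping stated above

text = "यो फिल्म राम्रो छ"  # placeholder Nepali sentence ("this film is good")
pred = sentiment(text)[0]
print(label_map[pred["label"]], pred["score"])
```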
|
terri1102/wav2vec2-base-timit-demo-colab
|
terri1102
| 2021-10-29T20:57:45Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4275
- Wer: 0.3380
## Model description
More information needed
## Intended uses & limitations
More information needed
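The checkpoint is a CTC-based English ASR model. A minimal transcription sketch; the audio path is a placeholder and the input must end up as 16 kHz mono:
```python
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "terri1102/wav2vec2-base-timit-demo-colab"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Placeholder path; downmix to mono and resample to 16 kHz.
waveform, sample_rate = torchaudio.load("speech.wav")
waveform = torchaudio.functional.resample(waveform.mean(dim=0), sample_rate, 16_000)

inputs = processor(waveform.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```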
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.707 | 4.0 | 500 | 2.1164 | 1.0081 |
| 0.9098 | 8.0 | 1000 | 0.4767 | 0.4694 |
| 0.304 | 12.0 | 1500 | 0.4063 | 0.4007 |
| 0.1754 | 16.0 | 2000 | 0.4179 | 0.3640 |
| 0.1355 | 20.0 | 2500 | 0.4223 | 0.3585 |
| 0.1166 | 24.0 | 3000 | 0.4286 | 0.3457 |
| 0.0835 | 28.0 | 3500 | 0.4275 | 0.3380 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
huggingtweets/cnn-elonmusk-kanyewest
|
huggingtweets
| 2021-10-29T15:21:45Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1276461929934942210/cqNhNk6v_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1442634650703237120/mXIcYtIs_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1278259160644227073/MfCyF7CG_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">ye & Elon Musk & CNN</div>
<div style="text-align: center; font-size: 14px;">@cnn-elonmusk-kanyewest</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from ye & Elon Musk & CNN.
| Data | ye | Elon Musk | CNN |
| --- | --- | --- | --- |
| Tweets downloaded | 1856 | 3250 | 3250 |
| Retweets | 186 | 186 | 104 |
| Short tweets | 573 | 853 | 18 |
| Tweets kept | 1097 | 2211 | 3128 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3ehxjxud/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @cnn-elonmusk-kanyewest's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1dcouz7e) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1dcouz7e/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/cnn-elonmusk-kanyewest')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Monsia/camembert-fr-covid-tweet-classification
|
Monsia
| 2021-10-29T15:17:47Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"camembert",
"text-classification",
"classification",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
language:
- fr
tags:
- classification
license: apache-2.0
metrics:
- accuracy
widget:
- text: "tchai on est morts. on va se faire vacciner et ils vont contrôler comme les marionnettes avec des fils. d'après les 'ont dit'..."
---
# camembert-fr-covid-tweet-classification
This model is a fine-tuned checkpoint of [Yanzhu/bertweetfr-base](https://huggingface.co/Yanzhu/bertweetfr-base), further fine-tuned for French COVID-19 tweet topic classification.
This model reaches an accuracy of 66.00% on the dev set.
In this dataset, given a tweet, the goal was to infer the underlying topic of the tweet by choosing from five topic classes:
- chiffres: the tweet talks about COVID-19 statistics.
- mesures: the tweet talks about measures taken by the government against COVID-19.
- opinions: the tweet talks about people's opinions, e.g. fake news.
- symptomes: the tweet talks about symptoms or variants of COVID-19.
- divers: any other topic.
# Pipelining the Model
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
tokenizer = AutoTokenizer.from_pretrained("Monsia/camembert-fr-covid-tweet-classification")
model = AutoModelForSequenceClassification.from_pretrained("Monsia/camembert-fr-covid-tweet-classification")
nlp_topic_classif = pipeline('text-classification', model=model, tokenizer=tokenizer)  # standard sequence-classification pipeline
nlp_topic_classif("tchai on est morts. on va se faire vacciner et ils vont contrôler comme les marionnettes avec des fils. d'après les '' ont dit ''...")
# Output: [{'label': 'opinions', 'score': 0.831}]
```
|
furyhawk/t5-small-finetuned-bbc
|
furyhawk
| 2021-10-29T11:01:51Z | 7 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-finetuned-bbc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-bbc
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3238
- Rouge1: 21.2266
- Rouge2: 16.0927
- Rougel: 19.6785
- Rougelsum: 19.8849
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
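The model produces short abstractive summaries of BBC-style news articles (note the generation length of about 19 tokens above). A minimal usage sketch; whether training used T5's `summarize:` prefix is not stated, so the summarization pipeline is used as-is:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="furyhawk/t5-small-finetuned-bbc")

article = (
    "The company reported record quarterly revenue on Tuesday, driven by strong demand "
    "for its cloud services, and said it expects growth to continue into next year."
)
print(summarizer(article, max_length=20, min_length=5, do_sample=False)[0]["summary_text"])
```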
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.4882 | 1.0 | 1001 | 0.3238 | 21.2266 | 16.0927 | 19.6785 | 19.8849 | 19.0 |
### Framework versions
- Transformers 4.12.0
- Pytorch 1.10.0
- Datasets 1.14.0
- Tokenizers 0.10.3
|
shiqing/opus-mt-en-zh-finetuned-en-to-zh
|
shiqing
| 2021-10-29T08:38:40Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: opus-mt-en-zh-finetuned-en-to-zh
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-zh-finetuned-en-to-zh
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-zh](https://huggingface.co/Helsinki-NLP/opus-mt-en-zh) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
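The checkpoint is a Marian NMT model for English-to-Chinese translation. A minimal usage sketch; if the repository does not ship tokenizer files, the tokenizer of the base Helsinki-NLP/opus-mt-en-zh checkpoint can be used instead:
```python
from transformers import MarianMTModel, MarianTokenizer

model_id = "shiqing/opus-mt-en-zh-finetuned-en-to-zh"
tokenizer = MarianTokenizer.from_pretrained(model_id)  # or "Helsinki-NLP/opus-mt-en-zh"
model = MarianMTModel.from_pretrained(model_id)

batch = tokenizer(["Machine translation is fun."], return_tensors="pt", padding=True)
generated = model.generate(**batch, max_length=64)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```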
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|
| No log | 1.0 | 10 | 4.0166 | 1.3628 | 416.6867 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cpu
- Datasets 1.14.0
- Tokenizers 0.10.3
|
vijayv500/DialoGPT-small-Big-Bang-Theory-Series-Transcripts
|
vijayv500
| 2021-10-29T07:39:27Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
tags:
- conversational
license: mit
---
## I fine-tuned the DialoGPT-small model on "The Big Bang Theory" TV series transcripts dataset from Kaggle (https://www.kaggle.com/mitramir5/the-big-bang-theory-series-transcript)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("vijayv500/DialoGPT-small-Big-Bang-Theory-Series-Transcripts")
model = AutoModelForCausalLM.from_pretrained("vijayv500/DialoGPT-small-Big-Bang-Theory-Series-Transcripts")
# Let's chat for 5 lines
for step in range(5):
# encode the new user input, add the eos_token and return a tensor in Pytorch
new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
# append the new user input tokens to the chat history
bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
    # generate a response while limiting the total chat history to 200 tokens
chat_history_ids = model.generate(
bot_input_ids, max_length=200,
pad_token_id=tokenizer.eos_token_id,
no_repeat_ngram_size=3,
do_sample=True,
top_k=100,
top_p=0.7,
temperature = 0.8
)
    # pretty print last output tokens from bot
print("TBBT Bot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
|
aguilara42/openl3-labeler-w-timestamps
|
aguilara42
| 2021-10-29T01:38:54Z | 0 | 1 | null |
[
"audacity",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
tags:
- audacity
inference: false
---
# Labeler With Timestamps
## Being used for the `Audio Labeler` effect in Audacity
This is an audio labeler model, used by Audacity's labeler effect.
Its metadata:
```
{
"sample_rate": 48000,
"domain_tags": ["Music"],
"tags": ["Audio Labeler"],
"effect_type": "waveform-to-labels",
"multichannel": false,
"labels": ["Acoustic Guitar", "Auxiliary Percussion", "Brass", "Clean Electric Guitar", "Distorted Electric Guitar", "Double Bass", "Drum Set", "Electric Bass", "Flute", "piano", "Reeds", "Saxophone", "Strings", "Trumpet", "Voice"],
"short_description": "Use me to label some instruments!",
"long_description": "An audio labeler, which outputs label predictions and time ranges for the labels. This model can label various instruments listed in the labels section."
}
```
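Outside Audacity, a waveform-to-labels model like this is typically shipped as a TorchScript export. The sketch below is a rough, hypothetical driver: the file name `model.pt` and the output contract (label indices plus start/end timestamps) are assumptions, not documented in this card.
```python
import torch
import torchaudio

# Hypothetical file name for a TorchScript export of the labeler.
model = torch.jit.load("model.pt")
model.eval()

# The metadata above declares 48 kHz, single-channel input.
waveform, sr = torchaudio.load("mix.wav")
waveform = torchaudio.functional.resample(waveform.mean(dim=0, keepdim=True), sr, 48_000)

with torch.no_grad():
    # Assumed waveform-to-labels contract: per-segment label indices
    # and matching (start, end) timestamps in seconds.
    labels, timestamps = model(waveform)

print(labels, timestamps)
```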
|
patrickvonplaten/sew-d-small-100k-ft-timit-2
|
patrickvonplaten
| 2021-10-28T15:51:49Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"sew-d",
"automatic-speech-recognition",
"timit_asr",
"generated_from_trainer",
"dataset:timit_asr",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- automatic-speech-recognition
- timit_asr
- generated_from_trainer
datasets:
- timit_asr
model-index:
- name: sew-d-small-100k-ft-timit-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sew-d-small-100k-ft-timit-2
This model is a fine-tuned version of [asapp/sew-d-small-100k](https://huggingface.co/asapp/sew-d-small-100k) on the TIMIT_ASR - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7357
- Wer: 0.7935
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.1554 | 0.69 | 100 | 4.0531 | 1.0 |
| 2.9584 | 1.38 | 200 | 2.9775 | 1.0 |
| 2.9355 | 2.07 | 300 | 2.9412 | 1.0 |
| 2.9048 | 2.76 | 400 | 2.9143 | 1.0 |
| 2.8568 | 3.45 | 500 | 2.8786 | 1.0 |
| 2.7248 | 4.14 | 600 | 2.7553 | 0.9833 |
| 2.6124 | 4.83 | 700 | 2.5874 | 1.0511 |
| 2.5463 | 5.52 | 800 | 2.4630 | 1.0883 |
| 2.3302 | 6.21 | 900 | 2.3948 | 1.0651 |
| 2.0669 | 6.9 | 1000 | 2.2228 | 0.9920 |
| 2.1991 | 7.59 | 1100 | 2.0815 | 0.9185 |
| 2.293 | 8.28 | 1200 | 2.0229 | 0.8674 |
| 2.0366 | 8.97 | 1300 | 1.9590 | 0.9165 |
| 1.767 | 9.66 | 1400 | 1.9129 | 0.8125 |
| 1.6222 | 10.34 | 1500 | 1.8868 | 0.8259 |
| 2.173 | 11.03 | 1600 | 1.8691 | 0.8661 |
| 1.8614 | 11.72 | 1700 | 1.8388 | 0.8250 |
| 1.5928 | 12.41 | 1800 | 1.8528 | 0.7772 |
| 1.5978 | 13.1 | 1900 | 1.8002 | 0.7892 |
| 1.9886 | 13.79 | 2000 | 1.7848 | 0.8448 |
| 1.8042 | 14.48 | 2100 | 1.7819 | 0.8156 |
| 1.5488 | 15.17 | 2200 | 1.7615 | 0.8228 |
| 1.4468 | 15.86 | 2300 | 1.7565 | 0.7946 |
| 1.8153 | 16.55 | 2400 | 1.7537 | 0.8341 |
| 1.77 | 17.24 | 2500 | 1.7527 | 0.7958 |
| 1.4742 | 17.93 | 2600 | 1.7592 | 0.7850 |
| 1.4088 | 18.62 | 2700 | 1.7421 | 0.8149 |
| 1.7066 | 19.31 | 2800 | 1.7382 | 0.7977 |
| 1.7068 | 20.0 | 2900 | 1.7357 | 0.7935 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.8.1
- Datasets 1.14.1.dev0
- Tokenizers 0.10.3
|
furyhawk/t5-base-finetuned-bbc-headline
|
furyhawk
| 2021-10-28T15:44:15Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-base-finetuned-bbc-headline
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-finetuned-bbc-headline
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 167 | 2.2978 | 31.8313 | 10.3824 | 29.6182 | 29.4336 | 10.3153 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.12.1
- Tokenizers 0.10.3
|
patrickvonplaten/sew-d-small-100k-ft-timit
|
patrickvonplaten
| 2021-10-28T15:26:02Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"sew-d",
"automatic-speech-recognition",
"timit_asr",
"generated_from_trainer",
"dataset:timit_asr",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- automatic-speech-recognition
- timit_asr
- generated_from_trainer
datasets:
- timit_asr
model-index:
- name: sew-d-small-100k-ft-timit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sew-d-small-100k-ft-timit
This model is a fine-tuned version of [asapp/sew-d-small-100k](https://huggingface.co/asapp/sew-d-small-100k) on the TIMIT_ASR - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7482
- Wer: 0.7987
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.2068 | 0.69 | 100 | 4.0802 | 1.0 |
| 2.9805 | 1.38 | 200 | 2.9792 | 1.0 |
| 2.9781 | 2.07 | 300 | 2.9408 | 1.0 |
| 2.9655 | 2.76 | 400 | 2.9143 | 1.0 |
| 2.8953 | 3.45 | 500 | 2.8775 | 1.0 |
| 2.7719 | 4.14 | 600 | 2.7815 | 0.9999 |
| 2.6531 | 4.83 | 700 | 2.6375 | 1.0065 |
| 2.6425 | 5.52 | 800 | 2.5602 | 1.0210 |
| 2.3963 | 6.21 | 900 | 2.4665 | 1.0591 |
| 2.1447 | 6.9 | 1000 | 2.2792 | 0.9848 |
| 2.2719 | 7.59 | 1100 | 2.2237 | 0.9465 |
| 2.3629 | 8.28 | 1200 | 2.1058 | 0.8907 |
| 2.0913 | 8.97 | 1300 | 2.0113 | 0.9070 |
| 1.8334 | 9.66 | 1400 | 1.9466 | 0.8177 |
| 1.6608 | 10.34 | 1500 | 1.9217 | 0.8698 |
| 2.2194 | 11.03 | 1600 | 1.9091 | 0.8727 |
| 1.9002 | 11.72 | 1700 | 1.8746 | 0.8332 |
| 1.6268 | 12.41 | 1800 | 1.8782 | 0.7951 |
| 1.6455 | 13.1 | 1900 | 1.8230 | 0.8225 |
| 2.0308 | 13.79 | 2000 | 1.8067 | 0.8560 |
| 1.855 | 14.48 | 2100 | 1.8129 | 0.8177 |
| 1.5901 | 15.17 | 2200 | 1.7891 | 0.8367 |
| 1.4848 | 15.86 | 2300 | 1.7821 | 0.8201 |
| 1.8754 | 16.55 | 2400 | 1.7700 | 0.8137 |
| 1.7975 | 17.24 | 2500 | 1.7795 | 0.8171 |
| 1.5194 | 17.93 | 2600 | 1.7605 | 0.7977 |
| 1.4374 | 18.62 | 2700 | 1.7529 | 0.7978 |
| 1.7498 | 19.31 | 2800 | 1.7522 | 0.8023 |
| 1.7452 | 20.0 | 2900 | 1.7482 | 0.7987 |
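### Example usage
A minimal inference sketch, assuming a 16 kHz mono recording and that the fine-tuning run saved a processor (feature extractor plus tokenizer) in this repository; `sample.wav` is a placeholder path:
```python
import torch
import soundfile as sf
from transformers import Wav2Vec2Processor, SEWDForCTC
model_id = "patrickvonplaten/sew-d-small-100k-ft-timit"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = SEWDForCTC.from_pretrained(model_id)
# Any 16 kHz mono speech file works here.
speech, sampling_rate = sf.read("sample.wav")
inputs = processor(speech, sampling_rate=sampling_rate, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```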
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.8.1
- Datasets 1.14.1.dev0
- Tokenizers 0.10.3
|
asapp/sew-d-mid-k127-400k
|
asapp
| 2021-10-28T14:04:35Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"sew-d",
"feature-extraction",
"speech",
"en",
"dataset:librispeech_asr",
"arxiv:2109.06870",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
---
language: en
datasets:
- librispeech_asr
tags:
- speech
license: apache-2.0
---
# SEW-D-mid
[SEW-D by ASAPP Research](https://github.com/asappresearch/sew)
The base model was pretrained on 16 kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16 kHz. Note that this model should be fine-tuned on a downstream task, such as automatic speech recognition, speaker identification, intent classification, or emotion recognition.
Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870)
Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi
**Abstract**
This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.
The original model can be found under https://github.com/asappresearch/sew#model-checkpoints .
# Usage
See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `SEWDForCTC`.
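For plain feature extraction (before any fine-tuning), a minimal sketch; it assumes the checkpoint ships a standard Wav2Vec2-style feature extractor config, and `sample.wav` is a placeholder for a 16 kHz mono recording:
```python
import torch
import soundfile as sf
from transformers import AutoFeatureExtractor, SEWDModel
model_id = "asapp/sew-d-mid-k127-400k"
feature_extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = SEWDModel.from_pretrained(model_id)
speech, sampling_rate = sf.read("sample.wav")  # must be sampled at 16 kHz
inputs = feature_extractor(speech, sampling_rate=sampling_rate, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
# One hidden vector per downsampled audio frame.
print(outputs.last_hidden_state.shape)
```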
|
asapp/sew-d-mid-400k
|
asapp
| 2021-10-28T13:59:38Z | 6 | 1 |
transformers
|
[
"transformers",
"pytorch",
"sew-d",
"feature-extraction",
"speech",
"en",
"dataset:librispeech_asr",
"arxiv:2109.06870",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
---
language: en
datasets:
- librispeech_asr
tags:
- speech
license: apache-2.0
---
# SEW-D-mid
[SEW-D by ASAPP Research](https://github.com/asappresearch/sew)
The base model was pretrained on 16 kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16 kHz. Note that this model should be fine-tuned on a downstream task, such as automatic speech recognition, speaker identification, intent classification, or emotion recognition.
Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870)
Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi
**Abstract**
This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.
The original model can be found under https://github.com/asappresearch/sew#model-checkpoints .
# Usage
See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `SEWDForCTC`.
|
asapp/sew-d-mid-100k
|
asapp
| 2021-10-28T13:56:56Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"sew-d",
"feature-extraction",
"speech",
"en",
"dataset:librispeech_asr",
"arxiv:2109.06870",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
---
language: en
datasets:
- librispeech_asr
tags:
- speech
license: apache-2.0
---
# SEW-D-mid
[SEW-D by ASAPP Research](https://github.com/asappresearch/sew)
The base model was pretrained on 16 kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16 kHz. Note that this model should be fine-tuned on a downstream task, such as automatic speech recognition, speaker identification, intent classification, or emotion recognition.
Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870)
Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi
**Abstract**
This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.
The original model can be found under https://github.com/asappresearch/sew#model-checkpoints .
# Usage
See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `SEWDForCTC`.
|
asapp/sew-d-base-plus-100k
|
asapp
| 2021-10-28T13:48:40Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"sew-d",
"feature-extraction",
"speech",
"en",
"dataset:librispeech_asr",
"arxiv:2109.06870",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
---
language: en
datasets:
- librispeech_asr
tags:
- speech
license: apache-2.0
---
# SEW-D-base+
[SEW-D by ASAPP Research](https://github.com/asappresearch/sew)
The base model was pretrained on 16 kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16 kHz. Note that this model should be fine-tuned on a downstream task, such as automatic speech recognition, speaker identification, intent classification, or emotion recognition.
Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870)
Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi
**Abstract**
This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.
The original model can be found under https://github.com/asappresearch/sew#model-checkpoints .
# Usage
See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `SEWDForCTC`.
|
asapp/sew-d-base-100k
|
asapp
| 2021-10-28T13:44:39Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"sew-d",
"feature-extraction",
"speech",
"en",
"dataset:librispeech_asr",
"arxiv:2109.06870",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
---
language: en
datasets:
- librispeech_asr
tags:
- speech
license: apache-2.0
---
# SEW-D-base
[SEW-D by ASAPP Research](https://github.com/asappresearch/sew)
The base model was pretrained on 16 kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16 kHz. Note that this model should be fine-tuned on a downstream task, such as automatic speech recognition, speaker identification, intent classification, or emotion recognition.
Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870)
Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi
**Abstract**
This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.
The original model can be found under https://github.com/asappresearch/sew#model-checkpoints .
# Usage
See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `SEWDForCTC`.
|
SajjadAyoubi/distil-bigbird-fa-zwnj
|
SajjadAyoubi
| 2021-10-28T13:14:34Z | 83 | 0 |
transformers
|
[
"transformers",
"pytorch",
"big_bird",
"fill-mask",
"arxiv:1810.04805",
"arxiv:2005.12515",
"arxiv:2007.14062",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
<span align="center">
<a href="https://huggingface.co/SajjadAyoubi/"><img src="https://img.shields.io/static/v1?label=%F0%9F%A4%97%20Hugging%20Face&message=SajjadAyoubi&color=yellow"></a>
<a href="https://colab.research.google.com/github/sajjjadayobi/PersianQA/blob/main/notebooks/Demo.ipynb"><img src="https://img.shields.io/static/v1?label=Colab&message=Fine-tuning Example&logo=Google%20Colab&color=f9ab00"></a>
</span>
# ParsBigBird: Persian Bert For **Long-Range** Sequences
The [BERT](https://arxiv.org/abs/1810.04805) and [ParsBERT](https://arxiv.org/abs/2005.12515) models can handle texts with token lengths of up to 512; however, many tasks such as summarization and question answering require longer texts. In this work, we have trained the [BigBird](https://arxiv.org/abs/2007.14062) model for the Persian (Farsi) language to process texts of up to 4096 tokens using sparse attention.
## Evaluation: 🌡️
We have evaluated the model on three tasks with different sequence lengths
| Name | Params | SnappFood (F1) | Digikala Magazine(F1) | PersianQA (F1) |
| :--------------------------------------------------------------: | :----: | :-----------------: | :---------------: | :--------------: |
| [distil-bigbird-fa-zwnj](https://github.com/sajjjadayobi/ParsBigBird) | 78M | 85.43% | **94.05%** | **73.34%** |
| [bert-base-fa](https://github.com/hooshvare/parsbert) | 118M | **87.98%** | 93.65% | 70.06% |
- Despite being only as large as DistilBERT, the model performs on a par with ParsBERT and does much better on PersianQA, which requires much longer context
- This evaluation was based on `max_length=2048` (it can be increased up to 4096)
## How to use❓
### As Contextualized Word Embedding
```python
from transformers import BigBirdModel, AutoTokenizer
MODEL_NAME = "SajjadAyoubi/distil-bigbird-fa-zwnj"
# by default the model uses `block_sparse` attention with block_size=32
model = BigBirdModel.from_pretrained(MODEL_NAME, block_size=32)
# you can also use full attention as follows; use this when the input is not longer than 512 tokens
model = BigBirdModel.from_pretrained(MODEL_NAME, attention_type="original_full")
text = "😃 امیدوارم مدل بدردبخوری باشه چون خیلی طول کشید تا ترین بشه"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokens = tokenizer(text, return_tensors='pt')
output = model(**tokens) # contextualized embedding
```
### As Fill Blank
```python
from transformers import pipeline
MODEL_NAME = 'SajjadAyoubi/distil-bigbird-fa-zwnj'
fill = pipeline('fill-mask', model=MODEL_NAME, tokenizer=MODEL_NAME)
results = fill('تهران پایتخت [MASK] است.')
print(results[0]['token_str'])
>>> 'ایران'
```
## Pretraining details: 🔭
This model was pretrained with a masked language modeling (MLM) objective on the Persian section of the OSCAR dataset. Following the original BERT training, 15% of tokens were masked. This approach was first described in this [paper](https://arxiv.org/abs/2007.14062) and released in this [repository](https://github.com/google-research/bigbird). Documents longer than 4096 tokens were split into multiple documents, while documents much shorter than 4096 tokens were merged using the [SEP] token. The model is warm-started from the `distilbert-fa` [checkpoint](https://huggingface.co/HooshvareLab/distilbert-fa-zwnj-base).
- For more details, you can take a look at config.json at the model card in 🤗 Model Hub
## Fine Tuning Recommendations: 🐤
Due to the model's memory requirements, `gradient_checkpointing` and `gradient_accumulation` should be used to maintain a reasonable batch size. Since this model isn't very big, it's a good idea to first fine-tune it on your dataset with the masked-LM objective (also called intermediate fine-tuning) before training on the main task. In `block_sparse` mode, the number of input tokens doesn't matter: each token only attends to 256 tokens. Furthermore, for sequence lengths up to 512, `original_full` attention should be used instead of block sparse.
### Fine Tuning Examples 👷♂️👷♀️
| Dataset | Fine Tuning Example |
| ------------------------------------- | ------------------------------------------------------------ |
| Digikala Magazine Text Classification | <a href="https://colab.research.google.com/github/sajjjadayobi/PersianQA/blob/main/notebooks/Demo.ipynb"><img src="https://img.shields.io/static/v1?label=Colab&message=Fine-tuning Example&logo=Google%20Colab&color=f9ab00"></a> |
## Contact us: 🤝
If you have a technical question regarding the model, pretraining, code or publication, please create an issue in the repository. This is the fastest way to reach us.
## Citation: ↩️
We have not published any paper on this work. However, if you use it, please cite us properly with an entry like the one below.
```bibtex
@misc{ParsBigBird,
author = {Ayoubi, Sajjad},
title = {ParsBigBird: Persian Bert For Long-Range Sequences},
year = 2021,
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/SajjjadAyobi/ParsBigBird}},
}
```
|
Narrativaai/fake-news-detection-spanish
|
Narrativaai
| 2021-10-28T11:03:28Z | 26 | 11 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"fake",
"news",
"competition",
"es",
"dataset:fakedes",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
language: es
tags:
- generated_from_trainer
- fake
- news
- competition
datasets:
- fakedes
widget:
- text: 'La palabra "haiga", aceptada por la RAE [SEP] La palabra "haiga", aceptada por la RAE La Real Academia de la Lengua (RAE), ha aceptado el uso de "HAIGA", para su utilización en las tres personas del singular del presente del subjuntivo del verbo hacer, aunque asegura que la forma más recomendable en la lengua culta para este tiempo, sigue siendo "haya".
Así lo han confirmado fuentes de la RAE, que explican que este cambio ha sido propuesto y aprobado por el pleno de la Academia de la Lengua, tras la extendida utilización por todo el territorio nacional, sobre todo, empleado por personas carentes de estudios o con estudios básicos de graduado escolar. Ya no será objeto de burla ese compañero que a diario repite aquello de "Mientras que haiga faena, no podemos quejarnos" o esa abuela que repite aquello de "El que haiga sacao los juguetes, que los recoja".
Entre otras palabras novedosas que ha aceptado la RAE, contamos también con "Descambiar", significa deshacer un cambio, por ejemplo "devolver la compra". Visto lo visto, nadie apostaría que la palabra "follamigos" sea la siguiente de la lista.'
metrics:
- f1
- accuracy
model-index:
- name: roberta-large-fake-news-detection-spanish
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# RoBERTa-large-fake-news-detection-spanish
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-large-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-large-bne) on a [Spanish Fake News Dataset](https://sites.google.com/view/iberlef2020/#h.p_w0c31bn0r-SW).
It achieves the following results on the evaluation set:
- Loss: 1.7474
- F1: **0.7717**
- Accuracy: 0.7797
> So, based on the [leaderboard](https://sites.google.com/view/fakedes/results?authuser=0), our model **outperforms** the best reported model (F1 = 0.7666).
## Model description
RoBERTa-large-bne is a transformer-based masked language model for the Spanish language. It is based on the RoBERTa large model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawlings performed by the National Library of Spain (Biblioteca Nacional de España) from 2009 to 2019.
## Intended uses & limitations
The objective of this task is to decide if a news item is fake or real by analyzing its textual representation.
## Training and evaluation data
**FakeDeS**: [Fake News Detection in Spanish Shared Task](https://sites.google.com/view/fakedes/home)
Fake news provides information that aims to manipulate people for different purposes: terrorism, political elections, advertising, satire, among others. In social networks, misinformation spreads in seconds among thousands of people, so it is necessary to develop tools that help control the amount of false information on the web. Related tasks include detecting the popularity of content in social networks and detecting the subjectivity of messages in this medium. A fake news detection system aims to help users detect and filter out potentially deceptive news. The prediction of intentionally misleading news is based on the analysis of truthful and fraudulent previously reviewed news, i.e., annotated corpora.
The Spanish Fake News Corpus is a collection of news compiled from several web sources: established newspaper websites, media company websites, special websites dedicated to validating fake news, and websites designated by different journalists as sites that regularly publish fake news. The news items were collected from January to July 2018, and all of them were written in Mexican Spanish.
The corpus contains 971 news items collected from January to July 2018 from different sources:
- Established newspaper websites,
- Media company websites,
- Special websites dedicated to validating fake news,
- Websites designated by different journalists as sites that regularly publish fake news.
The corpus was tagged considering only two classes (true or fake), following a manual labeling process:
- A news item is labeled true if there is evidence that it has been published on reliable sites.
- A news item is labeled fake if news from reliable sites or from specialized fake-news-detection websites contradicts it, or if no other evidence was found about the item besides its source.
- True/fake news pairs about the same event were collected, so related news items in the corpus are correlated.
In order to avoid topic bias, the corpus covers news from 9 different topics: Science, Sport, Economy, Education, Entertainment, Politics, Health, Security, and Society. The number of fake and true news items is quite balanced. Approximately 70% is used as the training corpus (676 news items) and the remaining 30% as the test corpus (295 news items).
The training corpus contains the following information:
- Category: Fake/ True
- Topic: Science/ Sport/ Economy/ Education/ Entertainment/ Politics, Health/ Security/ Society
- Headline: The title of the news.
- Text: The complete text of the news.
- Link: The URL where the news was published.
## Training procedure
TBA
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|
| No log | 1.0 | 243 | 0.6282 | 0.7513 | 0.75 |
| No log | 2.0 | 486 | 0.9600 | 0.7346 | 0.7587 |
| 0.5099 | 3.0 | 729 | 1.2128 | 0.7656 | 0.7570 |
| 0.5099 | 4.0 | 972 | 1.4001 | 0.7606 | 0.7622 |
| 0.1949 | 5.0 | 1215 | 1.9748 | 0.6475 | 0.7220 |
| 0.1949 | 6.0 | 1458 | 1.7386 | 0.7706 | 0.7710 |
| 0.0263 | 7.0 | 1701 | 1.7474 | 0.7717 | 0.7797 |
| 0.0263 | 8.0 | 1944 | 1.8114 | 0.7695 | 0.7780 |
| 0.0046 | 9.0 | 2187 | 1.8444 | 0.7709 | 0.7797 |
| 0.0046 | 10.0 | 2430 | 1.8552 | 0.7709 | 0.7797 |
### Fast usage with HF `pipelines`
```python
from transformers import pipeline
ckpt = "Narrativaai/fake-news-detection-spanish"
classifier = pipeline("text-classification", model=ckpt)
headline = "Your headline"
text = "Your article text here..."
classifier(headline + " [SEP] " + text)
```
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
Created by: [Narrativa](https://www.narrativa.com/)
About Narrativa: Natural Language Generation (NLG) | Gabriele, our machine learning-based platform, builds and deploys natural language solutions. #NLG #AI
|
hchc/distilbert-base-uncased-finetuned-cola
|
hchc
| 2021-10-28T08:08:19Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5451837431775948
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8508
- Matthews Correlation: 0.5452
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5221 | 1.0 | 535 | 0.5370 | 0.4246 |
| 0.3462 | 2.0 | 1070 | 0.5157 | 0.5183 |
| 0.2332 | 3.0 | 1605 | 0.6324 | 0.5166 |
| 0.1661 | 4.0 | 2140 | 0.7616 | 0.5370 |
| 0.1263 | 5.0 | 2675 | 0.8508 | 0.5452 |
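### Example usage
A minimal inference sketch with the text-classification pipeline; the exact label names (e.g. `LABEL_0` / `LABEL_1` for unacceptable / acceptable) depend on the saved config and are an assumption here:
```python
from transformers import pipeline
classifier = pipeline(
    "text-classification",
    model="hchc/distilbert-base-uncased-finetuned-cola",
)
# CoLA is a binary linguistic-acceptability task.
print(classifier("The book was written by the author."))
print(classifier("The book author by written was the."))
```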
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
huggingtweets/claire_v0ltaire
|
huggingtweets
| 2021-10-28T04:03:43Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://www.huggingtweets.com/claire_v0ltaire/1635393819410/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/984455379659575296/-0punyb9_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Claire</div>
<div style="text-align: center; font-size: 14px;">@claire_v0ltaire</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Claire.
| Data | Claire |
| --- | --- |
| Tweets downloaded | 3236 |
| Retweets | 491 |
| Short tweets | 574 |
| Tweets kept | 2171 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/5yprh52r/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @claire_v0ltaire's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/33jg2b88) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/33jg2b88/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/claire_v0ltaire')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
aditeyabaral/sentencetransformer-indic-bert
|
aditeyabaral
| 2021-10-28T02:17:50Z | 8 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"albert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# aditeyabaral/sentencetransformer-indic-bert
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('aditeyabaral/sentencetransformer-indic-bert')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('aditeyabaral/sentencetransformer-indic-bert')
model = AutoModel.from_pretrained('aditeyabaral/sentencetransformer-indic-bert')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=aditeyabaral/sentencetransformer-indic-bert)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 9234 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: AlbertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
anton-l/wav2vec2-base-ft-keyword-spotting
|
anton-l
| 2021-10-27T22:16:42Z | 468 | 4 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:superb",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- audio-classification
- generated_from_trainer
datasets:
- superb
metrics:
- accuracy
model-index:
- name: wav2vec2-base-ft-keyword-spotting
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-ft-keyword-spotting
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0824
- Accuracy: 0.9826
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 0
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8972 | 1.0 | 399 | 0.7023 | 0.8174 |
| 0.3274 | 2.0 | 798 | 0.1634 | 0.9773 |
| 0.1993 | 3.0 | 1197 | 0.1048 | 0.9788 |
| 0.1777 | 4.0 | 1596 | 0.0824 | 0.9826 |
| 0.1527 | 5.0 | 1995 | 0.0812 | 0.9810 |
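### Example usage
A minimal inference sketch using the audio-classification pipeline (this assumes a recent `transformers` version with audio pipeline support and ffmpeg for decoding files); `sample.wav` is a placeholder for a 16 kHz recording of a spoken command:
```python
from transformers import pipeline
classifier = pipeline(
    "audio-classification",
    model="anton-l/wav2vec2-base-ft-keyword-spotting",
)
# Returns the top keyword labels with scores, e.g. "yes", "no", "up", "down", ...
print(classifier("sample.wav", top_k=5))
```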
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
jwuthri/autonlp-shipping_status_2-27366103
|
jwuthri
| 2021-10-27T21:34:42Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autonlp",
"unk",
"dataset:jwuthri/autonlp-data-shipping_status_2",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- jwuthri/autonlp-data-shipping_status_2
co2_eq_emissions: 32.912881644048
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 27366103
- CO2 Emissions (in grams): 32.912881644048
## Validation Metrics
- Loss: 0.18175844848155975
- Accuracy: 0.9437683592110785
- Precision: 0.9416809605488851
- Recall: 0.8459167950693375
- AUC: 0.9815242330050846
- F1: 0.8912337662337663
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/jwuthri/autonlp-shipping_status_2-27366103
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("jwuthri/autonlp-shipping_status_2-27366103", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("jwuthri/autonlp-shipping_status_2-27366103", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
anton-l/distilhubert-ft-common-language
|
anton-l
| 2021-10-27T21:29:13Z | 12 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:common_language",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- audio-classification
- generated_from_trainer
datasets:
- common_language
metrics:
- accuracy
model-index:
- name: distilhubert-ft-common-language
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-ft-common-language
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the common_language dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7214
- Accuracy: 0.2797
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 4
- seed: 0
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.6543 | 1.0 | 173 | 3.7611 | 0.0491 |
| 3.2221 | 2.0 | 346 | 3.4868 | 0.1352 |
| 2.9332 | 3.0 | 519 | 3.2732 | 0.1861 |
| 2.7299 | 4.0 | 692 | 3.0944 | 0.2172 |
| 2.5638 | 5.0 | 865 | 2.9790 | 0.2400 |
| 2.3871 | 6.0 | 1038 | 2.8668 | 0.2590 |
| 2.3384 | 7.0 | 1211 | 2.7972 | 0.2653 |
| 2.2648 | 8.0 | 1384 | 2.7625 | 0.2695 |
| 2.2162 | 9.0 | 1557 | 2.7405 | 0.2782 |
| 2.1915 | 10.0 | 1730 | 2.7214 | 0.2797 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
huggingtweets/void_vomicae
|
huggingtweets
| 2021-10-27T21:01:11Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://www.huggingtweets.com/void_vomicae/1635368467642/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1452295981517742087/v8HfhHLT_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">《 𝚟 o̶ 𝚒 𝚍 》</div>
<div style="text-align: center; font-size: 14px;">@void_vomicae</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from 《 𝚟 o̶ 𝚒 𝚍 》.
| Data | 《 𝚟 o̶ 𝚒 𝚍 》 |
| --- | --- |
| Tweets downloaded | 2083 |
| Retweets | 417 |
| Short tweets | 422 |
| Tweets kept | 1244 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/fju0lp9t/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @void_vomicae's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1wos3ytc) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1wos3ytc/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/void_vomicae')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
anton-l/distilhubert-ft-keyword-spotting
|
anton-l
| 2021-10-27T19:00:06Z | 93 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:superb",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- audio-classification
- generated_from_trainer
datasets:
- superb
metrics:
- accuracy
model-index:
- name: distilhubert-ft-keyword-spotting
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-ft-keyword-spotting
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1163
- Accuracy: 0.9706
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 256
- eval_batch_size: 32
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8176 | 1.0 | 200 | 0.7718 | 0.8116 |
| 0.2364 | 2.0 | 400 | 0.2107 | 0.9662 |
| 0.1198 | 3.0 | 600 | 0.1374 | 0.9678 |
| 0.0891 | 4.0 | 800 | 0.1163 | 0.9706 |
| 0.085 | 5.0 | 1000 | 0.1180 | 0.9690 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
prajjwal1/bert-medium
|
prajjwal1
| 2021-10-27T18:30:16Z | 37,177 | 3 |
transformers
|
[
"transformers",
"pytorch",
"BERT",
"MNLI",
"NLI",
"transformer",
"pre-training",
"en",
"arxiv:1908.08962",
"arxiv:2110.01518",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language:
- en
license:
- mit
tags:
- BERT
- MNLI
- NLI
- transformer
- pre-training
---
The following model is a PyTorch pre-trained model obtained by converting the TensorFlow checkpoint found in the [official Google BERT repository](https://github.com/google-research/bert).
This is one of the smaller pre-trained BERT variants, together with [bert-tiny](https://huggingface.co/prajjwal1/bert-tiny), [bert-mini](https://huggingface.co/prajjwal1/bert-mini) and [bert-small](https://huggingface.co/prajjwal1/bert-small). They were introduced in the study `Well-Read Students Learn Better: On the Importance of Pre-training Compact Models` ([arxiv](https://arxiv.org/abs/1908.08962)), and ported to HF for the study `Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics` ([arXiv](https://arxiv.org/abs/2110.01518)). These models are meant to be fine-tuned on a downstream task, as sketched below.
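A minimal loading sketch for downstream fine-tuning; the 3-label NLI setup and the example premise/hypothesis pair are illustrative assumptions:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
# The classification head is randomly initialized and still has to be fine-tuned.
tokenizer = AutoTokenizer.from_pretrained("prajjwal1/bert-medium")
model = AutoModelForSequenceClassification.from_pretrained("prajjwal1/bert-medium", num_labels=3)
inputs = tokenizer(
    "A soccer game with multiple males playing.",
    "Some men are playing a sport.",
    return_tensors="pt",
)
outputs = model(**inputs)
print(outputs.logits.shape)  # torch.Size([1, 3])
```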
If you use the model, please consider citing both the papers:
```
@misc{bhargava2021generalization,
title={Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics},
author={Prajjwal Bhargava and Aleksandr Drozd and Anna Rogers},
year={2021},
eprint={2110.01518},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@article{DBLP:journals/corr/abs-1908-08962,
author = {Iulia Turc and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {Well-Read Students Learn Better: The Impact of Student Initialization
on Knowledge Distillation},
journal = {CoRR},
volume = {abs/1908.08962},
year = {2019},
url = {http://arxiv.org/abs/1908.08962},
eprinttype = {arXiv},
eprint = {1908.08962},
timestamp = {Thu, 29 Aug 2019 16:32:34 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1908-08962.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Config of this model:
- `prajjwal1/bert-medium` (L=8, H=512) [Model Link](https://huggingface.co/prajjwal1/bert-medium)
Other models to check out:
- `prajjwal1/bert-tiny` (L=2, H=128) [Model Link](https://huggingface.co/prajjwal1/bert-tiny)
- `prajjwal1/bert-mini` (L=4, H=256) [Model Link](https://huggingface.co/prajjwal1/bert-mini)
- `prajjwal1/bert-small` (L=4, H=512) [Model Link](https://huggingface.co/prajjwal1/bert-small)
Original Implementation and more info can be found in [this Github repository](https://github.com/prajjwal1/generalize_lm_nli).
Twitter: [@prajjwal_1](https://twitter.com/prajjwal_1)
|
prajjwal1/bert-tiny
|
prajjwal1
| 2021-10-27T18:29:01Z | 487,487 | 103 |
transformers
|
[
"transformers",
"pytorch",
"BERT",
"MNLI",
"NLI",
"transformer",
"pre-training",
"en",
"arxiv:1908.08962",
"arxiv:2110.01518",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language:
- en
license:
- mit
tags:
- BERT
- MNLI
- NLI
- transformer
- pre-training
---
The following model is a PyTorch pre-trained model obtained by converting the TensorFlow checkpoint found in the [official Google BERT repository](https://github.com/google-research/bert).
This is one of the smaller pre-trained BERT variants, together with [bert-mini](https://huggingface.co/prajjwal1/bert-mini), [bert-small](https://huggingface.co/prajjwal1/bert-small) and [bert-medium](https://huggingface.co/prajjwal1/bert-medium). They were introduced in the study `Well-Read Students Learn Better: On the Importance of Pre-training Compact Models` ([arxiv](https://arxiv.org/abs/1908.08962)), and ported to HF for the study `Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics` ([arXiv](https://arxiv.org/abs/2110.01518)). These models are meant to be fine-tuned on a downstream task.
If you use the model, please consider citing both the papers:
```
@misc{bhargava2021generalization,
title={Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics},
author={Prajjwal Bhargava and Aleksandr Drozd and Anna Rogers},
year={2021},
eprint={2110.01518},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@article{DBLP:journals/corr/abs-1908-08962,
author = {Iulia Turc and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {Well-Read Students Learn Better: The Impact of Student Initialization
on Knowledge Distillation},
journal = {CoRR},
volume = {abs/1908.08962},
year = {2019},
url = {http://arxiv.org/abs/1908.08962},
eprinttype = {arXiv},
eprint = {1908.08962},
timestamp = {Thu, 29 Aug 2019 16:32:34 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1908-08962.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Config of this model:
- `prajjwal1/bert-tiny` (L=2, H=128) [Model Link](https://huggingface.co/prajjwal1/bert-tiny)
Other models to check out:
- `prajjwal1/bert-mini` (L=4, H=256) [Model Link](https://huggingface.co/prajjwal1/bert-mini)
- `prajjwal1/bert-small` (L=4, H=512) [Model Link](https://huggingface.co/prajjwal1/bert-small)
- `prajjwal1/bert-medium` (L=8, H=512) [Model Link](https://huggingface.co/prajjwal1/bert-medium)
Original Implementation and more info can be found in [this Github repository](https://github.com/prajjwal1/generalize_lm_nli).
Twitter: [@prajjwal_1](https://twitter.com/prajjwal_1)
|
Michael711/feinschwarz
|
Michael711
| 2021-10-27T18:28:16Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"de",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:04Z |
---
license: mit
tags:
- generated_from_trainer
- de
model-index:
- name: feinesblack
results: []
---
# feinschwarz
This model is a fine-tuned version of [dbmdz/german-gpt2](https://huggingface.co/dbmdz/german-gpt2). The dataset was compiled from all texts of https://www.feinschwarz.net (as of October 2021). The homepage gathers essayistic texts on theological topics.
The model will be used to explore the challenges of text-generating AI for theology with a hands-on approach. Can an AI generate theological knowledge? Is a text by Karl Rahner of more value than an AI-generated text? Can we even distinguish a Rahner text from an AI-generated text in the future? And the crucial question: Would it be bad if not?
The model is a very first attempt and, in its current version, certainly not yet a danger to academic theology 🤓
# Using the model
You can create text with the model using this code:
```python
from transformers import pipeline
pipe = pipeline('text-generation', model="Michael711/feinschwarz",
tokenizer="Michael711/feinschwarz")
text = pipe("Der Sinn des Lebens ist es", max_length=100)[0]["generated_text"]
print(text)
```
Have fun theologizing!
|
prajjwal1/bert-mini
|
prajjwal1
| 2021-10-27T18:27:38Z | 98,112 | 20 |
transformers
|
[
"transformers",
"pytorch",
"BERT",
"MNLI",
"NLI",
"transformer",
"pre-training",
"en",
"arxiv:1908.08962",
"arxiv:2110.01518",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language:
- en
license:
- mit
tags:
- BERT
- MNLI
- NLI
- transformer
- pre-training
---
The following model is a PyTorch pre-trained model obtained by converting the TensorFlow checkpoint found in the [official Google BERT repository](https://github.com/google-research/bert).
This is one of the smaller pre-trained BERT variants, together with [bert-small](https://huggingface.co/prajjwal1/bert-small) and [bert-medium](https://huggingface.co/prajjwal1/bert-medium). They were introduced in the study `Well-Read Students Learn Better: On the Importance of Pre-training Compact Models` ([arxiv](https://arxiv.org/abs/1908.08962)), and ported to HF for the study `Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics` ([arXiv](https://arxiv.org/abs/2110.01518)). These models are meant to be fine-tuned on a downstream task.
If you use the model, please consider citing both the papers:
```
@misc{bhargava2021generalization,
title={Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics},
author={Prajjwal Bhargava and Aleksandr Drozd and Anna Rogers},
year={2021},
eprint={2110.01518},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@article{DBLP:journals/corr/abs-1908-08962,
author = {Iulia Turc and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {Well-Read Students Learn Better: The Impact of Student Initialization
on Knowledge Distillation},
journal = {CoRR},
volume = {abs/1908.08962},
year = {2019},
url = {http://arxiv.org/abs/1908.08962},
eprinttype = {arXiv},
eprint = {1908.08962},
timestamp = {Thu, 29 Aug 2019 16:32:34 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1908-08962.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Config of this model:
- `prajjwal1/bert-mini` (L=4, H=256) [Model Link](https://huggingface.co/prajjwal1/bert-mini)
Other models to check out:
- `prajjwal1/bert-tiny` (L=2, H=128) [Model Link](https://huggingface.co/prajjwal1/bert-tiny)
- `prajjwal1/bert-small` (L=4, H=512) [Model Link](https://huggingface.co/prajjwal1/bert-small)
- `prajjwal1/bert-medium` (L=8, H=512) [Model Link](https://huggingface.co/prajjwal1/bert-medium)
Original Implementation and more info can be found in [this Github repository](https://github.com/prajjwal1/generalize_lm_nli).
Twitter: [@prajjwal_1](https://twitter.com/prajjwal_1)
|
patrickvonplaten/sew-d-small-100k-timit
|
patrickvonplaten
| 2021-10-27T17:15:26Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"sew-d",
"automatic-speech-recognition",
"timit_asr",
"generated_from_trainer",
"dataset:timit_asr",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- automatic-speech-recognition
- timit_asr
- generated_from_trainer
datasets:
- timit_asr
model-index:
- name: sew-d-small-100k-timit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sew-d-small-100k-timit
This model is a fine-tuned version of [asapp/sew-d-small-100k](https://huggingface.co/asapp/sew-d-small-100k) on the TIMIT_ASR dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7541
- Wer: 0.8061
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.2068 | 0.69 | 100 | 4.0802 | 1.0 |
| 2.9805 | 1.38 | 200 | 2.9792 | 1.0 |
| 2.9781 | 2.07 | 300 | 2.9408 | 1.0 |
| 2.9655 | 2.76 | 400 | 2.9143 | 1.0 |
| 2.8953 | 3.45 | 500 | 2.8775 | 1.0 |
| 2.7718 | 4.14 | 600 | 2.7787 | 1.0 |
| 2.6711 | 4.83 | 700 | 2.6401 | 0.9786 |
| 2.6403 | 5.52 | 800 | 2.5435 | 1.0392 |
| 2.4052 | 6.21 | 900 | 2.4580 | 1.0706 |
| 2.1708 | 6.9 | 1000 | 2.2800 | 1.0090 |
| 2.2555 | 7.59 | 1100 | 2.1493 | 0.9579 |
| 2.3673 | 8.28 | 1200 | 2.0709 | 0.9051 |
| 2.091 | 8.97 | 1300 | 2.0258 | 0.8926 |
| 1.8433 | 9.66 | 1400 | 1.9645 | 0.8243 |
| 1.6824 | 10.34 | 1500 | 1.9211 | 0.8707 |
| 2.2282 | 11.03 | 1600 | 1.8914 | 0.8695 |
| 1.9027 | 11.72 | 1700 | 1.8718 | 0.8343 |
| 1.6303 | 12.41 | 1800 | 1.8646 | 0.8232 |
| 1.648 | 13.1 | 1900 | 1.8297 | 0.8177 |
| 2.0429 | 13.79 | 2000 | 1.8127 | 0.8642 |
| 1.8833 | 14.48 | 2100 | 1.8005 | 0.8307 |
| 1.5996 | 15.17 | 2200 | 1.7926 | 0.8467 |
| 1.4876 | 15.86 | 2300 | 1.7795 | 0.8341 |
| 1.8925 | 16.55 | 2400 | 1.7716 | 0.8199 |
| 1.814 | 17.24 | 2500 | 1.7846 | 0.8086 |
| 1.536 | 17.93 | 2600 | 1.7655 | 0.8019 |
| 1.4476 | 18.62 | 2700 | 1.7599 | 0.8070 |
| 1.7629 | 19.31 | 2800 | 1.7589 | 0.8119 |
| 1.7646 | 20.0 | 2900 | 1.7541 | 0.8061 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.8.1
- Datasets 1.14.1.dev0
- Tokenizers 0.10.3
|
patrickvonplaten/wav2vec2-large-xlsr-129-turkish-colab
|
patrickvonplaten
| 2021-10-27T17:08:13Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xlsr-129-turkish-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-129-turkish-colab
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-129](https://huggingface.co/facebook/wav2vec2-large-xlsr-129) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3149
- Wer: 0.4748
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.4837 | 3.67 | 400 | 3.2526 | 1.0 |
| 3.0896 | 7.34 | 800 | 2.8037 | 1.0 |
| 1.5604 | 11.01 | 1200 | 0.5688 | 0.6613 |
| 0.6511 | 14.68 | 1600 | 0.3998 | 0.5580 |
| 0.4798 | 18.35 | 2000 | 0.3505 | 0.5118 |
| 0.4047 | 22.02 | 2400 | 0.3273 | 0.4858 |
| 0.3519 | 25.69 | 2800 | 0.3224 | 0.4796 |
| 0.343 | 29.36 | 3200 | 0.3149 | 0.4748 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu102
- Datasets 1.13.3
- Tokenizers 0.10.3
|
philschmid/pt-tblard-tf-allocine
|
philschmid
| 2021-10-27T13:54:09Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"camembert",
"text-classification",
"fr",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
language: fr
---
# PyTorch Fork of [tblard/tf-allocine](https://huggingface.co/tblard/tf-allocine)
A French sentiment analysis model, based on [CamemBERT](https://camembert-model.fr/), and finetuned on a large-scale dataset scraped from [Allociné.fr](http://www.allocine.fr/) user reviews.
## Results
| Validation Accuracy | Validation F1-Score | Test Accuracy | Test F1-Score |
|--------------------:| -------------------:| -------------:|--------------:|
| 97.39 | 97.36 | 97.44 | 97.34 |
The dataset and the evaluation code are available on [this repo](https://github.com/TheophileBlard/french-sentiment-analysis-with-bert).
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("philschmid/pt-tblard-tf-allocine")
model = AutoModelForSequenceClassification.from_pretrained("philschmid/pt-tblard-tf-allocine")
nlp = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer)
print(nlp("Alad'2 est clairement le meilleur film de l'année 2018.")) # POSITIVE
print(nlp("Juste whoaaahouuu !")) # POSITIVE
print(nlp("NUL...A...CHIER ! FIN DE TRANSMISSION.")) # NEGATIVE
print(nlp("Je m'attendais à mieux de la part de Franck Dubosc !")) # NEGATIVE
```
## Author
Théophile Blard – :email: theophile.blard@gmail.com
If you use this work (code, model or dataset), please cite as:
> Théophile Blard, French sentiment analysis with BERT, (2020), GitHub repository, <https://github.com/TheophileBlard/french-sentiment-analysis-with-bert>
|
patrickvonplaten/wav2vec2-xlarge-dotdotdot-common_voice-tr-demo
|
patrickvonplaten
| 2021-10-27T10:41:06Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"tr",
"dataset:common_voice",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- tr
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-xlarge-...-common_voice-tr-demo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xlarge-...-common_voice-tr-demo
This model is a fine-tuned version of [facebook/wav2vec2-xlarge-xlsr-...](https://huggingface.co/facebook/wav2vec2-xlarge-xlsr-...) on the COMMON_VOICE - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2701
- Wer: 0.2309
- Cer: 0.0527
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00005
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30.0
- mixed_precision_training: Native AMP
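(For reference, the total train batch size follows from per-device batch size × gradient accumulation steps × devices: 2 × 8 × 2 = 32; likewise, the total eval batch size is 8 × 2 = 16.)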
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.4388 | 3.7 | 400 | 1.366 | 0.9701 |
| 0.3766 | 7.4 | 800 | 0.4914 | 0.5374 |
| 0.2295 | 11.11 | 1200 | 0.3934 | 0.4125 |
| 0.1121 | 14.81 | 1600 | 0.3264 | 0.2904 |
| 0.1473 | 18.51 | 2000 | 0.3103 | 0.2671 |
| 0.1013 | 22.22 | 2400 | 0.2589 | 0.2324 |
| 0.0704 | 25.92 | 2800 | 0.2826 | 0.2339 |
| 0.0537 | 29.63 | 3200 | 0.2704 | 0.2309 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.8.1
- Datasets 1.14.1.dev0
- Tokenizers 0.10.3
|
doc2query/S2ORC-t5-base-v1
|
doc2query
| 2021-10-27T10:04:09Z | 35 | 4 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:S2ORC",
"arxiv:1904.08375",
"arxiv:2104.08663",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language: en
datasets:
- S2ORC
widget:
- text: "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."
license: apache-2.0
---
# doc2query/S2ORC-t5-base-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on T5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate 20-40 queries for each of your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, this re-weights words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/UKPLab/beir) we have an example of how to use docT5query with Pyserini. (A minimal indexing sketch follows the usage example below.)
- **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. On [SBERT.net](https://www.sbert.net/examples/unsupervised_learning/query_generation/README.html) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
## Usage
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
model_name = 'doc2query/S2ORC-t5-base-v1'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
text = "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."
input_ids = tokenizer.encode(text, max_length=320, truncation=True, return_tensors='pt')
outputs = model.generate(
input_ids=input_ids,
max_length=64,
do_sample=True,
top_p=0.95,
num_return_sequences=5)
print("Text:")
print(text)
print("\nGenerated Queries:")
for i in range(len(outputs)):
query = tokenizer.decode(outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
```
**Note:** `model.generate()` is non-deterministic. It produces different queries each time you run it.
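For the document-expansion use case described above, the generated queries are simply appended to the passage before it is indexed with BM25. The sketch below continues from the usage snippet above (reusing `text`, `tokenizer`, and `outputs`); the concatenation step is illustrative, not part of the original training setup.

```python
# Append the generated queries to the passage text; the expanded string is what
# gets indexed in a BM25 engine (Elasticsearch, OpenSearch, Lucene, ...).
generated_queries = [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]
expanded_doc = text + " " + " ".join(generated_queries)
print(expanded_doc)
```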
## Training
This model was obtained by fine-tuning [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) for 156k training steps. For the training script, see the `train_script.py` in this repository.
The input text was truncated to 320 word pieces; output text was generated with up to 64 word pieces.
This model was trained on (title, abstract) pairs from [S2ORC](https://github.com/allenai/s2orc).
|
doc2query/reddit-t5-base-v1
|
doc2query
| 2021-10-27T09:56:25Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"arxiv:1904.08375",
"arxiv:2104.08663",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language: en
datasets:
- datasets/sentence-transformers/reddit-title-body
widget:
- text: "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."
license: apache-2.0
---
# doc2query/reddit-t5-base-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on T5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate 20-40 queries for each of your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, this re-weights words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/UKPLab/beir) we have an example of how to use docT5query with Pyserini.
- **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. On [SBERT.net](https://www.sbert.net/examples/unsupervised_learning/query_generation/README.html) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
## Usage
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
model_name = 'doc2query/reddit-t5-base-v1'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
text = "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."
input_ids = tokenizer.encode(text, max_length=320, truncation=True, return_tensors='pt')
outputs = model.generate(
input_ids=input_ids,
max_length=64,
do_sample=True,
top_p=0.95,
num_return_sequences=5)
print("Text:")
print(text)
print("\nGenerated Queries:")
for i in range(len(outputs)):
query = tokenizer.decode(outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
```
**Note:** `model.generate()` is non-deterministic. It produces different queries each time you run it.
## Training
This model was obtained by fine-tuning [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) for 533k training steps. For the training script, see the `train_script.py` in this repository.
The input text was truncated to 320 word pieces; output text was generated with up to 64 word pieces.
This model was trained on (title, body) pairs from Reddit.
|
peter2000/xlm-roberta-base-finetuned-ecoicop
|
peter2000
| 2021-10-27T09:02:06Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-base-finetuned-ecoicop
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-ecoicop
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1685
- Acc: 0.9659
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Acc |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.4224 | 1.0 | 2577 | 0.3612 | 0.9132 |
| 0.2313 | 2.0 | 5154 | 0.2510 | 0.9441 |
| 0.1746 | 3.0 | 7731 | 0.1928 | 0.9569 |
| 0.1325 | 4.0 | 10308 | 0.1731 | 0.9640 |
| 0.0946 | 5.0 | 12885 | 0.1685 | 0.9659 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
lighteternal/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-mnli
|
lighteternal
| 2021-10-27T07:47:56Z | 188 | 4 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"textual-entailment",
"nli",
"en",
"dataset:mnli",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
language: en
tags:
- textual-entailment
- nli
- pytorch
datasets:
- mnli
license: mit
widget :
- text: "EpCAM is overexpressed in breast cancer. </s></s> EpCAM is downregulated in breast cancer."
---
# BiomedNLP-PubMedBERT finetuned on textual entailment (NLI)
This is [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext?text=%5BMASK%5D+is+a+tumor+suppressor+gene) finetuned on the MNLI dataset. It should be useful for textual entailment tasks involving biomedical corpora.
## Usage
Given two sentences (a premise and a hypothesis), the model outputs the logits of entailment, neutral or contradiction.
You can test the model using the HuggingFace model widget on the side:
- Input two sentences (premise and hypothesis) one after the other.
- The model returns the probabilities of 3 labels: entailment(LABEL:0), neutral(LABEL:1) and contradiction(LABEL:2) respectively.
To use the model locally on your machine:
```python
import numpy as np
# import torch
# device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("lighteternal/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-mnli")
model = AutoModelForSequenceClassification.from_pretrained("lighteternal/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-mnli")

premise = 'EpCAM is overexpressed in breast cancer'
hypothesis = 'EpCAM is downregulated in breast cancer.'

# run through model pre-trained on MNLI
x = tokenizer.encode(premise, hypothesis, return_tensors='pt',
                     truncation_strategy='only_first')
logits = model(x)[0]
probs = logits.softmax(dim=1)
print('Probabilities for entailment, neutral, contradiction \n',
      np.around(probs.cpu().detach().numpy(), 3))
# Probabilities for entailment, neutral, contradiction
# 0.001 0.001 0.998
```
## Metrics
Evaluation on classification accuracy (entailment, contradiction, neutral) on MNLI test set:
| Metric | Value |
| --- | --- |
| Accuracy | 0.8338|
See Training Metrics tab for detailed info.
|
Tuana/eigenfaces-sklearn-lfw
|
Tuana
| 2021-10-27T01:53:23Z | 0 | 1 | null |
[
"joblib",
"region:us"
] | null | 2022-03-02T23:29:05Z |
# Model to Recognize Faces using eigenfaces and scikit-learn
A simple model trained on a preprocessed excerpt of the “Labeled Faces in the Wild” dataset, aka [LFW](http://vis-www.cs.umass.edu/lfw/).
This demo was taken from [Scikit-learn](https://scikit-learn.org/stable/auto_examples/applications/plot_face_recognition.html)
The dataset includes 7 classes (individuals):

|
chandank/bart-base-finetuned-kagglenews-entityfiltering
|
chandank
| 2021-10-27T01:06:10Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-base-finetuned-kagglenews-entityfiltering
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-kagglenews-entityfiltering
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5703
- Rouge1: 28.2719
- Rouge2: 15.6883
- Rougel: 24.0674
- Rougelsum: 25.616
- Gen Len: 20.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.9187 | 1.0 | 863 | 1.5703 | 28.2719 | 15.6883 | 24.0674 | 25.616 | 20.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu102
- Datasets 1.14.0
- Tokenizers 0.10.3
|
pritoms/gpt2-finetuned-python2
|
pritoms
| 2021-10-26T23:15:08Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-finetuned-python2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-finetuned-python2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9454
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 25 | 2.0135 |
| No log | 2.0 | 50 | 1.9618 |
| No log | 3.0 | 75 | 1.9454 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
asapp/sew-small-100k
|
asapp
| 2021-10-26T19:39:52Z | 25 | 0 |
transformers
|
[
"transformers",
"pytorch",
"sew",
"feature-extraction",
"speech",
"en",
"dataset:librispeech_asr",
"arxiv:2109.06870",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
---
language: en
datasets:
- librispeech_asr
tags:
- speech
license: apache-2.0
---
# SEW-small
[SEW by ASAPP Research](https://github.com/asappresearch/sew)
The base model pretrained on 16 kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16 kHz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc.
Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870)
Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi
**Abstract**
This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.
The original model can be found under https://github.com/asappresearch/sew#model-checkpoints .
# Usage
See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `SEWForCTC`.
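As a minimal feature-extraction sketch (this is an assumption-laden example, not from the original card: it assumes the repository provides a feature-extractor config, and the zero waveform is only a placeholder for real 16 kHz audio):

```python
import torch
from transformers import AutoFeatureExtractor, SEWModel

# For CTC fine-tuning, use SEWForCTC in place of Wav2Vec2ForCTC, as noted above.
feature_extractor = AutoFeatureExtractor.from_pretrained("asapp/sew-small-100k")
model = SEWModel.from_pretrained("asapp/sew-small-100k")

waveform = torch.zeros(16000)  # placeholder: replace with real audio sampled at 16 kHz
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state
print(hidden_states.shape)
```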
|
chaitanya97/german_pretrained
|
chaitanya97
| 2021-10-26T13:35:37Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: german_pretrained
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# german_pretrained
This model is a fine-tuned version of [flozi00/wav2vec-xlsr-german](https://huggingface.co/flozi00/wav2vec-xlsr-german) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9812
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 12.5229 | 5.0 | 5 | 12.9520 | 1.0 |
| 4.3782 | 10.0 | 10 | 5.5689 | 1.0 |
| 2.56 | 15.0 | 15 | 4.8410 | 1.0 |
| 2.2895 | 20.0 | 20 | 4.0380 | 1.0 |
| 1.872 | 25.0 | 25 | 3.9558 | 1.0 |
| 1.6992 | 30.0 | 30 | 3.9812 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu102
- Datasets 1.13.3
- Tokenizers 0.10.3
|
Jihyun22/bert-base-finetuned-nli
|
Jihyun22
| 2021-10-26T11:07:39Z | 17 | 3 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:klue",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
tags:
- generated_from_trainer
datasets:
- klue
metrics:
- accuracy
model_index:
- name: bert-base-finetuned-nli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: klue
type: klue
args: nli
metric:
name: Accuracy
type: accuracy
value: 0.756
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-finetuned-nli
This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1357
- Accuracy: 0.756
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 196 | 0.7357 | 0.156 |
| No log | 2.0 | 392 | 0.5952 | 0.0993 |
| 0.543 | 3.0 | 588 | 0.5630 | 0.099 |
| 0.543 | 4.0 | 784 | 0.5670 | 0.079 |
| 0.543 | 5.0 | 980 | 0.5795 | 0.078 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
owen99630/catexp2
|
owen99630
| 2021-10-26T04:58:10Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
| Label id | Category |
|---------:|----------|
| 0 | Anorexia |
| 1 | Anxiety |
| 2 | Bullying |
| 3 | Care |
| 4 | Creativity |
| 5 | Culture |
| 6 | Depression |
| 7 | Friends |
| 8 | Getting help |
| 9 | Happiness |
| 10 | Helping others |
| 11 | Helping yourself |
| 12 | Hope |
| 13 | Learning |
| 14 | Life Issues |
| 15 | Mental Health |
| 16 | Mental Health Matters |
| 17 | Mental health awareness |
| 18 | PTSD |
| 19 | Positivity |
| 20 | Resilience |
| 21 | Self-care |
| 22 | Sharing |
| 23 | Support |
| 24 | University |
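The table above appears to be the classifier's id-to-label mapping. A minimal usage sketch (not part of the original card; the label names returned by the hub config may be generic `LABEL_<i>` identifiers that map to the ids above):

```python
from transformers import pipeline

# Label ids correspond to the categories listed above (0 -> Anorexia, 1 -> Anxiety, ...)
classifier = pipeline("text-classification", model="owen99630/catexp2")
print(classifier("Reaching out for support really helped me feel less alone."))
```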
|
huggingtweets/theonion
|
huggingtweets
| 2021-10-26T04:42:42Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://www.huggingtweets.com/theonion/1635223358201/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/875392068125769732/yrN-1k0Y_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">The Onion</div>
<div style="text-align: center; font-size: 14px;">@theonion</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from The Onion.
| Data | The Onion |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 2 |
| Short tweets | 10 |
| Tweets kept | 3238 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/tl5cqc3f/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @theonion's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1y8p1w9v) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1y8p1w9v/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/theonion')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
kornesh/xlm-roberta-large
|
kornesh
| 2021-10-26T01:30:01Z | 139 | 0 |
transformers
|
[
"transformers",
"tf",
"xlm-roberta",
"feature-extraction",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
Converted for TensorFlow
```
!pip install transformers sentencepiece
from transformers import TFAutoModel, AutoTokenizer

name = "xlm-roberta-large"
!rm -rf local
!git clone https://huggingface.co/kornesh/"$name" local
model = TFAutoModel.from_pretrained(name, from_pt=True)
tokenizer = AutoTokenizer.from_pretrained(name)
model.save_pretrained("local")
tokenizer.save_pretrained("local")
!cd local/ && git lfs install && git add . && git commit -m "Initial commit" && git push
```
|
kornesh/xlm-roberta-base
|
kornesh
| 2021-10-26T01:25:22Z | 146 | 1 |
transformers
|
[
"transformers",
"tf",
"xlm-roberta",
"feature-extraction",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
Converted for TensorFlow
```
!pip install transformers sentencepiece
from transformers import TFAutoModel, AutoTokenizer
name = "xlm-roberta-base"
model = TFAutoModel.from_pretrained(name, from_pt=True)
tokenizer = AutoTokenizer.from_pretrained(name)
model.save_pretrained("local-xlm-roberta-base")
tokenizer.save_pretrained("local-xlm-roberta-base")
```
|
kornesh/indic-bert
|
kornesh
| 2021-10-26T01:03:15Z | 5 | 0 |
transformers
|
[
"transformers",
"tf",
"albert",
"feature-extraction",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
Converted for TensorFlow
```
!pip install transformers sentencepiece
from transformers import TFAutoModel, AutoTokenizer
name = "ai4bharat/indic-bert"
model = TFAutoModel.from_pretrained(name, from_pt=True)
tokenizer = AutoTokenizer.from_pretrained(name)
model.save_pretrained("local-indic-bert")
tokenizer.save_pretrained("local-indic-bert")
```
|
espnet/siddhana_slurp_new_asr_train_asr_conformer_raw_en_word_valid.acc.ave_10best
|
espnet
| 2021-10-25T23:23:39Z | 1 | 0 |
espnet
|
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:slurp",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
tags:
- espnet
- audio
- automatic-speech-recognition
language: en
datasets:
- slurp
license: cc-by-4.0
---
## ESPnet2 SLU pretrained model
### `siddhana/slurp_new_asr_train_asr_conformer_raw_en_word_valid.acc.ave_10best`
♻️ Imported from https://zenodo.org/record/5590384
This model was trained by siddhana using slurp/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
danielvasic/en_acnl_electra_pipeline
|
danielvasic
| 2021-10-25T18:45:15Z | 4 | 0 |
spacy
|
[
"spacy",
"token-classification",
"en",
"model-index",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
tags:
- spacy
- token-classification
language:
- en
model-index:
- name: en_acnl_electra_pipeline
results:
- task:
name: POS
type: token-classification
metrics:
- name: POS Accuracy
type: accuracy
value: 0.9769257272
- task:
name: SENTER
type: token-classification
metrics:
- name: SENTER Precision
type: precision
value: 0.9508884151
- name: SENTER Recall
type: recall
value: 0.94805839
- name: SENTER F Score
type: f_score
value: 0.9494712937
- task:
name: UNLABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Unlabeled Dependencies Accuracy
type: accuracy
value: 0.9577103137
- task:
name: LABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Labeled Dependencies Accuracy
type: accuracy
value: 0.9577103137
---
| Feature | Description |
| --- | --- |
| **Name** | `en_acnl_electra_pipeline` |
| **Version** | `0.0.1` |
| **spaCy** | `>=3.1.3,<3.2.0` |
| **Default Pipeline** | `transformer`, `tagger`, `parser` |
| **Components** | `transformer`, `tagger`, `parser` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | GPL |
| **Author** | Daniel Vasić |
### Label Scheme
<details>
<summary>View label scheme (87 labels for 2 components)</summary>
| Component | Labels |
| --- | --- |
| **`tagger`** | `$`, `''`, `,`, `-LRB-`, `-RRB-`, `.`, `:`, `ADD`, `AFX`, `CC`, `CD`, `DT`, `EX`, `FW`, `HYPH`, `IN`, `JJ`, `JJR`, `JJS`, `LS`, `MD`, `NFP`, `NN`, `NNP`, `NNPS`, `NNS`, `PDT`, `POS`, `PRP`, `PRP$`, `RB`, `RBR`, `RBS`, `RP`, `SYM`, `TO`, `UH`, `VB`, `VBD`, `VBG`, `VBN`, `VBP`, `VBZ`, `VERB`, `WDT`, `WP`, `WP$`, `WRB`, `XX`, ```` |
| **`parser`** | `ROOT`, `acl`, `acomp`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `auxpass`, `case`, `cc`, `ccomp`, `compound`, `conj`, `dative`, `dep`, `det`, `dobj`, `intj`, `mark`, `meta`, `neg`, `nmod`, `npadvmod`, `nummod`, `parataxis`, `pcomp`, `pobj`, `poss`, `preconj`, `predet`, `prep`, `prt`, `punct`, `quantmod`, `relcl`, `xcomp` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `TAG_ACC` | 97.69 |
| `DEP_UAS` | 95.77 |
| `DEP_LAS` | 94.52 |
| `SENTS_P` | 95.09 |
| `SENTS_R` | 94.81 |
| `SENTS_F` | 94.95 |
| `TRANSFORMER_LOSS` | 6123357.72 |
| `TAGGER_LOSS` | 338995.26 |
| `PARSER_LOSS` | 4101825.66 |
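A minimal usage sketch (an assumption, not from the original card): it presumes the packaged pipeline has been installed as a Python package so that spaCy can load it by name.

```python
import spacy

# Assumes the packaged pipeline wheel released with this model has been installed
nlp = spacy.load("en_acnl_electra_pipeline")
doc = nlp("The quick brown fox jumps over the lazy dog.")
for token in doc:
    print(token.text, token.tag_, token.dep_, token.head.text)
```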
|
kwang2049/TSDAE-cqadupstack
|
kwang2049
| 2021-10-25T16:18:29Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"arxiv:2104.06979",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
# kwang2049/TSDAE-cqadupstack
This is a model from the paper ["TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning"](https://arxiv.org/abs/2104.06979). This model was only trained with the TSDAE objective on cqadupstack in an unsupervised manner. Training procedure of this model:
1. Initialized with [bert-base-uncased](https://huggingface.co/bert-base-uncased);
2. Unsupervised training on cqadupstack with the TSDAE objective;
The pooling method is CLS-pooling.
## Usage
To use this model, a convenient way is through [SentenceTransformers](https://github.com/UKPLab/sentence-transformers). Please install it via:
```bash
pip install sentence-transformers
```
And then load the model and use it to encode sentences:
```python
from sentence_transformers import SentenceTransformer, models
dataset = 'cqadupstack'
model_name_or_path = f'kwang2049/TSDAE-{dataset}'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
sentence_embeddings = model.encode(['This is the first sentence.', 'This is the second one.'])
```
## Evaluation
To evaluate the model against the datasets used in the paper, please install our evaluation toolkit [USEB](https://github.com/UKPLab/useb):
```bash
pip install useb # Or git clone and pip install .
python -m useb.downloading all # Download both training and evaluation data
```
And then do the evaluation:
```python
from sentence_transformers import SentenceTransformer, models
import torch
from useb import run_on
dataset = 'cqadupstack'
model_name_or_path = f'kwang2049/TSDAE-{dataset}'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
@torch.no_grad()
def semb_fn(sentences) -> torch.Tensor:
return torch.Tensor(model.encode(sentences, show_progress_bar=False))
result = run_on(
dataset,
semb_fn=semb_fn,
eval_type='test',
data_eval_path='data-eval'
)
```
## Training
Please refer to [the page of TSDAE training](https://github.com/UKPLab/sentence-transformers/tree/master/examples/unsupervised_learning/TSDAE) in SentenceTransformers.
## Cite & Authors
If you use the code for evaluation, feel free to cite our publication [TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning](https://arxiv.org/abs/2104.06979):
```bibtex
@article{wang-2021-TSDAE,
title = "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning",
author = "Wang, Kexin and Reimers, Nils and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.06979",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.06979",
}
```
|
patrickvonplaten/wav2vec2-base-repro-timit
|
patrickvonplaten
| 2021-10-25T16:17:50Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"timit_asr",
"generated_from_trainer",
"dataset:timit_asr",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
tags:
- automatic-speech-recognition
- timit_asr
- generated_from_trainer
datasets:
- timit_asr
model-index:
- name: wav2vec2-base-repro-timit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-repro-timit
This model is a fine-tuned version of [patrickvonplaten/wav2vec2-base-repro-960h-libri-85k-steps](https://huggingface.co/patrickvonplaten/wav2vec2-base-repro-960h-libri-85k-steps) on the TIMIT_ASR - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8562
- Wer: 0.5484
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.9793 | 0.69 | 100 | 5.4532 | 1.0 |
| 2.9066 | 1.38 | 200 | 2.9070 | 1.0 |
| 2.2562 | 2.07 | 300 | 2.0323 | 1.0 |
| 1.5273 | 2.76 | 400 | 1.1510 | 0.8001 |
| 1.1085 | 3.45 | 500 | 0.9521 | 0.7053 |
| 0.813 | 4.14 | 600 | 0.8617 | 0.6702 |
| 0.8434 | 4.83 | 700 | 0.8068 | 0.6393 |
| 0.9631 | 5.52 | 800 | 0.7863 | 0.6248 |
| 0.707 | 6.21 | 900 | 0.7476 | 0.5973 |
| 0.5568 | 6.9 | 1000 | 0.7350 | 0.5911 |
| 0.6171 | 7.59 | 1100 | 0.7171 | 0.5841 |
| 0.7011 | 8.28 | 1200 | 0.7318 | 0.5798 |
| 0.5546 | 8.97 | 1300 | 0.7447 | 0.5767 |
| 0.4278 | 9.66 | 1400 | 0.7481 | 0.5650 |
| 0.3576 | 10.34 | 1500 | 0.7443 | 0.5713 |
| 0.5506 | 11.03 | 1600 | 0.7574 | 0.5664 |
| 0.4127 | 11.72 | 1700 | 0.8043 | 0.5631 |
| 0.3251 | 12.41 | 1800 | 0.7738 | 0.5550 |
| 0.3119 | 13.1 | 1900 | 0.7829 | 0.5516 |
| 0.4371 | 13.79 | 2000 | 0.8025 | 0.5556 |
| 0.3772 | 14.48 | 2100 | 0.8451 | 0.5559 |
| 0.2942 | 15.17 | 2200 | 0.8300 | 0.5556 |
| 0.2503 | 15.86 | 2300 | 0.8417 | 0.5541 |
| 0.3671 | 16.55 | 2400 | 0.8568 | 0.5528 |
| 0.3867 | 17.24 | 2500 | 0.8521 | 0.5510 |
| 0.2614 | 17.93 | 2600 | 0.8479 | 0.5523 |
| 0.2441 | 18.62 | 2700 | 0.8558 | 0.5494 |
| 0.3059 | 19.31 | 2800 | 0.8553 | 0.5474 |
| 0.3734 | 20.0 | 2900 | 0.8562 | 0.5484 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.8.1
- Datasets 1.14.1.dev0
- Tokenizers 0.10.3
|
kwang2049/TSDAE-twitterpara2nli_stsb
|
kwang2049
| 2021-10-25T16:14:49Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"arxiv:2104.06979",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
# kwang2049/TSDAE-twitterpara2nli_stsb
This is a model from the paper ["TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning"](https://arxiv.org/abs/2104.06979). This model adapts the knowledge from the NLI and STSb data to the specific domain twitterpara. Training procedure of this model:
1. Initialized with [bert-base-uncased](https://huggingface.co/bert-base-uncased);
2. Unsupervised training on twitterpara with the TSDAE objective;
3. Supervised training on the NLI data with cross-entropy loss;
4. Supervised training on the STSb data with MSE loss.
The pooling method is CLS-pooling.
## Usage
To use this model, a convenient way is through [SentenceTransformers](https://github.com/UKPLab/sentence-transformers). Please install it via:
```bash
pip install sentence-transformers
```
And then load the model and use it to encode sentences:
```python
from sentence_transformers import SentenceTransformer, models
dataset = 'twitterpara'
model_name_or_path = f'kwang2049/TSDAE-{dataset}2nli_stsb'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
sentence_embeddings = model.encode(['This is the first sentence.', 'This is the second one.'])
```
## Evaluation
To evaluate the model against the datasets used in the paper, please install our evaluation toolkit [USEB](https://github.com/UKPLab/useb):
```bash
pip install useb # Or git clone and pip install .
python -m useb.downloading all # Download both training and evaluation data
```
And then do the evaluation:
```python
from sentence_transformers import SentenceTransformer, models
import torch
from useb import run_on
dataset = 'twitterpara'
model_name_or_path = f'kwang2049/TSDAE-{dataset}2nli_stsb'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
@torch.no_grad()
def semb_fn(sentences) -> torch.Tensor:
return torch.Tensor(model.encode(sentences, show_progress_bar=False))
result = run_on(
dataset,
semb_fn=semb_fn,
eval_type='test',
data_eval_path='data-eval'
)
```
## Training
Please refer to [the page of TSDAE training](https://github.com/UKPLab/sentence-transformers/tree/master/examples/unsupervised_learning/TSDAE) in SentenceTransformers.
## Cite & Authors
If you use the code for evaluation, feel free to cite our publication [TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning](https://arxiv.org/abs/2104.06979):
```bibtex
@article{wang-2021-TSDAE,
title = "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning",
author = "Wang, Kexin and Reimers, Nils and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.06979",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.06979",
}
```
|
kwang2049/TSDAE-cqadupstack2nli_stsb
|
kwang2049
| 2021-10-25T16:14:19Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"arxiv:2104.06979",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
# kwang2049/TSDAE-cqadupstack2nli_stsb
This is a model from the paper ["TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning"](https://arxiv.org/abs/2104.06979). This model adapts the knowledge from the NLI and STSb data to the specific domain cqadupstack. Training procedure of this model:
1. Initialized with [bert-base-uncased](https://huggingface.co/bert-base-uncased);
2. Unsupervised training on cqadupstack with the TSDAE objective;
3. Supervised training on the NLI data with cross-entropy loss;
4. Supervised training on the STSb data with MSE loss.
The pooling method is CLS-pooling.
## Usage
To use this model, a convenient way is through [SentenceTransformers](https://github.com/UKPLab/sentence-transformers). Please install it via:
```bash
pip install sentence-transformers
```
And then load the model and use it to encode sentences:
```python
from sentence_transformers import SentenceTransformer, models
dataset = 'cqadupstack'
model_name_or_path = f'kwang2049/TSDAE-{dataset}2nli_stsb'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
sentence_embeddings = model.encode(['This is the first sentence.', 'This is the second one.'])
```
## Evaluation
To evaluate the model against the datasets used in the paper, please install our evaluation toolkit [USEB](https://github.com/UKPLab/useb):
```bash
pip install useb # Or git clone and pip install .
python -m useb.downloading all # Download both training and evaluation data
```
And then do the evaluation:
```python
from sentence_transformers import SentenceTransformer, models
import torch
from useb import run_on
dataset = 'cqadupstack'
model_name_or_path = f'kwang2049/TSDAE-{dataset}2nli_stsb'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
@torch.no_grad()
def semb_fn(sentences) -> torch.Tensor:
return torch.Tensor(model.encode(sentences, show_progress_bar=False))
result = run_on(
dataset,
semb_fn=semb_fn,
eval_type='test',
data_eval_path='data-eval'
)
```
## Training
Please refer to [the page of TSDAE training](https://github.com/UKPLab/sentence-transformers/tree/master/examples/unsupervised_learning/TSDAE) in SentenceTransformers.
## Cite & Authors
If you use the code for evaluation, feel free to cite our publication [TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning](https://arxiv.org/abs/2104.06979):
```bibtex
@article{wang-2021-TSDAE,
title = "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning",
author = "Wang, Kexin and Reimers, Nils and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.06979",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.06979",
}
```
|
kwang2049/TSDAE-askubuntu2nli_stsb
|
kwang2049
| 2021-10-25T16:13:34Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"arxiv:2104.06979",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
# kwang2049/TSDAE-askubuntu2nli_stsb
This is a model from the paper ["TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning"](https://arxiv.org/abs/2104.06979). This model adapts the knowledge from the NLI and STSb data to the specific domain AskUbuntu. Training procedure of this model:
1. Initialized with [bert-base-uncased](https://huggingface.co/bert-base-uncased);
2. Unsupervised training on AskUbuntu with the TSDAE objective;
3. Supervised training on the NLI data with cross-entropy loss;
4. Supervised training on the STSb data with MSE loss.
The pooling method is CLS-pooling.
## Usage
To use this model, a convenient way is through [SentenceTransformers](https://github.com/UKPLab/sentence-transformers). Please install it via:
```bash
pip install sentence-transformers
```
And then load the model and use it to encode sentences:
```python
from sentence_transformers import SentenceTransformer, models
dataset = 'askubuntu'
model_name_or_path = f'kwang2049/TSDAE-{dataset}2nli_stsb'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
sentence_embeddings = model.encode(['This is the first sentence.', 'This is the second one.'])
```
## Evaluation
To evaluate the model against the datasets used in the paper, please install our evaluation toolkit [USEB](https://github.com/UKPLab/useb):
```bash
pip install useb # Or git clone and pip install .
python -m useb.downloading all # Download both training and evaluation data
```
And then do the evaluation:
```python
from sentence_transformers import SentenceTransformer, models
import torch
from useb import run_on
dataset = 'askubuntu'
model_name_or_path = f'kwang2049/TSDAE-{dataset}2nli_stsb'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
@torch.no_grad()
def semb_fn(sentences) -> torch.Tensor:
return torch.Tensor(model.encode(sentences, show_progress_bar=False))
result = run_on(
dataset,
semb_fn=semb_fn,
eval_type='test',
data_eval_path='data-eval'
)
```
## Training
Please refer to [the page of TSDAE training](https://github.com/UKPLab/sentence-transformers/tree/master/examples/unsupervised_learning/TSDAE) in SentenceTransformers.
## Cite & Authors
If you use the code for evaluation, feel free to cite our publication [TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning](https://arxiv.org/abs/2104.06979):
```bibtex
@article{wang-2021-TSDAE,
title = "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning",
author = "Wang, Kexin and Reimers, Nils and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.06979",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.06979",
}
```
|
napoler/bart-chinese-6-960-words-pkuseg
|
napoler
| 2021-10-25T15:05:51Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
# Usage
This model was trained on top of uer/bart-chinese-6-960-cluecorpussmall. The amount of training data is not very large, but the default tokenization was changed.
It uses pkuseg for word segmentation and disables `do_basic_tokenize` in `BertTokenizer`: if `do_basic_tokenize` is not disabled, ordinary words are split character by character; disabling it lets you plug in your own segmentation scheme.
pip install git+https://github.com/napoler/tkit-AutoTokenizerPosition
```python
import pkuseg
from transformers import BertTokenizer
from tkitAutoTokenizerPosition.AutoPos import AutoPos

seg = pkuseg.pkuseg(model_name='medicine')  # the matching domain-specific model is downloaded automatically
tokenizer = BertTokenizer.from_pretrained("uer/chinese_roberta_L-2_H-128", do_basic_tokenize=False)
ATP = AutoPos(seg, tokenizer)

# clean up and tokenize the text (this example sentence produces the output shown below)
text = "他们的伤害,以及陷阱能力的组合,猎人对于任何团队都是最好的拉怪者."
ATP.getTokenize(text)
```
The segmentation result looks like this:
```
['他', '##们', '的', '伤', '##害', ',', '以', '##及', '陷', '##阱', '能', '##力', '的', '组', '##合', ',', '猎', '##人', '对', '##于', '任', '##何', '团', '##队', '都', '是', '最', '##好', '的', '拉', '##怪', '##者', '.'], 'cut': ['他们', '的', '伤害', ',', '以及', '陷阱', '能力', '的', '组合', ',', '猎人', '对于', '任何', '团队', '都', '是', '最好', '的', '拉怪者', '.']
```
https://www.kaggle.com/terrychanorg/napolerbartchinese6960wordspkuseg
https://www.kaggle.com/terrychanorg/buliddataforbert-7803feff2
https://www.kaggle.com/terrychanorg/bart-notebook8wewew6eeb0f8af
https://www.kaggle.com/terrychanorg/fork-of-bart-notebook8wewew6eeb0f8af/data?scriptVersionId=77962540
|
patrickvonplaten/wav2vec2-base-repro-960h-libri-85k-steps
|
patrickvonplaten
| 2021-10-25T13:15:45Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
https://wandb.ai/patrickvonplaten/test/reports/Wav2Vec2-Base--VmlldzoxMTUyODQ0?accessToken=rg6e8u9yizx964k8q47zctq1m4afpvtn1i3qi9exgdmzip6xwkfzvagfajpzj55n
|
teacookies/autonlp-more_fine_tune_24465520-26265900
|
teacookies
| 2021-10-25T09:51:20Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"autonlp",
"unk",
"dataset:teacookies/autonlp-data-more_fine_tune_24465520",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
tags:
- autonlp
- question-answering
language: unk
widget:
- text: "Who loves AutoNLP?"
context: "Everyone loves AutoNLP"
datasets:
- teacookies/autonlp-data-more_fine_tune_24465520
co2_eq_emissions: 123.16270720220912
---
# Model Trained Using AutoNLP
- Problem type: Extractive Question Answering
- Model ID: 26265900
- CO2 Emissions (in grams): 123.16270720220912
## Validation Metrics
- Loss: 0.6387976408004761
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-more_fine_tune_24465520-26265900
```
Or Python API:
```
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265900", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265900", use_auth_token=True)
question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP"
inputs = tokenizer(question, text, return_tensors='pt')
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
```
|
teacookies/autonlp-more_fine_tune_24465520-26265908
|
teacookies
| 2021-10-25T09:36:35Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"autonlp",
"unk",
"dataset:teacookies/autonlp-data-more_fine_tune_24465520",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
tags:
- autonlp
- question-answering
language: unk
widget:
- text: "Who loves AutoNLP?"
context: "Everyone loves AutoNLP"
datasets:
- teacookies/autonlp-data-more_fine_tune_24465520
co2_eq_emissions: 96.32087452115675
---
# Model Trained Using AutoNLP
- Problem type: Extractive Question Answering
- Model ID: 26265908
- CO2 Emissions (in grams): 96.32087452115675
## Validation Metrics
- Loss: 0.5696008801460266
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-more_fine_tune_24465520-26265908
```
Or Python API:
```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265908", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265908", use_auth_token=True)

question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP"
inputs = tokenizer(question, text, return_tensors='pt')

# Optional labels: passing start/end positions makes the model also return a loss
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
```
|
teacookies/autonlp-more_fine_tune_24465520-26265904
|
teacookies
| 2021-10-25T09:36:11Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"autonlp",
"unk",
"dataset:teacookies/autonlp-data-more_fine_tune_24465520",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
tags:
- autonlp
- question-answering
language: unk
widget:
- text: "Who loves AutoNLP?"
context: "Everyone loves AutoNLP"
datasets:
- teacookies/autonlp-data-more_fine_tune_24465520
co2_eq_emissions: 108.63800043275934
---
# Model Trained Using AutoNLP
- Problem type: Extractive Question Answering
- Model ID: 26265904
- CO2 Emissions (in grams): 108.63800043275934
## Validation Metrics
- Loss: 0.5807144045829773
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-more_fine_tune_24465520-26265904
```
Or Python API:
```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265904", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265904", use_auth_token=True)

question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP"
inputs = tokenizer(question, text, return_tensors='pt')

# Optional labels: passing start/end positions makes the model also return a loss
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
```
|
teacookies/autonlp-more_fine_tune_24465520-26265907
|
teacookies
| 2021-10-25T09:35:36Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"autonlp",
"unk",
"dataset:teacookies/autonlp-data-more_fine_tune_24465520",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
tags:
- autonlp
- question-answering
language: unk
widget:
- text: "Who loves AutoNLP?"
context: "Everyone loves AutoNLP"
datasets:
- teacookies/autonlp-data-more_fine_tune_24465520
co2_eq_emissions: 103.5636883689371
---
# Model Trained Using AutoNLP
- Problem type: Extractive Question Answering
- Model ID: 26265907
- CO2 Emissions (in grams): 103.5636883689371
## Validation Metrics
- Loss: 0.6072460412979126
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-more_fine_tune_24465520-26265907
```
Or Python API:
```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265907", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265907", use_auth_token=True)

question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP"
inputs = tokenizer(question, text, return_tensors='pt')

# Optional labels: passing start/end positions makes the model also return a loss
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
```
|
teacookies/autonlp-more_fine_tune_24465520-26265911
|
teacookies
| 2021-10-25T09:35:36Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"autonlp",
"unk",
"dataset:teacookies/autonlp-data-more_fine_tune_24465520",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
tags:
- autonlp
- question-answering
language: unk
widget:
- text: "Who loves AutoNLP?"
context: "Everyone loves AutoNLP"
datasets:
- teacookies/autonlp-data-more_fine_tune_24465520
co2_eq_emissions: 97.58591836686978
---
# Model Trained Using AutoNLP
- Problem type: Extractive Question Answering
- Model ID: 26265911
- CO2 Emissions (in grams): 97.58591836686978
## Validation Metrics
- Loss: 6.2383246421813965
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-more_fine_tune_24465520-26265911
```
Or Python API:
```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265911", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265911", use_auth_token=True)

question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP"
inputs = tokenizer(question, text, return_tensors='pt')

# Optional labels: passing start/end positions makes the model also return a loss
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
```
|
teacookies/autonlp-more_fine_tune_24465520-26265898
|
teacookies
| 2021-10-25T09:22:22Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"autonlp",
"unk",
"dataset:teacookies/autonlp-data-more_fine_tune_24465520",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
tags:
- autonlp
- question-answering
language: unk
widget:
- text: "Who loves AutoNLP?"
context: "Everyone loves AutoNLP"
datasets:
- teacookies/autonlp-data-more_fine_tune_24465520
co2_eq_emissions: 82.78379967029494
---
# Model Trained Using AutoNLP
- Problem type: Extractive Question Answering
- Model ID: 26265898
- CO2 Emissions (in grams): 82.78379967029494
## Validation Metrics
- Loss: 0.5732079148292542
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-more_fine_tune_24465520-26265898
```
Or Python API:
```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265898", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265898", use_auth_token=True)

question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP"
inputs = tokenizer(question, text, return_tensors='pt')

# Optional labels: passing start/end positions makes the model also return a loss
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
```
|
teacookies/autonlp-more_fine_tune_24465520-26265906
|
teacookies
| 2021-10-25T09:22:17Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"autonlp",
"unk",
"dataset:teacookies/autonlp-data-more_fine_tune_24465520",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
tags:
- autonlp
- question-answering
language: unk
widget:
- text: "Who loves AutoNLP?"
context: "Everyone loves AutoNLP"
datasets:
- teacookies/autonlp-data-more_fine_tune_24465520
co2_eq_emissions: 83.00580438705762
---
# Model Trained Using AutoNLP
- Problem type: Extractive Question Answering
- Model ID: 26265906
- CO2 Emissions (in grams): 83.00580438705762
## Validation Metrics
- Loss: 0.5259918570518494
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-more_fine_tune_24465520-26265906
```
Or Python API:
```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265906", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265906", use_auth_token=True)

question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP"
inputs = tokenizer(question, text, return_tensors='pt')

# Optional labels: passing start/end positions makes the model also return a loss
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
```
|
teacookies/autonlp-more_fine_tune_24465520-26265902
|
teacookies
| 2021-10-25T09:22:00Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"autonlp",
"unk",
"dataset:teacookies/autonlp-data-more_fine_tune_24465520",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
tags:
- autonlp
- question-answering
language: unk
widget:
- text: "Who loves AutoNLP?"
context: "Everyone loves AutoNLP"
datasets:
- teacookies/autonlp-data-more_fine_tune_24465520
co2_eq_emissions: 83.78453848505326
---
# Model Trained Using AutoNLP
- Problem type: Extractive Question Answering
- Model ID: 26265902
- CO2 Emissions (in grams): 83.78453848505326
## Validation Metrics
- Loss: 0.5470030903816223
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-more_fine_tune_24465520-26265902
```
Or Python API:
```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265902", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265902", use_auth_token=True)

question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP"
inputs = tokenizer(question, text, return_tensors='pt')

# Optional labels: passing start/end positions makes the model also return a loss
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
```
|
tftransformers/t5-small
|
tftransformers
| 2021-10-25T08:13:06Z | 4 | 0 |
transformers
|
[
"transformers",
"summarization",
"translation",
"en",
"fr",
"ro",
"de",
"dataset:c4",
"arxiv:1910.10683",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:05Z |
---
language:
- en
- fr
- ro
- de
datasets:
- c4
tags:
- summarization
- translation
license: apache-2.0
---
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html)
Pretraining Dataset: [C4](https://huggingface.co/datasets/c4)
Other Community Checkpoints: [here](https://huggingface.co/models?search=t5)
Paper: [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf)
Authors: *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*
## Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.

## Usage
```
from tf_transformers.models import T5Model
# Any T5 model (t5-small, t5-base, t5-large etc)
model_name = 't5-small'
model = T5Model.from_pretrained(model_name)
```
|
yseop/distilbert-base-financial-relation-extraction
|
yseop
| 2021-10-25T07:33:13Z | 24 | 5 |
transformers
|
[
"transformers",
"pytorch",
"feature-extraction",
"text-classification",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
inference: true
pipeline_tag: text-classification
tags:
- feature-extraction
- text-classification
library: pytorch
---
<div style="clear: both;">
<div style="float: left; margin-right 1em;">
<h1><strong>FReE (Financial Relation Extraction)</strong></h1>
</div>
<div>
<h2><img src="https://pbs.twimg.com/profile_images/1333760924914753538/fQL4zLUw_400x400.png" alt="" width="25" height="25"></h2>
</div>
</div>
We present FReE, a [DistilBERT](https://huggingface.co/distilbert-base-uncased) base model fine-tuned on a custom financial dataset for financial relation type detection and classification.
## Process
The model detects whether a relationship exists between financial terms and, when one is present, classifies its type. Example use cases:
* An A-B trust is a joint trust created by a married couple for the purpose of minimizing estate taxes. (<em>Relationship **exists**, type: **is**</em>)
* There are no withdrawal penalties. (<em>Relationship **does not exist**, type: **x**</em>)
## Data
The data consists of financial definitions collected from different sources (Wikimedia, IFRS, Investopedia) for financial indicators. Each definition has been split up into sentences, and term relationships in a sentence have been extracted using the [Stanford Open Information Extraction](https://nlp.stanford.edu/software/openie.html) module.
A typical row in the dataset consists of a definition sentence and its corresponding relationship label.
The labels were restricted to the five most frequently identified relationships, namely: **x** (no relationship), **has**, **is in**, **is**, and **are**.
## Model
The model used is a standard DistilBERT-base transformer model from the Hugging Face library. See [HUGGING FACE DistilBERT base model](https://huggingface.co/distilbert-base-uncased) for more details about the model.
In addition, the model has been pretrained to initialize weights that would otherwise be unused if loaded from an existing pretrained stock model.
## Metrics
The evaluation metrics used are: Precision, Recall and F1-score. The following is the classification report on the test set.
| relation | precision | recall | f1-score | support |
| ------------- |:-------------:|:-------------:|:-------------:| -----:|
| has | 0.7416 | 0.9674 | 0.8396 | 2362 |
| is in | 0.7813 | 0.7925 | 0.7869 | 2362 |
| is | 0.8650 | 0.6863 | 0.7653 | 2362 |
| are | 0.8365 | 0.8493 | 0.8429 | 2362 |
| x | 0.9515 | 0.8302 | 0.8867 | 2362 |
| | | | | |
| macro avg | 0.8352 | 0.8251 | 0.8243 | 11810 |
| weighted avg | 0.8352 | 0.8251 | 0.8243 | 11810 |
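## Usage
The original card stops at the metrics, so the following is only a minimal sketch: it assumes the checkpoint works with the standard 🤗 Transformers text-classification pipeline, and the expected output label is illustrative rather than taken from the card.
```python
from transformers import pipeline

# Assumed usage: load the checkpoint through the generic text-classification pipeline
classifier = pipeline(
    "text-classification",
    model="yseop/distilbert-base-financial-relation-extraction",
)

sentence = (
    "An A-B trust is a joint trust created by a married couple "
    "for the purpose of minimizing estate taxes."
)
print(classifier(sentence))  # e.g. a relation label such as "is" with a confidence score
```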
|
delvan/DialoGPT-medium-DwightV1
|
delvan
| 2021-10-24T20:29:11Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
tags:
- conversational
---
# DialoGPT-medium-based model of Dwight Schrute, trained with 10 context lines of history for 20 epochs.
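No usage snippet is shipped with the card; below is a minimal sketch of a single chat turn, assuming the standard DialoGPT generation recipe applies to this checkpoint (the prompt is illustrative).
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "delvan/DialoGPT-medium-DwightV1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Encode one user message, terminated by the end-of-sequence token
input_ids = tokenizer.encode("How do you grow the best beets?" + tokenizer.eos_token, return_tensors="pt")

# Generate a reply and decode only the newly produced tokens
reply_ids = model.generate(input_ids, max_length=200, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```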
|
ydshieh/vit-gpt2-coco-en-ckpts
|
ydshieh
| 2021-10-24T12:01:42Z | 32 | 11 |
generic
|
[
"generic",
"pytorch",
"jax",
"tensorboard",
"vision-encoder-decoder",
"image-classification",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z |
---
tags:
- image-classification
library_name: generic
---
## Example
The model is by no means a state-of-the-art model, but nevertheless
produces reasonable image captioning results. It was mainly fine-tuned
as a proof-of-concept for the 🤗 FlaxVisionEncoderDecoder Framework.
The model can be used as follows:
```python
import requests
from PIL import Image
from transformers import ViTFeatureExtractor, AutoTokenizer, FlaxVisionEncoderDecoderModel

loc = "ydshieh/vit-gpt2-coco-en"

feature_extractor = ViTFeatureExtractor.from_pretrained(loc)
tokenizer = AutoTokenizer.from_pretrained(loc)
model = FlaxVisionEncoderDecoderModel.from_pretrained(loc)

# We will verify our results on an image of cute cats
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
with Image.open(requests.get(url, stream=True).raw) as img:
    pixel_values = feature_extractor(images=img, return_tensors="np").pixel_values

def generate_step(pixel_values):
    output_ids = model.generate(pixel_values, max_length=16, num_beams=4).sequences
    preds = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
    preds = [pred.strip() for pred in preds]
    return preds

preds = generate_step(pixel_values)
print(preds)
# should produce
# ['a cat laying on top of a couch next to another cat']
```
|
tftransformers/gpt2-medium
|
tftransformers
| 2021-10-24T08:42:17Z | 3 | 0 |
transformers
|
[
"transformers",
"exbert",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language: en
tags:
- exbert
license: mit
---
# GPT-2
Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in
[this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).
Disclaimer: The team releasing GPT-2 also wrote a
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model. Content from this model card
has been written by the Hugging Face team to complete the information they provided and give specific examples of bias.
## Model description
GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the
predictions for the token `i` only uses the inputs from `1` to `i` but not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a
prompt.
## Intended uses & limitations
You can use the raw model for text generation or fine-tune it to a downstream task. See the
[model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you.
### How to use
You can use this model with the `GPT2Model` class from tf_transformers. The snippet below loads the checkpoint and runs it on a short piece of text:
```python
from tf_transformers.models import GPT2Model
from transformers import GPT2Tokenizer
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium')
model = GPT2Model.from_pretrained("gpt2-medium")
text = "Replace me by any text you'd like."
inputs_tf = {}
inputs = tokenizer(text, return_tensors='tf')
inputs_tf["input_ids"] = inputs["input_ids"]
outputs_tf = model(inputs_tf)
```
### Limitations and bias
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of
unfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases):
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases
> that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do
> not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a
> study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,
> and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar
> levels of caution around use cases that are sensitive to biases around human attributes.
## Training data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights
40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
[here](https://github.com/openai/gpt-2/blob/master/domains.txt).
## Training procedure
### Preprocessing
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
The larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact
details of training.
## Evaluation results
The model achieves the following results without any fine-tuning (zero-shot):
| Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW |
|:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:|
| (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) |
| | 35.13 | 45.99 | 87.65 | 83.4 | 29.41 | 65.85 | 1.16 | 1.17 | 37.50 | 75.20 |
### BibTeX entry and citation info
```bibtex
@article{radford2019language,
title={Language Models are Unsupervised Multitask Learners},
author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
year={2019}
}
```
<a href="https://huggingface.co/exbert/?model=gpt2">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|