pipeline_tag
stringclasses 48
values | library_name
stringclasses 198
values | text
stringlengths 1
900k
| metadata
stringlengths 2
438k
| id
stringlengths 5
122
| last_modified
null | tags
listlengths 1
1.84k
| sha
null | created_at
stringlengths 25
25
| arxiv
listlengths 0
201
| languages
listlengths 0
1.83k
| tags_str
stringlengths 17
9.34k
| text_str
stringlengths 0
389k
| text_lists
listlengths 0
722
| processed_texts
listlengths 1
723
|
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
text-classification
|
transformers
|
# Roberta Large STS-B
This model is a fine-tuned RoBERTa (large) model on the STS-B task.
It was trained with the following parameters:
!python /content/transformers/examples/text-classification/run_glue.py \
--model_type roberta \
--model_name_or_path roberta-large \
--task_name STS-B \
--do_train \
--do_eval \
--do_lower_case \
--data_dir /content/glue_data/STS-B/ \
--max_seq_length 128 \
--per_gpu_eval_batch_size=8 \
--per_gpu_train_batch_size=8 \
--learning_rate 2e-5 \
--num_train_epochs 3.0 \
--output_dir /content/roberta-sts-b
## How to run
```python
import toolz
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load the fine-tuned checkpoint (single-output regression head for STS-B).
tokenizer = AutoTokenizer.from_pretrained("SparkBeyond/roberta-large-sts-b")
model = AutoModelForSequenceClassification.from_pretrained("SparkBeyond/roberta-large-sts-b").cuda()

batch_size = 6

def roberta_similarity_batches(to_predict):
    # Split the input into batches (partition_all keeps the final, possibly smaller, batch).
    batches = toolz.partition_all(batch_size, to_predict)
    similarity_scores = []
    for batch in batches:
        sentences = [(pair["sent1"], pair["sent2"]) for pair in batch]
        batch_scores = similarity_roberta(model, tokenizer, sentences)
        similarity_scores = similarity_scores + batch_scores[0].cpu().squeeze(dim=1).tolist()
    return similarity_scores

def similarity_roberta(model, tokenizer, sent_pairs):
    # Tokenize the sentence pairs and run a forward pass on the GPU.
    batch_token = tokenizer(sent_pairs, padding='max_length', truncation=True, max_length=500)
    res = model(torch.tensor(batch_token['input_ids']).cuda(),
                attention_mask=torch.tensor(batch_token["attention_mask"]).cuda())
    return res

similarity_roberta(model, tokenizer,
                   [('NEW YORK--(BUSINESS WIRE)--Rosen Law Firm, a global investor rights law firm, announces it is investigating potential securities claims on behalf of shareholders of Vale S.A. ( VALE ) resulting from allegations that Vale may have issued materially misleading business information to the investing public',
                     'EQUITY ALERT: Rosen Law Firm Announces Investigation of Securities Claims Against Vale S.A. – VALE')])
```
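A brief usage sketch for the batching helper above; the sentence pairs are made-up examples, not taken from STS-B:
```python
# Hypothetical sentence pairs; the keys must match what roberta_similarity_batches expects.
pairs = [
    {"sent1": "A man is playing a guitar.", "sent2": "Someone is playing an instrument."},
    {"sent1": "The weather is sunny today.", "sent2": "Stock markets fell sharply."},
]

scores = roberta_similarity_batches(pairs)
print(scores)  # one similarity score per pair, on the STS-B regression scale (roughly 0-5)
```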
|
{}
|
SparkBeyond/roberta-large-sts-b
| null |
[
"transformers",
"pytorch",
"jax",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #jax #roberta #text-classification #autotrain_compatible #endpoints_compatible #region-us
|
# Roberta Large STS-B
This model is a fine-tuned RoBERTa (large) model on the STS-B task.
It was trained with the following parameters:
!python /content/transformers/examples/text-classification/run_glue.py \
--model_type roberta \
--model_name_or_path roberta-large \
--task_name STS-B \
--do_train \
--do_eval \
--do_lower_case \
--data_dir /content/glue_data/STS-B/ \
--max_seq_length 128 \
--per_gpu_eval_batch_size=8 \
--per_gpu_train_batch_size=8 \
--learning_rate 2e-5 \
--num_train_epochs 3.0 \
--output_dir /content/roberta-sts-b
## How to run
|
[
"# Roberta Large STS-B\n\nThis model is a fine tuned RoBERTA model over STS-B.\nIt was trained with these params:\n!python /content/transformers/examples/text-classification/run_glue.py \\\n --model_type roberta \\\n --model_name_or_path roberta-large \\\n --task_name STS-B \\\n --do_train \\\n --do_eval \\\n --do_lower_case \\\n --data_dir /content/glue_data/STS-B/ \\\n --max_seq_length 128 \\\n --per_gpu_eval_batch_size=8 \\\n --per_gpu_train_batch_size=8 \\\n --learning_rate 2e-5 \\\n --num_train_epochs 3.0 \\\n --output_dir /content/roberta-sts-b",
"## How to run"
] |
[
"TAGS\n#transformers #pytorch #jax #roberta #text-classification #autotrain_compatible #endpoints_compatible #region-us \n",
"# Roberta Large STS-B\n\nThis model is a fine tuned RoBERTA model over STS-B.\nIt was trained with these params:\n!python /content/transformers/examples/text-classification/run_glue.py \\\n --model_type roberta \\\n --model_name_or_path roberta-large \\\n --task_name STS-B \\\n --do_train \\\n --do_eval \\\n --do_lower_case \\\n --data_dir /content/glue_data/STS-B/ \\\n --max_seq_length 128 \\\n --per_gpu_eval_batch_size=8 \\\n --per_gpu_train_batch_size=8 \\\n --learning_rate 2e-5 \\\n --num_train_epochs 3.0 \\\n --output_dir /content/roberta-sts-b",
"## How to run"
] |
text-generation
|
transformers
|
#EmmyBot
|
{"tags": ["conversational"]}
|
Spectrox/emmybot
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
#EmmyBot
|
[] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation
|
transformers
|
# DialoGPT Trained on the Speech of a TV Series Character
This is an instance of [microsoft/DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) trained on a TV series character, Sheldon from [The Big Bang Theory](https://en.wikipedia.org/wiki/The_Big_Bang_Theory). The data comes from [a Kaggle TV series script dataset](https://www.kaggle.com/mitramir5/the-big-bang-theory-series-transcript).
Chat with the model:
```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("spirax/DialoGPT-medium-sheldon")
model = AutoModelWithLMHead.from_pretrained("spirax/DialoGPT-medium-sheldon")

# Let's chat for 4 lines
for step in range(4):
    # encode the new user input, add the eos_token and return a PyTorch tensor
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')

    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids

    # generate a response while limiting the total chat history to 200 tokens
    chat_history_ids = model.generate(
        bot_input_ids, max_length=200,
        pad_token_id=tokenizer.eos_token_id,
        no_repeat_ngram_size=3,
        do_sample=True,
        top_k=100,
        top_p=0.7,
        temperature=0.8
    )

    # pretty print the last output tokens from the bot
    print("SheldorBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
|
{"license": "mit", "tags": ["conversational"], "thumbnail": "https://i.imgur.com/7HAcbbD.gif"}
|
Spirax/DialoGPT-medium-sheldon
| null |
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"conversational",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #safetensors #gpt2 #text-generation #conversational #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# DialoGPT Trained on the Speech of a TV Series Character
This is an instance of microsoft/DialoGPT-medium trained on a TV series character, Sheldon from The Big Bang Theory. The data comes from a Kaggle TV series script dataset.
Chat with the model:
|
[
"# DialoGPT Trained on the Speech of a TV Series Character\n\nThis is an instance of microsoft/DialoGPT-medium trained on a TV series character, Sheldon from The Big Bang Theory. The data comes from a Kaggle TV series script dataset.\n\n\nChat with the model:"
] |
[
"TAGS\n#transformers #pytorch #safetensors #gpt2 #text-generation #conversational #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# DialoGPT Trained on the Speech of a TV Series Character\n\nThis is an instance of microsoft/DialoGPT-medium trained on a TV series character, Sheldon from The Big Bang Theory. The data comes from a Kaggle TV series script dataset.\n\n\nChat with the model:"
] |
text-generation
|
transformers
|
# Engineer DialoGPT Model
|
{"tags": ["conversational"]}
|
Spoon/DialoGPT-small-engineer
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Engineer DialoGPT Model
|
[
"# Engineer DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Engineer DialoGPT Model"
] |
image-classification
|
transformers
|
# sriram-car-classifier
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
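For quick inference, here is a minimal sketch using the standard `transformers` image-classification pipeline (the model id comes from this repo; the image path `car.jpg` is a hypothetical placeholder):
```python
from transformers import pipeline

# Load the fine-tuned ViT checkpoint behind the generic image-classification pipeline.
classifier = pipeline("image-classification", model="SriramSridhar78/sriram-car-classifier")

# "car.jpg" is a hypothetical local photo of a car; predictions come back as label/score dicts.
for pred in classifier("car.jpg"):
    print(f"{pred['label']}: {pred['score']:.3f}")
```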
## Example Images
#### AM_General_Hummer_SUV_2000

#### Acura_Integra_Type_R_2001

#### Acura_RL_Sedan_2012

#### Acura_TL_Sedan_2012

#### Acura_TL_Type-S_2008

#### Acura_TSX_Sedan_2012

#### Acura_ZDX_Hatchback_2012

#### Aston_Martin_V8_Vantage_Convertible_2012

#### Aston_Martin_V8_Vantage_Coupe_2012

#### Aston_Martin_Virage_Convertible_2012

#### Aston_Martin_Virage_Coupe_2012

#### Audi_100_Sedan_1994

#### Audi_100_Wagon_1994

#### Audi_A5_Coupe_2012

#### Audi_R8_Coupe_2012

#### Audi_RS_4_Convertible_2008

#### Audi_S4_Sedan_2007

#### Audi_S4_Sedan_2012

#### Audi_S5_Convertible_2012

#### Audi_S5_Coupe_2012

#### Audi_S6_Sedan_2011

#### Audi_TTS_Coupe_2012

#### Audi_TT_Hatchback_2011

#### Audi_TT_RS_Coupe_2012

#### Audi_V8_Sedan_1994

#### BMW_1_Series_Convertible_2012

#### BMW_1_Series_Coupe_2012

#### BMW_3_Series_Sedan_2012

#### BMW_3_Series_Wagon_2012

#### BMW_6_Series_Convertible_2007

#### BMW_ActiveHybrid_5_Sedan_2012

#### BMW_M3_Coupe_2012

#### BMW_M5_Sedan_2010

#### BMW_M6_Convertible_2010

#### BMW_X3_SUV_2012

#### BMW_X5_SUV_2007

#### BMW_X6_SUV_2012

#### BMW_Z4_Convertible_2012

#### Bentley_Arnage_Sedan_2009

#### Bentley_Continental_Flying_Spur_Sedan_2007

#### Bentley_Continental_GT_Coupe_2007

#### Bentley_Continental_GT_Coupe_2012

#### Bentley_Continental_Supersports_Conv._Convertible_2012

#### Bentley_Mulsanne_Sedan_2011

#### Bugatti_Veyron_16.4_Convertible_2009

#### Bugatti_Veyron_16.4_Coupe_2009

#### Buick_Enclave_SUV_2012

#### Buick_Rainier_SUV_2007

#### Buick_Regal_GS_2012

#### Buick_Verano_Sedan_2012

#### Cadillac_CTS-V_Sedan_2012

#### Cadillac_Escalade_EXT_Crew_Cab_2007

#### Cadillac_SRX_SUV_2012

#### Chevrolet_Avalanche_Crew_Cab_2012

#### Chevrolet_Camaro_Convertible_2012

#### Chevrolet_Cobalt_SS_2010

#### Chevrolet_Corvette_Convertible_2012

#### Chevrolet_Corvette_Ron_Fellows_Edition_Z06_2007

#### Chevrolet_Corvette_ZR1_2012

#### Chevrolet_Express_Cargo_Van_2007

#### Chevrolet_Express_Van_2007

#### Chevrolet_HHR_SS_2010

#### Chevrolet_Impala_Sedan_2007

#### Chevrolet_Malibu_Hybrid_Sedan_2010

#### Chevrolet_Malibu_Sedan_2007

#### Chevrolet_Monte_Carlo_Coupe_2007

#### Chevrolet_Silverado_1500_Classic_Extended_Cab_2007

#### Chevrolet_Silverado_1500_Extended_Cab_2012

#### Chevrolet_Silverado_1500_Hybrid_Crew_Cab_2012

#### Chevrolet_Silverado_1500_Regular_Cab_2012

#### Chevrolet_Silverado_2500HD_Regular_Cab_2012

#### Chevrolet_Sonic_Sedan_2012

#### Chevrolet_Tahoe_Hybrid_SUV_2012

#### Chevrolet_TrailBlazer_SS_2009

#### Chevrolet_Traverse_SUV_2012

#### Chrysler_300_SRT-8_2010

#### Chrysler_Aspen_SUV_2009

#### Chrysler_Crossfire_Convertible_2008

#### Chrysler_PT_Cruiser_Convertible_2008

#### Chrysler_Sebring_Convertible_2010

#### Chrysler_Town_and_Country_Minivan_2012

#### Daewoo_Nubira_Wagon_2002

#### Dodge_Caliber_Wagon_2007

#### Dodge_Caliber_Wagon_2012

#### Dodge_Caravan_Minivan_1997

#### Dodge_Challenger_SRT8_2011

#### Dodge_Charger_SRT-8_2009

#### Dodge_Charger_Sedan_2012

#### Dodge_Dakota_Club_Cab_2007

#### Dodge_Dakota_Crew_Cab_2010

#### Dodge_Durango_SUV_2007

#### Dodge_Durango_SUV_2012

#### Dodge_Journey_SUV_2012

#### Dodge_Magnum_Wagon_2008

#### Dodge_Ram_Pickup_3500_Crew_Cab_2010

#### Dodge_Ram_Pickup_3500_Quad_Cab_2009

#### Dodge_Sprinter_Cargo_Van_2009

#### Eagle_Talon_Hatchback_1998

#### FIAT_500_Abarth_2012

#### FIAT_500_Convertible_2012

#### Ferrari_458_Italia_Convertible_2012

#### Ferrari_458_Italia_Coupe_2012

#### Ferrari_California_Convertible_2012

#### Ferrari_FF_Coupe_2012

#### Fisker_Karma_Sedan_2012

#### Ford_E-Series_Wagon_Van_2012

#### Ford_Edge_SUV_2012

#### Ford_Expedition_EL_SUV_2009

#### Ford_F-150_Regular_Cab_2007

#### Ford_F-150_Regular_Cab_2012

#### Ford_F-450_Super_Duty_Crew_Cab_2012

#### Ford_Fiesta_Sedan_2012

#### Ford_Focus_Sedan_2007

#### Ford_Freestar_Minivan_2007

#### Ford_GT_Coupe_2006

#### Ford_Mustang_Convertible_2007

#### Ford_Ranger_SuperCab_2011

#### GMC_Acadia_SUV_2012

#### GMC_Canyon_Extended_Cab_2012

#### GMC_Savana_Van_2012

#### GMC_Terrain_SUV_2012

#### GMC_Yukon_Hybrid_SUV_2012

#### Geo_Metro_Convertible_1993

#### HUMMER_H2_SUT_Crew_Cab_2009

#### HUMMER_H3T_Crew_Cab_2010

#### Honda_Accord_Coupe_2012

#### Honda_Accord_Sedan_2012

#### Honda_Odyssey_Minivan_2007

#### Honda_Odyssey_Minivan_2012

#### Hyundai_Accent_Sedan_2012

#### Hyundai_Azera_Sedan_2012

#### Hyundai_Elantra_Sedan_2007

#### Hyundai_Elantra_Touring_Hatchback_2012

#### Hyundai_Genesis_Sedan_2012

#### Hyundai_Santa_Fe_SUV_2012

#### Hyundai_Sonata_Hybrid_Sedan_2012

#### Hyundai_Sonata_Sedan_2012

#### Hyundai_Tucson_SUV_2012

#### Hyundai_Veloster_Hatchback_2012

#### Hyundai_Veracruz_SUV_2012

#### Infiniti_G_Coupe_IPL_2012

#### Infiniti_QX56_SUV_2011

#### Isuzu_Ascender_SUV_2008

#### Jaguar_XK_XKR_2012

#### Jeep_Compass_SUV_2012

#### Jeep_Grand_Cherokee_SUV_2012

#### Jeep_Liberty_SUV_2012

#### Jeep_Patriot_SUV_2012

#### Jeep_Wrangler_SUV_2012

#### Lamborghini_Aventador_Coupe_2012

#### Lamborghini_Diablo_Coupe_2001

#### Lamborghini_Gallardo_LP_570-4_Superleggera_2012

#### Lamborghini_Reventon_Coupe_2008

#### Land_Rover_LR2_SUV_2012

#### Land_Rover_Range_Rover_SUV_2012

#### Lincoln_Town_Car_Sedan_2011

#### MINI_Cooper_Roadster_Convertible_2012

#### Maybach_Landaulet_Convertible_2012

#### Mazda_Tribute_SUV_2011

#### McLaren_MP4-12C_Coupe_2012

#### Mercedes-Benz_300-Class_Convertible_1993

#### Mercedes-Benz_C-Class_Sedan_2012

#### Mercedes-Benz_E-Class_Sedan_2012

#### Mercedes-Benz_S-Class_Sedan_2012

#### Mercedes-Benz_SL-Class_Coupe_2009

#### Mercedes-Benz_Sprinter_Van_2012

#### Mitsubishi_Lancer_Sedan_2012

#### Nissan_240SX_Coupe_1998

#### Nissan_Juke_Hatchback_2012

#### Nissan_Leaf_Hatchback_2012

#### Nissan_NV_Passenger_Van_2012

#### Plymouth_Neon_Coupe_1999

#### Porsche_Panamera_Sedan_2012

#### Ram_C_V_Cargo_Van_Minivan_2012

#### Rolls-Royce_Ghost_Sedan_2012

#### Rolls-Royce_Phantom_Drophead_Coupe_Convertible_2012

#### Rolls-Royce_Phantom_Sedan_2012

#### Scion_xD_Hatchback_2012

#### Spyker_C8_Convertible_2009

#### Spyker_C8_Coupe_2009

#### Suzuki_Aerio_Sedan_2007

#### Suzuki_Kizashi_Sedan_2012

#### Suzuki_SX4_Hatchback_2012

#### Suzuki_SX4_Sedan_2012

#### Tesla_Model_S_Sedan_2012

#### Toyota_4Runner_SUV_2012

#### Toyota_Camry_Sedan_2012

#### Toyota_Corolla_Sedan_2012

#### Toyota_Sequoia_SUV_2012

#### Volkswagen_Beetle_Hatchback_2012

#### Volkswagen_Golf_Hatchback_1991

#### Volkswagen_Golf_Hatchback_2012

#### Volvo_240_Sedan_1993

#### Volvo_C30_Hatchback_2012

#### Volvo_XC90_SUV_2007

#### smart_fortwo_Convertible_2012

|
{"tags": ["image-classification", "pytorch", "huggingpics"], "metrics": ["accuracy"]}
|
SriramSridhar78/sriram-car-classifier
| null |
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #safetensors #vit #image-classification #huggingpics #model-index #autotrain_compatible #endpoints_compatible #region-us
|
# sriram-car-classifier
Autogenerated by HuggingPics️
Create your own image classifier for anything by running the demo on Google Colab.
Report any issues with the demo at the github repo.
## Example Images
#### AM_General_Hummer_SUV_2000
!AM_General_Hummer_SUV_2000
#### Acura_Integra_Type_R_2001
!Acura_Integra_Type_R_2001
#### Acura_RL_Sedan_2012
!Acura_RL_Sedan_2012
#### Acura_TL_Sedan_2012
!Acura_TL_Sedan_2012
#### Acura_TL_Type-S_2008
!Acura_TL_Type-S_2008
#### Acura_TSX_Sedan_2012
!Acura_TSX_Sedan_2012
#### Acura_ZDX_Hatchback_2012
!Acura_ZDX_Hatchback_2012
#### Aston_Martin_V8_Vantage_Convertible_2012
!Aston_Martin_V8_Vantage_Convertible_2012
#### Aston_Martin_V8_Vantage_Coupe_2012
!Aston_Martin_V8_Vantage_Coupe_2012
#### Aston_Martin_Virage_Convertible_2012
!Aston_Martin_Virage_Convertible_2012
#### Aston_Martin_Virage_Coupe_2012
!Aston_Martin_Virage_Coupe_2012
#### Audi_100_Sedan_1994
!Audi_100_Sedan_1994
#### Audi_100_Wagon_1994
!Audi_100_Wagon_1994
#### Audi_A5_Coupe_2012
!Audi_A5_Coupe_2012
#### Audi_R8_Coupe_2012
!Audi_R8_Coupe_2012
#### Audi_RS_4_Convertible_2008
!Audi_RS_4_Convertible_2008
#### Audi_S4_Sedan_2007
!Audi_S4_Sedan_2007
#### Audi_S4_Sedan_2012
!Audi_S4_Sedan_2012
#### Audi_S5_Convertible_2012
!Audi_S5_Convertible_2012
#### Audi_S5_Coupe_2012
!Audi_S5_Coupe_2012
#### Audi_S6_Sedan_2011
!Audi_S6_Sedan_2011
#### Audi_TTS_Coupe_2012
!Audi_TTS_Coupe_2012
#### Audi_TT_Hatchback_2011
!Audi_TT_Hatchback_2011
#### Audi_TT_RS_Coupe_2012
!Audi_TT_RS_Coupe_2012
#### Audi_V8_Sedan_1994
!Audi_V8_Sedan_1994
#### BMW_1_Series_Convertible_2012
!BMW_1_Series_Convertible_2012
#### BMW_1_Series_Coupe_2012
!BMW_1_Series_Coupe_2012
#### BMW_3_Series_Sedan_2012
!BMW_3_Series_Sedan_2012
#### BMW_3_Series_Wagon_2012
!BMW_3_Series_Wagon_2012
#### BMW_6_Series_Convertible_2007
!BMW_6_Series_Convertible_2007
#### BMW_ActiveHybrid_5_Sedan_2012
!BMW_ActiveHybrid_5_Sedan_2012
#### BMW_M3_Coupe_2012
!BMW_M3_Coupe_2012
#### BMW_M5_Sedan_2010
!BMW_M5_Sedan_2010
#### BMW_M6_Convertible_2010
!BMW_M6_Convertible_2010
#### BMW_X3_SUV_2012
!BMW_X3_SUV_2012
#### BMW_X5_SUV_2007
!BMW_X5_SUV_2007
#### BMW_X6_SUV_2012
!BMW_X6_SUV_2012
#### BMW_Z4_Convertible_2012
!BMW_Z4_Convertible_2012
#### Bentley_Arnage_Sedan_2009
!Bentley_Arnage_Sedan_2009
#### Bentley_Continental_Flying_Spur_Sedan_2007
!Bentley_Continental_Flying_Spur_Sedan_2007
#### Bentley_Continental_GT_Coupe_2007
!Bentley_Continental_GT_Coupe_2007
#### Bentley_Continental_GT_Coupe_2012
!Bentley_Continental_GT_Coupe_2012
#### Bentley_Continental_Supersports_Conv._Convertible_2012
!Bentley_Continental_Supersports_Conv._Convertible_2012
#### Bentley_Mulsanne_Sedan_2011
!Bentley_Mulsanne_Sedan_2011
#### Bugatti_Veyron_16.4_Convertible_2009
!Bugatti_Veyron_16.4_Convertible_2009
#### Bugatti_Veyron_16.4_Coupe_2009
!Bugatti_Veyron_16.4_Coupe_2009
#### Buick_Enclave_SUV_2012
!Buick_Enclave_SUV_2012
#### Buick_Rainier_SUV_2007
!Buick_Rainier_SUV_2007
#### Buick_Regal_GS_2012
!Buick_Regal_GS_2012
#### Buick_Verano_Sedan_2012
!Buick_Verano_Sedan_2012
#### Cadillac_CTS-V_Sedan_2012
!Cadillac_CTS-V_Sedan_2012
#### Cadillac_Escalade_EXT_Crew_Cab_2007
!Cadillac_Escalade_EXT_Crew_Cab_2007
#### Cadillac_SRX_SUV_2012
!Cadillac_SRX_SUV_2012
#### Chevrolet_Avalanche_Crew_Cab_2012
!Chevrolet_Avalanche_Crew_Cab_2012
#### Chevrolet_Camaro_Convertible_2012
!Chevrolet_Camaro_Convertible_2012
#### Chevrolet_Cobalt_SS_2010
!Chevrolet_Cobalt_SS_2010
#### Chevrolet_Corvette_Convertible_2012
!Chevrolet_Corvette_Convertible_2012
#### Chevrolet_Corvette_Ron_Fellows_Edition_Z06_2007
!Chevrolet_Corvette_Ron_Fellows_Edition_Z06_2007
#### Chevrolet_Corvette_ZR1_2012
!Chevrolet_Corvette_ZR1_2012
#### Chevrolet_Express_Cargo_Van_2007
!Chevrolet_Express_Cargo_Van_2007
#### Chevrolet_Express_Van_2007
!Chevrolet_Express_Van_2007
#### Chevrolet_HHR_SS_2010
!Chevrolet_HHR_SS_2010
#### Chevrolet_Impala_Sedan_2007
!Chevrolet_Impala_Sedan_2007
#### Chevrolet_Malibu_Hybrid_Sedan_2010
!Chevrolet_Malibu_Hybrid_Sedan_2010
#### Chevrolet_Malibu_Sedan_2007
!Chevrolet_Malibu_Sedan_2007
#### Chevrolet_Monte_Carlo_Coupe_2007
!Chevrolet_Monte_Carlo_Coupe_2007
#### Chevrolet_Silverado_1500_Classic_Extended_Cab_2007
!Chevrolet_Silverado_1500_Classic_Extended_Cab_2007
#### Chevrolet_Silverado_1500_Extended_Cab_2012
!Chevrolet_Silverado_1500_Extended_Cab_2012
#### Chevrolet_Silverado_1500_Hybrid_Crew_Cab_2012
!Chevrolet_Silverado_1500_Hybrid_Crew_Cab_2012
#### Chevrolet_Silverado_1500_Regular_Cab_2012
!Chevrolet_Silverado_1500_Regular_Cab_2012
#### Chevrolet_Silverado_2500HD_Regular_Cab_2012
!Chevrolet_Silverado_2500HD_Regular_Cab_2012
#### Chevrolet_Sonic_Sedan_2012
!Chevrolet_Sonic_Sedan_2012
#### Chevrolet_Tahoe_Hybrid_SUV_2012
!Chevrolet_Tahoe_Hybrid_SUV_2012
#### Chevrolet_TrailBlazer_SS_2009
!Chevrolet_TrailBlazer_SS_2009
#### Chevrolet_Traverse_SUV_2012
!Chevrolet_Traverse_SUV_2012
#### Chrysler_300_SRT-8_2010
!Chrysler_300_SRT-8_2010
#### Chrysler_Aspen_SUV_2009
!Chrysler_Aspen_SUV_2009
#### Chrysler_Crossfire_Convertible_2008
!Chrysler_Crossfire_Convertible_2008
#### Chrysler_PT_Cruiser_Convertible_2008
!Chrysler_PT_Cruiser_Convertible_2008
#### Chrysler_Sebring_Convertible_2010
!Chrysler_Sebring_Convertible_2010
#### Chrysler_Town_and_Country_Minivan_2012
!Chrysler_Town_and_Country_Minivan_2012
#### Daewoo_Nubira_Wagon_2002
!Daewoo_Nubira_Wagon_2002
#### Dodge_Caliber_Wagon_2007
!Dodge_Caliber_Wagon_2007
#### Dodge_Caliber_Wagon_2012
!Dodge_Caliber_Wagon_2012
#### Dodge_Caravan_Minivan_1997
!Dodge_Caravan_Minivan_1997
#### Dodge_Challenger_SRT8_2011
!Dodge_Challenger_SRT8_2011
#### Dodge_Charger_SRT-8_2009
!Dodge_Charger_SRT-8_2009
#### Dodge_Charger_Sedan_2012
!Dodge_Charger_Sedan_2012
#### Dodge_Dakota_Club_Cab_2007
!Dodge_Dakota_Club_Cab_2007
#### Dodge_Dakota_Crew_Cab_2010
!Dodge_Dakota_Crew_Cab_2010
#### Dodge_Durango_SUV_2007
!Dodge_Durango_SUV_2007
#### Dodge_Durango_SUV_2012
!Dodge_Durango_SUV_2012
#### Dodge_Journey_SUV_2012
!Dodge_Journey_SUV_2012
#### Dodge_Magnum_Wagon_2008
!Dodge_Magnum_Wagon_2008
#### Dodge_Ram_Pickup_3500_Crew_Cab_2010
!Dodge_Ram_Pickup_3500_Crew_Cab_2010
#### Dodge_Ram_Pickup_3500_Quad_Cab_2009
!Dodge_Ram_Pickup_3500_Quad_Cab_2009
#### Dodge_Sprinter_Cargo_Van_2009
!Dodge_Sprinter_Cargo_Van_2009
#### Eagle_Talon_Hatchback_1998
!Eagle_Talon_Hatchback_1998
#### FIAT_500_Abarth_2012
!FIAT_500_Abarth_2012
#### FIAT_500_Convertible_2012
!FIAT_500_Convertible_2012
#### Ferrari_458_Italia_Convertible_2012
!Ferrari_458_Italia_Convertible_2012
#### Ferrari_458_Italia_Coupe_2012
!Ferrari_458_Italia_Coupe_2012
#### Ferrari_California_Convertible_2012
!Ferrari_California_Convertible_2012
#### Ferrari_FF_Coupe_2012
!Ferrari_FF_Coupe_2012
#### Fisker_Karma_Sedan_2012
!Fisker_Karma_Sedan_2012
#### Ford_E-Series_Wagon_Van_2012
!Ford_E-Series_Wagon_Van_2012
#### Ford_Edge_SUV_2012
!Ford_Edge_SUV_2012
#### Ford_Expedition_EL_SUV_2009
!Ford_Expedition_EL_SUV_2009
#### Ford_F-150_Regular_Cab_2007
!Ford_F-150_Regular_Cab_2007
#### Ford_F-150_Regular_Cab_2012
!Ford_F-150_Regular_Cab_2012
#### Ford_F-450_Super_Duty_Crew_Cab_2012
!Ford_F-450_Super_Duty_Crew_Cab_2012
#### Ford_Fiesta_Sedan_2012
!Ford_Fiesta_Sedan_2012
#### Ford_Focus_Sedan_2007
!Ford_Focus_Sedan_2007
#### Ford_Freestar_Minivan_2007
!Ford_Freestar_Minivan_2007
#### Ford_GT_Coupe_2006
!Ford_GT_Coupe_2006
#### Ford_Mustang_Convertible_2007
!Ford_Mustang_Convertible_2007
#### Ford_Ranger_SuperCab_2011
!Ford_Ranger_SuperCab_2011
#### GMC_Acadia_SUV_2012
!GMC_Acadia_SUV_2012
#### GMC_Canyon_Extended_Cab_2012
!GMC_Canyon_Extended_Cab_2012
#### GMC_Savana_Van_2012
!GMC_Savana_Van_2012
#### GMC_Terrain_SUV_2012
!GMC_Terrain_SUV_2012
#### GMC_Yukon_Hybrid_SUV_2012
!GMC_Yukon_Hybrid_SUV_2012
#### Geo_Metro_Convertible_1993
!Geo_Metro_Convertible_1993
#### HUMMER_H2_SUT_Crew_Cab_2009
!HUMMER_H2_SUT_Crew_Cab_2009
#### HUMMER_H3T_Crew_Cab_2010
!HUMMER_H3T_Crew_Cab_2010
#### Honda_Accord_Coupe_2012
!Honda_Accord_Coupe_2012
#### Honda_Accord_Sedan_2012
!Honda_Accord_Sedan_2012
#### Honda_Odyssey_Minivan_2007
!Honda_Odyssey_Minivan_2007
#### Honda_Odyssey_Minivan_2012
!Honda_Odyssey_Minivan_2012
#### Hyundai_Accent_Sedan_2012
!Hyundai_Accent_Sedan_2012
#### Hyundai_Azera_Sedan_2012
!Hyundai_Azera_Sedan_2012
#### Hyundai_Elantra_Sedan_2007
!Hyundai_Elantra_Sedan_2007
#### Hyundai_Elantra_Touring_Hatchback_2012
!Hyundai_Elantra_Touring_Hatchback_2012
#### Hyundai_Genesis_Sedan_2012
!Hyundai_Genesis_Sedan_2012
#### Hyundai_Santa_Fe_SUV_2012
!Hyundai_Santa_Fe_SUV_2012
#### Hyundai_Sonata_Hybrid_Sedan_2012
!Hyundai_Sonata_Hybrid_Sedan_2012
#### Hyundai_Sonata_Sedan_2012
!Hyundai_Sonata_Sedan_2012
#### Hyundai_Tucson_SUV_2012
!Hyundai_Tucson_SUV_2012
#### Hyundai_Veloster_Hatchback_2012
!Hyundai_Veloster_Hatchback_2012
#### Hyundai_Veracruz_SUV_2012
!Hyundai_Veracruz_SUV_2012
#### Infiniti_G_Coupe_IPL_2012
!Infiniti_G_Coupe_IPL_2012
#### Infiniti_QX56_SUV_2011
!Infiniti_QX56_SUV_2011
#### Isuzu_Ascender_SUV_2008
!Isuzu_Ascender_SUV_2008
#### Jaguar_XK_XKR_2012
!Jaguar_XK_XKR_2012
#### Jeep_Compass_SUV_2012
!Jeep_Compass_SUV_2012
#### Jeep_Grand_Cherokee_SUV_2012
!Jeep_Grand_Cherokee_SUV_2012
#### Jeep_Liberty_SUV_2012
!Jeep_Liberty_SUV_2012
#### Jeep_Patriot_SUV_2012
!Jeep_Patriot_SUV_2012
#### Jeep_Wrangler_SUV_2012
!Jeep_Wrangler_SUV_2012
#### Lamborghini_Aventador_Coupe_2012
!Lamborghini_Aventador_Coupe_2012
#### Lamborghini_Diablo_Coupe_2001
!Lamborghini_Diablo_Coupe_2001
#### Lamborghini_Gallardo_LP_570-4_Superleggera_2012
!Lamborghini_Gallardo_LP_570-4_Superleggera_2012
#### Lamborghini_Reventon_Coupe_2008
!Lamborghini_Reventon_Coupe_2008
#### Land_Rover_LR2_SUV_2012
!Land_Rover_LR2_SUV_2012
#### Land_Rover_Range_Rover_SUV_2012
!Land_Rover_Range_Rover_SUV_2012
#### Lincoln_Town_Car_Sedan_2011
!Lincoln_Town_Car_Sedan_2011
#### MINI_Cooper_Roadster_Convertible_2012
!MINI_Cooper_Roadster_Convertible_2012
#### Maybach_Landaulet_Convertible_2012
!Maybach_Landaulet_Convertible_2012
#### Mazda_Tribute_SUV_2011
!Mazda_Tribute_SUV_2011
#### McLaren_MP4-12C_Coupe_2012
!McLaren_MP4-12C_Coupe_2012
#### Mercedes-Benz_300-Class_Convertible_1993
!Mercedes-Benz_300-Class_Convertible_1993
#### Mercedes-Benz_C-Class_Sedan_2012
!Mercedes-Benz_C-Class_Sedan_2012
#### Mercedes-Benz_E-Class_Sedan_2012
!Mercedes-Benz_E-Class_Sedan_2012
#### Mercedes-Benz_S-Class_Sedan_2012
!Mercedes-Benz_S-Class_Sedan_2012
#### Mercedes-Benz_SL-Class_Coupe_2009
!Mercedes-Benz_SL-Class_Coupe_2009
#### Mercedes-Benz_Sprinter_Van_2012
!Mercedes-Benz_Sprinter_Van_2012
#### Mitsubishi_Lancer_Sedan_2012
!Mitsubishi_Lancer_Sedan_2012
#### Nissan_240SX_Coupe_1998
!Nissan_240SX_Coupe_1998
#### Nissan_Juke_Hatchback_2012
!Nissan_Juke_Hatchback_2012
#### Nissan_Leaf_Hatchback_2012
!Nissan_Leaf_Hatchback_2012
#### Nissan_NV_Passenger_Van_2012
!Nissan_NV_Passenger_Van_2012
#### Plymouth_Neon_Coupe_1999
!Plymouth_Neon_Coupe_1999
#### Porsche_Panamera_Sedan_2012
!Porsche_Panamera_Sedan_2012
#### Ram_C_V_Cargo_Van_Minivan_2012
!Ram_C_V_Cargo_Van_Minivan_2012
#### Rolls-Royce_Ghost_Sedan_2012
!Rolls-Royce_Ghost_Sedan_2012
#### Rolls-Royce_Phantom_Drophead_Coupe_Convertible_2012
!Rolls-Royce_Phantom_Drophead_Coupe_Convertible_2012
#### Rolls-Royce_Phantom_Sedan_2012
!Rolls-Royce_Phantom_Sedan_2012
#### Scion_xD_Hatchback_2012
!Scion_xD_Hatchback_2012
#### Spyker_C8_Convertible_2009
!Spyker_C8_Convertible_2009
#### Spyker_C8_Coupe_2009
!Spyker_C8_Coupe_2009
#### Suzuki_Aerio_Sedan_2007
!Suzuki_Aerio_Sedan_2007
#### Suzuki_Kizashi_Sedan_2012
!Suzuki_Kizashi_Sedan_2012
#### Suzuki_SX4_Hatchback_2012
!Suzuki_SX4_Hatchback_2012
#### Suzuki_SX4_Sedan_2012
!Suzuki_SX4_Sedan_2012
#### Tesla_Model_S_Sedan_2012
!Tesla_Model_S_Sedan_2012
#### Toyota_4Runner_SUV_2012
!Toyota_4Runner_SUV_2012
#### Toyota_Camry_Sedan_2012
!Toyota_Camry_Sedan_2012
#### Toyota_Corolla_Sedan_2012
!Toyota_Corolla_Sedan_2012
#### Toyota_Sequoia_SUV_2012
!Toyota_Sequoia_SUV_2012
#### Volkswagen_Beetle_Hatchback_2012
!Volkswagen_Beetle_Hatchback_2012
#### Volkswagen_Golf_Hatchback_1991
!Volkswagen_Golf_Hatchback_1991
#### Volkswagen_Golf_Hatchback_2012
!Volkswagen_Golf_Hatchback_2012
#### Volvo_240_Sedan_1993
!Volvo_240_Sedan_1993
#### Volvo_C30_Hatchback_2012
!Volvo_C30_Hatchback_2012
#### Volvo_XC90_SUV_2007
!Volvo_XC90_SUV_2007
#### smart_fortwo_Convertible_2012
!smart_fortwo_Convertible_2012
|
[
"# sriram-car-classifier\n\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo on Google Colab.\n\nReport any issues with the demo at the github repo.",
"## Example Images",
"#### AM_General_Hummer_SUV_2000\n\n!AM_General_Hummer_SUV_2000",
"#### Acura_Integra_Type_R_2001\n\n!Acura_Integra_Type_R_2001",
"#### Acura_RL_Sedan_2012\n\n!Acura_RL_Sedan_2012",
"#### Acura_TL_Sedan_2012\n\n!Acura_TL_Sedan_2012",
"#### Acura_TL_Type-S_2008\n\n!Acura_TL_Type-S_2008",
"#### Acura_TSX_Sedan_2012\n\n!Acura_TSX_Sedan_2012",
"#### Acura_ZDX_Hatchback_2012\n\n!Acura_ZDX_Hatchback_2012",
"#### Aston_Martin_V8_Vantage_Convertible_2012\n\n!Aston_Martin_V8_Vantage_Convertible_2012",
"#### Aston_Martin_V8_Vantage_Coupe_2012\n\n!Aston_Martin_V8_Vantage_Coupe_2012",
"#### Aston_Martin_Virage_Convertible_2012\n\n!Aston_Martin_Virage_Convertible_2012",
"#### Aston_Martin_Virage_Coupe_2012\n\n!Aston_Martin_Virage_Coupe_2012",
"#### Audi_100_Sedan_1994\n\n!Audi_100_Sedan_1994",
"#### Audi_100_Wagon_1994\n\n!Audi_100_Wagon_1994",
"#### Audi_A5_Coupe_2012\n\n!Audi_A5_Coupe_2012",
"#### Audi_R8_Coupe_2012\n\n!Audi_R8_Coupe_2012",
"#### Audi_RS_4_Convertible_2008\n\n!Audi_RS_4_Convertible_2008",
"#### Audi_S4_Sedan_2007\n\n!Audi_S4_Sedan_2007",
"#### Audi_S4_Sedan_2012\n\n!Audi_S4_Sedan_2012",
"#### Audi_S5_Convertible_2012\n\n!Audi_S5_Convertible_2012",
"#### Audi_S5_Coupe_2012\n\n!Audi_S5_Coupe_2012",
"#### Audi_S6_Sedan_2011\n\n!Audi_S6_Sedan_2011",
"#### Audi_TTS_Coupe_2012\n\n!Audi_TTS_Coupe_2012",
"#### Audi_TT_Hatchback_2011\n\n!Audi_TT_Hatchback_2011",
"#### Audi_TT_RS_Coupe_2012\n\n!Audi_TT_RS_Coupe_2012",
"#### Audi_V8_Sedan_1994\n\n!Audi_V8_Sedan_1994",
"#### BMW_1_Series_Convertible_2012\n\n!BMW_1_Series_Convertible_2012",
"#### BMW_1_Series_Coupe_2012\n\n!BMW_1_Series_Coupe_2012",
"#### BMW_3_Series_Sedan_2012\n\n!BMW_3_Series_Sedan_2012",
"#### BMW_3_Series_Wagon_2012\n\n!BMW_3_Series_Wagon_2012",
"#### BMW_6_Series_Convertible_2007\n\n!BMW_6_Series_Convertible_2007",
"#### BMW_ActiveHybrid_5_Sedan_2012\n\n!BMW_ActiveHybrid_5_Sedan_2012",
"#### BMW_M3_Coupe_2012\n\n!BMW_M3_Coupe_2012",
"#### BMW_M5_Sedan_2010\n\n!BMW_M5_Sedan_2010",
"#### BMW_M6_Convertible_2010\n\n!BMW_M6_Convertible_2010",
"#### BMW_X3_SUV_2012\n\n!BMW_X3_SUV_2012",
"#### BMW_X5_SUV_2007\n\n!BMW_X5_SUV_2007",
"#### BMW_X6_SUV_2012\n\n!BMW_X6_SUV_2012",
"#### BMW_Z4_Convertible_2012\n\n!BMW_Z4_Convertible_2012",
"#### Bentley_Arnage_Sedan_2009\n\n!Bentley_Arnage_Sedan_2009",
"#### Bentley_Continental_Flying_Spur_Sedan_2007\n\n!Bentley_Continental_Flying_Spur_Sedan_2007",
"#### Bentley_Continental_GT_Coupe_2007\n\n!Bentley_Continental_GT_Coupe_2007",
"#### Bentley_Continental_GT_Coupe_2012\n\n!Bentley_Continental_GT_Coupe_2012",
"#### Bentley_Continental_Supersports_Conv._Convertible_2012\n\n!Bentley_Continental_Supersports_Conv._Convertible_2012",
"#### Bentley_Mulsanne_Sedan_2011\n\n!Bentley_Mulsanne_Sedan_2011",
"#### Bugatti_Veyron_16.4_Convertible_2009\n\n!Bugatti_Veyron_16.4_Convertible_2009",
"#### Bugatti_Veyron_16.4_Coupe_2009\n\n!Bugatti_Veyron_16.4_Coupe_2009",
"#### Buick_Enclave_SUV_2012\n\n!Buick_Enclave_SUV_2012",
"#### Buick_Rainier_SUV_2007\n\n!Buick_Rainier_SUV_2007",
"#### Buick_Regal_GS_2012\n\n!Buick_Regal_GS_2012",
"#### Buick_Verano_Sedan_2012\n\n!Buick_Verano_Sedan_2012",
"#### Cadillac_CTS-V_Sedan_2012\n\n!Cadillac_CTS-V_Sedan_2012",
"#### Cadillac_Escalade_EXT_Crew_Cab_2007\n\n!Cadillac_Escalade_EXT_Crew_Cab_2007",
"#### Cadillac_SRX_SUV_2012\n\n!Cadillac_SRX_SUV_2012",
"#### Chevrolet_Avalanche_Crew_Cab_2012\n\n!Chevrolet_Avalanche_Crew_Cab_2012",
"#### Chevrolet_Camaro_Convertible_2012\n\n!Chevrolet_Camaro_Convertible_2012",
"#### Chevrolet_Cobalt_SS_2010\n\n!Chevrolet_Cobalt_SS_2010",
"#### Chevrolet_Corvette_Convertible_2012\n\n!Chevrolet_Corvette_Convertible_2012",
"#### Chevrolet_Corvette_Ron_Fellows_Edition_Z06_2007\n\n!Chevrolet_Corvette_Ron_Fellows_Edition_Z06_2007",
"#### Chevrolet_Corvette_ZR1_2012\n\n!Chevrolet_Corvette_ZR1_2012",
"#### Chevrolet_Express_Cargo_Van_2007\n\n!Chevrolet_Express_Cargo_Van_2007",
"#### Chevrolet_Express_Van_2007\n\n!Chevrolet_Express_Van_2007",
"#### Chevrolet_HHR_SS_2010\n\n!Chevrolet_HHR_SS_2010",
"#### Chevrolet_Impala_Sedan_2007\n\n!Chevrolet_Impala_Sedan_2007",
"#### Chevrolet_Malibu_Hybrid_Sedan_2010\n\n!Chevrolet_Malibu_Hybrid_Sedan_2010",
"#### Chevrolet_Malibu_Sedan_2007\n\n!Chevrolet_Malibu_Sedan_2007",
"#### Chevrolet_Monte_Carlo_Coupe_2007\n\n!Chevrolet_Monte_Carlo_Coupe_2007",
"#### Chevrolet_Silverado_1500_Classic_Extended_Cab_2007\n\n!Chevrolet_Silverado_1500_Classic_Extended_Cab_2007",
"#### Chevrolet_Silverado_1500_Extended_Cab_2012\n\n!Chevrolet_Silverado_1500_Extended_Cab_2012",
"#### Chevrolet_Silverado_1500_Hybrid_Crew_Cab_2012\n\n!Chevrolet_Silverado_1500_Hybrid_Crew_Cab_2012",
"#### Chevrolet_Silverado_1500_Regular_Cab_2012\n\n!Chevrolet_Silverado_1500_Regular_Cab_2012",
"#### Chevrolet_Silverado_2500HD_Regular_Cab_2012\n\n!Chevrolet_Silverado_2500HD_Regular_Cab_2012",
"#### Chevrolet_Sonic_Sedan_2012\n\n!Chevrolet_Sonic_Sedan_2012",
"#### Chevrolet_Tahoe_Hybrid_SUV_2012\n\n!Chevrolet_Tahoe_Hybrid_SUV_2012",
"#### Chevrolet_TrailBlazer_SS_2009\n\n!Chevrolet_TrailBlazer_SS_2009",
"#### Chevrolet_Traverse_SUV_2012\n\n!Chevrolet_Traverse_SUV_2012",
"#### Chrysler_300_SRT-8_2010\n\n!Chrysler_300_SRT-8_2010",
"#### Chrysler_Aspen_SUV_2009\n\n!Chrysler_Aspen_SUV_2009",
"#### Chrysler_Crossfire_Convertible_2008\n\n!Chrysler_Crossfire_Convertible_2008",
"#### Chrysler_PT_Cruiser_Convertible_2008\n\n!Chrysler_PT_Cruiser_Convertible_2008",
"#### Chrysler_Sebring_Convertible_2010\n\n!Chrysler_Sebring_Convertible_2010",
"#### Chrysler_Town_and_Country_Minivan_2012\n\n!Chrysler_Town_and_Country_Minivan_2012",
"#### Daewoo_Nubira_Wagon_2002\n\n!Daewoo_Nubira_Wagon_2002",
"#### Dodge_Caliber_Wagon_2007\n\n!Dodge_Caliber_Wagon_2007",
"#### Dodge_Caliber_Wagon_2012\n\n!Dodge_Caliber_Wagon_2012",
"#### Dodge_Caravan_Minivan_1997\n\n!Dodge_Caravan_Minivan_1997",
"#### Dodge_Challenger_SRT8_2011\n\n!Dodge_Challenger_SRT8_2011",
"#### Dodge_Charger_SRT-8_2009\n\n!Dodge_Charger_SRT-8_2009",
"#### Dodge_Charger_Sedan_2012\n\n!Dodge_Charger_Sedan_2012",
"#### Dodge_Dakota_Club_Cab_2007\n\n!Dodge_Dakota_Club_Cab_2007",
"#### Dodge_Dakota_Crew_Cab_2010\n\n!Dodge_Dakota_Crew_Cab_2010",
"#### Dodge_Durango_SUV_2007\n\n!Dodge_Durango_SUV_2007",
"#### Dodge_Durango_SUV_2012\n\n!Dodge_Durango_SUV_2012",
"#### Dodge_Journey_SUV_2012\n\n!Dodge_Journey_SUV_2012",
"#### Dodge_Magnum_Wagon_2008\n\n!Dodge_Magnum_Wagon_2008",
"#### Dodge_Ram_Pickup_3500_Crew_Cab_2010\n\n!Dodge_Ram_Pickup_3500_Crew_Cab_2010",
"#### Dodge_Ram_Pickup_3500_Quad_Cab_2009\n\n!Dodge_Ram_Pickup_3500_Quad_Cab_2009",
"#### Dodge_Sprinter_Cargo_Van_2009\n\n!Dodge_Sprinter_Cargo_Van_2009",
"#### Eagle_Talon_Hatchback_1998\n\n!Eagle_Talon_Hatchback_1998",
"#### FIAT_500_Abarth_2012\n\n!FIAT_500_Abarth_2012",
"#### FIAT_500_Convertible_2012\n\n!FIAT_500_Convertible_2012",
"#### Ferrari_458_Italia_Convertible_2012\n\n!Ferrari_458_Italia_Convertible_2012",
"#### Ferrari_458_Italia_Coupe_2012\n\n!Ferrari_458_Italia_Coupe_2012",
"#### Ferrari_California_Convertible_2012\n\n!Ferrari_California_Convertible_2012",
"#### Ferrari_FF_Coupe_2012\n\n!Ferrari_FF_Coupe_2012",
"#### Fisker_Karma_Sedan_2012\n\n!Fisker_Karma_Sedan_2012",
"#### Ford_E-Series_Wagon_Van_2012\n\n!Ford_E-Series_Wagon_Van_2012",
"#### Ford_Edge_SUV_2012\n\n!Ford_Edge_SUV_2012",
"#### Ford_Expedition_EL_SUV_2009\n\n!Ford_Expedition_EL_SUV_2009",
"#### Ford_F-150_Regular_Cab_2007\n\n!Ford_F-150_Regular_Cab_2007",
"#### Ford_F-150_Regular_Cab_2012\n\n!Ford_F-150_Regular_Cab_2012",
"#### Ford_F-450_Super_Duty_Crew_Cab_2012\n\n!Ford_F-450_Super_Duty_Crew_Cab_2012",
"#### Ford_Fiesta_Sedan_2012\n\n!Ford_Fiesta_Sedan_2012",
"#### Ford_Focus_Sedan_2007\n\n!Ford_Focus_Sedan_2007",
"#### Ford_Freestar_Minivan_2007\n\n!Ford_Freestar_Minivan_2007",
"#### Ford_GT_Coupe_2006\n\n!Ford_GT_Coupe_2006",
"#### Ford_Mustang_Convertible_2007\n\n!Ford_Mustang_Convertible_2007",
"#### Ford_Ranger_SuperCab_2011\n\n!Ford_Ranger_SuperCab_2011",
"#### GMC_Acadia_SUV_2012\n\n!GMC_Acadia_SUV_2012",
"#### GMC_Canyon_Extended_Cab_2012\n\n!GMC_Canyon_Extended_Cab_2012",
"#### GMC_Savana_Van_2012\n\n!GMC_Savana_Van_2012",
"#### GMC_Terrain_SUV_2012\n\n!GMC_Terrain_SUV_2012",
"#### GMC_Yukon_Hybrid_SUV_2012\n\n!GMC_Yukon_Hybrid_SUV_2012",
"#### Geo_Metro_Convertible_1993\n\n!Geo_Metro_Convertible_1993",
"#### HUMMER_H2_SUT_Crew_Cab_2009\n\n!HUMMER_H2_SUT_Crew_Cab_2009",
"#### HUMMER_H3T_Crew_Cab_2010\n\n!HUMMER_H3T_Crew_Cab_2010",
"#### Honda_Accord_Coupe_2012\n\n!Honda_Accord_Coupe_2012",
"#### Honda_Accord_Sedan_2012\n\n!Honda_Accord_Sedan_2012",
"#### Honda_Odyssey_Minivan_2007\n\n!Honda_Odyssey_Minivan_2007",
"#### Honda_Odyssey_Minivan_2012\n\n!Honda_Odyssey_Minivan_2012",
"#### Hyundai_Accent_Sedan_2012\n\n!Hyundai_Accent_Sedan_2012",
"#### Hyundai_Azera_Sedan_2012\n\n!Hyundai_Azera_Sedan_2012",
"#### Hyundai_Elantra_Sedan_2007\n\n!Hyundai_Elantra_Sedan_2007",
"#### Hyundai_Elantra_Touring_Hatchback_2012\n\n!Hyundai_Elantra_Touring_Hatchback_2012",
"#### Hyundai_Genesis_Sedan_2012\n\n!Hyundai_Genesis_Sedan_2012",
"#### Hyundai_Santa_Fe_SUV_2012\n\n!Hyundai_Santa_Fe_SUV_2012",
"#### Hyundai_Sonata_Hybrid_Sedan_2012\n\n!Hyundai_Sonata_Hybrid_Sedan_2012",
"#### Hyundai_Sonata_Sedan_2012\n\n!Hyundai_Sonata_Sedan_2012",
"#### Hyundai_Tucson_SUV_2012\n\n!Hyundai_Tucson_SUV_2012",
"#### Hyundai_Veloster_Hatchback_2012\n\n!Hyundai_Veloster_Hatchback_2012",
"#### Hyundai_Veracruz_SUV_2012\n\n!Hyundai_Veracruz_SUV_2012",
"#### Infiniti_G_Coupe_IPL_2012\n\n!Infiniti_G_Coupe_IPL_2012",
"#### Infiniti_QX56_SUV_2011\n\n!Infiniti_QX56_SUV_2011",
"#### Isuzu_Ascender_SUV_2008\n\n!Isuzu_Ascender_SUV_2008",
"#### Jaguar_XK_XKR_2012\n\n!Jaguar_XK_XKR_2012",
"#### Jeep_Compass_SUV_2012\n\n!Jeep_Compass_SUV_2012",
"#### Jeep_Grand_Cherokee_SUV_2012\n\n!Jeep_Grand_Cherokee_SUV_2012",
"#### Jeep_Liberty_SUV_2012\n\n!Jeep_Liberty_SUV_2012",
"#### Jeep_Patriot_SUV_2012\n\n!Jeep_Patriot_SUV_2012",
"#### Jeep_Wrangler_SUV_2012\n\n!Jeep_Wrangler_SUV_2012",
"#### Lamborghini_Aventador_Coupe_2012\n\n!Lamborghini_Aventador_Coupe_2012",
"#### Lamborghini_Diablo_Coupe_2001\n\n!Lamborghini_Diablo_Coupe_2001",
"#### Lamborghini_Gallardo_LP_570-4_Superleggera_2012\n\n!Lamborghini_Gallardo_LP_570-4_Superleggera_2012",
"#### Lamborghini_Reventon_Coupe_2008\n\n!Lamborghini_Reventon_Coupe_2008",
"#### Land_Rover_LR2_SUV_2012\n\n!Land_Rover_LR2_SUV_2012",
"#### Land_Rover_Range_Rover_SUV_2012\n\n!Land_Rover_Range_Rover_SUV_2012",
"#### Lincoln_Town_Car_Sedan_2011\n\n!Lincoln_Town_Car_Sedan_2011",
"#### MINI_Cooper_Roadster_Convertible_2012\n\n!MINI_Cooper_Roadster_Convertible_2012",
"#### Maybach_Landaulet_Convertible_2012\n\n!Maybach_Landaulet_Convertible_2012",
"#### Mazda_Tribute_SUV_2011\n\n!Mazda_Tribute_SUV_2011",
"#### McLaren_MP4-12C_Coupe_2012\n\n!McLaren_MP4-12C_Coupe_2012",
"#### Mercedes-Benz_300-Class_Convertible_1993\n\n!Mercedes-Benz_300-Class_Convertible_1993",
"#### Mercedes-Benz_C-Class_Sedan_2012\n\n!Mercedes-Benz_C-Class_Sedan_2012",
"#### Mercedes-Benz_E-Class_Sedan_2012\n\n!Mercedes-Benz_E-Class_Sedan_2012",
"#### Mercedes-Benz_S-Class_Sedan_2012\n\n!Mercedes-Benz_S-Class_Sedan_2012",
"#### Mercedes-Benz_SL-Class_Coupe_2009\n\n!Mercedes-Benz_SL-Class_Coupe_2009",
"#### Mercedes-Benz_Sprinter_Van_2012\n\n!Mercedes-Benz_Sprinter_Van_2012",
"#### Mitsubishi_Lancer_Sedan_2012\n\n!Mitsubishi_Lancer_Sedan_2012",
"#### Nissan_240SX_Coupe_1998\n\n!Nissan_240SX_Coupe_1998",
"#### Nissan_Juke_Hatchback_2012\n\n!Nissan_Juke_Hatchback_2012",
"#### Nissan_Leaf_Hatchback_2012\n\n!Nissan_Leaf_Hatchback_2012",
"#### Nissan_NV_Passenger_Van_2012\n\n!Nissan_NV_Passenger_Van_2012",
"#### Plymouth_Neon_Coupe_1999\n\n!Plymouth_Neon_Coupe_1999",
"#### Porsche_Panamera_Sedan_2012\n\n!Porsche_Panamera_Sedan_2012",
"#### Ram_C_V_Cargo_Van_Minivan_2012\n\n!Ram_C_V_Cargo_Van_Minivan_2012",
"#### Rolls-Royce_Ghost_Sedan_2012\n\n!Rolls-Royce_Ghost_Sedan_2012",
"#### Rolls-Royce_Phantom_Drophead_Coupe_Convertible_2012\n\n!Rolls-Royce_Phantom_Drophead_Coupe_Convertible_2012",
"#### Rolls-Royce_Phantom_Sedan_2012\n\n!Rolls-Royce_Phantom_Sedan_2012",
"#### Scion_xD_Hatchback_2012\n\n!Scion_xD_Hatchback_2012",
"#### Spyker_C8_Convertible_2009\n\n!Spyker_C8_Convertible_2009",
"#### Spyker_C8_Coupe_2009\n\n!Spyker_C8_Coupe_2009",
"#### Suzuki_Aerio_Sedan_2007\n\n!Suzuki_Aerio_Sedan_2007",
"#### Suzuki_Kizashi_Sedan_2012\n\n!Suzuki_Kizashi_Sedan_2012",
"#### Suzuki_SX4_Hatchback_2012\n\n!Suzuki_SX4_Hatchback_2012",
"#### Suzuki_SX4_Sedan_2012\n\n!Suzuki_SX4_Sedan_2012",
"#### Tesla_Model_S_Sedan_2012\n\n!Tesla_Model_S_Sedan_2012",
"#### Toyota_4Runner_SUV_2012\n\n!Toyota_4Runner_SUV_2012",
"#### Toyota_Camry_Sedan_2012\n\n!Toyota_Camry_Sedan_2012",
"#### Toyota_Corolla_Sedan_2012\n\n!Toyota_Corolla_Sedan_2012",
"#### Toyota_Sequoia_SUV_2012\n\n!Toyota_Sequoia_SUV_2012",
"#### Volkswagen_Beetle_Hatchback_2012\n\n!Volkswagen_Beetle_Hatchback_2012",
"#### Volkswagen_Golf_Hatchback_1991\n\n!Volkswagen_Golf_Hatchback_1991",
"#### Volkswagen_Golf_Hatchback_2012\n\n!Volkswagen_Golf_Hatchback_2012",
"#### Volvo_240_Sedan_1993\n\n!Volvo_240_Sedan_1993",
"#### Volvo_C30_Hatchback_2012\n\n!Volvo_C30_Hatchback_2012",
"#### Volvo_XC90_SUV_2007\n\n!Volvo_XC90_SUV_2007",
"#### smart_fortwo_Convertible_2012\n\n!smart_fortwo_Convertible_2012"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #safetensors #vit #image-classification #huggingpics #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"# sriram-car-classifier\n\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo on Google Colab.\n\nReport any issues with the demo at the github repo.",
"## Example Images",
"#### AM_General_Hummer_SUV_2000\n\n!AM_General_Hummer_SUV_2000",
"#### Acura_Integra_Type_R_2001\n\n!Acura_Integra_Type_R_2001",
"#### Acura_RL_Sedan_2012\n\n!Acura_RL_Sedan_2012",
"#### Acura_TL_Sedan_2012\n\n!Acura_TL_Sedan_2012",
"#### Acura_TL_Type-S_2008\n\n!Acura_TL_Type-S_2008",
"#### Acura_TSX_Sedan_2012\n\n!Acura_TSX_Sedan_2012",
"#### Acura_ZDX_Hatchback_2012\n\n!Acura_ZDX_Hatchback_2012",
"#### Aston_Martin_V8_Vantage_Convertible_2012\n\n!Aston_Martin_V8_Vantage_Convertible_2012",
"#### Aston_Martin_V8_Vantage_Coupe_2012\n\n!Aston_Martin_V8_Vantage_Coupe_2012",
"#### Aston_Martin_Virage_Convertible_2012\n\n!Aston_Martin_Virage_Convertible_2012",
"#### Aston_Martin_Virage_Coupe_2012\n\n!Aston_Martin_Virage_Coupe_2012",
"#### Audi_100_Sedan_1994\n\n!Audi_100_Sedan_1994",
"#### Audi_100_Wagon_1994\n\n!Audi_100_Wagon_1994",
"#### Audi_A5_Coupe_2012\n\n!Audi_A5_Coupe_2012",
"#### Audi_R8_Coupe_2012\n\n!Audi_R8_Coupe_2012",
"#### Audi_RS_4_Convertible_2008\n\n!Audi_RS_4_Convertible_2008",
"#### Audi_S4_Sedan_2007\n\n!Audi_S4_Sedan_2007",
"#### Audi_S4_Sedan_2012\n\n!Audi_S4_Sedan_2012",
"#### Audi_S5_Convertible_2012\n\n!Audi_S5_Convertible_2012",
"#### Audi_S5_Coupe_2012\n\n!Audi_S5_Coupe_2012",
"#### Audi_S6_Sedan_2011\n\n!Audi_S6_Sedan_2011",
"#### Audi_TTS_Coupe_2012\n\n!Audi_TTS_Coupe_2012",
"#### Audi_TT_Hatchback_2011\n\n!Audi_TT_Hatchback_2011",
"#### Audi_TT_RS_Coupe_2012\n\n!Audi_TT_RS_Coupe_2012",
"#### Audi_V8_Sedan_1994\n\n!Audi_V8_Sedan_1994",
"#### BMW_1_Series_Convertible_2012\n\n!BMW_1_Series_Convertible_2012",
"#### BMW_1_Series_Coupe_2012\n\n!BMW_1_Series_Coupe_2012",
"#### BMW_3_Series_Sedan_2012\n\n!BMW_3_Series_Sedan_2012",
"#### BMW_3_Series_Wagon_2012\n\n!BMW_3_Series_Wagon_2012",
"#### BMW_6_Series_Convertible_2007\n\n!BMW_6_Series_Convertible_2007",
"#### BMW_ActiveHybrid_5_Sedan_2012\n\n!BMW_ActiveHybrid_5_Sedan_2012",
"#### BMW_M3_Coupe_2012\n\n!BMW_M3_Coupe_2012",
"#### BMW_M5_Sedan_2010\n\n!BMW_M5_Sedan_2010",
"#### BMW_M6_Convertible_2010\n\n!BMW_M6_Convertible_2010",
"#### BMW_X3_SUV_2012\n\n!BMW_X3_SUV_2012",
"#### BMW_X5_SUV_2007\n\n!BMW_X5_SUV_2007",
"#### BMW_X6_SUV_2012\n\n!BMW_X6_SUV_2012",
"#### BMW_Z4_Convertible_2012\n\n!BMW_Z4_Convertible_2012",
"#### Bentley_Arnage_Sedan_2009\n\n!Bentley_Arnage_Sedan_2009",
"#### Bentley_Continental_Flying_Spur_Sedan_2007\n\n!Bentley_Continental_Flying_Spur_Sedan_2007",
"#### Bentley_Continental_GT_Coupe_2007\n\n!Bentley_Continental_GT_Coupe_2007",
"#### Bentley_Continental_GT_Coupe_2012\n\n!Bentley_Continental_GT_Coupe_2012",
"#### Bentley_Continental_Supersports_Conv._Convertible_2012\n\n!Bentley_Continental_Supersports_Conv._Convertible_2012",
"#### Bentley_Mulsanne_Sedan_2011\n\n!Bentley_Mulsanne_Sedan_2011",
"#### Bugatti_Veyron_16.4_Convertible_2009\n\n!Bugatti_Veyron_16.4_Convertible_2009",
"#### Bugatti_Veyron_16.4_Coupe_2009\n\n!Bugatti_Veyron_16.4_Coupe_2009",
"#### Buick_Enclave_SUV_2012\n\n!Buick_Enclave_SUV_2012",
"#### Buick_Rainier_SUV_2007\n\n!Buick_Rainier_SUV_2007",
"#### Buick_Regal_GS_2012\n\n!Buick_Regal_GS_2012",
"#### Buick_Verano_Sedan_2012\n\n!Buick_Verano_Sedan_2012",
"#### Cadillac_CTS-V_Sedan_2012\n\n!Cadillac_CTS-V_Sedan_2012",
"#### Cadillac_Escalade_EXT_Crew_Cab_2007\n\n!Cadillac_Escalade_EXT_Crew_Cab_2007",
"#### Cadillac_SRX_SUV_2012\n\n!Cadillac_SRX_SUV_2012",
"#### Chevrolet_Avalanche_Crew_Cab_2012\n\n!Chevrolet_Avalanche_Crew_Cab_2012",
"#### Chevrolet_Camaro_Convertible_2012\n\n!Chevrolet_Camaro_Convertible_2012",
"#### Chevrolet_Cobalt_SS_2010\n\n!Chevrolet_Cobalt_SS_2010",
"#### Chevrolet_Corvette_Convertible_2012\n\n!Chevrolet_Corvette_Convertible_2012",
"#### Chevrolet_Corvette_Ron_Fellows_Edition_Z06_2007\n\n!Chevrolet_Corvette_Ron_Fellows_Edition_Z06_2007",
"#### Chevrolet_Corvette_ZR1_2012\n\n!Chevrolet_Corvette_ZR1_2012",
"#### Chevrolet_Express_Cargo_Van_2007\n\n!Chevrolet_Express_Cargo_Van_2007",
"#### Chevrolet_Express_Van_2007\n\n!Chevrolet_Express_Van_2007",
"#### Chevrolet_HHR_SS_2010\n\n!Chevrolet_HHR_SS_2010",
"#### Chevrolet_Impala_Sedan_2007\n\n!Chevrolet_Impala_Sedan_2007",
"#### Chevrolet_Malibu_Hybrid_Sedan_2010\n\n!Chevrolet_Malibu_Hybrid_Sedan_2010",
"#### Chevrolet_Malibu_Sedan_2007\n\n!Chevrolet_Malibu_Sedan_2007",
"#### Chevrolet_Monte_Carlo_Coupe_2007\n\n!Chevrolet_Monte_Carlo_Coupe_2007",
"#### Chevrolet_Silverado_1500_Classic_Extended_Cab_2007\n\n!Chevrolet_Silverado_1500_Classic_Extended_Cab_2007",
"#### Chevrolet_Silverado_1500_Extended_Cab_2012\n\n!Chevrolet_Silverado_1500_Extended_Cab_2012",
"#### Chevrolet_Silverado_1500_Hybrid_Crew_Cab_2012\n\n!Chevrolet_Silverado_1500_Hybrid_Crew_Cab_2012",
"#### Chevrolet_Silverado_1500_Regular_Cab_2012\n\n!Chevrolet_Silverado_1500_Regular_Cab_2012",
"#### Chevrolet_Silverado_2500HD_Regular_Cab_2012\n\n!Chevrolet_Silverado_2500HD_Regular_Cab_2012",
"#### Chevrolet_Sonic_Sedan_2012\n\n!Chevrolet_Sonic_Sedan_2012",
"#### Chevrolet_Tahoe_Hybrid_SUV_2012\n\n!Chevrolet_Tahoe_Hybrid_SUV_2012",
"#### Chevrolet_TrailBlazer_SS_2009\n\n!Chevrolet_TrailBlazer_SS_2009",
"#### Chevrolet_Traverse_SUV_2012\n\n!Chevrolet_Traverse_SUV_2012",
"#### Chrysler_300_SRT-8_2010\n\n!Chrysler_300_SRT-8_2010",
"#### Chrysler_Aspen_SUV_2009\n\n!Chrysler_Aspen_SUV_2009",
"#### Chrysler_Crossfire_Convertible_2008\n\n!Chrysler_Crossfire_Convertible_2008",
"#### Chrysler_PT_Cruiser_Convertible_2008\n\n!Chrysler_PT_Cruiser_Convertible_2008",
"#### Chrysler_Sebring_Convertible_2010\n\n!Chrysler_Sebring_Convertible_2010",
"#### Chrysler_Town_and_Country_Minivan_2012\n\n!Chrysler_Town_and_Country_Minivan_2012",
"#### Daewoo_Nubira_Wagon_2002\n\n!Daewoo_Nubira_Wagon_2002",
"#### Dodge_Caliber_Wagon_2007\n\n!Dodge_Caliber_Wagon_2007",
"#### Dodge_Caliber_Wagon_2012\n\n!Dodge_Caliber_Wagon_2012",
"#### Dodge_Caravan_Minivan_1997\n\n!Dodge_Caravan_Minivan_1997",
"#### Dodge_Challenger_SRT8_2011\n\n!Dodge_Challenger_SRT8_2011",
"#### Dodge_Charger_SRT-8_2009\n\n!Dodge_Charger_SRT-8_2009",
"#### Dodge_Charger_Sedan_2012\n\n!Dodge_Charger_Sedan_2012",
"#### Dodge_Dakota_Club_Cab_2007\n\n!Dodge_Dakota_Club_Cab_2007",
"#### Dodge_Dakota_Crew_Cab_2010\n\n!Dodge_Dakota_Crew_Cab_2010",
"#### Dodge_Durango_SUV_2007\n\n!Dodge_Durango_SUV_2007",
"#### Dodge_Durango_SUV_2012\n\n!Dodge_Durango_SUV_2012",
"#### Dodge_Journey_SUV_2012\n\n!Dodge_Journey_SUV_2012",
"#### Dodge_Magnum_Wagon_2008\n\n!Dodge_Magnum_Wagon_2008",
"#### Dodge_Ram_Pickup_3500_Crew_Cab_2010\n\n!Dodge_Ram_Pickup_3500_Crew_Cab_2010",
"#### Dodge_Ram_Pickup_3500_Quad_Cab_2009\n\n!Dodge_Ram_Pickup_3500_Quad_Cab_2009",
"#### Dodge_Sprinter_Cargo_Van_2009\n\n!Dodge_Sprinter_Cargo_Van_2009",
"#### Eagle_Talon_Hatchback_1998\n\n!Eagle_Talon_Hatchback_1998",
"#### FIAT_500_Abarth_2012\n\n!FIAT_500_Abarth_2012",
"#### FIAT_500_Convertible_2012\n\n!FIAT_500_Convertible_2012",
"#### Ferrari_458_Italia_Convertible_2012\n\n!Ferrari_458_Italia_Convertible_2012",
"#### Ferrari_458_Italia_Coupe_2012\n\n!Ferrari_458_Italia_Coupe_2012",
"#### Ferrari_California_Convertible_2012\n\n!Ferrari_California_Convertible_2012",
"#### Ferrari_FF_Coupe_2012\n\n!Ferrari_FF_Coupe_2012",
"#### Fisker_Karma_Sedan_2012\n\n!Fisker_Karma_Sedan_2012",
"#### Ford_E-Series_Wagon_Van_2012\n\n!Ford_E-Series_Wagon_Van_2012",
"#### Ford_Edge_SUV_2012\n\n!Ford_Edge_SUV_2012",
"#### Ford_Expedition_EL_SUV_2009\n\n!Ford_Expedition_EL_SUV_2009",
"#### Ford_F-150_Regular_Cab_2007\n\n!Ford_F-150_Regular_Cab_2007",
"#### Ford_F-150_Regular_Cab_2012\n\n!Ford_F-150_Regular_Cab_2012",
"#### Ford_F-450_Super_Duty_Crew_Cab_2012\n\n!Ford_F-450_Super_Duty_Crew_Cab_2012",
"#### Ford_Fiesta_Sedan_2012\n\n!Ford_Fiesta_Sedan_2012",
"#### Ford_Focus_Sedan_2007\n\n!Ford_Focus_Sedan_2007",
"#### Ford_Freestar_Minivan_2007\n\n!Ford_Freestar_Minivan_2007",
"#### Ford_GT_Coupe_2006\n\n!Ford_GT_Coupe_2006",
"#### Ford_Mustang_Convertible_2007\n\n!Ford_Mustang_Convertible_2007",
"#### Ford_Ranger_SuperCab_2011\n\n!Ford_Ranger_SuperCab_2011",
"#### GMC_Acadia_SUV_2012\n\n!GMC_Acadia_SUV_2012",
"#### GMC_Canyon_Extended_Cab_2012\n\n!GMC_Canyon_Extended_Cab_2012",
"#### GMC_Savana_Van_2012\n\n!GMC_Savana_Van_2012",
"#### GMC_Terrain_SUV_2012\n\n!GMC_Terrain_SUV_2012",
"#### GMC_Yukon_Hybrid_SUV_2012\n\n!GMC_Yukon_Hybrid_SUV_2012",
"#### Geo_Metro_Convertible_1993\n\n!Geo_Metro_Convertible_1993",
"#### HUMMER_H2_SUT_Crew_Cab_2009\n\n!HUMMER_H2_SUT_Crew_Cab_2009",
"#### HUMMER_H3T_Crew_Cab_2010\n\n!HUMMER_H3T_Crew_Cab_2010",
"#### Honda_Accord_Coupe_2012\n\n!Honda_Accord_Coupe_2012",
"#### Honda_Accord_Sedan_2012\n\n!Honda_Accord_Sedan_2012",
"#### Honda_Odyssey_Minivan_2007\n\n!Honda_Odyssey_Minivan_2007",
"#### Honda_Odyssey_Minivan_2012\n\n!Honda_Odyssey_Minivan_2012",
"#### Hyundai_Accent_Sedan_2012\n\n!Hyundai_Accent_Sedan_2012",
"#### Hyundai_Azera_Sedan_2012\n\n!Hyundai_Azera_Sedan_2012",
"#### Hyundai_Elantra_Sedan_2007\n\n!Hyundai_Elantra_Sedan_2007",
"#### Hyundai_Elantra_Touring_Hatchback_2012\n\n!Hyundai_Elantra_Touring_Hatchback_2012",
"#### Hyundai_Genesis_Sedan_2012\n\n!Hyundai_Genesis_Sedan_2012",
"#### Hyundai_Santa_Fe_SUV_2012\n\n!Hyundai_Santa_Fe_SUV_2012",
"#### Hyundai_Sonata_Hybrid_Sedan_2012\n\n!Hyundai_Sonata_Hybrid_Sedan_2012",
"#### Hyundai_Sonata_Sedan_2012\n\n!Hyundai_Sonata_Sedan_2012",
"#### Hyundai_Tucson_SUV_2012\n\n!Hyundai_Tucson_SUV_2012",
"#### Hyundai_Veloster_Hatchback_2012\n\n!Hyundai_Veloster_Hatchback_2012",
"#### Hyundai_Veracruz_SUV_2012\n\n!Hyundai_Veracruz_SUV_2012",
"#### Infiniti_G_Coupe_IPL_2012\n\n!Infiniti_G_Coupe_IPL_2012",
"#### Infiniti_QX56_SUV_2011\n\n!Infiniti_QX56_SUV_2011",
"#### Isuzu_Ascender_SUV_2008\n\n!Isuzu_Ascender_SUV_2008",
"#### Jaguar_XK_XKR_2012\n\n!Jaguar_XK_XKR_2012",
"#### Jeep_Compass_SUV_2012\n\n!Jeep_Compass_SUV_2012",
"#### Jeep_Grand_Cherokee_SUV_2012\n\n!Jeep_Grand_Cherokee_SUV_2012",
"#### Jeep_Liberty_SUV_2012\n\n!Jeep_Liberty_SUV_2012",
"#### Jeep_Patriot_SUV_2012\n\n!Jeep_Patriot_SUV_2012",
"#### Jeep_Wrangler_SUV_2012\n\n!Jeep_Wrangler_SUV_2012",
"#### Lamborghini_Aventador_Coupe_2012\n\n!Lamborghini_Aventador_Coupe_2012",
"#### Lamborghini_Diablo_Coupe_2001\n\n!Lamborghini_Diablo_Coupe_2001",
"#### Lamborghini_Gallardo_LP_570-4_Superleggera_2012\n\n!Lamborghini_Gallardo_LP_570-4_Superleggera_2012",
"#### Lamborghini_Reventon_Coupe_2008\n\n!Lamborghini_Reventon_Coupe_2008",
"#### Land_Rover_LR2_SUV_2012\n\n!Land_Rover_LR2_SUV_2012",
"#### Land_Rover_Range_Rover_SUV_2012\n\n!Land_Rover_Range_Rover_SUV_2012",
"#### Lincoln_Town_Car_Sedan_2011\n\n!Lincoln_Town_Car_Sedan_2011",
"#### MINI_Cooper_Roadster_Convertible_2012\n\n!MINI_Cooper_Roadster_Convertible_2012",
"#### Maybach_Landaulet_Convertible_2012\n\n!Maybach_Landaulet_Convertible_2012",
"#### Mazda_Tribute_SUV_2011\n\n!Mazda_Tribute_SUV_2011",
"#### McLaren_MP4-12C_Coupe_2012\n\n!McLaren_MP4-12C_Coupe_2012",
"#### Mercedes-Benz_300-Class_Convertible_1993\n\n!Mercedes-Benz_300-Class_Convertible_1993",
"#### Mercedes-Benz_C-Class_Sedan_2012\n\n!Mercedes-Benz_C-Class_Sedan_2012",
"#### Mercedes-Benz_E-Class_Sedan_2012\n\n!Mercedes-Benz_E-Class_Sedan_2012",
"#### Mercedes-Benz_S-Class_Sedan_2012\n\n!Mercedes-Benz_S-Class_Sedan_2012",
"#### Mercedes-Benz_SL-Class_Coupe_2009\n\n!Mercedes-Benz_SL-Class_Coupe_2009",
"#### Mercedes-Benz_Sprinter_Van_2012\n\n!Mercedes-Benz_Sprinter_Van_2012",
"#### Mitsubishi_Lancer_Sedan_2012\n\n!Mitsubishi_Lancer_Sedan_2012",
"#### Nissan_240SX_Coupe_1998\n\n!Nissan_240SX_Coupe_1998",
"#### Nissan_Juke_Hatchback_2012\n\n!Nissan_Juke_Hatchback_2012",
"#### Nissan_Leaf_Hatchback_2012\n\n!Nissan_Leaf_Hatchback_2012",
"#### Nissan_NV_Passenger_Van_2012\n\n!Nissan_NV_Passenger_Van_2012",
"#### Plymouth_Neon_Coupe_1999\n\n!Plymouth_Neon_Coupe_1999",
"#### Porsche_Panamera_Sedan_2012\n\n!Porsche_Panamera_Sedan_2012",
"#### Ram_C_V_Cargo_Van_Minivan_2012\n\n!Ram_C_V_Cargo_Van_Minivan_2012",
"#### Rolls-Royce_Ghost_Sedan_2012\n\n!Rolls-Royce_Ghost_Sedan_2012",
"#### Rolls-Royce_Phantom_Drophead_Coupe_Convertible_2012\n\n!Rolls-Royce_Phantom_Drophead_Coupe_Convertible_2012",
"#### Rolls-Royce_Phantom_Sedan_2012\n\n!Rolls-Royce_Phantom_Sedan_2012",
"#### Scion_xD_Hatchback_2012\n\n!Scion_xD_Hatchback_2012",
"#### Spyker_C8_Convertible_2009\n\n!Spyker_C8_Convertible_2009",
"#### Spyker_C8_Coupe_2009\n\n!Spyker_C8_Coupe_2009",
"#### Suzuki_Aerio_Sedan_2007\n\n!Suzuki_Aerio_Sedan_2007",
"#### Suzuki_Kizashi_Sedan_2012\n\n!Suzuki_Kizashi_Sedan_2012",
"#### Suzuki_SX4_Hatchback_2012\n\n!Suzuki_SX4_Hatchback_2012",
"#### Suzuki_SX4_Sedan_2012\n\n!Suzuki_SX4_Sedan_2012",
"#### Tesla_Model_S_Sedan_2012\n\n!Tesla_Model_S_Sedan_2012",
"#### Toyota_4Runner_SUV_2012\n\n!Toyota_4Runner_SUV_2012",
"#### Toyota_Camry_Sedan_2012\n\n!Toyota_Camry_Sedan_2012",
"#### Toyota_Corolla_Sedan_2012\n\n!Toyota_Corolla_Sedan_2012",
"#### Toyota_Sequoia_SUV_2012\n\n!Toyota_Sequoia_SUV_2012",
"#### Volkswagen_Beetle_Hatchback_2012\n\n!Volkswagen_Beetle_Hatchback_2012",
"#### Volkswagen_Golf_Hatchback_1991\n\n!Volkswagen_Golf_Hatchback_1991",
"#### Volkswagen_Golf_Hatchback_2012\n\n!Volkswagen_Golf_Hatchback_2012",
"#### Volvo_240_Sedan_1993\n\n!Volvo_240_Sedan_1993",
"#### Volvo_C30_Hatchback_2012\n\n!Volvo_C30_Hatchback_2012",
"#### Volvo_XC90_SUV_2007\n\n!Volvo_XC90_SUV_2007",
"#### smart_fortwo_Convertible_2012\n\n!smart_fortwo_Convertible_2012"
] |
null | null |
---
tags:
- conversational
---
# Discord Bot
|
{}
|
Sristi/Senti-Bot
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#region-us
|
---
tags:
- conversational
---
# Discord Bot
|
[
"# Discord Bot"
] |
[
"TAGS\n#region-us \n",
"# Discord Bot"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-Welsh
Fine-tuned facebook/wav2vec2-large-xlsr-53 on the Welsh Common Voice dataset.
The data was augmented using a standard augmentation approach.
When using this model, make sure that your speech input is sampled at 16kHz.
Test result (WER): 29.4%
## Usage
The model can be used directly (without a language model) as follows:
```
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "cy", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("Srulikbdd/Wav2vec2-large-xlsr-welsh")
model = Wav2Vec2ForCTC.from_pretrained("Srulikbdd/Wav2vec2-large-xlsr-welsh")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
Evaluation
The model can be evaluated as follows on the Welsh test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "cy", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("Srulikbdd/Wav2Vec2-large-xlsr-welsh")
model = Wav2Vec2ForCTC.from_pretrained("Srulikbdd/Wav2Vec2-large-xlsr-welsh")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\u2013\u2014\;\:\"\%]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
|
{"language": "sv", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "model-index": [{"name": "XLSR Wav2Vec2 Welsh by Srulik Ben David", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice cy", "type": "common_voice", "args": "cy"}, "metrics": [{"type": "wer", "value": 29.4, "name": "Test WER"}]}]}]}
|
Srulikbdd/Wav2Vec2-large-xlsr-welsh
| null |
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"sv",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"sv"
] |
TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #sv #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
Wav2Vec2-Large-XLSR-Welsh
Fine-tuned facebook/wav2vec2-large-xlsr-53 on the Welsh Common Voice dataset.
The data was augmented using a standard augmentation approach.
When using this model, make sure that your speech input is sampled at 16 kHz.
Test Result (WER): 29.4%
Usage
The model can be used directly (without a language model) as follows:
Evaluation
The model can be evaluated as follows on the Welsh test data of Common Voice.
|
[] |
[
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #sv #license-apache-2.0 #model-index #endpoints_compatible #region-us \n"
] |
text-generation
|
transformers
|
# Evelynn DialoGPT Model
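The card does not include a usage snippet, so below is a minimal chat-loop sketch in the style commonly used for DialoGPT checkpoints. It is untested: the repository id is taken from this repo, and the turn count and generation settings are illustrative assumptions.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository id taken from this repo; generation settings are illustrative only.
tokenizer = AutoTokenizer.from_pretrained("Stabley/DialoGPT-small-evelynn")
model = AutoModelForCausalLM.from_pretrained("Stabley/DialoGPT-small-evelynn")

chat_history_ids = None
for step in range(3):  # three chat turns
    new_ids = tokenizer.encode(input(">> You: ") + tokenizer.eos_token, return_tensors="pt")
    bot_input_ids = new_ids if chat_history_ids is None else torch.cat([chat_history_ids, new_ids], dim=-1)
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    reply = tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)
    print("Evelynn:", reply)
```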
|
{"tags": ["conversational"]}
|
Stabley/DialoGPT-small-evelynn
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Evelynn DialoGPT Model
|
[
"# Evelynn DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Evelynn DialoGPT Model"
] |
null | null |
This is a dummy readme
|
{}
|
StephennFernandes/XLS-R-assamese-LM
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#region-us
|
This is a dummy readme
|
[] |
[
"TAGS\n#region-us \n"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLS-R-marathi
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - MR dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
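Pending a full write-up, the snippet below sketches how a checkpoint like this is typically used for transcription. It is an untested illustration: the repository id is taken from this repo, `sample_mr.wav` is a placeholder path, and the audio is assumed to be 16 kHz mono.
```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Assumed checkpoint id (this repo) and a placeholder audio path.
model_id = "StephennFernandes/XLS-R-marathi"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, rate = torchaudio.load("sample_mr.wav")
if rate != 16_000:  # the model expects 16 kHz input
    speech = torchaudio.transforms.Resample(rate, 16_000)(speech)

inputs = processor(speech.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])
```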
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1200
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
{"language": ["mr"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "generated_from_trainer", "hf-asr-leaderboard"], "model-index": [{"name": "XLS-R-marathi", "results": []}]}
|
StephennFernandes/XLS-R-marathi
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"generated_from_trainer",
"hf-asr-leaderboard",
"mr",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"mr"
] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #robust-speech-event #generated_from_trainer #hf-asr-leaderboard #mr #license-apache-2.0 #endpoints_compatible #region-us
|
# XLS-R-marathi
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - MR dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1200
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
[
"# XLS-R-marathi\n\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - MR dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 1200\n- num_epochs: 30.0\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.17.0.dev0\n- Pytorch 1.10.2+cu113\n- Datasets 1.18.3\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #robust-speech-event #generated_from_trainer #hf-asr-leaderboard #mr #license-apache-2.0 #endpoints_compatible #region-us \n",
"# XLS-R-marathi\n\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - MR dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 1200\n- num_epochs: 30.0\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.17.0.dev0\n- Pytorch 1.10.2+cu113\n- Datasets 1.18.3\n- Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
tags:
- automatic-speech-recognition
- robust-speech-event
---
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on a private dataset.
It achieves the following results on the evaluation set:
The following hyper-parameters were used during training:
- learning_rate: 3e-4
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 800
- num_epochs: 30
- mixed_precision_training: Native AMP
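No evaluation results or usage example are listed, so the following is only a minimal, hedged inference sketch; the repository id is taken from this repo and the audio path is a placeholder for a 16 kHz Konkani recording.
```python
from transformers import pipeline

# Assumed checkpoint id (this repo); replace the path with a real 16 kHz Konkani recording.
asr = pipeline(
    "automatic-speech-recognition",
    model="StephennFernandes/wav2vec2-XLS-R-300m-konkani",
)
print(asr("sample_konkani.wav")["text"])
```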
|
{}
|
StephennFernandes/wav2vec2-XLS-R-300m-konkani
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #endpoints_compatible #region-us
|
tags:
- automatic-speech-recognition
- robust-speech-event
---
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on a private dataset.
It achieves the following results on the evaluation set:
The following hyper-parameters were used during training:
- learning_rate: 3e-4
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 800
- num_epochs: 30
- mixed_precision_training: Native AMP
|
[] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #endpoints_compatible #region-us \n"
] |
text-generation
|
transformers
|
It's just a dialog bot trained on my tweets. Unfortunately, as tweets aren't very conversational, it comes off as pretty random.
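For reference, a minimal generation sketch is shown below; the repository id is taken from this repo, and the prompt and sampling settings are arbitrary choices.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed checkpoint id (this repo); prompt and sampling settings are arbitrary.
tokenizer = AutoTokenizer.from_pretrained("SteveC/sdc_bot_15K")
model = AutoModelForCausalLM.from_pretrained("SteveC/sdc_bot_15K")

inputs = tokenizer("What are you up to today?", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_length=60,
    do_sample=True,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```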
|
{}
|
SteveC/sdc_bot_15K
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
It's just a dialog bot trained on my tweets. Unfortunately, as tweets aren't very conversational, it comes off as pretty random.
|
[] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
fill-mask
|
transformers
|
## Melayu BERT
Melayu BERT is a masked language model based on [BERT](https://arxiv.org/abs/1810.04805). It was trained on the [OSCAR](https://huggingface.co/datasets/oscar) dataset, specifically the `unshuffled_original_ms` subset. The model used was [English BERT model](https://huggingface.co/bert-base-uncased) and fine-tuned on the Malaysian dataset. The model achieved a perplexity of 9.46 on a 20% validation dataset. Many of the techniques used are based on a Hugging Face tutorial [notebook](https://github.com/huggingface/notebooks/blob/master/examples/language_modeling.ipynb) written by [Sylvain Gugger](https://github.com/sgugger), and [fine-tuning tutorial notebook](https://github.com/piegu/fastai-projects/blob/master/finetuning-English-GPT2-any-language-Portuguese-HuggingFace-fastaiv2.ipynb) written by [Pierre Guillou](https://huggingface.co/pierreguillou). The model is available both for PyTorch and TensorFlow use.
## Model
The model was trained on 3 epochs with a learning rate of 2e-3 and achieved a training loss per steps as shown below.
| Step |Training loss|
|--------|-------------|
|500 | 5.051300 |
|1000 | 3.701700 |
|1500 | 3.288600 |
|2000 | 3.024000 |
|2500 | 2.833500 |
|3000 | 2.741600 |
|3500 | 2.637900 |
|4000 | 2.547900 |
|4500 | 2.451500 |
|5000 | 2.409600 |
|5500 | 2.388300 |
|6000 | 2.351600 |
## How to Use
### As Masked Language Model
```python
from transformers import pipeline
pretrained_name = "StevenLimcorn/MelayuBERT"
fill_mask = pipeline(
"fill-mask",
model=pretrained_name,
tokenizer=pretrained_name
)
fill_mask("Saya [MASK] makan nasi hari ini.")
```
### Import Tokenizer and Model
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("StevenLimcorn/MelayuBERT")
model = AutoModelForMaskedLM.from_pretrained("StevenLimcorn/MelayuBERT")
```
## Author
Melayu BERT was trained by [Steven Limcorn](https://github.com/stevenlimcorn) and [Wilson Wongso](https://hf.co/w11wo).
|
{"language": "ms", "license": "mit", "tags": ["melayu-bert"], "datasets": ["oscar"], "widget": [{"text": "Saya [MASK] makan nasi hari ini."}]}
|
StevenLimcorn/MelayuBERT
| null |
[
"transformers",
"pytorch",
"tf",
"bert",
"fill-mask",
"melayu-bert",
"ms",
"dataset:oscar",
"arxiv:1810.04805",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1810.04805"
] |
[
"ms"
] |
TAGS
#transformers #pytorch #tf #bert #fill-mask #melayu-bert #ms #dataset-oscar #arxiv-1810.04805 #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
Melayu BERT
-----------
Melayu BERT is a masked language model based on BERT. It was trained on the OSCAR dataset, specifically the 'unshuffled\_original\_ms' subset. The model used was English BERT model and fine-tuned on the Malaysian dataset. The model achieved a perplexity of 9.46 on a 20% validation dataset. Many of the techniques used are based on a Hugging Face tutorial notebook written by Sylvain Gugger, and fine-tuning tutorial notebook written by Pierre Guillou. The model is available both for PyTorch and TensorFlow use.
Model
-----
The model was trained on 3 epochs with a learning rate of 2e-3 and achieved a training loss per steps as shown below.
How to Use
----------
### As Masked Language Model
### Import Tokenizer and Model
Author
------
Melayu BERT was trained by Steven Limcorn and Wilson Wongso.
|
[
"### As Masked Language Model",
"### Import Tokenizer and Model\n\n\nAuthor\n------\n\n\nMelayu BERT was trained by Steven Limcorn and Wilson Wongso."
] |
[
"TAGS\n#transformers #pytorch #tf #bert #fill-mask #melayu-bert #ms #dataset-oscar #arxiv-1810.04805 #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### As Masked Language Model",
"### Import Tokenizer and Model\n\n\nAuthor\n------\n\n\nMelayu BERT was trained by Steven Limcorn and Wilson Wongso."
] |
text-classification
|
transformers
|
## Indo-roberta-indonli
Indo-roberta-indonli is a natural language inference classifier based on the [Indo-roberta](https://huggingface.co/flax-community/indonesian-roberta-base) model. It was trained on the [IndoNLI](https://github.com/ir-nlp-csui/indonli/tree/main/data/indonli) dataset and transfer-learned into a natural language inference classifier. The model is tested using the validation, test_lay and test_expert datasets given in the GitHub repository. The results are shown below.
### Result
| Dataset | Accuracy | F1 | Precision | Recall |
|-------------|----------|---------|-----------|---------|
| Test Lay | 0.74329 | 0.74075 | 0.74283 | 0.74133 |
| Test Expert | 0.6115 | 0.60543 | 0.63924 | 0.61742 |
## Model
The model was trained for 5 epochs with batch size 16, learning rate 2e-5 and weight decay 0.01, achieving the metrics shown below.
| Epoch | Training Loss | Validation Loss | Accuracy | F1 | Precision | Recall |
|-------|---------------|-----------------|----------|----------|-----------|----------|
| 1 | 0.942500 | 0.658559 | 0.737369 | 0.735552 | 0.735488 | 0.736679 |
| 2 | 0.649200 | 0.645290 | 0.761493 | 0.759593 | 0.762784 | 0.759642 |
| 3 | 0.437100 | 0.667163 | 0.766045 | 0.763979 | 0.765740 | 0.763792 |
| 4 | 0.282000 | 0.786683 | 0.764679 | 0.761802 | 0.762011 | 0.761684 |
| 5 | 0.193500 | 0.925717 | 0.765134 | 0.763127 | 0.763560 | 0.763489 |
## How to Use
### As NLI Classifier
```python
from transformers import pipeline
pretrained_name = "StevenLimcorn/indonesian-roberta-indonli"
nlp = pipeline(
"zero-shot-classification",
model=pretrained_name,
tokenizer=pretrained_name
)
nlp("Amir Sjarifoeddin Harahap lahir di Kota Medan, Sumatera Utara, 27 April 1907. Ia meninggal di Surakarta, Jawa Tengah, pada 19 Desember 1948 dalam usia 41 tahun. </s></s> Amir Sjarifoeddin Harahap masih hidup.")
```
## Disclaimer
Do consider the biases which come from both the pre-trained RoBERTa model and the `INDONLI` dataset that may be carried over into the results of this model.
## Author
Indonesian RoBERTa Base IndoNLI was trained and evaluated by [Steven Limcorn](https://github.com/stevenlimcorn). All computation and development are done on Google Colaboratory using their free GPU access.
## Reference
The dataset we used is by IndoNLI.
```
@inproceedings{indonli,
title = "IndoNLI: A Natural Language Inference Dataset for Indonesian",
author = "Mahendra, Rahmad and Aji, Alham Fikri and Louvan, Samuel and Rahman, Fahrurrozi and Vania, Clara",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
publisher = "Association for Computational Linguistics",
}
```
|
{"language": "id", "license": "mit", "tags": ["roberta"], "datasets": ["indonli"], "widget": [{"text": "Amir Sjarifoeddin Harahap lahir di Kota Medan, Sumatera Utara, 27 April 1907. Ia meninggal di Surakarta, Jawa Tengah, pada 19 Desember 1948 dalam usia 41 tahun. </s></s> Amir Sjarifoeddin Harahap masih hidup."}]}
|
StevenLimcorn/indo-roberta-indonli
| null |
[
"transformers",
"pytorch",
"tf",
"roberta",
"text-classification",
"id",
"dataset:indonli",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"id"
] |
TAGS
#transformers #pytorch #tf #roberta #text-classification #id #dataset-indonli #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
Indo-roberta-indonli
--------------------
Indo-roberta-indonli is a natural language inference classifier based on the Indo-roberta model. It was trained on the IndoNLI dataset and transfer-learned into a natural language inference classifier. The model is tested using the validation, test\_lay and test\_expert datasets given in the GitHub repository. The results are shown below.
### Result
Model
-----
The model was trained for 5 epochs with batch size 16, learning rate 2e-5 and weight decay 0.01, achieving the metrics shown below.
How to Use
----------
### As NLI Classifier
Disclaimer
----------
Do consider the biases which come from both the pre-trained RoBERTa model and the 'INDONLI' dataset that may be carried over into the results of this model.
Author
------
Indonesian RoBERTa Base IndoNLI was trained and evaluated by Steven Limcorn. All computation and development are done on Google Colaboratory using their free GPU access.
Reference
---------
The dataset we used is by IndoNLI.
|
[
"### Result\n\n\n\nModel\n-----\n\n\nThe model was trained on with 5 epochs, batch size 16, learning rate 2e-5 and weight decay 0.01. Achieved different metrics as shown below.\n\n\n\nHow to Use\n----------",
"### As NLI Classifier\n\n\nDisclaimer\n----------\n\n\nDo consider the biases which come from both the pre-trained RoBERTa model and the 'INDONLI' dataset that may be carried over into the results of this model.\n\n\nAuthor\n------\n\n\nIndonesian RoBERTa Base IndoNLI was trained and evaluated by Steven Limcorn. All computation and development are done on Google Colaboratory using their free GPU access.\n\n\nReference\n---------\n\n\nThe dataset we used is by IndoNLI."
] |
[
"TAGS\n#transformers #pytorch #tf #roberta #text-classification #id #dataset-indonli #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Result\n\n\n\nModel\n-----\n\n\nThe model was trained on with 5 epochs, batch size 16, learning rate 2e-5 and weight decay 0.01. Achieved different metrics as shown below.\n\n\n\nHow to Use\n----------",
"### As NLI Classifier\n\n\nDisclaimer\n----------\n\n\nDo consider the biases which come from both the pre-trained RoBERTa model and the 'INDONLI' dataset that may be carried over into the results of this model.\n\n\nAuthor\n------\n\n\nIndonesian RoBERTa Base IndoNLI was trained and evaluated by Steven Limcorn. All computation and development are done on Google Colaboratory using their free GPU access.\n\n\nReference\n---------\n\n\nThe dataset we used is by IndoNLI."
] |
text-classification
|
transformers
|
# Indo RoBERTa Emotion Classifier
Indo RoBERTa Emotion Classifier is an emotion classifier based on the [Indo-roberta](https://huggingface.co/flax-community/indonesian-roberta-base) model. It was trained on the [IndoNLU EmoT](https://huggingface.co/datasets/indonlu) dataset and transfer-learned into an emotion classifier. Based on the [IndoNLU benchmark](https://www.indobenchmark.com/), the model achieves an F1-macro of 72.05%, accuracy of 71.81%, precision of 72.47% and recall of 71.94%.
## Model
The model was trained for 7 epochs with a learning rate of 2e-5, achieving the metrics shown below.
| Epoch | Training Loss | Validation Loss | Accuracy | F1 | Precision | Recall |
|-------|---------------|-----------------|----------|----------|-----------|----------|
| 1 | 1.300700 | 1.005149 | 0.622727 | 0.601846 | 0.640845 | 0.611144 |
| 2 | 0.806300 | 0.841953 | 0.686364 | 0.694096 | 0.701984 | 0.696657 |
| 3 | 0.591900 | 0.796794 | 0.686364 | 0.696573 | 0.707520 | 0.691671 |
| 4 | 0.441200 | 0.782094 | 0.722727 | 0.724359 | 0.725985 | 0.730229 |
| 5 | 0.334700 | 0.809931 | 0.711364 | 0.720550 | 0.718318 | 0.724608 |
| 6 | 0.268400 | 0.812771 | 0.718182 | 0.724192 | 0.721222 | 0.729195 |
| 7 | 0.226000 | 0.828461 | 0.725000 | 0.733625 | 0.731709 | 0.735800 |
## How to Use
### As Text Classifier
```python
from transformers import pipeline
pretrained_name = "StevenLimcorn/indonesian-roberta-base-emotion-classifier"
nlp = pipeline(
"sentiment-analysis",
model=pretrained_name,
tokenizer=pretrained_name
)
nlp("Hal-hal baik akan datang.")
```
## Disclaimer
Do consider the biases which come from both the pre-trained RoBERTa model and the `EmoT` dataset that may be carried over into the results of this model.
## Author
Indonesian RoBERTa Base Emotion Classifier was trained and evaluated by [Steven Limcorn](https://github.com/stevenlimcorn). All computation and development are done on Google Colaboratory using their free GPU access.
If used, please cite
```bibtex
@misc {steven_limcorn_2023,
author = { {Steven Limcorn} },
title = { indonesian-roberta-base-emotion-classifier (Revision e8a9cb9) },
year = 2023,
url = { https://huggingface.co/StevenLimcorn/indonesian-roberta-base-emotion-classifier },
doi = { 10.57967/hf/0681 },
publisher = { Hugging Face }
}
```
|
{"language": "id", "license": "mit", "tags": ["roberta"], "datasets": ["indonlu"], "widget": [{"text": "Hal-hal baik akan datang."}]}
|
StevenLimcorn/indonesian-roberta-base-emotion-classifier
| null |
[
"transformers",
"pytorch",
"tf",
"safetensors",
"roberta",
"text-classification",
"id",
"dataset:indonlu",
"doi:10.57967/hf/0681",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"id"
] |
TAGS
#transformers #pytorch #tf #safetensors #roberta #text-classification #id #dataset-indonlu #doi-10.57967/hf/0681 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
|
Indo RoBERTa Emotion Classifier
===============================
Indo RoBERTa Emotion Classifier is an emotion classifier based on the Indo-roberta model. It was trained on the IndoNLU EmoT dataset and transfer-learned into an emotion classifier. Based on the IndoNLU benchmark, the model achieves an F1-macro of 72.05%, accuracy of 71.81%, precision of 72.47% and recall of 71.94%.
Model
-----
The model was trained for 7 epochs with a learning rate of 2e-5, achieving the metrics shown below.
How to Use
----------
### As Text Classifier
Disclaimer
----------
Do consider the biases which come from both the pre-trained RoBERTa model and the 'EmoT' dataset that may be carried over into the results of this model.
Author
------
Indonesian RoBERTa Base Emotion Classifier was trained and evaluated by Steven Limcorn. All computation and development are done on Google Colaboratory using their free GPU access.
If used, please cite
|
[
"### As Text Classifier\n\n\nDisclaimer\n----------\n\n\nDo consider the biases which come from both the pre-trained RoBERTa model and the 'EmoT' dataset that may be carried over into the results of this model.\n\n\nAuthor\n------\n\n\nIndonesian RoBERTa Base Emotion Classifier was trained and evaluated by Steven Limcorn. All computation and development are done on Google Colaboratory using their free GPU access.\n\n\nIf used, please cite"
] |
[
"TAGS\n#transformers #pytorch #tf #safetensors #roberta #text-classification #id #dataset-indonlu #doi-10.57967/hf/0681 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### As Text Classifier\n\n\nDisclaimer\n----------\n\n\nDo consider the biases which come from both the pre-trained RoBERTa model and the 'EmoT' dataset that may be carried over into the results of this model.\n\n\nAuthor\n------\n\n\nIndonesian RoBERTa Base Emotion Classifier was trained and evaluated by Steven Limcorn. All computation and development are done on Google Colaboratory using their free GPU access.\n\n\nIf used, please cite"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the COMMON_VOICE - ZH-TW dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1786
- Wer: 0.8594
- Cer: 0.2964
## Model description
More information needed
## Intended uses & limitations
More information needed
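Pending proper documentation, the sketch below shows one hedged way to run inference, including on longer recordings; the repository id is taken from this repo and the audio path is a placeholder for a 16 kHz recording.
```python
from transformers import pipeline

# Assumed checkpoint id (this repo); chunking lets the CTC model handle recordings longer than a few seconds.
asr = pipeline(
    "automatic-speech-recognition",
    model="StevenLimcorn/wav2vec2-xls-r-300m-zh-TW",
    chunk_length_s=10,
)
print(asr("recording_zh_tw.wav")["text"])
```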
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| 64.6189 | 2.51 | 500 | 63.8077 | 1.0 | 1.0 |
| 8.0561 | 5.03 | 1000 | 6.8014 | 1.0 | 1.0 |
| 6.0427 | 7.54 | 1500 | 6.0745 | 1.0 | 1.0 |
| 5.9357 | 10.05 | 2000 | 5.8682 | 1.0 | 1.0 |
| 5.0489 | 12.56 | 2500 | 4.4032 | 0.9990 | 0.7750 |
| 4.6184 | 15.08 | 3000 | 3.8383 | 0.9983 | 0.6768 |
| 4.365 | 17.59 | 3500 | 3.4633 | 0.9959 | 0.6299 |
| 4.1026 | 20.1 | 4000 | 3.0732 | 0.9902 | 0.5814 |
| 3.8655 | 22.61 | 4500 | 2.7638 | 0.9868 | 0.5465 |
| 3.6991 | 25.13 | 5000 | 2.4759 | 0.9811 | 0.5088 |
| 3.4894 | 27.64 | 5500 | 2.2937 | 0.9746 | 0.4852 |
| 3.3983 | 30.15 | 6000 | 2.1684 | 0.9733 | 0.4674 |
| 3.2736 | 32.66 | 6500 | 2.0372 | 0.9659 | 0.4458 |
| 3.1884 | 35.18 | 7000 | 1.9267 | 0.9648 | 0.4329 |
| 3.1248 | 37.69 | 7500 | 1.8408 | 0.9591 | 0.4217 |
| 3.0381 | 40.2 | 8000 | 1.7531 | 0.9503 | 0.4074 |
| 2.9515 | 42.71 | 8500 | 1.6880 | 0.9459 | 0.3967 |
| 2.8704 | 45.23 | 9000 | 1.6264 | 0.9378 | 0.3884 |
| 2.8128 | 47.74 | 9500 | 1.5621 | 0.9341 | 0.3782 |
| 2.7386 | 50.25 | 10000 | 1.5011 | 0.9243 | 0.3664 |
| 2.6646 | 52.76 | 10500 | 1.4608 | 0.9192 | 0.3575 |
| 2.6072 | 55.28 | 11000 | 1.4251 | 0.9148 | 0.3501 |
| 2.569 | 57.79 | 11500 | 1.3837 | 0.9060 | 0.3462 |
| 2.5091 | 60.3 | 12000 | 1.3589 | 0.9070 | 0.3392 |
| 2.4588 | 62.81 | 12500 | 1.3261 | 0.8966 | 0.3284 |
| 2.4083 | 65.33 | 13000 | 1.3052 | 0.8982 | 0.3265 |
| 2.3787 | 67.84 | 13500 | 1.2997 | 0.8908 | 0.3243 |
| 2.3457 | 70.35 | 14000 | 1.2778 | 0.8898 | 0.3187 |
| 2.3099 | 72.86 | 14500 | 1.2661 | 0.8830 | 0.3172 |
| 2.2559 | 75.38 | 15000 | 1.2475 | 0.8851 | 0.3143 |
| 2.2264 | 77.89 | 15500 | 1.2319 | 0.8739 | 0.3085 |
| 2.196 | 80.4 | 16000 | 1.2218 | 0.8722 | 0.3049 |
| 2.1613 | 82.91 | 16500 | 1.2093 | 0.8719 | 0.3051 |
| 2.1455 | 85.43 | 17000 | 1.2055 | 0.8624 | 0.3005 |
| 2.1193 | 87.94 | 17500 | 1.1975 | 0.8600 | 0.2982 |
| 2.0911 | 90.45 | 18000 | 1.1960 | 0.8648 | 0.3003 |
| 2.0884 | 92.96 | 18500 | 1.1871 | 0.8638 | 0.2971 |
| 2.0766 | 95.48 | 19000 | 1.1814 | 0.8617 | 0.2967 |
| 2.0735 | 97.99 | 19500 | 1.1801 | 0.8621 | 0.2969 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
{"language": ["zh-TW"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "common_voice", "generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "", "results": []}]}
|
StevenLimcorn/wav2vec2-xls-r-300m-zh-TW
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"zh-TW"
] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #common_voice #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the COMMON\_VOICE - ZH-TW dataset.
It achieves the following results on the evaluation set:
* Loss: 1.1786
* Wer: 0.8594
* Cer: 0.2964
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 7.5e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 2000
* num\_epochs: 100.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu102
* Datasets 1.18.2.dev0
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #common_voice #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] |
text-generation
|
transformers
|
@ Deltarune Spamton DialoGPT Model
|
{"tags": ["conversational"]}
|
Stevo/DiagloGPT-medium-spamton
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
@ Deltarune Spamton DialoGPT Model
|
[] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-finetuned-ner-4
This model is part of a test for creating multilingual biomedical NER systems. It is not intended for professional use yet.
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the CRAFT+BC4CHEMD+BioNLP09 datasets concatenated.
It achieves the following results on the evaluation set:
- Loss: 0.1027
- Precision: 0.9830
- Recall: 0.9832
- F1: 0.9831
- Accuracy: 0.9799
## Model description
More information needed
## Intended uses & limitations
More information needed
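As an interim illustration, the sketch below runs the checkpoint through the token-classification pipeline; the repository id is taken from this repo, the example sentence is arbitrary, and the returned label names depend on the CRAFT+BC4CHEMD+BioNLP09 tag set used during fine-tuning.
```python
from transformers import pipeline

# Assumed checkpoint id (this repo); the sentence is an arbitrary biomedical example.
ner = pipeline(
    "ner",
    model="StivenLancheros/mBERT-base-Biomedical-NER",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("The p53 protein regulates the cell cycle in Homo sapiens."))
```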
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0658 | 1.0 | 6128 | 0.0751 | 0.9795 | 0.9795 | 0.9795 | 0.9758 |
| 0.0406 | 2.0 | 12256 | 0.0753 | 0.9827 | 0.9815 | 0.9821 | 0.9786 |
| 0.0182 | 3.0 | 18384 | 0.0934 | 0.9834 | 0.9825 | 0.9829 | 0.9796 |
| 0.011 | 4.0 | 24512 | 0.1027 | 0.9830 | 0.9832 | 0.9831 | 0.9799 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "bert-base-multilingual-cased-finetuned-ner-4", "results": []}]}
|
StivenLancheros/mBERT-base-Biomedical-NER
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
bert-base-multilingual-cased-finetuned-ner-4
============================================
This model is part of a test for creating multilingual biomedical NER systems. It is not intended for professional use yet.
This model is a fine-tuned version of bert-base-multilingual-cased on the CRAFT+BC4CHEMD+BioNLP09 datasets concatenated.
It achieves the following results on the evaluation set:
* Loss: 0.1027
* Precision: 0.9830
* Recall: 0.9832
* F1: 0.9831
* Accuracy: 0.9799
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 3e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 4
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-biomedical-clinical-es](https://huggingface.co/PlanTL-GOB-ES/roberta-base-biomedical-clinical-es) on the CRAFT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1720
- Precision: 0.8253
- Recall: 0.8147
- F1: 0.8200
- Accuracy: 0.9660
## Model description
This model performs Named Entity Recognition for 6 entity tags: Sequence, Cell, Protein, Gene, Taxon, and Chemical from the [CRAFT](https://github.com/UCDenver-ccp/CRAFT/releases) (Colorado Richly Annotated Full Text) corpus in English.
Entity tags have been normalized, replacing the original three-letter codes with full names, e.g. B-Protein, I-Chemical.
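A minimal, hedged usage sketch follows; the repository id is taken from this repo and the sentence is an arbitrary example mixing several of the entity types above.
```python
from transformers import pipeline

# Assumed checkpoint id (this repo); aggregation merges word pieces into whole entity spans.
ner = pipeline(
    "ner",
    model="StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT",
    aggregation_strategy="simple",
)
print(ner("The BRCA1 gene encodes a protein involved in DNA repair in Homo sapiens."))
```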
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1133 | 1.0 | 1360 | 0.1629 | 0.7985 | 0.7782 | 0.7882 | 0.9610 |
| 0.049 | 2.0 | 2720 | 0.1530 | 0.8165 | 0.8084 | 0.8124 | 0.9651 |
| 0.0306 | 3.0 | 4080 | 0.1603 | 0.8198 | 0.8075 | 0.8136 | 0.9650 |
| 0.0158 | 4.0 | 5440 | 0.1720 | 0.8253 | 0.8147 | 0.8200 | 0.9660 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT", "results": []}]}
|
StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT
| null |
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #roberta #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT
=======================================================
This model is a fine-tuned version of PlanTL-GOB-ES/roberta-base-biomedical-clinical-es on the CRAFT dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1720
* Precision: 0.8253
* Recall: 0.8147
* F1: 0.8200
* Accuracy: 0.9660
Model description
-----------------
This model performs Named Entity Recognition for 6 entity tags: Sequence, Cell, Protein, Gene, Taxon, and Chemical from the CRAFT (Colorado Richly Annotated Full Text) corpus in English.
Entity tags have been normalized, replacing the original three-letter codes with full names, e.g. B-Protein, I-Chemical.
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 3e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 4
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.6
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.6"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #roberta #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.6"
] |
null | null |
asdf
|
{}
|
Subfire/testModel
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#region-us
|
asdf
|
[] |
[
"TAGS\n#region-us \n"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-ta-colab-new1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.6642
- eval_wer: 0.7611
- eval_runtime: 152.4412
- eval_samples_per_second: 11.683
- eval_steps_per_second: 1.463
- epoch: 10.11
- step: 960
## Model description
More information needed
## Intended uses & limitations
More information needed
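Until the card is completed, the following untested sketch shows greedy CTC decoding with this checkpoint; the repository id is taken from this repo and `clip_ta.wav` is a placeholder for a 16 kHz Tamil recording.
```python
import torch
import torchaudio
from transformers import AutoModelForCTC, AutoProcessor

# Assumed checkpoint id (this repo) and a placeholder audio path.
model_id = "Subhashini17/wav2vec2-large-xls-r-300m-ta-colab-new1"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForCTC.from_pretrained(model_id)

speech, rate = torchaudio.load("clip_ta.wav")
if rate != 16_000:  # the model expects 16 kHz input
    speech = torchaudio.transforms.Resample(rate, 16_000)(speech)

inputs = processor(speech.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])
```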
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.13.3
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-ta-colab-new1", "results": []}]}
|
Subhashini17/wav2vec2-large-xls-r-300m-ta-colab-new1
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
# wav2vec2-large-xls-r-300m-ta-colab-new1
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common_voice dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.6642
- eval_wer: 0.7611
- eval_runtime: 152.4412
- eval_samples_per_second: 11.683
- eval_steps_per_second: 1.463
- epoch: 10.11
- step: 960
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.13.3
- Tokenizers 0.10.3
|
[
"# wav2vec2-large-xls-r-300m-ta-colab-new1\n\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common_voice dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 0.6642\n- eval_wer: 0.7611\n- eval_runtime: 152.4412\n- eval_samples_per_second: 11.683\n- eval_steps_per_second: 1.463\n- epoch: 10.11\n- step: 960",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 30\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.10.0+cu113\n- Datasets 1.13.3\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"# wav2vec2-large-xls-r-300m-ta-colab-new1\n\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common_voice dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 0.6642\n- eval_wer: 0.7611\n- eval_runtime: 152.4412\n- eval_samples_per_second: 11.683\n- eval_steps_per_second: 1.463\n- epoch: 10.11\n- step: 960",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 30\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.10.0+cu113\n- Datasets 1.13.3\n- Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-ta-colab
This model is a fine-tuned version of [akashsivanandan/wav2vec2-large-xls-r-300m-tamil-colab-final](https://huggingface.co/akashsivanandan/wav2vec2-large-xls-r-300m-tamil-colab-final) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 5
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-ta-colab", "results": []}]}
|
Subhashini17/wav2vec2-large-xls-r-300m-ta-colab
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
# wav2vec2-large-xls-r-300m-ta-colab
This model is a fine-tuned version of akashsivanandan/wav2vec2-large-xls-r-300m-tamil-colab-final on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 5
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
[
"# wav2vec2-large-xls-r-300m-ta-colab\n\nThis model is a fine-tuned version of akashsivanandan/wav2vec2-large-xls-r-300m-tamil-colab-final on the common_voice dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 3\n- total_train_batch_size: 6\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 5\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.10.0+cu111\n- Datasets 1.13.3\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"# wav2vec2-large-xls-r-300m-ta-colab\n\nThis model is a fine-tuned version of akashsivanandan/wav2vec2-large-xls-r-300m-tamil-colab-final on the common_voice dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 3\n- total_train_batch_size: 6\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 5\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.10.0+cu111\n- Datasets 1.13.3\n- Tokenizers 0.10.3"
] |
token-classification
|
transformers
|
<h1>Bengali Named Entity Recognition</h1>
Fine-tuning bert-base-multilingual-cased on the WikiANN dataset for performing NER on the Bengali language.
## Label ID and its corresponding label name
| Label ID | Label Name|
| -------- | ----- |
|0 | O |
| 1 | B-PER |
| 2 | I-PER |
| 3 | B-ORG|
| 4 | I-ORG |
| 5 | B-LOC |
| 6 | I-LOC |
<h1>Results</h1>
| Name | Overall F1 | LOC F1 | ORG F1 | PER F1 |
| ---- | -------- | ----- | ---- | ---- |
| Train set | 0.997927 | 0.998246 | 0.996613 | 0.998769 |
| Validation set | 0.970187 | 0.969212 | 0.956831 | 0.982079 |
| Test set | 0.9673011 | 0.967120 | 0.963614 | 0.970938 |
Example
```py
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("Suchandra/bengali_language_NER")
model = AutoModelForTokenClassification.from_pretrained("Suchandra/bengali_language_NER")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "মারভিন দি মারসিয়ান"
ner_results = nlp(example)
ner_results
```
|
{"language": "bn", "datasets": ["wikiann"], "widget": [{"text": "\u09ae\u09be\u09b0\u09ad\u09bf\u09a8 \u09a6\u09bf \u09ae\u09be\u09b0\u09b8\u09bf\u09af\u09bc\u09be\u09a8", "example_title": "Sentence_1"}, {"text": "\u09b2\u09bf\u0993\u09a8\u09be\u09b0\u09cd\u09a6\u09cb \u09a6\u09be \u09ad\u09bf\u099e\u09cd\u099a\u09bf", "example_title": "Sentence_2"}, {"text": "\u09ac\u09b8\u09a8\u09bf\u09af\u09bc\u09be \u0993 \u09b9\u09be\u09b0\u09cd\u099c\u09c7\u0997\u09cb\u09ad\u09bf\u09a8\u09be", "example_title": "Sentence_3"}, {"text": "\u09b8\u09be\u0989\u09a5 \u0987\u09b8\u09cd\u099f \u0987\u0989\u09a8\u09bf\u09ad\u09be\u09b0\u09cd\u09b8\u09bf\u099f\u09bf", "example_title": "Sentence_4"}, {"text": "\u09ae\u09be\u09a8\u09bf\u0995 \u09ac\u09a8\u09cd\u09a6\u09cd\u09af\u09cb\u09aa\u09be\u09a7\u09cd\u09af\u09be\u09af\u09bc \u09b2\u09c7\u0996\u0995", "example_title": "Sentence_5"}]}
|
Suchandra/bengali_language_NER
| null |
[
"transformers",
"pytorch",
"safetensors",
"bert",
"token-classification",
"bn",
"dataset:wikiann",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"bn"
] |
TAGS
#transformers #pytorch #safetensors #bert #token-classification #bn #dataset-wikiann #autotrain_compatible #endpoints_compatible #region-us
|
Bengali Named Entity Recognition
================================
Fine-tuning bert-base-multilingual-cased on Wikiann dataset for performing NER on Bengali language.
Label ID and its corresponding label name
-----------------------------------------
Results
=======
Example
|
[] |
[
"TAGS\n#transformers #pytorch #safetensors #bert #token-classification #bn #dataset-wikiann #autotrain_compatible #endpoints_compatible #region-us \n"
] |
null | null |
## SunBERT
SunBERT is a variant of BERT trained on Ugandan text data for two tasks: classifying tweets as `Covid`/`Non-Covid`, and classifying social media news articles as either `Organic`, `Promotional`, or `Editorial`.
Information has become more abundant with the internet; in particular, people communicate in natural language over social media, and machine learning offers a good way to analyze that language. We used deep learning methods to analyze text from social media, building models based on Bidirectional Encoder Representations from Transformers (BERT) to perform two downstream tasks:
1. Classify social media posts as promotional, editorial, or organic, and
2. Identify tweets as either COVID-19 related or not. Both tasks show how machine learning can be used to analyze large volumes of text and to support decision making.
We open-source the dataset and source code of our model, called SunBERT, so that other people can adapt these techniques to their needs.
## Datasets:
We use data from Twitter and Facebook. The dataset contains tweets and posts from both social networks, collected through CrowdTangle - a tool from Facebook that helps follow, analyze and report on what's happening across social media.
## Models:
BERT (Bidirectional Encoder Representations from Transformers) is a deep learning model published by researchers at Google AI. It achieved state-of-the-art performance on different Natural Language Processing tasks, including Question Answering, Text Classification and Language Modelling. The key technical innovation is that BERT applies bidirectional training of the Transformer - a popular attention-based model - to language processing.
## Use Cases:
We have shown the application of SunBERT to three use cases: COVID-19 classification, news classification, and language adaptation for machine learning research and development. However, SunBERT can be extended to perform other tasks, including Question Answering, Masked Language Modelling, and Next Sentence Prediction.
Our code and datasets can be used as a starting point for any of these tasks, with minor modification to the model architecture.
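If the published checkpoint follows the standard Hugging Face sequence-classification layout, loading it would look roughly like the sketch below. This is a minimal, unverified sketch: the repo id `Sunbird/sunbert`, the example sentence, and the label handling are assumptions based on this card rather than on the published files.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumes the checkpoint is stored as a standard BERT sequence-classification model.
tokenizer = AutoTokenizer.from_pretrained("Sunbird/sunbert")
model = AutoModelForSequenceClassification.from_pretrained("Sunbird/sunbert")

text = "Ministry of Health announces new vaccination centres in Kampala."  # illustrative example
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = logits.argmax(dim=-1).item()
# The id-to-label mapping (Covid/Non-Covid, or Organic/Promotional/Editorial)
# depends on which of the two SunBERT classifiers was loaded.
print(predicted_id, model.config.id2label.get(predicted_id, "unknown"))
```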
|
{}
|
Sunbird/sunbert
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#region-us
|
## SunBERT
Sunbert is a variant of bert trained on Ugandan text data for the tasks of ''Covid/Non Covid'' tweet classification as well as classification of Social Media news articles as either ''Organic, Promotional or Editorial''
Information has become more abundant with the internet. Specifically, people communicate in natural language over social media. Machine learning offers a good way to analyze natural language. We utilized methods from deep learning to analyze text from social media. We build models based on deep learning architectures - Bidirectional Encoder Representations from Transformers (BERT) to perform two downstream tasks:
1. Analyze posts from social media as promotional, editorial or Organic and
2. To identify tweets as either covid19 related or not. Both tasks show the ability of machine learning to be used to analyze large data and be used to support decision making.
We open source the dataset and source code of our model called SunBERT so that other people can utilize these techniques to their needs.
## Datasets:
We use data from Twitter and Facebook. The dataset contained tweets and posts from both social networks collected through CrowdTangle - a tool from facebook to help follow, analyze and report on what’s happening across social media.
## Models:
BERT (Bidirectional Encoder Representations from Transformers is a deep learning model published by researchers at Google AI. It presented state of the art performance in different Natural Language Processing tasks including Question Answering, Text Classification and Language Modelling. The key technical innovation is that BERT applies a bidirectional training of the Transformer - a popular Attention-based model to language processing.
## Use Cases:
We have shown the application of SunBERT to three use cases, Covid19 classification, News Classification and Language adaptation for Machine Learning research and development. However, SunBERT can be extended to perform other tasks; these include; Question Answering, Masked Language Modelling, Next Sentence Prediction.
Our code and datasets can be used as a starting point for any of these tasks, with minor modification to the model architecture.
|
[
"## SunBERT\n\nSunbert is a variant of bert trained on Ugandan text data for the tasks of ''Covid/Non Covid'' tweet classification as well as classification of Social Media news articles as either ''Organic, Promotional or Editorial''\n\nInformation has become more abundant with the internet. Specifically, people communicate in natural language over social media. Machine learning offers a good way to analyze natural language. We utilized methods from deep learning to analyze text from social media. We build models based on deep learning architectures - Bidirectional Encoder Representations from Transformers (BERT) to perform two downstream tasks: \n1. Analyze posts from social media as promotional, editorial or Organic and\n2. To identify tweets as either covid19 related or not. Both tasks show the ability of machine learning to be used to analyze large data and be used to support decision making.\n\nWe open source the dataset and source code of our model called SunBERT so that other people can utilize these techniques to their needs.",
"## Datasets:\nWe use data from Twitter and Facebook. The dataset contained tweets and posts from both social networks collected through CrowdTangle - a tool from facebook to help follow, analyze and report on what’s happening across social media.",
"## Models:\nBERT (Bidirectional Encoder Representations from Transformers is a deep learning model published by researchers at Google AI. It presented state of the art performance in different Natural Language Processing tasks including Question Answering, Text Classification and Language Modelling. The key technical innovation is that BERT applies a bidirectional training of the Transformer - a popular Attention-based model to language processing.",
"## Use Cases:\nWe have shown the application of SunBERT to three use cases, Covid19 classification, News Classification and Language adaptation for Machine Learning research and development. However, SunBERT can be extended to perform other tasks; these include; Question Answering, Masked Language Modelling, Next Sentence Prediction. \nOur code and datasets can be used as a starting point for any of these tasks, with minor modification to the model architecture."
] |
[
"TAGS\n#region-us \n",
"## SunBERT\n\nSunbert is a variant of bert trained on Ugandan text data for the tasks of ''Covid/Non Covid'' tweet classification as well as classification of Social Media news articles as either ''Organic, Promotional or Editorial''\n\nInformation has become more abundant with the internet. Specifically, people communicate in natural language over social media. Machine learning offers a good way to analyze natural language. We utilized methods from deep learning to analyze text from social media. We build models based on deep learning architectures - Bidirectional Encoder Representations from Transformers (BERT) to perform two downstream tasks: \n1. Analyze posts from social media as promotional, editorial or Organic and\n2. To identify tweets as either covid19 related or not. Both tasks show the ability of machine learning to be used to analyze large data and be used to support decision making.\n\nWe open source the dataset and source code of our model called SunBERT so that other people can utilize these techniques to their needs.",
"## Datasets:\nWe use data from Twitter and Facebook. The dataset contained tweets and posts from both social networks collected through CrowdTangle - a tool from facebook to help follow, analyze and report on what’s happening across social media.",
"## Models:\nBERT (Bidirectional Encoder Representations from Transformers is a deep learning model published by researchers at Google AI. It presented state of the art performance in different Natural Language Processing tasks including Question Answering, Text Classification and Language Modelling. The key technical innovation is that BERT applies a bidirectional training of the Transformer - a popular Attention-based model to language processing.",
"## Use Cases:\nWe have shown the application of SunBERT to three use cases, Covid19 classification, News Classification and Language adaptation for Machine Learning research and development. However, SunBERT can be extended to perform other tasks; these include; Question Answering, Masked Language Modelling, Next Sentence Prediction. \nOur code and datasets can be used as a starting point for any of these tasks, with minor modification to the model architecture."
] |
text2text-generation
|
transformers
|
English to Luganda text translation
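A minimal usage sketch (the example sentence is illustrative; it assumes the checkpoint follows the standard MarianMT layout indicated by the repository tags):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Sunbird/sunbird-en-lg"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate an English sentence into Luganda.
batch = tokenizer(["Good morning, how are you?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```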
|
{}
|
Sunbird/sunbird-en-lg
| null |
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #marian #text2text-generation #autotrain_compatible #endpoints_compatible #region-us
|
English to Luganda text translation
|
[] |
[
"TAGS\n#transformers #pytorch #marian #text2text-generation #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-generation
|
transformers
|
# Bill Cipher chat bot
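A minimal chat sketch, assuming the checkpoint is used like a DialoGPT-style conversational GPT-2 model; the prompt below is illustrative and not taken from the model's training data:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Sunnydx/BillCipherBot")
model = AutoModelForCausalLM.from_pretrained("Sunnydx/BillCipherBot")

# Encode the user's message plus the end-of-sequence token, then let the model reply.
user_input = "Hello, who are you?"
input_ids = tokenizer.encode(user_input + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True))
```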
|
{"tags": ["conversational"]}
|
Sunnydx/BillCipherBot
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
#Bill cipher chat bot
|
[] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
[SuperAI Engineer Season 2](https://superai.aiat.or.th/) , [Machima](https://machchima.superai.me/)
[Google's mT5](https://github.com/google-research/multilingual-t5) , [Pollawat](https://huggingface.co/Pollawat/mt5-small-thai-qg)
```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = T5ForConditionalGeneration.from_pretrained('SuperAI2-Machima/mt5-small-thai-qg-v2').to(device)
tokenizer = T5Tokenizer.from_pretrained('SuperAI2-Machima/mt5-small-thai-qg-v2')
source_text = 'บุกยึดไม้เถื่อน อดีต ส.ส.บุรีรัมย์ เตรียมสร้างคฤหาสน์ทรงไทย 1 กันยายน 2550 12:00 น. ตำรวจภูธรจ.บุรีรัมย์บุกตรวจยึดไม้แปรรูปหวงห้ามกว่า 80 แผ่น'
print('Predicted Summary Text : ')
tokenized_text = tokenizer.encode(source_text, return_tensors="pt").to(device)
summary_ids = model.generate(tokenized_text,
num_beams=4,
no_repeat_ngram_size=2,
max_length=50,
early_stopping=True)
output = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(output)
#Predicted Summary Text :
#answer: 80 แผ่น question: ตํารวจภูธรจ.บุรีรัมย์บุกตรวจยึดไม้แปรรูปหวงห้ามกว่ากี่แผ่น
```
|
{"language": ["thai", "th"], "license": "mit", "tags": ["question-generation"], "datasets": ["NSC2018", "wiki-documents-nsc", "ThaiQACorpus-DevelopmentDataset"], "widget": [{"text": "\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e1a\u0e49\u0e32\u0e19\u0e02\u0e38\u0e19\u0e14\u0e48\u0e32\u0e19 \u0e15\u0e31\u0e49\u0e07\u0e2d\u0e22\u0e39\u0e48\u0e17\u0e35\u0e48\u0e02\u0e38\u0e19\u0e14\u0e48\u0e32\u0e19 \u0e08.\u0e19\u0e04\u0e23\u0e19\u0e32\u0e22\u0e01 </s>", "example_title": "Example 01"}, {"text": "\u0e1e\u0e25\u0e40\u0e2d\u0e01 \u0e1b\u0e23\u0e30\u0e22\u0e38\u0e17\u0e18\u0e4c \u0e08\u0e31\u0e19\u0e17\u0e23\u0e4c\u0e42\u0e2d\u0e0a\u0e32 (\u0e40\u0e01\u0e34\u0e14 21 \u0e21\u0e35\u0e19\u0e32\u0e04\u0e21 \u0e1e.\u0e28. 2497) \u0e0a\u0e37\u0e48\u0e2d\u0e40\u0e25\u0e48\u0e19 \u0e15\u0e39\u0e48 \u0e40\u0e1b\u0e47\u0e19\u0e19\u0e31\u0e01\u0e01\u0e32\u0e23\u0e40\u0e21\u0e37\u0e2d\u0e07\u0e41\u0e25\u0e30\u0e2d\u0e14\u0e35\u0e15\u0e19\u0e32\u0e22\u0e17\u0e2b\u0e32\u0e23\u0e1a\u0e01\u0e0a\u0e32\u0e27\u0e44\u0e17\u0e22 </s>", "example_title": "Example 02"}, {"text": "\u0e27\u0e31\u0e19\u0e17\u0e35\u0e48 1 \u0e01\u0e31\u0e19\u0e22\u0e32\u0e22\u0e19 2550 12:00 \u0e19. \u0e15\u0e33\u0e23\u0e27\u0e08\u0e20\u0e39\u0e18\u0e23\u0e08.\u0e1a\u0e38\u0e23\u0e35\u0e23\u0e31\u0e21\u0e22\u0e4c\u0e1a\u0e38\u0e01\u0e15\u0e23\u0e27\u0e08\u0e22\u0e36\u0e14\u0e44\u0e21\u0e49\u0e41\u0e1b\u0e23\u0e23\u0e39\u0e1b\u0e2b\u0e27\u0e07\u0e2b\u0e49\u0e32\u0e21\u0e01\u0e27\u0e48\u0e32 80 \u0e41\u0e1c\u0e48\u0e19 </s>", "example_title": "Example 03"}, {"text": "\u0e01\u0e23\u0e38\u0e07\u0e40\u0e17\u0e1e\u0e21\u0e2b\u0e32\u0e19\u0e04\u0e23 \u0e40\u0e1b\u0e47\u0e19\u0e28\u0e39\u0e19\u0e22\u0e4c\u0e01\u0e25\u0e32\u0e07\u0e01\u0e32\u0e23\u0e1b\u0e01\u0e04\u0e23\u0e2d\u0e07 \u0e01\u0e32\u0e23\u0e28\u0e36\u0e01\u0e29\u0e32 \u0e01\u0e32\u0e23\u0e04\u0e21\u0e19\u0e32\u0e04\u0e21\u0e02\u0e19\u0e2a\u0e48\u0e07 \u0e01\u0e32\u0e23\u0e40\u0e07\u0e34\u0e19\u0e01\u0e32\u0e23\u0e18\u0e19\u0e32\u0e04\u0e32\u0e23 \u0e01\u0e32\u0e23\u0e1e\u0e32\u0e13\u0e34\u0e0a\u0e22\u0e4c \u0e01\u0e32\u0e23\u0e2a\u0e37\u0e48\u0e2d\u0e2a\u0e32\u0e23 \u0e41\u0e25\u0e30\u0e04\u0e27\u0e32\u0e21\u0e40\u0e08\u0e23\u0e34\u0e0d\u0e02\u0e2d\u0e07\u0e1b\u0e23\u0e30\u0e40\u0e17\u0e28 \u0e15\u0e31\u0e49\u0e07\u0e2d\u0e22\u0e39\u0e48\u0e1a\u0e19\u0e2a\u0e32\u0e21\u0e40\u0e2b\u0e25\u0e35\u0e48\u0e22\u0e21\u0e1b\u0e32\u0e01\u0e41\u0e21\u0e48\u0e19\u0e49\u0e33\u0e40\u0e08\u0e49\u0e32\u0e1e\u0e23\u0e30\u0e22\u0e32 \u0e21\u0e35\u0e41\u0e21\u0e48\u0e19\u0e49\u0e33\u0e40\u0e08\u0e49\u0e32\u0e1e\u0e23\u0e30\u0e22\u0e32\u0e44\u0e2b\u0e25\u0e1c\u0e48\u0e32\u0e19\u0e41\u0e25\u0e30\u0e41\u0e1a\u0e48\u0e07\u0e40\u0e21\u0e37\u0e2d\u0e07\u0e2d\u0e2d\u0e01\u0e40\u0e1b\u0e47\u0e19 2 \u0e1d\u0e31\u0e48\u0e07 \u0e04\u0e37\u0e2d \u0e1d\u0e31\u0e48\u0e07\u0e1e\u0e23\u0e30\u0e19\u0e04\u0e23\u0e41\u0e25\u0e30\u0e1d\u0e31\u0e48\u0e07\u0e18\u0e19\u0e1a\u0e38\u0e23\u0e35 \u0e01\u0e23\u0e38\u0e07\u0e40\u0e17\u0e1e\u0e21\u0e2b\u0e32\u0e19\u0e04\u0e23\u0e21\u0e35\u0e1e\u0e37\u0e49\u0e19\u0e17\u0e35\u0e48\u0e17\u0e31\u0e49\u0e07\u0e2b\u0e21\u0e14 1,568.737 \u0e15\u0e23.\u0e01\u0e21. </s>", "example_title": "Example 04"}]}
|
SuperAI2-Machima/mt5-small-thai-qg-v2
| null |
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"question-generation",
"dataset:NSC2018",
"dataset:wiki-documents-nsc",
"dataset:ThaiQACorpus-DevelopmentDataset",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"thai",
"th"
] |
TAGS
#transformers #pytorch #mt5 #text2text-generation #question-generation #dataset-NSC2018 #dataset-wiki-documents-nsc #dataset-ThaiQACorpus-DevelopmentDataset #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
SuperAI Engineer Season 2 , Machima
Google's mT5 , Pollawat
|
[] |
[
"TAGS\n#transformers #pytorch #mt5 #text2text-generation #question-generation #dataset-NSC2018 #dataset-wiki-documents-nsc #dataset-ThaiQACorpus-DevelopmentDataset #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
[SuperAI Engineer Season 2](https://superai.aiat.or.th/) , [Machima](https://machchima.superai.me/)
[Google's mT5](https://github.com/google-research/multilingual-t5) , [Pollawat](https://huggingface.co/Pollawat/mt5-small-thai-qg)
```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = T5ForConditionalGeneration.from_pretrained('SuperAI2-Machima/mt5-small-thai-qg').to(device)
tokenizer = T5Tokenizer.from_pretrained('SuperAI2-Machima/mt5-small-thai-qg')
source_text = 'บุกยึดไม้เถื่อน อดีต ส.ส.บุรีรัมย์ เตรียมสร้างคฤหาสน์ทรงไทย 1 กันยายน 2550 12:00 น. ตำรวจภูธรจ.บุรีรัมย์บุกตรวจยึดไม้แปรรูปหวงห้ามกว่า 80 แผ่น'
print('Predicted Summary Text : ')
tokenized_text = tokenizer.encode(source_text, return_tensors="pt").to(device)
summary_ids = model.generate(tokenized_text,
num_beams=4,
no_repeat_ngram_size=2,
max_length=50,
early_stopping=True)
output = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(output)
#Predicted Summary Text :
#answer: 80 แผ่น question: ตํารวจภูธรจ.บุรีรัมย์บุกตรวจยึดไม้แปรรูปหวงห้ามกว่ากี่แผ่น
```
|
{"language": ["thai", "th"], "license": "mit", "tags": ["question-generation"], "datasets": ["NSC2018", "wiki-documents-nsc", "ThaiQACorpus-DevelopmentDataset"], "widget": [{"text": "\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e1a\u0e49\u0e32\u0e19\u0e02\u0e38\u0e19\u0e14\u0e48\u0e32\u0e19 \u0e15\u0e31\u0e49\u0e07\u0e2d\u0e22\u0e39\u0e48\u0e17\u0e35\u0e48\u0e02\u0e38\u0e19\u0e14\u0e48\u0e32\u0e19 \u0e08.\u0e19\u0e04\u0e23\u0e19\u0e32\u0e22\u0e01", "example_title": "Example 01"}, {"text": "\u0e1e\u0e25\u0e40\u0e2d\u0e01 \u0e1b\u0e23\u0e30\u0e22\u0e38\u0e17\u0e18\u0e4c \u0e08\u0e31\u0e19\u0e17\u0e23\u0e4c\u0e42\u0e2d\u0e0a\u0e32 (\u0e40\u0e01\u0e34\u0e14 21 \u0e21\u0e35\u0e19\u0e32\u0e04\u0e21 \u0e1e.\u0e28. 2497) \u0e0a\u0e37\u0e48\u0e2d\u0e40\u0e25\u0e48\u0e19 \u0e15\u0e39\u0e48 \u0e40\u0e1b\u0e47\u0e19\u0e19\u0e31\u0e01\u0e01\u0e32\u0e23\u0e40\u0e21\u0e37\u0e2d\u0e07\u0e41\u0e25\u0e30\u0e2d\u0e14\u0e35\u0e15\u0e19\u0e32\u0e22\u0e17\u0e2b\u0e32\u0e23\u0e1a\u0e01\u0e0a\u0e32\u0e27\u0e44\u0e17\u0e22", "example_title": "Example 02"}, {"text": "\u0e27\u0e31\u0e19\u0e17\u0e35\u0e48 1 \u0e01\u0e31\u0e19\u0e22\u0e32\u0e22\u0e19 2550 12:00 \u0e19. \u0e15\u0e33\u0e23\u0e27\u0e08\u0e20\u0e39\u0e18\u0e23\u0e08.\u0e1a\u0e38\u0e23\u0e35\u0e23\u0e31\u0e21\u0e22\u0e4c\u0e1a\u0e38\u0e01\u0e15\u0e23\u0e27\u0e08\u0e22\u0e36\u0e14\u0e44\u0e21\u0e49\u0e41\u0e1b\u0e23\u0e23\u0e39\u0e1b\u0e2b\u0e27\u0e07\u0e2b\u0e49\u0e32\u0e21\u0e01\u0e27\u0e48\u0e32 80 \u0e41\u0e1c\u0e48\u0e19", "example_title": "Example 03"}]}
|
SuperAI2-Machima/mt5-small-thai-qg
| null |
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"question-generation",
"dataset:NSC2018",
"dataset:wiki-documents-nsc",
"dataset:ThaiQACorpus-DevelopmentDataset",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"thai",
"th"
] |
TAGS
#transformers #pytorch #mt5 #text2text-generation #question-generation #dataset-NSC2018 #dataset-wiki-documents-nsc #dataset-ThaiQACorpus-DevelopmentDataset #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
SuperAI Engineer Season 2 , Machima
Google's mT5 , Pollawat
|
[] |
[
"TAGS\n#transformers #pytorch #mt5 #text2text-generation #question-generation #dataset-NSC2018 #dataset-wiki-documents-nsc #dataset-ThaiQACorpus-DevelopmentDataset #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
[SuperAI Engineer Season 2](https://superai.aiat.or.th/) , [Machima](https://machchima.superai.me/)
[Google's mT5](https://github.com/google-research/multilingual-t5) , [Pollawat](https://huggingface.co/Pollawat/mt5-small-thai-qg)
```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = T5ForConditionalGeneration.from_pretrained('SuperAI2-Machima/mt5-small-thai-yes-no-qg').to(device)
tokenizer = T5Tokenizer.from_pretrained('SuperAI2-Machima/mt5-small-thai-yes-no-qg')
source_text = 'บุกยึดไม้เถื่อน อดีต ส.ส.บุรีรัมย์ เตรียมสร้างคฤหาสน์ทรงไทย 1 กันยายน 2550 12:00 น. ตำรวจภูธรจ.บุรีรัมย์บุกตรวจยึดไม้แปรรูปหวงห้ามกว่า 80 แผ่น'
print('Predicted Summary Text : ')
tokenized_text = tokenizer.encode(source_text, return_tensors="pt").to(device)
summary_ids = model.generate(tokenized_text,
num_beams=4,
no_repeat_ngram_size=2,
max_length=50,
early_stopping=True)
output = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(output)
#Predicted Summary Text :
#answer: 80 แผ่น question: ตํารวจภูธรจ.บุรีรัมย์บุกตรวจยึดไม้แปรรูปหวงห้ามกว่ากี่แผ่น
```
|
{"language": ["thai", "th"], "license": "mit", "tags": ["Yes No question-generation"], "datasets": ["NSC2018", "wiki-documents-nsc", "ThaiQACorpus-DevelopmentDataset"], "widget": [{"text": "\u0e27\u0e31\u0e19\u0e17\u0e35\u0e48 1 \u0e01\u0e31\u0e19\u0e22\u0e32\u0e22\u0e19 2550 12:00 \u0e19. \u0e15\u0e33\u0e23\u0e27\u0e08\u0e20\u0e39\u0e18\u0e23\u0e08.\u0e1a\u0e38\u0e23\u0e35\u0e23\u0e31\u0e21\u0e22\u0e4c\u0e1a\u0e38\u0e01\u0e15\u0e23\u0e27\u0e08\u0e22\u0e36\u0e14\u0e44\u0e21\u0e49\u0e41\u0e1b\u0e23\u0e23\u0e39\u0e1b\u0e2b\u0e27\u0e07\u0e2b\u0e49\u0e32\u0e21\u0e01\u0e27\u0e48\u0e32 80 \u0e41\u0e1c\u0e48\u0e19", "example_title": "Example 01"}, {"text": "\u0e1e\u0e25\u0e40\u0e2d\u0e01 \u0e1b\u0e23\u0e30\u0e22\u0e38\u0e17\u0e18\u0e4c \u0e08\u0e31\u0e19\u0e17\u0e23\u0e4c\u0e42\u0e2d\u0e0a\u0e32 (\u0e40\u0e01\u0e34\u0e14 21 \u0e21\u0e35\u0e19\u0e32\u0e04\u0e21 \u0e1e.\u0e28. 2497) \u0e0a\u0e37\u0e48\u0e2d\u0e40\u0e25\u0e48\u0e19 \u0e15\u0e39\u0e48 \u0e40\u0e1b\u0e47\u0e19\u0e19\u0e31\u0e01\u0e01\u0e32\u0e23\u0e40\u0e21\u0e37\u0e2d\u0e07\u0e41\u0e25\u0e30\u0e2d\u0e14\u0e35\u0e15\u0e19\u0e32\u0e22\u0e17\u0e2b\u0e32\u0e23\u0e1a\u0e01\u0e0a\u0e32\u0e27\u0e44\u0e17\u0e22", "example_title": "Example 02"}]}
|
SuperAI2-Machima/mt5-small-thai-yes-no-qg
| null |
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"Yes No question-generation",
"dataset:NSC2018",
"dataset:wiki-documents-nsc",
"dataset:ThaiQACorpus-DevelopmentDataset",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"thai",
"th"
] |
TAGS
#transformers #pytorch #mt5 #text2text-generation #Yes No question-generation #dataset-NSC2018 #dataset-wiki-documents-nsc #dataset-ThaiQACorpus-DevelopmentDataset #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
SuperAI Engineer Season 2 , Machima
Google's mT5 , Pollawat
|
[] |
[
"TAGS\n#transformers #pytorch #mt5 #text2text-generation #Yes No question-generation #dataset-NSC2018 #dataset-wiki-documents-nsc #dataset-ThaiQACorpus-DevelopmentDataset #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
# FreeIsland AI
With the advancement of the graphical processing power of computers and sophisticated algorithms like [Nanite](https://docs.unrealengine.com/5.0/en-US/RenderingFeatures/Nanite/), simulating lifelike sceneries in real time has never been easier. About a month ago Epic Games [showed off](https://www.youtube.com/watch?v=WU0gvPcc3jQ) the capabilities of their newest game engine by simulating an entire city, including population, traffic, and weather, running on a PlayStation 5. That made me think about what is missing from that simulation and how I could use my skills to improve it.
One of the main missing components that separates our world from the simulated one is people, and more importantly, the interactivity of people in simulated worlds. Last year a game called Cyberpunk 2077 was released, and it had an option to [talk to any person](https://www.youtube.com/watch?v=Z1OtYGzUoSo) in its city, but the problem was that all the responses from the non-player characters (NPCs) are hardcoded, which greatly reduces the immersion of the game.
So the goal of this project is to experiment with how advances in Natural Language Processing can make NPCs in video games interactive and enhance immersion.
# Usage
```py
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Supiri/t5-base-conversation")
trained_model = AutoModelForSeq2SeqLM.from_pretrained("Supiri/t5-base-conversation")
prompt = "What's your name?"
context = "Hinata was soft-spoken and polite, always addressing people with proper honorifics. She is kind, always thinking of others more than for herself, caring for their feelings and well-being. She doesn't like being confrontational for any reason. This led to her being meek or timid to others, as her overwhelming kindness can render her unable to respond or act for fear of offending somebody."
input_ids = tokenizer(f"personality: {context}", f"inquiry: {prompt}", return_tensors='pt').input_ids
outputs = trained_model.generate(input_ids, num_beams=6, diversity_penalty=2.5, num_beam_groups=2)
print("Answer:\t", tokenizer.decode(outputs[0], skip_special_tokens=True))
# Answer: My name is Hinata
```
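A note on the generation settings: `num_beam_groups=2` combined with a `diversity_penalty` enables diverse (group) beam search, which discourages the beam groups from producing near-identical candidate replies.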
# Evaluation
## Test 1
For this test, I sampled an input from the test dataset. For this question the ground-truth response is
> "It works a little."
But the model's response was
> "I don't want to flirt with you."
This reflects his bio, which was filled in by GPT-3:
> "He stands primarily to gain self-esteem, which he often receives through the submission of others"
In short, Dr. Greenbaum tried to tease Sebastian about his seductive traits, but this model's go-to response was to shut her down, since Sebastian's biography states that he often tries to assert his dominance over others.
```py
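# Assumes `dataset` is the evaluation split used during training and that
# `tokenizer` and `trained_model` are loaded as in the Usage section above.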
prompt = dataset['test'][66]['request']
contexts = dataset['test'][66]['bio']
input_ids = tokenizer(f"personality: {contexts}", f"inquiry: {prompt}", return_tensors='pt').input_ids
outputs = trained_model.generate(input_ids, num_beams=6, diversity_penalty=5.0, num_beam_groups=2)
print("Input to the Model")
print("Bio:\t",contexts)
print("\nPrompt:\t", prompt)
print("\nGround truth response")
print("\t", dataset['test'][66]['response'])
print("\nModel's Prediction")
print("Answer:\t", tokenizer.decode(outputs[0], skip_special_tokens=True))
```
```txt
Input to the Model
Bio: Sebastian is a very extreme representation of the trope of the "Confidence Man", and acts it out to a degree that is sometimes comedic but mostly frightening. He stands primarily to gain self-esteem, which he often receives through the submission of others or solely through his own perceptions. An artful seducer, his incredible charisma is both his greatest weapon and most intoxicating weakness.
Prompt: You think you can come in here with that cute little smirk on your face and try and flirt with me. It doesn't work, Sebastian.
Ground truth response
It works a little.
Model's Prediction
Answer: I don't want to flirt with you.
```
### Test 2
Hinata is a kind-hearted girl from the anime series Naruto. I took her bio from the [personality database](https://www.personality-database.com/profile/2790/hinata-hyga-naruto-shippden-mbti-personality-type) and asked a few questions about her.
Right off the bat, you can see the model understands the context: when I asked the model, "**What's your name?**", it responded with the name given in the context.
Also, notice that when prompted with the same question phrased differently (**"Who are you?"**), it still manages to answer it well.
```py
prompts = ["What's your name?", "How are you feeling?", "Do you like Star Wars?", "Who are you?", "Coffee or tea?"]
contexts = "Hinata was soft-spoken and polite, always addressing people with proper honorifics. She is kind, always thinking of others more than for herself, caring for their feelings and well-being. She doesn't like being confrontational for any reason. This led to her being meek or timid to others, as her overwhelming kindness can render her unable to respond or act for fear of offending somebody."
print("Bio:\t",contexts, "\n")
for prompt in prompts:
    input_ids = tokenizer(f"personality: {contexts}", f"inquiry: {prompt}", return_tensors='pt').input_ids
    outputs = trained_model.generate(input_ids, num_beams=6, diversity_penalty=5.0, num_beam_groups=2)
    print("Prompt:\t", prompt)
    print("Answer:\t", tokenizer.decode(outputs[0], skip_special_tokens=True), "\n")
```
```txt
Bio: Hinata was soft-spoken and polite, always addressing people with proper honorifics. She is kind, always thinking of others more than for herself, caring for their feelings and well-being. She doesn't like being confrontational for any reason. This led to her being meek or timid to others, as her overwhelming kindness can render her unable to respond or act for fear of offending somebody.
Prompt: What's your name?
Answer: My name is Hinata
Prompt: How are you feeling?
Answer: I'm fine.
Prompt: Do you like Star Wars?
Answer: No, I don't.
Prompt: Who are you?
Answer: My name is Hinata
Prompt: Coffee or tea?
Answer: No, I don't drink much.
```
# Conclusion
After training the `t5-base` model for 5 epochs, the model started adapting to the dataset, but there are many more improvements that can be made.
1. During dataset creation I had to limit the dataset to 200 unique characters out of the 9,035 present, due to **budget constraints**. If I manage to cover at least half of the dataset, this model should come up with far better responses.
2. Both input size and batch size were severely constrained by the lack of GPU memory. Using a batch size of 64 instead of 8 would bring massive improvements in both training time and **generalization of the model**.
3. Using a bigger model like `t5-large` or `t5-3b` will certainly improve the performance.
4. One of the main downsides of using this pre-trained model is that it was also trained on German, French, and Romanian, which consumed a chunk of the **vocabulary size and trainable parameters**. Retraining the model from scratch would help reduce both the needed parameter count and the training loss for this specific task.
|
{"language": "en", "license": "gpl-3.0", "tags": ["NLP", "ChatBot", "Game AI"], "datasets": ["cornell_movie_dialog"], "metrics": ["rouge"], "widget": [{"text": "personality: Hinata was soft-spoken and polite, always addressing people with proper honorifics. She is kind, always thinking of others more than for herself, caring for their feelings and well-being. She doesn't like being confrontational for any reason. This led to her being meek or timid to others, as her overwhelming kindness can render her unable to respond or act for fear of offending somebody.</s> inquiry: What's your name?", "example_title": "Talk to Hinata"}, {"text": "personality: Voldemort is a raging psychopath, devoid of the normal human responses to other people's suffering. He has no conscience, feels no remorse or empathy, and does not recognize the worth and humanity of anybody except himself.</s> inquiry: What's your name?", "example_title": "Talk to Voldemort"}], "inference": {"parameters": {"num_beams": 6, "diversity_penalty": 2.5, "num_beam_groups": 2}}}
|
Supiri/t5-base-conversation
| null |
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"NLP",
"ChatBot",
"Game AI",
"en",
"dataset:cornell_movie_dialog",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #tensorboard #t5 #text2text-generation #NLP #ChatBot #Game AI #en #dataset-cornell_movie_dialog #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# FreeIsland AI
With the advancement of the graphical processing power of computers and sophisticated algorithms like Nanite, simulating lifelike sceneries in real-time is never being easier. About a month ago Epic Games showoff the newest capabilities of their newest game engine by simulating an entire city including population, traffic, weather, etc running on a Playstore 5. That made me think what are the things missing from that simulation and how can I use my skills to improve it.
One of the main missing components that separate our world and the simulated world is people. More importantly, the interactivity of people in simulated worlds. Last year a game called cyberpunk got released and it had an option to talk to any person in its city but the problem with that was all the responses from the Non-player Characters (NPCs) are hardcoded which greatly reduce the immersion of the game.
So the goal of this project is to experiment with how the advancement of Natural Language Processing makes NPCs from video games interactive and enhances immersion in video games.
# Usage
# Evaluation
## Test 1
For this test, I sampled input from the test dataset. For this question the actual response is
> "It works a little."
But models' response was
> "I don't want to flirt with you."
Which reflect its bio which was filled by GPT-3
> "He stands primarily to gain self-esteem, which he often receives through the submission of others"
In gist, Dr. Greenbaum tried to tease Sebastian about his seductive traits but this model's go-to response was to shut her down since the biography of Sebastian states he often tries to assert his dominance over others.
### Test 2
Hinata is a kind-hearted girl from the anime series Naruto. I took her bio from personality database and ask a few questions about her.
Off the top, you can see the model understands the context since when I asked the model, "What's your name?" it responded with the name given with the context.
Also, notice when prompted the same question differently ("Who are you?"), it still manages to answer it well.
# Conclusion
After training the 't5-base' model for 5 epochs, the model started getting adapted to the dataset but there are a lot more improvements that can be done.
1. During the dataset creation part I had to limit the dataset size to 200 unique characters out of 9,035 that's present in the dataset due to the budget constraints. So If I manage to cover at least half of the dataset this model will have come up with far better responses.
2. Both input size and batch size were severely constrained due to the lack of access to GPU memory. Having the batch size of 64 is in contrast to 8 would have massive improvements in both training time and generalization of model.
3. Using a bigger model like 't5-large' or 't5-3b' will certainly improve the performance.
4. One of the main downsides to using this pre-trained model is this model was trained in German, French, and Romanian. Which consumed a chunk of the vocabulary size and trainable parameters. Retraining this model from scratch will help to reduce both needed parameter count and training loss when it comes to this specific task.
|
[
"# FreeIsland AI\n\nWith the advancement of the graphical processing power of computers and sophisticated algorithms like Nanite, simulating lifelike sceneries in real-time is never being easier. About a month ago Epic Games showoff the newest capabilities of their newest game engine by simulating an entire city including population, traffic, weather, etc running on a Playstore 5. That made me think what are the things missing from that simulation and how can I use my skills to improve it.\n\nOne of the main missing components that separate our world and the simulated world is people. More importantly, the interactivity of people in simulated worlds. Last year a game called cyberpunk got released and it had an option to talk to any person in its city but the problem with that was all the responses from the Non-player Characters (NPCs) are hardcoded which greatly reduce the immersion of the game.\n\nSo the goal of this project is to experiment with how the advancement of Natural Language Processing makes NPCs from video games interactive and enhances immersion in video games.",
"# Usage",
"# Evaluation",
"## Test 1\nFor this test, I sampled input from the test dataset. For this question the actual response is \n\n> \"It works a little.\"\n\nBut models' response was\n\n> \"I don't want to flirt with you.\"\n\nWhich reflect its bio which was filled by GPT-3\n\n> \"He stands primarily to gain self-esteem, which he often receives through the submission of others\"\n\n\nIn gist, Dr. Greenbaum tried to tease Sebastian about his seductive traits but this model's go-to response was to shut her down since the biography of Sebastian states he often tries to assert his dominance over others.",
"### Test 2\n\nHinata is a kind-hearted girl from the anime series Naruto. I took her bio from personality database and ask a few questions about her.\n\nOff the top, you can see the model understands the context since when I asked the model, \"What's your name?\" it responded with the name given with the context.\n\nAlso, notice when prompted the same question differently (\"Who are you?\"), it still manages to answer it well.",
"# Conclusion\n\nAfter training the 't5-base' model for 5 epochs, the model started getting adapted to the dataset but there are a lot more improvements that can be done.\n\n1. During the dataset creation part I had to limit the dataset size to 200 unique characters out of 9,035 that's present in the dataset due to the budget constraints. So If I manage to cover at least half of the dataset this model will have come up with far better responses.\n2. Both input size and batch size were severely constrained due to the lack of access to GPU memory. Having the batch size of 64 is in contrast to 8 would have massive improvements in both training time and generalization of model.\n3. Using a bigger model like 't5-large' or 't5-3b' will certainly improve the performance.\n4. One of the main downsides to using this pre-trained model is this model was trained in German, French, and Romanian. Which consumed a chunk of the vocabulary size and trainable parameters. Retraining this model from scratch will help to reduce both needed parameter count and training loss when it comes to this specific task."
] |
[
"TAGS\n#transformers #pytorch #tensorboard #t5 #text2text-generation #NLP #ChatBot #Game AI #en #dataset-cornell_movie_dialog #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# FreeIsland AI\n\nWith the advancement of the graphical processing power of computers and sophisticated algorithms like Nanite, simulating lifelike sceneries in real-time is never being easier. About a month ago Epic Games showoff the newest capabilities of their newest game engine by simulating an entire city including population, traffic, weather, etc running on a Playstore 5. That made me think what are the things missing from that simulation and how can I use my skills to improve it.\n\nOne of the main missing components that separate our world and the simulated world is people. More importantly, the interactivity of people in simulated worlds. Last year a game called cyberpunk got released and it had an option to talk to any person in its city but the problem with that was all the responses from the Non-player Characters (NPCs) are hardcoded which greatly reduce the immersion of the game.\n\nSo the goal of this project is to experiment with how the advancement of Natural Language Processing makes NPCs from video games interactive and enhances immersion in video games.",
"# Usage",
"# Evaluation",
"## Test 1\nFor this test, I sampled input from the test dataset. For this question the actual response is \n\n> \"It works a little.\"\n\nBut models' response was\n\n> \"I don't want to flirt with you.\"\n\nWhich reflect its bio which was filled by GPT-3\n\n> \"He stands primarily to gain self-esteem, which he often receives through the submission of others\"\n\n\nIn gist, Dr. Greenbaum tried to tease Sebastian about his seductive traits but this model's go-to response was to shut her down since the biography of Sebastian states he often tries to assert his dominance over others.",
"### Test 2\n\nHinata is a kind-hearted girl from the anime series Naruto. I took her bio from personality database and ask a few questions about her.\n\nOff the top, you can see the model understands the context since when I asked the model, \"What's your name?\" it responded with the name given with the context.\n\nAlso, notice when prompted the same question differently (\"Who are you?\"), it still manages to answer it well.",
"# Conclusion\n\nAfter training the 't5-base' model for 5 epochs, the model started getting adapted to the dataset but there are a lot more improvements that can be done.\n\n1. During the dataset creation part I had to limit the dataset size to 200 unique characters out of 9,035 that's present in the dataset due to the budget constraints. So If I manage to cover at least half of the dataset this model will have come up with far better responses.\n2. Both input size and batch size were severely constrained due to the lack of access to GPU memory. Having the batch size of 64 is in contrast to 8 would have massive improvements in both training time and generalization of model.\n3. Using a bigger model like 't5-large' or 't5-3b' will certainly improve the performance.\n4. One of the main downsides to using this pre-trained model is this model was trained in German, French, and Romanian. Which consumed a chunk of the vocabulary size and trainable parameters. Retraining this model from scratch will help to reduce both needed parameter count and training loss when it comes to this specific task."
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-squad
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0755
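For reference, the checkpoint can be queried with the standard `question-answering` pipeline; the question/context pair below is purely illustrative:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="SupriyaArun/bert-base-uncased-finetuned-squad")

# Any SQuAD-style question/context pair works.
result = qa(
    question="What dataset was this model fine-tuned on?",
    context="This checkpoint was fine-tuned on the SQuAD dataset for extractive question answering.",
)
print(result)  # dict with 'score', 'start', 'end', and 'answer'
```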
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.0698 | 1.0 | 5533 | 1.0240 |
| 0.7813 | 2.0 | 11066 | 1.0310 |
| 0.608 | 3.0 | 16599 | 1.0755 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-finetuned-squad", "results": []}]}
|
SupriyaArun/bert-base-uncased-finetuned-squad
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
bert-base-uncased-finetuned-squad
=================================
This model is a fine-tuned version of bert-base-uncased on the squad dataset.
It achieves the following results on the evaluation set:
* Loss: 1.0755
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.13.0
* Pytorch 1.10.0+cu111
* Datasets 1.16.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1569
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2213 | 1.0 | 5533 | 1.1560 |
| 0.943 | 2.0 | 11066 | 1.1227 |
| 0.7633 | 3.0 | 16599 | 1.1569 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "distilbert-base-uncased-finetuned-squad", "results": []}]}
|
SupriyaArun/distilbert-base-uncased-finetuned-squad
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
distilbert-base-uncased-finetuned-squad
=======================================
This model is a fine-tuned version of distilbert-base-uncased on the squad dataset.
It achieves the following results on the evaluation set:
* Loss: 1.1569
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.13.0
* Pytorch 1.10.0+cu111
* Datasets 1.16.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# squeezebert-uncased-finetuned-squad-finetuned-squad
This model is a fine-tuned version of [SupriyaArun/squeezebert-uncased-finetuned-squad](https://huggingface.co/SupriyaArun/squeezebert-uncased-finetuned-squad) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "squeezebert-uncased-finetuned-squad-finetuned-squad", "results": []}]}
|
SupriyaArun/squeezebert-uncased-finetuned-squad-finetuned-squad
| null |
[
"transformers",
"pytorch",
"squeezebert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #squeezebert #question-answering #generated_from_trainer #dataset-squad #endpoints_compatible #region-us
|
# squeezebert-uncased-finetuned-squad-finetuned-squad
This model is a fine-tuned version of SupriyaArun/squeezebert-uncased-finetuned-squad on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
[
"# squeezebert-uncased-finetuned-squad-finetuned-squad\n\nThis model is a fine-tuned version of SupriyaArun/squeezebert-uncased-finetuned-squad on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Framework versions\n\n- Transformers 4.13.0\n- Pytorch 1.10.0+cu111\n- Datasets 1.16.1\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #squeezebert #question-answering #generated_from_trainer #dataset-squad #endpoints_compatible #region-us \n",
"# squeezebert-uncased-finetuned-squad-finetuned-squad\n\nThis model is a fine-tuned version of SupriyaArun/squeezebert-uncased-finetuned-squad on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Framework versions\n\n- Transformers 4.13.0\n- Pytorch 1.10.0+cu111\n- Datasets 1.16.1\n- Tokenizers 0.10.3"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# squeezebert-uncased-finetuned-squad
This model is a fine-tuned version of [squeezebert/squeezebert-uncased](https://huggingface.co/squeezebert/squeezebert-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0808
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2624 | 1.0 | 5533 | 1.1648 |
| 1.0699 | 2.0 | 11066 | 1.0920 |
| 0.9463 | 3.0 | 16599 | 1.0808 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "squeezebert-uncased-finetuned-squad", "results": []}]}
|
SupriyaArun/squeezebert-uncased-finetuned-squad
| null |
[
"transformers",
"pytorch",
"tensorboard",
"squeezebert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #squeezebert #question-answering #generated_from_trainer #dataset-squad #endpoints_compatible #region-us
|
squeezebert-uncased-finetuned-squad
===================================
This model is a fine-tuned version of squeezebert/squeezebert-uncased on the squad dataset.
It achieves the following results on the evaluation set:
* Loss: 1.0808
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.13.0
* Pytorch 1.10.0+cu111
* Datasets 1.16.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #squeezebert #question-answering #generated_from_trainer #dataset-squad #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
null |
transformers
|
# BLEURT
Pretrained model on English language. It was introduced in
[this paper](https://arxiv.org/pdf/2004.04696.pdf), described in [this blogpost](https://ai.googleblog.com/2020/05/evaluating-natural-language-generation.html) and first released in
[this repository](https://github.com/google-research/bleurt).
The team releasing BLEURT did not write a model card for this model, so this model card has been written by
the Surfer team.
The original TensorFlow implementation has been converted to PyTorch with the help of [this article](https://medium.com/huggingface/from-tensorflow-to-pytorch-265f40ef2a28) by the Surfer team.
Visit us at [surferseo.com](https://surferseo.com).
### How to use
Since BLEURT is not implemented in the transformers library yet, you have to import BleurtModel from bleurt_model.py
```python
import torch
from bleurt_model import BleurtModel
from transformers import BertTokenizerFast
model = BleurtModel.from_pretrained("SurferSEO/bleurt")
tokenizer = BertTokenizerFast.from_pretrained("SurferSEO/bleurt")
sentence_pairs = [("I love surfing.", "I'd like to surf.")]
encoded = tokenizer(sentence_pairs, padding=True, truncation=True, return_tensors="pt")
input_ids, attention_mask, token_type_ids = (
encoded["input_ids"],
encoded["attention_mask"],
encoded["token_type_ids"],
)
with torch.set_grad_enabled(False):
predictions = model(
input_ids=input_ids,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
)
print(predictions)
```
|
{"language": "en", "license": "apache-2.0"}
|
Surfer/bleurt
| null |
[
"transformers",
"pytorch",
"bert",
"en",
"arxiv:2004.04696",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2004.04696"
] |
[
"en"
] |
TAGS
#transformers #pytorch #bert #en #arxiv-2004.04696 #license-apache-2.0 #endpoints_compatible #region-us
|
# BLEURT
Pretrained model on English language. It was introduced in
this paper, described in this blogpost and first released in
this repository.
The team releasing BLEURT did not write a model card for this model so this model card has been written by
the Surfer team.
Original TensorFlow implementation has been converted to PyTorch with help of this article by Surfer team.
Visit us at URL.
### How to use
Since BLEURT is not implemented in transformers library yet, you have to import BleurtModel from bleurt_model.py
|
[
"# BLEURT\n\nPretrained model on English language. It was introduced in\nthis paper, described in this blogpost and first released in\nthis repository.\n\nThe team releasing BLEURT did not write a model card for this model so this model card has been written by\nthe Surfer team.\n\nOriginal TensorFlow implementation has been converted to PyTorch with help of this article by Surfer team.\n\nVisit us at URL.",
"### How to use\n\nSince BLEURT is not implemented in transformers library yet, you have to import BleurtModel from bleurt_model.py"
] |
[
"TAGS\n#transformers #pytorch #bert #en #arxiv-2004.04696 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# BLEURT\n\nPretrained model on English language. It was introduced in\nthis paper, described in this blogpost and first released in\nthis repository.\n\nThe team releasing BLEURT did not write a model card for this model so this model card has been written by\nthe Surfer team.\n\nOriginal TensorFlow implementation has been converted to PyTorch with help of this article by Surfer team.\n\nVisit us at URL.",
"### How to use\n\nSince BLEURT is not implemented in transformers library yet, you have to import BleurtModel from bleurt_model.py"
] |
text2text-generation
|
transformers
|
## Usage:
```python
abstract = """We describe a system called Overton, whose main design goal is to support engineers in building, monitoring, and improving production
machine learning systems. Key challenges engineers face are monitoring fine-grained quality, diagnosing errors in sophisticated applications, and
handling contradictory or incomplete supervision data. Overton automates the life cycle of model construction, deployment, and monitoring by providing a
set of novel high-level, declarative abstractions. Overton's vision is to shift developers to these higher-level tasks instead of lower-level machine learning tasks.
In fact, using Overton, engineers can build deep-learning-based applications without writing any code in frameworks like TensorFlow. For over a year,
Overton has been used in production to support multiple applications in both near-real-time applications and back-of-house processing. In that time,
Overton-based applications have answered billions of queries in multiple languages and processed trillions of records reducing errors 1.7-2.9 times versus production systems.
"""
```
### Using Transformers🤗
```python
model_name = "Suva/uptag-url-model"
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
input_ids = tokenizer.encode("summarize: " + abstract, return_tensors="pt", add_special_tokens=True)
generated_ids = model.generate(input_ids=input_ids,num_beams=5,max_length=100,repetition_penalty=2.5,length_penalty=1,early_stopping=True,num_return_sequences=3)
preds = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=True) for g in generated_ids]
print(preds)
# output
["Overton: Building, Deploying, and Monitoring Machine Learning Systems for Engineers",
"Overton: A System for Building, Monitoring, and Improving Production Machine Learning Systems",
"Overton: Building, Monitoring, and Improving Production Machine Learning Systems"]
```
|
{"license": "mit", "datasets": ["arxiv"], "widget": [{"text": "summarize: We describe a system called Overton, whose main design goal is to support engineers in building, monitoring, and improving production machinelearning systems. Key challenges engineers face are monitoring fine-grained quality, diagnosing errors in sophisticated applications, and handling contradictory or incomplete supervision data. Overton automates the life cycle of model construction, deployment, and monitoring by providing a set of novel high-level, declarative abstractions. Overton's vision is to shift developers to these higher-level tasks instead of lower-level machine learning tasks. In fact, using Overton, engineers can build deep-learning-based applications without writing any code in frameworks like TensorFlow. For over a year, Overton has been used in production to support multiple applications in both near-real-time applications and back-of-house processing. In that time, Overton-based applications have answered billions of queries in multiple languages and processed trillions of records reducing errors 1.7-2.9 times versus production systems."}]}
|
Suva/uptag-url-model
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"dataset:arxiv",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #t5 #text2text-generation #dataset-arxiv #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
## Usage:
### Using Transformers
|
[
"## Usage:",
"### Using Transformers"
] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #dataset-arxiv #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"## Usage:",
"### Using Transformers"
] |
image-classification
|
transformers
|
# new-york-tokyo-london
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
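For illustration, here is a minimal usage sketch with the `transformers` image-classification pipeline; the image path is a placeholder for any local photo.
```python
from transformers import pipeline

# Load the classifier; it predicts the london / new york / tokyo labels.
classifier = pipeline("image-classification", model="Suzana/new-york-tokyo-london")

# Placeholder path to a local city photo.
predictions = classifier("city_photo.jpg")
print(predictions)
```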
## Example Images
#### London

#### New York

#### Tokyo

|
{"tags": ["image-classification", "pytorch", "huggingpics"], "metrics": ["accuracy"]}
|
Suzana/new-york-tokyo-london
| null |
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #vit #image-classification #huggingpics #model-index #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# new-york-tokyo-london
Autogenerated by HuggingPics️
Create your own image classifier for anything by running the demo on Google Colab.
Report any issues with the demo at the github repo.
## Example Images
#### London
!London
#### New York
!New York
#### Tokyo
!Tokyo
|
[
"# new-york-tokyo-london\n\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo on Google Colab.\n\nReport any issues with the demo at the github repo.",
"## Example Images",
"#### London\n\n!London",
"#### New York\n\n!New York",
"#### Tokyo\n\n!Tokyo"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #vit #image-classification #huggingpics #model-index #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# new-york-tokyo-london\n\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo on Google Colab.\n\nReport any issues with the demo at the github repo.",
"## Example Images",
"#### London\n\n!London",
"#### New York\n\n!New York",
"#### Tokyo\n\n!Tokyo"
] |
feature-extraction
|
transformers
|
# bert-german-dbmdz-uncased-sentence-stsb
**This model is outdated!**
The new [T-Systems-onsite/cross-en-de-roberta-sentence-transformer](https://huggingface.co/T-Systems-onsite/cross-en-de-roberta-sentence-transformer) model is better for German language. It is also the current best model for English language and works cross-lingually. Please consider using that model.
|
{"language": "de", "license": "mit"}
|
T-Systems-onsite/bert-german-dbmdz-uncased-sentence-stsb
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"feature-extraction",
"de",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"de"
] |
TAGS
#transformers #pytorch #tf #jax #safetensors #bert #feature-extraction #de #license-mit #endpoints_compatible #region-us
|
# bert-german-dbmdz-uncased-sentence-stsb
This model is outdated!
The new T-Systems-onsite/cross-en-de-roberta-sentence-transformer model is better for German language. It is also the current best model for English language and works cross-lingually. Please consider using that model.
|
[
"# bert-german-dbmdz-uncased-sentence-stsb\nThis model is outdated!\n\nThe new T-Systems-onsite/cross-en-de-roberta-sentence-transformer model is better for German language. It is also the current best model for English language and works cross-lingually. Please consider using that model."
] |
[
"TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #feature-extraction #de #license-mit #endpoints_compatible #region-us \n",
"# bert-german-dbmdz-uncased-sentence-stsb\nThis model is outdated!\n\nThe new T-Systems-onsite/cross-en-de-roberta-sentence-transformer model is better for German language. It is also the current best model for English language and works cross-lingually. Please consider using that model."
] |
feature-extraction
|
transformers
|
# Cross German & French RoBERTa for Sentence Embeddings
|
{"language": ["fr", "de", "multilingual"], "license": "mit", "tags": ["sentence_embedding", "search", "pytorch", "xlm-roberta", "roberta", "xlm-r-distilroberta-base-paraphrase-v1"], "datasets": ["stsb_multi_mt"], "metrics": ["Spearman\u2019s rank correlation", "cosine similarity"]}
|
T-Systems-onsite/cross-de-fr-roberta-sentence-transformer
| null |
[
"transformers",
"pytorch",
"tf",
"safetensors",
"xlm-roberta",
"feature-extraction",
"sentence_embedding",
"search",
"roberta",
"xlm-r-distilroberta-base-paraphrase-v1",
"fr",
"de",
"multilingual",
"dataset:stsb_multi_mt",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"fr",
"de",
"multilingual"
] |
TAGS
#transformers #pytorch #tf #safetensors #xlm-roberta #feature-extraction #sentence_embedding #search #roberta #xlm-r-distilroberta-base-paraphrase-v1 #fr #de #multilingual #dataset-stsb_multi_mt #license-mit #endpoints_compatible #region-us
|
# Cross German & French RoBERTa for Sentence Embeddings
|
[
"# Cross German & French RoBERTa for Sentence Embeddings"
] |
[
"TAGS\n#transformers #pytorch #tf #safetensors #xlm-roberta #feature-extraction #sentence_embedding #search #roberta #xlm-r-distilroberta-base-paraphrase-v1 #fr #de #multilingual #dataset-stsb_multi_mt #license-mit #endpoints_compatible #region-us \n",
"# Cross German & French RoBERTa for Sentence Embeddings"
] |
feature-extraction
|
transformers
|
# Cross English & German RoBERTa for Sentence Embeddings
This model is intended to [compute sentence (text) embeddings](https://www.sbert.net/examples/applications/computing-embeddings/README.html) for English and German text. These embeddings can then be compared with [cosine-similarity](https://en.wikipedia.org/wiki/Cosine_similarity) to find sentences with a similar semantic meaning. For example this can be useful for [semantic textual similarity](https://www.sbert.net/docs/usage/semantic_textual_similarity.html), [semantic search](https://www.sbert.net/docs/usage/semantic_search.html), or [paraphrase mining](https://www.sbert.net/docs/usage/paraphrase_mining.html). To do this you have to use the [Sentence Transformers Python framework](https://github.com/UKPLab/sentence-transformers).
The speciality of this model is that it also works cross-lingually. Regardless of the language, the sentences are translated into very similar vectors according to their semantics. This means that you can, for example, enter a search in German and find results according to the semantics in German and also in English. Using an XLM model and _multilingual finetuning with language-crossing_, we reach performance that even exceeds the best current dedicated English large model (see the Evaluation section below).
> Sentence-BERT (SBERT) is a modification of the pretrained BERT network that use siamese and triplet network structures to derive semantically meaningful sentence embeddings that can be compared using cosine-similarity. This reduces the effort for finding the most similar pair from 65hours with BERT / RoBERTa to about 5 seconds with SBERT, while maintaining the accuracy from BERT.
Source: [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084)
This model was fine-tuned by [Philip May](https://may.la/) and open-sourced by [T-Systems-onsite](https://www.t-systems-onsite.de/). Special thanks to [Nils Reimers](https://www.nils-reimers.de/) for your awesome open-source work, the Sentence Transformers, the models and your help on GitHub.
## How to use
To use this model install the `sentence-transformers` package (see here: <https://github.com/UKPLab/sentence-transformers>).
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('T-Systems-onsite/cross-en-de-roberta-sentence-transformer')
```
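For illustration, here is a minimal sketch (the sentence pair is made up) that encodes an English and a German sentence and compares them with cosine similarity:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('T-Systems-onsite/cross-en-de-roberta-sentence-transformer')

# Made-up cross-lingual sentence pair (English / German).
sentences = ["This is an example sentence.", "Dies ist ein Beispielsatz."]
embeddings = model.encode(sentences, convert_to_tensor=True)

# Cosine similarity between the two sentence embeddings.
print(util.pytorch_cos_sim(embeddings[0], embeddings[1]))
```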
For details of usage and examples see here:
- [Computing Sentence Embeddings](https://www.sbert.net/docs/usage/computing_sentence_embeddings.html)
- [Semantic Textual Similarity](https://www.sbert.net/docs/usage/semantic_textual_similarity.html)
- [Paraphrase Mining](https://www.sbert.net/docs/usage/paraphrase_mining.html)
- [Semantic Search](https://www.sbert.net/docs/usage/semantic_search.html)
- [Cross-Encoders](https://www.sbert.net/docs/usage/cross-encoder.html)
- [Examples on GitHub](https://github.com/UKPLab/sentence-transformers/tree/master/examples)
## Training
The base model is [xlm-roberta-base](https://huggingface.co/xlm-roberta-base). This model has been further trained by [Nils Reimers](https://www.nils-reimers.de/) on a large-scale paraphrase dataset for 50+ languages. [Nils Reimers](https://www.nils-reimers.de/) wrote about this [on GitHub](https://github.com/UKPLab/sentence-transformers/issues/509#issuecomment-712243280):
>A paper is upcoming for the paraphrase models.
>
>These models were trained on various datasets with Millions of examples for paraphrases, mainly derived from Wikipedia edit logs, paraphrases mined from Wikipedia and SimpleWiki, paraphrases from news reports, AllNLI-entailment pairs with in-batch-negative loss etc.
>
>In internal tests, they perform much better than the NLI+STSb models as they have see more and broader type of training data. NLI+STSb has the issue that they are rather narrow in their domain and do not contain any domain specific words / sentences (like from chemistry, computer science, math etc.). The paraphrase models has seen plenty of sentences from various domains.
>
>More details with the setup, all the datasets, and a wider evaluation will follow soon.
The resulting model called `xlm-r-distilroberta-base-paraphrase-v1` has been released here: <https://github.com/UKPLab/sentence-transformers/releases/tag/v0.3.8>
Building on this cross-language model, we fine-tuned it for English and German on the [STSbenchmark](http://ixa2.si.ehu.es/stswiki/index.php/STSbenchmark) dataset. For German we used our [German STSbenchmark dataset](https://github.com/t-systems-on-site-services-gmbh/german-STSbenchmark), which has been translated with [deepl.com](https://www.deepl.com/translator). In addition to the German and English training samples, we generated samples of English and German crossed. We call this _multilingual finetuning with language-crossing_. It doubled the training data size, and tests show that it further improves performance.
We did an automatic hyperparameter search over 33 trials with [Optuna](https://github.com/optuna/optuna). Using 10-fold cross-validation on the deepl.com test and dev datasets, we found the following best hyperparameters:
- batch_size = 8
- num_epochs = 2
- lr = 1.026343323298136e-05,
- eps = 4.462251033010287e-06
- weight_decay = 0.04794438776350409
- warmup_steps_proportion = 0.1609010732760181
The final model was trained with these hyperparameters on the combination of the train and dev datasets from English, German and the crossings of them. The testset was left for testing.
# Evaluation
The evaluation has been done on English, German and both languages crossed with the STSbenchmark test data. The evaluation-code is available on [Colab](https://colab.research.google.com/drive/1gtGnKq_dYU_sDYqMohTYVMVpxMJjyH0M?usp=sharing). As the metric for evaluation we use the Spearman’s rank correlation between the cosine-similarity of the sentence embeddings and STSbenchmark labels.
| Model Name | Spearman<br/>German | Spearman<br/>English | Spearman<br/>EN-DE & DE-EN<br/>(cross) |
|---------------------------------------------------------------|-------------------|--------------------|------------------|
| xlm-r-distilroberta-base-paraphrase-v1 | 0.8079 | 0.8350 | 0.7983 |
| [xlm-r-100langs-bert-base-nli-stsb-mean-tokens](https://huggingface.co/sentence-transformers/xlm-r-100langs-bert-base-nli-stsb-mean-tokens) | 0.7877 | 0.8465 | 0.7908 |
| xlm-r-bert-base-nli-stsb-mean-tokens | 0.7877 | 0.8465 | 0.7908 |
| [roberta-large-nli-stsb-mean-tokens](https://huggingface.co/sentence-transformers/roberta-large-nli-stsb-mean-tokens) | 0.6371 | 0.8639 | 0.4109 |
| [T-Systems-onsite/<br/>german-roberta-sentence-transformer-v2](https://huggingface.co/T-Systems-onsite/german-roberta-sentence-transformer-v2) | 0.8529 | 0.8634 | 0.8415 |
| [paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2) | 0.8355 | **0.8682** | 0.8309 |
| **T-Systems-onsite/<br/>cross-en-de-roberta-sentence-transformer** | **0.8550** | 0.8660 | **0.8525** |
## License
Copyright (c) 2020 Philip May, T-Systems on site services GmbH
Licensed under the MIT License (the "License"); you may not use this work except in compliance with the License. You may obtain a copy of the License by reviewing the file [LICENSE](https://huggingface.co/T-Systems-onsite/cross-en-de-roberta-sentence-transformer/blob/main/LICENSE) in the repository.
|
{"language": ["de", "en", "multilingual"], "license": "mit", "tags": ["sentence_embedding", "search", "pytorch", "xlm-roberta", "roberta", "xlm-r-distilroberta-base-paraphrase-v1", "paraphrase"], "datasets": ["stsb_multi_mt"], "metrics": ["Spearman\u2019s rank correlation", "cosine similarity"]}
|
T-Systems-onsite/cross-en-de-roberta-sentence-transformer
| null |
[
"transformers",
"pytorch",
"tf",
"safetensors",
"xlm-roberta",
"feature-extraction",
"sentence_embedding",
"search",
"roberta",
"xlm-r-distilroberta-base-paraphrase-v1",
"paraphrase",
"de",
"en",
"multilingual",
"dataset:stsb_multi_mt",
"arxiv:1908.10084",
"license:mit",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1908.10084"
] |
[
"de",
"en",
"multilingual"
] |
TAGS
#transformers #pytorch #tf #safetensors #xlm-roberta #feature-extraction #sentence_embedding #search #roberta #xlm-r-distilroberta-base-paraphrase-v1 #paraphrase #de #en #multilingual #dataset-stsb_multi_mt #arxiv-1908.10084 #license-mit #endpoints_compatible #has_space #region-us
|
Cross English & German RoBERTa for Sentence Embeddings
======================================================
This model is intended to compute sentence (text) embeddings for English and German text. These embeddings can then be compared with cosine-similarity to find sentences with a similar semantic meaning. For example this can be useful for semantic textual similarity, semantic search, or paraphrase mining. To do this you have to use the Sentence Transformers Python framework.
The speciality of this model is that it also works cross-lingually. Regardless of the language, the sentences are translated into very similar vectors according to their semantics. This means that you can, for example, enter a search in German and find results according to the semantics in German and also in English. Using a xlm model and *multilingual finetuning with language-crossing* we reach performance that even exceeds the best current dedicated English large model (see Evaluation section below).
>
> Sentence-BERT (SBERT) is a modification of the pretrained BERT network that use siamese and triplet network structures to derive semantically meaningful sentence embeddings that can be compared using cosine-similarity. This reduces the effort for finding the most similar pair from 65hours with BERT / RoBERTa to about 5 seconds with SBERT, while maintaining the accuracy from BERT.
>
>
>
Source: Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks
This model is fine-tuned from Philip May and open-sourced by T-Systems-onsite. Special thanks to Nils Reimers for your awesome open-source work, the Sentence Transformers, the models and your help on GitHub.
How to use
----------
To use this model install the 'sentence-transformers' package (see here: URL).
For details of usage and examples see here:
* Computing Sentence Embeddings
* Semantic Textual Similarity
* Paraphrase Mining
* Semantic Search
* Cross-Encoders
* Examples on GitHub
Training
--------
The base model is xlm-roberta-base. This model has been further trained by Nils Reimers on a large scale paraphrase dataset for 50+ languages. Nils Reimers about this on GitHub:
>
> A paper is upcoming for the paraphrase models.
>
>
> These models were trained on various datasets with Millions of examples for paraphrases, mainly derived from Wikipedia edit logs, paraphrases mined from Wikipedia and SimpleWiki, paraphrases from news reports, AllNLI-entailment pairs with in-batch-negative loss etc.
>
>
> In internal tests, they perform much better than the NLI+STSb models as they have see more and broader type of training data. NLI+STSb has the issue that they are rather narrow in their domain and do not contain any domain specific words / sentences (like from chemistry, computer science, math etc.). The paraphrase models has seen plenty of sentences from various domains.
>
>
> More details with the setup, all the datasets, and a wider evaluation will follow soon.
>
>
>
The resulting model called 'xlm-r-distilroberta-base-paraphrase-v1' has been released here: URL
Building on this cross language model we fine-tuned it for English and German language on the STSbenchmark dataset. For German language we used the dataset of our German STSbenchmark dataset which has been translated with URL. Additionally to the German and English training samples we generated samples of English and German crossed. We call this *multilingual finetuning with language-crossing*. It doubled the training data size and tests show that it further improves performance.
We did an automatic hyperparameter search for 33 trials with Optuna. Using 10-fold crossvalidation on the URL test and dev dataset we found the following best hyperparameters:
* batch\_size = 8
* num\_epochs = 2
* lr = 1.026343323298136e-05,
* eps = 4.462251033010287e-06
* weight\_decay = 0.04794438776350409
* warmup\_steps\_proportion = 0.1609010732760181
The final model was trained with these hyperparameters on the combination of the train and dev datasets from English, German and the crossings of them. The testset was left for testing.
Evaluation
==========
The evaluation has been done on English, German and both languages crossed with the STSbenchmark test data. The evaluation-code is available on Colab. As the metric for evaluation we use the Spearman’s rank correlation between the cosine-similarity of the sentence embeddings and STSbenchmark labels.
License
-------
Copyright (c) 2020 Philip May, T-Systems on site services GmbH
Licensed under the MIT License (the "License"); you may not use this work except in compliance with the License. You may obtain a copy of the License by reviewing the file LICENSE in the repository.
|
[] |
[
"TAGS\n#transformers #pytorch #tf #safetensors #xlm-roberta #feature-extraction #sentence_embedding #search #roberta #xlm-r-distilroberta-base-paraphrase-v1 #paraphrase #de #en #multilingual #dataset-stsb_multi_mt #arxiv-1908.10084 #license-mit #endpoints_compatible #has_space #region-us \n"
] |
feature-extraction
|
transformers
|
# Cross English & French RoBERTa for Sentence Embeddings
|
{"language": ["fr", "en", "multilingual"], "license": "mit", "tags": ["sentence_embedding", "search", "pytorch", "xlm-roberta", "roberta", "xlm-r-distilroberta-base-paraphrase-v1"], "datasets": ["stsb_multi_mt"], "metrics": ["Spearman\u2019s rank correlation", "cosine similarity"]}
|
T-Systems-onsite/cross-en-fr-roberta-sentence-transformer
| null |
[
"transformers",
"pytorch",
"tf",
"safetensors",
"xlm-roberta",
"feature-extraction",
"sentence_embedding",
"search",
"roberta",
"xlm-r-distilroberta-base-paraphrase-v1",
"fr",
"en",
"multilingual",
"dataset:stsb_multi_mt",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"fr",
"en",
"multilingual"
] |
TAGS
#transformers #pytorch #tf #safetensors #xlm-roberta #feature-extraction #sentence_embedding #search #roberta #xlm-r-distilroberta-base-paraphrase-v1 #fr #en #multilingual #dataset-stsb_multi_mt #license-mit #endpoints_compatible #region-us
|
# Cross English & French RoBERTa for Sentence Embeddings
|
[
"# Cross English & French RoBERTa for Sentence Embeddings"
] |
[
"TAGS\n#transformers #pytorch #tf #safetensors #xlm-roberta #feature-extraction #sentence_embedding #search #roberta #xlm-r-distilroberta-base-paraphrase-v1 #fr #en #multilingual #dataset-stsb_multi_mt #license-mit #endpoints_compatible #region-us \n",
"# Cross English & French RoBERTa for Sentence Embeddings"
] |
feature-extraction
|
transformers
|
# German RoBERTa for Sentence Embeddings V2
**The new [T-Systems-onsite/cross-en-de-roberta-sentence-transformer](https://huggingface.co/T-Systems-onsite/cross-en-de-roberta-sentence-transformer) model is slightly better for German language. It is also the current best model for English language and works cross-lingually. Please consider using that model.**
|
{"language": "de", "license": "mit", "tags": ["sentence_embedding", "search", "pytorch", "xlm-roberta", "roberta", "xlm-r-distilroberta-base-paraphrase-v1", "paraphrase"], "datasets": ["STSbenchmark"], "metrics": ["Spearman\u2019s rank correlation", "cosine similarity"]}
|
T-Systems-onsite/german-roberta-sentence-transformer-v2
| null |
[
"transformers",
"pytorch",
"tf",
"safetensors",
"xlm-roberta",
"feature-extraction",
"sentence_embedding",
"search",
"roberta",
"xlm-r-distilroberta-base-paraphrase-v1",
"paraphrase",
"de",
"dataset:STSbenchmark",
"license:mit",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"de"
] |
TAGS
#transformers #pytorch #tf #safetensors #xlm-roberta #feature-extraction #sentence_embedding #search #roberta #xlm-r-distilroberta-base-paraphrase-v1 #paraphrase #de #dataset-STSbenchmark #license-mit #endpoints_compatible #has_space #region-us
|
# German RoBERTa for Sentence Embeddings V2
The new T-Systems-onsite/cross-en-de-roberta-sentence-transformer model is slightly better for German language. It is also the current best model for English language and works cross-lingually. Please consider using that model.
|
[
"# German RoBERTa for Sentence Embeddings V2\nThe new T-Systems-onsite/cross-en-de-roberta-sentence-transformer model is slightly better for German language. It is also the current best model for English language and works cross-lingually. Please consider using that model."
] |
[
"TAGS\n#transformers #pytorch #tf #safetensors #xlm-roberta #feature-extraction #sentence_embedding #search #roberta #xlm-r-distilroberta-base-paraphrase-v1 #paraphrase #de #dataset-STSbenchmark #license-mit #endpoints_compatible #has_space #region-us \n",
"# German RoBERTa for Sentence Embeddings V2\nThe new T-Systems-onsite/cross-en-de-roberta-sentence-transformer model is slightly better for German language. It is also the current best model for English language and works cross-lingually. Please consider using that model."
] |
summarization
|
transformers
|
# mT5-small-sum-de-en-v2
This is a bilingual summarization model for English and German. It is based on the multilingual T5 model [google/mt5-small](https://huggingface.co/google/mt5-small).
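As a usage illustration, here is a minimal sketch with the standard `transformers` seq2seq API; the input text is a made-up placeholder and the `summarize: ` prefix matches the one used during training (see below).
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "T-Systems-onsite/mt5-small-sum-de-en-v2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Made-up input text; German or English both work.
text = "summarize: Der Deutsche Wetterdienst warnte am Montag vor schweren Sturmböen im Norden des Landes ..."
input_ids = tokenizer(text, return_tensors="pt", truncation=True, max_length=800).input_ids

summary_ids = model.generate(input_ids, max_length=96, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```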
## Training
The training was conducted with the following hyperparameters:
- base model: [google/mt5-small](https://huggingface.co/google/mt5-small)
- source_prefix: `"summarize: "`
- batch size: 3
- max_source_length: 800
- max_target_length: 96
- warmup_ratio: 0.3
- number of train epochs: 10
- gradient accumulation steps: 2
- learning rate: 5e-5
## Datasets and Preprocessing
The datasets were preprocessed as follows:
The summary was tokenized with the [google/mt5-small](https://huggingface.co/google/mt5-small) tokenizer. Then only the records with no more than 94 summary tokens were selected.
The MLSUM dataset has a special characteristic. In the text, the summary is often included completely as one or more sentences. These have been removed from the texts. The reason is that we do not want to train a model that ultimately extracts only sentences as a summary.
This model is trained on the following datasets:
| Name | Language | License
|------|----------|--------
| [CNN Daily - Train](https://github.com/abisee/cnn-dailymail) | en | The license is unclear. The data comes from CNN and Daily Mail. We assume that it may only be used for research purposes and not commercially.
| [Extreme Summarization (XSum) - Train](https://github.com/EdinburghNLP/XSum) | en | The license is unclear. The data comes from BBC. We assume that it may only be used for research purposes and not commercially.
| [MLSUM German - Train](https://github.com/ThomasScialom/MLSUM) | de | Usage of dataset is restricted to non-commercial research purposes only. Copyright belongs to the original copyright holders (see [here](https://github.com/ThomasScialom/MLSUM#mlsum)).
| [SwissText 2019 - Train](https://www.swisstext.org/2019/shared-task/german-text-summarization-challenge.html) | de | The license is unclear. The data was published in the [German Text Summarization Challenge](https://www.swisstext.org/2019/shared-task/german-text-summarization-challenge.html). We assume that they may be used for research purposes and not commercially.
| Language | Size
|------|------
| German | 302,607
| English | 422,228
| Total | 724,835
## Evaluation on MLSUM German Test Set (no beams)
| Model | rouge1 | rouge2 | rougeL | rougeLsum
|-------|--------|--------|--------|----------
| [ml6team/mt5-small-german-finetune-mlsum](https://huggingface.co/ml6team/mt5-small-german-finetune-mlsum) | 18.3607 | 5.3604 | 14.5456 | 16.1946
| [deutsche-telekom/mT5-small-sum-de-en-01](https://huggingface.co/deutsche-telekom/mt5-small-sum-de-en-v1) | 21.7336 | 7.2614 | 17.1323 | 19.3977
| **T-Systems-onsite/mt5-small-sum-de-en-v2 (this)** | **21.7756** | **7.2662** | **17.1444** | **19.4242**
## Evaluation on CNN Daily English Test Set (no beams)
| Model | rouge1 | rouge2 | rougeL | rougeLsum
|-------|--------|--------|--------|----------
| [sshleifer/distilbart-xsum-12-6](https://huggingface.co/sshleifer/distilbart-xsum-12-6) | 26.7664 | 8.8243 | 18.3703 | 23.2614
| [facebook/bart-large-xsum](https://huggingface.co/facebook/bart-large-xsum) | 28.5374 | 9.8565 | 19.4829 | 24.7364
| [mrm8488/t5-base-finetuned-summarize-news](https://huggingface.co/mrm8488/t5-base-finetuned-summarize-news) | 37.576 | 14.7389 | 24.0254 | 34.4634
| [deutsche-telekom/mT5-small-sum-de-en-01](https://huggingface.co/deutsche-telekom/mt5-small-sum-de-en-v1) | 37.6339 | 16.5317 | 27.1418 | 34.9951
| **T-Systems-onsite/mt5-small-sum-de-en-v2 (this)** | **37.8096** | **16.6646** | **27.2239** | **35.1916**
## Evaluation on Extreme Summarization (XSum) English Test Set (no beams)
| Model | rouge1 | rouge2 | rougeL | rougeLsum
|-------|--------|--------|--------|----------
| [mrm8488/t5-base-finetuned-summarize-news](https://huggingface.co/mrm8488/t5-base-finetuned-summarize-news) | 18.6204 | 3.535 | 12.3997 | 15.2111
| [facebook/bart-large-xsum](https://huggingface.co/facebook/bart-large-xsum) | 28.5374 | 9.8565 | 19.4829 | 24.7364
| [deutsche-telekom/mT5-small-sum-de-en-01](https://huggingface.co/deutsche-telekom/mt5-small-sum-de-en-v1) | 32.3416 | 10.6191 | 25.3799 | 25.3908
| T-Systems-onsite/mt5-small-sum-de-en-v2 (this) | 32.4828 | 10.7004 | 25.5238 | 25.5369
| [sshleifer/distilbart-xsum-12-6](https://huggingface.co/sshleifer/distilbart-xsum-12-6) | 44.2553 ♣ | 21.4289 ♣ | 36.2639 ♣ | 36.2696 ♣
♣: These values seem to be unusually high. It could be that the test set was used in the training data.
## License
Copyright (c) 2021 Philip May, T-Systems on site services GmbH
This work is licensed under the [Attribution-NonCommercial-ShareAlike 3.0 Unported (CC BY-NC-SA 3.0)](https://creativecommons.org/licenses/by-nc-sa/3.0/) license.
|
{"language": ["de", "en", "multilingual"], "license": "cc-by-nc-sa-4.0", "tags": ["summarization"], "datasets": ["cnn_dailymail", "xsum", "mlsum", "swiss_text_2019"]}
|
T-Systems-onsite/mt5-small-sum-de-en-v2
| null |
[
"transformers",
"pytorch",
"safetensors",
"mt5",
"text2text-generation",
"summarization",
"de",
"en",
"multilingual",
"dataset:cnn_dailymail",
"dataset:xsum",
"dataset:mlsum",
"dataset:swiss_text_2019",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"de",
"en",
"multilingual"
] |
TAGS
#transformers #pytorch #safetensors #mt5 #text2text-generation #summarization #de #en #multilingual #dataset-cnn_dailymail #dataset-xsum #dataset-mlsum #dataset-swiss_text_2019 #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
mT5-small-sum-de-en-v2
======================
This is a bilingual summarization model for English and German. It is based on the multilingual T5 model google/mt5-small.
Training
--------
The training was conducted with the following hyperparameters:
* base model: google/mt5-small
* source\_prefix: '"summarize: "'
* batch size: 3
* max\_source\_length: 800
* max\_target\_length: 96
* warmup\_ratio: 0.3
* number of train epochs: 10
* gradient accumulation steps: 2
* learning rate: 5e-5
Datasets and Preprocessing
--------------------------
The datasets were preprocessed as follows:
The summary was tokenized with the google/mt5-small tokenizer. Then only the records with no more than 94 summary tokens were selected.
The MLSUM dataset has a special characteristic. In the text, the summary is often included completely as one or more sentences. These have been removed from the texts. The reason is that we do not want to train a model that ultimately extracts only sentences as a summary.
This model is trained on the following datasets:
Name: CNN Daily - Train, Language: en, License: The license is unclear. The data comes from CNN and Daily Mail. We assume that it may only be used for research purposes and not commercially.
Name: Extreme Summarization (XSum) - Train, Language: en, License: The license is unclear. The data comes from BBC. We assume that it may only be used for research purposes and not commercially.
Name: MLSUM German - Train, Language: de, License: Usage of dataset is restricted to non-commercial research purposes only. Copyright belongs to the original copyright holders (see here).
Name: SwissText 2019 - Train, Language: de, License: The license is unclear. The data was published in the German Text Summarization Challenge. We assume that they may be used for research purposes and not commercially.
Evaluation on MLSUM German Test Set (no beams)
----------------------------------------------
Evaluation on CNN Daily English Test Set (no beams)
---------------------------------------------------
Evaluation on Extreme Summarization (XSum) English Test Set (no beams)
----------------------------------------------------------------------
♣: These values seem to be unusually high. It could be that the test set was used in the training data.
License
-------
Copyright (c) 2021 Philip May, T-Systems on site services GmbH
This work is licensed under the Attribution-NonCommercial-ShareAlike 3.0 Unported (CC BY-NC-SA 3.0) license.
|
[] |
[
"TAGS\n#transformers #pytorch #safetensors #mt5 #text2text-generation #summarization #de #en #multilingual #dataset-cnn_dailymail #dataset-xsum #dataset-mlsum #dataset-swiss_text_2019 #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation
|
transformers
|
# mGPT
mGPT is pre-trained on the [mC4 dataset](https://huggingface.co/datasets/mc4) using a causal language modeling objective. It was introduced in this [paper](https://arxiv.org/abs/2110.06609) and first released on this page.
## Model description
mGPT is a Transformer-based model that was pre-trained on massive multilingual data covering over 101 languages. Similar to GPT-2, it was pre-trained on raw text only, with no human labeling. We use the same tokenization and vocabulary as the [mT5 model](https://huggingface.co/google/mt5-base).
## Intended uses
You can use the raw model for text generation, or use prompts to adapt it to a downstream task.
## How to use
You can use this model directly with a pipeline for text generation. Here is how to use this model to generate text from a prompt in PyTorch:
```python
from transformers import MT5Tokenizer, GPT2LMHeadModel, TextGenerationPipeline
tokenizer = MT5Tokenizer.from_pretrained("THUMT/mGPT")
model = GPT2LMHeadModel.from_pretrained("THUMT/mGPT")
pipeline = TextGenerationPipeline(model=model, tokenizer=tokenizer)
text = "Replace me by any text you'd like."
text = pipeline(text, do_sample=True, max_length=1024)[0]["generated_text"]
```
## Preprocessing
The texts are tokenized using `sentencepiece` and a vocabulary size of 250,100. The inputs are sequences of 1,024 consecutive tokens. We use `<extra_id_0>` to separate lines in a document.
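For illustration, a minimal sketch (the document lines are made up) of preparing a multi-line document with `<extra_id_0>` as the line separator before tokenization:
```python
from transformers import MT5Tokenizer

tokenizer = MT5Tokenizer.from_pretrained("THUMT/mGPT")

# Made-up document; lines are joined with <extra_id_0> as described above.
lines = ["First line of the document.", "Second line of the document."]
document = "<extra_id_0>".join(lines)

# Truncate to the 1,024-token context length used during pre-training.
input_ids = tokenizer(document, truncation=True, max_length=1024, return_tensors="pt").input_ids
print(input_ids.shape)
```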
## BibTeX entry and citation info
```bibtex
@misc{tan2021msp,
title={MSP: Multi-Stage Prompting for Making Pre-trained Language Models Better Translators},
author={Zhixing Tan and Xiangwen Zhang and Shuo Wang and Yang Liu},
year={2021},
eprint={2110.06609},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{}
|
THUMT/mGPT
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"arxiv:2110.06609",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2110.06609"
] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #arxiv-2110.06609 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# mGPT
mGPT is pre-trained on the mC4 dataset using a causal language modeling objective. It was introduced in this paper and first released on this page.
## Model description
mGPT is a Transformer-based model that was pre-trained on massive multilingual data covering over 101 languages. Similar to GPT-2, it was pre-trained on raw text only, with no human labeling. We use the same tokenization and vocabulary as the mT5 model.
## Intended uses
You can use the raw model for text generation or using prompts for adapting it to a downstream task.
## How to use
You can use this model directly with a pipeline for text generation. Here is how to use this model to get the features of a given text in PyTorch:
## Preprocessing
The texts are tokenized using 'sentencepiece' and a vocabulary size of 250,100. The inputs are sequences of 1,024 consecutive tokens. We use '<extra_id_0>' to separate lines in a document.
## BibTeX entry and citation info
|
[
"# mGPT\n\nmGPT is pre-trained on the mC4 dataset using a causal language modeling objective. It was introduced in this paper and first released on this page.",
"## Model description\n\nmGPT is a Transformer-based model which pre-trained on massive multilingual data covering over 101 languages. Similar to GPT-2, It was pre-trained on the raw texts only, with no human labeling. We use the same tokenization and vocabulary as the mT5 model.",
"## Intended uses\n\nYou can use the raw model for text generation or using prompts for adapting it to a downstream task.",
"## How to use\n\nYou can use this model directly with a pipeline for text generation. Here is how to use this model to get the features of a given text in PyTorch:",
"## Preprocessing\n\nThe texts are tokenized using 'sentencepiece' and a vocabulary size of 250,100. The inputs are sequences of 1,024 consecutive tokens. We use '<extra_id_0>' to separate lines in a document.",
"## BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #arxiv-2110.06609 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# mGPT\n\nmGPT is pre-trained on the mC4 dataset using a causal language modeling objective. It was introduced in this paper and first released on this page.",
"## Model description\n\nmGPT is a Transformer-based model which pre-trained on massive multilingual data covering over 101 languages. Similar to GPT-2, It was pre-trained on the raw texts only, with no human labeling. We use the same tokenization and vocabulary as the mT5 model.",
"## Intended uses\n\nYou can use the raw model for text generation or using prompts for adapting it to a downstream task.",
"## How to use\n\nYou can use this model directly with a pipeline for text generation. Here is how to use this model to get the features of a given text in PyTorch:",
"## Preprocessing\n\nThe texts are tokenized using 'sentencepiece' and a vocabulary size of 250,100. The inputs are sequences of 1,024 consecutive tokens. We use '<extra_id_0>' to separate lines in a document.",
"## BibTeX entry and citation info"
] |
fill-mask
|
transformers
|
# iSEEEK
A universal approach for integrating super large-scale single-cell transcriptomes by exploring gene rankings
## A simple pipeline for single-cell analysis
```python
import torch
import gzip
import re
from tqdm import tqdm
import numpy as np
import scanpy as sc
from torch.utils.data import DataLoader, Dataset
from transformers import PreTrainedTokenizerFast, BertForMaskedLM
class LineDataset(Dataset):
def __init__(self, lines):
self.lines = lines
self.regex = re.compile(r'\-|\.')
def __getitem__(self, i):
return self.regex.sub('_', self.lines[i])
def __len__(self):
return len(self.lines)
device = "cuda" if torch.cuda.is_available() else "cpu"
torch.set_num_threads(2)
tokenizer = PreTrainedTokenizerFast.from_pretrained("TJMUCH/transcriptome-iseeek")
model = BertForMaskedLM.from_pretrained("TJMUCH/transcriptome-iseeek").bert
model = model.to(device)
model.eval()
## Data deposited in https://huggingface.co/TJMUCH/transcriptome-iseeek/tree/main
lines = [s.strip().decode() for s in gzip.open("pbmc_ranking.txt.gz")]
labels = [s.strip().decode() for s in gzip.open("pbmc_label.txt.gz")]
labels = np.asarray(labels)
ds = LineDataset(lines)
dl = DataLoader(ds, batch_size=80)
features = []
for a in tqdm(dl, total=len(dl)):
batch = tokenizer(a, max_length=128, truncation=True,
padding=True, return_tensors="pt")
for k, v in batch.items():
batch[k] = v.to(device)
with torch.no_grad():
out = model(**batch)
f = out.last_hidden_state[:,0,:]
features.extend(f.tolist())
features = np.stack(features)
adata = sc.AnnData(features)
adata.obs['celltype'] = labels
adata.obs.celltype = adata.obs.celltype.astype("category")
sc.pp.neighbors(adata, use_rep='X')
sc.tl.umap(adata)
sc.tl.leiden(adata)
sc.pl.umap(adata, color=['celltype','leiden'],save= "UMAP")
```
## Extract token representations
```python
cell_counts = len(lines)
x = np.zeros((cell_counts, len(tokenizer)), dtype=np.float16)
counter = 0  # row index into x; one row per cell
for a in tqdm(dl, total=len(dl)):
batch = tokenizer(a, max_length=128, truncation=True,
padding=True, return_tensors="pt")
for k, v in batch.items():
batch[k] = v.to(device)
with torch.no_grad():
out = model(**batch)
eos_idxs = batch.attention_mask.sum(dim=1) - 1
f = out.last_hidden_state
batch_size = f.shape[0]
input_ids = batch.input_ids
for i in range(batch_size):
##genes = tokenizer.batch_decode(input_ids[i])
token_norms = [f[i][j].norm().item() for j in range(1, eos_idxs[i])]
idxs = input_ids[i].tolist()[1:eos_idxs[i]]
x[counter, idxs] = token_norms
counter = counter + 1
```
|
{}
|
TJMUCH/transcriptome-iseeek
| null |
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us
|
# iSEEEK
A universal approach for integrating super large-scale single-cell transcriptomes by exploring gene rankings
## A simple pipeline for single-cell analysis
## Extract token representations
|
[
"# iSEEEK\nA universal approach for integrating super large-scale single-cell transcriptomes by exploring gene rankings",
"## An simple pipeline for single-cell analysis",
"## Extract token representations"
] |
[
"TAGS\n#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n",
"# iSEEEK\nA universal approach for integrating super large-scale single-cell transcriptomes by exploring gene rankings",
"## An simple pipeline for single-cell analysis",
"## Extract token representations"
] |
null | null |
# MASC
The final output model is: `model.pb`
The language model can be found at: https://huggingface.co/TRoboto/masc_kenlm_3grams_lm
To run the model, clone this repo and the language model repo, then follow the instructions here: https://deepspeech.readthedocs.io/en/master/USING.html
To use the checkpoint to retrain the model, clone this repo and follow the instructions here: https://deepspeech.readthedocs.io/en/r0.9/TRAINING.html
|
{}
|
TRoboto/masc_deepspeech_asr_model_v0
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#region-us
|
# MASC
The final output model is: 'URL'
The language model can be found at: URL
To run the model, clone this repo and the language model repo, then follow the instructions here: URL
To use the checkpoint to retrain the model, clone this repo and follow the instructions here: URL
|
[
"# MASC\nThe final output model is: 'URL'\n\nThe language model can be found at: URL\n\nTo run the model, clone this repo and the language model repo, then follow the instructions here: URL\n\nTo use the checkpoint to retrain the model, clone this repo and follow the instructions here: URL"
] |
[
"TAGS\n#region-us \n",
"# MASC\nThe final output model is: 'URL'\n\nThe language model can be found at: URL\n\nTo run the model, clone this repo and the language model repo, then follow the instructions here: URL\n\nTo use the checkpoint to retrain the model, clone this repo and follow the instructions here: URL"
] |
null | null |
# MASC
The scorer model can be found under files with the name of `masc.scorer`
More info on how the scorer was produced: https://deepspeech.readthedocs.io/en/master/Scorer.html
|
{}
|
TRoboto/masc_kenlm_3grams_lm
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#region-us
|
# MASC
The scorer model can be found under files with the name of 'URL'
More info on how the scorer was produced: URL
|
[
"# MASC\nThe scorer model can be found under files with the name of 'URL'\n\nMore info on how the scorer was produced: URL"
] |
[
"TAGS\n#region-us \n",
"# MASC\nThe scorer model can be found under files with the name of 'URL'\n\nMore info on how the scorer was produced: URL"
] |
text-generation
|
transformers
|
# Trump Tweets DialoGPT Model
|
{"tags": ["conversational"]}
|
TTYU/DialoGPT-small-trump
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Trump Tweets DialoGPT Model
|
[
"# Trump Tweets DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Trump Tweets DialoGPT Model"
] |
text-generation
|
transformers
|
# Iroh DialoGPT Model
|
{"tags": ["conversational"]}
|
TVLG/DialoGPT-small-Iroh-Bot
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Iroh DialoGPT Model
|
[
"# Iroh DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Iroh DialoGPT Model"
] |
null | null |
hello
hello
hello
hello
|
{}
|
TaahaKazi/bert-joke-identifier
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#region-us
|
hello
hello
hello
hello
|
[] |
[
"TAGS\n#region-us \n"
] |
null | null |
hello
hello
|
{}
|
TaahaKazi/joke-identifier-1
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#region-us
|
hello
hello
|
[] |
[
"TAGS\n#region-us \n"
] |
null | null |
hello
|
{}
|
TaahaKazi/joke-identifier-2
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#region-us
|
hello
|
[] |
[
"TAGS\n#region-us \n"
] |
null | null |
hello
|
{}
|
TaahaKazi/joke-identifier-3
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#region-us
|
hello
|
[] |
[
"TAGS\n#region-us \n"
] |
null | null |
hello
|
{}
|
TaahaKazi/joke-identifier-bert
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#region-us
|
hello
|
[] |
[
"TAGS\n#region-us \n"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# neg_komrc_train
This model is a fine-tuned version of [beomi/kcbert-base](https://huggingface.co/beomi/kcbert-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4016
## Model description
More information needed
## Intended uses & limitations
More information needed
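As a usage illustration, here is a minimal sketch of extractive question answering with this checkpoint using the raw start/end logits; the Korean question and context are placeholders.
```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

model_name = "Taekyoon/neg_komrc_train"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)

# Placeholder question / context pair.
question = "질문을 입력하세요."
context = "문맥(지문)을 여기에 입력하세요."
inputs = tokenizer(question, context, return_tensors="pt", truncation=True)

with torch.no_grad():
    outputs = model(**inputs)

# Pick the most likely answer span from the start/end logits.
start = outputs.start_logits.argmax(dim=-1).item()
end = outputs.end_logits.argmax(dim=-1).item()
answer = tokenizer.decode(inputs["input_ids"][0][start : end + 1])
print(answer)
```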
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 1234
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.277 | 0.51 | 10000 | 0.4016 |
| 0.1671 | 1.03 | 20000 | 0.4116 |
| 0.1725 | 1.54 | 30000 | 0.4390 |
| 0.0868 | 2.06 | 40000 | 0.5147 |
| 0.0868 | 2.57 | 50000 | 0.5064 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.10.3
|
{"tags": ["generated_from_trainer"], "model-index": [{"name": "neg_komrc_train", "results": []}]}
|
Taekyoon/neg_komrc_train
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #question-answering #generated_from_trainer #endpoints_compatible #region-us
|
neg\_komrc\_train
=================
This model is a fine-tuned version of beomi/kcbert-base on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4016
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 1234
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.13.0
* Pytorch 1.10.0+cu111
* Datasets 1.18.4
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 1234\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.4\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #question-answering #generated_from_trainer #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 1234\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.4\n* Tokenizers 0.10.3"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-pos
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3009
- Precision: 0.9277
- Recall: 0.9329
- F1: 0.9303
- Accuracy: 0.9332
## Model description
More information needed
## Intended uses & limitations
More information needed
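A minimal usage sketch with the transformers `pipeline` API (standard token-classification inference; the example sentence is arbitrary):

```python
from transformers import pipeline

# Sketch only: load the fine-tuned checkpoint for token classification.
tagger = pipeline(
    "token-classification",
    model="Tahsin/BERT-finetuned-conll2003-POS",
    aggregation_strategy="simple",  # merge sub-word pieces into whole words
)

print(tagger("My name is Wolfgang and I live in Berlin."))
```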
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2791 | 1.0 | 1756 | 0.3125 | 0.9212 | 0.9263 | 0.9237 | 0.9272 |
| 0.1853 | 2.0 | 3512 | 0.3038 | 0.9241 | 0.9309 | 0.9275 | 0.9307 |
| 0.1501 | 3.0 | 5268 | 0.3009 | 0.9277 | 0.9329 | 0.9303 | 0.9332 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "bert-finetuned-pos", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metrics": [{"type": "precision", "value": 0.9276736387541917, "name": "Precision"}, {"type": "recall", "value": 0.9329402916272412, "name": "Recall"}, {"type": "f1", "value": 0.9302995112982049, "name": "F1"}, {"type": "accuracy", "value": 0.933154765408842, "name": "Accuracy"}]}]}]}
|
Tahsin/BERT-finetuned-conll2003-POS
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
bert-finetuned-pos
==================
This model is a fine-tuned version of bert-base-cased on the conll2003 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3009
* Precision: 0.9277
* Recall: 0.9329
* F1: 0.9303
* Accuracy: 0.9332
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.0+cu111
* Datasets 1.17.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1561
- Accuracy: 0.9285
## Model description
More information needed
## Intended uses & limitations
More information needed
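A minimal usage sketch with the transformers `pipeline` API (standard text-classification inference; the example sentence is arbitrary):

```python
from transformers import pipeline

# Sketch only: load the fine-tuned checkpoint for emotion classification.
classifier = pipeline(
    "text-classification",
    model="Tahsin/distilbert-base-uncased-finetuned-emotion",
)

print(classifier("I am so happy to see you again!"))
```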
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 250 | 0.1635 | 0.9295 |
| 0.111 | 2.0 | 500 | 0.1515 | 0.936 |
| 0.111 | 3.0 | 750 | 0.1561 | 0.9285 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["emotion"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.9285, "name": "Accuracy"}]}]}]}
|
Tahsin/distilbert-base-uncased-finetuned-emotion
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #dataset-emotion #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased-finetuned-emotion
=========================================
This model is a fine-tuned version of bert-base-cased on the emotion dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1561
* Accuracy: 0.9285
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 64
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.0+cu111
* Datasets 1.17.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #dataset-emotion #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the OPENSLR_SLR53 - bengali dataset.
It achieves the following results on the evaluation set.
Without language model :
- Wer: 0.3110
- Cer : 0.072
With 5 gram language model trained on [indic-text](https://huggingface.co/datasets/Harveenchadha/indic-text/tree/main) dataset :
- Wer: 0.17776
- Cer : 0.04394
Note: 10% of the total 218,703 samples (21,871 examples) were used for evaluation. Training was stopped after 30k steps. Output predictions are available under the files section.
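A minimal inference sketch with the transformers `automatic-speech-recognition` pipeline, without the 5-gram language model (the audio path is a placeholder and should point to 16 kHz mono speech):

```python
from transformers import pipeline

# Sketch only: greedy CTC decoding, no external language model.
asr = pipeline(
    "automatic-speech-recognition",
    model="Tahsin-Mayeesha/wav2vec2-bn-300m",
)

print(asr("bengali_sample_16khz.wav"))  # placeholder path to a 16 kHz mono audio file
```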
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
Note: Training and evaluation scripts were modified from https://huggingface.co/chmanoj/xls-r-300m-te and https://github.com/huggingface/transformers/tree/master/examples/research_projects/robust-speech-event.
Bengali speech data was not available in the Common Voice or LibriSpeech multilingual datasets, so OpenSLR53 has been used.
Note 2: A minimum audio duration of 0.1 s was used to filter the training data, which excluded roughly 10-20 samples.
# Citation
@misc {tahsin_mayeesha_2023,
author = { {Tahsin Mayeesha} },
title = { wav2vec2-bn-300m (Revision e10defc) },
year = 2023,
url = { https://huggingface.co/Tahsin-Mayeesha/wav2vec2-bn-300m },
doi = { 10.57967/hf/0939 },
publisher = { Hugging Face }
}
|
{"language": ["bn"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "hf-asr-leaderboard", "openslr_SLR53", "robust-speech-event"], "datasets": ["openslr", "SLR53", "Harveenchadha/indic-text"], "metrics": ["wer", "cer"], "model-index": [{"name": "Tahsin-Mayeesha/wav2vec2-bn-300m", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Open SLR", "type": "openslr", "args": "SLR66"}, "metrics": [{"type": "wer", "value": 0.31104373941386626, "name": "Test WER"}, {"type": "cer", "value": 0.07263099973420006, "name": "Test CER"}, {"type": "wer", "value": 0.17776164652632478, "name": "Test WER with lm"}, {"type": "cer", "value": 0.04394092712884769, "name": "Test CER with lm"}]}]}]}
|
Tahsin-Mayeesha/wav2vec2-bn-300m
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"openslr_SLR53",
"robust-speech-event",
"bn",
"dataset:openslr",
"dataset:SLR53",
"dataset:Harveenchadha/indic-text",
"doi:10.57967/hf/0939",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"bn"
] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #hf-asr-leaderboard #openslr_SLR53 #robust-speech-event #bn #dataset-openslr #dataset-SLR53 #dataset-Harveenchadha/indic-text #doi-10.57967/hf/0939 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
|
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the OPENSLR_SLR53 - bengali dataset.
It achieves the following results on the evaluation set.
Without language model :
- Wer: 0.3110
- Cer : 0.072
With 5 gram language model trained on indic-text dataset :
- Wer: 0.17776
- Cer : 0.04394
Note: 10% of the total 218,703 samples (21,871 examples) were used for evaluation. Training was stopped after 30k steps. Output predictions are available under the files section.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
Note: Training and evaluation scripts were modified from URL and URL
Bengali speech data was not available in the Common Voice or LibriSpeech multilingual datasets, so OpenSLR53 has been used.
Note 2: A minimum audio duration of 0.1 s was used to filter the training data, which excluded roughly 10-20 samples.
@misc {tahsin_mayeesha_2023,
author = { {Tahsin Mayeesha} },
title = { wav2vec2-bn-300m (Revision e10defc) },
year = 2023,
url = { URL },
doi = { 10.57967/hf/0939 },
publisher = { Hugging Face }
}
|
[
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 7.5e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- gradient_accumulation_steps: 4\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 2000\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.1+cu102\n- Datasets 1.17.1.dev0\n- Tokenizers 0.11.0\n\nNote : Training and evaluation script modified from URL and URL \nBengali speech data was not available from common voice or librispeech multilingual datasets, so OpenSLR53 has been used.\n\nNote 2 : Minimum audio duration of 0.1s has been used to filter the training data which excluded may be 10-20 samples. \n\n@misc {tahsin_mayeesha_2023,\n\tauthor = { {Tahsin Mayeesha} },\n\ttitle = { wav2vec2-bn-300m (Revision e10defc) },\n\tyear = 2023,\n\turl = { URL },\n\tdoi = { 10.57967/hf/0939 },\n\tpublisher = { Hugging Face }\n}"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #hf-asr-leaderboard #openslr_SLR53 #robust-speech-event #bn #dataset-openslr #dataset-SLR53 #dataset-Harveenchadha/indic-text #doi-10.57967/hf/0939 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 7.5e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- gradient_accumulation_steps: 4\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 2000\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.1+cu102\n- Datasets 1.17.1.dev0\n- Tokenizers 0.11.0\n\nNote : Training and evaluation script modified from URL and URL \nBengali speech data was not available from common voice or librispeech multilingual datasets, so OpenSLR53 has been used.\n\nNote 2 : Minimum audio duration of 0.1s has been used to filter the training data which excluded may be 10-20 samples. \n\n@misc {tahsin_mayeesha_2023,\n\tauthor = { {Tahsin Mayeesha} },\n\ttitle = { wav2vec2-bn-300m (Revision e10defc) },\n\tyear = 2023,\n\turl = { URL },\n\tdoi = { 10.57967/hf/0939 },\n\tpublisher = { Hugging Face }\n}"
] |
automatic-speech-recognition
|
espnet
|
# Estonian Espnet2 ASR model
## Model description
This is a general-purpose Estonian ASR model trained in the Lab of Language Technology at TalTech.
## Intended uses & limitations
This model is intended for general-purpose speech recognition, such as broadcast conversations, interviews, talks, etc.
## How to use
```python
from espnet2.bin.asr_inference import Speech2Text
model = Speech2Text.from_pretrained(
"TalTechNLP/espnet2_estonian",
lm_weight=0.6, ctc_weight=0.4, beam_size=60
)
# read a sound file with 16k sample rate
import soundfile
speech, rate = soundfile.read("speech.wav")
assert rate == 16000
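# model(speech) returns a list of n-best results; each entry is a (text, tokens, token_ids, hypothesis) tuple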
text, *_ = model(speech)
print(text[0])
```
#### Limitations and bias
Since this model was trained on mostly broadcast speech and texts from the web, it might have problems correctly decoding the following:
* Speech containing technical and other domain-specific terms
* Children's speech
* Non-native speech
* Speech recorded under very noisy conditions or with a microphone far from the speaker
* Very spontaneous and overlapping speech
## Training data
Acoustic training data:
| Type | Amount (h) |
|-----------------------|:------:|
| Broadcast speech | 591 |
| Spontaneous speech | 53 |
| Elderly speech corpus | 53 |
| Talks, lectures | 49 |
| Parliament speeches | 31 |
| *Total* | *761* |
Language model training data:
* Estonian National Corpus 2019
* OpenSubtitles
* Speech transcripts
## Training procedure
Standard ESPnet2 Conformer recipe.
## Evaluation results
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_large_valid.loss.ave_5best_asr_model_valid.acc.ave/aktuaalne2021.testset|2864|56575|93.1|4.5|2.4|2.0|8.9|63.4|
|decode_asr_lm_lm_large_valid.loss.ave_5best_asr_model_valid.acc.ave/jutusaated.devset|273|4677|93.9|3.6|2.4|1.2|7.3|46.5|
|decode_asr_lm_lm_large_valid.loss.ave_5best_asr_model_valid.acc.ave/jutusaated.testset|818|11093|94.7|2.7|2.5|0.9|6.2|45.0|
|decode_asr_lm_lm_large_valid.loss.ave_5best_asr_model_valid.acc.ave/www-trans.devset|1207|13865|82.3|8.5|9.3|3.4|21.2|74.1|
|decode_asr_lm_lm_large_valid.loss.ave_5best_asr_model_valid.acc.ave/www-trans.testset|1648|22707|86.4|7.6|6.0|2.5|16.1|75.7|
### BibTeX entry and citation info
#### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
|
{"language": "et", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"]}
|
TalTechNLP/espnet2_estonian
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"et",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"et"
] |
TAGS
#espnet #audio #automatic-speech-recognition #et #license-cc-by-4.0 #region-us
|
Estonian Espnet2 ASR model
==========================
Model description
-----------------
This is a general-purpose Estonian ASR model trained in the Lab of Language Technology at TalTech.
Intended uses & limitations
---------------------------
This model is intended for general-purpose speech recognition, such as broadcast conversations, interviews, talks, etc.
How to use
----------
#### Limitations and bias
Since this model was trained on mostly broadcast speech and texts from the web, it might have problems correctly decoding the following:
* Speech containing technical and other domain-specific terms
* Children's speech
* Non-native speech
* Speech recorded under very noisy conditions or with a microphone far from the speaker
* Very spontaneous and overlapping speech
Training data
-------------
Acoustic training data:
Language model training data:
* Estonian National Corpus 2019
* OpenSubtitles
* Speech transcripts
Training procedure
------------------
Standard ESPnet2 Conformer recipe.
Evaluation results
------------------
### WER
### BibTeX entry and citation info
#### Citing ESPnet
|
[
"#### Limitations and bias\n\n\nSince this model was trained on mostly broadcast speech and texts from the web, it might have problems correctly decoding the following:\n\n\n* Speech containing technical and other domain-specific terms\n* Children's speech\n* Non-native speech\n* Speech recorded under very noisy conditions or with a microphone far from the speaker\n* Very spontaneous and overlapping speech\n\n\nTraining data\n-------------\n\n\nAcoustic training data:\n\n\n\nLanguage model training data:\n\n\n* Estonian National Corpus 2019\n* OpenSubtitles\n* Speech transcripts\n\n\nTraining procedure\n------------------\n\n\nStandard EspNet2 Conformer recipe.\n\n\nEvaluation results\n------------------",
"### WER",
"### BibTeX entry and citation info",
"#### Citing ESPnet"
] |
[
"TAGS\n#espnet #audio #automatic-speech-recognition #et #license-cc-by-4.0 #region-us \n",
"#### Limitations and bias\n\n\nSince this model was trained on mostly broadcast speech and texts from the web, it might have problems correctly decoding the following:\n\n\n* Speech containing technical and other domain-specific terms\n* Children's speech\n* Non-native speech\n* Speech recorded under very noisy conditions or with a microphone far from the speaker\n* Very spontaneous and overlapping speech\n\n\nTraining data\n-------------\n\n\nAcoustic training data:\n\n\n\nLanguage model training data:\n\n\n* Estonian National Corpus 2019\n* OpenSubtitles\n* Speech transcripts\n\n\nTraining procedure\n------------------\n\n\nStandard EspNet2 Conformer recipe.\n\n\nEvaluation results\n------------------",
"### WER",
"### BibTeX entry and citation info",
"#### Citing ESPnet"
] |
audio-classification
|
speechbrain
|
# VoxLingua107 ECAPA-TDNN Spoken Language Identification Model (CE)
## Model description
This is a spoken language recognition model trained on the VoxLingua107 dataset using SpeechBrain.
The model uses the ECAPA-TDNN architecture that has previously been used for speaker recognition. However, it uses
more fully connected hidden layers after the embedding layer, and cross-entropy loss was used for training.
We observed that this improved the performance of extracted utterance embeddings for downstream tasks.
The model can classify a speech utterance according to the language spoken.
It covers 107 different languages (
Abkhazian,
Afrikaans,
Amharic,
Arabic,
Assamese,
Azerbaijani,
Bashkir,
Belarusian,
Bulgarian,
Bengali,
Tibetan,
Breton,
Bosnian,
Catalan,
Cebuano,
Czech,
Welsh,
Danish,
German,
Greek,
English,
Esperanto,
Spanish,
Estonian,
Basque,
Persian,
Finnish,
Faroese,
French,
Galician,
Guarani,
Gujarati,
Manx,
Hausa,
Hawaiian,
Hindi,
Croatian,
Haitian,
Hungarian,
Armenian,
Interlingua,
Indonesian,
Icelandic,
Italian,
Hebrew,
Japanese,
Javanese,
Georgian,
Kazakh,
Central Khmer,
Kannada,
Korean,
Latin,
Luxembourgish,
Lingala,
Lao,
Lithuanian,
Latvian,
Malagasy,
Maori,
Macedonian,
Malayalam,
Mongolian,
Marathi,
Malay,
Maltese,
Burmese,
Nepali,
Dutch,
Norwegian Nynorsk,
Norwegian,
Occitan,
Panjabi,
Polish,
Pushto,
Portuguese,
Romanian,
Russian,
Sanskrit,
Scots,
Sindhi,
Sinhala,
Slovak,
Slovenian,
Shona,
Somali,
Albanian,
Serbian,
Sundanese,
Swedish,
Swahili,
Tamil,
Telugu,
Tajik,
Thai,
Turkmen,
Tagalog,
Turkish,
Tatar,
Ukrainian,
Urdu,
Uzbek,
Vietnamese,
Waray,
Yiddish,
Yoruba,
Mandarin Chinese).
## Intended uses & limitations
The model has two uses:
- use 'as is' for spoken language recognition
- use as an utterance-level feature (embedding) extractor, for creating a dedicated language ID model on your own data
The model is trained on automatically collected YouTube data. For more
information about the dataset, see [here](http://bark.phon.ioc.ee/voxlingua107/).
#### How to use
```python
import torchaudio
from speechbrain.pretrained import EncoderClassifier
language_id = EncoderClassifier.from_hparams(source="TalTechNLP/voxlingua107-epaca-tdnn-ce", savedir="tmp")
# Download a Thai language sample from Omniglot and convert it to a suitable form
signal = language_id.load_audio("https://omniglot.com/soundfiles/udhr/udhr_th.mp3")
prediction = language_id.classify_batch(signal)
print(prediction)
(tensor([[-2.8646e+01, -3.0346e+01, -2.0748e+01, -2.9562e+01, -2.2187e+01,
-3.2668e+01, -3.6677e+01, -3.3573e+01, -3.2545e+01, -2.4365e+01,
-2.4688e+01, -3.1171e+01, -2.7743e+01, -2.9918e+01, -2.4770e+01,
-3.2250e+01, -2.4727e+01, -2.6087e+01, -2.1870e+01, -3.2821e+01,
-2.2128e+01, -2.2822e+01, -3.0888e+01, -3.3564e+01, -2.9906e+01,
-2.2392e+01, -2.5573e+01, -2.6443e+01, -3.2429e+01, -3.2652e+01,
-3.0030e+01, -2.4607e+01, -2.2967e+01, -2.4396e+01, -2.8578e+01,
-2.5153e+01, -2.8475e+01, -2.6409e+01, -2.5230e+01, -2.7957e+01,
-2.6298e+01, -2.3609e+01, -2.5863e+01, -2.8225e+01, -2.7225e+01,
-3.0486e+01, -2.1185e+01, -2.7938e+01, -3.3155e+01, -1.9076e+01,
-2.9181e+01, -2.2160e+01, -1.8352e+01, -2.5866e+01, -3.3636e+01,
-4.2016e+00, -3.1581e+01, -3.1894e+01, -2.7834e+01, -2.5429e+01,
-3.2235e+01, -3.2280e+01, -2.8786e+01, -2.3366e+01, -2.6047e+01,
-2.2075e+01, -2.3770e+01, -2.2518e+01, -2.8101e+01, -2.5745e+01,
-2.6441e+01, -2.9822e+01, -2.7109e+01, -3.0225e+01, -2.4566e+01,
-2.9268e+01, -2.7651e+01, -3.4221e+01, -2.9026e+01, -2.6009e+01,
-3.1968e+01, -3.1747e+01, -2.8156e+01, -2.9025e+01, -2.7756e+01,
-2.8052e+01, -2.9341e+01, -2.8806e+01, -2.1636e+01, -2.3992e+01,
-2.3794e+01, -3.3743e+01, -2.8332e+01, -2.7465e+01, -1.5085e-02,
-2.9094e+01, -2.1444e+01, -2.9780e+01, -3.6046e+01, -3.7401e+01,
-3.0888e+01, -3.3172e+01, -1.8931e+01, -2.2679e+01, -3.0225e+01,
-2.4995e+01, -2.1028e+01]]), tensor([-0.0151]), tensor([94]), ['th'])
# The scores in the prediction[0] tensor can be interpreted as log-likelihoods that
# the given utterance belongs to the given language (i.e., the larger the better)
# The linear-scale likelihood can be retrieved using the following:
print(prediction[1].exp())
tensor([0.9850])
# The identified language ISO code is given in prediction[3]
print(prediction[3])
['th']
# Alternatively, use the utterance embedding extractor:
emb = language_id.encode_batch(signal)
print(emb.shape)
torch.Size([1, 1, 256])
```
#### Limitations and bias
Since the model is trained on VoxLingua107, it has many limitations and biases, some of which are:
 - Probably its accuracy on smaller languages is quite limited
- Probably it works worse on female speech than male speech (because YouTube data includes much more male speech)
- Based on subjective experiments, it doesn't work well on speech with a foreign accent
- Probably it doesn't work well on children's speech and on persons with speech disorders
## Training data
The model is trained on [VoxLingua107](http://bark.phon.ioc.ee/voxlingua107/).
VoxLingua107 is a speech dataset for training spoken language identification models.
The dataset consists of short speech segments automatically extracted from YouTube videos and labeled according to the language of the video title and description, with some post-processing steps to filter out false positives.
VoxLingua107 contains data for 107 languages. The total amount of speech in the training set is 6628 hours.
The average amount of data per language is 62 hours. However, the real amount per language varies a lot. There is also a separate development set containing 1609 speech segments from 33 languages, validated by at least two volunteers to really contain the given language.
## Training procedure
We used [SpeechBrain](https://github.com/speechbrain/speechbrain) to train the model.
Training recipe will be published soon.
## Evaluation results
Error rate: 6.7% on the VoxLingua107 development dataset
### BibTeX entry and citation info
```bibtex
@inproceedings{valk2021slt,
title={{VoxLingua107}: a Dataset for Spoken Language Recognition},
author={J{\"o}rgen Valk and Tanel Alum{\"a}e},
booktitle={Proc. IEEE SLT Workshop},
year={2021},
}
```
|
{"language": "multilingual", "license": "apache-2.0", "tags": ["audio-classification", "speechbrain", "embeddings", "Language", "Identification", "pytorch", "ECAPA-TDNN", "TDNN", "VoxLingua107"], "datasets": ["VoxLingua107"], "metrics": ["Accuracy"], "widget": [{"example_title": "English Sample", "src": "https://cdn-media.huggingface.co/speech_samples/LibriSpeech_61-70968-0000.flac"}]}
|
TalTechNLP/voxlingua107-epaca-tdnn-ce
| null |
[
"speechbrain",
"audio-classification",
"embeddings",
"Language",
"Identification",
"pytorch",
"ECAPA-TDNN",
"TDNN",
"VoxLingua107",
"multilingual",
"dataset:VoxLingua107",
"license:apache-2.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"multilingual"
] |
TAGS
#speechbrain #audio-classification #embeddings #Language #Identification #pytorch #ECAPA-TDNN #TDNN #VoxLingua107 #multilingual #dataset-VoxLingua107 #license-apache-2.0 #region-us
|
# VoxLingua107 ECAPA-TDNN Spoken Language Identification Model (CE)
## Model description
This is a spoken language recognition model trained on the VoxLingua107 dataset using SpeechBrain.
The model uses the ECAPA-TDNN architecture that has previously been used for speaker recognition. However, it uses
more fully connected hidden layers after the embedding layer, and cross-entropy loss was used for training.
We observed that this improved the performance of extracted utterance embeddings for downstream tasks.
The model can classify a speech utterance according to the language spoken.
It covers 107 different languages (
Abkhazian,
Afrikaans,
Amharic,
Arabic,
Assamese,
Azerbaijani,
Bashkir,
Belarusian,
Bulgarian,
Bengali,
Tibetan,
Breton,
Bosnian,
Catalan,
Cebuano,
Czech,
Welsh,
Danish,
German,
Greek,
English,
Esperanto,
Spanish,
Estonian,
Basque,
Persian,
Finnish,
Faroese,
French,
Galician,
Guarani,
Gujarati,
Manx,
Hausa,
Hawaiian,
Hindi,
Croatian,
Haitian,
Hungarian,
Armenian,
Interlingua,
Indonesian,
Icelandic,
Italian,
Hebrew,
Japanese,
Javanese,
Georgian,
Kazakh,
Central Khmer,
Kannada,
Korean,
Latin,
Luxembourgish,
Lingala,
Lao,
Lithuanian,
Latvian,
Malagasy,
Maori,
Macedonian,
Malayalam,
Mongolian,
Marathi,
Malay,
Maltese,
Burmese,
Nepali,
Dutch,
Norwegian Nynorsk,
Norwegian,
Occitan,
Panjabi,
Polish,
Pushto,
Portuguese,
Romanian,
Russian,
Sanskrit,
Scots,
Sindhi,
Sinhala,
Slovak,
Slovenian,
Shona,
Somali,
Albanian,
Serbian,
Sundanese,
Swedish,
Swahili,
Tamil,
Telugu,
Tajik,
Thai,
Turkmen,
Tagalog,
Turkish,
Tatar,
Ukrainian,
Urdu,
Uzbek,
Vietnamese,
Waray,
Yiddish,
Yoruba,
Mandarin Chinese).
## Intended uses & limitations
The model has two uses:
- use 'as is' for spoken language recognition
- use as an utterance-level feature (embedding) extractor, for creating a dedicated language ID model on your own data
The model is trained on automatically collected YouTube data. For more
information about the dataset, see here.
#### How to use
#### Limitations and bias
Since the model is trained on VoxLingua107, it has many limitations and biases, some of which are:
 - Probably its accuracy on smaller languages is quite limited
- Probably it works worse on female speech than male speech (because YouTube data includes much more male speech)
- Based on subjective experiments, it doesn't work well on speech with a foreign accent
- Probably it doesn't work well on children's speech and on persons with speech disorders
## Training data
The model is trained on VoxLingua107.
VoxLingua107 is a speech dataset for training spoken language identification models.
The dataset consists of short speech segments automatically extracted from YouTube videos and labeled according to the language of the video title and description, with some post-processing steps to filter out false positives.
VoxLingua107 contains data for 107 languages. The total amount of speech in the training set is 6628 hours.
The average amount of data per language is 62 hours. However, the real amount per language varies a lot. There is also a separate development set containing 1609 speech segments from 33 languages, validated by at least two volunteers to really contain the given language.
## Training procedure
We used SpeechBrain to train the model.
Training recipe will be published soon.
## Evaluation results
Error rate: 6.7% on the VoxLingua107 development dataset
### BibTeX entry and citation info
|
[
"# VoxLingua107 ECAPA-TDNN Spoken Language Identification Model (CE)",
"## Model description\n\nThis is a spoken language recognition model trained on the VoxLingua107 dataset using SpeechBrain.\nThe model uses the ECAPA-TDNN architecture that has previously been used for speaker recognition. However, it uses\nmore fully connected hidden layers after the embedding layer, and cross-entropy loss was used for training. \nWe observed that this improved the performance of extracted utterance embeddings for downstream tasks.\n\nThe model can classify a speech utterance according to the language spoken.\nIt covers 107 different languages (\nAbkhazian, \nAfrikaans, \nAmharic, \nArabic, \nAssamese, \nAzerbaijani, \nBashkir, \nBelarusian, \nBulgarian, \nBengali, \nTibetan, \nBreton, \nBosnian, \nCatalan, \nCebuano, \nCzech, \nWelsh, \nDanish, \nGerman, \nGreek, \nEnglish, \nEsperanto, \nSpanish, \nEstonian, \nBasque, \nPersian, \nFinnish, \nFaroese, \nFrench, \nGalician, \nGuarani, \nGujarati, \nManx, \nHausa, \nHawaiian, \nHindi, \nCroatian, \nHaitian, \nHungarian, \nArmenian, \nInterlingua, \nIndonesian, \nIcelandic, \nItalian, \nHebrew, \nJapanese, \nJavanese, \nGeorgian, \nKazakh, \nCentral Khmer, \nKannada, \nKorean, \nLatin, \nLuxembourgish, \nLingala, \nLao, \nLithuanian, \nLatvian, \nMalagasy, \nMaori, \nMacedonian, \nMalayalam, \nMongolian, \nMarathi, \nMalay, \nMaltese, \nBurmese, \nNepali, \nDutch, \nNorwegian Nynorsk, \nNorwegian, \nOccitan, \nPanjabi, \nPolish, \nPushto, \nPortuguese, \nRomanian, \nRussian, \nSanskrit, \nScots, \nSindhi, \nSinhala, \nSlovak, \nSlovenian, \nShona, \nSomali, \nAlbanian, \nSerbian, \nSundanese, \nSwedish, \nSwahili, \nTamil, \nTelugu, \nTajik, \nThai, \nTurkmen, \nTagalog, \nTurkish, \nTatar, \nUkrainian, \nUrdu, \nUzbek, \nVietnamese, \nWaray, \nYiddish, \nYoruba, \nMandarin Chinese).",
"## Intended uses & limitations\n\nThe model has two uses:\n\n - use 'as is' for spoken language recognition\n - use as an utterance-level feature (embedding) extractor, for creating a dedicated language ID model on your own data\n \nThe model is trained on automatically collected YouTube data. For more \ninformation about the dataset, see here.",
"#### How to use",
"#### Limitations and bias\n\nSince the model is trained on VoxLingua107, it has many limitations and biases, some of which are:\n\n - Probably it's accuracy on smaller languages is quite limited\n - Probably it works worse on female speech than male speech (because YouTube data includes much more male speech)\n - Based on subjective experiments, it doesn't work well on speech with a foreign accent\n - Probably it doesn't work well on children's speech and on persons with speech disorders",
"## Training data\n\nThe model is trained on VoxLingua107.\n\nVoxLingua107 is a speech dataset for training spoken language identification models. \nThe dataset consists of short speech segments automatically extracted from YouTube videos and labeled according the language of the video title and description, with some post-processing steps to filter out false positives.\n\nVoxLingua107 contains data for 107 languages. The total amount of speech in the training set is 6628 hours. \nThe average amount of data per language is 62 hours. However, the real amount per language varies a lot. There is also a seperate development set containing 1609 speech segments from 33 languages, validated by at least two volunteers to really contain the given language.",
"## Training procedure\n\nWe used SpeechBrain to train the model.\nTraining recipe will be published soon.",
"## Evaluation results\n\nError rate: 6.7% on the VoxLingua107 development dataset",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#speechbrain #audio-classification #embeddings #Language #Identification #pytorch #ECAPA-TDNN #TDNN #VoxLingua107 #multilingual #dataset-VoxLingua107 #license-apache-2.0 #region-us \n",
"# VoxLingua107 ECAPA-TDNN Spoken Language Identification Model (CE)",
"## Model description\n\nThis is a spoken language recognition model trained on the VoxLingua107 dataset using SpeechBrain.\nThe model uses the ECAPA-TDNN architecture that has previously been used for speaker recognition. However, it uses\nmore fully connected hidden layers after the embedding layer, and cross-entropy loss was used for training. \nWe observed that this improved the performance of extracted utterance embeddings for downstream tasks.\n\nThe model can classify a speech utterance according to the language spoken.\nIt covers 107 different languages (\nAbkhazian, \nAfrikaans, \nAmharic, \nArabic, \nAssamese, \nAzerbaijani, \nBashkir, \nBelarusian, \nBulgarian, \nBengali, \nTibetan, \nBreton, \nBosnian, \nCatalan, \nCebuano, \nCzech, \nWelsh, \nDanish, \nGerman, \nGreek, \nEnglish, \nEsperanto, \nSpanish, \nEstonian, \nBasque, \nPersian, \nFinnish, \nFaroese, \nFrench, \nGalician, \nGuarani, \nGujarati, \nManx, \nHausa, \nHawaiian, \nHindi, \nCroatian, \nHaitian, \nHungarian, \nArmenian, \nInterlingua, \nIndonesian, \nIcelandic, \nItalian, \nHebrew, \nJapanese, \nJavanese, \nGeorgian, \nKazakh, \nCentral Khmer, \nKannada, \nKorean, \nLatin, \nLuxembourgish, \nLingala, \nLao, \nLithuanian, \nLatvian, \nMalagasy, \nMaori, \nMacedonian, \nMalayalam, \nMongolian, \nMarathi, \nMalay, \nMaltese, \nBurmese, \nNepali, \nDutch, \nNorwegian Nynorsk, \nNorwegian, \nOccitan, \nPanjabi, \nPolish, \nPushto, \nPortuguese, \nRomanian, \nRussian, \nSanskrit, \nScots, \nSindhi, \nSinhala, \nSlovak, \nSlovenian, \nShona, \nSomali, \nAlbanian, \nSerbian, \nSundanese, \nSwedish, \nSwahili, \nTamil, \nTelugu, \nTajik, \nThai, \nTurkmen, \nTagalog, \nTurkish, \nTatar, \nUkrainian, \nUrdu, \nUzbek, \nVietnamese, \nWaray, \nYiddish, \nYoruba, \nMandarin Chinese).",
"## Intended uses & limitations\n\nThe model has two uses:\n\n - use 'as is' for spoken language recognition\n - use as an utterance-level feature (embedding) extractor, for creating a dedicated language ID model on your own data\n \nThe model is trained on automatically collected YouTube data. For more \ninformation about the dataset, see here.",
"#### How to use",
"#### Limitations and bias\n\nSince the model is trained on VoxLingua107, it has many limitations and biases, some of which are:\n\n - Probably it's accuracy on smaller languages is quite limited\n - Probably it works worse on female speech than male speech (because YouTube data includes much more male speech)\n - Based on subjective experiments, it doesn't work well on speech with a foreign accent\n - Probably it doesn't work well on children's speech and on persons with speech disorders",
"## Training data\n\nThe model is trained on VoxLingua107.\n\nVoxLingua107 is a speech dataset for training spoken language identification models. \nThe dataset consists of short speech segments automatically extracted from YouTube videos and labeled according the language of the video title and description, with some post-processing steps to filter out false positives.\n\nVoxLingua107 contains data for 107 languages. The total amount of speech in the training set is 6628 hours. \nThe average amount of data per language is 62 hours. However, the real amount per language varies a lot. There is also a seperate development set containing 1609 speech segments from 33 languages, validated by at least two volunteers to really contain the given language.",
"## Training procedure\n\nWe used SpeechBrain to train the model.\nTraining recipe will be published soon.",
"## Evaluation results\n\nError rate: 6.7% on the VoxLingua107 development dataset",
"### BibTeX entry and citation info"
] |
audio-classification
|
speechbrain
|
# VoxLingua107 ECAPA-TDNN Spoken Language Identification Model
## Model description
This is a spoken language recognition model trained on the VoxLingua107 dataset using SpeechBrain.
The model uses the ECAPA-TDNN architecture that has previously been used for speaker recognition.
The model can classify a speech utterance according to the language spoken.
It covers 107 different languages (
Abkhazian,
Afrikaans,
Amharic,
Arabic,
Assamese,
Azerbaijani,
Bashkir,
Belarusian,
Bulgarian,
Bengali,
Tibetan,
Breton,
Bosnian,
Catalan,
Cebuano,
Czech,
Welsh,
Danish,
German,
Greek,
English,
Esperanto,
Spanish,
Estonian,
Basque,
Persian,
Finnish,
Faroese,
French,
Galician,
Guarani,
Gujarati,
Manx,
Hausa,
Hawaiian,
Hindi,
Croatian,
Haitian,
Hungarian,
Armenian,
Interlingua,
Indonesian,
Icelandic,
Italian,
Hebrew,
Japanese,
Javanese,
Georgian,
Kazakh,
Central Khmer,
Kannada,
Korean,
Latin,
Luxembourgish,
Lingala,
Lao,
Lithuanian,
Latvian,
Malagasy,
Maori,
Macedonian,
Malayalam,
Mongolian,
Marathi,
Malay,
Maltese,
Burmese,
Nepali,
Dutch,
Norwegian Nynorsk,
Norwegian,
Occitan,
Panjabi,
Polish,
Pushto,
Portuguese,
Romanian,
Russian,
Sanskrit,
Scots,
Sindhi,
Sinhala,
Slovak,
Slovenian,
Shona,
Somali,
Albanian,
Serbian,
Sundanese,
Swedish,
Swahili,
Tamil,
Telugu,
Tajik,
Thai,
Turkmen,
Tagalog,
Turkish,
Tatar,
Ukrainian,
Urdu,
Uzbek,
Vietnamese,
Waray,
Yiddish,
Yoruba,
Mandarin Chinese).
## Intended uses & limitations
The model has two uses:
- use 'as is' for spoken language recognition
- use as an utterance-level feature (embedding) extractor, for creating a dedicated language ID model on your own data
The model is trained on automatically collected YouTube data. For more
information about the dataset, see [here](http://bark.phon.ioc.ee/voxlingua107/).
#### How to use
```python
import torchaudio
from speechbrain.pretrained import EncoderClassifier
language_id = EncoderClassifier.from_hparams(source="TalTechNLP/voxlingua107-epaca-tdnn", savedir="tmp")
# Download a Thai language sample from Omniglot and convert it to a suitable form
signal = language_id.load_audio("https://omniglot.com/soundfiles/udhr/udhr_th.mp3")
prediction = language_id.classify_batch(signal)
print(prediction)
(tensor([[0.3210, 0.3751, 0.3680, 0.3939, 0.4026, 0.3644, 0.3689, 0.3597, 0.3508,
0.3666, 0.3895, 0.3978, 0.3848, 0.3957, 0.3949, 0.3586, 0.4360, 0.3997,
0.4106, 0.3886, 0.4177, 0.3870, 0.3764, 0.3763, 0.3672, 0.4000, 0.4256,
0.4091, 0.3563, 0.3695, 0.3320, 0.3838, 0.3850, 0.3867, 0.3878, 0.3944,
0.3924, 0.4063, 0.3803, 0.3830, 0.2996, 0.4187, 0.3976, 0.3651, 0.3950,
0.3744, 0.4295, 0.3807, 0.3613, 0.4710, 0.3530, 0.4156, 0.3651, 0.3777,
0.3813, 0.6063, 0.3708, 0.3886, 0.3766, 0.4023, 0.3785, 0.3612, 0.4193,
0.3720, 0.4406, 0.3243, 0.3866, 0.3866, 0.4104, 0.4294, 0.4175, 0.3364,
0.3595, 0.3443, 0.3565, 0.3776, 0.3985, 0.3778, 0.2382, 0.4115, 0.4017,
0.4070, 0.3266, 0.3648, 0.3888, 0.3907, 0.3755, 0.3631, 0.4460, 0.3464,
0.3898, 0.3661, 0.3883, 0.3772, 0.9289, 0.3687, 0.4298, 0.4211, 0.3838,
0.3521, 0.3515, 0.3465, 0.4772, 0.4043, 0.3844, 0.3973, 0.4343]]), tensor([0.9289]), tensor([94]), ['th'])
# The scores in the prediction[0] tensor can be interpreted as cosine scores between
# the languages and the given utterance (i.e., the larger the better)
# The identified language ISO code is given in prediction[3]
print(prediction[3])
['th']
# Alternatively, use the utterance embedding extractor:
emb = language_id.encode_batch(signal)
print(emb.shape)
torch.Size([1, 1, 256])
```
#### Limitations and bias
Since the model is trained on VoxLingua107, it has many limitations and biases, some of which are:
 - Probably its accuracy on smaller languages is quite limited
- Probably it works worse on female speech than male speech (because YouTube data includes much more male speech)
- Based on subjective experiments, it doesn't work well on speech with a foreign accent
- Probably it doesn't work well on children's speech and on persons with speech disorders
## Training data
The model is trained on [VoxLingua107](http://bark.phon.ioc.ee/voxlingua107/).
VoxLingua107 is a speech dataset for training spoken language identification models.
The dataset consists of short speech segments automatically extracted from YouTube videos and labeled according to the language of the video title and description, with some post-processing steps to filter out false positives.
VoxLingua107 contains data for 107 languages. The total amount of speech in the training set is 6628 hours.
The average amount of data per language is 62 hours. However, the real amount per language varies a lot. There is also a separate development set containing 1609 speech segments from 33 languages, validated by at least two volunteers to really contain the given language.
## Training procedure
We used [SpeechBrain](https://github.com/speechbrain/speechbrain) to train the model.
Training recipe will be published soon.
## Evaluation results
Error rate: 7% on the development dataset
### BibTeX entry and citation info
```bibtex
@inproceedings{valk2021slt,
title={{VoxLingua107}: a Dataset for Spoken Language Recognition},
author={J{\"o}rgen Valk and Tanel Alum{\"a}e},
booktitle={Proc. IEEE SLT Workshop},
year={2021},
}
```
|
{"language": "multilingual", "license": "apache-2.0", "tags": ["audio-classification", "speechbrain", "embeddings", "Language", "Identification", "pytorch", "ECAPA-TDNN", "TDNN", "VoxLingua107"], "datasets": ["VoxLingua107"], "metrics": ["Accuracy"], "widget": [{"example_title": "English Sample", "src": "https://cdn-media.huggingface.co/speech_samples/LibriSpeech_61-70968-0000.flac"}]}
|
TalTechNLP/voxlingua107-epaca-tdnn
| null |
[
"speechbrain",
"audio-classification",
"embeddings",
"Language",
"Identification",
"pytorch",
"ECAPA-TDNN",
"TDNN",
"VoxLingua107",
"multilingual",
"dataset:VoxLingua107",
"license:apache-2.0",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"multilingual"
] |
TAGS
#speechbrain #audio-classification #embeddings #Language #Identification #pytorch #ECAPA-TDNN #TDNN #VoxLingua107 #multilingual #dataset-VoxLingua107 #license-apache-2.0 #has_space #region-us
|
# VoxLingua107 ECAPA-TDNN Spoken Language Identification Model
## Model description
This is a spoken language recognition model trained on the VoxLingua107 dataset using SpeechBrain.
The model uses the ECAPA-TDNN architecture that has previously been used for speaker recognition.
The model can classify a speech utterance according to the language spoken.
It covers 107 different languages (
Abkhazian,
Afrikaans,
Amharic,
Arabic,
Assamese,
Azerbaijani,
Bashkir,
Belarusian,
Bulgarian,
Bengali,
Tibetan,
Breton,
Bosnian,
Catalan,
Cebuano,
Czech,
Welsh,
Danish,
German,
Greek,
English,
Esperanto,
Spanish,
Estonian,
Basque,
Persian,
Finnish,
Faroese,
French,
Galician,
Guarani,
Gujarati,
Manx,
Hausa,
Hawaiian,
Hindi,
Croatian,
Haitian,
Hungarian,
Armenian,
Interlingua,
Indonesian,
Icelandic,
Italian,
Hebrew,
Japanese,
Javanese,
Georgian,
Kazakh,
Central Khmer,
Kannada,
Korean,
Latin,
Luxembourgish,
Lingala,
Lao,
Lithuanian,
Latvian,
Malagasy,
Maori,
Macedonian,
Malayalam,
Mongolian,
Marathi,
Malay,
Maltese,
Burmese,
Nepali,
Dutch,
Norwegian Nynorsk,
Norwegian,
Occitan,
Panjabi,
Polish,
Pushto,
Portuguese,
Romanian,
Russian,
Sanskrit,
Scots,
Sindhi,
Sinhala,
Slovak,
Slovenian,
Shona,
Somali,
Albanian,
Serbian,
Sundanese,
Swedish,
Swahili,
Tamil,
Telugu,
Tajik,
Thai,
Turkmen,
Tagalog,
Turkish,
Tatar,
Ukrainian,
Urdu,
Uzbek,
Vietnamese,
Waray,
Yiddish,
Yoruba,
Mandarin Chinese).
## Intended uses & limitations
The model has two uses:
- use 'as is' for spoken language recognition
- use as an utterance-level feature (embedding) extractor, for creating a dedicated language ID model on your own data
The model is trained on automatically collected YouTube data. For more
information about the dataset, see here.
#### How to use
#### Limitations and bias
Since the model is trained on VoxLingua107, it has many limitations and biases, some of which are:
 - Probably its accuracy on smaller languages is quite limited
- Probably it works worse on female speech than male speech (because YouTube data includes much more male speech)
- Based on subjective experiments, it doesn't work well on speech with a foreign accent
- Probably it doesn't work well on children's speech and on persons with speech disorders
## Training data
The model is trained on VoxLingua107.
VoxLingua107 is a speech dataset for training spoken language identification models.
The dataset consists of short speech segments automatically extracted from YouTube videos and labeled according to the language of the video title and description, with some post-processing steps to filter out false positives.
VoxLingua107 contains data for 107 languages. The total amount of speech in the training set is 6628 hours.
The average amount of data per language is 62 hours. However, the real amount per language varies a lot. There is also a separate development set containing 1609 speech segments from 33 languages, validated by at least two volunteers to really contain the given language.
## Training procedure
We used SpeechBrain to train the model.
Training recipe will be published soon.
## Evaluation results
Error rate: 7% on the development dataset
### BibTeX entry and citation info
|
[
"# VoxLingua107 ECAPA-TDNN Spoken Language Identification Model",
"## Model description\n\nThis is a spoken language recognition model trained on the VoxLingua107 dataset using SpeechBrain.\nThe model uses the ECAPA-TDNN architecture that has previously been used for speaker recognition.\n\nThe model can classify a speech utterance according to the language spoken.\nIt covers 107 different languages (\nAbkhazian, \nAfrikaans, \nAmharic, \nArabic, \nAssamese, \nAzerbaijani, \nBashkir, \nBelarusian, \nBulgarian, \nBengali, \nTibetan, \nBreton, \nBosnian, \nCatalan, \nCebuano, \nCzech, \nWelsh, \nDanish, \nGerman, \nGreek, \nEnglish, \nEsperanto, \nSpanish, \nEstonian, \nBasque, \nPersian, \nFinnish, \nFaroese, \nFrench, \nGalician, \nGuarani, \nGujarati, \nManx, \nHausa, \nHawaiian, \nHindi, \nCroatian, \nHaitian, \nHungarian, \nArmenian, \nInterlingua, \nIndonesian, \nIcelandic, \nItalian, \nHebrew, \nJapanese, \nJavanese, \nGeorgian, \nKazakh, \nCentral Khmer, \nKannada, \nKorean, \nLatin, \nLuxembourgish, \nLingala, \nLao, \nLithuanian, \nLatvian, \nMalagasy, \nMaori, \nMacedonian, \nMalayalam, \nMongolian, \nMarathi, \nMalay, \nMaltese, \nBurmese, \nNepali, \nDutch, \nNorwegian Nynorsk, \nNorwegian, \nOccitan, \nPanjabi, \nPolish, \nPushto, \nPortuguese, \nRomanian, \nRussian, \nSanskrit, \nScots, \nSindhi, \nSinhala, \nSlovak, \nSlovenian, \nShona, \nSomali, \nAlbanian, \nSerbian, \nSundanese, \nSwedish, \nSwahili, \nTamil, \nTelugu, \nTajik, \nThai, \nTurkmen, \nTagalog, \nTurkish, \nTatar, \nUkrainian, \nUrdu, \nUzbek, \nVietnamese, \nWaray, \nYiddish, \nYoruba, \nMandarin Chinese).",
"## Intended uses & limitations\n\nThe model has two uses:\n\n - use 'as is' for spoken language recognition\n - use as an utterance-level feature (embedding) extractor, for creating a dedicated language ID model on your own data\n \nThe model is trained on automatically collected YouTube data. For more \ninformation about the dataset, see here.",
"#### How to use",
"#### Limitations and bias\n\nSince the model is trained on VoxLingua107, it has many limitations and biases, some of which are:\n\n - Probably it's accuracy on smaller languages is quite limited\n - Probably it works worse on female speech than male speech (because YouTube data includes much more male speech)\n - Based on subjective experiments, it doesn't work well on speech with a foreign accent\n - Probably it doesn't work well on children's speech and on persons with speech disorders",
"## Training data\n\nThe model is trained on VoxLingua107.\n\nVoxLingua107 is a speech dataset for training spoken language identification models. \nThe dataset consists of short speech segments automatically extracted from YouTube videos and labeled according the language of the video title and description, with some post-processing steps to filter out false positives.\n\nVoxLingua107 contains data for 107 languages. The total amount of speech in the training set is 6628 hours. \nThe average amount of data per language is 62 hours. However, the real amount per language varies a lot. There is also a seperate development set containing 1609 speech segments from 33 languages, validated by at least two volunteers to really contain the given language.",
"## Training procedure\n\nWe used SpeechBrain to train the model.\nTraining recipe will be published soon.",
"## Evaluation results\n\nError rate: 7% on the development dataset",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#speechbrain #audio-classification #embeddings #Language #Identification #pytorch #ECAPA-TDNN #TDNN #VoxLingua107 #multilingual #dataset-VoxLingua107 #license-apache-2.0 #has_space #region-us \n",
"# VoxLingua107 ECAPA-TDNN Spoken Language Identification Model",
"## Model description\n\nThis is a spoken language recognition model trained on the VoxLingua107 dataset using SpeechBrain.\nThe model uses the ECAPA-TDNN architecture that has previously been used for speaker recognition.\n\nThe model can classify a speech utterance according to the language spoken.\nIt covers 107 different languages (\nAbkhazian, \nAfrikaans, \nAmharic, \nArabic, \nAssamese, \nAzerbaijani, \nBashkir, \nBelarusian, \nBulgarian, \nBengali, \nTibetan, \nBreton, \nBosnian, \nCatalan, \nCebuano, \nCzech, \nWelsh, \nDanish, \nGerman, \nGreek, \nEnglish, \nEsperanto, \nSpanish, \nEstonian, \nBasque, \nPersian, \nFinnish, \nFaroese, \nFrench, \nGalician, \nGuarani, \nGujarati, \nManx, \nHausa, \nHawaiian, \nHindi, \nCroatian, \nHaitian, \nHungarian, \nArmenian, \nInterlingua, \nIndonesian, \nIcelandic, \nItalian, \nHebrew, \nJapanese, \nJavanese, \nGeorgian, \nKazakh, \nCentral Khmer, \nKannada, \nKorean, \nLatin, \nLuxembourgish, \nLingala, \nLao, \nLithuanian, \nLatvian, \nMalagasy, \nMaori, \nMacedonian, \nMalayalam, \nMongolian, \nMarathi, \nMalay, \nMaltese, \nBurmese, \nNepali, \nDutch, \nNorwegian Nynorsk, \nNorwegian, \nOccitan, \nPanjabi, \nPolish, \nPushto, \nPortuguese, \nRomanian, \nRussian, \nSanskrit, \nScots, \nSindhi, \nSinhala, \nSlovak, \nSlovenian, \nShona, \nSomali, \nAlbanian, \nSerbian, \nSundanese, \nSwedish, \nSwahili, \nTamil, \nTelugu, \nTajik, \nThai, \nTurkmen, \nTagalog, \nTurkish, \nTatar, \nUkrainian, \nUrdu, \nUzbek, \nVietnamese, \nWaray, \nYiddish, \nYoruba, \nMandarin Chinese).",
"## Intended uses & limitations\n\nThe model has two uses:\n\n - use 'as is' for spoken language recognition\n - use as an utterance-level feature (embedding) extractor, for creating a dedicated language ID model on your own data\n \nThe model is trained on automatically collected YouTube data. For more \ninformation about the dataset, see here.",
"#### How to use",
"#### Limitations and bias\n\nSince the model is trained on VoxLingua107, it has many limitations and biases, some of which are:\n\n - Probably it's accuracy on smaller languages is quite limited\n - Probably it works worse on female speech than male speech (because YouTube data includes much more male speech)\n - Based on subjective experiments, it doesn't work well on speech with a foreign accent\n - Probably it doesn't work well on children's speech and on persons with speech disorders",
"## Training data\n\nThe model is trained on VoxLingua107.\n\nVoxLingua107 is a speech dataset for training spoken language identification models. \nThe dataset consists of short speech segments automatically extracted from YouTube videos and labeled according the language of the video title and description, with some post-processing steps to filter out false positives.\n\nVoxLingua107 contains data for 107 languages. The total amount of speech in the training set is 6628 hours. \nThe average amount of data per language is 62 hours. However, the real amount per language varies a lot. There is also a seperate development set containing 1609 speech segments from 33 languages, validated by at least two volunteers to really contain the given language.",
"## Training procedure\n\nWe used SpeechBrain to train the model.\nTraining recipe will be published soon.",
"## Evaluation results\n\nError rate: 7% on the development dataset",
"### BibTeX entry and citation info"
] |
automatic-speech-recognition
|
transformers
|
# XLS-R-300m-ET
This is an XLS-R-300M model, [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m), fine-tuned on around 800 hours of diverse Estonian data.
## Model description
This is a general-purpose Estonian ASR model trained in the Lab of Language Technology at TalTech. It consists of only the CTC-based end-to-end model, no language model is currently provided.
## Intended uses & limitations
This model is intended for general-purpose speech recognition, such as broadcast conversations, interviews, talks, etc.
## How to use
TODO
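A minimal sketch, assuming the checkpoint loads with the standard transformers Wav2Vec2 CTC classes (the audio path is a placeholder; input should be 16 kHz mono speech):

```python
import torch
import soundfile as sf
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Sketch only: plain CTC decoding without a language model.
processor = Wav2Vec2Processor.from_pretrained("TalTechNLP/xls-r-300m-et")
model = Wav2Vec2ForCTC.from_pretrained("TalTechNLP/xls-r-300m-et")

speech, rate = sf.read("estonian_sample.wav")  # placeholder path
assert rate == 16000

inputs = processor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```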
#### Limitations and bias
Since this model was trained on mostly broadcast speech and texts from the web, it might have problems correctly decoding the following:
* Speech containing technical and other domain-specific terms
* Children's speech
* Non-native speech
* Speech recorded under very noisy conditions or with a microphone far from the speaker
* Very spontaneous and overlapping speech
## Training data
Acoustic training data:
| Type | Amount (h) |
|-----------------------|:------:|
| Broadcast speech | 591 |
| Spontaneous speech | 53 |
| Elderly speech corpus | 53 |
| Talks, lectures | 49 |
| Parliament speeches | 31 |
| *Total* | *761* |
## Training procedure
Finetuned using Fairseq.
## Evaluation results
### WER
|Dataset | WER |
|---|---|
| jutusaated.devset | 7.9 |
| jutusaated.testset | 6.1 |
| Common Voice 6.1 | 12.5 |
| Common Voice 8.0 | 13.4 |
|
{"language": "et", "license": "cc-by-4.0", "tags": ["audio", "automatic-speech-recognition", "hf-asr-leaderboard"], "model-index": [{"name": "xls-r-300m-et", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice", "type": "common_voice", "args": "et"}, "metrics": [{"type": "wer", "value": 12.520395591222401, "name": "Test WER"}, {"type": "cer", "value": 2.70911524386249, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "et"}, "metrics": [{"type": "wer", "value": 13.38447882323104, "name": "Test WER"}, {"type": "cer", "value": 2.9816686199500255, "name": "Test CER"}]}]}]}
|
TalTechNLP/xls-r-300m-et
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"hf-asr-leaderboard",
"et",
"license:cc-by-4.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"et"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #hf-asr-leaderboard #et #license-cc-by-4.0 #model-index #endpoints_compatible #region-us
|
XLS-R-300m-ET
=============
This is an XLS-R-300M model facebook/wav2vec2-xls-r-300m fine-tuned on around 800 hours of diverse Estonian data.
Model description
-----------------
This is a general-purpose Estonian ASR model trained in the Lab of Language Technology at TalTech. It consists of only the CTC-based end-to-end model, no language model is currently provided.
Intended uses & limitations
---------------------------
This model is intended for general-purpose speech recognition, such as broadcast conversations, interviews, talks, etc.
How to use
----------
TODO
#### Limitations and bias
Since this model was trained on mostly broadcast speech and texts from the web, it might have problems correctly decoding the following:
* Speech containing technical and other domain-specific terms
* Children's speech
* Non-native speech
* Speech recorded under very noisy conditions or with a microphone far from the speaker
* Very spontaneous and overlapping speech
Training data
-------------
Acoustic training data:
Training procedure
------------------
Finetuned using Fairseq.
Evaluation results
------------------
### WER
|
[
"#### Limitations and bias\n\n\nSince this model was trained on mostly broadcast speech and texts from the web, it might have problems correctly decoding the following:\n\n\n* Speech containing technical and other domain-specific terms\n* Children's speech\n* Non-native speech\n* Speech recorded under very noisy conditions or with a microphone far from the speaker\n* Very spontaneous and overlapping speech\n\n\nTraining data\n-------------\n\n\nAcoustic training data:\n\n\n\nTraining procedure\n------------------\n\n\nFinetuned using Fairseq.\n\n\nEvaluation results\n------------------",
"### WER"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #hf-asr-leaderboard #et #license-cc-by-4.0 #model-index #endpoints_compatible #region-us \n",
"#### Limitations and bias\n\n\nSince this model was trained on mostly broadcast speech and texts from the web, it might have problems correctly decoding the following:\n\n\n* Speech containing technical and other domain-specific terms\n* Children's speech\n* Non-native speech\n* Speech recorded under very noisy conditions or with a microphone far from the speaker\n* Very spontaneous and overlapping speech\n\n\nTraining data\n-------------\n\n\nAcoustic training data:\n\n\n\nTraining procedure\n------------------\n\n\nFinetuned using Fairseq.\n\n\nEvaluation results\n------------------",
"### WER"
] |
text-generation
|
transformers
|
<h2> GPT2 Model for German Language </h2>
Model Name: Tanhim/gpt2-model-de <br />
language: German or Deutsch <br />
thumbnail: https://huggingface.co/Tanhim/gpt2-model-de <br />
datasets: Ten Thousand German News Articles Dataset <br />
### How to use
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, I
set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generation= pipeline('text-generation', model='Tanhim/gpt2-model-de', tokenizer='Tanhim/gpt2-model-de')
>>> set_seed(42)
>>> generation("Hallo, ich bin ein Sprachmodell,", max_length=30, num_return_sequences=5)
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("Tanhim/gpt2-model-de")
model = AutoModelWithLMHead.from_pretrained("Tanhim/gpt2-model-de")
text = "Ersetzen Sie mich durch einen beliebigen Text, den Sie wünschen."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
Citation request:
If you use the model of this repository in your research, please consider citing the following way:
```python
@misc{GermanTransformer,
author = {Tanhim Islam},
title = {{PyTorch Based Transformer Machine Learning Model for German Text Generation Task}},
howpublished = "\url{https://huggingface.co/Tanhim/gpt2-model-de}",
year = {2021},
note = "[Online; accessed 17-June-2021]"
}
```
|
{"language": "de", "license": "gpl", "widget": [{"text": "Hallo, ich bin ein Sprachmodell"}]}
|
Tanhim/gpt2-model-de
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"de",
"license:gpl",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"de"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #de #license-gpl #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<h2> GPT2 Model for German Language </h2>
Model Name: Tanhim/gpt2-model-de <br />
language: German or Deutsch <br />
thumbnail: URL <br />
datasets: Ten Thousand German News Articles Dataset <br />
### How to use
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, I
set a seed for reproducibility:
Here is how to use this model to get the features of a given text in PyTorch:
Citation request:
If you use the model of this repository in your research, please consider citing the following way:
|
[
"### How to use\nYou can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, I\nset a seed for reproducibility:\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nCitation request:\nIf you use the model of this repository in your research, please consider citing the following way:"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #de #license-gpl #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### How to use\nYou can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, I\nset a seed for reproducibility:\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nCitation request:\nIf you use the model of this repository in your research, please consider citing the following way:"
] |
translation
|
transformers
|
<h2> English to German Translation </h2>
Model Name: Tanhim/translation-En2De <br />
language: German or Deutsch <br />
thumbnail: https://huggingface.co/Tanhim/translation-En2De <br />
### How to use
You can use this model directly with a pipeline for machine translation. Since the generation relies on some randomness, I
set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> text_En2De= pipeline('translation', model='Tanhim/translation-En2De', tokenizer='Tanhim/translation-En2De')
>>> set_seed(42)
>>> text_En2De("My name is Karl and I live in Aachen")
```
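For more control over decoding, the same checkpoint can also be loaded directly. This is a minimal sketch under the assumption that the standard `MarianMTModel`/`MarianTokenizer` classes apply to this repository; the example sentence is arbitrary:
```python
from transformers import MarianMTModel, MarianTokenizer

tokenizer = MarianTokenizer.from_pretrained("Tanhim/translation-En2De")
model = MarianMTModel.from_pretrained("Tanhim/translation-En2De")

batch = tokenizer(["My name is Karl and I live in Aachen."], return_tensors="pt", padding=True)
generated_ids = model.generate(**batch)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True))
```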
### beta version
|
{"language": "de", "license": "gpl", "tags": ["translation"], "datasets": ["wmt19"], "widget": [{"text": "My name is Karl and I live in Aachen."}]}
|
Tanhim/translation-En2De
| null |
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"de",
"dataset:wmt19",
"license:gpl",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"de"
] |
TAGS
#transformers #pytorch #marian #text2text-generation #translation #de #dataset-wmt19 #license-gpl #autotrain_compatible #endpoints_compatible #region-us
|
<h2> English to German Translation </h2>
Model Name: Tanhim/translation-En2De <br />
language: German or Deutsch <br />
thumbnail: URL <br />
### How to use
You can use this model directly with a pipeline for machine translation. Since the generation relies on some randomness, I
set a seed for reproducibility:
### beta version
|
[
"### How to use\nYou can use this model directly with a pipeline for machine translation. Since the generation relies on some randomness, I\nset a seed for reproducibility:",
"### beta version"
] |
[
"TAGS\n#transformers #pytorch #marian #text2text-generation #translation #de #dataset-wmt19 #license-gpl #autotrain_compatible #endpoints_compatible #region-us \n",
"### How to use\nYou can use this model directly with a pipeline for machine translation. Since the generation relies on some randomness, I\nset a seed for reproducibility:",
"### beta version"
] |
text-generation
| null |
# Hoshiyo Kojima DialoGPT Model
|
{"tags": ["conversational"]}
|
Taramiko/DialoGPT-small-hoshiyo_kojima
| null |
[
"conversational",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#conversational #region-us
|
# Hoshiyo Kojima DialoGPT Model
|
[
"# Hoshiyo Kojima DialoGPT Model"
] |
[
"TAGS\n#conversational #region-us \n",
"# Hoshiyo Kojima DialoGPT Model"
] |
text-generation
|
transformers
|
# Hoshiyo Kojima DialoGPT Model
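The card does not yet include usage instructions. A minimal chat-loop sketch in the usual DialoGPT style is shown below; it assumes the checkpoint follows the standard DialoGPT tokenizer and causal-LM layout:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Taramiko/Hoshiyo_Kojima")
model = AutoModelForCausalLM.from_pretrained("Taramiko/Hoshiyo_Kojima")

chat_history_ids = None
for step in range(3):
    # Encode the user input and append the end-of-sequence token
    new_input_ids = tokenizer.encode(input(">> User: ") + tokenizer.eos_token, return_tensors="pt")
    bot_input_ids = new_input_ids if chat_history_ids is None else torch.cat([chat_history_ids, new_input_ids], dim=-1)
    # Generate a reply while keeping the running conversation as context
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    print("Bot:", tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True))
```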
|
{"tags": ["conversational"]}
|
Taramiko/Hoshiyo_Kojima
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Hoshiyo Kojima DialoGPT Model
|
[
"# Hoshiyo Kojima DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Hoshiyo Kojima DialoGPT Model"
] |
text2text-generation
|
transformers
|
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 21664560
- CO2 Emissions (in grams): 5.680803958729511
## Validation Metrics
- Loss: 1.7488420009613037
- Rouge1: 38.1491
- Rouge2: 18.6257
- RougeL: 26.8448
- RougeLsum: 32.2433
- Gen Len: 49.0
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/Tarang1998/autonlp-pegasus-21664560
```
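Alternatively (assuming you have access to the repository), the checkpoint can be loaded locally with the 🤗 Transformers summarization pipeline; the input text is just the widget placeholder:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Tarang1998/autonlp-pegasus-21664560")
print(summarizer("I love AutoNLP"))
```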
|
{"language": "unk", "tags": "autonlp", "datasets": ["Tarang1998/autonlp-data-pegasus"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 5.680803958729511}
|
Tarang1998/autonlp-pegasus-21664560
| null |
[
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"autonlp",
"unk",
"dataset:Tarang1998/autonlp-data-pegasus",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"unk"
] |
TAGS
#transformers #pytorch #pegasus #text2text-generation #autonlp #unk #dataset-Tarang1998/autonlp-data-pegasus #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us
|
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 21664560
- CO2 Emissions (in grams): 5.680803958729511
## Validation Metrics
- Loss: 1.7488420009613037
- Rouge1: 38.1491
- Rouge2: 18.6257
- RougeL: 26.8448
- RougeLsum: 32.2433
- Gen Len: 49.0
## Usage
You can use cURL to access this model:
|
[
"# Model Trained Using AutoNLP\n\n- Problem type: Summarization\n- Model ID: 21664560\n- CO2 Emissions (in grams): 5.680803958729511",
"## Validation Metrics\n\n- Loss: 1.7488420009613037\n- Rouge1: 38.1491\n- Rouge2: 18.6257\n- RougeL: 26.8448\n- RougeLsum: 32.2433\n- Gen Len: 49.0",
"## Usage\n\nYou can use cURL to access this model:"
] |
[
"TAGS\n#transformers #pytorch #pegasus #text2text-generation #autonlp #unk #dataset-Tarang1998/autonlp-data-pegasus #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Trained Using AutoNLP\n\n- Problem type: Summarization\n- Model ID: 21664560\n- CO2 Emissions (in grams): 5.680803958729511",
"## Validation Metrics\n\n- Loss: 1.7488420009613037\n- Rouge1: 38.1491\n- Rouge2: 18.6257\n- RougeL: 26.8448\n- RougeLsum: 32.2433\n- Gen Len: 49.0",
"## Usage\n\nYou can use cURL to access this model:"
] |
text-classification
|
transformers
|
# Model Card for RuBERT for Sentiment Analysis
# Model Details
## Model Description
Russian texts sentiment classification.
- **Developed by:** Tatyana Voloshina
- **Shared by [Optional]:** Tatyana Voloshina
- **Model type:** Text Classification
- **Language(s) (NLP):** More information needed
- **License:** More information needed
- **Parent Model:** BERT
- **Resources for more information:**
- [GitHub Repo](https://github.com/T-Sh/Sentiment-Analysis)
# Uses
## Direct Use
This model can be used for the task of text classification.
## Downstream Use [Optional]
More information needed.
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
# Training Details
## Training Data
Model trained on [Tatyana/ru_sentiment_dataset](https://huggingface.co/datasets/Tatyana/ru_sentiment_dataset)
## Training Procedure
### Preprocessing
More information needed
### Speeds, Sizes, Times
More information needed
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
More information needed
### Factors
More information needed
### Metrics
More information needed
## Results
More information needed
# Model Examination
## Labels meaning
0: NEUTRAL
1: POSITIVE
2: NEGATIVE
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed
## Compute Infrastructure
More information needed
### Hardware
More information needed
### Software
More information needed.
# Citation
More information needed.
# Glossary [optional]
More information needed
# More Information [optional]
More information needed
# Model Card Authors [optional]
Tatyana Voloshina in collaboration with Ezi Ozoani and the Hugging Face team
# Model Card Contact
More information needed
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
The required PyTorch model checkpoint is provided on [Drive](https://drive.google.com/drive/folders/1EnJBq0dGfpjPxbVjybqaS7PsMaPHLUIl?usp=sharing).
Download model.pth.tar and place it in the folder next to the other files of the model.
```python
!pip install tensorflow-gpu
!pip install deeppavlov
!python -m deeppavlov install squad_bert
!pip install fasttext
!pip install transformers
!python -m deeppavlov install bert_sentence_embedder
from deeppavlov import build_model
model = build_model("path_to_model/rubert_sentiment.json")  # path_to_model is a placeholder for your local model directory
model(["Сегодня хорошая погода", "Я счастлив проводить с тобою время", "Мне нравится эта музыкальная композиция"])
```
</details>
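For users who prefer plain 🤗 Transformers over DeepPavlov, a hedged sketch is given below. It assumes the repository hosts a standard sequence-classification head whose label indices follow the mapping listed above; the example sentence means "The weather is nice today".

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("MonoHime/rubert-base-cased-sentiment-new")
model = AutoModelForSequenceClassification.from_pretrained("MonoHime/rubert-base-cased-sentiment-new")

labels = ["NEUTRAL", "POSITIVE", "NEGATIVE"]  # label order as documented in this card
inputs = tokenizer("Сегодня хорошая погода", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(labels[logits.argmax(dim=-1).item()])
```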
|
{"language": ["ru"], "tags": ["sentiment", "text-classification"], "datasets": ["Tatyana/ru_sentiment_dataset"]}
|
MonoHime/rubert-base-cased-sentiment-new
| null |
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"sentiment",
"ru",
"dataset:Tatyana/ru_sentiment_dataset",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1910.09700"
] |
[
"ru"
] |
TAGS
#transformers #pytorch #safetensors #bert #text-classification #sentiment #ru #dataset-Tatyana/ru_sentiment_dataset #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# Model Card for RuBERT for Sentiment Analysis
# Model Details
## Model Description
Russian texts sentiment classification.
- Developed by: Tatyana Voloshina
- Shared by [Optional]: Tatyana Voloshina
- Model type: Text Classification
- Language(s) (NLP): More information needed
- License: More information needed
- Parent Model: BERT
- Resources for more information:
- GitHub Repo
# Uses
## Direct Use
This model can be used for the task of text classification.
## Downstream Use [Optional]
More information needed.
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
# Training Details
## Training Data
Model trained on Tatyana/ru_sentiment_dataset
## Training Procedure
### Preprocessing
More information needed
### Speeds, Sizes, Times
More information needed
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
More information needed
### Factors
More information needed
### Metrics
More information needed
## Results
More information needed
# Model Examination
## Labels meaning
0: NEUTRAL
1: POSITIVE
2: NEGATIVE
# Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type: More information needed
- Hours used: More information needed
- Cloud Provider: More information needed
- Compute Region: More information needed
- Carbon Emitted: More information needed
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed
## Compute Infrastructure
More information needed
### Hardware
More information needed
### Software
More information needed.
More information needed.
# Glossary [optional]
More information needed
# More Information [optional]
More information needed
# Model Card Authors [optional]
Tatyana Voloshina in collaboration with Ezi Ozoani and the Hugging Face team
# Model Card Contact
More information needed
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
The required PyTorch model checkpoint is provided on Drive.
Download it and place URL in the folder next to the other files of the model.
</details>
|
[
"# Model Card for RuBERT for Sentiment Analysis",
"# Model Details",
"## Model Description\n \nRussian texts sentiment classification. \n \n- Developed by: Tatyana Voloshina\n- Shared by [Optional]: Tatyana Voloshina\n- Model type: Text Classification \n- Language(s) (NLP): More information needed\n- License: More information needed \n- Parent Model: BERT\n- Resources for more information:\n - GitHub Repo",
"# Uses",
"## Direct Use\nThis model can be used for the task of text classification.",
"## Downstream Use [Optional]\n \nMore information needed.",
"## Out-of-Scope Use\n \nThe model should not be used to intentionally create hostile or alienating environments for people.",
"# Bias, Risks, and Limitations\n \n \nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.",
"## Recommendations\n \n \nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"# Training Details",
"## Training Data\n \nModel trained on Tatyana/ru_sentiment_dataset",
"## Training Procedure",
"### Preprocessing\n \nMore information needed",
"### Speeds, Sizes, Times\nMore information needed",
"# Evaluation",
"## Testing Data, Factors & Metrics",
"### Testing Data\n \nMore information needed",
"### Factors\nMore information needed",
"### Metrics\n \nMore information needed",
"## Results \n \nMore information needed",
"# Model Examination",
"## Labels meaning\n 0: NEUTRAL\n 1: POSITIVE\n 2: NEGATIVE",
"# Environmental Impact\n \nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n \n- Hardware Type: More information needed\n- Hours used: More information needed\n- Cloud Provider: More information needed\n- Compute Region: More information needed\n- Carbon Emitted: More information needed",
"# Technical Specifications [optional]",
"## Model Architecture and Objective\n\nMore information needed",
"## Compute Infrastructure\n \nMore information needed",
"### Hardware\n \n \nMore information needed",
"### Software\n \nMore information needed.\n \nMore information needed.",
"# Glossary [optional]\nMore information needed",
"# More Information [optional]\nMore information needed",
"# Model Card Authors [optional]\n \nTatyana Voloshina in collaboration with Ezi Ozoani and the Hugging Face team",
"# Model Card Contact\n \nMore information needed",
"# How to Get Started with the Model\n \nUse the code below to get started with the model.\n \n<details>\n<summary> Click to expand </summary>\n\nNeeded pytorch trained model presented in Drive.\n\nLoad and place URL in folder next to another files of a model.\n\n\n</details>"
] |
[
"TAGS\n#transformers #pytorch #safetensors #bert #text-classification #sentiment #ru #dataset-Tatyana/ru_sentiment_dataset #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# Model Card for RuBERT for Sentiment Analysis",
"# Model Details",
"## Model Description\n \nRussian texts sentiment classification. \n \n- Developed by: Tatyana Voloshina\n- Shared by [Optional]: Tatyana Voloshina\n- Model type: Text Classification \n- Language(s) (NLP): More information needed\n- License: More information needed \n- Parent Model: BERT\n- Resources for more information:\n - GitHub Repo",
"# Uses",
"## Direct Use\nThis model can be used for the task of text classification.",
"## Downstream Use [Optional]\n \nMore information needed.",
"## Out-of-Scope Use\n \nThe model should not be used to intentionally create hostile or alienating environments for people.",
"# Bias, Risks, and Limitations\n \n \nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.",
"## Recommendations\n \n \nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"# Training Details",
"## Training Data\n \nModel trained on Tatyana/ru_sentiment_dataset",
"## Training Procedure",
"### Preprocessing\n \nMore information needed",
"### Speeds, Sizes, Times\nMore information needed",
"# Evaluation",
"## Testing Data, Factors & Metrics",
"### Testing Data\n \nMore information needed",
"### Factors\nMore information needed",
"### Metrics\n \nMore information needed",
"## Results \n \nMore information needed",
"# Model Examination",
"## Labels meaning\n 0: NEUTRAL\n 1: POSITIVE\n 2: NEGATIVE",
"# Environmental Impact\n \nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n \n- Hardware Type: More information needed\n- Hours used: More information needed\n- Cloud Provider: More information needed\n- Compute Region: More information needed\n- Carbon Emitted: More information needed",
"# Technical Specifications [optional]",
"## Model Architecture and Objective\n\nMore information needed",
"## Compute Infrastructure\n \nMore information needed",
"### Hardware\n \n \nMore information needed",
"### Software\n \nMore information needed.\n \nMore information needed.",
"# Glossary [optional]\nMore information needed",
"# More Information [optional]\nMore information needed",
"# Model Card Authors [optional]\n \nTatyana Voloshina in collaboration with Ezi Ozoani and the Hugging Face team",
"# Model Card Contact\n \nMore information needed",
"# How to Get Started with the Model\n \nUse the code below to get started with the model.\n \n<details>\n<summary> Click to expand </summary>\n\nNeeded pytorch trained model presented in Drive.\n\nLoad and place URL in folder next to another files of a model.\n\n\n</details>"
] |
text-classification
|
transformers
|
# Keras model with ruBERT conversational embedder for Sentiment Analysis
Russian texts sentiment classification.
Model trained on [Tatyana/ru_sentiment_dataset](https://huggingface.co/datasets/Tatyana/ru_sentiment_dataset)
## Labels meaning
0: NEUTRAL
1: POSITIVE
2: NEGATIVE
## How to use
```python
!pip install tensorflow-gpu
!pip install deeppavlov
!python -m deeppavlov install squad_bert
!pip install fasttext
!pip install transformers
!python -m deeppavlov install bert_sentence_embedder
from deeppavlov import build_model
model = build_model("Tatyana/rubert_conversational_cased_sentiment/custom_config.json")
model(["Сегодня хорошая погода", "Я счастлив проводить с тобою время", "Мне нравится эта музыкальная композиция"])
```
|
{"language": ["ru"], "tags": ["sentiment", "text-classification"], "datasets": ["Tatyana/ru_sentiment_dataset"]}
|
MonoHime/rubert_conversational_cased_sentiment
| null |
[
"transformers",
"pytorch",
"bert",
"sentiment",
"text-classification",
"ru",
"dataset:Tatyana/ru_sentiment_dataset",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ru"
] |
TAGS
#transformers #pytorch #bert #sentiment #text-classification #ru #dataset-Tatyana/ru_sentiment_dataset #endpoints_compatible #region-us
|
# Keras model with ruBERT conversational embedder for Sentiment Analysis
Russian texts sentiment classification.
Model trained on Tatyana/ru_sentiment_dataset
## Labels meaning
0: NEUTRAL
1: POSITIVE
2: NEGATIVE
## How to use
|
[
"# Keras model with ruBERT conversational embedder for Sentiment Analysis\nRussian texts sentiment classification.\n\nModel trained on Tatyana/ru_sentiment_dataset",
"## Labels meaning\n 0: NEUTRAL\n 1: POSITIVE\n 2: NEGATIVE",
"## How to use"
] |
[
"TAGS\n#transformers #pytorch #bert #sentiment #text-classification #ru #dataset-Tatyana/ru_sentiment_dataset #endpoints_compatible #region-us \n",
"# Keras model with ruBERT conversational embedder for Sentiment Analysis\nRussian texts sentiment classification.\n\nModel trained on Tatyana/ru_sentiment_dataset",
"## Labels meaning\n 0: NEUTRAL\n 1: POSITIVE\n 2: NEGATIVE",
"## How to use"
] |
image-classification
|
generic
|
## Example
The model is by no means a state-of-the-art model, but nevertheless
produces reasonable image captioning results. It was mainly fine-tuned
as a proof-of-concept for the 🤗 FlaxVisionEncoderDecoder Framework.
The model can be used as follows:
**In PyTorch**
```python
import torch
import requests
from PIL import Image
from transformers import ViTFeatureExtractor, AutoTokenizer, VisionEncoderDecoderModel
loc = "ydshieh/vit-gpt2-coco-en"
feature_extractor = ViTFeatureExtractor.from_pretrained(loc)
tokenizer = AutoTokenizer.from_pretrained(loc)
model = VisionEncoderDecoderModel.from_pretrained(loc)
model.eval()
def predict(image):
pixel_values = feature_extractor(images=image, return_tensors="pt").pixel_values
with torch.no_grad():
output_ids = model.generate(pixel_values, max_length=16, num_beams=4, return_dict_in_generate=True).sequences
preds = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
preds = [pred.strip() for pred in preds]
return preds
# We will verify our results on an image of cute cats
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
with Image.open(requests.get(url, stream=True).raw) as image:
preds = predict(image)
print(preds)
# should produce
# ['a cat laying on top of a couch next to another cat']
```
**In Flax**
```python
import jax
import requests
from PIL import Image
from transformers import ViTFeatureExtractor, AutoTokenizer, FlaxVisionEncoderDecoderModel
loc = "ydshieh/vit-gpt2-coco-en"
feature_extractor = ViTFeatureExtractor.from_pretrained(loc)
tokenizer = AutoTokenizer.from_pretrained(loc)
model = FlaxVisionEncoderDecoderModel.from_pretrained(loc)
gen_kwargs = {"max_length": 16, "num_beams": 4}
# This takes sometime when compiling the first time, but the subsequent inference will be much faster
@jax.jit
def generate(pixel_values):
output_ids = model.generate(pixel_values, **gen_kwargs).sequences
return output_ids
def predict(image):
pixel_values = feature_extractor(images=image, return_tensors="np").pixel_values
output_ids = generate(pixel_values)
preds = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
preds = [pred.strip() for pred in preds]
return preds
# We will verify our results on an image of cute cats
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
with Image.open(requests.get(url, stream=True).raw) as image:
preds = predict(image)
print(preds)
# should produce
# ['a cat laying on top of a couch next to another cat']
```
|
{"library_name": "generic", "tags": ["image-classification"]}
|
TeamAlerito/gti-coco-en
| null |
[
"generic",
"pytorch",
"tf",
"jax",
"tensorboard",
"vision-encoder-decoder",
"image-classification",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#generic #pytorch #tf #jax #tensorboard #vision-encoder-decoder #image-classification #region-us
|
## Example
The model is by no means a state-of-the-art model, but nevertheless
produces reasonable image captioning results. It was mainly fine-tuned
as a proof-of-concept for the FlaxVisionEncoderDecoder Framework.
The model can be used as follows:
In PyTorch
In Flax
|
[
"## Example\r\n\r\nThe model is by no means a state-of-the-art model, but nevertheless\r\nproduces reasonable image captioning results. It was mainly fine-tuned \r\nas a proof-of-concept for the FlaxVisionEncoderDecoder Framework.\r\n\r\nThe model can be used as follows:\r\n\r\nIn PyTorch\r\n\r\n\r\nIn Flax"
] |
[
"TAGS\n#generic #pytorch #tf #jax #tensorboard #vision-encoder-decoder #image-classification #region-us \n",
"## Example\r\n\r\nThe model is by no means a state-of-the-art model, but nevertheless\r\nproduces reasonable image captioning results. It was mainly fine-tuned \r\nas a proof-of-concept for the FlaxVisionEncoderDecoder Framework.\r\n\r\nThe model can be used as follows:\r\n\r\nIn PyTorch\r\n\r\n\r\nIn Flax"
] |
text-classification
|
transformers
|
The uploaded model is from epoch 4 with Matthews Correlation of 61.05
"best_metric": 0.4796141982078552,<br>
"best_model_checkpoint": "/content/output_dir/checkpoint-268",<br>
"epoch": 10.0,<br>
"global_step": 2680,<br>
"is_hyper_param_search": false,<br>
"is_local_process_zero": true,<br>
"is_world_process_zero": true,<br>
"max_steps": 2680,<br>
"num_train_epochs": 10,<br>
"total_flos": 7113018526540800.0,<br>
"trial_name": null,<br>
"trial_params": null<br>
<table class="table table-bordered table-hover table-condensed" style="width: 60%; overflow: auto">
<thead><tr><th title="Field #1">epoch</th>
<th title="Field #2">eval_loss</th>
<th title="Field #3">eval_matthews_correlation</th>
<th title="Field #4">eval_runtime</th>
<th title="Field #5">eval_samples_per_second</th>
<th title="Field #6">eval_steps_per_second</th>
<th title="Field #7">step</th>
<th title="Field #8">learning_rate</th>
<th title="Field #9">loss</th>
</tr></thead>
<tbody><tr>
<td align="left">1</td>
<td align="left">0.4796141982078552</td>
<td align="left">0.5351033849356494</td>
<td align="left">8.8067</td>
<td align="left">118.433</td>
<td align="left">14.875</td>
<td align="left">268</td>
<td align="left">0.000018067415730337083</td>
<td align="left">0.4913</td>
</tr>
<tr>
<td align="left">2</td>
<td align="left">0.5334435701370239</td>
<td align="left">0.5178799252679331</td>
<td align="left">8.9439</td>
<td align="left">116.616</td>
<td align="left">14.647</td>
<td align="left">536</td>
<td align="left">0.00001605992509363296</td>
<td align="left">0.2872</td>
</tr>
<tr>
<td align="left">3</td>
<td align="left">0.5544090270996094</td>
<td align="left">0.5649788851042796</td>
<td align="left">8.9467</td>
<td align="left">116.58</td>
<td align="left">14.642</td>
<td align="left">804</td>
<td align="left">0.000014052434456928841</td>
<td align="left">0.1777</td>
</tr>
<tr>
<td align="left">4</td>
<td align="left">0.5754779577255249</td>
<td align="left">0.6105374636148787</td>
<td align="left">8.8982</td>
<td align="left">117.215</td>
<td align="left">14.722</td>
<td align="left">1072</td>
<td align="left">0.000012044943820224718</td>
<td align="left">0.1263</td>
</tr>
<tr>
<td align="left">5</td>
<td align="left">0.7263916730880737</td>
<td align="left">0.5807606001872874</td>
<td align="left">8.9705</td>
<td align="left">116.27</td>
<td align="left">14.603</td>
<td align="left">1340</td>
<td align="left">0.000010037453183520601</td>
<td align="left">0.0905</td>
</tr>
<tr>
<td align="left">6</td>
<td align="left">0.8121512532234192</td>
<td align="left">0.5651092792103851</td>
<td align="left">8.9924</td>
<td align="left">115.987</td>
<td align="left">14.568</td>
<td align="left">1608</td>
<td align="left">0.00000802996254681648</td>
<td align="left">0.0692</td>
</tr>
<tr>
<td align="left">7</td>
<td align="left">0.941014289855957</td>
<td align="left">0.5632084517291658</td>
<td align="left">8.9583</td>
<td align="left">116.428</td>
<td align="left">14.623</td>
<td align="left">1876</td>
<td align="left">0.000006022471910112359</td>
<td align="left">0.0413</td>
</tr>
<tr>
<td align="left">8</td>
<td align="left">1.0095174312591553</td>
<td align="left">0.5856531698367675</td>
<td align="left">9.0029</td>
<td align="left">115.851</td>
<td align="left">14.551</td>
<td align="left">2144</td>
<td align="left">0.00000401498127340824</td>
<td align="left">0.0327</td>
</tr>
<tr>
<td align="left">9</td>
<td align="left">1.0425965785980225</td>
<td align="left">0.5941395545037332</td>
<td align="left">8.9217</td>
<td align="left">116.906</td>
<td align="left">14.683</td>
<td align="left">2412</td>
<td align="left">0.00000200749063670412</td>
<td align="left">0.0202</td>
</tr>
<tr>
<td align="left">10</td>
<td align="left">1.0782166719436646</td>
<td align="left">0.5956649094312695</td>
<td align="left">8.9472</td>
<td align="left">116.572</td>
<td align="left">14.641</td>
<td align="left">2680</td>
<td align="left">0</td>
<td align="left">0.0104</td>
</tr>
</tbody></table>
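The card does not include a usage example. Acceptability predictions can be obtained with a sketch like the following, under the assumption that the checkpoint exposes a standard sequence-classification head (the label-to-meaning mapping is not documented here, so the raw pipeline output is printed):

```python
from transformers import pipeline

cola_classifier = pipeline("text-classification", model="TehranNLP-org/bert-base-cased-avg-cola")
# CoLA is a binary acceptability task; labels are reported as-is because the card does not name them.
print(cola_classifier("The book was written by John."))
```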
|
{}
|
TehranNLP-org/bert-base-cased-avg-cola
| null |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us
|
The uploaded model is from epoch 4 with Matthews Correlation of 61.05
"best_metric": 0.4796141982078552,<br>
"best_model_checkpoint": "/content/output_dir/checkpoint-268",<br>
"epoch": 10.0,<br>
"global_step": 2680,<br>
"is_hyper_param_search": false,<br>
"is_local_process_zero": true,<br>
"is_world_process_zero": true,<br>
"max_steps": 2680,<br>
"num_train_epochs": 10,<br>
"total_flos": 7113018526540800.0,<br>
"trial_name": null,<br>
"trial_params": null<br>
<table class="table table-bordered table-hover table-condensed" style="width: 60%; overflow: auto">
<thead><tr><th title="Field #1">epoch</th>
<th title="Field #2">eval_loss</th>
<th title="Field #3">eval_matthews_correlation</th>
<th title="Field #4">eval_runtime</th>
<th title="Field #5">eval_samples_per_second</th>
<th title="Field #6">eval_steps_per_second</th>
<th title="Field #7">step</th>
<th title="Field #8">learning_rate</th>
<th title="Field #9">loss</th>
</tr></thead>
<tbody><tr>
<td align="left">1</td>
<td align="left">0.4796141982078552</td>
<td align="left">0.5351033849356494</td>
<td align="left">8.8067</td>
<td align="left">118.433</td>
<td align="left">14.875</td>
<td align="left">268</td>
<td align="left">0.000018067415730337083</td>
<td align="left">0.4913</td>
</tr>
<tr>
<td align="left">2</td>
<td align="left">0.5334435701370239</td>
<td align="left">0.5178799252679331</td>
<td align="left">8.9439</td>
<td align="left">116.616</td>
<td align="left">14.647</td>
<td align="left">536</td>
<td align="left">0.00001605992509363296</td>
<td align="left">0.2872</td>
</tr>
<tr>
<td align="left">3</td>
<td align="left">0.5544090270996094</td>
<td align="left">0.5649788851042796</td>
<td align="left">8.9467</td>
<td align="left">116.58</td>
<td align="left">14.642</td>
<td align="left">804</td>
<td align="left">0.000014052434456928841</td>
<td align="left">0.1777</td>
</tr>
<tr>
<td align="left">4</td>
<td align="left">0.5754779577255249</td>
<td align="left">0.6105374636148787</td>
<td align="left">8.8982</td>
<td align="left">117.215</td>
<td align="left">14.722</td>
<td align="left">1072</td>
<td align="left">0.000012044943820224718</td>
<td align="left">0.1263</td>
</tr>
<tr>
<td align="left">5</td>
<td align="left">0.7263916730880737</td>
<td align="left">0.5807606001872874</td>
<td align="left">8.9705</td>
<td align="left">116.27</td>
<td align="left">14.603</td>
<td align="left">1340</td>
<td align="left">0.000010037453183520601</td>
<td align="left">0.0905</td>
</tr>
<tr>
<td align="left">6</td>
<td align="left">0.8121512532234192</td>
<td align="left">0.5651092792103851</td>
<td align="left">8.9924</td>
<td align="left">115.987</td>
<td align="left">14.568</td>
<td align="left">1608</td>
<td align="left">0.00000802996254681648</td>
<td align="left">0.0692</td>
</tr>
<tr>
<td align="left">7</td>
<td align="left">0.941014289855957</td>
<td align="left">0.5632084517291658</td>
<td align="left">8.9583</td>
<td align="left">116.428</td>
<td align="left">14.623</td>
<td align="left">1876</td>
<td align="left">0.000006022471910112359</td>
<td align="left">0.0413</td>
</tr>
<tr>
<td align="left">8</td>
<td align="left">1.0095174312591553</td>
<td align="left">0.5856531698367675</td>
<td align="left">9.0029</td>
<td align="left">115.851</td>
<td align="left">14.551</td>
<td align="left">2144</td>
<td align="left">0.00000401498127340824</td>
<td align="left">0.0327</td>
</tr>
<tr>
<td align="left">9</td>
<td align="left">1.0425965785980225</td>
<td align="left">0.5941395545037332</td>
<td align="left">8.9217</td>
<td align="left">116.906</td>
<td align="left">14.683</td>
<td align="left">2412</td>
<td align="left">0.00000200749063670412</td>
<td align="left">0.0202</td>
</tr>
<tr>
<td align="left">10</td>
<td align="left">1.0782166719436646</td>
<td align="left">0.5956649094312695</td>
<td align="left">8.9472</td>
<td align="left">116.572</td>
<td align="left">14.641</td>
<td align="left">2680</td>
<td align="left">0</td>
<td align="left">0.0104</td>
</tr>
</tbody></table>
|
[] |
[
"TAGS\n#transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-classification
|
transformers
|
The uploaded model is from epoch 9 with Matthews Correlation of 66.77
"best_metric": 0.667660908939119,<br>
"best_model_checkpoint": "/content/output_dir/checkpoint-2412",<br>
"epoch": 10.0,<br>
"global_step": 2680,<br>
"is_hyper_param_search": false,<br>
"is_local_process_zero": true,<br>
"is_world_process_zero": true,<br>
"max_steps": 2680,<br>
"num_train_epochs": 10,<br>
"total_flos": 7189983634007040.0,<br>
"trial_name": null,<br>
"trial_params": null<br>
<table class="table table-bordered table-hover table-condensed">
<thead><tr><th title="Field #1">epoch</th>
<th title="Field #2">eval_loss</th>
<th title="Field #3">eval_matthews_correlation</th>
<th title="Field #4">eval_runtime</th>
<th title="Field #5">eval_samples_per_second</th>
<th title="Field #6">eval_steps_per_second</th>
<th title="Field #7">step</th>
<th title="Field #8">learning_rate</th>
<th title="Field #9">loss</th>
</tr></thead>
<tbody><tr>
<td align="right">1</td>
<td align="right">0.5115634202957153</td>
<td align="right">0.5385290213636863</td>
<td align="right">7.985</td>
<td align="right">130.62</td>
<td align="right">16.406</td>
<td align="right">268</td>
<td align="right">0.00009280492497114274</td>
<td align="right">0.4622</td>
</tr>
<tr>
<td align="right">2</td>
<td align="right">0.4201788902282715</td>
<td align="right">0.6035894895952164</td>
<td align="right">8.0283</td>
<td align="right">129.916</td>
<td align="right">16.317</td>
<td align="right">536</td>
<td align="right">0.00008249326664101577</td>
<td align="right">0.2823</td>
</tr>
<tr>
<td align="right">3</td>
<td align="right">0.580650806427002</td>
<td align="right">0.5574138665741355</td>
<td align="right">8.1314</td>
<td align="right">128.268</td>
<td align="right">16.11</td>
<td align="right">804</td>
<td align="right">0.00007218160831088881</td>
<td align="right">0.1804</td>
</tr>
<tr>
<td align="right">4</td>
<td align="right">0.4439031779766083</td>
<td align="right">0.6557697896854868</td>
<td align="right">8.1435</td>
<td align="right">128.078</td>
<td align="right">16.087</td>
<td align="right">1072</td>
<td align="right">0.00006186994998076183</td>
<td align="right">0.1357</td>
</tr>
<tr>
<td align="right">5</td>
<td align="right">0.5736830830574036</td>
<td align="right">0.6249925495853809</td>
<td align="right">8.0533</td>
<td align="right">129.512</td>
<td align="right">16.267</td>
<td align="right">1340</td>
<td align="right">0.00005155829165063486</td>
<td align="right">0.0913</td>
</tr>
<tr>
<td align="right">6</td>
<td align="right">0.7729296684265137</td>
<td align="right">0.6188970025554703</td>
<td align="right">8.081</td>
<td align="right">129.068</td>
<td align="right">16.211</td>
<td align="right">1608</td>
<td align="right">0.000041246633320507885</td>
<td align="right">0.065</td>
</tr>
<tr>
<td align="right">7</td>
<td align="right">0.7351673245429993</td>
<td align="right">0.6405767700619004</td>
<td align="right">8.1372</td>
<td align="right">128.176</td>
<td align="right">16.099</td>
<td align="right">1876</td>
<td align="right">0.00003093497499038092</td>
<td align="right">0.0433</td>
</tr>
<tr>
<td align="right">8</td>
<td align="right">0.7900031208992004</td>
<td align="right">0.6565021466238845</td>
<td align="right">8.1095</td>
<td align="right">128.615</td>
<td align="right">16.154</td>
<td align="right">2144</td>
<td align="right">0.000020623316660253942</td>
<td align="right">0.0199</td>
</tr>
<tr>
<td align="right">9</td>
<td align="right">0.8539554476737976</td>
<td align="right">0.667660908939119</td>
<td align="right">8.1204</td>
<td align="right">128.442</td>
<td align="right">16.132</td>
<td align="right">2412</td>
<td align="right">0.000010311658330126971</td>
<td align="right">0.0114</td>
</tr>
<tr>
<td align="right">10</td>
<td align="right">0.9261117577552795</td>
<td align="right">0.660301076782038</td>
<td align="right">8.0088</td>
<td align="right">130.231</td>
<td align="right">16.357</td>
<td align="right">2680</td>
<td align="right">0</td>
<td align="right">0.0066</td>
</tr>
</tbody></table>
|
{}
|
TehranNLP-org/electra-base-avg-cola
| null |
[
"transformers",
"pytorch",
"electra",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #electra #text-classification #autotrain_compatible #endpoints_compatible #region-us
|
The uploaded model is from epoch 9 with Matthews Correlation of 66.77
"best_metric": 0.667660908939119,<br>
"best_model_checkpoint": "/content/output_dir/checkpoint-2412",<br>
"epoch": 10.0,<br>
"global_step": 2680,<br>
"is_hyper_param_search": false,<br>
"is_local_process_zero": true,<br>
"is_world_process_zero": true,<br>
"max_steps": 2680,<br>
"num_train_epochs": 10,<br>
"total_flos": 7189983634007040.0,<br>
"trial_name": null,<br>
"trial_params": null<br>
<table class="table table-bordered table-hover table-condensed">
<thead><tr><th title="Field #1">epoch</th>
<th title="Field #2">eval_loss</th>
<th title="Field #3">eval_matthews_correlation</th>
<th title="Field #4">eval_runtime</th>
<th title="Field #5">eval_samples_per_second</th>
<th title="Field #6">eval_steps_per_second</th>
<th title="Field #7">step</th>
<th title="Field #8">learning_rate</th>
<th title="Field #9">loss</th>
</tr></thead>
<tbody><tr>
<td align="right">1</td>
<td align="right">0.5115634202957153</td>
<td align="right">0.5385290213636863</td>
<td align="right">7.985</td>
<td align="right">130.62</td>
<td align="right">16.406</td>
<td align="right">268</td>
<td align="right">0.00009280492497114274</td>
<td align="right">0.4622</td>
</tr>
<tr>
<td align="right">2</td>
<td align="right">0.4201788902282715</td>
<td align="right">0.6035894895952164</td>
<td align="right">8.0283</td>
<td align="right">129.916</td>
<td align="right">16.317</td>
<td align="right">536</td>
<td align="right">0.00008249326664101577</td>
<td align="right">0.2823</td>
</tr>
<tr>
<td align="right">3</td>
<td align="right">0.580650806427002</td>
<td align="right">0.5574138665741355</td>
<td align="right">8.1314</td>
<td align="right">128.268</td>
<td align="right">16.11</td>
<td align="right">804</td>
<td align="right">0.00007218160831088881</td>
<td align="right">0.1804</td>
</tr>
<tr>
<td align="right">4</td>
<td align="right">0.4439031779766083</td>
<td align="right">0.6557697896854868</td>
<td align="right">8.1435</td>
<td align="right">128.078</td>
<td align="right">16.087</td>
<td align="right">1072</td>
<td align="right">0.00006186994998076183</td>
<td align="right">0.1357</td>
</tr>
<tr>
<td align="right">5</td>
<td align="right">0.5736830830574036</td>
<td align="right">0.6249925495853809</td>
<td align="right">8.0533</td>
<td align="right">129.512</td>
<td align="right">16.267</td>
<td align="right">1340</td>
<td align="right">0.00005155829165063486</td>
<td align="right">0.0913</td>
</tr>
<tr>
<td align="right">6</td>
<td align="right">0.7729296684265137</td>
<td align="right">0.6188970025554703</td>
<td align="right">8.081</td>
<td align="right">129.068</td>
<td align="right">16.211</td>
<td align="right">1608</td>
<td align="right">0.000041246633320507885</td>
<td align="right">0.065</td>
</tr>
<tr>
<td align="right">7</td>
<td align="right">0.7351673245429993</td>
<td align="right">0.6405767700619004</td>
<td align="right">8.1372</td>
<td align="right">128.176</td>
<td align="right">16.099</td>
<td align="right">1876</td>
<td align="right">0.00003093497499038092</td>
<td align="right">0.0433</td>
</tr>
<tr>
<td align="right">8</td>
<td align="right">0.7900031208992004</td>
<td align="right">0.6565021466238845</td>
<td align="right">8.1095</td>
<td align="right">128.615</td>
<td align="right">16.154</td>
<td align="right">2144</td>
<td align="right">0.000020623316660253942</td>
<td align="right">0.0199</td>
</tr>
<tr>
<td align="right">9</td>
<td align="right">0.8539554476737976</td>
<td align="right">0.667660908939119</td>
<td align="right">8.1204</td>
<td align="right">128.442</td>
<td align="right">16.132</td>
<td align="right">2412</td>
<td align="right">0.000010311658330126971</td>
<td align="right">0.0114</td>
</tr>
<tr>
<td align="right">10</td>
<td align="right">0.9261117577552795</td>
<td align="right">0.660301076782038</td>
<td align="right">8.0088</td>
<td align="right">130.231</td>
<td align="right">16.357</td>
<td align="right">2680</td>
<td align="right">0</td>
<td align="right">0.0066</td>
</tr>
</tbody></table>
|
[] |
[
"TAGS\n#transformers #pytorch #electra #text-classification #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-classification
|
transformers
|
Product Review Sentiment Classification
1. Label0 - Negative
2. Label1 - Positive
Trained so far on 20,000 balanced positive and negative reviews.
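No usage snippet is provided; a minimal sketch with the 🤗 Transformers pipeline is shown below. It assumes the repository ships TensorFlow weights together with a compatible tokenizer (as the tags suggest); the example review is an arbitrary placeholder:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Tejas003/distillbert_base_uncased_amazon_review_sentiment_300",
    framework="tf",  # assumption: the repository provides TensorFlow weights
)
# Label0 = negative, Label1 = positive (per the mapping above)
print(classifier("Great product, works exactly as described."))
```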
|
{}
|
Tejas003/distillbert_base_uncased_amazon_review_sentiment_300
| null |
[
"transformers",
"tf",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #tf #distilbert #text-classification #autotrain_compatible #endpoints_compatible #region-us
|
Product Review Sentiment Classification
1. Label0 - Negative
2. Label1 - Positive
Trained so far on 20,000 balanced positive and negative reviews.
|
[] |
[
"TAGS\n#transformers #tf #distilbert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-53-Georgian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Georgian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ka", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("Temur/wav2vec2-Georgian-Daytona")
model = Wav2Vec2ForCTC.from_pretrained("Temur/wav2vec2-Georgian-Daytona")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Georgian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "ka", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("Temur/wav2vec2-Georgian-Daytona")
model = Wav2Vec2ForCTC.from_pretrained("Temur/wav2vec2-Georgian-Daytona")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'  # TODO: adapt this list to include all special characters you removed from the data
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 48.34 %
## Training
The Common Voice `train`, `validation`, and ... datasets were used for training as well as ... and ... # TODO: adapt to state all the datasets that were used for training.
The script used for training can be found [here](https://github.com/huggingface/transformers/blob/master/examples/research_projects/wav2vec2/FINE_TUNE_XLSR_WAV2VEC2.md)
|
{"language": "ka", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "Georgian WAV2VEC2 Daytona", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice ka", "type": "common_voice", "args": "ka"}, "metrics": [{"type": "wer", "value": 48.34, "name": "Test WER"}]}]}]}
|
Temur/wav2vec2-Georgian-Daytona
| null |
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"ka",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ka"
] |
TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #ka #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2Vec2-Large-XLSR-53-Georgian
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Georgian using the Common Voice dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the Georgian test data of Common Voice.
Test Result: 48.34 %
## Training
The Common Voice 'train', 'validation', and ... datasets were used for training as well as ... and ... # TODO: adapt to state all the datasets that were used for training.
The script used for training can be found here
|
[
"# Wav2Vec2-Large-XLSR-53-Georgian\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Georgian using the Common Voice dataset. \nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\n\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Georgian test data of Common Voice. \n\n\n\n\nTest Result: 48.34 %",
"## Training\n\nThe Common Voice 'train', 'validation', and ... datasets were used for training as well as ... and ... # TODO: adapt to state all the datasets that were used for training.\n\nThe script used for training can be found here"
] |
[
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #ka #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-XLSR-53-Georgian\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Georgian using the Common Voice dataset. \nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\n\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Georgian test data of Common Voice. \n\n\n\n\nTest Result: 48.34 %",
"## Training\n\nThe Common Voice 'train', 'validation', and ... datasets were used for training as well as ... and ... # TODO: adapt to state all the datasets that were used for training.\n\nThe script used for training can be found here"
] |
null | null |
# GFPGAN (CVPR 2021)
[**Paper**](https://arxiv.org/abs/2101.04061) **|** [**Project Page**](https://xinntao.github.io/projects/gfpgan)    [English](README.md) **|** [简体中文](README_CN.md)
GitHub: https://github.com/TencentARC/GFPGAN
GFPGAN is a blind face restoration algorithm towards real-world face images.
<a href="https://colab.research.google.com/drive/1sVsoBd9AjckIXThgtZhGrHRfFI6UUYOo"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="google colab logo"></a>
[Colab Demo](https://colab.research.google.com/drive/1sVsoBd9AjckIXThgtZhGrHRfFI6UUYOo)
### :book: GFP-GAN: Towards Real-World Blind Face Restoration with Generative Facial Prior
> [[Paper](https://arxiv.org/abs/2101.04061)]   [[Project Page](https://xinntao.github.io/projects/gfpgan)]   [Demo] <br>
> [Xintao Wang](https://xinntao.github.io/), [Yu Li](https://yu-li.github.io/), [Honglun Zhang](https://scholar.google.com/citations?hl=en&user=KjQLROoAAAAJ), [Ying Shan](https://scholar.google.com/citations?user=4oXBp9UAAAAJ&hl=en) <br>
> Applied Research Center (ARC), Tencent PCG
#### Abstract
Blind face restoration usually relies on facial priors, such as facial geometry prior or reference prior, to restore realistic and faithful details. However, very low-quality inputs cannot offer accurate geometric prior while high-quality references are inaccessible, limiting the applicability in real-world scenarios. In this work, we propose GFP-GAN that leverages **rich and diverse priors encapsulated in a pretrained face GAN** for blind face restoration. This Generative Facial Prior (GFP) is incorporated into the face restoration process via novel channel-split spatial feature transform layers, which allow our method to achieve a good balance of realness and fidelity. Thanks to the powerful generative facial prior and delicate designs, our GFP-GAN could jointly restore facial details and enhance colors with just a single forward pass, while GAN inversion methods require expensive image-specific optimization at inference. Extensive experiments show that our method achieves superior performance to prior art on both synthetic and real-world datasets.
#### BibTeX
@InProceedings{wang2021gfpgan,
author = {Xintao Wang and Yu Li and Honglun Zhang and Ying Shan},
title = {Towards Real-World Blind Face Restoration with Generative Facial Prior},
booktitle={The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2021}
}
<p align="center">
<img src="https://xinntao.github.io/projects/GFPGAN_src/gfpgan_teaser.jpg">
</p>
---
## :wrench: Dependencies and Installation
- Python >= 3.7 (Recommend to use [Anaconda](https://www.anaconda.com/download/#linux) or [Miniconda](https://docs.conda.io/en/latest/miniconda.html))
- [PyTorch >= 1.7](https://pytorch.org/)
- NVIDIA GPU + [CUDA](https://developer.nvidia.com/cuda-downloads)
### Installation
1. Clone repo
```bash
git clone https://github.com/xinntao/GFPGAN.git
cd GFPGAN
```
1. Install dependent packages
```bash
# Install basicsr - https://github.com/xinntao/BasicSR
# We use BasicSR for both training and inference
# Set BASICSR_EXT=True to compile the cuda extensions in the BasicSR - It may take several minutes to compile, please be patient
BASICSR_EXT=True pip install basicsr
# Install facexlib - https://github.com/xinntao/facexlib
# We use face detection and face restoration helper in the facexlib package
pip install facexlib
pip install -r requirements.txt
```
## :zap: Quick Inference
Download pre-trained models: [GFPGANv1.pth](https://github.com/TencentARC/GFPGAN/releases/download/v0.1.0/GFPGANv1.pth)
```bash
wget https://github.com/TencentARC/GFPGAN/releases/download/v0.1.0/GFPGANv1.pth -P experiments/pretrained_models
```
```bash
python inference_gfpgan_full.py --model_path experiments/pretrained_models/GFPGANv1.pth --test_path inputs/whole_imgs
# for aligned images
python inference_gfpgan_full.py --model_path experiments/pretrained_models/GFPGANv1.pth --test_path inputs/cropped_faces --aligned
```
## :computer: Training
We provide complete training codes for GFPGAN. <br>
You could improve it according to your own needs.
1. Dataset preparation: [FFHQ](https://github.com/NVlabs/ffhq-dataset)
1. Download pre-trained models and other data. Put them in the `experiments/pretrained_models` folder.
1. [Pretrained StyleGAN2 model: StyleGAN2_512_Cmul1_FFHQ_B12G4_scratch_800k.pth](https://github.com/TencentARC/GFPGAN/releases/download/v0.1.0/StyleGAN2_512_Cmul1_FFHQ_B12G4_scratch_800k.pth)
1. [Component locations of FFHQ: FFHQ_eye_mouth_landmarks_512.pth](https://github.com/TencentARC/GFPGAN/releases/download/v0.1.0/FFHQ_eye_mouth_landmarks_512.pth)
1. [A simple ArcFace model: arcface_resnet18.pth](https://github.com/TencentARC/GFPGAN/releases/download/v0.1.0/arcface_resnet18.pth)
1. Modify the configuration file `train_gfpgan_v1.yml` accordingly.
1. Training
> python -m torch.distributed.launch --nproc_per_node=4 --master_port=22021 train.py -opt train_gfpgan_v1.yml --launcher pytorch
## :scroll: License and Acknowledgement
GFPGAN is released under Apache License Version 2.0.
## :e-mail: Contact
If you have any question, please email `xintao.wang@outlook.com` or `xintaowang@tencent.com`.
|
{}
|
TencentARC/GFPGANv1
| null |
[
"arxiv:2101.04061",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2101.04061"
] |
[] |
TAGS
#arxiv-2101.04061 #region-us
|
# GFPGAN (CVPR 2021)
Paper | Project Page    English | 简体中文
GitHub: URL
GFPGAN is a blind face restoration algorithm towards real-world face images.
<a href="URL src="URL alt="google colab logo"></a>
Colab Demo
### :book: GFP-GAN: Towards Real-World Blind Face Restoration with Generative Facial Prior
> [Paper]   [Project Page]   [Demo] <br>
> Xintao Wang, Yu Li, Honglun Zhang, Ying Shan <br>
> Applied Research Center (ARC), Tencent PCG
#### Abstract
Blind face restoration usually relies on facial priors, such as facial geometry prior or reference prior, to restore realistic and faithful details. However, very low-quality inputs cannot offer accurate geometric prior while high-quality references are inaccessible, limiting the applicability in real-world scenarios. In this work, we propose GFP-GAN that leverages rich and diverse priors encapsulated in a pretrained face GAN for blind face restoration. This Generative Facial Prior (GFP) is incorporated into the face restoration process via novel channel-split spatial feature transform layers, which allow our method to achieve a good balance of realness and fidelity. Thanks to the powerful generative facial prior and delicate designs, our GFP-GAN could jointly restore facial details and enhance colors with just a single forward pass, while GAN inversion methods require expensive image-specific optimization at inference. Extensive experiments show that our method achieves superior performance to prior art on both synthetic and real-world datasets.
#### BibTeX
@InProceedings{wang2021gfpgan,
author = {Xintao Wang and Yu Li and Honglun Zhang and Ying Shan},
title = {Towards Real-World Blind Face Restoration with Generative Facial Prior},
booktitle={The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2021}
}
<p align="center">
<img src="URL
</p>
---
## :wrench: Dependencies and Installation
- Python >= 3.7 (Recommend to use Anaconda or Miniconda)
- PyTorch >= 1.7
- NVIDIA GPU + CUDA
### Installation
1. Clone repo
1. Install dependent packages
## :zap: Quick Inference
Download pre-trained models: URL
## :computer: Training
We provide complete training codes for GFPGAN. <br>
You could improve it according to your own needs.
1. Dataset preparation: FFHQ
1. Download pre-trained models and other data. Put them in the 'experiments/pretrained_models' folder.
1. Pretrained StyleGAN2 model: StyleGAN2_512_Cmul1_FFHQ_B12G4_scratch_800k.pth
1. Component locations of FFHQ: FFHQ_eye_mouth_landmarks_512.pth
1. A simple ArcFace model: arcface_resnet18.pth
1. Modify the configuration file 'train_gfpgan_v1.yml' accordingly.
1. Training
> python -m URL --nproc_per_node=4 --master_port=22021 URL -opt train_gfpgan_v1.yml --launcher pytorch
## :scroll: License and Acknowledgement
GFPGAN is released under Apache License Version 2.0.
## :e-mail: Contact
If you have any question, please email 'URL@URL' or 'xintaowang@URL'.
|
[
"# GFPGAN (CVPR 2021)\n\nPaper | Project Page    English | 简体中文\n\nGitHub: URL\n\nGFPGAN is a blind face restoration algorithm towards real-world face images.\n\n<a href=\"URL src=\"URL alt=\"google colab logo\"></a>\nColab Demo",
"### :book: GFP-GAN: Towards Real-World Blind Face Restoration with Generative Facial Prior\n> [Paper]   [Project Page]   [Demo] <br>\n> Xintao Wang, Yu Li, Honglun Zhang, Ying Shan <br>\n> Applied Research Center (ARC), Tencent PCG",
"#### Abstract\n\nBlind face restoration usually relies on facial priors, such as facial geometry prior or reference prior, to restore realistic and faithful details. However, very low-quality inputs cannot offer accurate geometric prior while high-quality references are inaccessible, limiting the applicability in real-world scenarios. In this work, we propose GFP-GAN that leverages rich and diverse priors encapsulated in a pretrained face GAN for blind face restoration. This Generative Facial Prior (GFP) is incorporated into the face restoration process via novel channel-split spatial feature transform layers, which allow our method to achieve a good balance of realness and fidelity. Thanks to the powerful generative facial prior and delicate designs, our GFP-GAN could jointly restore facial details and enhance colors with just a single forward pass, while GAN inversion methods require expensive image-specific optimization at inference. Extensive experiments show that our method achieves superior performance to prior art on both synthetic and real-world datasets.",
"#### BibTeX\n\n @InProceedings{wang2021gfpgan,\n author = {Xintao Wang and Yu Li and Honglun Zhang and Ying Shan},\n title = {Towards Real-World Blind Face Restoration with Generative Facial Prior},\n booktitle={The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},\n year = {2021}\n }\n\n<p align=\"center\">\n <img src=\"URL\n</p>\n\n---",
"## :wrench: Dependencies and Installation\n\n- Python >= 3.7 (Recommend to use Anaconda or Miniconda)\n- PyTorch >= 1.7\n- NVIDIA GPU + CUDA",
"### Installation\n\n1. Clone repo\n\n \n\n1. Install dependent packages",
"## :zap: Quick Inference\n\nDownload pre-trained models: URL",
"## :computer: Training\n\nWe provide complete training codes for GFPGAN. <br>\nYou could improve it according to your own needs.\n\n1. Dataset preparation: FFHQ\n\n1. Download pre-trained models and other data. Put them in the 'experiments/pretrained_models' folder.\n 1. Pretrained StyleGAN2 model: StyleGAN2_512_Cmul1_FFHQ_B12G4_scratch_800k.pth\n 1. Component locations of FFHQ: FFHQ_eye_mouth_landmarks_512.pth\n 1. A simple ArcFace model: arcface_resnet18.pth\n\n1. Modify the configuration file 'train_gfpgan_v1.yml' accordingly.\n\n1. Training\n\n> python -m URL --nproc_per_node=4 --master_port=22021 URL -opt train_gfpgan_v1.yml --launcher pytorch",
"## :scroll: License and Acknowledgement\n\nGFPGAN is realeased under Apache License Version 2.0.",
"## :e-mail: Contact\n\nIf you have any question, please email 'URL@URL' or 'xintaowang@URL'."
] |
[
"TAGS\n#arxiv-2101.04061 #region-us \n",
"# GFPGAN (CVPR 2021)\n\nPaper | Project Page    English | 简体中文\n\nGitHub: URL\n\nGFPGAN is a blind face restoration algorithm towards real-world face images.\n\n<a href=\"URL src=\"URL alt=\"google colab logo\"></a>\nColab Demo",
"### :book: GFP-GAN: Towards Real-World Blind Face Restoration with Generative Facial Prior\n> [Paper]   [Project Page]   [Demo] <br>\n> Xintao Wang, Yu Li, Honglun Zhang, Ying Shan <br>\n> Applied Research Center (ARC), Tencent PCG",
"#### Abstract\n\nBlind face restoration usually relies on facial priors, such as facial geometry prior or reference prior, to restore realistic and faithful details. However, very low-quality inputs cannot offer accurate geometric prior while high-quality references are inaccessible, limiting the applicability in real-world scenarios. In this work, we propose GFP-GAN that leverages rich and diverse priors encapsulated in a pretrained face GAN for blind face restoration. This Generative Facial Prior (GFP) is incorporated into the face restoration process via novel channel-split spatial feature transform layers, which allow our method to achieve a good balance of realness and fidelity. Thanks to the powerful generative facial prior and delicate designs, our GFP-GAN could jointly restore facial details and enhance colors with just a single forward pass, while GAN inversion methods require expensive image-specific optimization at inference. Extensive experiments show that our method achieves superior performance to prior art on both synthetic and real-world datasets.",
"#### BibTeX\n\n @InProceedings{wang2021gfpgan,\n author = {Xintao Wang and Yu Li and Honglun Zhang and Ying Shan},\n title = {Towards Real-World Blind Face Restoration with Generative Facial Prior},\n booktitle={The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},\n year = {2021}\n }\n\n<p align=\"center\">\n <img src=\"URL\n</p>\n\n---",
"## :wrench: Dependencies and Installation\n\n- Python >= 3.7 (Recommend to use Anaconda or Miniconda)\n- PyTorch >= 1.7\n- NVIDIA GPU + CUDA",
"### Installation\n\n1. Clone repo\n\n \n\n1. Install dependent packages",
"## :zap: Quick Inference\n\nDownload pre-trained models: URL",
"## :computer: Training\n\nWe provide complete training codes for GFPGAN. <br>\nYou could improve it according to your own needs.\n\n1. Dataset preparation: FFHQ\n\n1. Download pre-trained models and other data. Put them in the 'experiments/pretrained_models' folder.\n 1. Pretrained StyleGAN2 model: StyleGAN2_512_Cmul1_FFHQ_B12G4_scratch_800k.pth\n 1. Component locations of FFHQ: FFHQ_eye_mouth_landmarks_512.pth\n 1. A simple ArcFace model: arcface_resnet18.pth\n\n1. Modify the configuration file 'train_gfpgan_v1.yml' accordingly.\n\n1. Training\n\n> python -m URL --nproc_per_node=4 --master_port=22021 URL -opt train_gfpgan_v1.yml --launcher pytorch",
"## :scroll: License and Acknowledgement\n\nGFPGAN is realeased under Apache License Version 2.0.",
"## :e-mail: Contact\n\nIf you have any question, please email 'URL@URL' or 'xintaowang@URL'."
] |
text-generation
|
transformers
|
Note: **default code snippet above won't work** because we are using `AlbertTokenizer` with `GPT2LMHeadModel`, see [issue](https://github.com/huggingface/transformers/issues/4285).
## GPT2 124M Trained on Ukrainian Fiction
### Training details
Model was trained on a corpus of 4040 fiction books, 2.77 GiB in total.
Evaluation on [brown-uk](https://github.com/brown-uk/corpus) gives perplexity of 50.16.
### Example usage:
```python
from transformers import AlbertTokenizer, GPT2LMHeadModel
tokenizer = AlbertTokenizer.from_pretrained("Tereveni-AI/gpt2-124M-uk-fiction")
model = GPT2LMHeadModel.from_pretrained("Tereveni-AI/gpt2-124M-uk-fiction")
input_ids = tokenizer.encode("Но зла Юнона, суча дочка,", add_special_tokens=False, return_tensors='pt')
outputs = model.generate(
input_ids,
do_sample=True,
num_return_sequences=3,
max_length=50
)
for i, out in enumerate(outputs):
print("{}: {}".format(i, tokenizer.decode(out)))
```
Prints something like this:
```bash
0: Но зла Юнона, суча дочка, яка затьмарила всі її таємниці: І хто з'їсть її душу, той помре». І, не дочекавшись гніву богів, посунула в пітьму, щоб не бачити перед собою. Але, за
1: Но зла Юнона, суча дочка, і довела мене до божевілля. Але він не знав нічого. Після того як я його побачив, мені стало зле. Я втратив рівновагу. Але в мене не було часу на роздуми. Я вже втратив надію
2: Но зла Юнона, суча дочка, не нарікала нам! — раптом вигукнула Юнона. — Це ти, старий йолопе! — мовила вона, не перестаючи сміятись. — Хіба ти не знаєш, що мені подобається ходити з тобою?
```
|
{"language": "uk", "tags": ["text-generation"], "widget": [{"text": "\u041d\u043e \u0437\u043b\u0430 \u042e\u043d\u043e\u043d\u0430, \u0441\u0443\u0447\u0430 \u0434\u043e\u0447\u043a\u0430, "}]}
|
Tereveni-AI/gpt2-124M-uk-fiction
| null |
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"uk",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"uk"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #uk #endpoints_compatible #has_space #text-generation-inference #region-us
|
Note: default code snippet above won't work because we are using 'AlbertTokenizer' with 'GPT2LMHeadModel', see issue.
## GPT2 124M Trained on Ukrainian Fiction
### Training details
Model was trained on a corpus of 4040 fiction books, 2.77 GiB in total.
Evaluation on brown-uk gives perplexity of 50.16.
### Example usage:
Prints something like this:
|
[
"## GPT2 124M Trained on Ukranian Fiction",
"### Training details\n\nModel was trained on corpus of 4040 fiction books, 2.77 GiB in total.\nEvaluation on brown-uk gives perplexity of 50.16.",
"### Example usage:\n\n\nPrints something like this:"
] |
[
"TAGS\n#transformers #pytorch #jax #gpt2 #text-generation #uk #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"## GPT2 124M Trained on Ukranian Fiction",
"### Training details\n\nModel was trained on corpus of 4040 fiction books, 2.77 GiB in total.\nEvaluation on brown-uk gives perplexity of 50.16.",
"### Example usage:\n\n\nPrints something like this:"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-53-Tamil
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Tamil using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ta", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("Thanish/wav2vec2-large-xlsr-tamil")
model = Wav2Vec2ForCTC.from_pretrained("Thanish/wav2vec2-large-xlsr-tamil")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
\\tspeech_array, sampling_rate = torchaudio.load(batch["path"])
\\tbatch["speech"] = resampler(speech_array).squeeze().numpy()
\\treturn batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
\\tlogits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Tamil test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "ta", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("Thanish/wav2vec2-large-xlsr-tamil")
model = Wav2Vec2ForCTC.from_pretrained("Thanish/wav2vec2-large-xlsr-tamil")
model.to("cuda")
chars_to_ignore_regex = '[\\\\,\\\\?\\\\.\\\\!\\\\-\\\\;\\\\:\\\\"\\\\“]' # TODO: adapt this list to include all special characters you removed from the data
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
\\tbatch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
\\tspeech_array, sampling_rate = torchaudio.load(batch["path"])
\\tbatch["speech"] = resampler(speech_array).squeeze().numpy()
\\treturn batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
\\tinputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
\\twith torch.no_grad():
\\t\\tlogits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
\\tpred_ids = torch.argmax(logits, dim=-1)
\\tbatch["pred_strings"] = processor.batch_decode(pred_ids)
\\treturn batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 100.00 %
## Training
The Common Voice `train` and `validation` splits were used for training.
The script used for training can be found [here](https://colab.research.google.com/drive/1PC2SjxpcWMQ2qmRw21NbP38wtQQUa5os#scrollTo=YKBZdqqJG9Tv)
|
{"language": "ta", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "thanish wav2vec2-large-xlsr-tamil", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice ta", "type": "common_voice", "args": "ta"}, "metrics": [{"type": "wer", "value": 100.0, "name": "Test WER"}]}]}]}
|
Thanish/wav2vec2-large-xlsr-tamil
| null |
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"ta",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ta"
] |
TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #ta #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2Vec2-Large-XLSR-53-Tamil
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Tamil using the Common Voice dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the Tamil test data of Common Voice.
Test Result: 100.00 %
## Training
The Common Voice 'train', 'validation' were used for training
The script used for training can be found URL
|
[
"# Wav2Vec2-Large-XLSR-53-Tamil\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Tamil using the Common Voice dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\n\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Tamil test data of Common Voice.\n\n\n\n\nTest Result: 100.00 %",
"## Training\n\nThe Common Voice 'train', 'validation' were used for training \n\nThe script used for training can be found URL"
] |
[
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #ta #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-XLSR-53-Tamil\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Tamil using the Common Voice dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\n\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Tamil test data of Common Voice.\n\n\n\n\nTest Result: 100.00 %",
"## Training\n\nThe Common Voice 'train', 'validation' were used for training \n\nThe script used for training can be found URL"
] |
text-generation
|
transformers
|
This is an improved version of the Joshua bot
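If you just want to talk to the bot, the standard DialoGPT chat loop works; this is a sketch added for illustration, and the turn count and `max_length` are arbitrary choices rather than values documented by the author:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("ThatSkyFox/DialoGPT-medium-joshua")
model = AutoModelForCausalLM.from_pretrained("ThatSkyFox/DialoGPT-medium-joshua")

chat_history_ids = None
for step in range(3):  # three user turns, chosen arbitrarily
    new_ids = tokenizer.encode(input(">> You: ") + tokenizer.eos_token, return_tensors="pt")
    # append the new user turn to the running conversation
    bot_input_ids = new_ids if chat_history_ids is None else torch.cat([chat_history_ids, new_ids], dim=-1)
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    reply = tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)
    print("Joshua:", reply)
```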
|
{"tags": ["conversational"]}
|
ThatSkyFox/DialoGPT-medium-joshua
| null |
[
"transformers",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
This is an improved version of the Joshua bot
|
[] |
[
"TAGS\n#transformers #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation
|
transformers
|
#This is a chatbot trained on the transcript of the game "The World Ends with You"
|
{"tags": ["conversational"]}
|
ThatSkyFox/DialoGPT-small-joshua
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
#This is a chatbot trained on the transcript of the game "The World Ends with You"
|
[] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation
|
transformers
|
# Tifa DialoGPT Model
|
{"tags": ["conversational"]}
|
The-Programmer-With-Cool-Pens/TifaBotAIPackage
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Tifa DialoGPT Model
|
[
"# Tifa DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Tifa DialoGPT Model"
] |
text-generation
|
transformers
|
ruGPT3-small model, trained on some 2chan posts
|
{}
|
TheBakerCat/2chan_ruGPT3_small
| null |
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
ruGPT3-small model, trained on some 2chan posts
|
[] |
[
"TAGS\n#transformers #pytorch #jax #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation
|
transformers
|
#Joshua
|
{"tags": ["conversational"]}
|
TheCatsMoo/DialoGGPT-small-joshua
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
#Joshua
|
[] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation
|
transformers
|
# A Talking AI made with GPT2 trained with Harry Potter transcripts
## Currently working on Text to speech and speech recognition
## Likes to say "i'm not a wizard"
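As a quick way to try it out, the `conversational` pipeline in Transformers 4.x can wrap the checkpoint; this sketch and its example prompt are illustrative additions, not part of the original card:

```python
from transformers import pipeline, Conversation

chatbot = pipeline("conversational", model="TheDiamondKing/DialoGPT-small-harrypotter")

conversation = Conversation("Are you a wizard?")
conversation = chatbot(conversation)  # the pipeline appends the model's reply
print(conversation.generated_responses[-1])
```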
|
{"tags": ["conversational"]}
|
TheDiamondKing/DialoGPT-small-harrypotter
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# A Talking AI made with GPT2 trained with Harry Potter transcripts
## Currently working on Text to speech and speech recognition
## Likes to say "i'm not a wizard"
|
[
"# A Talking AI made with GPT2 trained with Harry Potter transcripts",
"## Currently working on Text to speech and speech recognition",
"## Likes to say \"i'm not a wizard\""
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# A Talking AI made with GPT2 trained with Harry Potter transcripts",
"## Currently working on Text to speech and speech recognition",
"## Likes to say \"i'm not a wizard\""
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-toxic
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1295
- Rouge1: 93.7659
- Rouge2: 3.6618
- Rougel: 93.7652
- Rougelsum: 93.7757
- Gen Len: 2.5481
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 0.1595 | 1.0 | 7979 | 0.1295 | 93.7659 | 3.6618 | 93.7652 | 93.7757 | 2.5481 |
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "model_index": [{"name": "t5-small-finetuned-toxic", "results": [{"task": {"name": "Sequence-to-sequence Language Modeling", "type": "text2text-generation"}, "metric": {"name": "Rouge1", "type": "rouge", "value": 93.7659}}]}]}
|
TheLongSentance/t5-small-finetuned-toxic
| null |
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
t5-small-finetuned-toxic
========================
This model is a fine-tuned version of t5-small on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1295
* Rouge1: 93.7659
* Rouge2: 3.6618
* Rougel: 93.7652
* Rougelsum: 93.7757
* Gen Len: 2.5481
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.9.1
* Pytorch 1.9.0+cu102
* Datasets 1.11.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.1\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.1\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3833
- Rouge1: 29.6452
- Rouge2: 8.6953
- Rougel: 23.4474
- Rougelsum: 23.4553
- Gen Len: 18.8037
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
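As a rough sketch of how these settings map onto the Trainer API (the training script itself is not included in this card, and `output_dir` is a placeholder):

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="t5-small-finetuned-xsum",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    fp16=True,  # "Native AMP" mixed precision
    # the Adam betas/epsilon listed above are the library defaults
)
```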
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.6051 | 1.0 | 102023 | 2.3833 | 29.6452 | 8.6953 | 23.4474 | 23.4553 | 18.8037 |
### Framework versions
- Transformers 4.9.0
- Pytorch 1.9.0+cu102
- Datasets 1.10.2
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["xsum"], "metrics": ["rouge"], "model_index": [{"name": "t5-small-finetuned-xsum", "results": [{"task": {"name": "Sequence-to-sequence Language Modeling", "type": "text2text-generation"}, "dataset": {"name": "xsum", "type": "xsum", "args": "default"}, "metric": {"name": "Rouge1", "type": "rouge", "value": 29.6452}}]}]}
|
TheLongSentance/t5-small-finetuned-xsum
| null |
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:xsum",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #dataset-xsum #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
t5-small-finetuned-xsum
=======================
This model is a fine-tuned version of t5-small on the xsum dataset.
It achieves the following results on the evaluation set:
* Loss: 2.3833
* Rouge1: 29.6452
* Rouge2: 8.6953
* Rougel: 23.4474
* Rougelsum: 23.4553
* Gen Len: 18.8037
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 2
* eval\_batch\_size: 2
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.9.0
* Pytorch 1.9.0+cu102
* Datasets 1.10.2
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.0\n* Pytorch 1.9.0+cu102\n* Datasets 1.10.2\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #dataset-xsum #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.0\n* Pytorch 1.9.0+cu102\n* Datasets 1.10.2\n* Tokenizers 0.10.3"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5_large_baseline
This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0010
- Rouge1: 99.8958
- Rouge2: 99.8696
- Rougel: 99.8958
- Rougelsum: 99.8958
- Gen Len: 46.715
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adafactor
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.9852 | 0.33 | 50 | 0.1098 | 55.1421 | 49.8248 | 54.4294 | 54.7377 | 19.0 |
| 0.1186 | 0.67 | 100 | 0.0176 | 58.0994 | 54.8973 | 57.7383 | 57.9538 | 19.0 |
| 0.0417 | 1.0 | 150 | 0.0057 | 58.3685 | 55.7353 | 58.279 | 58.2729 | 19.0 |
| 0.0225 | 1.33 | 200 | 0.0029 | 58.8981 | 56.2457 | 58.8202 | 58.7906 | 19.0 |
| 0.0131 | 1.67 | 250 | 0.0024 | 58.8439 | 56.2535 | 58.7557 | 58.7218 | 19.0 |
| 0.0112 | 2.0 | 300 | 0.0013 | 58.9538 | 56.4749 | 58.9322 | 58.8817 | 19.0 |
| 0.0077 | 2.33 | 350 | 0.0013 | 58.9538 | 56.4749 | 58.9322 | 58.8817 | 19.0 |
| 0.0043 | 2.67 | 400 | 0.0010 | 59.0124 | 56.5806 | 58.9867 | 58.9342 | 19.0 |
| 0.0052 | 3.0 | 450 | 0.0010 | 59.0402 | 56.6982 | 59.0385 | 58.986 | 19.0 |
### Framework versions
- Transformers 4.10.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.11.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "model_index": [{"name": "t5_large_baseline", "results": [{"task": {"name": "Summarization", "type": "summarization"}, "metric": {"name": "Rouge1", "type": "rouge", "value": 99.8958}}]}]}
|
TheLongSentance/t5_large_baseline
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #t5 #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
t5\_large\_baseline
===================
This model is a fine-tuned version of t5-large on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0010
* Rouge1: 99.8958
* Rouge2: 99.8696
* Rougel: 99.8958
* Rougelsum: 99.8958
* Gen Len: 46.715
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* optimizer: Adafactor
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.10.0.dev0
* Pytorch 1.9.0+cu111
* Datasets 1.11.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adafactor\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.10.0.dev0\n* Pytorch 1.9.0+cu111\n* Datasets 1.11.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adafactor\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.10.0.dev0\n* Pytorch 1.9.0+cu111\n* Datasets 1.11.0\n* Tokenizers 0.10.3"
] |
text-generation
|
transformers
|
# Harry DialoGPT Model
|
{"tags": ["conversational"]}
|
ThePeachOx/DialoGPT-small-harry
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Harry DialoGPT Model
|
[
"# Harry DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Harry DialoGPT Model"
] |
fill-mask
|
transformers
|
EconBERTa - RoBERTa further trained for 25k steps (T=512, batch_size = 256) on text sourced from economics books.
Example usage for MLM:
```python
from transformers import RobertaTokenizer, RobertaForMaskedLM
from transformers import pipeline
tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
model = RobertaForMaskedLM.from_pretrained('ThePixOne/EconBERTa').cpu()  # load the released checkpoint from the Hub rather than a local folder
model.eval()
mlm = pipeline('fill-mask', model = model, tokenizer = tokenizer)
test = "ECB - euro, FED - <mask>, BoJ - yen"
print(mlm(test)[:2])
[{'sequence': 'ECB - euro, FED - dollar, BoJ - yen',
'score': 0.7342271208763123,
'token': 1404,
'token_str': ' dollar'},
{'sequence': 'ECB - euro, FED - dollars, BoJ - yen',
'score': 0.10828445851802826,
'token': 1932,
'token_str': ' dollars'}]
```
|
{}
|
ThePixOne/EconBERTa
| null |
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us
|
EconBERTa - RoBERTa further trained for 25k steps (T=512, batch_size = 256) on text sourced from economics books.
Example usage for MLM:
|
[] |
[
"TAGS\n#transformers #pytorch #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask
|
transformers
|
BERT finetuned on wallstreetbets subreddit
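A minimal fill-mask sketch, assuming the uploaded checkpoint ships its tokenizer files and MLM head; the example sentence is invented:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="ThePixOne/retBERT")

# BERT-style checkpoints use the [MASK] token
for pred in fill_mask("I just bought more [MASK] on the dip.")[:3]:
    print(pred["token_str"], round(pred["score"], 3))
```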
|
{}
|
ThePixOne/retBERT
| null |
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us
|
BERT finetuned on wallstreetbets subreddit
|
[] |
[
"TAGS\n#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-generation
| null |
#Rick DialoGPT Model
|
{"tags": ["conversational"]}
|
TheReverendWes/DialoGPT-small-rick
| null |
[
"conversational",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#conversational #region-us
|
#Rick DialoGPT Model
|
[] |
[
"TAGS\n#conversational #region-us \n"
] |
text-generation
|
transformers
|
# Hermione Chat Bot
|
{"tags": ["conversational"]}
|
TheTUFGuy/HermioneChatBot
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Hermione Chat Bot
|
[
"# Hemione Chat Bot"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Hemione Chat Bot"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-twitter_sentiment
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6907
- Accuracy: 0.7132
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.8901 | 1.0 | 1387 | 0.8592 | 0.6249 |
| 0.8085 | 2.0 | 2774 | 0.7600 | 0.6822 |
| 0.7336 | 3.0 | 4161 | 0.7170 | 0.6915 |
| 0.6938 | 4.0 | 5548 | 0.7018 | 0.7016 |
| 0.6738 | 5.0 | 6935 | 0.6926 | 0.7067 |
| 0.6496 | 6.0 | 8322 | 0.6910 | 0.7088 |
| 0.6599 | 7.0 | 9709 | 0.6902 | 0.7088 |
| 0.631 | 8.0 | 11096 | 0.6910 | 0.7095 |
| 0.6327 | 9.0 | 12483 | 0.6925 | 0.7146 |
| 0.6305 | 10.0 | 13870 | 0.6907 | 0.7132 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
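### Inference example (sketch)

A minimal text-classification sketch, added for illustration; label names will show up as generic `LABEL_0`, `LABEL_1`, ... unless an `id2label` mapping was saved with the checkpoint, and the example tweet is invented:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Theivaprakasham/bert-base-cased-twitter_sentiment",
)
print(classifier("The new update is honestly great!"))
```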
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "bert-base-cased-twitter_sentiment", "results": []}]}
|
Theivaprakasham/bert-base-cased-twitter_sentiment
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
bert-base-cased-twitter\_sentiment
==================================
This model is a fine-tuned version of bert-base-cased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6907
* Accuracy: 0.7132
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-06
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 10
### Training results
### Framework versions
* Transformers 4.12.5
* Pytorch 1.10.0+cu111
* Datasets 1.16.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-06\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-06\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv2-finetuned-sroie
This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on the sroie dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0291
- Address Precision: 0.9341
- Address Recall: 0.9395
- Address F1: 0.9368
- Address Number: 347
- Company Precision: 0.9570
- Company Recall: 0.9625
- Company F1: 0.9598
- Company Number: 347
- Date Precision: 0.9885
- Date Recall: 0.9885
- Date F1: 0.9885
- Date Number: 347
- Total Precision: 0.9253
- Total Recall: 0.9280
- Total F1: 0.9266
- Total Number: 347
- Overall Precision: 0.9512
- Overall Recall: 0.9546
- Overall F1: 0.9529
- Overall Accuracy: 0.9961
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 3000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Address Precision | Address Recall | Address F1 | Address Number | Company Precision | Company Recall | Company F1 | Company Number | Date Precision | Date Recall | Date F1 | Date Number | Total Precision | Total Recall | Total F1 | Total Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:--------------:|:-----------------:|:--------------:|:----------:|:--------------:|:--------------:|:-----------:|:-------:|:-----------:|:---------------:|:------------:|:--------:|:------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| No log | 0.05 | 157 | 0.8162 | 0.3670 | 0.7233 | 0.4869 | 347 | 0.0617 | 0.0144 | 0.0234 | 347 | 0.0 | 0.0 | 0.0 | 347 | 0.0 | 0.0 | 0.0 | 347 | 0.3346 | 0.1844 | 0.2378 | 0.9342 |
| No log | 1.05 | 314 | 0.3490 | 0.8564 | 0.8934 | 0.8745 | 347 | 0.8610 | 0.9280 | 0.8932 | 347 | 0.7297 | 0.8559 | 0.7878 | 347 | 0.0 | 0.0 | 0.0 | 347 | 0.8128 | 0.6693 | 0.7341 | 0.9826 |
| No log | 2.05 | 471 | 0.1845 | 0.7970 | 0.9049 | 0.8475 | 347 | 0.9211 | 0.9424 | 0.9316 | 347 | 0.9885 | 0.9885 | 0.9885 | 347 | 0.0 | 0.0 | 0.0 | 347 | 0.8978 | 0.7089 | 0.7923 | 0.9835 |
| 0.7027 | 3.05 | 628 | 0.1194 | 0.9040 | 0.9222 | 0.9130 | 347 | 0.8880 | 0.9135 | 0.9006 | 347 | 0.9885 | 0.9885 | 0.9885 | 347 | 0.0 | 0.0 | 0.0 | 347 | 0.9263 | 0.7061 | 0.8013 | 0.9853 |
| 0.7027 | 4.05 | 785 | 0.0762 | 0.9397 | 0.9424 | 0.9410 | 347 | 0.8889 | 0.9222 | 0.9052 | 347 | 0.9885 | 0.9885 | 0.9885 | 347 | 0.7740 | 0.9078 | 0.8355 | 347 | 0.8926 | 0.9402 | 0.9158 | 0.9928 |
| 0.7027 | 5.05 | 942 | 0.0564 | 0.9282 | 0.9308 | 0.9295 | 347 | 0.9296 | 0.9510 | 0.9402 | 347 | 0.9885 | 0.9885 | 0.9885 | 347 | 0.7801 | 0.8588 | 0.8176 | 347 | 0.9036 | 0.9323 | 0.9177 | 0.9946 |
| 0.0935 | 6.05 | 1099 | 0.0548 | 0.9222 | 0.9222 | 0.9222 | 347 | 0.6975 | 0.7378 | 0.7171 | 347 | 0.9885 | 0.9885 | 0.9885 | 347 | 0.8608 | 0.8732 | 0.8670 | 347 | 0.8648 | 0.8804 | 0.8725 | 0.9921 |
| 0.0935 | 7.05 | 1256 | 0.0410 | 0.92 | 0.9280 | 0.9240 | 347 | 0.9486 | 0.9568 | 0.9527 | 347 | 0.9885 | 0.9885 | 0.9885 | 347 | 0.9091 | 0.9222 | 0.9156 | 347 | 0.9414 | 0.9488 | 0.9451 | 0.9961 |
| 0.0935 | 8.05 | 1413 | 0.0369 | 0.9368 | 0.9395 | 0.9381 | 347 | 0.9569 | 0.9597 | 0.9583 | 347 | 0.9772 | 0.9885 | 0.9828 | 347 | 0.9143 | 0.9222 | 0.9182 | 347 | 0.9463 | 0.9524 | 0.9494 | 0.9960 |
| 0.038 | 9.05 | 1570 | 0.0343 | 0.9282 | 0.9308 | 0.9295 | 347 | 0.9624 | 0.9597 | 0.9610 | 347 | 0.9885 | 0.9885 | 0.9885 | 347 | 0.9206 | 0.9020 | 0.9112 | 347 | 0.9500 | 0.9452 | 0.9476 | 0.9958 |
| 0.038 | 10.05 | 1727 | 0.0317 | 0.9395 | 0.9395 | 0.9395 | 347 | 0.9598 | 0.9625 | 0.9612 | 347 | 0.9885 | 0.9885 | 0.9885 | 347 | 0.9280 | 0.9280 | 0.9280 | 347 | 0.9539 | 0.9546 | 0.9543 | 0.9963 |
| 0.038 | 11.05 | 1884 | 0.0312 | 0.9368 | 0.9395 | 0.9381 | 347 | 0.9514 | 0.9597 | 0.9555 | 347 | 0.9885 | 0.9885 | 0.9885 | 347 | 0.9226 | 0.9280 | 0.9253 | 347 | 0.9498 | 0.9539 | 0.9518 | 0.9960 |
| 0.0236 | 12.05 | 2041 | 0.0318 | 0.9368 | 0.9395 | 0.9381 | 347 | 0.9570 | 0.9625 | 0.9598 | 347 | 0.9885 | 0.9885 | 0.9885 | 347 | 0.9043 | 0.8991 | 0.9017 | 347 | 0.9467 | 0.9474 | 0.9471 | 0.9956 |
| 0.0236 | 13.05 | 2198 | 0.0291 | 0.9337 | 0.9337 | 0.9337 | 347 | 0.9598 | 0.9625 | 0.9612 | 347 | 0.9885 | 0.9885 | 0.9885 | 347 | 0.9164 | 0.9164 | 0.9164 | 347 | 0.9496 | 0.9503 | 0.9499 | 0.9960 |
| 0.0236 | 14.05 | 2355 | 0.0300 | 0.9286 | 0.9366 | 0.9326 | 347 | 0.9459 | 0.9568 | 0.9513 | 347 | 0.9885 | 0.9885 | 0.9885 | 347 | 0.9275 | 0.9222 | 0.9249 | 347 | 0.9476 | 0.9510 | 0.9493 | 0.9959 |
| 0.0178 | 15.05 | 2512 | 0.0307 | 0.9366 | 0.9366 | 0.9366 | 347 | 0.9513 | 0.9568 | 0.9540 | 347 | 0.9885 | 0.9885 | 0.9885 | 347 | 0.9275 | 0.9222 | 0.9249 | 347 | 0.9510 | 0.9510 | 0.9510 | 0.9959 |
| 0.0178 | 16.05 | 2669 | 0.0300 | 0.9312 | 0.9366 | 0.9339 | 347 | 0.9543 | 0.9625 | 0.9584 | 347 | 0.9885 | 0.9885 | 0.9885 | 347 | 0.9171 | 0.9251 | 0.9211 | 347 | 0.9477 | 0.9532 | 0.9504 | 0.9959 |
| 0.0178 | 17.05 | 2826 | 0.0292 | 0.9368 | 0.9395 | 0.9381 | 347 | 0.9570 | 0.9625 | 0.9598 | 347 | 0.9885 | 0.9885 | 0.9885 | 347 | 0.9253 | 0.9280 | 0.9266 | 347 | 0.9519 | 0.9546 | 0.9532 | 0.9961 |
| 0.0178 | 18.05 | 2983 | 0.0291 | 0.9341 | 0.9395 | 0.9368 | 347 | 0.9570 | 0.9625 | 0.9598 | 347 | 0.9885 | 0.9885 | 0.9885 | 347 | 0.9253 | 0.9280 | 0.9266 | 347 | 0.9512 | 0.9546 | 0.9529 | 0.9961 |
| 0.0149 | 19.01 | 3000 | 0.0291 | 0.9341 | 0.9395 | 0.9368 | 347 | 0.9570 | 0.9625 | 0.9598 | 347 | 0.9885 | 0.9885 | 0.9885 | 347 | 0.9253 | 0.9280 | 0.9266 | 347 | 0.9512 | 0.9546 | 0.9529 | 0.9961 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.8.0+cu101
- Datasets 1.18.4.dev0
- Tokenizers 0.11.6
|
{"license": "cc-by-nc-sa-4.0", "tags": ["generated_from_trainer"], "datasets": ["sroie"], "model-index": [{"name": "layoutlmv2-finetuned-sroie", "results": []}]}
|
Theivaprakasham/layoutlmv2-finetuned-sroie
| null |
[
"transformers",
"pytorch",
"tensorboard",
"layoutlmv2",
"token-classification",
"generated_from_trainer",
"dataset:sroie",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #layoutlmv2 #token-classification #generated_from_trainer #dataset-sroie #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
layoutlmv2-finetuned-sroie
==========================
This model is a fine-tuned version of microsoft/layoutlmv2-base-uncased on the sroie dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0291
* Address Precision: 0.9341
* Address Recall: 0.9395
* Address F1: 0.9368
* Address Number: 347
* Company Precision: 0.9570
* Company Recall: 0.9625
* Company F1: 0.9598
* Company Number: 347
* Date Precision: 0.9885
* Date Recall: 0.9885
* Date F1: 0.9885
* Date Number: 347
* Total Precision: 0.9253
* Total Recall: 0.9280
* Total F1: 0.9266
* Total Number: 347
* Overall Precision: 0.9512
* Overall Recall: 0.9546
* Overall F1: 0.9529
* Overall Accuracy: 0.9961
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.1
* training\_steps: 3000
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.8.0+cu101
* Datasets 1.18.4.dev0
* Tokenizers 0.11.6
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* training\\_steps: 3000\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.8.0+cu101\n* Datasets 1.18.4.dev0\n* Tokenizers 0.11.6"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #layoutlmv2 #token-classification #generated_from_trainer #dataset-sroie #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* training\\_steps: 3000\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.8.0+cu101\n* Datasets 1.18.4.dev0\n* Tokenizers 0.11.6"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv2-finetuned-sroie_mod
This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on an unknown dataset.
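The card does not include a usage snippet. The following is a minimal, hedged sketch of how this checkpoint could be loaded for token classification with the `transformers` library; the processor choice is an assumption inferred from the base model, and LayoutLMv2 additionally requires `detectron2` (plus `pytesseract` for the processor's built-in OCR):

```python
from transformers import LayoutLMv2Processor, LayoutLMv2ForTokenClassification

# Assumption: the base model's processor is reused, since the card does not say otherwise.
processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased")
model = LayoutLMv2ForTokenClassification.from_pretrained(
    "Theivaprakasham/layoutlmv2-finetuned-sroie_mod"
)

# Typical inference (image would be a PIL.Image of a scanned receipt):
# encoding = processor(image, return_tensors="pt")
# predictions = model(**encoding).logits.argmax(-1)
```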
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 3000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.8.0+cu101
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"license": "cc-by-nc-sa-4.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "layoutlmv2-finetuned-sroie_mod", "results": []}]}
|
Theivaprakasham/layoutlmv2-finetuned-sroie_mod
| null |
[
"transformers",
"pytorch",
"tensorboard",
"layoutlmv2",
"token-classification",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #layoutlmv2 #token-classification #generated_from_trainer #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# layoutlmv2-finetuned-sroie_mod
This model is a fine-tuned version of microsoft/layoutlmv2-base-uncased on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 3000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.8.0+cu101
- Datasets 1.18.3
- Tokenizers 0.11.0
|
[
"# layoutlmv2-finetuned-sroie_mod\n\nThis model is a fine-tuned version of microsoft/layoutlmv2-base-uncased on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- training_steps: 3000\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.2\n- Pytorch 1.8.0+cu101\n- Datasets 1.18.3\n- Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #layoutlmv2 #token-classification #generated_from_trainer #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# layoutlmv2-finetuned-sroie_mod\n\nThis model is a fine-tuned version of microsoft/layoutlmv2-base-uncased on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- training_steps: 3000\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.2\n- Pytorch 1.8.0+cu101\n- Datasets 1.18.3\n- Tokenizers 0.11.0"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentence-transformers-msmarco-distilbert-base-tas-b-twitter_sentiment
This model is a fine-tuned version of [sentence-transformers/msmarco-distilbert-base-tas-b](https://huggingface.co/sentence-transformers/msmarco-distilbert-base-tas-b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6954
- Accuracy: 0.7146
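No usage example is given in this card; assuming the checkpoint is used as an ordinary sequence-classification model, a minimal sketch would be the following (the returned label names depend on this checkpoint's `id2label` configuration, which is not documented here):

```python
from transformers import pipeline

# Hedged sketch: the input tweet is illustrative and the label names
# come from the checkpoint's config, not from this card.
classifier = pipeline(
    "text-classification",
    model="Theivaprakasham/sentence-transformers-msmarco-distilbert-base-tas-b-twitter_sentiment",
)
print(classifier("Loving the new update!"))
```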
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.8892 | 1.0 | 1387 | 0.8472 | 0.6180 |
| 0.7965 | 2.0 | 2774 | 0.7797 | 0.6609 |
| 0.7459 | 3.0 | 4161 | 0.7326 | 0.6872 |
| 0.7096 | 4.0 | 5548 | 0.7133 | 0.6995 |
| 0.6853 | 5.0 | 6935 | 0.6998 | 0.7002 |
| 0.6561 | 6.0 | 8322 | 0.6949 | 0.7059 |
| 0.663 | 7.0 | 9709 | 0.6956 | 0.7077 |
| 0.6352 | 8.0 | 11096 | 0.6890 | 0.7164 |
| 0.6205 | 9.0 | 12483 | 0.6888 | 0.7117 |
| 0.6203 | 10.0 | 13870 | 0.6871 | 0.7121 |
| 0.6005 | 11.0 | 15257 | 0.6879 | 0.7171 |
| 0.5985 | 12.0 | 16644 | 0.6870 | 0.7139 |
| 0.5839 | 13.0 | 18031 | 0.6882 | 0.7164 |
| 0.5861 | 14.0 | 19418 | 0.6910 | 0.7124 |
| 0.5732 | 15.0 | 20805 | 0.6916 | 0.7153 |
| 0.5797 | 16.0 | 22192 | 0.6947 | 0.7110 |
| 0.5565 | 17.0 | 23579 | 0.6930 | 0.7175 |
| 0.5636 | 18.0 | 24966 | 0.6959 | 0.7106 |
| 0.5642 | 19.0 | 26353 | 0.6952 | 0.7132 |
| 0.5717 | 20.0 | 27740 | 0.6954 | 0.7146 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "sentence-transformers-msmarco-distilbert-base-tas-b-twitter_sentiment", "results": []}]}
|
Theivaprakasham/sentence-transformers-msmarco-distilbert-base-tas-b-twitter_sentiment
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
sentence-transformers-msmarco-distilbert-base-tas-b-twitter\_sentiment
======================================================================
This model is a fine-tuned version of sentence-transformers/msmarco-distilbert-base-tas-b on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6954
* Accuracy: 0.7146
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-06
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 20
### Training results
### Framework versions
* Transformers 4.13.0.dev0
* Pytorch 1.10.0+cu111
* Datasets 1.16.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-06\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-06\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4475
- Wer: 0.3400
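For illustration only, a hedged transcription sketch is shown below; `sample.wav` is a placeholder path, and 16 kHz mono audio is assumed to match the wav2vec2-base pretraining setup:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Theivaprakasham/wav2vec2-base-timit-demo-colab",
)
print(asr("sample.wav")["text"])  # placeholder file; decoding audio files requires ffmpeg
```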
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.6929 | 4.0 | 500 | 2.4485 | 1.0009 |
| 0.9441 | 8.0 | 1000 | 0.4848 | 0.4758 |
| 0.3016 | 12.0 | 1500 | 0.4464 | 0.4016 |
| 0.1715 | 16.0 | 2000 | 0.4666 | 0.3765 |
| 0.1277 | 20.0 | 2500 | 0.4340 | 0.3515 |
| 0.1082 | 24.0 | 3000 | 0.4544 | 0.3495 |
| 0.0819 | 28.0 | 3500 | 0.4475 | 0.3400 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-base-timit-demo-colab", "results": []}]}
|
Theivaprakasham/wav2vec2-base-timit-demo-colab
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
|
wav2vec2-base-timit-demo-colab
==============================
This model is a fine-tuned version of facebook/wav2vec2-base on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4475
* Wer: 0.3400
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 32
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1000
* num\_epochs: 30
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.10.0+cu111
* Datasets 1.13.3
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] |
text-generation
|
transformers
|
# Stewie DialoGPT Model
|
{"tags": ["conversational"]}
|
Thejas/DialoGPT-small-Stewei
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Stewie DialoGPT Model
|
[] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation
|
transformers
|
# Elon Musk DialoGPT Model
|
{"tags": ["conversational"]}
|
Thejas/DialoGPT-small-elon
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Elon Musk DialoGPT Model
|
[] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
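Although the card gives no usage snippet, a minimal extractive-QA sketch would look like the following (the question/context strings are illustrative only):

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="Thitaree/distilbert-base-uncased-finetuned-squad",
)
print(qa(
    question="Who wrote Hamlet?",
    context="Hamlet is a tragedy written by William Shakespeare.",
))
```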
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.10.0
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"]}
|
Thitaree/distilbert-base-uncased-finetuned-squad
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of distilbert-base-uncased on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.10.0
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
[
"# distilbert-base-uncased-finetuned-squad\n\nThis model is a fine-tuned version of distilbert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Framework versions\n\n- Transformers 4.10.0\n- Pytorch 1.9.0+cu102\n- Datasets 1.11.0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"# distilbert-base-uncased-finetuned-squad\n\nThis model is a fine-tuned version of distilbert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Framework versions\n\n- Transformers 4.10.0\n- Pytorch 1.9.0+cu102\n- Datasets 1.11.0\n- Tokenizers 0.10.3"
] |
text2text-generation
|
transformers
|
# t5-qa_squad2neg-en
## Model description
This model is a *Question Answering* model based on T5-small.
It is a component of the [QuestEval](https://github.com/ThomasScialom/QuestEval) metric, but it can also be used on its own as a standalone QA model.
## How to use
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("ThomasNLG/t5-qa_squad2neg-en")
model = T5ForConditionalGeneration.from_pretrained("ThomasNLG/t5-qa_squad2neg-en")
```
You can play with the model using the inference API; the text input format should follow this template, which matches the format used during training:
`text_input = "{QUESTION} </s> {CONTEXT}"`
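For illustration, a hedged end-to-end continuation of the snippet above, reusing the widget example from this card's metadata as input (generation settings are left at their defaults):

```python
text_input = "Who was Louis 14? </s> Louis 14 was a French King."
inputs = tokenizer(text_input, return_tensors="pt")
output_ids = model.generate(**inputs)
# Decoded answer span, or the "unanswerable" label when the context does not answer the question.
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```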
## Training data
The model was trained on:
- SQuAD-v2
- SQuAD-v2 neg: in addition to the SQuAD-v2 training data, for each answerable example a negatively sampled example has been added with the label *unanswerable*, to help the model learn when a question cannot be answered from the given context. For more details, see the [paper](https://arxiv.org/abs/2103.12693).
### Citation info
```bibtex
@article{scialom2020QuestEval,
title={QuestEval: Summarization Asks for Fact-based Evaluation},
author={Scialom, Thomas and Dray, Paul-Alexis and Gallinari, Patrick and Lamprier, Sylvain and Piwowarski, Benjamin and Staiano, Jacopo and Wang, Alex},
journal={arXiv preprint arXiv:2103.12693},
year={2021}
}
```
|
{"language": "en", "license": "mit", "tags": ["qa", "question", "answering", "SQuAD", "metric", "nlg", "t5-small"], "datasets": ["squad_v2"], "widget": [{"text": "Who was Louis 14? </s> Louis 14 was a French King."}]}
|
ThomasNLG/t5-qa_squad2neg-en
| null |
[
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"qa",
"question",
"answering",
"SQuAD",
"metric",
"nlg",
"t5-small",
"en",
"dataset:squad_v2",
"arxiv:2103.12693",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2103.12693"
] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #t5 #text2text-generation #qa #question #answering #SQuAD #metric #nlg #t5-small #en #dataset-squad_v2 #arxiv-2103.12693 #license-mit #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# t5-qa_squad2neg-en
## Model description
This model is a *Question Answering* model based on T5-small.
It is a component of the QuestEval metric, but it can also be used on its own as a standalone QA model.
## How to use
You can play with the model using the inference API; the text input format should follow this template, which matches the format used during training:
'text_input = "{QUESTION} </s> {CONTEXT}"'
## Training data
The model was trained on:
- SQuAD-v2
- SQuAD-v2 neg: in addition to the SQuAD-v2 training data, for each answerable example a negatively sampled example has been added with the label *unanswerable*, to help the model learn when a question cannot be answered from the given context. For more details, see the paper.
### Citation info
|
[
"# t5-qa_squad2neg-en",
"## Model description\nThis model is a *Question Answering* model based on T5-small. \nIt is actually a component of QuestEval metric but can be used independently as it is, for QA only.",
"## How to use\n\n\nYou can play with the model using the inference API, the text input format should follow this template (accordingly to the training stage of the model):\n\n'text_input = \"{QUESTION} </s> {CONTEXT}\"'",
"## Training data\nThe model was trained on: \n- SQuAD-v2\n- SQuAD-v2 neg: in addition to the training data of SQuAD-v2, for each answerable example, a negative sampled example has been added with the label *unanswerable* to help the model learning when the question is not answerable given the context. For more details, see the paper.\n\n\ninfo"
] |
[
"TAGS\n#transformers #pytorch #jax #t5 #text2text-generation #qa #question #answering #SQuAD #metric #nlg #t5-small #en #dataset-squad_v2 #arxiv-2103.12693 #license-mit #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# t5-qa_squad2neg-en",
"## Model description\nThis model is a *Question Answering* model based on T5-small. \nIt is actually a component of QuestEval metric but can be used independently as it is, for QA only.",
"## How to use\n\n\nYou can play with the model using the inference API, the text input format should follow this template (accordingly to the training stage of the model):\n\n'text_input = \"{QUESTION} </s> {CONTEXT}\"'",
"## Training data\nThe model was trained on: \n- SQuAD-v2\n- SQuAD-v2 neg: in addition to the training data of SQuAD-v2, for each answerable example, a negative sampled example has been added with the label *unanswerable* to help the model learning when the question is not answerable given the context. For more details, see the paper.\n\n\ninfo"
] |