modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, UTC]: 2020-02-15 11:33:14 to 2025-08-30 18:26:50) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (530 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (55 classes) | createdAt (timestamp[us, UTC]: 2022-03-02 23:29:04 to 2025-08-30 18:26:48) | card (string, 11 to 1.01M chars)
---|---|---|---|---|---|---|---|---|---|
masakhane/byt5_en_yor_news
|
masakhane
| 2022-09-24T15:06:09Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"yor",
"license:afl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-10T12:13:58Z |
---
language:
- en
- yor
license: afl-3.0
---
|
masakhane/mt5_yor_en_news
|
masakhane
| 2022-09-24T15:06:08Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"yor",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-10T12:14:35Z |
---
language:
- yor
- en
license: afl-3.0
---
|
masakhane/m2m100_418M_en_yor_news
|
masakhane
| 2022-09-24T15:06:05Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"yor",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-10T12:19:30Z |
---
language:
- en
- yor
license: afl-3.0
---
|
masakhane/m2m100_418M_en_yor_rel_news_ft
|
masakhane
| 2022-09-24T15:06:03Z | 102 | 0 |
transformers
|
[
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"yor",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-10T12:20:48Z |
---
language:
- en
- yor
license: afl-3.0
---
|
masakhane/m2m100_418M_yor_en_rel_news_ft
|
masakhane
| 2022-09-24T15:06:03Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"yor",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-10T12:21:05Z |
---
language:
- yor
- en
license: afl-3.0
---
|
masakhane/m2m100_418M_yor_en_rel_ft
|
masakhane
| 2022-09-24T15:06:02Z | 104 | 1 |
transformers
|
[
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"yor",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-10T12:21:44Z |
---
language:
- yor
- en
license: afl-3.0
---
|
masakhane/m2m100_418M_yor_en_rel
|
masakhane
| 2022-09-24T15:06:02Z | 111 | 0 |
transformers
|
[
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"yor",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-10T12:22:03Z |
---
language:
- yor
- en
license: afl-3.0
---
|
masakhane/m2m100_418M_en_yor_rel
|
masakhane
| 2022-09-24T15:06:01Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"yor",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-10T12:22:17Z |
---
language:
- en
- yor
license: afl-3.0
---
|
masakhane/afrimt5_en_swa_news
|
masakhane
| 2022-09-24T15:06:01Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"en",
"swa",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-10T09:01:16Z |
---
language:
- en
- swa
license: afl-3.0
---
|
masakhane/afrimbart_swa_en_news
|
masakhane
| 2022-09-24T15:06:00Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"swa",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-10T10:09:23Z |
---
language:
- swa
- en
license: afl-3.0
---
|
masakhane/afribyt5_en_swa_news
|
masakhane
| 2022-09-24T15:05:59Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"swa",
"license:afl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-10T10:10:18Z |
---
language:
- en
- swa
license: afl-3.0
---
|
masakhane/afribyt5_swa_en_news
|
masakhane
| 2022-09-24T15:05:58Z | 111 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"swa",
"en",
"license:afl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-10T10:10:03Z |
---
language:
- swa
- en
license: afl-3.0
---
|
masakhane/byt5_swa_en_news
|
masakhane
| 2022-09-24T15:05:58Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"swa",
"en",
"license:afl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-10T10:10:57Z |
---
language:
- swa
- en
license: afl-3.0
---
|
masakhane/byt5_en_swa_news
|
masakhane
| 2022-09-24T15:05:57Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"swa",
"license:afl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-10T10:10:38Z |
---
language:
- en
- swa
license: afl-3.0
---
|
masakhane/mt5_swa_en_news
|
masakhane
| 2022-09-24T15:05:56Z | 116 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"swa",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-10T10:11:18Z |
---
language:
- swa
- en
license: afl-3.0
---
|
masakhane/mbart50_swa_en_news
|
masakhane
| 2022-09-24T15:05:55Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"swa",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-10T10:12:16Z |
---
language:
- swa
- en
license: afl-3.0
---
|
masakhane/m2m100_418M_en_swa_news
|
masakhane
| 2022-09-24T15:05:55Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"swa",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-10T10:12:36Z |
---
language:
- en
- swa
license: afl-3.0
---
|
masakhane/m2m100_418M_en_swa_rel_news
|
masakhane
| 2022-09-24T15:05:54Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"swa",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-10T10:13:30Z |
---
language:
- en
- swa
license: afl-3.0
---
|
masakhane/m2m100_418M_swa_en_news
|
masakhane
| 2022-09-24T15:05:53Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"swa",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-10T10:12:54Z |
---
language:
- swa
- en
license: afl-3.0
---
|
masakhane/m2m100_418M_en_swa_rel_news_ft
|
masakhane
| 2022-09-24T15:05:53Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"swa",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-10T10:13:52Z |
---
language:
- en
- swa
license: afl-3.0
---
|
masakhane/m2m100_418M_en_swa_rel_ft
|
masakhane
| 2022-09-24T15:05:52Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"swa",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-10T10:14:29Z |
---
language:
- en
- swa
license: afl-3.0
---
|
masakhane/m2m100_418M_swa_en_rel_news_ft
|
masakhane
| 2022-09-24T15:05:51Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"swa",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-10T10:14:10Z |
---
language:
- swa
- en
license: afl-3.0
---
|
masakhane/m2m100_418M_en_swa_rel
|
masakhane
| 2022-09-24T15:05:50Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"swa",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-10T10:15:23Z |
---
language:
- en
- swa
license: afl-3.0
---
|
masakhane/m2m100_418M_swa_en_rel
|
masakhane
| 2022-09-24T15:05:50Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"swa",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-10T10:15:07Z |
---
language:
- swa
- en
license: afl-3.0
---
|
masakhane/afrimt5_tsn_en_news
|
masakhane
| 2022-09-24T15:05:49Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"tsn",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-10T13:49:06Z |
---
language:
- tsn
- en
license: afl-3.0
---
|
masakhane/mt5_en_tsn_news
|
masakhane
| 2022-09-24T15:05:45Z | 98 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"en",
"tsn",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-10T13:59:24Z |
---
language:
- en
- tsn
license: afl-3.0
---
|
masakhane/mt5_tsn_en_news
|
masakhane
| 2022-09-24T15:05:44Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"tsn",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-10T13:59:09Z |
---
language:
- tsn
- en
license: afl-3.0
---
|
masakhane/mbart50_tsn_en_news
|
masakhane
| 2022-09-24T15:05:44Z | 114 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"tsn",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-10T14:02:58Z |
---
language:
- tsn
- en
license: afl-3.0
---
|
masakhane/m2m100_418M_en_tsn_news
|
masakhane
| 2022-09-24T15:05:43Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"tsn",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-10T14:20:25Z |
---
language:
- en
- tsn
license: afl-3.0
---
|
masakhane/m2m100_418M_tsn_en_rel_news
|
masakhane
| 2022-09-24T15:05:42Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"tsn",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-10T14:23:28Z |
---
language:
- tsn
- en
license: afl-3.0
---
|
masakhane/m2m100_418M_tsn_en_rel_news_ft
|
masakhane
| 2022-09-24T15:05:41Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"tsn",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-10T14:33:32Z |
---
language:
- tsn
- en
license: afl-3.0
---
|
masakhane/m2m100_418M_en_tsn_rel_news_ft
|
masakhane
| 2022-09-24T15:05:41Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"tsn",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-10T14:33:05Z |
---
language:
- en
- tsn
license: afl-3.0
---
|
masakhane/m2m100_418M_en_tsn_rel_ft
|
masakhane
| 2022-09-24T15:05:40Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"tsn",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-10T14:32:08Z |
---
language:
- en
- tsn
license: afl-3.0
---
|
masakhane/m2m100_418M_tsn_en_rel
|
masakhane
| 2022-09-24T15:05:39Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"tsn",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-10T14:38:31Z |
---
language:
- tsn
- en
license: afl-3.0
---
|
masakhane/afrimt5_en_twi_news
|
masakhane
| 2022-09-24T15:05:38Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"en",
"twi",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-11T08:50:40Z |
---
language:
- en
- twi
license: afl-3.0
---
|
masakhane/afrimt5_twi_en_news
|
masakhane
| 2022-09-24T15:05:37Z | 113 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"twi",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-11T08:50:58Z |
---
language:
- twi
- en
license: afl-3.0
---
|
masakhane/byt5_en_twi_news
|
masakhane
| 2022-09-24T15:05:35Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"twi",
"license:afl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-11T09:02:29Z |
---
language:
- en
- twi
license: afl-3.0
---
|
masakhane/byt5_twi_en_news
|
masakhane
| 2022-09-24T15:05:34Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"twi",
"en",
"license:afl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-11T09:02:12Z |
---
language:
- twi
- en
license: afl-3.0
---
|
masakhane/mt5_twi_en_news
|
masakhane
| 2022-09-24T15:05:33Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"twi",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-11T09:05:43Z |
---
language:
- twi
- en
license: afl-3.0
---
|
masakhane/mt5_en_twi_news
|
masakhane
| 2022-09-24T15:05:32Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"en",
"twi",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-11T09:06:00Z |
---
language:
- en
- twi
license: afl-3.0
---
|
masakhane/m2m100_418M_en_twi_rel_news
|
masakhane
| 2022-09-24T15:05:30Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"twi",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-11T09:10:15Z |
---
language:
- en
- twi
license: afl-3.0
---
|
masakhane/m2m100_418M_en_twi_rel_news_ft
|
masakhane
| 2022-09-24T15:05:29Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"twi",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-11T09:12:51Z |
---
language:
- en
- twi
license: afl-3.0
---
|
masakhane/m2m100_418M_twi_en_rel_ft
|
masakhane
| 2022-09-24T15:05:27Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"twi",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-11T09:15:02Z |
---
language:
- twi
- en
license: afl-3.0
---
|
masakhane/m2m100_418M_twi_en_rel
|
masakhane
| 2022-09-24T15:05:26Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"twi",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-11T09:17:38Z |
---
language:
- twi
- en
license: afl-3.0
---
|
masakhane/afrimt5_en_zul_news
|
masakhane
| 2022-09-24T15:05:24Z | 84 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"en",
"zul",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-11T08:52:03Z |
---
language:
- en
- zul
license: afl-3.0
---
|
masakhane/afribyt5_en_zul_news
|
masakhane
| 2022-09-24T15:05:21Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"zul",
"license:afl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-11T08:57:13Z |
---
language:
- en
- zul
license: afl-3.0
---
|
masakhane/afribyt5_zul_en_news
|
masakhane
| 2022-09-24T15:05:21Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"zul",
"en",
"license:afl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-11T08:57:29Z |
---
language:
- zul
- en
license: afl-3.0
---
|
masakhane/byt5_en_zul_news
|
masakhane
| 2022-09-24T15:05:20Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"zul",
"license:afl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-11T09:02:52Z |
---
language:
- en
- zul
license: afl-3.0
---
|
masakhane/mbart50_zul_en_news
|
masakhane
| 2022-09-24T15:05:19Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"zul",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-11T09:04:09Z |
---
language:
- zul
- en
license: afl-3.0
---
|
masakhane/mbart50_en_zul_news
|
masakhane
| 2022-09-24T15:05:18Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"en",
"zul",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-11T09:04:24Z |
---
language:
- en
- zul
license: afl-3.0
---
|
masakhane/mt5_en_zul_news
|
masakhane
| 2022-09-24T15:05:17Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"en",
"zul",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-11T09:06:39Z |
---
language:
- en
- zul
license: afl-3.0
---
|
masakhane/m2m100_418M_zul_en_news
|
masakhane
| 2022-09-24T15:05:16Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"zul",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-11T09:07:50Z |
---
language:
- zul
- en
license: afl-3.0
---
|
masakhane/m2m100_418M_en_zul_rel_news
|
masakhane
| 2022-09-24T15:05:16Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"zul",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-11T09:09:23Z |
---
language:
- en
- zul
license: afl-3.0
---
|
masakhane/m2m100_418M_zul_en_rel_news
|
masakhane
| 2022-09-24T15:05:15Z | 112 | 0 |
transformers
|
[
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"zul",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-11T09:09:44Z |
---
language:
- zul
- en
license: afl-3.0
---
|
masakhane/m2m100_418M_en_zul_rel_news_ft
|
masakhane
| 2022-09-24T15:05:14Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"zul",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-11T09:13:58Z |
---
language:
- en
- zul
license: afl-3.0
---
|
masakhane/m2m100_418M_zul_en_rel_ft
|
masakhane
| 2022-09-24T15:05:14Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"zul",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-11T09:15:18Z |
---
language:
- zul
- en
license: afl-3.0
---
|
masakhane/m2m100_418M_en_zul_rel_ft
|
masakhane
| 2022-09-24T15:05:13Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"zul",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-11T09:15:36Z |
---
language:
- en
- zul
license: afl-3.0
---
|
masakhane/m2m100_418M_zul_en_rel
|
masakhane
| 2022-09-24T15:05:12Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"zul",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-11T09:18:45Z |
---
language:
- zul
- en
license: afl-3.0
---
|
masakhane/m2m100_418M_en_zul_rel
|
masakhane
| 2022-09-24T15:05:12Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"zul",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-11T09:18:27Z |
---
language:
- en
- zul
license: afl-3.0
---
|
masakhane/m2m100_418M_en_amh_rel
|
masakhane
| 2022-09-24T15:05:10Z | 112 | 0 |
transformers
|
[
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"amh",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-25T22:06:14Z |
---
language:
- en
- amh
license: cc-by-nc-4.0
---
|
masakhane/m2m100_418M_amh_en_rel
|
masakhane
| 2022-09-24T15:05:10Z | 118 | 0 |
transformers
|
[
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"amh",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-25T22:05:47Z |
---
language:
- amh
- en
license: cc-by-nc-4.0
---
|
masakhane/m2m100_418M_kin_en_rel
|
masakhane
| 2022-09-24T15:05:09Z | 111 | 0 |
transformers
|
[
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"kin",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-25T22:06:42Z |
---
language:
- kin
- en
license: cc-by-nc-4.0
---
|
masakhane/m2m100_418M_en_kin_rel
|
masakhane
| 2022-09-24T15:05:09Z | 113 | 0 |
transformers
|
[
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"kin",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-25T22:07:12Z |
---
language:
- en
- kin
license: cc-by-nc-4.0
---
|
masakhane/m2m100_418M_en_sna_rel
|
masakhane
| 2022-09-24T15:05:07Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"sna",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-25T22:08:45Z |
---
language:
- en
- sna
license: cc-by-nc-4.0
---
|
CShorten/CORD-19-Title-Abstracts
|
CShorten
| 2022-09-24T14:43:13Z | 6 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-09-11T19:30:32Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
dataset_tag: cord19
---
# CShorten/CORD-19-Title-Abstracts
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('CShorten/CORD-19-Title-Abstracts')
embeddings = model.encode(sentences)
print(embeddings)
```
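Because the model ends with a `Normalize()` module (see the architecture section below), the returned embeddings are unit-length, so semantic similarity between two texts reduces to a cosine (or plain dot) product. A minimal sketch with NumPy, using toy 4-d vectors in place of the model's real 384-d output:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 4-d stand-ins for the model's 384-d sentence embeddings.
emb_query = np.array([0.1, 0.3, 0.5, 0.2])
emb_doc_a = np.array([0.1, 0.3, 0.5, 0.2])    # identical text -> similarity 1.0
emb_doc_b = np.array([-0.5, 0.1, -0.2, 0.4])  # unrelated text -> low similarity

print(cosine_similarity(emb_query, emb_doc_a))  # 1.0
print(cosine_similarity(emb_query, emb_doc_b))
```

For semantic search over many documents, the same idea applies batched: stack the document embeddings into a matrix and take `matrix @ query`.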
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=CShorten/CORD-19-Title-Abstracts)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 5001 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
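The `WarmupLinear` scheduler above ramps the learning rate linearly from 0 up to `lr` over `warmup_steps`, then decays it linearly back to 0 over the remaining steps. A back-of-the-envelope sketch of that shape (an approximation for illustration; the exact implementation lives in `sentence_transformers`). Note that with a DataLoader of length 5001, one epoch, and 10,000 warmup steps, this particular run ends while still warming up, so the effective peak learning rate is only about half of 2e-5:

```python
def warmup_linear_lr(step, base_lr=2e-5, warmup_steps=10_000, total_steps=15_000):
    """WarmupLinear-style schedule: linear warmup to base_lr, then linear decay to 0."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * (total_steps - step) / (total_steps - warmup_steps)

print(warmup_linear_lr(0))        # 0.0
print(warmup_linear_lr(5_000))    # 1e-05  (halfway through warmup, roughly where this run ends)
print(warmup_linear_lr(10_000))   # 2e-05  (peak)
print(warmup_linear_lr(15_000))   # 0.0
```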
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
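The `Pooling` and `Normalize` stages above can be reproduced by hand: a mask-aware mean over the token embeddings, followed by L2 normalization. A sketch with NumPy, using toy dimensions in place of the real 384:

```python
import numpy as np

def mean_pool_and_normalize(token_embeddings, attention_mask):
    """Mask-aware mean pooling followed by L2 normalization,
    mirroring the Pooling(mean) + Normalize modules above."""
    mask = attention_mask[:, None].astype(token_embeddings.dtype)  # (seq, 1)
    summed = (token_embeddings * mask).sum(axis=0)
    counts = np.clip(mask.sum(axis=0), 1e-9, None)  # avoid division by zero
    mean = summed / counts
    return mean / np.linalg.norm(mean)

# Toy input: 3 tokens, 4 dims; the last token is padding and must be excluded.
tokens = np.array([[1.0, 0.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0, 0.0],
                   [9.0, 9.0, 9.0, 9.0]])  # padding row, masked out
mask = np.array([1, 1, 0])
vec = mean_pool_and_normalize(tokens, mask)
print(vec)                  # ~[0.707, 0.707, 0.0, 0.0]
print(np.linalg.norm(vec))  # 1.0
```

Masking matters: averaging over padding tokens would drag the sentence vector toward whatever garbage sits in the padded positions.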
## Citing & Authors
<!--- Describe where people can find more information -->
|
rosamondthalken/t5-small-sci-names
|
rosamondthalken
| 2022-09-24T14:39:00Z | 166 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-09-16T17:44:50Z |
# t5-small-sci-names
Biodiversity literature is dedicated to the identification, documentation, and categorization of plants, fungi, animals, and other living organisms. Correctly extracting the name of an organism within these documents involves finding the entire scientific name, including the genus, specific epithet, and author name. Extracting these names allows biologists to access documents about a species more comprehensively, and to track an organism's history of documentation, which includes biological changes and changes in how scientists describe them.
**t5-small-sci-names** uses advances in text-to-text generation to generate scientific names and authors from biodiversity literature. This model was trained on hand-labeled biodiversity texts, including labeled information about a mentioned organism's genus (abbreviated and expanded), specific epithet, and author. The model was trained to output 0 to N scientific names with specific prefixes (e.g. "genus = " or "epithet = ") and performs best with anywhere from 20 to 120 words of input.
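Downstream code needs a small parser for the prefixed output format described above. A minimal sketch; the exact separator between name fields is an assumption here for illustration, so adjust it to whatever the model actually emits:

```python
def parse_sci_name_output(text, sep=";"):
    """Parse 'key = value' pairs from the model's prefixed output.
    The ';' field separator is an assumption, not the documented format."""
    fields = {}
    for chunk in text.split(sep):
        if "=" in chunk:
            key, value = chunk.split("=", 1)
            fields[key.strip()] = value.strip()
    return fields

print(parse_sci_name_output("genus = Quercus ; epithet = alba ; author = L."))
# {'genus': 'Quercus', 'epithet': 'alba', 'author': 'L.'}
```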
You can also use the model in this tutorial for [scientific names generation](https://colab.research.google.com/drive/1GEpnCaMJYiPIhuZiDJ1X1pZsGtGSm8Ds?usp=sharing).
*Note that this model is still a work in progress. Any feedback is welcome.*
|
pere/pk-nb-t5x
|
pere
| 2022-09-24T14:38:59Z | 0 | 2 | null |
[
"region:us"
] | null | 2022-04-01T06:33:23Z |
Just a placeholder for a future model
|
sd-concepts-library/repeat
|
sd-concepts-library
| 2022-09-24T14:17:05Z | 0 | 2 | null |
[
"license:mit",
"region:us"
] | null | 2022-09-24T14:16:59Z |
---
license: mit
---
### REPEAT on Stable Diffusion
This is the `<repeat>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:




|
gokuls/BERT-tiny-emotion-intent
|
gokuls
| 2022-09-24T14:11:28Z | 268 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-09-24T14:01:37Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: BERT-tiny-emotion-intent
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.91
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT-tiny-emotion-intent
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3620
- Accuracy: 0.91
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 33
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.2603 | 1.0 | 1000 | 0.7766 | 0.7815 |
| 0.5919 | 2.0 | 2000 | 0.4117 | 0.884 |
| 0.367 | 3.0 | 3000 | 0.3188 | 0.8995 |
| 0.2848 | 4.0 | 4000 | 0.2928 | 0.8985 |
| 0.2395 | 5.0 | 5000 | 0.2906 | 0.898 |
| 0.2094 | 6.0 | 6000 | 0.2887 | 0.907 |
| 0.1884 | 7.0 | 7000 | 0.2831 | 0.9065 |
| 0.1603 | 8.0 | 8000 | 0.3044 | 0.9065 |
| 0.1519 | 9.0 | 9000 | 0.3124 | 0.9095 |
| 0.1291 | 10.0 | 10000 | 0.3256 | 0.9065 |
| 0.1179 | 11.0 | 11000 | 0.3651 | 0.9035 |
| 0.1091 | 12.0 | 12000 | 0.3620 | 0.91 |
| 0.0977 | 13.0 | 13000 | 0.3992 | 0.907 |
| 0.0914 | 14.0 | 14000 | 0.4285 | 0.908 |
| 0.0876 | 15.0 | 15000 | 0.4268 | 0.9055 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
sd-concepts-library/osaka-jyo
|
sd-concepts-library
| 2022-09-24T13:47:07Z | 0 | 1 | null |
[
"license:mit",
"region:us"
] | null | 2022-09-24T13:47:03Z |
---
license: mit
---
### osaka jyo on Stable Diffusion
This is the `<osaka-jyo>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:




|
gokuls/distilroberta-emotion-intent
|
gokuls
| 2022-09-24T13:36:17Z | 105 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-09-24T13:26:33Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: distilroberta-emotion-intent
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9435
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-emotion-intent
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1496
- Accuracy: 0.9435
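A minimal usage sketch (hedged — the example sentence and the printed label are illustrative; the actual label names come from the `emotion` dataset config):

```python
# Hedged sketch: run this checkpoint with the text-classification pipeline.
from transformers import pipeline

classifier = pipeline("text-classification", model="gokuls/distilroberta-emotion-intent")
preds = classifier("I'm thrilled the experiment finally worked!")
print(preds)
```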
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 33
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4501 | 1.0 | 1000 | 0.2432 | 0.924 |
| 0.1947 | 2.0 | 2000 | 0.1646 | 0.934 |
| 0.1497 | 3.0 | 3000 | 0.1382 | 0.9405 |
| 0.1316 | 4.0 | 4000 | 0.1496 | 0.9435 |
| 0.1145 | 5.0 | 5000 | 0.1684 | 0.9385 |
| 0.1 | 6.0 | 6000 | 0.2342 | 0.943 |
| 0.0828 | 7.0 | 7000 | 0.2807 | 0.939 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
din0s/t5-small-finetuned-en-to-it
|
din0s
| 2022-09-24T13:08:27Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:ccmatrix",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-09-24T12:08:13Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- ccmatrix
metrics:
- bleu
model-index:
- name: t5-small-finetuned-en-to-it
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: ccmatrix
type: ccmatrix
config: en-it
split: train[3000:15000]
args: en-it
metrics:
- name: Bleu
type: bleu
value: 7.3298
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-en-to-it
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the ccmatrix dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2698
- Bleu: 7.3298
- Gen Len: 62.3753
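A hedged usage sketch — the input sentence and `max_length` are illustrative, and the pipeline task name assumes the standard T5 translation interface:

```python
# Hedged sketch: translate English to Italian with this fine-tuned t5-small.
from transformers import pipeline

translator = pipeline("translation_en_to_it", model="din0s/t5-small-finetuned-en-to-it")
out = translator("The weather is beautiful today.", max_length=64)
print(out[0]["translation_text"])
```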
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 125 | 3.0010 | 2.7294 | 56.4513 |
| No log | 2.0 | 250 | 2.8999 | 2.3228 | 81.4993 |
| No log | 3.0 | 375 | 2.8281 | 2.3065 | 92.3353 |
| 3.3202 | 4.0 | 500 | 2.7722 | 2.5982 | 91.8093 |
| 3.3202 | 5.0 | 625 | 2.7254 | 2.9279 | 89.0907 |
| 3.3202 | 6.0 | 750 | 2.6839 | 3.0747 | 89.2827 |
| 3.3202 | 7.0 | 875 | 2.6470 | 3.207 | 87.948 |
| 3.0355 | 8.0 | 1000 | 2.6132 | 3.355 | 85.2487 |
| 3.0355 | 9.0 | 1125 | 2.5835 | 3.8401 | 80.578 |
| 3.0355 | 10.0 | 1250 | 2.5552 | 4.2905 | 75.818 |
| 3.0355 | 11.0 | 1375 | 2.5323 | 4.3866 | 75.2433 |
| 2.8903 | 12.0 | 1500 | 2.5079 | 4.5687 | 74.906 |
| 2.8903 | 13.0 | 1625 | 2.4881 | 4.7844 | 71.5773 |
| 2.8903 | 14.0 | 1750 | 2.4668 | 4.876 | 71.68 |
| 2.8903 | 15.0 | 1875 | 2.4485 | 5.1292 | 70.118 |
| 2.7891 | 16.0 | 2000 | 2.4322 | 5.3297 | 68.894 |
| 2.7891 | 17.0 | 2125 | 2.4161 | 5.555 | 68.2293 |
| 2.7891 | 18.0 | 2250 | 2.4010 | 5.7113 | 67.2907 |
| 2.7891 | 19.0 | 2375 | 2.3892 | 5.9105 | 66.6287 |
| 2.713 | 20.0 | 2500 | 2.3756 | 6.0057 | 66.112 |
| 2.713 | 21.0 | 2625 | 2.3643 | 6.3118 | 64.6193 |
| 2.713 | 22.0 | 2750 | 2.3533 | 6.476 | 64.31 |
| 2.713 | 23.0 | 2875 | 2.3432 | 6.7102 | 63.5467 |
| 2.6584 | 24.0 | 3000 | 2.3342 | 6.7604 | 63.6567 |
| 2.6584 | 25.0 | 3125 | 2.3253 | 6.8418 | 63.6573 |
| 2.6584 | 26.0 | 3250 | 2.3180 | 6.9165 | 63.5893 |
| 2.6584 | 27.0 | 3375 | 2.3120 | 7.0217 | 63.1033 |
| 2.616 | 28.0 | 3500 | 2.3056 | 6.9148 | 63.598 |
| 2.616 | 29.0 | 3625 | 2.2987 | 6.9961 | 63.6267 |
| 2.616 | 30.0 | 3750 | 2.2935 | 7.2238 | 62.8373 |
| 2.616 | 31.0 | 3875 | 2.2892 | 7.1906 | 62.7793 |
| 2.587 | 32.0 | 4000 | 2.2849 | 7.2052 | 63.126 |
| 2.587 | 33.0 | 4125 | 2.2815 | 7.3272 | 62.526 |
| 2.587 | 34.0 | 4250 | 2.2782 | 7.3603 | 62.4313 |
| 2.587 | 35.0 | 4375 | 2.2756 | 7.3072 | 62.6307 |
| 2.5673 | 36.0 | 4500 | 2.2737 | 7.3586 | 62.1633 |
| 2.5673 | 37.0 | 4625 | 2.2718 | 7.3485 | 62.358 |
| 2.5673 | 38.0 | 4750 | 2.2707 | 7.3406 | 62.298 |
| 2.5673 | 39.0 | 4875 | 2.2700 | 7.3233 | 62.42 |
| 2.5591 | 40.0 | 5000 | 2.2698 | 7.3298 | 62.3753 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1
- Datasets 2.5.1
- Tokenizers 0.11.0
|
RebekkaB/san_2409_1325
|
RebekkaB
| 2022-09-24T12:13:11Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-09-24T11:50:57Z |
---
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: san_2409_1325
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# san_2409_1325
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0992
- F1: 0.7727
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 0.91 | 5 | 1.9727 | 0.1939 |
| No log | 1.91 | 10 | 1.5642 | 0.3535 |
| No log | 2.91 | 15 | 1.2698 | 0.6818 |
| No log | 3.91 | 20 | 1.3642 | 0.6429 |
| No log | 4.91 | 25 | 1.3411 | 0.6818 |
| No log | 5.91 | 30 | 1.2627 | 0.6818 |
| No log | 6.91 | 35 | 1.1269 | 0.7727 |
| No log | 7.91 | 40 | 1.0719 | 0.7727 |
| No log | 8.91 | 45 | 1.0567 | 0.7727 |
| No log | 9.91 | 50 | 1.1256 | 0.7727 |
| No log | 10.91 | 55 | 0.7085 | 0.7727 |
| No log | 11.91 | 60 | 0.9290 | 0.7727 |
| No log | 12.91 | 65 | 1.0355 | 0.7727 |
| No log | 13.91 | 70 | 1.0866 | 0.7727 |
| No log | 14.91 | 75 | 1.0992 | 0.7727 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
sd-concepts-library/dr-strange
|
sd-concepts-library
| 2022-09-24T12:11:20Z | 0 | 28 | null |
[
"license:mit",
"region:us"
] | null | 2022-09-24T12:11:16Z |
---
license: mit
---
### `<dr-strange>` on Stable Diffusion
This is the `<dr-strange>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:




|
ckiplab/gpt2-tiny-chinese
|
ckiplab
| 2022-09-24T11:53:54Z | 133 | 5 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"lm-head",
"zh",
"license:gpl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-09-24T11:49:21Z |
---
language:
- zh
thumbnail: https://ckip.iis.sinica.edu.tw/files/ckip_logo.png
tags:
- pytorch
- lm-head
- gpt2
- zh
license: gpl-3.0
---
# CKIP GPT2 Tiny Chinese
This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).
這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。
## Homepage
- https://github.com/ckiplab/ckip-transformers
## Contributors
- [Mu Yang](https://muyang.pro) at [CKIP](https://ckip.iis.sinica.edu.tw) (Author & Maintainer)
## Usage
Please use `BertTokenizerFast` as the tokenizer instead of `AutoTokenizer`.
請使用 BertTokenizerFast 而非 AutoTokenizer。
```python
from transformers import (
BertTokenizerFast,
AutoModel,
)
tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese')
model = AutoModel.from_pretrained('ckiplab/gpt2-tiny-chinese')
```
For full usage and more information, please refer to https://github.com/ckiplab/ckip-transformers.
有關完整使用方法及其他資訊,請參見 https://github.com/ckiplab/ckip-transformers 。
|
sd-concepts-library/conway-pirate
|
sd-concepts-library
| 2022-09-24T10:44:50Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2022-09-24T10:44:44Z |
---
license: mit
---
### Conway Pirate on Stable Diffusion
This is the `<conway>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:





|
sd-concepts-library/yesdelete
|
sd-concepts-library
| 2022-09-24T09:46:05Z | 0 | 1 | null |
[
"license:mit",
"region:us"
] | null | 2022-09-24T09:46:01Z |
---
license: mit
---
### yesdelete on Stable Diffusion
This is the `<yesdelete>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:




|
sd-concepts-library/coop-himmelblau
|
sd-concepts-library
| 2022-09-24T09:06:36Z | 0 | 6 | null |
[
"license:mit",
"region:us"
] | null | 2022-09-24T09:06:32Z |
---
license: mit
---
### coop himmelblau on Stable Diffusion
This is the `<coop himmelblau>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:






|
aniketface/DialoGPT-product
|
aniketface
| 2022-09-24T09:05:12Z | 121 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"convAI",
"conversational",
"facebook",
"en",
"dataset:blended_skill_talk",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-09-24T08:41:37Z |
---
language:
- en
thumbnail:
tags:
- convAI
- conversational
- facebook
license: apache-2.0
datasets:
- blended_skill_talk
metrics:
- perplexity
---
|
mlyuya/ddpm-butterflies-128
|
mlyuya
| 2022-09-24T09:02:29Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:huggan/smithsonian_butterflies_subset",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-09-24T07:27:49Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# Hedged sketch: sample one image from this checkpoint with the DDPMPipeline.
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("mlyuya/ddpm-butterflies-128")
image = pipeline().images[0]  # a PIL.Image
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- ema_power: None
- ema_max_decay: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/mlyuya/ddpm-butterflies-128/tensorboard?#scalars)
|
huggingtweets/it_airmass
|
huggingtweets
| 2022-09-24T06:49:38Z | 111 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-09-24T06:49:12Z |
---
language: en
thumbnail: http://www.huggingtweets.com/it_airmass/1664002173554/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1529248676647944193/-N1UKgKg_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Airmass</div>
<div style="text-align: center; font-size: 14px;">@it_airmass</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Airmass.
| Data | Airmass |
| --- | --- |
| Tweets downloaded | 3249 |
| Retweets | 126 |
| Short tweets | 370 |
| Tweets kept | 2753 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2f99nys0/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @it_airmass's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/nvbqf9p2) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/nvbqf9p2/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/it_airmass')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
ckiplab/bert-base-chinese-qa
|
ckiplab
| 2022-09-24T05:25:07Z | 162 | 7 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"zh",
"license:gpl-3.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-09-24T05:17:36Z |
---
language:
- zh
thumbnail: https://ckip.iis.sinica.edu.tw/files/ckip_logo.png
tags:
- pytorch
- question-answering
- bert
- zh
license: gpl-3.0
---
# CKIP BERT Base Chinese
This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).
這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。
## Homepage
- https://github.com/ckiplab/ckip-transformers
## Contributors
- [Mu Yang](https://muyang.pro) at [CKIP](https://ckip.iis.sinica.edu.tw) (Author & Maintainer)
## Usage
Please use `BertTokenizerFast` as the tokenizer instead of `AutoTokenizer`.
請使用 BertTokenizerFast 而非 AutoTokenizer。
```python
from transformers import (
BertTokenizerFast,
AutoModel,
)
tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese')
model = AutoModel.from_pretrained('ckiplab/bert-base-chinese-qa')
```
For full usage and more information, please refer to https://github.com/ckiplab/ckip-transformers.
有關完整使用方法及其他資訊,請參見 https://github.com/ckiplab/ckip-transformers 。
|
duyduong9htv/electra-qa-3-finetuned-viet-qa
|
duyduong9htv
| 2022-09-24T03:34:33Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"electra",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-09-23T18:40:06Z |
---
tags:
- generated_from_trainer
model-index:
- name: electra-qa-3-finetuned-viet-qa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-qa-3-finetuned-viet-qa
This model is a fine-tuned version of [NlpHUST/electra-base-vn](https://huggingface.co/NlpHUST/electra-base-vn) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.5498
- eval_runtime: 98.1124
- eval_samples_per_second: 58.474
- eval_steps_per_second: 4.882
- epoch: 4.0
- step: 7648
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Framework versions
- Transformers 4.23.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
masakhane/m2m100_418M_xho_en_rel
|
masakhane
| 2022-09-24T00:22:27Z | 112 | 0 |
transformers
|
[
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"xho",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-25T22:09:49Z |
---
language:
- xho
- en
license: cc-by-nc-4.0
---
|
carbon225/canine-s-wordseg-en
|
carbon225
| 2022-09-23T23:42:11Z | 98 | 1 |
transformers
|
[
"transformers",
"pytorch",
"canine",
"token-classification",
"en",
"license:cc0-1.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-09-22T03:58:10Z |
---
license: cc0-1.0
language: en
widget:
- text: "thismodelcanperformwordsegmentation"
- text: "sometimesitdoesntworkquitewell"
- text: "expertsexchange"
---
|
HumanCompatibleAI/ppo-AsteroidsNoFrameskip-v4
|
HumanCompatibleAI
| 2022-09-23T22:37:49Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AsteroidsNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-09-23T22:35:00Z |
---
library_name: stable-baselines3
tags:
- AsteroidsNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 1666.00 +/- 472.83
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AsteroidsNoFrameskip-v4
type: AsteroidsNoFrameskip-v4
---
# **PPO** Agent playing **AsteroidsNoFrameskip-v4**
This is a trained model of a **PPO** agent playing **AsteroidsNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```bash
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo ppo --env AsteroidsNoFrameskip-v4 -orga HumanCompatibleAI -f logs/
python enjoy.py --algo ppo --env AsteroidsNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python train.py --algo ppo --env AsteroidsNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo ppo --env AsteroidsNoFrameskip-v4 -f logs/ -orga HumanCompatibleAI
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('clip_range', 'lin_0.1'),
('ent_coef', 0.01),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('frame_stack', 4),
('learning_rate', 'lin_2.5e-4'),
('n_envs', 8),
('n_epochs', 4),
('n_steps', 128),
('n_timesteps', 10000000.0),
('policy', 'CnnPolicy'),
('vf_coef', 0.5),
('normalize', False)])
```
|
philschmid/openai-whisper-endpoint
|
philschmid
| 2022-09-23T21:26:56Z | 0 | 11 |
generic
|
[
"generic",
"audio",
"automatic-speech-recognition",
"endpoints-template",
"license:mit",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-09-23T20:27:44Z |
---
license: mit
tags:
- audio
- automatic-speech-recognition
- endpoints-template
library_name: generic
inference: false
---
# OpenAI [Whisper](https://github.com/openai/whisper) Inference Endpoint example
> Whisper is a general-purpose speech recognition model. It is trained on a large dataset of diverse audio and is also a multi-task model that can perform multilingual speech recognition as well as speech translation and language identification.
For more information about the model, license and limitations check the original repository at [openai/whisper](https://github.com/openai/whisper).
---
This repository implements a custom `handler` task for `automatic-speech-recognition` for 🤗 Inference Endpoints using OpenAI's new Whisper model. The code for the customized pipeline is in [handler.py](https://huggingface.co/philschmid/openai-whisper-endpoint/blob/main/handler.py).
There is also a [notebook](https://huggingface.co/philschmid/openai-whisper-endpoint/blob/main/create_handler.ipynb) included that shows how to create the `handler.py`.
### Request
The endpoint expects a binary audio file. Below is a cURL example and a Python example using the `requests` library.
**curl**
```bash
# load audio file
wget https://cdn-media.huggingface.co/speech_samples/sample1.flac
# run request
curl --request POST \
--url https://{ENDPOINT}/ \
--header 'Content-Type: audio/x-flac' \
--header 'Authorization: Bearer {HF_TOKEN}' \
--data-binary '@sample1.flac'
```
**Python**
```python
import mimetypes
import requests as r
ENDPOINT_URL=""
HF_TOKEN=""
def predict(path_to_audio:str=None):
# read audio file
with open(path_to_audio, "rb") as i:
b = i.read()
# get mimetype
content_type= mimetypes.guess_type(path_to_audio)[0]
headers= {
"Authorization": f"Bearer {HF_TOKEN}",
"Content-Type": content_type
}
response = r.post(ENDPOINT_URL, headers=headers, data=b)
return response.json()
prediction = predict(path_to_audio="sample1.flac")
prediction
```
Expected output:
```json
{"text": " going along slushy country roads and speaking to damp audiences in draughty school rooms day after day for a fortnight. He'll have to put in an appearance at some place of worship on Sunday morning, and he can come to us immediately afterwards."}
```
|
ericntay/stbl_clinical_bert_ft_rs5
|
ericntay
| 2022-09-23T20:39:56Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-09-23T20:21:55Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: stbl_clinical_bert_ft_rs5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# stbl_clinical_bert_ft_rs5
This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0936
- F1: 0.9268
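Since the entity label set is not documented here, the sketch below only shows how to run token classification with this checkpoint (hedged — the clinical example sentence is illustrative):

```python
# Hedged sketch: token classification (NER) with this fine-tuned clinical BERT.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ericntay/stbl_clinical_bert_ft_rs5",
    aggregation_strategy="simple",
)
entities = ner("The patient was started on 40 mg of atorvastatin daily.")
print(entities)
```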
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2723 | 1.0 | 101 | 0.0875 | 0.8479 |
| 0.066 | 2.0 | 202 | 0.0688 | 0.9002 |
| 0.0328 | 3.0 | 303 | 0.0668 | 0.9070 |
| 0.0179 | 4.0 | 404 | 0.0689 | 0.9129 |
| 0.0098 | 5.0 | 505 | 0.0790 | 0.9147 |
| 0.0069 | 6.0 | 606 | 0.0805 | 0.9205 |
| 0.0033 | 7.0 | 707 | 0.0835 | 0.9268 |
| 0.0022 | 8.0 | 808 | 0.0904 | 0.9262 |
| 0.0021 | 9.0 | 909 | 0.0882 | 0.9263 |
| 0.0015 | 10.0 | 1010 | 0.0933 | 0.9289 |
| 0.0009 | 11.0 | 1111 | 0.0921 | 0.9311 |
| 0.0009 | 12.0 | 1212 | 0.0936 | 0.9268 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
ericntay/stbl_clinical_bert_ft_rs4
|
ericntay
| 2022-09-23T20:07:43Z | 111 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-09-23T19:50:09Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: stbl_clinical_bert_ft_rs4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# stbl_clinical_bert_ft_rs4
This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1088
- F1: 0.9076
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2994 | 1.0 | 101 | 0.0977 | 0.8416 |
| 0.0639 | 2.0 | 202 | 0.0846 | 0.8689 |
| 0.0318 | 3.0 | 303 | 0.0781 | 0.8879 |
| 0.0173 | 4.0 | 404 | 0.0770 | 0.8934 |
| 0.0099 | 5.0 | 505 | 0.0905 | 0.9021 |
| 0.005 | 6.0 | 606 | 0.0963 | 0.9020 |
| 0.0031 | 7.0 | 707 | 0.1024 | 0.9095 |
| 0.002 | 8.0 | 808 | 0.1063 | 0.9057 |
| 0.0017 | 9.0 | 909 | 0.1072 | 0.9076 |
| 0.0014 | 10.0 | 1010 | 0.1103 | 0.9089 |
| 0.0013 | 11.0 | 1111 | 0.1093 | 0.9087 |
| 0.0008 | 12.0 | 1212 | 0.1088 | 0.9076 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
sd-concepts-library/carlitos-el-mago
|
sd-concepts-library
| 2022-09-23T19:18:17Z | 0 | 1 | null |
[
"license:mit",
"region:us"
] | null | 2022-09-23T19:18:04Z |
---
license: mit
---
### carlitos el mago on Stable Diffusion
This is the `<carloscarbonell>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:




|
g30rv17ys/ddpm-geeve-drusen-1000-200ep
|
g30rv17ys
| 2022-09-23T19:12:36Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:imagefolder",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-09-23T15:39:11Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: imagefolder
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-geeve-drusen-1000-200ep
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `imagefolder` dataset.
## Intended uses & limitations
#### How to use
```python
# Hedged sketch: sample one image from this checkpoint with the DDPMPipeline.
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("g30rv17ys/ddpm-geeve-drusen-1000-200ep")
image = pipeline().images[0]  # a PIL.Image
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- ema_power: None
- ema_max_decay: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/geevegeorge/ddpm-geeve-drusen-1000-200ep/tensorboard?#scalars)
|
g30rv17ys/ddpm-geeve-cnv-1000-200ep
|
g30rv17ys
| 2022-09-23T19:10:42Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:imagefolder",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-09-23T15:29:54Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: imagefolder
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-geeve-cnv-1000-200ep
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `imagefolder` dataset.
## Intended uses & limitations
#### How to use
```python
# Untested sketch (assumes the `diffusers` library is installed): load this
# pipeline from the Hub and sample one unconditional image.
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("g30rv17ys/ddpm-geeve-cnv-1000-200ep")
image = pipeline().images[0]
image.save("sample.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- ema_power: None
- ema_max_decay: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/geevegeorge/ddpm-geeve-cnv-1000-200ep/tensorboard?#scalars)
|
g30rv17ys/ddpm-geeve-dme-1000-200ep
|
g30rv17ys
| 2022-09-23T19:09:23Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:imagefolder",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-09-23T15:34:37Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: imagefolder
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-geeve-dme-1000-200ep
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `imagefolder` dataset.
## Intended uses & limitations
#### How to use
```python
# Untested sketch (assumes the `diffusers` library is installed): load this
# pipeline from the Hub and sample one unconditional image.
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("g30rv17ys/ddpm-geeve-dme-1000-200ep")
image = pipeline().images[0]
image.save("sample.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- ema_power: None
- ema_max_decay: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/geevegeorge/ddpm-geeve-dme-1000-200ep/tensorboard?#scalars)
|
gokuls/distilbert-base-Massive-intent
|
gokuls
| 2022-09-23T19:02:42Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:massive",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-09-23T18:50:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- massive
metrics:
- accuracy
model-index:
- name: distilbert-base-Massive-intent
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: massive
type: massive
config: en-US
split: train
args: en-US
metrics:
- name: Accuracy
type: accuracy
value: 0.8947368421052632
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-Massive-intent
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the massive dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7693
- Accuracy: 0.8947
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 33
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4555 | 1.0 | 720 | 0.5983 | 0.8426 |
| 0.407 | 2.0 | 1440 | 0.4702 | 0.8775 |
| 0.2095 | 3.0 | 2160 | 0.5319 | 0.8834 |
| 0.1172 | 4.0 | 2880 | 0.5902 | 0.8810 |
| 0.0683 | 5.0 | 3600 | 0.6555 | 0.8810 |
| 0.042 | 6.0 | 4320 | 0.6989 | 0.8879 |
| 0.0253 | 7.0 | 5040 | 0.6963 | 0.8928 |
| 0.0208 | 8.0 | 5760 | 0.7313 | 0.8908 |
| 0.0119 | 9.0 | 6480 | 0.7683 | 0.8923 |
| 0.0093 | 10.0 | 7200 | 0.7693 | 0.8947 |
| 0.0071 | 11.0 | 7920 | 0.7873 | 0.8923 |
| 0.0047 | 12.0 | 8640 | 0.8275 | 0.8893 |
| 0.003 | 13.0 | 9360 | 0.8312 | 0.8928 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
tszocinski/bart-base-squad-question-generation
|
tszocinski
| 2022-09-23T18:43:43Z | 75 | 0 |
transformers
|
[
"transformers",
"tf",
"bart",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-09-22T19:36:46Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: tszocinski/bart-base-squad-question-generation
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# tszocinski/bart-base-squad-question-generation
This model is a fine-tuned version of [tszocinski/bart-base-squad-question-generation](https://huggingface.co/tszocinski/bart-base-squad-question-generation) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 6.5656
- Validation Loss: 11.1958
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'RMSprop', 'config': {'name': 'RMSprop', 'learning_rate': 0.001, 'decay': 0.0, 'rho': 0.9, 'momentum': 0.0, 'epsilon': 1e-07, 'centered': False}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 6.5656 | 11.1958 | 0 |
### Framework versions
- Transformers 4.22.1
- TensorFlow 2.8.2
- Datasets 2.5.1
- Tokenizers 0.12.1
|
g30rv17ys/ddpm-geeve-normal-1000-200ep
|
g30rv17ys
| 2022-09-23T18:24:23Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:imagefolder",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-09-23T15:24:37Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: imagefolder
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-geeve-normal-1000-200ep
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `imagefolder` dataset.
## Intended uses & limitations
#### How to use
```python
# Untested sketch (assumes the `diffusers` library is installed): load this
# pipeline from the Hub and sample one unconditional image.
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("g30rv17ys/ddpm-geeve-normal-1000-200ep")
image = pipeline().images[0]
image.save("sample.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- ema_power: None
- ema_max_decay: None
- mixed_precision: fp16
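The `lr_warmup_steps: 500` entry above means the learning rate ramps linearly from 0 to `learning_rate` over the first 500 optimizer steps. A minimal sketch of such a schedule (assuming linear decay to 0 afterwards, in the style of the `get_linear_schedule_with_warmup` helper; the total-step count here is a placeholder, not taken from this run):

```python
def linear_warmup_lr(step, base_lr=1e-4, warmup_steps=500, total_steps=10_000):
    """Linear warmup to base_lr over warmup_steps, then linear decay to 0."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```

The warmup phase keeps early AdamW updates small while its moment estimates are still noisy.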
### Training results
📈 [TensorBoard logs](https://huggingface.co/geevegeorge/ddpm-geeve-normal-1000-200ep/tensorboard?#scalars)
|
nkkodelacruz/distilbert-base-uncased-finetuned-cola
|
nkkodelacruz
| 2022-09-23T16:17:52Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-09-23T09:07:40Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: train
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5595884617444483
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7903
- Matthews Correlation: 0.5596
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5224 | 1.0 | 535 | 0.5373 | 0.3974 |
| 0.3503 | 2.0 | 1070 | 0.5142 | 0.4942 |
| 0.2328 | 3.0 | 1605 | 0.5449 | 0.5449 |
| 0.1775 | 4.0 | 2140 | 0.7457 | 0.5487 |
| 0.1235 | 5.0 | 2675 | 0.7903 | 0.5596 |
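The Matthews correlation reported above is computed from the binary confusion matrix. A minimal pure-Python sketch of the metric (for illustration only; the training script itself would use a library implementation such as scikit-learn's):

```python
import math

def matthews_corrcoef(y_true, y_pred):
    """Matthews correlation coefficient for binary 0/1 labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0
```

Unlike plain accuracy, MCC stays near 0 for degenerate predictors on the imbalanced CoLA label distribution, which is why it is the standard metric for this task.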
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
gokuls/distilroberta-base-Massive-intent
|
gokuls
| 2022-09-23T15:34:27Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:massive",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-09-23T15:23:33Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- massive
metrics:
- accuracy
model-index:
- name: distilroberta-base-Massive-intent
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: massive
type: massive
config: en-US
split: train
args: en-US
metrics:
- name: Accuracy
type: accuracy
value: 0.8937530742744713
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-Massive-intent
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the massive dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6618
- Accuracy: 0.8938
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 33
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.41 | 1.0 | 720 | 0.6742 | 0.8288 |
| 0.4978 | 2.0 | 1440 | 0.5150 | 0.8751 |
| 0.3009 | 3.0 | 2160 | 0.5705 | 0.8790 |
| 0.1953 | 4.0 | 2880 | 0.5887 | 0.8795 |
| 0.127 | 5.0 | 3600 | 0.6123 | 0.8810 |
| 0.0914 | 6.0 | 4320 | 0.6575 | 0.8834 |
| 0.0583 | 7.0 | 5040 | 0.6618 | 0.8938 |
| 0.0355 | 8.0 | 5760 | 0.7591 | 0.8864 |
| 0.0259 | 9.0 | 6480 | 0.8087 | 0.8780 |
| 0.02 | 10.0 | 7200 | 0.7964 | 0.8888 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
minminzi/t5-small-finetuned-eli5
|
minminzi
| 2022-09-23T15:24:12Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:eli5",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-09-22T19:21:35Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- eli5
metrics:
- rouge
model-index:
- name: t5-small-finetuned-eli5
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: eli5
type: eli5
config: LFQA_reddit
split: train_eli5
args: LFQA_reddit
metrics:
- name: Rouge1
type: rouge
value: 13.044
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-eli5
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the eli5 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6813
- Rouge1: 13.044
- Rouge2: 1.9483
- Rougel: 10.5237
- Rougelsum: 11.8549
- Gen Len: 18.997
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:-------:|:---------:|:-------:|
| 3.8881 | 1.0 | 17040 | 3.6813 | 13.044 | 1.9483 | 10.5237 | 11.8549 | 18.997 |
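The Rouge1 score above measures unigram overlap between generated and reference answers. A simplified pure-Python sketch of ROUGE-1 F1 (whitespace tokenization, no stemming; the actual evaluation uses the `rouge_score` package):

```python
from collections import Counter

def rouge1_f(reference, candidate):
    """Simplified ROUGE-1 F1: unigram overlap on whitespace tokens."""
    ref, cand = Counter(reference.split()), Counter(candidate.split())
    overlap = sum((ref & cand).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```

Scores are conventionally reported scaled by 100, so 13.044 corresponds to an F1 of about 0.13.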
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
Eulering/moonlight-night
|
Eulering
| 2022-09-23T14:47:20Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2022-09-23T14:47:20Z |
---
license: bigscience-openrail-m
---
|