modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---
dkimds/q-Taxi-v3 | dkimds | 2023-08-07T05:09:49Z | 0 | 0 | null | ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us"] | reinforcement-learning | 2023-08-07T05:09:45Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.76
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` downloads and unpickles the model file from the Hub
# (a self-contained sketch of it follows below).
model = load_from_hub(repo_id="dkimds/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
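`load_from_hub` is not a standard library function. If it is not already defined in your session, a minimal self-contained sketch is below; the dict keys (`env_id`, `qtable`) and the classic pre-0.26 `gym` step API follow the Deep RL course convention and are assumptions, not guarantees of this card:
```python
import pickle

import gym
import numpy as np
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    # Download the pickled model dict from the Hub and unpickle it.
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)

model = load_from_hub(repo_id="dkimds/q-Taxi-v3", filename="q-learning.pkl")
env = gym.make(model["env_id"])

# Play one greedy episode with the loaded Q-table.
state = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from Q-table
    state, reward, done, _ = env.step(action)
    total_reward += reward
print(f"episode reward: {total_reward}")
```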
|
intfloat/e5-large | intfloat | 2023-08-07T04:59:49Z | 18,018 | 74 | sentence-transformers | ["sentence-transformers", "pytorch", "safetensors", "bert", "mteb", "Sentence Transformers", "sentence-similarity", "en", "arxiv:2212.03533", "arxiv:2104.08663", "arxiv:2210.07316", "license:mit", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us"] | sentence-similarity | 2022-12-26T06:03:12Z |
---
tags:
- mteb
- Sentence Transformers
- sentence-similarity
- sentence-transformers
model-index:
- name: e5-large
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 77.68656716417911
- type: ap
value: 41.336896075573584
- type: f1
value: 71.788561468075
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 90.04965
- type: ap
value: 86.24637009569418
- type: f1
value: 90.03896671762645
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 43.016000000000005
- type: f1
value: 42.1942431880186
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 25.107000000000003
- type: map_at_10
value: 40.464
- type: map_at_100
value: 41.577999999999996
- type: map_at_1000
value: 41.588
- type: map_at_3
value: 35.301
- type: map_at_5
value: 38.263000000000005
- type: mrr_at_1
value: 25.605
- type: mrr_at_10
value: 40.64
- type: mrr_at_100
value: 41.760000000000005
- type: mrr_at_1000
value: 41.77
- type: mrr_at_3
value: 35.443000000000005
- type: mrr_at_5
value: 38.448
- type: ndcg_at_1
value: 25.107000000000003
- type: ndcg_at_10
value: 49.352000000000004
- type: ndcg_at_100
value: 53.98500000000001
- type: ndcg_at_1000
value: 54.208
- type: ndcg_at_3
value: 38.671
- type: ndcg_at_5
value: 43.991
- type: precision_at_1
value: 25.107000000000003
- type: precision_at_10
value: 7.795000000000001
- type: precision_at_100
value: 0.979
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 16.145
- type: precision_at_5
value: 12.262
- type: recall_at_1
value: 25.107000000000003
- type: recall_at_10
value: 77.952
- type: recall_at_100
value: 97.866
- type: recall_at_1000
value: 99.57300000000001
- type: recall_at_3
value: 48.435
- type: recall_at_5
value: 61.309000000000005
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 46.19278045044154
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 41.37976387757665
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 60.07433334608074
- type: mrr
value: 73.44347711383723
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 86.4298072183543
- type: cos_sim_spearman
value: 84.73144873582848
- type: euclidean_pearson
value: 85.15885058870728
- type: euclidean_spearman
value: 85.42062106559356
- type: manhattan_pearson
value: 84.89409921792054
- type: manhattan_spearman
value: 85.31941394024344
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 84.14285714285714
- type: f1
value: 84.11674412565644
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 37.600076342340785
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 35.08861812135148
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.684000000000005
- type: map_at_10
value: 41.675000000000004
- type: map_at_100
value: 42.963
- type: map_at_1000
value: 43.078
- type: map_at_3
value: 38.708999999999996
- type: map_at_5
value: 40.316
- type: mrr_at_1
value: 39.485
- type: mrr_at_10
value: 47.152
- type: mrr_at_100
value: 47.96
- type: mrr_at_1000
value: 48.010000000000005
- type: mrr_at_3
value: 44.754
- type: mrr_at_5
value: 46.285
- type: ndcg_at_1
value: 39.485
- type: ndcg_at_10
value: 46.849000000000004
- type: ndcg_at_100
value: 52.059
- type: ndcg_at_1000
value: 54.358
- type: ndcg_at_3
value: 42.705
- type: ndcg_at_5
value: 44.663000000000004
- type: precision_at_1
value: 39.485
- type: precision_at_10
value: 8.455
- type: precision_at_100
value: 1.3379999999999999
- type: precision_at_1000
value: 0.178
- type: precision_at_3
value: 19.695
- type: precision_at_5
value: 13.905999999999999
- type: recall_at_1
value: 32.684000000000005
- type: recall_at_10
value: 56.227000000000004
- type: recall_at_100
value: 78.499
- type: recall_at_1000
value: 94.021
- type: recall_at_3
value: 44.157999999999994
- type: recall_at_5
value: 49.694
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.875999999999998
- type: map_at_10
value: 41.603
- type: map_at_100
value: 42.825
- type: map_at_1000
value: 42.961
- type: map_at_3
value: 38.655
- type: map_at_5
value: 40.294999999999995
- type: mrr_at_1
value: 40.127
- type: mrr_at_10
value: 47.959
- type: mrr_at_100
value: 48.59
- type: mrr_at_1000
value: 48.634
- type: mrr_at_3
value: 45.786
- type: mrr_at_5
value: 46.964
- type: ndcg_at_1
value: 40.127
- type: ndcg_at_10
value: 47.176
- type: ndcg_at_100
value: 51.346000000000004
- type: ndcg_at_1000
value: 53.502
- type: ndcg_at_3
value: 43.139
- type: ndcg_at_5
value: 44.883
- type: precision_at_1
value: 40.127
- type: precision_at_10
value: 8.72
- type: precision_at_100
value: 1.387
- type: precision_at_1000
value: 0.188
- type: precision_at_3
value: 20.637
- type: precision_at_5
value: 14.446
- type: recall_at_1
value: 31.875999999999998
- type: recall_at_10
value: 56.54900000000001
- type: recall_at_100
value: 73.939
- type: recall_at_1000
value: 87.732
- type: recall_at_3
value: 44.326
- type: recall_at_5
value: 49.445
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 41.677
- type: map_at_10
value: 52.222
- type: map_at_100
value: 53.229000000000006
- type: map_at_1000
value: 53.288000000000004
- type: map_at_3
value: 49.201
- type: map_at_5
value: 51.00599999999999
- type: mrr_at_1
value: 47.524
- type: mrr_at_10
value: 55.745999999999995
- type: mrr_at_100
value: 56.433
- type: mrr_at_1000
value: 56.464999999999996
- type: mrr_at_3
value: 53.37499999999999
- type: mrr_at_5
value: 54.858
- type: ndcg_at_1
value: 47.524
- type: ndcg_at_10
value: 57.406
- type: ndcg_at_100
value: 61.403
- type: ndcg_at_1000
value: 62.7
- type: ndcg_at_3
value: 52.298
- type: ndcg_at_5
value: 55.02
- type: precision_at_1
value: 47.524
- type: precision_at_10
value: 8.865
- type: precision_at_100
value: 1.179
- type: precision_at_1000
value: 0.134
- type: precision_at_3
value: 22.612
- type: precision_at_5
value: 15.461
- type: recall_at_1
value: 41.677
- type: recall_at_10
value: 69.346
- type: recall_at_100
value: 86.344
- type: recall_at_1000
value: 95.703
- type: recall_at_3
value: 55.789
- type: recall_at_5
value: 62.488
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 25.991999999999997
- type: map_at_10
value: 32.804
- type: map_at_100
value: 33.812999999999995
- type: map_at_1000
value: 33.897
- type: map_at_3
value: 30.567
- type: map_at_5
value: 31.599
- type: mrr_at_1
value: 27.797
- type: mrr_at_10
value: 34.768
- type: mrr_at_100
value: 35.702
- type: mrr_at_1000
value: 35.766
- type: mrr_at_3
value: 32.637
- type: mrr_at_5
value: 33.614
- type: ndcg_at_1
value: 27.797
- type: ndcg_at_10
value: 36.966
- type: ndcg_at_100
value: 41.972
- type: ndcg_at_1000
value: 44.139
- type: ndcg_at_3
value: 32.547
- type: ndcg_at_5
value: 34.258
- type: precision_at_1
value: 27.797
- type: precision_at_10
value: 5.514
- type: precision_at_100
value: 0.8340000000000001
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 13.333
- type: precision_at_5
value: 9.04
- type: recall_at_1
value: 25.991999999999997
- type: recall_at_10
value: 47.941
- type: recall_at_100
value: 71.039
- type: recall_at_1000
value: 87.32799999999999
- type: recall_at_3
value: 36.01
- type: recall_at_5
value: 40.056000000000004
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 17.533
- type: map_at_10
value: 24.336
- type: map_at_100
value: 25.445
- type: map_at_1000
value: 25.561
- type: map_at_3
value: 22.116
- type: map_at_5
value: 23.347
- type: mrr_at_1
value: 21.642
- type: mrr_at_10
value: 28.910999999999998
- type: mrr_at_100
value: 29.836000000000002
- type: mrr_at_1000
value: 29.907
- type: mrr_at_3
value: 26.638
- type: mrr_at_5
value: 27.857
- type: ndcg_at_1
value: 21.642
- type: ndcg_at_10
value: 28.949
- type: ndcg_at_100
value: 34.211000000000006
- type: ndcg_at_1000
value: 37.031
- type: ndcg_at_3
value: 24.788
- type: ndcg_at_5
value: 26.685
- type: precision_at_1
value: 21.642
- type: precision_at_10
value: 5.137
- type: precision_at_100
value: 0.893
- type: precision_at_1000
value: 0.127
- type: precision_at_3
value: 11.733
- type: precision_at_5
value: 8.383000000000001
- type: recall_at_1
value: 17.533
- type: recall_at_10
value: 38.839
- type: recall_at_100
value: 61.458999999999996
- type: recall_at_1000
value: 81.58
- type: recall_at_3
value: 27.328999999999997
- type: recall_at_5
value: 32.168
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 28.126
- type: map_at_10
value: 37.872
- type: map_at_100
value: 39.229
- type: map_at_1000
value: 39.353
- type: map_at_3
value: 34.93
- type: map_at_5
value: 36.59
- type: mrr_at_1
value: 34.071
- type: mrr_at_10
value: 43.056
- type: mrr_at_100
value: 43.944
- type: mrr_at_1000
value: 43.999
- type: mrr_at_3
value: 40.536
- type: mrr_at_5
value: 42.065999999999995
- type: ndcg_at_1
value: 34.071
- type: ndcg_at_10
value: 43.503
- type: ndcg_at_100
value: 49.120000000000005
- type: ndcg_at_1000
value: 51.410999999999994
- type: ndcg_at_3
value: 38.767
- type: ndcg_at_5
value: 41.075
- type: precision_at_1
value: 34.071
- type: precision_at_10
value: 7.843999999999999
- type: precision_at_100
value: 1.2489999999999999
- type: precision_at_1000
value: 0.163
- type: precision_at_3
value: 18.223
- type: precision_at_5
value: 13.050999999999998
- type: recall_at_1
value: 28.126
- type: recall_at_10
value: 54.952
- type: recall_at_100
value: 78.375
- type: recall_at_1000
value: 93.29899999999999
- type: recall_at_3
value: 41.714
- type: recall_at_5
value: 47.635
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 25.957
- type: map_at_10
value: 34.749
- type: map_at_100
value: 35.929
- type: map_at_1000
value: 36.043
- type: map_at_3
value: 31.947
- type: map_at_5
value: 33.575
- type: mrr_at_1
value: 32.078
- type: mrr_at_10
value: 39.844
- type: mrr_at_100
value: 40.71
- type: mrr_at_1000
value: 40.77
- type: mrr_at_3
value: 37.386
- type: mrr_at_5
value: 38.83
- type: ndcg_at_1
value: 32.078
- type: ndcg_at_10
value: 39.97
- type: ndcg_at_100
value: 45.254
- type: ndcg_at_1000
value: 47.818
- type: ndcg_at_3
value: 35.453
- type: ndcg_at_5
value: 37.631
- type: precision_at_1
value: 32.078
- type: precision_at_10
value: 7.158
- type: precision_at_100
value: 1.126
- type: precision_at_1000
value: 0.153
- type: precision_at_3
value: 16.743
- type: precision_at_5
value: 11.872
- type: recall_at_1
value: 25.957
- type: recall_at_10
value: 50.583
- type: recall_at_100
value: 73.593
- type: recall_at_1000
value: 91.23599999999999
- type: recall_at_3
value: 37.651
- type: recall_at_5
value: 43.626
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.1505
- type: map_at_10
value: 34.844833333333334
- type: map_at_100
value: 35.95216666666667
- type: map_at_1000
value: 36.06675
- type: map_at_3
value: 32.41975
- type: map_at_5
value: 33.74233333333333
- type: mrr_at_1
value: 31.923666666666662
- type: mrr_at_10
value: 38.87983333333334
- type: mrr_at_100
value: 39.706250000000004
- type: mrr_at_1000
value: 39.76708333333333
- type: mrr_at_3
value: 36.72008333333333
- type: mrr_at_5
value: 37.96933333333334
- type: ndcg_at_1
value: 31.923666666666662
- type: ndcg_at_10
value: 39.44258333333334
- type: ndcg_at_100
value: 44.31475
- type: ndcg_at_1000
value: 46.75
- type: ndcg_at_3
value: 35.36299999999999
- type: ndcg_at_5
value: 37.242333333333335
- type: precision_at_1
value: 31.923666666666662
- type: precision_at_10
value: 6.643333333333333
- type: precision_at_100
value: 1.0612499999999998
- type: precision_at_1000
value: 0.14575
- type: precision_at_3
value: 15.875250000000001
- type: precision_at_5
value: 11.088916666666664
- type: recall_at_1
value: 27.1505
- type: recall_at_10
value: 49.06349999999999
- type: recall_at_100
value: 70.60841666666666
- type: recall_at_1000
value: 87.72049999999999
- type: recall_at_3
value: 37.60575000000001
- type: recall_at_5
value: 42.511166666666675
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 25.101000000000003
- type: map_at_10
value: 30.147000000000002
- type: map_at_100
value: 30.98
- type: map_at_1000
value: 31.080000000000002
- type: map_at_3
value: 28.571
- type: map_at_5
value: 29.319
- type: mrr_at_1
value: 27.761000000000003
- type: mrr_at_10
value: 32.716
- type: mrr_at_100
value: 33.504
- type: mrr_at_1000
value: 33.574
- type: mrr_at_3
value: 31.135
- type: mrr_at_5
value: 32.032
- type: ndcg_at_1
value: 27.761000000000003
- type: ndcg_at_10
value: 33.358
- type: ndcg_at_100
value: 37.569
- type: ndcg_at_1000
value: 40.189
- type: ndcg_at_3
value: 30.291
- type: ndcg_at_5
value: 31.558000000000003
- type: precision_at_1
value: 27.761000000000003
- type: precision_at_10
value: 4.939
- type: precision_at_100
value: 0.759
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 12.577
- type: precision_at_5
value: 8.497
- type: recall_at_1
value: 25.101000000000003
- type: recall_at_10
value: 40.739
- type: recall_at_100
value: 60.089999999999996
- type: recall_at_1000
value: 79.768
- type: recall_at_3
value: 32.16
- type: recall_at_5
value: 35.131
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 20.112
- type: map_at_10
value: 26.119999999999997
- type: map_at_100
value: 27.031
- type: map_at_1000
value: 27.150000000000002
- type: map_at_3
value: 24.230999999999998
- type: map_at_5
value: 25.15
- type: mrr_at_1
value: 24.535
- type: mrr_at_10
value: 30.198000000000004
- type: mrr_at_100
value: 30.975
- type: mrr_at_1000
value: 31.051000000000002
- type: mrr_at_3
value: 28.338
- type: mrr_at_5
value: 29.269000000000002
- type: ndcg_at_1
value: 24.535
- type: ndcg_at_10
value: 30.147000000000002
- type: ndcg_at_100
value: 34.544000000000004
- type: ndcg_at_1000
value: 37.512
- type: ndcg_at_3
value: 26.726
- type: ndcg_at_5
value: 28.046
- type: precision_at_1
value: 24.535
- type: precision_at_10
value: 5.179
- type: precision_at_100
value: 0.859
- type: precision_at_1000
value: 0.128
- type: precision_at_3
value: 12.159
- type: precision_at_5
value: 8.424
- type: recall_at_1
value: 20.112
- type: recall_at_10
value: 38.312000000000005
- type: recall_at_100
value: 58.406000000000006
- type: recall_at_1000
value: 79.863
- type: recall_at_3
value: 28.358
- type: recall_at_5
value: 31.973000000000003
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.111
- type: map_at_10
value: 34.096
- type: map_at_100
value: 35.181000000000004
- type: map_at_1000
value: 35.276
- type: map_at_3
value: 31.745
- type: map_at_5
value: 33.045
- type: mrr_at_1
value: 31.343
- type: mrr_at_10
value: 37.994
- type: mrr_at_100
value: 38.873000000000005
- type: mrr_at_1000
value: 38.934999999999995
- type: mrr_at_3
value: 35.743
- type: mrr_at_5
value: 37.077
- type: ndcg_at_1
value: 31.343
- type: ndcg_at_10
value: 38.572
- type: ndcg_at_100
value: 43.854
- type: ndcg_at_1000
value: 46.190999999999995
- type: ndcg_at_3
value: 34.247
- type: ndcg_at_5
value: 36.28
- type: precision_at_1
value: 31.343
- type: precision_at_10
value: 6.166
- type: precision_at_100
value: 1
- type: precision_at_1000
value: 0.13
- type: precision_at_3
value: 15.081
- type: precision_at_5
value: 10.428999999999998
- type: recall_at_1
value: 27.111
- type: recall_at_10
value: 48.422
- type: recall_at_100
value: 71.846
- type: recall_at_1000
value: 88.57000000000001
- type: recall_at_3
value: 36.435
- type: recall_at_5
value: 41.765
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 26.264
- type: map_at_10
value: 33.522
- type: map_at_100
value: 34.963
- type: map_at_1000
value: 35.175
- type: map_at_3
value: 31.366
- type: map_at_5
value: 32.621
- type: mrr_at_1
value: 31.028
- type: mrr_at_10
value: 37.230000000000004
- type: mrr_at_100
value: 38.149
- type: mrr_at_1000
value: 38.218
- type: mrr_at_3
value: 35.046
- type: mrr_at_5
value: 36.617
- type: ndcg_at_1
value: 31.028
- type: ndcg_at_10
value: 37.964999999999996
- type: ndcg_at_100
value: 43.342000000000006
- type: ndcg_at_1000
value: 46.471000000000004
- type: ndcg_at_3
value: 34.67
- type: ndcg_at_5
value: 36.458
- type: precision_at_1
value: 31.028
- type: precision_at_10
value: 6.937
- type: precision_at_100
value: 1.346
- type: precision_at_1000
value: 0.22799999999999998
- type: precision_at_3
value: 15.942
- type: precision_at_5
value: 11.462
- type: recall_at_1
value: 26.264
- type: recall_at_10
value: 45.571
- type: recall_at_100
value: 70.246
- type: recall_at_1000
value: 90.971
- type: recall_at_3
value: 36.276
- type: recall_at_5
value: 41.162
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.372999999999998
- type: map_at_10
value: 28.992
- type: map_at_100
value: 29.837999999999997
- type: map_at_1000
value: 29.939
- type: map_at_3
value: 26.999000000000002
- type: map_at_5
value: 28.044999999999998
- type: mrr_at_1
value: 25.692999999999998
- type: mrr_at_10
value: 30.984
- type: mrr_at_100
value: 31.799
- type: mrr_at_1000
value: 31.875999999999998
- type: mrr_at_3
value: 29.267
- type: mrr_at_5
value: 30.163
- type: ndcg_at_1
value: 25.692999999999998
- type: ndcg_at_10
value: 32.45
- type: ndcg_at_100
value: 37.103
- type: ndcg_at_1000
value: 39.678000000000004
- type: ndcg_at_3
value: 28.725
- type: ndcg_at_5
value: 30.351
- type: precision_at_1
value: 25.692999999999998
- type: precision_at_10
value: 4.806
- type: precision_at_100
value: 0.765
- type: precision_at_1000
value: 0.108
- type: precision_at_3
value: 11.768
- type: precision_at_5
value: 8.096
- type: recall_at_1
value: 23.372999999999998
- type: recall_at_10
value: 41.281
- type: recall_at_100
value: 63.465
- type: recall_at_1000
value: 82.575
- type: recall_at_3
value: 31.063000000000002
- type: recall_at_5
value: 34.991
- task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.821
- type: map_at_10
value: 15.383
- type: map_at_100
value: 17.244999999999997
- type: map_at_1000
value: 17.445
- type: map_at_3
value: 12.64
- type: map_at_5
value: 13.941999999999998
- type: mrr_at_1
value: 19.544
- type: mrr_at_10
value: 29.738999999999997
- type: mrr_at_100
value: 30.923000000000002
- type: mrr_at_1000
value: 30.969
- type: mrr_at_3
value: 26.384
- type: mrr_at_5
value: 28.199
- type: ndcg_at_1
value: 19.544
- type: ndcg_at_10
value: 22.398
- type: ndcg_at_100
value: 30.253999999999998
- type: ndcg_at_1000
value: 33.876
- type: ndcg_at_3
value: 17.473
- type: ndcg_at_5
value: 19.154
- type: precision_at_1
value: 19.544
- type: precision_at_10
value: 7.217999999999999
- type: precision_at_100
value: 1.564
- type: precision_at_1000
value: 0.22300000000000003
- type: precision_at_3
value: 13.225000000000001
- type: precision_at_5
value: 10.319
- type: recall_at_1
value: 8.821
- type: recall_at_10
value: 28.110000000000003
- type: recall_at_100
value: 55.64
- type: recall_at_1000
value: 75.964
- type: recall_at_3
value: 16.195
- type: recall_at_5
value: 20.678
- task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.344
- type: map_at_10
value: 20.301
- type: map_at_100
value: 28.709
- type: map_at_1000
value: 30.470999999999997
- type: map_at_3
value: 14.584
- type: map_at_5
value: 16.930999999999997
- type: mrr_at_1
value: 67.25
- type: mrr_at_10
value: 75.393
- type: mrr_at_100
value: 75.742
- type: mrr_at_1000
value: 75.75
- type: mrr_at_3
value: 73.958
- type: mrr_at_5
value: 74.883
- type: ndcg_at_1
value: 56.00000000000001
- type: ndcg_at_10
value: 42.394
- type: ndcg_at_100
value: 47.091
- type: ndcg_at_1000
value: 54.215
- type: ndcg_at_3
value: 46.995
- type: ndcg_at_5
value: 44.214999999999996
- type: precision_at_1
value: 67.25
- type: precision_at_10
value: 33.525
- type: precision_at_100
value: 10.67
- type: precision_at_1000
value: 2.221
- type: precision_at_3
value: 49.417
- type: precision_at_5
value: 42.15
- type: recall_at_1
value: 9.344
- type: recall_at_10
value: 25.209
- type: recall_at_100
value: 52.329
- type: recall_at_1000
value: 74.2
- type: recall_at_3
value: 15.699
- type: recall_at_5
value: 19.24
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 48.05
- type: f1
value: 43.06718139212933
- task:
type: Retrieval
dataset:
type: fever
name: MTEB FEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 46.452
- type: map_at_10
value: 58.825
- type: map_at_100
value: 59.372
- type: map_at_1000
value: 59.399
- type: map_at_3
value: 56.264
- type: map_at_5
value: 57.879999999999995
- type: mrr_at_1
value: 49.82
- type: mrr_at_10
value: 62.178999999999995
- type: mrr_at_100
value: 62.641999999999996
- type: mrr_at_1000
value: 62.658
- type: mrr_at_3
value: 59.706
- type: mrr_at_5
value: 61.283
- type: ndcg_at_1
value: 49.82
- type: ndcg_at_10
value: 65.031
- type: ndcg_at_100
value: 67.413
- type: ndcg_at_1000
value: 68.014
- type: ndcg_at_3
value: 60.084
- type: ndcg_at_5
value: 62.858000000000004
- type: precision_at_1
value: 49.82
- type: precision_at_10
value: 8.876000000000001
- type: precision_at_100
value: 1.018
- type: precision_at_1000
value: 0.109
- type: precision_at_3
value: 24.477
- type: precision_at_5
value: 16.208
- type: recall_at_1
value: 46.452
- type: recall_at_10
value: 80.808
- type: recall_at_100
value: 91.215
- type: recall_at_1000
value: 95.52000000000001
- type: recall_at_3
value: 67.62899999999999
- type: recall_at_5
value: 74.32900000000001
- task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 18.351
- type: map_at_10
value: 30.796
- type: map_at_100
value: 32.621
- type: map_at_1000
value: 32.799
- type: map_at_3
value: 26.491
- type: map_at_5
value: 28.933999999999997
- type: mrr_at_1
value: 36.265
- type: mrr_at_10
value: 45.556999999999995
- type: mrr_at_100
value: 46.323
- type: mrr_at_1000
value: 46.359
- type: mrr_at_3
value: 42.695
- type: mrr_at_5
value: 44.324000000000005
- type: ndcg_at_1
value: 36.265
- type: ndcg_at_10
value: 38.558
- type: ndcg_at_100
value: 45.18
- type: ndcg_at_1000
value: 48.292
- type: ndcg_at_3
value: 34.204
- type: ndcg_at_5
value: 35.735
- type: precision_at_1
value: 36.265
- type: precision_at_10
value: 10.879999999999999
- type: precision_at_100
value: 1.77
- type: precision_at_1000
value: 0.234
- type: precision_at_3
value: 23.044999999999998
- type: precision_at_5
value: 17.253
- type: recall_at_1
value: 18.351
- type: recall_at_10
value: 46.116
- type: recall_at_100
value: 70.786
- type: recall_at_1000
value: 89.46300000000001
- type: recall_at_3
value: 31.404
- type: recall_at_5
value: 37.678
- task:
type: Retrieval
dataset:
type: hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 36.847
- type: map_at_10
value: 54.269999999999996
- type: map_at_100
value: 55.152
- type: map_at_1000
value: 55.223
- type: map_at_3
value: 51.166
- type: map_at_5
value: 53.055
- type: mrr_at_1
value: 73.693
- type: mrr_at_10
value: 79.975
- type: mrr_at_100
value: 80.202
- type: mrr_at_1000
value: 80.214
- type: mrr_at_3
value: 78.938
- type: mrr_at_5
value: 79.595
- type: ndcg_at_1
value: 73.693
- type: ndcg_at_10
value: 63.334999999999994
- type: ndcg_at_100
value: 66.452
- type: ndcg_at_1000
value: 67.869
- type: ndcg_at_3
value: 58.829
- type: ndcg_at_5
value: 61.266
- type: precision_at_1
value: 73.693
- type: precision_at_10
value: 13.122
- type: precision_at_100
value: 1.5559999999999998
- type: precision_at_1000
value: 0.174
- type: precision_at_3
value: 37.083
- type: precision_at_5
value: 24.169999999999998
- type: recall_at_1
value: 36.847
- type: recall_at_10
value: 65.61099999999999
- type: recall_at_100
value: 77.792
- type: recall_at_1000
value: 87.17099999999999
- type: recall_at_3
value: 55.625
- type: recall_at_5
value: 60.425
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 82.1096
- type: ap
value: 76.67089212843918
- type: f1
value: 82.03535056754939
- task:
type: Retrieval
dataset:
type: msmarco
name: MTEB MSMARCO
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 24.465
- type: map_at_10
value: 37.072
- type: map_at_100
value: 38.188
- type: map_at_1000
value: 38.232
- type: map_at_3
value: 33.134
- type: map_at_5
value: 35.453
- type: mrr_at_1
value: 25.142999999999997
- type: mrr_at_10
value: 37.669999999999995
- type: mrr_at_100
value: 38.725
- type: mrr_at_1000
value: 38.765
- type: mrr_at_3
value: 33.82
- type: mrr_at_5
value: 36.111
- type: ndcg_at_1
value: 25.142999999999997
- type: ndcg_at_10
value: 44.054
- type: ndcg_at_100
value: 49.364000000000004
- type: ndcg_at_1000
value: 50.456
- type: ndcg_at_3
value: 36.095
- type: ndcg_at_5
value: 40.23
- type: precision_at_1
value: 25.142999999999997
- type: precision_at_10
value: 6.845
- type: precision_at_100
value: 0.95
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 15.204999999999998
- type: precision_at_5
value: 11.221
- type: recall_at_1
value: 24.465
- type: recall_at_10
value: 65.495
- type: recall_at_100
value: 89.888
- type: recall_at_1000
value: 98.165
- type: recall_at_3
value: 43.964
- type: recall_at_5
value: 53.891
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.86228910168718
- type: f1
value: 93.69177113259104
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 76.3999088007296
- type: f1
value: 58.96668664333438
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.21788836583727
- type: f1
value: 71.4545936552952
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.39071956960323
- type: f1
value: 77.12398952847603
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 32.255379528166955
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 29.66423362872814
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 30.782211620375964
- type: mrr
value: 31.773479703044956
- task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.863
- type: map_at_10
value: 13.831
- type: map_at_100
value: 17.534
- type: map_at_1000
value: 19.012
- type: map_at_3
value: 10.143
- type: map_at_5
value: 12.034
- type: mrr_at_1
value: 46.749
- type: mrr_at_10
value: 55.376999999999995
- type: mrr_at_100
value: 56.009
- type: mrr_at_1000
value: 56.042
- type: mrr_at_3
value: 53.30200000000001
- type: mrr_at_5
value: 54.85
- type: ndcg_at_1
value: 44.582
- type: ndcg_at_10
value: 36.07
- type: ndcg_at_100
value: 33.39
- type: ndcg_at_1000
value: 41.884
- type: ndcg_at_3
value: 41.441
- type: ndcg_at_5
value: 39.861000000000004
- type: precision_at_1
value: 46.129999999999995
- type: precision_at_10
value: 26.594
- type: precision_at_100
value: 8.365
- type: precision_at_1000
value: 2.1260000000000003
- type: precision_at_3
value: 39.009
- type: precision_at_5
value: 34.861
- type: recall_at_1
value: 5.863
- type: recall_at_10
value: 17.961
- type: recall_at_100
value: 34.026
- type: recall_at_1000
value: 64.46499999999999
- type: recall_at_3
value: 11.242
- type: recall_at_5
value: 14.493
- task:
type: Retrieval
dataset:
type: nq
name: MTEB NQ
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 38.601
- type: map_at_10
value: 55.293000000000006
- type: map_at_100
value: 56.092
- type: map_at_1000
value: 56.111999999999995
- type: map_at_3
value: 51.269
- type: map_at_5
value: 53.787
- type: mrr_at_1
value: 43.221
- type: mrr_at_10
value: 57.882999999999996
- type: mrr_at_100
value: 58.408
- type: mrr_at_1000
value: 58.421
- type: mrr_at_3
value: 54.765
- type: mrr_at_5
value: 56.809
- type: ndcg_at_1
value: 43.221
- type: ndcg_at_10
value: 62.858999999999995
- type: ndcg_at_100
value: 65.987
- type: ndcg_at_1000
value: 66.404
- type: ndcg_at_3
value: 55.605000000000004
- type: ndcg_at_5
value: 59.723000000000006
- type: precision_at_1
value: 43.221
- type: precision_at_10
value: 9.907
- type: precision_at_100
value: 1.169
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 25.019000000000002
- type: precision_at_5
value: 17.474
- type: recall_at_1
value: 38.601
- type: recall_at_10
value: 82.966
- type: recall_at_100
value: 96.154
- type: recall_at_1000
value: 99.223
- type: recall_at_3
value: 64.603
- type: recall_at_5
value: 73.97200000000001
- task:
type: Retrieval
dataset:
type: quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 70.77
- type: map_at_10
value: 84.429
- type: map_at_100
value: 85.04599999999999
- type: map_at_1000
value: 85.065
- type: map_at_3
value: 81.461
- type: map_at_5
value: 83.316
- type: mrr_at_1
value: 81.51
- type: mrr_at_10
value: 87.52799999999999
- type: mrr_at_100
value: 87.631
- type: mrr_at_1000
value: 87.632
- type: mrr_at_3
value: 86.533
- type: mrr_at_5
value: 87.214
- type: ndcg_at_1
value: 81.47999999999999
- type: ndcg_at_10
value: 88.181
- type: ndcg_at_100
value: 89.39200000000001
- type: ndcg_at_1000
value: 89.52
- type: ndcg_at_3
value: 85.29299999999999
- type: ndcg_at_5
value: 86.88
- type: precision_at_1
value: 81.47999999999999
- type: precision_at_10
value: 13.367
- type: precision_at_100
value: 1.5230000000000001
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.227
- type: precision_at_5
value: 24.494
- type: recall_at_1
value: 70.77
- type: recall_at_10
value: 95.199
- type: recall_at_100
value: 99.37700000000001
- type: recall_at_1000
value: 99.973
- type: recall_at_3
value: 86.895
- type: recall_at_5
value: 91.396
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 50.686353396858344
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 61.3664675312921
- task:
type: Retrieval
dataset:
type: scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.7379999999999995
- type: map_at_10
value: 12.01
- type: map_at_100
value: 14.02
- type: map_at_1000
value: 14.310999999999998
- type: map_at_3
value: 8.459
- type: map_at_5
value: 10.281
- type: mrr_at_1
value: 23.3
- type: mrr_at_10
value: 34.108
- type: mrr_at_100
value: 35.217
- type: mrr_at_1000
value: 35.272
- type: mrr_at_3
value: 30.833
- type: mrr_at_5
value: 32.768
- type: ndcg_at_1
value: 23.3
- type: ndcg_at_10
value: 20.116999999999997
- type: ndcg_at_100
value: 27.961000000000002
- type: ndcg_at_1000
value: 33.149
- type: ndcg_at_3
value: 18.902
- type: ndcg_at_5
value: 16.742
- type: precision_at_1
value: 23.3
- type: precision_at_10
value: 10.47
- type: precision_at_100
value: 2.177
- type: precision_at_1000
value: 0.34299999999999997
- type: precision_at_3
value: 17.567
- type: precision_at_5
value: 14.78
- type: recall_at_1
value: 4.7379999999999995
- type: recall_at_10
value: 21.221999999999998
- type: recall_at_100
value: 44.242
- type: recall_at_1000
value: 69.652
- type: recall_at_3
value: 10.688
- type: recall_at_5
value: 14.982999999999999
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 84.84572946827069
- type: cos_sim_spearman
value: 80.48508130408966
- type: euclidean_pearson
value: 82.0481530027767
- type: euclidean_spearman
value: 80.45902876782752
- type: manhattan_pearson
value: 82.03728222483326
- type: manhattan_spearman
value: 80.45684282911755
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 84.33476464677516
- type: cos_sim_spearman
value: 75.93057758003266
- type: euclidean_pearson
value: 80.89685744015691
- type: euclidean_spearman
value: 76.29929953441706
- type: manhattan_pearson
value: 80.91391345459995
- type: manhattan_spearman
value: 76.31985463110914
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 84.63686106359005
- type: cos_sim_spearman
value: 85.22240034668202
- type: euclidean_pearson
value: 84.6074814189106
- type: euclidean_spearman
value: 85.17169644755828
- type: manhattan_pearson
value: 84.48329306239368
- type: manhattan_spearman
value: 85.0086508544768
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 82.95455774064745
- type: cos_sim_spearman
value: 80.54074646118492
- type: euclidean_pearson
value: 81.79598955554704
- type: euclidean_spearman
value: 80.55837617606814
- type: manhattan_pearson
value: 81.78213797905386
- type: manhattan_spearman
value: 80.5666746878273
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 87.92813309124739
- type: cos_sim_spearman
value: 88.81459873052108
- type: euclidean_pearson
value: 88.21193118930564
- type: euclidean_spearman
value: 88.87072745043731
- type: manhattan_pearson
value: 88.22576929706727
- type: manhattan_spearman
value: 88.8867671095791
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 83.6881529671839
- type: cos_sim_spearman
value: 85.2807092969554
- type: euclidean_pearson
value: 84.62334178652704
- type: euclidean_spearman
value: 85.2116373296784
- type: manhattan_pearson
value: 84.54948211541777
- type: manhattan_spearman
value: 85.10737722637882
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 88.55963694458408
- type: cos_sim_spearman
value: 89.36731628848683
- type: euclidean_pearson
value: 89.64975952985465
- type: euclidean_spearman
value: 89.29689484033007
- type: manhattan_pearson
value: 89.61234491713135
- type: manhattan_spearman
value: 89.20302520255782
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 62.411800961903886
- type: cos_sim_spearman
value: 62.99105515749963
- type: euclidean_pearson
value: 65.29826669549443
- type: euclidean_spearman
value: 63.29880964105775
- type: manhattan_pearson
value: 65.00126190601183
- type: manhattan_spearman
value: 63.32011025899179
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 85.83498531837608
- type: cos_sim_spearman
value: 87.21366640615442
- type: euclidean_pearson
value: 86.74764288798261
- type: euclidean_spearman
value: 87.06060470780834
- type: manhattan_pearson
value: 86.65971223951476
- type: manhattan_spearman
value: 86.99814399831457
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 83.94448463485881
- type: mrr
value: 95.36291867174221
- task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 59.928000000000004
- type: map_at_10
value: 68.577
- type: map_at_100
value: 69.35900000000001
- type: map_at_1000
value: 69.37299999999999
- type: map_at_3
value: 66.217
- type: map_at_5
value: 67.581
- type: mrr_at_1
value: 63
- type: mrr_at_10
value: 69.994
- type: mrr_at_100
value: 70.553
- type: mrr_at_1000
value: 70.56700000000001
- type: mrr_at_3
value: 68.167
- type: mrr_at_5
value: 69.11699999999999
- type: ndcg_at_1
value: 63
- type: ndcg_at_10
value: 72.58
- type: ndcg_at_100
value: 75.529
- type: ndcg_at_1000
value: 76.009
- type: ndcg_at_3
value: 68.523
- type: ndcg_at_5
value: 70.301
- type: precision_at_1
value: 63
- type: precision_at_10
value: 9.333
- type: precision_at_100
value: 1.09
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 26.444000000000003
- type: precision_at_5
value: 17.067
- type: recall_at_1
value: 59.928000000000004
- type: recall_at_10
value: 83.544
- type: recall_at_100
value: 96
- type: recall_at_1000
value: 100
- type: recall_at_3
value: 72.072
- type: recall_at_5
value: 76.683
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.82178217821782
- type: cos_sim_ap
value: 95.41507679819003
- type: cos_sim_f1
value: 90.9456740442656
- type: cos_sim_precision
value: 91.49797570850203
- type: cos_sim_recall
value: 90.4
- type: dot_accuracy
value: 99.77227722772277
- type: dot_ap
value: 92.50123869445967
- type: dot_f1
value: 88.18414322250638
- type: dot_precision
value: 90.26178010471205
- type: dot_recall
value: 86.2
- type: euclidean_accuracy
value: 99.81782178217821
- type: euclidean_ap
value: 95.3935066749006
- type: euclidean_f1
value: 90.66128218071681
- type: euclidean_precision
value: 91.53924566768603
- type: euclidean_recall
value: 89.8
- type: manhattan_accuracy
value: 99.81881188118813
- type: manhattan_ap
value: 95.39767454613512
- type: manhattan_f1
value: 90.62019477191186
- type: manhattan_precision
value: 92.95478443743428
- type: manhattan_recall
value: 88.4
- type: max_accuracy
value: 99.82178217821782
- type: max_ap
value: 95.41507679819003
- type: max_f1
value: 90.9456740442656
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 64.96313921233748
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 33.602625720956745
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 51.32659230651731
- type: mrr
value: 52.33861726508785
- task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 31.01587644214203
- type: cos_sim_spearman
value: 30.974306908731013
- type: dot_pearson
value: 29.83339853838187
- type: dot_spearman
value: 30.07761671934048
- task:
type: Retrieval
dataset:
type: trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.22
- type: map_at_10
value: 1.9539999999999997
- type: map_at_100
value: 11.437
- type: map_at_1000
value: 27.861000000000004
- type: map_at_3
value: 0.6479999999999999
- type: map_at_5
value: 1.0410000000000001
- type: mrr_at_1
value: 84
- type: mrr_at_10
value: 90.333
- type: mrr_at_100
value: 90.333
- type: mrr_at_1000
value: 90.333
- type: mrr_at_3
value: 90.333
- type: mrr_at_5
value: 90.333
- type: ndcg_at_1
value: 80
- type: ndcg_at_10
value: 78.31700000000001
- type: ndcg_at_100
value: 59.396
- type: ndcg_at_1000
value: 52.733
- type: ndcg_at_3
value: 81.46900000000001
- type: ndcg_at_5
value: 80.74
- type: precision_at_1
value: 84
- type: precision_at_10
value: 84
- type: precision_at_100
value: 60.980000000000004
- type: precision_at_1000
value: 23.432
- type: precision_at_3
value: 87.333
- type: precision_at_5
value: 86.8
- type: recall_at_1
value: 0.22
- type: recall_at_10
value: 2.156
- type: recall_at_100
value: 14.557999999999998
- type: recall_at_1000
value: 49.553999999999995
- type: recall_at_3
value: 0.685
- type: recall_at_5
value: 1.121
- task:
type: Retrieval
dataset:
type: webis-touche2020
name: MTEB Touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 3.373
- type: map_at_10
value: 11.701
- type: map_at_100
value: 17.144000000000002
- type: map_at_1000
value: 18.624
- type: map_at_3
value: 6.552
- type: map_at_5
value: 9.372
- type: mrr_at_1
value: 38.775999999999996
- type: mrr_at_10
value: 51.975
- type: mrr_at_100
value: 52.873999999999995
- type: mrr_at_1000
value: 52.873999999999995
- type: mrr_at_3
value: 47.619
- type: mrr_at_5
value: 50.578
- type: ndcg_at_1
value: 36.735
- type: ndcg_at_10
value: 27.212999999999997
- type: ndcg_at_100
value: 37.245
- type: ndcg_at_1000
value: 48.602000000000004
- type: ndcg_at_3
value: 30.916
- type: ndcg_at_5
value: 30.799
- type: precision_at_1
value: 38.775999999999996
- type: precision_at_10
value: 23.469
- type: precision_at_100
value: 7.327
- type: precision_at_1000
value: 1.486
- type: precision_at_3
value: 31.973000000000003
- type: precision_at_5
value: 32.245000000000005
- type: recall_at_1
value: 3.373
- type: recall_at_10
value: 17.404
- type: recall_at_100
value: 46.105000000000004
- type: recall_at_1000
value: 80.35
- type: recall_at_3
value: 7.4399999999999995
- type: recall_at_5
value: 12.183
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 70.5592
- type: ap
value: 14.330910591410134
- type: f1
value: 54.45745186286521
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 61.20543293718167
- type: f1
value: 61.45365480309872
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 43.81162998944145
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 86.69011146212075
- type: cos_sim_ap
value: 76.09792353652536
- type: cos_sim_f1
value: 70.10202763786646
- type: cos_sim_precision
value: 68.65671641791045
- type: cos_sim_recall
value: 71.60949868073878
- type: dot_accuracy
value: 85.33110806461227
- type: dot_ap
value: 70.19304383327554
- type: dot_f1
value: 67.22494202525122
- type: dot_precision
value: 65.6847935548842
- type: dot_recall
value: 68.83905013192611
- type: euclidean_accuracy
value: 86.5410979316922
- type: euclidean_ap
value: 75.91906915651882
- type: euclidean_f1
value: 69.6798975672215
- type: euclidean_precision
value: 67.6865671641791
- type: euclidean_recall
value: 71.79419525065963
- type: manhattan_accuracy
value: 86.60070334386363
- type: manhattan_ap
value: 75.94617413885031
- type: manhattan_f1
value: 69.52689565780946
- type: manhattan_precision
value: 68.3312101910828
- type: manhattan_recall
value: 70.76517150395777
- type: max_accuracy
value: 86.69011146212075
- type: max_ap
value: 76.09792353652536
- type: max_f1
value: 70.10202763786646
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.25951798812434
- type: cos_sim_ap
value: 86.31476416599727
- type: cos_sim_f1
value: 78.52709971038477
- type: cos_sim_precision
value: 76.7629972792117
- type: cos_sim_recall
value: 80.37419156144134
- type: dot_accuracy
value: 88.03896456708192
- type: dot_ap
value: 83.26963599196237
- type: dot_f1
value: 76.72696459492317
- type: dot_precision
value: 73.56411162133521
- type: dot_recall
value: 80.17400677548507
- type: euclidean_accuracy
value: 89.21682772538519
- type: euclidean_ap
value: 86.29306071289969
- type: euclidean_f1
value: 78.40827030519554
- type: euclidean_precision
value: 77.42250243939053
- type: euclidean_recall
value: 79.41946412072683
- type: manhattan_accuracy
value: 89.22458959133776
- type: manhattan_ap
value: 86.2901934710645
- type: manhattan_f1
value: 78.54211378440453
- type: manhattan_precision
value: 76.85505858079729
- type: manhattan_recall
value: 80.30489682784109
- type: max_accuracy
value: 89.25951798812434
- type: max_ap
value: 86.31476416599727
- type: max_f1
value: 78.54211378440453
language:
- en
license: mit
---
## E5-large
**News (May 2023): please switch to [e5-large-v2](https://huggingface.co/intfloat/e5-large-v2), which offers better performance with the same usage.**
[Text Embeddings by Weakly-Supervised Contrastive Pre-training](https://arxiv.org/pdf/2212.03533.pdf).
Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei, arXiv 2022
This model has 24 layers and the embedding size is 1024.
## Usage
Below is an example that encodes queries and passages from the MS-MARCO passage ranking dataset.
```python
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel
def average_pool(last_hidden_states: Tensor,
attention_mask: Tensor) -> Tensor:
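    # Mean-pool token embeddings, zeroing out padded positions via the attention mask.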
last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
# Each input text should start with "query: " or "passage: ".
# For tasks other than retrieval, you can simply use the "query: " prefix.
input_texts = ['query: how much protein should a female eat',
'query: summit define',
"passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."]
tokenizer = AutoTokenizer.from_pretrained('intfloat/e5-large')
model = AutoModel.from_pretrained('intfloat/e5-large')
# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
# normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
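# Embeddings are unit-length after normalization, so this matrix product yields cosine similarities (scaled by 100).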
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
```
## Training Details
Please refer to our paper at [https://arxiv.org/pdf/2212.03533.pdf](https://arxiv.org/pdf/2212.03533.pdf).
## Benchmark Evaluation
Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results
on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB](https://arxiv.org/abs/2210.07316) benchmarks.
## Support for Sentence Transformers
Below is an example for usage with sentence_transformers.
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('intfloat/e5-large')
input_texts = [
'query: how much protein should a female eat',
'query: summit define',
"passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."
]
embeddings = model.encode(input_texts, normalize_embeddings=True)
```
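Because `normalize_embeddings=True` returns unit-length vectors, cosine similarity reduces to a plain dot product. Continuing from the snippet above (a minimal sketch, not part of the original card):
```python
# Query-passage similarity matrix: rows are the two queries, columns the two passages.
scores = embeddings[:2] @ embeddings[2:].T
print(scores.tolist())
```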
Package requirements:
`pip install sentence_transformers~=2.2.2`
Contributors: [michaelfeil](https://huggingface.co/michaelfeil)
## FAQ
**1. Do I need to add the prefix "query: " and "passage: " to input texts?**
Yes, this is how the model is trained; omitting the prefixes will degrade performance.
Here are some rules of thumb (a small helper sketch follows the list):
- Use "query: " and "passage: " correspondingly for asymmetric tasks such as passage retrieval in open QA, ad-hoc information retrieval.
- Use "query: " prefix for symmetric tasks such as semantic similarity, paraphrase retrieval.
- Use "query: " prefix if you want to use embeddings as features, such as linear probing classification, clustering.
**2. Why are my reproduced results slightly different from those reported in the model card?**
Different versions of `transformers` and `pytorch` could cause negligible but non-zero performance differences.
**3. Why are the cosine similarity scores distributed between 0.7 and 1.0?**
This is known and expected behavior, as we use a low temperature (0.01) for the InfoNCE contrastive loss.
For text embedding tasks like text retrieval or semantic similarity,
what matters is the relative order of the scores instead of the absolute values,
so this should not be an issue.
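For context, a standard formulation of the InfoNCE loss with temperature $\tau$ is shown below (generic notation, not copied from the paper); dividing cosine similarities by $\tau = 0.01$ magnifies small score gaps, so training constrains only the ordering of scores, not their absolute spread:
```
\mathcal{L} = -\log \frac{\exp(\cos(q, p^{+}) / \tau)}
                         {\exp(\cos(q, p^{+}) / \tau) + \sum_{i} \exp(\cos(q, p_{i}^{-}) / \tau)},
\qquad \tau = 0.01
```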
## Citation
If you find our paper or models helpful, please consider citing as follows:
```
@article{wang2022text,
title={Text Embeddings by Weakly-Supervised Contrastive Pre-training},
author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Jiao, Binxing and Yang, Linjun and Jiang, Daxin and Majumder, Rangan and Wei, Furu},
journal={arXiv preprint arXiv:2212.03533},
year={2022}
}
```
## Limitations
This model only works for English texts. Long texts will be truncated to at most 512 tokens.
|
intfloat/e5-small | intfloat | 2023-08-07T04:58:08Z | 56,836 | 41 | sentence-transformers | ["sentence-transformers", "pytorch", "onnx", "safetensors", "bert", "mteb", "Sentence Transformers", "sentence-similarity", "en", "arxiv:2212.03533", "arxiv:2104.08663", "arxiv:2210.07316", "license:mit", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us"] | sentence-similarity | 2022-12-07T06:48:03Z |
---
tags:
- mteb
- Sentence Transformers
- sentence-similarity
- sentence-transformers
model-index:
- name: e5-small
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 76.22388059701493
- type: ap
value: 40.27466219523129
- type: f1
value: 70.60533006025108
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 87.525775
- type: ap
value: 83.51063993897611
- type: f1
value: 87.49342736805572
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 42.611999999999995
- type: f1
value: 42.05088045932892
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.826
- type: map_at_10
value: 38.269
- type: map_at_100
value: 39.322
- type: map_at_1000
value: 39.344
- type: map_at_3
value: 33.428000000000004
- type: map_at_5
value: 36.063
- type: mrr_at_1
value: 24.253
- type: mrr_at_10
value: 38.425
- type: mrr_at_100
value: 39.478
- type: mrr_at_1000
value: 39.5
- type: mrr_at_3
value: 33.606
- type: mrr_at_5
value: 36.195
- type: ndcg_at_1
value: 23.826
- type: ndcg_at_10
value: 46.693
- type: ndcg_at_100
value: 51.469
- type: ndcg_at_1000
value: 52.002
- type: ndcg_at_3
value: 36.603
- type: ndcg_at_5
value: 41.365
- type: precision_at_1
value: 23.826
- type: precision_at_10
value: 7.383000000000001
- type: precision_at_100
value: 0.9530000000000001
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 15.268
- type: precision_at_5
value: 11.479000000000001
- type: recall_at_1
value: 23.826
- type: recall_at_10
value: 73.82600000000001
- type: recall_at_100
value: 95.306
- type: recall_at_1000
value: 99.431
- type: recall_at_3
value: 45.804
- type: recall_at_5
value: 57.397
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 44.13995374767436
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 37.13950072624313
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 59.35843292105327
- type: mrr
value: 73.72312359846987
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 84.55140418324174
- type: cos_sim_spearman
value: 84.21637675860022
- type: euclidean_pearson
value: 81.26069614610006
- type: euclidean_spearman
value: 83.25069210421785
- type: manhattan_pearson
value: 80.17441422581014
- type: manhattan_spearman
value: 81.87596198487877
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 81.87337662337661
- type: f1
value: 81.76647866926402
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 35.80600542614507
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 31.86321613256603
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.054
- type: map_at_10
value: 40.699999999999996
- type: map_at_100
value: 41.818
- type: map_at_1000
value: 41.959999999999994
- type: map_at_3
value: 37.742
- type: map_at_5
value: 39.427
- type: mrr_at_1
value: 38.769999999999996
- type: mrr_at_10
value: 46.150000000000006
- type: mrr_at_100
value: 46.865
- type: mrr_at_1000
value: 46.925
- type: mrr_at_3
value: 43.705
- type: mrr_at_5
value: 45.214999999999996
- type: ndcg_at_1
value: 38.769999999999996
- type: ndcg_at_10
value: 45.778
- type: ndcg_at_100
value: 50.38
- type: ndcg_at_1000
value: 52.922999999999995
- type: ndcg_at_3
value: 41.597
- type: ndcg_at_5
value: 43.631
- type: precision_at_1
value: 38.769999999999996
- type: precision_at_10
value: 8.269
- type: precision_at_100
value: 1.278
- type: precision_at_1000
value: 0.178
- type: precision_at_3
value: 19.266
- type: precision_at_5
value: 13.705
- type: recall_at_1
value: 32.054
- type: recall_at_10
value: 54.947
- type: recall_at_100
value: 74.79599999999999
- type: recall_at_1000
value: 91.40899999999999
- type: recall_at_3
value: 42.431000000000004
- type: recall_at_5
value: 48.519
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 29.035
- type: map_at_10
value: 38.007000000000005
- type: map_at_100
value: 39.125
- type: map_at_1000
value: 39.251999999999995
- type: map_at_3
value: 35.77
- type: map_at_5
value: 37.057
- type: mrr_at_1
value: 36.497
- type: mrr_at_10
value: 44.077
- type: mrr_at_100
value: 44.743
- type: mrr_at_1000
value: 44.79
- type: mrr_at_3
value: 42.123
- type: mrr_at_5
value: 43.308
- type: ndcg_at_1
value: 36.497
- type: ndcg_at_10
value: 42.986000000000004
- type: ndcg_at_100
value: 47.323
- type: ndcg_at_1000
value: 49.624
- type: ndcg_at_3
value: 39.805
- type: ndcg_at_5
value: 41.286
- type: precision_at_1
value: 36.497
- type: precision_at_10
value: 7.8340000000000005
- type: precision_at_100
value: 1.269
- type: precision_at_1000
value: 0.178
- type: precision_at_3
value: 19.023
- type: precision_at_5
value: 13.248
- type: recall_at_1
value: 29.035
- type: recall_at_10
value: 51.06
- type: recall_at_100
value: 69.64099999999999
- type: recall_at_1000
value: 84.49
- type: recall_at_3
value: 41.333999999999996
- type: recall_at_5
value: 45.663
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 37.239
- type: map_at_10
value: 47.873
- type: map_at_100
value: 48.842999999999996
- type: map_at_1000
value: 48.913000000000004
- type: map_at_3
value: 45.050000000000004
- type: map_at_5
value: 46.498
- type: mrr_at_1
value: 42.508
- type: mrr_at_10
value: 51.44
- type: mrr_at_100
value: 52.087
- type: mrr_at_1000
value: 52.129999999999995
- type: mrr_at_3
value: 49.164
- type: mrr_at_5
value: 50.343
- type: ndcg_at_1
value: 42.508
- type: ndcg_at_10
value: 53.31399999999999
- type: ndcg_at_100
value: 57.245000000000005
- type: ndcg_at_1000
value: 58.794000000000004
- type: ndcg_at_3
value: 48.295
- type: ndcg_at_5
value: 50.415
- type: precision_at_1
value: 42.508
- type: precision_at_10
value: 8.458
- type: precision_at_100
value: 1.133
- type: precision_at_1000
value: 0.132
- type: precision_at_3
value: 21.191
- type: precision_at_5
value: 14.307
- type: recall_at_1
value: 37.239
- type: recall_at_10
value: 65.99000000000001
- type: recall_at_100
value: 82.99499999999999
- type: recall_at_1000
value: 94.128
- type: recall_at_3
value: 52.382
- type: recall_at_5
value: 57.648999999999994
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.039
- type: map_at_10
value: 29.694
- type: map_at_100
value: 30.587999999999997
- type: map_at_1000
value: 30.692999999999998
- type: map_at_3
value: 27.708
- type: map_at_5
value: 28.774
- type: mrr_at_1
value: 24.633
- type: mrr_at_10
value: 31.478
- type: mrr_at_100
value: 32.299
- type: mrr_at_1000
value: 32.381
- type: mrr_at_3
value: 29.435
- type: mrr_at_5
value: 30.446
- type: ndcg_at_1
value: 24.633
- type: ndcg_at_10
value: 33.697
- type: ndcg_at_100
value: 38.080000000000005
- type: ndcg_at_1000
value: 40.812
- type: ndcg_at_3
value: 29.654000000000003
- type: ndcg_at_5
value: 31.474000000000004
- type: precision_at_1
value: 24.633
- type: precision_at_10
value: 5.0729999999999995
- type: precision_at_100
value: 0.753
- type: precision_at_1000
value: 0.10300000000000001
- type: precision_at_3
value: 12.279
- type: precision_at_5
value: 8.452
- type: recall_at_1
value: 23.039
- type: recall_at_10
value: 44.275999999999996
- type: recall_at_100
value: 64.4
- type: recall_at_1000
value: 85.135
- type: recall_at_3
value: 33.394
- type: recall_at_5
value: 37.687
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 13.594999999999999
- type: map_at_10
value: 19.933999999999997
- type: map_at_100
value: 20.966
- type: map_at_1000
value: 21.087
- type: map_at_3
value: 17.749000000000002
- type: map_at_5
value: 19.156000000000002
- type: mrr_at_1
value: 17.662
- type: mrr_at_10
value: 24.407
- type: mrr_at_100
value: 25.385
- type: mrr_at_1000
value: 25.465
- type: mrr_at_3
value: 22.056
- type: mrr_at_5
value: 23.630000000000003
- type: ndcg_at_1
value: 17.662
- type: ndcg_at_10
value: 24.391
- type: ndcg_at_100
value: 29.681
- type: ndcg_at_1000
value: 32.923
- type: ndcg_at_3
value: 20.271
- type: ndcg_at_5
value: 22.621
- type: precision_at_1
value: 17.662
- type: precision_at_10
value: 4.44
- type: precision_at_100
value: 0.8200000000000001
- type: precision_at_1000
value: 0.125
- type: precision_at_3
value: 9.577
- type: precision_at_5
value: 7.313
- type: recall_at_1
value: 13.594999999999999
- type: recall_at_10
value: 33.976
- type: recall_at_100
value: 57.43000000000001
- type: recall_at_1000
value: 80.958
- type: recall_at_3
value: 22.897000000000002
- type: recall_at_5
value: 28.714000000000002
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 26.683
- type: map_at_10
value: 35.068
- type: map_at_100
value: 36.311
- type: map_at_1000
value: 36.436
- type: map_at_3
value: 32.371
- type: map_at_5
value: 33.761
- type: mrr_at_1
value: 32.435
- type: mrr_at_10
value: 40.721000000000004
- type: mrr_at_100
value: 41.535
- type: mrr_at_1000
value: 41.593
- type: mrr_at_3
value: 38.401999999999994
- type: mrr_at_5
value: 39.567
- type: ndcg_at_1
value: 32.435
- type: ndcg_at_10
value: 40.538000000000004
- type: ndcg_at_100
value: 45.963
- type: ndcg_at_1000
value: 48.400999999999996
- type: ndcg_at_3
value: 36.048
- type: ndcg_at_5
value: 37.899
- type: precision_at_1
value: 32.435
- type: precision_at_10
value: 7.1129999999999995
- type: precision_at_100
value: 1.162
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 16.683
- type: precision_at_5
value: 11.684
- type: recall_at_1
value: 26.683
- type: recall_at_10
value: 51.517
- type: recall_at_100
value: 74.553
- type: recall_at_1000
value: 90.649
- type: recall_at_3
value: 38.495000000000005
- type: recall_at_5
value: 43.495
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.186
- type: map_at_10
value: 31.972
- type: map_at_100
value: 33.117000000000004
- type: map_at_1000
value: 33.243
- type: map_at_3
value: 29.423
- type: map_at_5
value: 30.847
- type: mrr_at_1
value: 29.794999999999998
- type: mrr_at_10
value: 36.767
- type: mrr_at_100
value: 37.645
- type: mrr_at_1000
value: 37.716
- type: mrr_at_3
value: 34.513
- type: mrr_at_5
value: 35.791000000000004
- type: ndcg_at_1
value: 29.794999999999998
- type: ndcg_at_10
value: 36.786
- type: ndcg_at_100
value: 41.94
- type: ndcg_at_1000
value: 44.830999999999996
- type: ndcg_at_3
value: 32.504
- type: ndcg_at_5
value: 34.404
- type: precision_at_1
value: 29.794999999999998
- type: precision_at_10
value: 6.518
- type: precision_at_100
value: 1.0659999999999998
- type: precision_at_1000
value: 0.149
- type: precision_at_3
value: 15.296999999999999
- type: precision_at_5
value: 10.731
- type: recall_at_1
value: 24.186
- type: recall_at_10
value: 46.617
- type: recall_at_100
value: 68.75
- type: recall_at_1000
value: 88.864
- type: recall_at_3
value: 34.199
- type: recall_at_5
value: 39.462
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.22083333333333
- type: map_at_10
value: 31.606666666666662
- type: map_at_100
value: 32.6195
- type: map_at_1000
value: 32.739999999999995
- type: map_at_3
value: 29.37825
- type: map_at_5
value: 30.596083333333336
- type: mrr_at_1
value: 28.607916666666668
- type: mrr_at_10
value: 35.54591666666666
- type: mrr_at_100
value: 36.33683333333333
- type: mrr_at_1000
value: 36.40624999999999
- type: mrr_at_3
value: 33.526250000000005
- type: mrr_at_5
value: 34.6605
- type: ndcg_at_1
value: 28.607916666666668
- type: ndcg_at_10
value: 36.07966666666667
- type: ndcg_at_100
value: 40.73308333333333
- type: ndcg_at_1000
value: 43.40666666666666
- type: ndcg_at_3
value: 32.23525
- type: ndcg_at_5
value: 33.97083333333333
- type: precision_at_1
value: 28.607916666666668
- type: precision_at_10
value: 6.120333333333335
- type: precision_at_100
value: 0.9921666666666668
- type: precision_at_1000
value: 0.14091666666666666
- type: precision_at_3
value: 14.54975
- type: precision_at_5
value: 10.153166666666667
- type: recall_at_1
value: 24.22083333333333
- type: recall_at_10
value: 45.49183333333334
- type: recall_at_100
value: 66.28133333333332
- type: recall_at_1000
value: 85.16541666666667
- type: recall_at_3
value: 34.6485
- type: recall_at_5
value: 39.229749999999996
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 21.842
- type: map_at_10
value: 27.573999999999998
- type: map_at_100
value: 28.410999999999998
- type: map_at_1000
value: 28.502
- type: map_at_3
value: 25.921
- type: map_at_5
value: 26.888
- type: mrr_at_1
value: 24.08
- type: mrr_at_10
value: 29.915999999999997
- type: mrr_at_100
value: 30.669
- type: mrr_at_1000
value: 30.746000000000002
- type: mrr_at_3
value: 28.349000000000004
- type: mrr_at_5
value: 29.246
- type: ndcg_at_1
value: 24.08
- type: ndcg_at_10
value: 30.898999999999997
- type: ndcg_at_100
value: 35.272999999999996
- type: ndcg_at_1000
value: 37.679
- type: ndcg_at_3
value: 27.881
- type: ndcg_at_5
value: 29.432000000000002
- type: precision_at_1
value: 24.08
- type: precision_at_10
value: 4.678
- type: precision_at_100
value: 0.744
- type: precision_at_1000
value: 0.10300000000000001
- type: precision_at_3
value: 11.860999999999999
- type: precision_at_5
value: 8.16
- type: recall_at_1
value: 21.842
- type: recall_at_10
value: 38.66
- type: recall_at_100
value: 59.169000000000004
- type: recall_at_1000
value: 76.887
- type: recall_at_3
value: 30.532999999999998
- type: recall_at_5
value: 34.354
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 17.145
- type: map_at_10
value: 22.729
- type: map_at_100
value: 23.574
- type: map_at_1000
value: 23.695
- type: map_at_3
value: 21.044
- type: map_at_5
value: 21.981
- type: mrr_at_1
value: 20.888
- type: mrr_at_10
value: 26.529000000000003
- type: mrr_at_100
value: 27.308
- type: mrr_at_1000
value: 27.389000000000003
- type: mrr_at_3
value: 24.868000000000002
- type: mrr_at_5
value: 25.825
- type: ndcg_at_1
value: 20.888
- type: ndcg_at_10
value: 26.457000000000004
- type: ndcg_at_100
value: 30.764000000000003
- type: ndcg_at_1000
value: 33.825
- type: ndcg_at_3
value: 23.483999999999998
- type: ndcg_at_5
value: 24.836
- type: precision_at_1
value: 20.888
- type: precision_at_10
value: 4.58
- type: precision_at_100
value: 0.784
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 10.874
- type: precision_at_5
value: 7.639
- type: recall_at_1
value: 17.145
- type: recall_at_10
value: 33.938
- type: recall_at_100
value: 53.672
- type: recall_at_1000
value: 76.023
- type: recall_at_3
value: 25.363000000000003
- type: recall_at_5
value: 29.023
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.275
- type: map_at_10
value: 30.438
- type: map_at_100
value: 31.489
- type: map_at_1000
value: 31.601000000000003
- type: map_at_3
value: 28.647
- type: map_at_5
value: 29.660999999999998
- type: mrr_at_1
value: 28.077999999999996
- type: mrr_at_10
value: 34.098
- type: mrr_at_100
value: 35.025
- type: mrr_at_1000
value: 35.109
- type: mrr_at_3
value: 32.4
- type: mrr_at_5
value: 33.379999999999995
- type: ndcg_at_1
value: 28.077999999999996
- type: ndcg_at_10
value: 34.271
- type: ndcg_at_100
value: 39.352
- type: ndcg_at_1000
value: 42.199
- type: ndcg_at_3
value: 30.978
- type: ndcg_at_5
value: 32.498
- type: precision_at_1
value: 28.077999999999996
- type: precision_at_10
value: 5.345
- type: precision_at_100
value: 0.897
- type: precision_at_1000
value: 0.125
- type: precision_at_3
value: 13.526
- type: precision_at_5
value: 9.16
- type: recall_at_1
value: 24.275
- type: recall_at_10
value: 42.362
- type: recall_at_100
value: 64.461
- type: recall_at_1000
value: 84.981
- type: recall_at_3
value: 33.249
- type: recall_at_5
value: 37.214999999999996
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.358
- type: map_at_10
value: 30.062
- type: map_at_100
value: 31.189
- type: map_at_1000
value: 31.386999999999997
- type: map_at_3
value: 27.672
- type: map_at_5
value: 28.76
- type: mrr_at_1
value: 26.877000000000002
- type: mrr_at_10
value: 33.948
- type: mrr_at_100
value: 34.746
- type: mrr_at_1000
value: 34.816
- type: mrr_at_3
value: 31.884
- type: mrr_at_5
value: 33.001000000000005
- type: ndcg_at_1
value: 26.877000000000002
- type: ndcg_at_10
value: 34.977000000000004
- type: ndcg_at_100
value: 39.753
- type: ndcg_at_1000
value: 42.866
- type: ndcg_at_3
value: 30.956
- type: ndcg_at_5
value: 32.381
- type: precision_at_1
value: 26.877000000000002
- type: precision_at_10
value: 6.7
- type: precision_at_100
value: 1.287
- type: precision_at_1000
value: 0.215
- type: precision_at_3
value: 14.360999999999999
- type: precision_at_5
value: 10.119
- type: recall_at_1
value: 22.358
- type: recall_at_10
value: 44.183
- type: recall_at_100
value: 67.14
- type: recall_at_1000
value: 87.53999999999999
- type: recall_at_3
value: 32.79
- type: recall_at_5
value: 36.829
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 19.198999999999998
- type: map_at_10
value: 25.229000000000003
- type: map_at_100
value: 26.003
- type: map_at_1000
value: 26.111
- type: map_at_3
value: 23.442
- type: map_at_5
value: 24.343
- type: mrr_at_1
value: 21.072
- type: mrr_at_10
value: 27.02
- type: mrr_at_100
value: 27.735
- type: mrr_at_1000
value: 27.815
- type: mrr_at_3
value: 25.416
- type: mrr_at_5
value: 26.173999999999996
- type: ndcg_at_1
value: 21.072
- type: ndcg_at_10
value: 28.862
- type: ndcg_at_100
value: 33.043
- type: ndcg_at_1000
value: 36.003
- type: ndcg_at_3
value: 25.35
- type: ndcg_at_5
value: 26.773000000000003
- type: precision_at_1
value: 21.072
- type: precision_at_10
value: 4.436
- type: precision_at_100
value: 0.713
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 10.659
- type: precision_at_5
value: 7.32
- type: recall_at_1
value: 19.198999999999998
- type: recall_at_10
value: 38.376
- type: recall_at_100
value: 58.36900000000001
- type: recall_at_1000
value: 80.92099999999999
- type: recall_at_3
value: 28.715000000000003
- type: recall_at_5
value: 32.147
- task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.9319999999999995
- type: map_at_10
value: 10.483
- type: map_at_100
value: 11.97
- type: map_at_1000
value: 12.171999999999999
- type: map_at_3
value: 8.477
- type: map_at_5
value: 9.495000000000001
- type: mrr_at_1
value: 13.094
- type: mrr_at_10
value: 21.282
- type: mrr_at_100
value: 22.556
- type: mrr_at_1000
value: 22.628999999999998
- type: mrr_at_3
value: 18.218999999999998
- type: mrr_at_5
value: 19.900000000000002
- type: ndcg_at_1
value: 13.094
- type: ndcg_at_10
value: 15.811
- type: ndcg_at_100
value: 23.035
- type: ndcg_at_1000
value: 27.089999999999996
- type: ndcg_at_3
value: 11.905000000000001
- type: ndcg_at_5
value: 13.377
- type: precision_at_1
value: 13.094
- type: precision_at_10
value: 5.225
- type: precision_at_100
value: 1.2970000000000002
- type: precision_at_1000
value: 0.203
- type: precision_at_3
value: 8.86
- type: precision_at_5
value: 7.309
- type: recall_at_1
value: 5.9319999999999995
- type: recall_at_10
value: 20.305
- type: recall_at_100
value: 46.314
- type: recall_at_1000
value: 69.612
- type: recall_at_3
value: 11.21
- type: recall_at_5
value: 14.773
- task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.674
- type: map_at_10
value: 17.822
- type: map_at_100
value: 24.794
- type: map_at_1000
value: 26.214
- type: map_at_3
value: 12.690999999999999
- type: map_at_5
value: 15.033
- type: mrr_at_1
value: 61.75000000000001
- type: mrr_at_10
value: 71.58
- type: mrr_at_100
value: 71.923
- type: mrr_at_1000
value: 71.932
- type: mrr_at_3
value: 70.125
- type: mrr_at_5
value: 71.038
- type: ndcg_at_1
value: 51
- type: ndcg_at_10
value: 38.637
- type: ndcg_at_100
value: 42.398
- type: ndcg_at_1000
value: 48.962
- type: ndcg_at_3
value: 43.29
- type: ndcg_at_5
value: 40.763
- type: precision_at_1
value: 61.75000000000001
- type: precision_at_10
value: 30.125
- type: precision_at_100
value: 9.53
- type: precision_at_1000
value: 1.9619999999999997
- type: precision_at_3
value: 45.583
- type: precision_at_5
value: 38.95
- type: recall_at_1
value: 8.674
- type: recall_at_10
value: 23.122
- type: recall_at_100
value: 47.46
- type: recall_at_1000
value: 67.662
- type: recall_at_3
value: 13.946
- type: recall_at_5
value: 17.768
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 46.86000000000001
- type: f1
value: 41.343580452760776
- task:
type: Retrieval
dataset:
type: fever
name: MTEB FEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 36.609
- type: map_at_10
value: 47.552
- type: map_at_100
value: 48.283
- type: map_at_1000
value: 48.321
- type: map_at_3
value: 44.869
- type: map_at_5
value: 46.509
- type: mrr_at_1
value: 39.214
- type: mrr_at_10
value: 50.434999999999995
- type: mrr_at_100
value: 51.122
- type: mrr_at_1000
value: 51.151
- type: mrr_at_3
value: 47.735
- type: mrr_at_5
value: 49.394
- type: ndcg_at_1
value: 39.214
- type: ndcg_at_10
value: 53.52400000000001
- type: ndcg_at_100
value: 56.997
- type: ndcg_at_1000
value: 57.975
- type: ndcg_at_3
value: 48.173
- type: ndcg_at_5
value: 51.05800000000001
- type: precision_at_1
value: 39.214
- type: precision_at_10
value: 7.573
- type: precision_at_100
value: 0.9440000000000001
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 19.782
- type: precision_at_5
value: 13.453000000000001
- type: recall_at_1
value: 36.609
- type: recall_at_10
value: 69.247
- type: recall_at_100
value: 84.99600000000001
- type: recall_at_1000
value: 92.40899999999999
- type: recall_at_3
value: 54.856
- type: recall_at_5
value: 61.797000000000004
- task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 16.466
- type: map_at_10
value: 27.060000000000002
- type: map_at_100
value: 28.511999999999997
- type: map_at_1000
value: 28.693
- type: map_at_3
value: 22.777
- type: map_at_5
value: 25.086000000000002
- type: mrr_at_1
value: 32.716
- type: mrr_at_10
value: 41.593999999999994
- type: mrr_at_100
value: 42.370000000000005
- type: mrr_at_1000
value: 42.419000000000004
- type: mrr_at_3
value: 38.143
- type: mrr_at_5
value: 40.288000000000004
- type: ndcg_at_1
value: 32.716
- type: ndcg_at_10
value: 34.795
- type: ndcg_at_100
value: 40.58
- type: ndcg_at_1000
value: 43.993
- type: ndcg_at_3
value: 29.573
- type: ndcg_at_5
value: 31.583
- type: precision_at_1
value: 32.716
- type: precision_at_10
value: 9.937999999999999
- type: precision_at_100
value: 1.585
- type: precision_at_1000
value: 0.22
- type: precision_at_3
value: 19.496
- type: precision_at_5
value: 15.247
- type: recall_at_1
value: 16.466
- type: recall_at_10
value: 42.886
- type: recall_at_100
value: 64.724
- type: recall_at_1000
value: 85.347
- type: recall_at_3
value: 26.765
- type: recall_at_5
value: 33.603
- task:
type: Retrieval
dataset:
type: hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 33.025
- type: map_at_10
value: 47.343
- type: map_at_100
value: 48.207
- type: map_at_1000
value: 48.281
- type: map_at_3
value: 44.519
- type: map_at_5
value: 46.217000000000006
- type: mrr_at_1
value: 66.05
- type: mrr_at_10
value: 72.94699999999999
- type: mrr_at_100
value: 73.289
- type: mrr_at_1000
value: 73.30499999999999
- type: mrr_at_3
value: 71.686
- type: mrr_at_5
value: 72.491
- type: ndcg_at_1
value: 66.05
- type: ndcg_at_10
value: 56.338
- type: ndcg_at_100
value: 59.599999999999994
- type: ndcg_at_1000
value: 61.138000000000005
- type: ndcg_at_3
value: 52.034000000000006
- type: ndcg_at_5
value: 54.352000000000004
- type: precision_at_1
value: 66.05
- type: precision_at_10
value: 11.693000000000001
- type: precision_at_100
value: 1.425
- type: precision_at_1000
value: 0.163
- type: precision_at_3
value: 32.613
- type: precision_at_5
value: 21.401999999999997
- type: recall_at_1
value: 33.025
- type: recall_at_10
value: 58.467
- type: recall_at_100
value: 71.242
- type: recall_at_1000
value: 81.452
- type: recall_at_3
value: 48.92
- type: recall_at_5
value: 53.504
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 75.5492
- type: ap
value: 69.42911637216271
- type: f1
value: 75.39113704261024
- task:
type: Retrieval
dataset:
type: msmarco
name: MTEB MSMARCO
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 23.173
- type: map_at_10
value: 35.453
- type: map_at_100
value: 36.573
- type: map_at_1000
value: 36.620999999999995
- type: map_at_3
value: 31.655
- type: map_at_5
value: 33.823
- type: mrr_at_1
value: 23.868000000000002
- type: mrr_at_10
value: 36.085
- type: mrr_at_100
value: 37.15
- type: mrr_at_1000
value: 37.193
- type: mrr_at_3
value: 32.376
- type: mrr_at_5
value: 34.501
- type: ndcg_at_1
value: 23.854
- type: ndcg_at_10
value: 42.33
- type: ndcg_at_100
value: 47.705999999999996
- type: ndcg_at_1000
value: 48.91
- type: ndcg_at_3
value: 34.604
- type: ndcg_at_5
value: 38.473
- type: precision_at_1
value: 23.854
- type: precision_at_10
value: 6.639
- type: precision_at_100
value: 0.932
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 14.685
- type: precision_at_5
value: 10.782
- type: recall_at_1
value: 23.173
- type: recall_at_10
value: 63.441
- type: recall_at_100
value: 88.25
- type: recall_at_1000
value: 97.438
- type: recall_at_3
value: 42.434
- type: recall_at_5
value: 51.745
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 92.05426356589147
- type: f1
value: 91.88068588063942
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 73.23985408116735
- type: f1
value: 55.858906745287506
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 72.21923335574984
- type: f1
value: 70.0174116204253
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.77673167451245
- type: f1
value: 75.44811354778666
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 31.340414710728737
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 28.196676760061578
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 29.564149683482206
- type: mrr
value: 30.28995474250486
- task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.93
- type: map_at_10
value: 12.828000000000001
- type: map_at_100
value: 15.501000000000001
- type: map_at_1000
value: 16.791
- type: map_at_3
value: 9.727
- type: map_at_5
value: 11.318999999999999
- type: mrr_at_1
value: 47.678
- type: mrr_at_10
value: 55.893
- type: mrr_at_100
value: 56.491
- type: mrr_at_1000
value: 56.53
- type: mrr_at_3
value: 54.386
- type: mrr_at_5
value: 55.516
- type: ndcg_at_1
value: 45.975
- type: ndcg_at_10
value: 33.928999999999995
- type: ndcg_at_100
value: 30.164
- type: ndcg_at_1000
value: 38.756
- type: ndcg_at_3
value: 41.077000000000005
- type: ndcg_at_5
value: 38.415
- type: precision_at_1
value: 47.678
- type: precision_at_10
value: 24.365000000000002
- type: precision_at_100
value: 7.344
- type: precision_at_1000
value: 1.994
- type: precision_at_3
value: 38.184000000000005
- type: precision_at_5
value: 33.003
- type: recall_at_1
value: 5.93
- type: recall_at_10
value: 16.239
- type: recall_at_100
value: 28.782999999999998
- type: recall_at_1000
value: 60.11
- type: recall_at_3
value: 10.700999999999999
- type: recall_at_5
value: 13.584
- task:
type: Retrieval
dataset:
type: nq
name: MTEB NQ
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 36.163000000000004
- type: map_at_10
value: 51.520999999999994
- type: map_at_100
value: 52.449
- type: map_at_1000
value: 52.473000000000006
- type: map_at_3
value: 47.666
- type: map_at_5
value: 50.043000000000006
- type: mrr_at_1
value: 40.266999999999996
- type: mrr_at_10
value: 54.074
- type: mrr_at_100
value: 54.722
- type: mrr_at_1000
value: 54.739000000000004
- type: mrr_at_3
value: 51.043000000000006
- type: mrr_at_5
value: 52.956
- type: ndcg_at_1
value: 40.238
- type: ndcg_at_10
value: 58.73199999999999
- type: ndcg_at_100
value: 62.470000000000006
- type: ndcg_at_1000
value: 63.083999999999996
- type: ndcg_at_3
value: 51.672
- type: ndcg_at_5
value: 55.564
- type: precision_at_1
value: 40.238
- type: precision_at_10
value: 9.279
- type: precision_at_100
value: 1.139
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 23.078000000000003
- type: precision_at_5
value: 16.176
- type: recall_at_1
value: 36.163000000000004
- type: recall_at_10
value: 77.88199999999999
- type: recall_at_100
value: 93.83399999999999
- type: recall_at_1000
value: 98.465
- type: recall_at_3
value: 59.857000000000006
- type: recall_at_5
value: 68.73599999999999
- task:
type: Retrieval
dataset:
type: quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 70.344
- type: map_at_10
value: 83.907
- type: map_at_100
value: 84.536
- type: map_at_1000
value: 84.557
- type: map_at_3
value: 80.984
- type: map_at_5
value: 82.844
- type: mrr_at_1
value: 81.02000000000001
- type: mrr_at_10
value: 87.158
- type: mrr_at_100
value: 87.268
- type: mrr_at_1000
value: 87.26899999999999
- type: mrr_at_3
value: 86.17
- type: mrr_at_5
value: 86.87
- type: ndcg_at_1
value: 81.02000000000001
- type: ndcg_at_10
value: 87.70700000000001
- type: ndcg_at_100
value: 89.004
- type: ndcg_at_1000
value: 89.139
- type: ndcg_at_3
value: 84.841
- type: ndcg_at_5
value: 86.455
- type: precision_at_1
value: 81.02000000000001
- type: precision_at_10
value: 13.248999999999999
- type: precision_at_100
value: 1.516
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 36.963
- type: precision_at_5
value: 24.33
- type: recall_at_1
value: 70.344
- type: recall_at_10
value: 94.75099999999999
- type: recall_at_100
value: 99.30499999999999
- type: recall_at_1000
value: 99.928
- type: recall_at_3
value: 86.506
- type: recall_at_5
value: 91.083
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 42.873718018378305
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 56.39477366450528
- task:
type: Retrieval
dataset:
type: scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 3.868
- type: map_at_10
value: 9.611
- type: map_at_100
value: 11.087
- type: map_at_1000
value: 11.332
- type: map_at_3
value: 6.813
- type: map_at_5
value: 8.233
- type: mrr_at_1
value: 19
- type: mrr_at_10
value: 28.457
- type: mrr_at_100
value: 29.613
- type: mrr_at_1000
value: 29.695
- type: mrr_at_3
value: 25.55
- type: mrr_at_5
value: 27.29
- type: ndcg_at_1
value: 19
- type: ndcg_at_10
value: 16.419
- type: ndcg_at_100
value: 22.817999999999998
- type: ndcg_at_1000
value: 27.72
- type: ndcg_at_3
value: 15.379000000000001
- type: ndcg_at_5
value: 13.645
- type: precision_at_1
value: 19
- type: precision_at_10
value: 8.540000000000001
- type: precision_at_100
value: 1.7819999999999998
- type: precision_at_1000
value: 0.297
- type: precision_at_3
value: 14.267
- type: precision_at_5
value: 12.04
- type: recall_at_1
value: 3.868
- type: recall_at_10
value: 17.288
- type: recall_at_100
value: 36.144999999999996
- type: recall_at_1000
value: 60.199999999999996
- type: recall_at_3
value: 8.688
- type: recall_at_5
value: 12.198
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 83.96614722598582
- type: cos_sim_spearman
value: 78.9003023008781
- type: euclidean_pearson
value: 81.01829384436505
- type: euclidean_spearman
value: 78.93248416788914
- type: manhattan_pearson
value: 81.1665428926402
- type: manhattan_spearman
value: 78.93264116287453
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 83.54613363895993
- type: cos_sim_spearman
value: 75.1883451602451
- type: euclidean_pearson
value: 79.70320886899894
- type: euclidean_spearman
value: 74.5917140136796
- type: manhattan_pearson
value: 79.82157067185999
- type: manhattan_spearman
value: 74.74185720594735
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 81.30430156721782
- type: cos_sim_spearman
value: 81.79962989974364
- type: euclidean_pearson
value: 80.89058823224924
- type: euclidean_spearman
value: 81.35929372984597
- type: manhattan_pearson
value: 81.12204370487478
- type: manhattan_spearman
value: 81.6248963282232
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 81.13064504403134
- type: cos_sim_spearman
value: 78.48371403924872
- type: euclidean_pearson
value: 80.16794919665591
- type: euclidean_spearman
value: 78.29216082221699
- type: manhattan_pearson
value: 80.22308565207301
- type: manhattan_spearman
value: 78.37829229948022
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 86.52918899541099
- type: cos_sim_spearman
value: 87.49276894673142
- type: euclidean_pearson
value: 86.77440570164254
- type: euclidean_spearman
value: 87.5753295736756
- type: manhattan_pearson
value: 86.86098573892133
- type: manhattan_spearman
value: 87.65848591821947
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 82.86805307244882
- type: cos_sim_spearman
value: 84.58066253757511
- type: euclidean_pearson
value: 84.38377000876991
- type: euclidean_spearman
value: 85.1837278784528
- type: manhattan_pearson
value: 84.41903291363842
- type: manhattan_spearman
value: 85.19023736251052
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 86.77218560282436
- type: cos_sim_spearman
value: 87.94243515296604
- type: euclidean_pearson
value: 88.22800939214864
- type: euclidean_spearman
value: 87.91106839439841
- type: manhattan_pearson
value: 88.17063269848741
- type: manhattan_spearman
value: 87.72751904126062
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 60.40731554300387
- type: cos_sim_spearman
value: 63.76300532966479
- type: euclidean_pearson
value: 62.94727878229085
- type: euclidean_spearman
value: 63.678039531461216
- type: manhattan_pearson
value: 63.00661039863549
- type: manhattan_spearman
value: 63.6282591984376
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 84.92731569745344
- type: cos_sim_spearman
value: 86.36336704300167
- type: euclidean_pearson
value: 86.09122224841195
- type: euclidean_spearman
value: 86.2116149319238
- type: manhattan_pearson
value: 86.07879456717032
- type: manhattan_spearman
value: 86.2022069635119
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 79.75976311752326
- type: mrr
value: 94.15782837351466
- task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 51.193999999999996
- type: map_at_10
value: 61.224999999999994
- type: map_at_100
value: 62.031000000000006
- type: map_at_1000
value: 62.066
- type: map_at_3
value: 59.269000000000005
- type: map_at_5
value: 60.159
- type: mrr_at_1
value: 53.667
- type: mrr_at_10
value: 62.74999999999999
- type: mrr_at_100
value: 63.39399999999999
- type: mrr_at_1000
value: 63.425
- type: mrr_at_3
value: 61.389
- type: mrr_at_5
value: 61.989000000000004
- type: ndcg_at_1
value: 53.667
- type: ndcg_at_10
value: 65.596
- type: ndcg_at_100
value: 68.906
- type: ndcg_at_1000
value: 69.78999999999999
- type: ndcg_at_3
value: 62.261
- type: ndcg_at_5
value: 63.453
- type: precision_at_1
value: 53.667
- type: precision_at_10
value: 8.667
- type: precision_at_100
value: 1.04
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 24.556
- type: precision_at_5
value: 15.6
- type: recall_at_1
value: 51.193999999999996
- type: recall_at_10
value: 77.156
- type: recall_at_100
value: 91.43299999999999
- type: recall_at_1000
value: 98.333
- type: recall_at_3
value: 67.994
- type: recall_at_5
value: 71.14399999999999
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.81485148514851
- type: cos_sim_ap
value: 95.28896513388551
- type: cos_sim_f1
value: 90.43478260869566
- type: cos_sim_precision
value: 92.56544502617801
- type: cos_sim_recall
value: 88.4
- type: dot_accuracy
value: 99.30594059405941
- type: dot_ap
value: 61.6432597455472
- type: dot_f1
value: 59.46481665014866
- type: dot_precision
value: 58.93909626719057
- type: dot_recall
value: 60
- type: euclidean_accuracy
value: 99.81980198019802
- type: euclidean_ap
value: 95.21411049527
- type: euclidean_f1
value: 91.06090373280944
- type: euclidean_precision
value: 89.47876447876449
- type: euclidean_recall
value: 92.7
- type: manhattan_accuracy
value: 99.81782178217821
- type: manhattan_ap
value: 95.32449994414968
- type: manhattan_f1
value: 90.86395233366436
- type: manhattan_precision
value: 90.23668639053254
- type: manhattan_recall
value: 91.5
- type: max_accuracy
value: 99.81980198019802
- type: max_ap
value: 95.32449994414968
- type: max_f1
value: 91.06090373280944
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 59.08045614613064
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 30.297802606804748
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 49.12801740706292
- type: mrr
value: 50.05592956879722
- task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 31.523347880124497
- type: cos_sim_spearman
value: 31.388214436391014
- type: dot_pearson
value: 24.55403435439901
- type: dot_spearman
value: 23.50153210841191
- task:
type: Retrieval
dataset:
type: trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.243
- type: map_at_10
value: 1.886
- type: map_at_100
value: 10.040000000000001
- type: map_at_1000
value: 23.768
- type: map_at_3
value: 0.674
- type: map_at_5
value: 1.079
- type: mrr_at_1
value: 88
- type: mrr_at_10
value: 93.667
- type: mrr_at_100
value: 93.667
- type: mrr_at_1000
value: 93.667
- type: mrr_at_3
value: 93.667
- type: mrr_at_5
value: 93.667
- type: ndcg_at_1
value: 83
- type: ndcg_at_10
value: 76.777
- type: ndcg_at_100
value: 55.153
- type: ndcg_at_1000
value: 47.912
- type: ndcg_at_3
value: 81.358
- type: ndcg_at_5
value: 80.74799999999999
- type: precision_at_1
value: 88
- type: precision_at_10
value: 80.80000000000001
- type: precision_at_100
value: 56.02
- type: precision_at_1000
value: 21.51
- type: precision_at_3
value: 86
- type: precision_at_5
value: 86
- type: recall_at_1
value: 0.243
- type: recall_at_10
value: 2.0869999999999997
- type: recall_at_100
value: 13.014000000000001
- type: recall_at_1000
value: 44.433
- type: recall_at_3
value: 0.6910000000000001
- type: recall_at_5
value: 1.1440000000000001
- task:
type: Retrieval
dataset:
type: webis-touche2020
name: MTEB Touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 3.066
- type: map_at_10
value: 10.615
- type: map_at_100
value: 16.463
- type: map_at_1000
value: 17.815
- type: map_at_3
value: 5.7860000000000005
- type: map_at_5
value: 7.353999999999999
- type: mrr_at_1
value: 38.775999999999996
- type: mrr_at_10
value: 53.846000000000004
- type: mrr_at_100
value: 54.37
- type: mrr_at_1000
value: 54.37
- type: mrr_at_3
value: 48.980000000000004
- type: mrr_at_5
value: 51.735
- type: ndcg_at_1
value: 34.694
- type: ndcg_at_10
value: 26.811
- type: ndcg_at_100
value: 37.342999999999996
- type: ndcg_at_1000
value: 47.964
- type: ndcg_at_3
value: 30.906
- type: ndcg_at_5
value: 27.77
- type: precision_at_1
value: 38.775999999999996
- type: precision_at_10
value: 23.878
- type: precision_at_100
value: 7.632999999999999
- type: precision_at_1000
value: 1.469
- type: precision_at_3
value: 31.973000000000003
- type: precision_at_5
value: 26.939
- type: recall_at_1
value: 3.066
- type: recall_at_10
value: 17.112
- type: recall_at_100
value: 47.723
- type: recall_at_1000
value: 79.50500000000001
- type: recall_at_3
value: 6.825
- type: recall_at_5
value: 9.584
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 72.76460000000002
- type: ap
value: 14.944240012137053
- type: f1
value: 55.89805777266571
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 63.30503678551217
- type: f1
value: 63.57492701921179
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 37.51066495006874
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 86.07021517553794
- type: cos_sim_ap
value: 74.15520712370555
- type: cos_sim_f1
value: 68.64321608040201
- type: cos_sim_precision
value: 65.51558752997602
- type: cos_sim_recall
value: 72.0844327176781
- type: dot_accuracy
value: 80.23484532395541
- type: dot_ap
value: 54.298763810214176
- type: dot_f1
value: 53.22254659779924
- type: dot_precision
value: 46.32525410476936
- type: dot_recall
value: 62.532981530343015
- type: euclidean_accuracy
value: 86.04637301066937
- type: euclidean_ap
value: 73.85333854233123
- type: euclidean_f1
value: 68.77723660599845
- type: euclidean_precision
value: 66.87437686939182
- type: euclidean_recall
value: 70.79155672823218
- type: manhattan_accuracy
value: 85.98676759849795
- type: manhattan_ap
value: 73.56016090035973
- type: manhattan_f1
value: 68.48878539036647
- type: manhattan_precision
value: 63.9505607690547
- type: manhattan_recall
value: 73.7203166226913
- type: max_accuracy
value: 86.07021517553794
- type: max_ap
value: 74.15520712370555
- type: max_f1
value: 68.77723660599845
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.92769821865176
- type: cos_sim_ap
value: 85.78879502899773
- type: cos_sim_f1
value: 78.14414083990464
- type: cos_sim_precision
value: 74.61651607480563
- type: cos_sim_recall
value: 82.0218663381583
- type: dot_accuracy
value: 84.95750378390964
- type: dot_ap
value: 75.80219641857563
- type: dot_f1
value: 70.13966179585681
- type: dot_precision
value: 65.71140262361251
- type: dot_recall
value: 75.20788420080073
- type: euclidean_accuracy
value: 88.93546008460433
- type: euclidean_ap
value: 85.72056428301667
- type: euclidean_f1
value: 78.14387902598124
- type: euclidean_precision
value: 75.3376688344172
- type: euclidean_recall
value: 81.16723129042192
- type: manhattan_accuracy
value: 88.96262661543835
- type: manhattan_ap
value: 85.76605136314335
- type: manhattan_f1
value: 78.26696165191743
- type: manhattan_precision
value: 75.0990659496179
- type: manhattan_recall
value: 81.71388974437943
- type: max_accuracy
value: 88.96262661543835
- type: max_ap
value: 85.78879502899773
- type: max_f1
value: 78.26696165191743
language:
- en
license: mit
---
# E5-small
**News (May 2023): please switch to [e5-small-v2](https://huggingface.co/intfloat/e5-small-v2), which offers better performance with the same usage.**
[Text Embeddings by Weakly-Supervised Contrastive Pre-training](https://arxiv.org/pdf/2212.03533.pdf).
Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei, arXiv 2022
This model has 12 layers and the embedding size is 384.
## Usage
Below is an example to encode queries and passages from the MS-MARCO passage ranking dataset.
```python
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel
def average_pool(last_hidden_states: Tensor,
                 attention_mask: Tensor) -> Tensor:
    # Mean-pool token embeddings, zeroing out padding positions first.
    last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
    return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
# Each input text should start with "query: " or "passage: ".
# For tasks other than retrieval, you can simply use the "query: " prefix.
input_texts = [
    'query: how much protein should a female eat',
    'query: summit define',
    "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
    "passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."
]
tokenizer = AutoTokenizer.from_pretrained('intfloat/e5-small')
model = AutoModel.from_pretrained('intfloat/e5-small')
# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
# normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
```
## Training Details
Please refer to our paper at [https://arxiv.org/pdf/2212.03533.pdf](https://arxiv.org/pdf/2212.03533.pdf).
## Benchmark Evaluation
Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results
on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB benchmark](https://arxiv.org/abs/2210.07316).
## Support for Sentence Transformers
Below is an example of usage with `sentence_transformers`.
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('intfloat/e5-small')
input_texts = [
    'query: how much protein should a female eat',
    'query: summit define',
    "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
    "passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."
]
embeddings = model.encode(input_texts, normalize_embeddings=True)
```
Package requirements:
`pip install sentence_transformers~=2.2.2`
Contributors: [michaelfeil](https://huggingface.co/michaelfeil)
## FAQ
**1. Do I need to add the prefixes "query: " and "passage: " to input texts?**
Yes, this is how the model is trained; otherwise you will see a performance degradation.
Here are some rules of thumb:
- Use "query: " and "passage: " correspondingly for asymmetric tasks such as passage retrieval in open QA, ad-hoc information retrieval.
- Use "query: " prefix for symmetric tasks such as semantic similarity, paraphrase retrieval.
- Use "query: " prefix if you want to use embeddings as features, such as linear probing classification, clustering.
**2. Why are my reproduced results slightly different from those reported in the model card?**
Different versions of `transformers` and `pytorch` could cause negligible but non-zero performance differences.
**3. Why do the cosine similarity scores fall between 0.7 and 1.0?**
This is a known and expected behavior, since we use a low temperature of 0.01 for the InfoNCE contrastive loss.
For text embedding tasks like text retrieval or semantic similarity,
what matters is the relative order of the scores instead of the absolute values,
so this should not be an issue.
## Citation
If you find our paper or models helpful, please consider citing as follows:
```
@article{wang2022text,
title={Text Embeddings by Weakly-Supervised Contrastive Pre-training},
author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Jiao, Binxing and Yang, Linjun and Jiang, Daxin and Majumder, Rangan and Wei, Furu},
journal={arXiv preprint arXiv:2212.03533},
year={2022}
}
```
## Limitations
This model only works for English texts. Long texts will be truncated to at most 512 tokens.
|
doshisha-mil/llama-2-70b-chat-4bit-japanese-v1
|
doshisha-mil
| 2023-08-07T04:25:55Z | 5 | 4 |
peft
|
[
"peft",
"llama-2",
"pytorch",
"facebook",
"meta",
"text-generation-inference",
"text-generation",
"ja",
"license:llama2",
"region:us"
] |
text-generation
| 2023-08-03T03:21:13Z |
---
library_name: peft
license: llama2
language:
- ja
pipeline_tag: text-generation
inference: false
tags:
- llama-2
- pytorch
- facebook
- meta
- text-generation-inference
---
# doshisha-mil/llama-2-70b-chat-4bit-japanese-v1
This model is Llama-2-Chat 70B fine-tuned on the following Japanese version of the Alpaca dataset:
https://github.com/shi3z/alpaca_ja
## Copyright Notice
Because this model is built on Meta's Llama series, users of this model must also agree to Meta's license:
https://ai.meta.com/llama/
## How to use
```python
# Log in to Hugging Face to access the gated Llama-2 weights.
from huggingface_hub import notebook_login
notebook_login()
```
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
model_id = "meta-llama/Llama-2-70b-chat-hf"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config, device_map="auto")
peft_name = "doshisha-mil/llama-2-70b-chat-4bit-japanese-v1"
model = PeftModel.from_pretrained(
    model,
    peft_name,
    is_trainable=True
)
model.eval()
device = "cuda:0"
text = "# Q: 日本一高い山は何ですか? # A: "
inputs = tokenizer(text, return_tensors="pt").to(device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training procedure
The following `bitsandbytes` quantization config was used during training (shown as a code sketch after the list):
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float32
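As a sketch, that list corresponds to the following `BitsAndBytesConfig` (note the training-time compute dtype is `float32`, whereas the usage example above passes `bfloat16`):
```python
import torch
from transformers import BitsAndBytesConfig

# Training-time quantization config, reconstructed from the list above.
training_bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float32,
)
```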
### Framework versions
- PEFT 0.4.0
|
huhu233/opus-mt-en-zh-finetuned-en-to-zh-News_Commentary_v13
|
huhu233
| 2023-08-07T04:15:59Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"generated_from_trainer",
"en",
"zh",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-07T03:48:52Z |
---
license: cc-by-4.0
tags:
- generated_from_trainer
model-index:
- name: opus-mt-en-zh-finetuned-en-to-zh
results: []
language:
- en
- zh
---
# opus-mt-en-zh-finetuned-en-to-zh
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-zh](https://huggingface.co/Helsinki-NLP/opus-mt-en-zh) on the [News Commentary v13](http://data.statmt.org/wmt18/translation-task/training-parallel-nc-v13.tgz) dataset, which you can find on the [EMNLP 2018 Third Conference on Machine Translation (WMT18)](https://statmt.org/wmt18/translation-task.html) page.
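A quick inference sketch (not part of the original card; the sample sentence is illustrative):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "huhu233/opus-mt-en-zh-finetuned-en-to-zh-News_Commentary_v13"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("The central bank raised interest rates today.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```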
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- num_epochs: 10
### Framework versions
- Transformers 4.30.2
- Pytorch 1.13.1+cu117
- Datasets 2.14.3
- Tokenizers 0.13.3
- sentencepiece 0.1.99
|
timxiaohangt/ardt-simplest-dataset_combo_train_halfcheetah-0708_0012
|
timxiaohangt
| 2023-08-07T04:00:15Z | 33 | 0 |
transformers
|
[
"transformers",
"pytorch",
"decision_transformer",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2023-08-06T23:20:14Z |
---
base_model: ''
tags:
- generated_from_trainer
model-index:
- name: ardt-simplest-dataset_combo_train_halfcheetah-0708_0012
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ardt-simplest-dataset_combo_train_halfcheetah-0708_0012
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1024
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 1000
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.0
- Tokenizers 0.13.3
|
elenahuang/llama2-qlora-finetunined-french
|
elenahuang
| 2023-08-07T03:55:05Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-07T03:54:59Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
hw2942/chinese-bigbird-wwm-base-4096-wallstreetcn-morning-news-market-overview-open-SSEC-f1-v1
|
hw2942
| 2023-08-07T03:50:49Z | 97 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"big_bird",
"text-classification",
"generated_from_trainer",
"base_model:Lowin/chinese-bigbird-wwm-base-4096",
"base_model:finetune:Lowin/chinese-bigbird-wwm-base-4096",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-07T03:23:10Z |
---
license: apache-2.0
base_model: Lowin/chinese-bigbird-wwm-base-4096
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: chinese-bigbird-wwm-base-4096-wallstreetcn-morning-news-market-overview-open-SSEC-f1-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# chinese-bigbird-wwm-base-4096-wallstreetcn-morning-news-market-overview-open-SSEC-f1-v1
This model is a fine-tuned version of [Lowin/chinese-bigbird-wwm-base-4096](https://huggingface.co/Lowin/chinese-bigbird-wwm-base-4096) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9660
- F1: 0.5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 75 | 0.6832 | 0.1538 |
| No log | 2.0 | 150 | 0.6909 | 0.0 |
| No log | 3.0 | 225 | 0.6766 | 0.4 |
| No log | 4.0 | 300 | 0.9574 | 0.5161 |
| No log | 5.0 | 375 | 1.0109 | 0.4348 |
| No log | 6.0 | 450 | 1.1757 | 0.3333 |
| 0.5475 | 7.0 | 525 | 1.6141 | 0.5 |
| 0.5475 | 8.0 | 600 | 1.7908 | 0.3810 |
| 0.5475 | 9.0 | 675 | 1.9172 | 0.5 |
| 0.5475 | 10.0 | 750 | 1.9660 | 0.5 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
thisiskeithkwan/whisper-medium-1000steps
|
thisiskeithkwan
| 2023-08-07T03:50:36Z | 75 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"zh",
"dataset:thisiskeithkwan/canto",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-07T01:06:39Z |
---
language:
- zh
license: apache-2.0
base_model: openai/whisper-medium
tags:
- generated_from_trainer
datasets:
- thisiskeithkwan/canto
model-index:
- name: whisper-medium-cantonese
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-medium-cantonese
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the thisiskeithkwan/canto dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7006
- Cer: 3.6111
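A minimal transcription sketch with the `transformers` pipeline (not part of the original card; the audio path is a placeholder):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="thisiskeithkwan/whisper-medium-1000steps")
print(asr("cantonese_sample.wav")["text"])
```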
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6458 | 0.76 | 500 | 0.7109 | 3.5960 |
| 0.4183 | 1.52 | 1000 | 0.7006 | 3.6111 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
TheRains/yt-special-batch12-small
|
TheRains
| 2023-08-07T03:49:24Z | 114 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"dataset:yt",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-06T14:31:41Z |
---
license: apache-2.0
base_model: openai/whisper-small
tags:
- whisper-event
- generated_from_trainer
datasets:
- yt
metrics:
- wer
model-index:
- name: Whisper Small Indonesian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: yt id
type: yt
metrics:
- name: Wer
type: wer
value: 40.08170676350431
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Indonesian
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the yt id dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6718
- Wer: 40.0817
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 12
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.8104 | 0.26 | 1000 | 0.8244 | 49.7374 |
| 0.7059 | 0.52 | 2000 | 0.7380 | 47.9671 |
| 0.7127 | 0.77 | 3000 | 0.6957 | 48.8360 |
| 0.5311 | 1.03 | 4000 | 0.6718 | 40.0817 |
| 0.47 | 1.29 | 5000 | 0.6645 | 40.4254 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
shubhamagarwal92/a2c-AntBulletEnv-v0
|
shubhamagarwal92
| 2023-08-07T03:28:34Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-06T07:05:36Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1457.50 +/- 109.67
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename follows the standard `huggingface_sb3` naming convention and is an assumption):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Filename assumed from the usual <algo>-<env>.zip convention used by huggingface_sb3
checkpoint = load_from_hub("shubhamagarwal92/a2c-AntBulletEnv-v0", "a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
PeterBrendan/AdsGPT2
|
PeterBrendan
| 2023-08-07T03:13:15Z | 204 | 9 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-28T03:14:46Z |
---
license: mit
widget:
- text: "Nike Air Force Ones"
- text: "Used Cars"
- text: "Hockey Skates"
---
**Model:** GPT-2
**Model name:** AdsGPT2
**Model description:**
This is a fine-tuned version of the GPT-2 model trained on a dataset of 10,000+ programmatic ad creatives. This model is designed to generate ad content given a product or a brand. For instance, when given the input "Nike Basketball", it will generate a sample ad and also suggest an ad size. The model's main purpose is to inspire ad creatives and provide a starting point for creating effective marketing content.
**Intended uses:**
This model is designed to be used as a starting point for creating ad creatives. You could use it in the early stages of your ad design process to generate creative ideas and inspiration.
**Limitations:**
This model has the potential to produce unusual or unexpected results due to the varied and complex nature of advertising language. It should not be relied upon to produce perfect ad copy, but rather used as a tool to inspire creative ideas. Also, the model might not have a complete understanding of specific brand guidelines and may not adhere to them.
**How to use:**
You can use this model by providing a product or brand name as an input. For example: *Nike Air Force Ones*
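A minimal generation sketch (illustrative, not from the original card):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="PeterBrendan/AdsGPT2")
print(generator("Nike Air Force Ones", max_new_tokens=60, do_sample=True)[0]["generated_text"])
```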
**Training data:**
This model was trained on a dataset consisting of over 10,000 programmatic ad creatives, which included a variety of different product and brand advertisements. The data was collected from various ad platforms and represents a wide range of ad styles and formats.
**Training procedure:**
The model was fine-tuned using the GPT-2 base model with the aforementioned training data.
**Evaluation results:**
As this model's primary objective is to generate creative ads, traditional evaluation metrics such as accuracy or F1 score are not applicable. However, the model's performance has been informally assessed based on the relevancy and creativity of the generated ads.
**Safety and bias considerations:**
This model shares the same safety and bias considerations as the base GPT-2 model. It may generate content that is offensive or inappropriate. Also, as the model is trained on data from the internet, it may reflect the biases present in those sources.
Users should carefully review the generated ads to ensure they align with their brand's values and guidelines before using them. The model is not intended to replace the role of a human in creating ad copy, but rather to assist and provide inspiration.
|
hw2942/Erlangshen-Longformer-110M-wallstreetcn-morning-news-market-overview-open-SSEC-f1-v1
|
hw2942
| 2023-08-07T03:10:38Z | 88 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"longformer",
"text-classification",
"generated_from_trainer",
"base_model:IDEA-CCNL/Erlangshen-Longformer-110M",
"base_model:finetune:IDEA-CCNL/Erlangshen-Longformer-110M",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-07T02:47:52Z |
---
license: apache-2.0
base_model: IDEA-CCNL/Erlangshen-Longformer-110M
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: Erlangshen-Longformer-110M-wallstreetcn-morning-news-market-overview-open-SSEC-f1-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Erlangshen-Longformer-110M-wallstreetcn-morning-news-market-overview-open-SSEC-f1-v1
This model is a fine-tuned version of [IDEA-CCNL/Erlangshen-Longformer-110M](https://huggingface.co/IDEA-CCNL/Erlangshen-Longformer-110M) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2093
- F1: 0.3636
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 38 | 0.6873 | 0.0 |
| No log | 2.0 | 76 | 0.6933 | 0.0 |
| No log | 3.0 | 114 | 0.7401 | 0.5854 |
| No log | 4.0 | 152 | 0.6913 | 0.0 |
| No log | 5.0 | 190 | 1.0142 | 0.4706 |
| No log | 6.0 | 228 | 0.8925 | 0.2353 |
| No log | 7.0 | 266 | 0.9258 | 0.1333 |
| No log | 8.0 | 304 | 1.0290 | 0.3636 |
| No log | 9.0 | 342 | 1.1018 | 0.4 |
| No log | 10.0 | 380 | 1.2093 | 0.3636 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
Eggsbena/model_008
|
Eggsbena
| 2023-08-07T03:09:38Z | 29 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-07T02:57:23Z |
---
library_name: diffusers
pipeline_tag: text-to-image
---
|
saefro991/tts_bytes_css10_7lang_textpretrain_residual_freeze
|
saefro991
| 2023-08-07T03:01:26Z | 3 | 1 |
espnet
|
[
"espnet",
"audio",
"text-to-speech",
"multilingual",
"dataset:masmultts",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] |
text-to-speech
| 2023-08-07T02:45:09Z |
---
tags:
- espnet
- audio
- text-to-speech
language: multilingual
datasets:
- masmultts
license: cc-by-4.0
---
## ESPnet2 TTS model
### `saefro991/tts_bytes_css10_7lang_textpretrain_residual_freeze`
This model was trained by Takaaki-Saeki using masmultts recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout 11a7d61312439111d4996d55935ede718d494262
pip install -e .
cd egs2/masmultts/tts_byte_css10_adap_residual_freeze
./run.sh --skip_data_prep false --skip_train true --download_model saefro991/tts_bytes_css10_7lang_textpretrain_residual_freeze
```
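Alternatively, a Python-side sketch using the ESPnet2 inference API (an assumption on our part — it requires `espnet_model_zoo`, and this multilingual model additionally expects x-vector speaker embeddings (`spembs`) and language IDs (`lids`), so the recipe above remains the supported path):
```python
# pip install espnet espnet_model_zoo
import numpy as np
from espnet2.bin.tts_inference import Text2Speech

tts = Text2Speech.from_pretrained("saefro991/tts_bytes_css10_7lang_textpretrain_residual_freeze")
# spembs must be a 192-dim x-vector and lids a language-ID array matching the
# training setup; the values below are placeholders, not meaningful inputs.
output = tts("hello world", spembs=np.zeros(192, dtype=np.float32), lids=np.array([0]))
wav = output["wav"]
```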
## TTS config
<details><summary>expand</summary>
```
config: conf/train.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/tts_train_raw_byte
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 1
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 200
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- loss
- min
- - train
- loss
- min
keep_nbest_models: 3
nbest_averaging_interval: 0
grad_clip: 2.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 4
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param:
- ../tts_pretrain_byte_residual/exp/tts_train_byte/2epoch.pth:tts_pretrain.encoder:tts.encoder
- ../tts_pretrain_byte_residual/exp/tts_train_byte/2epoch.pth:tts_pretrain.lid_emb:tts.lid_emb
ignore_init_mismatch: false
freeze_param:
- tts.encoder.adapter
- tts.encoder.embed
- tts.lid_emb
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 400000
valid_batch_bins: null
train_shape_file:
- exp/tts_stats_raw_byte/train/text_shape.byte
- exp/tts_stats_raw_byte/train/speech_shape
valid_shape_file:
- exp/tts_stats_raw_byte/valid/text_shape.byte
- exp/tts_stats_raw_byte/valid/speech_shape
batch_type: numel
valid_batch_type: null
fold_length:
- 150
- 204800
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - /local/11399690.1.gpu/dump/raw/train/text
- text
- text
- - /local/11399690.1.gpu/dump/raw/train/wav.scp
- speech
- sound
- - /local/11399690.1.gpu/dump/xvector/train/xvector.scp
- spembs
- kaldi_ark
- - /local/11399690.1.gpu/dump/raw/train/utt2lid
- lids
- text_int
valid_data_path_and_name_and_type:
- - /local/11399690.1.gpu/dump/raw/dev/text
- text
- text
- - /local/11399690.1.gpu/dump/raw/dev/wav.scp
- speech
- sound
- - /local/11399690.1.gpu/dump/xvector/dev/xvector.scp
- spembs
- kaldi_ark
- - /local/11399690.1.gpu/dump/raw/dev/utt2lid
- lids
- text_int
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 1.0
scheduler: noamlr
scheduler_conf:
model_size: 512
warmup_steps: 50000
token_list:
- <blank>
- <unk>
- '32'
- '101'
- '97'
- '105'
- '110'
- '116'
- '111'
- '115'
- '114'
- '108'
- '100'
- '117'
- '109'
- '99'
- '195'
- '112'
- '104'
- '118'
- '107'
- '103'
- '98'
- '122'
- '102'
- '106'
- '121'
- '119'
- '164'
- '169'
- '197'
- '196'
- '161'
- '113'
- '179'
- '173'
- '188'
- '182'
- '190'
- '208'
- '120'
- '141'
- '153'
- '160'
- '155'
- '189'
- '131'
- '186'
- '168'
- '133'
- '209'
- '130'
- '181'
- '159'
- '151'
- '175'
- '177'
- '145'
- '171'
- '174'
- '165'
- '135'
- '200'
- '180'
- '170'
- '178'
- '176'
- '163'
- '184'
- '185'
- '187'
- '129'
- '132'
- '128'
- '136'
- '143'
- '162'
- '191'
- '150'
- '206'
- '183'
- '140'
- '172'
- '167'
- '207'
- '139'
- '142'
- '147'
- '134'
- '137'
- '148'
- '194'
- '149'
- '166'
- '49'
- '50'
- '48'
- '51'
- '138'
- '56'
- '53'
- '55'
- '52'
- '54'
- '57'
- '199'
- '226'
- '210'
- '144'
- '203'
- '225'
- '202'
- '232'
- '201'
- '157'
- '231'
- '156'
- '220'
- <sos/eos>
odim: null
model_conf: {}
use_preprocessor: true
token_type: byte
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: byte
feats_extract: fbank
feats_extract_conf:
n_fft: 1024
hop_length: 256
win_length: null
fs: 16000
fmin: 80
fmax: 7600
n_mels: 80
normalize: global_mvn
normalize_conf:
stats_file: exp/tts_stats_raw_byte/train/feats_stats.npz
tts: transformer
tts_conf:
embed_dim: 0
eprenet_conv_layers: 0
eprenet_conv_filts: 0
eprenet_conv_chans: 0
dprenet_layers: 2
dprenet_units: 256
adim: 512
aheads: 8
elayers: 6
eunits: 1024
dlayers: 6
dunits: 1024
positionwise_layer_type: conv1d
positionwise_conv_kernel_size: 1
postnet_layers: 5
postnet_filts: 5
postnet_chans: 256
spk_embed_dim: 192
spk_embed_integration_type: add
use_gst: true
gst_heads: 4
gst_tokens: 16
use_masking: true
bce_pos_weight: 5.0
use_scaled_pos_enc: true
encoder_normalize_before: true
decoder_normalize_before: true
reduction_factor: 1
init_type: xavier_uniform
init_enc_alpha: 1.0
init_dec_alpha: 1.0
eprenet_dropout_rate: 0.0
dprenet_dropout_rate: 0.5
postnet_dropout_rate: 0.5
transformer_enc_dropout_rate: 0.1
transformer_enc_positional_dropout_rate: 0.1
transformer_enc_attn_dropout_rate: 0.1
transformer_dec_dropout_rate: 0.1
transformer_dec_positional_dropout_rate: 0.1
transformer_dec_attn_dropout_rate: 0.1
transformer_enc_dec_attn_dropout_rate: 0.1
use_guided_attn_loss: true
num_heads_applied_guided_attn: 2
num_layers_applied_guided_attn: 2
modules_applied_guided_attn:
- encoder-decoder
guided_attn_loss_sigma: 0.4
guided_attn_loss_lambda: 10.0
langs: 21
lang_family_encoding: false
num_lang_family: 7
use_adapter: true
adapter_type: residual
use_encoder_w_lid: true
pitch_extract: null
pitch_extract_conf: {}
pitch_normalize: null
pitch_normalize_conf: {}
energy_extract: null
energy_extract_conf: {}
energy_normalize: null
energy_normalize_conf: {}
required:
- output_dir
- token_list
version: '202209'
distributed: false
```
</details>
### Citing ESPnet
```bibtex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
mrkusypl/Magik
|
mrkusypl
| 2023-08-07T03:00:48Z | 0 | 0 | null |
[
"pl",
"region:us"
] | null | 2023-08-02T22:39:07Z |
---
language:
- pl
---
<center>
<img src="https://cdn.discordapp.com/attachments/1136428972939419789/1136428973279154228/latest.png"></img>
<h1>Magik (RVC v2) (Mangio Crepe 64) (400 Epochs)</h1>
**Model by:** kusy <br/>
**Voice Actor:** Piotr "Magik" Łuszcz <br/>
**Dataset:** 00:18:49 <br/>
<audio controls>
<source src="https://cdn.discordapp.com/attachments/1136428972939419789/1137073748781047848/example.mp3" type="audio/mpeg">
</audio><br />
<audio controls>
<source src="https://cdn.discordapp.com/attachments/1136428972939419789/1137931072777244673/gadanie.wav" type="audio/wav">
</audio>
<a href="https://huggingface.co/mrkusypl/Magik/resolve/main/Magik%20%5B400%20epoch%20%2B%20RVC%20v2%5D.zip">Download or copy the link</a>
</center>
|
mrkusypl/Nitrodolski
|
mrkusypl
| 2023-08-07T02:59:13Z | 0 | 0 | null |
[
"pl",
"region:us"
] | null | 2023-07-27T10:37:42Z |
---
language:
- pl
---
<center>
<img src="https://cdn.discordapp.com/attachments/1134073942835986442/1134073943100248064/Major-Suchodolski-prokuratura-wszczela-sledztwo-w-sprawie-smierci-patostreamera_article_north.png"></img>
<h1>Major Suchodolski (RVC v2) (Mangio Crepe 64) (250 Epochs)</h1>
**Model by:** kusy <br/>
**Voice Actor:** Wojciech "Major" Suchodolski <br/>
**Dataset:** 00:16:44 <br/>
<audio controls>
<source src="https://cdn.discordapp.com/attachments/1134073942835986442/1134073976491081799/example.mp3" type="audio/mpeg">
</audio><br />
<audio controls>
<source src="https://cdn.discordapp.com/attachments/1134073942835986442/1137932924612784178/gadanie.wav" type="audio/wav">
</audio>
<a href="https://huggingface.co/mrkusypl/Nitrodolski/resolve/main/Nitrodolski%20%5B250%20epoch%20%2B%20RVC%20v2%5D.zip">Download or copy the link</a>
</center>
|
mrkusypl/MexicanoTV
|
mrkusypl
| 2023-08-07T02:57:15Z | 0 | 0 | null |
[
"pl",
"region:us"
] | null | 2023-08-01T20:57:37Z |
---
language:
- pl
---
<center>
<img src="https://cdn.discordapp.com/attachments/1136043395123515465/1136043395928825957/comment_7oiVx1SlO3f8Ub44Vb0718v2vZin7XUk.png"></img>
<h1>MexicanoTV (RVC v2) (Mangio Crepe 64) (400 Epochs)</h1>
**Model by:** kusy <br/>
**Voice Actor:** Jarosław Andrzejewski <br/>
**Dataset:** 00:17:40 <br/>
<audio controls>
<source src="https://cdn.discordapp.com/attachments/1136043395123515465/1137050343440650341/example.mp3" type="audio/mpeg">
</audio><br />
<audio controls>
<source src="https://cdn.discordapp.com/attachments/1136043395123515465/1137932262139248741/gadanie.wav" type="audio/wav">
</audio>
<a href="https://huggingface.co/mrkusypl/MexicanoTV/resolve/main/MexicanoTV%20%5B400%20epoch%20%2B%20RVC%20v2%5D.zip">Download or copy the link</a>
</center>
|
saefro991/tts_ipa_css10_7lang_textpretrain_residual_freeze
|
saefro991
| 2023-08-07T02:39:31Z | 1 | 2 |
espnet
|
[
"espnet",
"audio",
"text-to-speech",
"multilingual",
"dataset:masmultts",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] |
text-to-speech
| 2023-08-07T02:26:59Z |
---
tags:
- espnet
- audio
- text-to-speech
language: multilingual
datasets:
- masmultts
license: cc-by-4.0
---
## ESPnet2 TTS model
### `saefro991/tts_ipa_css10_7lang_textpretrain_residual_freeze`
This model was trained by Takaaki-Saeki using masmultts recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout 11a7d61312439111d4996d55935ede718d494262
pip install -e .
cd egs2/masmultts/tts_phn_css10_adap_residual_freeze
./run.sh --skip_data_prep false --skip_train true --download_model saefro991/tts_ipa_css10_7lang_textpretrain_residual_freeze
```
## TTS config
<details><summary>expand</summary>
```
config: conf/train.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/tts_train_raw_phn_none
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 1
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 200
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- loss
- min
- - train
- loss
- min
keep_nbest_models: 3
nbest_averaging_interval: 0
grad_clip: 2.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 4
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param:
- ../tts_pretrain_phn_residual/exp/tts_train_phn_none/2epoch.pth:tts_pretrain.encoder:tts.encoder
- ../tts_pretrain_phn_residual/exp/tts_train_phn_none/2epoch.pth:tts_pretrain.lid_emb:tts.lid_emb
ignore_init_mismatch: false
freeze_param:
- tts.encoder.adapter
- tts.encoder.embed
- tts.lid_emb
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 400000
valid_batch_bins: null
train_shape_file:
- exp/tts_stats_raw_phn_none/train/text_shape.phn
- exp/tts_stats_raw_phn_none/train/speech_shape
valid_shape_file:
- exp/tts_stats_raw_phn_none/valid/text_shape.phn
- exp/tts_stats_raw_phn_none/valid/speech_shape
batch_type: numel
valid_batch_type: null
fold_length:
- 150
- 204800
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - /local/11454483.1.gpu/dump/raw/train/text
- text
- text
- - /local/11454483.1.gpu/dump/raw/train/wav.scp
- speech
- sound
- - /local/11454483.1.gpu/dump/xvector/train/xvector.scp
- spembs
- kaldi_ark
- - /local/11454483.1.gpu/dump/raw/train/utt2lid
- lids
- text_int
valid_data_path_and_name_and_type:
- - /local/11454483.1.gpu/dump/raw/dev/text
- text
- text
- - /local/11454483.1.gpu/dump/raw/dev/wav.scp
- speech
- sound
- - /local/11454483.1.gpu/dump/xvector/dev/xvector.scp
- spembs
- kaldi_ark
- - /local/11454483.1.gpu/dump/raw/dev/utt2lid
- lids
- text_int
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 1.0
scheduler: noamlr
scheduler_conf:
model_size: 512
warmup_steps: 50000
token_list:
- <blank>
- <unk>
- n
- t
- s
- l
- a
- e
- k
- d
- m
- ə
- r
- i
- p
- o
- v
- ɪ
- ˈa
- ɾ
- j
- z
- ˈɛ
- ˈe
- ɛ
- b
- ˈo
- f
- ˈi
- u
- ð
- ʁ
- h
- ɡ
- ɔ
- ʃ
- ˈu
- w
- ˌe
- ts
- ŋ
- ˌa
- æ
- iː
- ˈɪ
- ˈiː
- ˈaː
- ɹ
- ʊ
- ɑ
- ˈeː
- ˈɔ
- x
- aː
- tʃ
- ˌi
- ˌo
- tː
- oː
- ɣ
- ˈoː
- eː
- y
- θ
- ɲ
- ə-
- ʋ
- ʒ
- ˌɛ
- ˈɑ
- β
- uː
- ˈuː
- ˈaɪ
- ç
- ˈɑ̃
- ˈɔ̃
- ˈæ
- ɚ
- ˌɪ
- ɑ̃
- ˌu
- ˌɔ
- ˈy
- ɜ
- tʲ
- ˈeɪ
- ˈɑː
- ˌeː
- ʌ
- ᵻ
- ɐ
- ˌɑ
- ɨ
- ɔ̃
- dʒ
- e-
- ˌiː
- a-
- ˈʌ
- ˌʊ
- əl
- ʎ
- ˌaɪ
- aɪ
- ˈɔː
- ss
- ˈaʊ
- rʲ
- kː
- ˈoʊ
- ˌaː
- ɑː
- nʲ
- ˌoː
- ø
- ˈɛɪ
- ɛɪ
- ˌæ
- ʂ
- ɲʲ
- ˌɑː
- ɕ
- ˈai
- vʲ
- dʲ
- ai
- ei
- ɛ̃
- mʲ
- ˈø
- ɭ
- ˈɵ
- pː
- ˈɛ̃
- ɔː
- oʊ
- ˈɜː
- ˈʊ
- tɕ
- ɟ
- ˌaʊ
- ˈœ
- kʲ
- ˈuo
- ˈoi
- æː
- dʑ
- l̩
- ˈie
- ɪː
- ie
- oi
- ˌeɪ
- ˈɨ
- yː
- ˈɪː
- ˌy
- øː
- ˈʏ
- ˈɛː
- ˈoːɹ
- ˌuː
- ˌʌ
- ˈeu
- ˈei
- aʊ
- ˌoi
- bː
- ˌai
- ˈœy
- ˈøː
- ˈɑːɹ
- œ̃
- ˈæː
- au
- y-
- r̝̊
- ɵ
- ˌɵ
- c
- ˌɛɪ
- ˈɔø
- ˈyː
- ee
- pʲ
- ˈee
- bʲ
- ˈyø
- iə
- ˈiə
- ˌɨ
- ˌøː
- ɔːɹ
- ɔø
- eɪ
- ʑ
- ˈau
- ˈʊɹ
- r̝
- dʒː
- ˌeʊ
- ˈɔːɹ
- ˌoʊ
- ˌʊɹ
- ɑːɹ
- ˈæy
- ˌyː
- s^
- eu
- ˌə
- tʃː
- ˈə
- ˌei
- ea
- tsʲ
- ẽ
- ʌʊ
- œy
- ˈʌʊ
- nʲʲ
- ˌæi
- ˌʏ
- ˌɛː
- ˈɪɹ
- æi
- ˈɛɹ
- ˈæi
- ˈɔɪ
- ã
- dzː
- r̩
- ˈẽ
- ou
- œ
- ɜː
- uo
- tʲʲ
- ˌø
- ɛɹ
- ɭʲ
- iɪ
- (en)
- ʂʲ
- tsː
- ˌuo
- ˌʌʊ
- oːɹ
- ˈou
- ˌɛ̃
- ʝ
- eʊ
- ɨ̃
- ˈɔa
- ɟː
- ʊɐ
- ˈr̩
- tʃʲ
- uɪ
- ɡʲ
- ˈea
- ˌʊɐ
- ˈʊɐ
- ɛː
- ˌyi
- t^
- tɕʲ
- ˌea
- (fr)
- ɕʲ
- ʀ
- ˌɔø
- ʏ
- ˌœ
- ˈoɪ
- ˌau
- eɑ
- ˌɪː
- ˈeʊ
- ˈiɪ
- ˈã
- ˌɔː
- ˌã
- sʲ
- ˈaɪɚ
- ˌɑ̃
- ˌæː
- ey
- ˌœy
- ˈaɪə
- d̪
- ɾʲ
- ˌøi
- dː
- ˌie
- ui
- fʲ
- n̩
- ʔ
- ˌou
- yi
- ˌɑːɹ
- tsʲʲ
- ˌɐ
- ˈœ̃
- ˌyø
- dz
- ɡː
- ɾʲʲ
- ˈl̩
- ˈøy
- ˌæy
- cː
- æy
- ʊɹ
- ʑʲ
- ˌɜː
- yʊ
- ˌɛɹ
- pf
- dʑʲ
- ˌoːɹ
- ˈɨ̃
- ˈiʊ
- õ
- ɔa
- ˌɔa
- ˌee
- ˈĩ
- ˌiɪ
- ˌɔːɹ
- ˈɒ
- ja
- ĩ
- ˈũ
- ɒ
- ũ
- ʃʲ
- ɪɹ
- ju
- (de)
- yø
- ˌeu
- d^
- ˈiu
- ˈja
- øi
- ˈeɑ
- ˈyi
- ɾʲˌʲ
- ʃʲʲ
- ʃʲˌʲ
- aɪə
- ˈuɪ
- iu
- ˈõ
- iɐ
- ˌẽ
- iʊ
- ˌr̩
- ˈui
- əʊ
- u"
- ˌɔ̃
- ˈəʊ
- iy
- ʲ
- zʲˌʲ
- (it)
- ˌɒ
- ɔɪ
- ˌɪɹ
- ˈɵː
- ˈu"
- nʲˌʲ
- (nl)
- ˌl̩
- ˈey
- βː
- lʲʲ
- oɪ
- ˈiɐ
- ˌiɐ
- lʲ
- tsʲˌʲ
- xʲ
- ˌũ
- mʲʲ
- dʒʲ
- ˌeo
- ˈju
- r̩ː
- lʲˌʲ
- ˈøi
- t^ː
- əɪ
- l̩ː
- tʃˌʲ
- eo
- zʲʲ
- ˌiy
- aʲ
- ˌoɪ
- tl#
- ˈyɪ
- ˌiə
- ˌey
- øy
- dʲʲ
- ˈl̩ː
- ˈyʊ
- ˌɨ̃
- ʀʲ
- ɣː
- ˈeo
- ˈʊə
- ˌiu
- ˌøy
- ˈəɪ
- ˈeə
- aɪɚ
- ɪ^
- eə
- ˌĩ
- t̪
- vʲʲ
- (es)
- (gn)
- zʲ
- ˌõ
- əː
- bʲʲ
- (base)
- ˌəʊ
- ˈə-
- (ru)
- ˌɔɪ
- ˈæiː
- tsˌʲ
- ˈr̩ː
- ə--
- ˌn̩
- uʲ
- ˈw
- hʲ
- ˌeə
- yɪ
- fʲʲ
- ˌyʊ
- (el)
- ˌaɪɚ
- ˈəː
- ˌʊə
- ɵː
- t̪ː
- w-
- (sl)
- eʲ
- ˈa-
- ˌr̩ː
- mʲˌʲ
- (fi)
- ʒʲʲ
- çʲ
- ˌaɪə
- ˈɚ
- (lt)
- pʲʲ
- ˈɜ
- ˌuɪ
- ˌja
- (pl)
- ˈe-
- ˌe-
- (et)
- ˈoːʲ
- (kl)
- ˈõː
- (hu)
- ˈiy
- ʊə
- ˈaʲ
- ˌl̩ː
- lˌʲ
- '1'
- ʒʲ
- (cs)
- ˈææ
- ˈts-
- ts-
- ˌʊː
- ˌy"
- cʲ
- wʲ
- ˈãː
- ˈuʲ
- (ro)
- ˌɜ
- (sk)
- oːʲ
- ʊː
- ˈtl#tl#
- ʃˈʲ
- ɬ
- ˌə-
- (hr)
- tl#tl#
- ˌœ̃
- ˈʊː
- l̩ʲ
- dʒˌʲ
- tsˈʲ
- pʲˌʲ
- ˈʌː
- ˈeʲ
- aːʲ
- vʲˌʲ
- ˈj
- ()
- eːː
- ˌãː
- ˈuːʲ
- ˈeeʲ
- <sos/eos>
odim: null
model_conf: {}
use_preprocessor: true
token_type: phn
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
feats_extract: fbank
feats_extract_conf:
n_fft: 1024
hop_length: 256
win_length: null
fs: 16000
fmin: 80
fmax: 7600
n_mels: 80
normalize: global_mvn
normalize_conf:
stats_file: exp/tts_stats_raw_phn_none/train/feats_stats.npz
tts: transformer
tts_conf:
embed_dim: 0
eprenet_conv_layers: 0
eprenet_conv_filts: 0
eprenet_conv_chans: 0
dprenet_layers: 2
dprenet_units: 256
adim: 512
aheads: 8
elayers: 6
eunits: 1024
dlayers: 6
dunits: 1024
positionwise_layer_type: conv1d
positionwise_conv_kernel_size: 1
postnet_layers: 5
postnet_filts: 5
postnet_chans: 256
spk_embed_dim: 192
spk_embed_integration_type: add
use_gst: true
gst_heads: 4
gst_tokens: 16
use_masking: true
bce_pos_weight: 5.0
use_scaled_pos_enc: true
encoder_normalize_before: true
decoder_normalize_before: true
reduction_factor: 1
init_type: xavier_uniform
init_enc_alpha: 1.0
init_dec_alpha: 1.0
eprenet_dropout_rate: 0.0
dprenet_dropout_rate: 0.5
postnet_dropout_rate: 0.5
transformer_enc_dropout_rate: 0.1
transformer_enc_positional_dropout_rate: 0.1
transformer_enc_attn_dropout_rate: 0.1
transformer_dec_dropout_rate: 0.1
transformer_dec_positional_dropout_rate: 0.1
transformer_dec_attn_dropout_rate: 0.1
transformer_enc_dec_attn_dropout_rate: 0.1
use_guided_attn_loss: true
num_heads_applied_guided_attn: 2
num_layers_applied_guided_attn: 2
modules_applied_guided_attn:
- encoder-decoder
guided_attn_loss_sigma: 0.4
guided_attn_loss_lambda: 10.0
langs: 21
lang_family_encoding: false
num_lang_family: 7
use_adapter: true
adapter_type: residual
use_encoder_w_lid: true
pitch_extract: null
pitch_extract_conf: {}
pitch_normalize: null
pitch_normalize_conf: {}
energy_extract: null
energy_extract_conf: {}
energy_normalize: null
energy_normalize_conf: {}
required:
- output_dir
- token_list
version: '202209'
distributed: false
```
</details>
### Citing ESPnet
```bibtex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
AmelieSchreiber/esm2_t6_8M_UR50D_LoRA_RNA-binding
|
AmelieSchreiber
| 2023-08-07T02:34:08Z | 4 | 1 |
peft
|
[
"peft",
"transformers",
"biology",
"esm",
"esm2",
"protein",
"protein language model",
"en",
"license:mit",
"region:us"
] | null | 2023-08-07T00:12:16Z |
---
library_name: peft
license: mit
language:
- en
tags:
- transformers
- biology
- esm
- esm2
- protein
- protein language model
---
# ESM-2 RNA Binding Site LoRA
This is a Parameter Efficient Fine Tuning (PEFT) Low Rank Adaptation (LoRA) of
the [esm2_t6_8M_UR50D](https://huggingface.co/facebook/esm2_t6_8M_UR50D) model for the (binary) token classification task of
predicting RNA binding sites of proteins. The Github with the training script and conda env YAML can be
[found here](https://github.com/Amelie-Schreiber/esm2_LoRA_binding_sites/tree/main). You can also find a version of this model
that was fine-tuned without LoRA [here](https://huggingface.co/AmelieSchreiber/esm2_t6_8M_UR50D_rna_binding_site_predictor).
## Training procedure
This is a Low Rank Adaptation (LoRA) of `esm2_t6_8M_UR50D`,
trained on `166` protein sequences in the [RNA binding sites dataset](https://huggingface.co/datasets/AmelieSchreiber/data_of_protein-rna_binding_sites)
using a `75/25` train/test split. It achieves an evaluation loss of `0.1791934072971344`.
### Framework versions
- PEFT 0.4.0
## Using the Model
To use, try running:
```python
from transformers import AutoModelForTokenClassification, AutoTokenizer
from peft import PeftModel
import torch
# Path to the saved LoRA model
model_path = "AmelieSchreiber/esm2_t6_8M_UR50D_LoRA_RNA-binding"
# ESM2 base model
base_model_path = "facebook/esm2_t6_8M_UR50D"
# Load the model
base_model = AutoModelForTokenClassification.from_pretrained(base_model_path)
loaded_model = PeftModel.from_pretrained(base_model, model_path)
# Ensure the model is in evaluation mode
loaded_model.eval()
# Load the tokenizer
loaded_tokenizer = AutoTokenizer.from_pretrained(base_model_path)
# Protein sequence for inference
protein_sequence = "MAVPETRPNHTIYINNLNEKIKKDELKKSLHAIFSRFGQILDILVSRSLKMRGQAFVIFKEVSSATNALRSMQGFPFYDKPMRIQYAKTDSDIIAKMKGT" # Replace with your actual sequence
# Tokenize the sequence
inputs = loaded_tokenizer(protein_sequence, return_tensors="pt", truncation=True, max_length=1024, padding='max_length')
# Run the model
with torch.no_grad():
logits = loaded_model(**inputs).logits
# Get predictions
tokens = loaded_tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]) # Convert input ids back to tokens
predictions = torch.argmax(logits, dim=2)
# Define labels
id2label = {
0: "No binding site",
1: "Binding site"
}
# Print the predicted labels for each token
for token, prediction in zip(tokens, predictions[0].numpy()):
if token not in ['<pad>', '<cls>', '<eos>']:
print((token, id2label[prediction]))
```
|
sunnyZX/huggingface_practice
|
sunnyZX
| 2023-08-07T02:17:48Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-08-04T07:40:39Z |
## Hugging Face study notes
Notes on learning and understanding the main features of Hugging Face, learning to use each of its tools, and understanding how they work under the hood.
### 0huggingface.ipynb
Introduction to Hugging Face, installation, and caveats.
### 1pipeline.ipynb
Understand the convenient interface that `pipeline` provides for common natural language processing tasks (a minimal illustration follows).
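For instance (a minimal illustration, not taken from the notebooks themselves):
```python
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
print(classifier("I love using Hugging Face pipelines!"))
```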
### 2transformers.ipynb
Understand how to use the tokenizers and models provided by the `transformers` library.
### 3finetune.ipynb
Fine-tune a pretrained model, covering data loading, training with the `Trainer` API, training with plain PyTorch, and model evaluation.
### 4datasets.ipynb
Understand the `datasets` library, covering data loading, preprocessing, tokenization, format conversion, and loading large-scale datasets.
Hands-on: crawl GitHub issues to build a dataset for similarity search.
### 5tokenizers.ipynb
Understand the `tokenizers` library, including:
- fine-tuning an existing tokenizer on new data;
- understanding the Fast Tokenizers' parallelism and offset-mapping capabilities (explored in depth through token classification and QA tasks);
- understanding the four processing steps of a tokenizer: normalization, pre-tokenization, the three tokenization models (BPE, WordPiece, Unigram), and post-processing;
- building a custom tokenizer on top of each of the three tokenization models.
### translations.ipynb
Hands-on: the complete workflow for a translation task: data loading, preprocessing, fine-tuning, and evaluation.
|
Xillolxlbln/khaled-model
|
Xillolxlbln
| 2023-08-07T01:45:51Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-08-06T22:21:53Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: khaled-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# khaled-model
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
Baronco98/Sudoku-Number-Classifier
|
Baronco98
| 2023-08-07T01:18:56Z | 2 | 0 |
keras
|
[
"keras",
"en",
"dataset:mnist",
"license:apache-2.0",
"region:us"
] | null | 2023-08-07T00:25:58Z |
---
license: apache-2.0
datasets:
- mnist
language:
- en
metrics:
- accuracy
library_name: keras
---
# Description
This model is a convolutional neural network built with transfer learning from the pre-trained VGG16 model. The 'block5_conv1' layer is retrained, and a final dense layer with 128 neurons is added.
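A minimal Keras sketch of the described architecture (a hypothetical reconstruction, not the exact training code; freezing choices beyond 'block5_conv1' and the input resizing are assumptions):
```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

# VGG16 requires inputs of at least 32x32 pixels, so we assume the 28x28
# cell images are resized before being fed to the network.
base = VGG16(weights="imagenet", include_top=False, input_shape=(32, 32, 3))
for layer in base.layers:
    layer.trainable = (layer.name == "block5_conv1")  # retrain only this layer

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(128, activation="relu"),    # the added 128-neuron dense layer
    layers.Dense(10, activation="softmax"),  # class_0 (empty) through class_9
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```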
The model will be used as a preliminary step in solving Sudokus through linear programming. It is responsible for classifying the content of each Sudoku cell:
- class_0: empty cell
- class_1: cell contains the number 1
- class_2: cell contains the number 2
- class_3: cell contains the number 3
- class_4: cell contains the number 4
- class_5: cell contains the number 5
- class_6: cell contains the number 6
- class_7: cell contains the number 7
- class_8: cell contains the number 8
- class_9: cell contains the number 9
The dataset is constructed with balanced classes using images from the famous "MNIST digits classification" dataset, as well as images of numbers written digitally.
# Dataset schema
The image size is 28x28 pixels. After applying data augmentation to the dataset, the total number of images is as follows:
- Training images: 5,600
- Validation images: 2,400
- Test images: 2,000
Test Accuracy: 0.9810
# Other validations:
An initial validation has been performed. Larger-scale validation is still pending to better assess the reliability of the model.
<div style="text-align: center;">
<img src="https://i.imgur.com/kdj9udt.jpg" width="300">
</div>
The results of the inference are as follows:
<div style="text-align: center;">
<img src="https://i.imgur.com/U2MJzH6.jpg" width="500">
</div>
|
taohoang/speecht5_finetuned_fleurs_en_us
|
taohoang
| 2023-08-07T01:18:29Z | 83 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"text-to-speech",
"dataset:google/fleurs",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-08-07T01:04:34Z |
---
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
datasets:
- google/fleurs
model-index:
- name: speecht5_finetuned_fleurs_en_us
results: []
pipeline_tag: text-to-speech
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_fleurs_en_us
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the google/fleurs dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4831
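A minimal synthesis sketch (not from the original card; SpeechT5 needs an external speaker x-vector, here taken from a commonly used embeddings dataset, and the `"text-to-speech"` pipeline requires a recent `transformers` release):
```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import pipeline

synthesiser = pipeline("text-to-speech", model="taohoang/speecht5_finetuned_fleurs_en_us")
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embedding = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

speech = synthesiser("Hello from the fine-tuned model!",
                     forward_params={"speaker_embeddings": speaker_embedding})
sf.write("speech.wav", speech["audio"], samplerate=speech["sampling_rate"])
```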
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- training_steps: 54
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.719 | 0.33 | 9 | 0.5634 |
| 0.5994 | 0.67 | 18 | 0.5290 |
| 0.584 | 1.0 | 27 | 0.4924 |
| 0.5589 | 1.33 | 36 | 0.4828 |
| 0.5747 | 1.67 | 45 | 0.4848 |
| 0.5904 | 2.0 | 54 | 0.4831 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
xiangxiang/chatglm2-6b-WaJiaBank
|
xiangxiang
| 2023-08-07T00:55:26Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"chatglm",
"feature-extraction",
"custom_code",
"region:us"
] |
feature-extraction
| 2023-08-04T09:57:48Z |
## Model description
ChatGLM2-6B is the second-generation version of ChatGLM-6B, Tsinghua's open-source Chinese-English bilingual dialogue model. It keeps the strengths of the first generation, such as fluent dialogue and a low deployment barrier, while adding several improvements: ChatGLM2-6B uses GLM's hybrid objective function; the context length has been extended from 2K in ChatGLM-6B to 32K; and, based on Multi-Query Attention, it achieves faster inference and lower GPU memory usage — inference is 42% faster than the first generation, and under INT4 quantization the dialogue length supported by 6 GB of GPU memory grows from 1K to 8K tokens.
**chatglm2-6b-WaJiaBank** is built on Tsinghua's chatglm2-6b with quantization plus lightweight fine-tuning, using publicly available web data. The amount of data used so far is relatively small, so the model's generalization ability still needs further improvement.
#### Directions for improvement:
- data augmentation
- performance tuning
- model parameters
## Usage
```python
from transformers import AutoTokenizer,AutoConfig, AutoModel, BitsAndBytesConfig
tokenizer = AutoTokenizer.from_pretrained("xiangxiang/chatglm2-6b-WaJiaBank", trust_remote_code=True)
model = AutoModel.from_pretrained("xiangxiang/chatglm2-6b-WaJiaBank", trust_remote_code=True).float()  # .float() runs on CPU; use .half().cuda() for GPU
```
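Once loaded, a minimal chat sketch (the prompt is illustrative):
```python
model = model.eval()
response, history = model.chat(tokenizer, "你好", history=[])  # "Hello"
print(response)
```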
To speed up model inference, you can refer to the multi-GPU deployment approach from ChatGLM2-6B:
```python
from utils import load_model_on_gpus
model = load_model_on_gpus("THUDM/chatglm2-6b", num_gpus=2)
```
## References
https://github.com/THUDM/ChatGLM2-6B
|
brunoboat/Pixelcopter-PLE-v4
|
brunoboat
| 2023-08-07T00:48:34Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-07T00:48:32Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-v4
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 10.50 +/- 11.24
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
brunoboat/Pixelcopter-PLE-v3
|
brunoboat
| 2023-08-07T00:42:31Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-07T00:42:27Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 43.20 +/- 35.30
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
skhaghighi/roberta-finetuned-subjqa-movies_2
|
skhaghighi
| 2023-08-07T00:39:17Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"base_model:deepset/roberta-base-squad2",
"base_model:finetune:deepset/roberta-base-squad2",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-08-07T00:25:40Z |
---
license: cc-by-4.0
base_model: deepset/roberta-base-squad2
tags:
- generated_from_trainer
model-index:
- name: roberta-finetuned-subjqa-movies_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-finetuned-subjqa-movies_2
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
Yacong/my_dreambooth_out_dir
|
Yacong
| 2023-08-07T00:23:49Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:stabilityai/stable-diffusion-2",
"base_model:finetune:stabilityai/stable-diffusion-2",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-06T15:09:47Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - Yacong/my_dreambooth_out_dir
This is a DreamBooth model derived from stabilityai/stable-diffusion-2. The weights were trained on the instance prompt "a photo of sks dog" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
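A minimal inference sketch with 🧨 diffusers (not part of the original card; the prompt is illustrative):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("Yacong/my_dreambooth_out_dir", torch_dtype=torch.float16).to("cuda")
image = pipe("a photo of sks dog in a bucket", num_inference_steps=50).images[0]
image.save("sks_dog.png")
```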
|
zwangab91/Taxi-v3
|
zwangab91
| 2023-08-07T00:16:15Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-06T17:51:32Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` and `evaluate_agent` are the helper functions defined in the
# Deep RL Course Unit 2 notebook (they are not part of a published package).
model = load_from_hub(repo_id="zwangab91/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
naasirfar/distilbert-base-uncased-finetuned-emotion
|
naasirfar
| 2023-08-06T23:52:39Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-06T23:10:18Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9295
- name: F1
type: f1
value: 0.9294307352150123
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2137
- Accuracy: 0.9295
- F1: 0.9294
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8048 | 1.0 | 250 | 0.3007 | 0.908 | 0.9047 |
| 0.2455 | 2.0 | 500 | 0.2137 | 0.9295 | 0.9294 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.0.1+cu118
- Datasets 1.16.1
- Tokenizers 0.13.3
|
manyet1k/deberta-v3-base-finetuned-mcqa
|
manyet1k
| 2023-08-06T23:43:39Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-base",
"base_model:finetune:microsoft/deberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-01T06:09:37Z |
---
license: mit
base_model: microsoft/deberta-v3-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-v3-base-finetuned-mcqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-base-finetuned-mcqa
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3869
- Accuracy: 0.262
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3888 | 1.0 | 563 | 1.3869 | 0.262 |
| 1.3881 | 2.0 | 1126 | 1.3875 | 0.262 |
| 1.3877 | 3.0 | 1689 | 1.3871 | 0.236 |
| 1.3877 | 4.0 | 2252 | 1.3871 | 0.262 |
| 1.3873 | 5.0 | 2815 | 1.3867 | 0.236 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
bonzo1971/setfit-model
|
bonzo1971
| 2023-08-06T23:41:18Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-08-06T23:41:03Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# bonzo1971/setfit-model
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("bonzo1971/setfit-model")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
vurdenko/ppo-LunarLander-v2
|
vurdenko
| 2023-08-06T23:18:29Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-06T22:12:47Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 267.01 +/- 16.41
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual `huggingface_sb3` naming convention and may differ for this repository):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub; the filename below is assumed.
checkpoint = load_from_hub("vurdenko/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
manyet1k/roberta-base-finetuned-projectile
|
manyet1k
| 2023-08-06T23:13:37Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-06T22:23:45Z |
---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-base-finetuned-projectile
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-projectile
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3867
- Accuracy: 0.262
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
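These settings translate into a `TrainingArguments` object roughly as follows (a sketch with an assumed `output_dir`, not the original script):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="roberta-base-finetuned-projectile",  # assumed name
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```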
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3906 | 1.0 | 563 | 1.3867 | 0.236 |
| 1.3888 | 2.0 | 1126 | 1.3902 | 0.236 |
| 1.3876 | 3.0 | 1689 | 1.3874 | 0.236 |
| 1.388 | 4.0 | 2252 | 1.3867 | 0.262 |
| 1.3871 | 5.0 | 2815 | 1.3870 | 0.236 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
harshV27/my-falcon-7b
|
harshV27
| 2023-08-06T23:04:54Z | 0 | 0 |
peft
|
[
"peft",
"pytorch",
"falcon",
"custom_code",
"region:us"
] | null | 2023-08-06T14:37:51Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
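A minimal sketch of loading the adapter with this quantization config; the base model id is an assumption inferred from the repository name, since the card does not state it:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Reproduces the bitsandbytes config listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b",  # assumed base model
    quantization_config=bnb_config,
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(base, "harshV27/my-falcon-7b")
```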
### Framework versions
- PEFT 0.5.0.dev0
|
HydraLM/Nous-Hermes-llama-2-7b_7b_cluster031_partitioned_v3_standardized_031
|
HydraLM
| 2023-08-06T23:00:46Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T18:17:09Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
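A minimal loading sketch under the same quantization settings; the base model id is an assumption inferred from the repository name:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Nous-Hermes-llama-2-7b",  # assumed base model
    quantization_config=bnb_config,
)
model = PeftModel.from_pretrained(
    base, "HydraLM/Nous-Hermes-llama-2-7b_7b_cluster031_partitioned_v3_standardized_031"
)
```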
### Framework versions
- PEFT 0.4.0
|
ailabturkiye/ToronKaracaoglu
|
ailabturkiye
| 2023-08-06T22:58:14Z | 0 | 0 | null |
[
"tr",
"license:openrail",
"region:us"
] | null | 2023-08-06T22:30:23Z |
---
license: openrail
language:
- tr
---
|
joelniklaus/legal-swiss-longformer-base
|
joelniklaus
| 2023-08-06T22:57:02Z | 22 | 2 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"longformer",
"fill-mask",
"multilingual",
"de",
"fr",
"it",
"dataset:MultiLegalPile",
"dataset:LEXTREME",
"dataset:LEXGLUE",
"arxiv:2306.02069",
"arxiv:2306.09237",
"arxiv:2301.13126",
"arxiv:2110.00976",
"license:cc",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-04-27T20:51:53Z |
---
license: cc
language:
- multilingual
- de
- fr
- it
tags:
- multilingual
datasets:
- MultiLegalPile
- LEXTREME
- LEXGLUE
---
# Model Card for joelito/legal-swiss-longformer-base
This model is a multilingual model pretrained on legal data. It is based on XLM-R ([base](https://huggingface.co/xlm-roberta-base) and [large](https://huggingface.co/xlm-roberta-large)). For pretraining we used [Multi Legal Pile](https://huggingface.co/datasets/joelito/Multi_Legal_Pile) ([Niklaus et al. 2023](https://arxiv.org/abs/2306.02069)), a multilingual dataset from various legal sources covering 24 languages.
## Model Details
### Model Description
- **Developed by:** Joel Niklaus: [huggingface](https://huggingface.co/joelito); [email](mailto:joel.niklaus.2@bfh.ch)
- **Model type:** Transformer-based language model (Longformer)
- **Language(s) (NLP):** de, fr, it
- **License:** CC BY-SA
## Uses
### Direct Use and Downstream Use
You can utilize the raw model for masked language modeling since we did not perform next sentence prediction. However, its main purpose is to be fine-tuned for downstream tasks.
It's important to note that this model is primarily designed for fine-tuning on tasks that rely on the entire sentence, potentially with masked elements, to make decisions. Examples of such tasks include sequence classification, token classification, or question answering. For text generation tasks, models like GPT-2 are more suitable.
Additionally, the model is specifically trained on legal data, aiming to deliver strong performance in that domain. Its performance may vary when applied to non-legal data.
### Out-of-Scope Use
For tasks such as text generation, you should look at models like GPT-2.
The model should not be used to intentionally create hostile or alienating environments for people. The model was not trained to be factual or true representations of people or events, and therefore using the models to generate such content is out-of-scope for the abilities of this model.
## Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
## How to Get Started with the Model
See [huggingface tutorials](https://huggingface.co/learn/nlp-course/chapter7/1?fw=pt). For masked word prediction see [this tutorial](https://huggingface.co/tasks/fill-mask).
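As a concrete starting point, here is a minimal fill-mask sketch (the example sentence is illustrative only; the tokenizer uses `<mask>` as its mask token):
```
from transformers import pipeline

# Predict the masked word with the fill-mask pipeline.
fill_mask = pipeline("fill-mask", model="joelito/legal-swiss-longformer-base")
print(fill_mask("Das Gericht weist die <mask> ab."))
```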
## Training Details
This model was pretrained on [Multi Legal Pile](https://huggingface.co/datasets/joelito/Multi_Legal_Pile) ([Niklaus et al. 2023](https://arxiv.org/abs/2306.02069)).
Our pretraining procedure includes the following key steps:
(a) Warm-starting: We initialize our models from the original XLM-R checkpoints ([base](https://huggingface.co/xlm-roberta-base) and [large](https://huggingface.co/xlm-roberta-large)) of [Conneau et al. (2019)](https://proceedings.neurips.cc/paper/2019/file/c04c19c2c2474dbf5f7ac4372c5b9af1-Paper.pdf) to benefit from a well-trained base.
(b) Tokenization: We train a new tokenizer of 128K BPEs to cover legal language better. However, we reuse the original XLM-R embeddings for lexically overlapping tokens and use random embeddings for the rest.
(c) Pretraining: We continue pretraining on Multi Legal Pile with batches of 512 samples for an additional 1M/500K steps for the base/large model. We use warm-up steps, a linearly increasing learning rate, and cosine decay scheduling. During the warm-up phase, only the embeddings are updated, and a higher masking rate and percentage of predictions based on masked tokens are used compared to [Devlin et al. (2019)](https://aclanthology.org/N19-1423).
(d) Sentence Sampling: We employ a sentence sampler with exponential smoothing to handle disparate token proportions across cantons and languages, preserving per-canton and language capacity.
(e) Mixed Cased Models: Our models cover both upper- and lowercase letters, similar to recently developed large PLMs.
(f) Long Context Training: To account for long contexts in legal documents, we train the base-size multilingual model on long contexts with windowed attention. This variant, named Legal-Swiss-LF-base, uses a 15% masking probability, increased learning rate, and similar settings to small-context models.
### Training Data
This model was pretrained on [Multi Legal Pile](https://huggingface.co/datasets/joelito/Multi_Legal_Pile) ([Niklaus et al. 2023](https://arxiv.org/abs/2306.02069)).
#### Preprocessing
For further details, see [Niklaus et al. 2023](https://arxiv.org/abs/2306.02069).
#### Training Hyperparameters
- Batch size: 512 samples
- Number of steps: 1M/500K for the base/large model
- Warm-up steps: first 5% of the total training steps
- Learning rate: linearly increasing up to 1e-4
- Word masking: increased masking rate of 20%/30% for the base/large model, respectively
## Evaluation
We compare joelito/legal-swiss-longformer-base with the other multilingual models.
The results are based on the text classification tasks presented in [Niklaus et al. (2023)](https://arxiv.org/abs/2306.09237) which are part of [LEXTREME](https://huggingface.co/datasets/joelito/lextreme).
We provide the arithmetic mean over three seeds (1, 2, 3) based on the macro-F1-score on the test set.
The highest values are in bold.
| \_name_or_path | SCP-BC | SCP-BF | SCP-CC | SCP-CF | SJPXL-C | SJPXL-F | SLAP-SC | SLAP-SF |
| :------------------------------------------------------------------------------------------------------ | :-------- | :-------- | :-------- | :-------- | :-------- | :-------- | :------- | :-------- |
| [ZurichNLP/swissbert-xlm-vocab](https://huggingface.co/ZurichNLP/swissbert-xlm-vocab) | 71.36 | 57.48 | 27.33 | 23.37 | 80.81 | 61.75 | 77.89 | 71.27 |
| [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) | 66.56 | 56.58 | 22.67 | 21.31 | 77.26 | 60.79 | 73.54 | 72.24 |
| [facebook/xmod-base](https://huggingface.co/facebook/xmod-base) | 70.35 | 58.16 | 23.87 | 19.57 | 80.55 | 60.84 | 73.16 | 69.03 |
| [joelito/legal-swiss-longformer-base](https://huggingface.co/joelito/legal-swiss-longformer-base) | **73.25** | **60.06** | **28.68** | 24.39 | 87.46 | **65.23** | 83.84 | 77.96 |
| [joelito/legal-swiss-roberta-base](https://huggingface.co/joelito/legal-swiss-roberta-base) | 72.41 | 59.31 | 25.99 | 23.27 | 87.48 | 64.16 | **86.8** | **81.56** |
| [joelito/legal-swiss-roberta-large](https://huggingface.co/joelito/legal-swiss-roberta-large) | 70.95 | 57.59 | 27.86 | 23.48 | **88.33** | 62.92 | 82.1 | 78.62 |
| [microsoft/Multilingual-MiniLM-L12-H384](https://huggingface.co/microsoft/Multilingual-MiniLM-L12-H384) | 67.29 | 56.56 | 24.23 | 14.9 | 79.52 | 58.29 | 63.03 | 67.57 |
| [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) | 72.01 | 57.59 | 22.93 | **25.18** | 79.41 | 60.89 | 67.64 | 74.13 |
| [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) | 68.55 | 58.48 | 25.66 | 21.52 | 80.98 | 61.45 | 79.3 | 74.47 |
| [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) | 69.5 | 58.15 | 27.9 | 22.05 | 82.19 | 61.24 | 81.09 | 71.82 |
For more detailed insights into the performance on downstream tasks, such as [LEXTREME](https://huggingface.co/datasets/joelito/lextreme) ([Niklaus et al. 2023](https://arxiv.org/abs/2301.13126)) or [LEXGLUE](https://huggingface.co/datasets/lex_glue) ([Chalkidis et al. 2021](https://arxiv.org/abs/2110.00976)), we refer to the results presented in Niklaus et al. (2023) [1](https://arxiv.org/abs/2306.02069), [2](https://arxiv.org/abs/2306.09237).
### Model Architecture and Objective
It is a Longformer model (a RoBERTa-derived architecture with windowed attention). Run the following code to view the architecture:
```
from transformers import AutoModel
model = AutoModel.from_pretrained('joelito/legal-swiss-longformer-base')
print(model)
LongformerModel(
(embeddings): LongformerEmbeddings(
(word_embeddings): Embedding(128000, 768, padding_idx=0)
(position_embeddings): Embedding(4098, 768, padding_idx=0)
(token_type_embeddings): Embedding(1, 768)
(LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(encoder): LongformerEncoder(
(layer): ModuleList(
(0-11): 12 x LongformerLayer(
(attention): LongformerAttention(
(self): LongformerSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(query_global): Linear(in_features=768, out_features=768, bias=True)
(key_global): Linear(in_features=768, out_features=768, bias=True)
(value_global): Linear(in_features=768, out_features=768, bias=True)
)
(output): LongformerSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): LongformerIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
(intermediate_act_fn): GELUActivation()
)
(output): LongformerOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
)
(pooler): LongformerPooler(
(dense): Linear(in_features=768, out_features=768, bias=True)
(activation): Tanh()
)
)
```
### Compute Infrastructure
Google TPU.
#### Hardware
Google TPU v3-8
#### Software
pytorch, transformers.
## Citation
```
@misc{rasiah2023scale,
title={SCALE: Scaling up the Complexity for Advanced Language Model Evaluation},
author={Vishvaksenan Rasiah and Ronja Stern and Veton Matoshi and Matthias Stürmer and Ilias Chalkidis and Daniel E. Ho and Joel Niklaus},
year={2023},
eprint={2306.09237},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@article{Niklaus2023MultiLegalPileA6,
title={MultiLegalPile: A 689GB Multilingual Legal Corpus},
author={Joel Niklaus and Veton Matoshi and Matthias Sturmer and Ilias Chalkidis and Daniel E. Ho},
journal={ArXiv},
year={2023},
volume={abs/2306.02069}
}
```
## Model Card Authors
Joel Niklaus: [huggingface](https://huggingface.co/joelito); [email](mailto:joel.niklaus.2@bfh.ch)
Veton Matoshi: [huggingface](https://huggingface.co/kapllan); [email](mailto:msv3@bfh.ch)
## Model Card Contact
Joel Niklaus: [huggingface](https://huggingface.co/joelito); [email](mailto:joel.niklaus.2@bfh.ch)
Veton Matoshi: [huggingface](https://huggingface.co/kapllan); [email](mailto:msv3@bfh.ch)
|
joelniklaus/legal-english-longformer-base
|
joelniklaus
| 2023-08-06T22:55:40Z | 0 | 2 | null |
[
"en",
"dataset:MultiLegalPile",
"dataset:LEXTREME",
"dataset:LEXGLUE",
"arxiv:2306.02069",
"arxiv:2301.13126",
"arxiv:2110.00976",
"arxiv:2306.09237",
"license:cc",
"region:us"
] | null | 2023-04-27T06:52:14Z |
---
license: cc
language:
- en
datasets:
- MultiLegalPile
- LEXTREME
- LEXGLUE
---
# Model Card for joelito/legal-english-longformer-base
This model is a monolingual model pretrained on English legal data. It is based on XLM-R ([base](https://huggingface.co/xlm-roberta-base) and [large](https://huggingface.co/xlm-roberta-large)). For pretraining we used the English portion of [Multi Legal Pile](https://huggingface.co/datasets/joelito/Multi_Legal_Pile) ([Niklaus et al. 2023](https://arxiv.org/abs/2306.02069)), a multilingual dataset from various legal sources covering 24 languages.
## Model Details
### Model Description
- **Developed by:** Joel Niklaus: [huggingface](https://huggingface.co/joelito); [email](mailto:joel.niklaus.2@bfh.ch)
- **Model type:** Transformer-based language model (Longformer)
- **Language(s) (NLP):** en
- **License:** CC BY-SA
## Uses
### Direct Use and Downstream Use
You can utilize the raw model for masked language modeling since we did not perform next sentence prediction. However, its main purpose is to be fine-tuned for downstream tasks.
It's important to note that this model is primarily designed for fine-tuning on tasks that rely on the entire sentence, potentially with masked elements, to make decisions. Examples of such tasks include sequence classification, token classification, or question answering. For text generation tasks, models like GPT-2 are more suitable.
Additionally, the model is specifically trained on legal data, aiming to deliver strong performance in that domain. Its performance may vary when applied to non-legal data.
### Out-of-Scope Use
For tasks such as text generation, you should look at models like GPT-2.
The model should not be used to intentionally create hostile or alienating environments for people. The model was not trained to be factual or true representations of people or events, and therefore using the models to generate such content is out-of-scope for the abilities of this model.
## Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
## How to Get Started with the Model
See [huggingface tutorials](https://huggingface.co/learn/nlp-course/chapter7/1?fw=pt). For masked word prediction see [this tutorial](https://huggingface.co/tasks/fill-mask).
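A minimal fill-mask sketch as a starting point (the example sentence is illustrative only; the tokenizer uses `<mask>` as its mask token):
```
from transformers import pipeline

# Predict the masked word with the fill-mask pipeline.
fill_mask = pipeline("fill-mask", model="joelito/legal-english-longformer-base")
print(fill_mask("The court dismissed the <mask>."))
```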
## Training Details
This model was pretrained on [Multi Legal Pile](https://huggingface.co/datasets/joelito/Multi_Legal_Pile) ([Niklaus et al. 2023](https://arxiv.org/abs/2306.02069)).
Our pretraining procedure includes the following key steps:
(a) Warm-starting: We initialize our models from the original XLM-R checkpoints ([base](https://huggingface.co/xlm-roberta-base) and [large](https://huggingface.co/xlm-roberta-large)) of [Conneau et al. (2019)](https://proceedings.neurips.cc/paper/2019/file/c04c19c2c2474dbf5f7ac4372c5b9af1-Paper.pdf) to benefit from a well-trained base.
(b) Tokenization: We train a new tokenizer of 128K BPEs to cover legal language better. However, we reuse the original XLM-R embeddings for lexically overlapping tokens and use random embeddings for the rest.
(c) Pretraining: We continue pretraining on Multi Legal Pile with batches of 512 samples for an additional 1M/500K steps for the base/large model. We use warm-up steps, a linearly increasing learning rate, and cosine decay scheduling. During the warm-up phase, only the embeddings are updated, and a higher masking rate and percentage of predictions based on masked tokens are used compared to [Devlin et al. (2019)](https://aclanthology.org/N19-1423).
(d) Sentence Sampling: We employ a sentence sampler with exponential smoothing to handle disparate token proportions across cantons and languages, preserving per-canton and language capacity.
(e) Mixed Cased Models: Our models cover both upper- and lowercase letters, similar to recently developed large PLMs.
(f) Long Context Training: To account for long contexts in legal documents, we train the base-size multilingual model on long contexts with windowed attention. This variant, named Legal-Swiss-LF-base, uses a 15% masking probability, increased learning rate, and similar settings to small-context models.
### Training Data
This model was pretrained on [Multi Legal Pile](https://huggingface.co/datasets/joelito/Multi_Legal_Pile) ([Niklaus et al. 2023](https://arxiv.org/abs/2306.02069)).
#### Preprocessing
For further details, see [Niklaus et al. 2023](https://arxiv.org/abs/2306.02069).
#### Training Hyperparameters
- Batch size: 512 samples
- Number of steps: 1M/500K for the base/large model
- Warm-up steps: first 5% of the total training steps
- Learning rate: linearly increasing up to 1e-4
- Word masking: increased masking rate of 20%/30% for the base/large model, respectively
## Evaluation
For performance on downstream tasks, such as [LEXTREME](https://huggingface.co/datasets/joelito/lextreme) ([Niklaus et al. 2023](https://arxiv.org/abs/2301.13126)) or [LEXGLUE](https://huggingface.co/datasets/lex_glue) ([Chalkidis et al. 2021](https://arxiv.org/abs/2110.00976)), we refer to the results presented in Niklaus et al. (2023) [1](https://arxiv.org/abs/2306.02069), [2](https://arxiv.org/abs/2306.09237).
### Model Architecture and Objective
It is a Longformer model (a RoBERTa-derived architecture with windowed attention). Run the following code to view the architecture:
```
from transformers import AutoModel
model = AutoModel.from_pretrained('joelito/legal-english-longformer-base')
print(model)
LongformerModel(
(embeddings): LongformerEmbeddings(
(word_embeddings): Embedding(128000, 768, padding_idx=0)
(position_embeddings): Embedding(4098, 768, padding_idx=0)
(token_type_embeddings): Embedding(1, 768)
(LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(encoder): LongformerEncoder(
(layer): ModuleList(
(0-11): 12 x LongformerLayer(
(attention): LongformerAttention(
(self): LongformerSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(query_global): Linear(in_features=768, out_features=768, bias=True)
(key_global): Linear(in_features=768, out_features=768, bias=True)
(value_global): Linear(in_features=768, out_features=768, bias=True)
)
(output): LongformerSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): LongformerIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
(intermediate_act_fn): GELUActivation()
)
(output): LongformerOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
)
(pooler): LongformerPooler(
(dense): Linear(in_features=768, out_features=768, bias=True)
(activation): Tanh()
)
)
```
### Compute Infrastructure
Google TPU.
#### Hardware
Google TPU v3-8
#### Software
pytorch, transformers.
## Citation
```
@article{Niklaus2023MultiLegalPileA6,
title={MultiLegalPile: A 689GB Multilingual Legal Corpus},
author={Joel Niklaus and Veton Matoshi and Matthias Sturmer and Ilias Chalkidis and Daniel E. Ho},
journal={ArXiv},
year={2023},
volume={abs/2306.02069}
}
```
## Model Card Authors
Joel Niklaus: [huggingface](https://huggingface.co/joelito); [email](mailto:joel.niklaus.2@bfh.ch)
Veton Matoshi: [huggingface](https://huggingface.co/kapllan); [email](mailto:msv3@bfh.ch)
## Model Card Contact
Joel Niklaus: [huggingface](https://huggingface.co/joelito); [email](mailto:joel.niklaus.2@bfh.ch)
Veton Matoshi: [huggingface](https://huggingface.co/kapllan); [email](mailto:msv3@bfh.ch)
|
joelniklaus/legal-english-roberta-large
|
joelniklaus
| 2023-08-06T22:55:38Z | 15 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"en",
"dataset:MultiLegalPile",
"dataset:LEXTREME",
"dataset:LEXGLUE",
"arxiv:2306.02069",
"arxiv:2301.13126",
"arxiv:2110.00976",
"arxiv:2306.09237",
"license:cc",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-02-13T06:38:24Z |
---
language:
- en
license: cc
datasets:
- MultiLegalPile
- LEXTREME
- LEXGLUE
---
# Model Card for joelito/legal-english-roberta-large
This model is a monolingual model pretrained on English legal data. It is based on XLM-R ([base](https://huggingface.co/xlm-roberta-base) and [large](https://huggingface.co/xlm-roberta-large)). For pretraining we used the English portion of [Multi Legal Pile](https://huggingface.co/datasets/joelito/Multi_Legal_Pile) ([Niklaus et al. 2023](https://arxiv.org/abs/2306.02069)), a multilingual dataset from various legal sources covering 24 languages.
## Model Details
### Model Description
- **Developed by:** Joel Niklaus: [huggingface](https://huggingface.co/joelito); [email](mailto:joel.niklaus.2@bfh.ch)
- **Model type:** Transformer-based language model (RoBERTa)
- **Language(s) (NLP):** en
- **License:** CC BY-SA
## Uses
### Direct Use and Downstream Use
You can utilize the raw model for masked language modeling since we did not perform next sentence prediction. However, its main purpose is to be fine-tuned for downstream tasks.
It's important to note that this model is primarily designed for fine-tuning on tasks that rely on the entire sentence, potentially with masked elements, to make decisions. Examples of such tasks include sequence classification, token classification, or question answering. For text generation tasks, models like GPT-2 are more suitable.
Additionally, the model is specifically trained on legal data, aiming to deliver strong performance in that domain. Its performance may vary when applied to non-legal data.
### Out-of-Scope Use
For tasks such as text generation, you should look at models like GPT-2.
The model should not be used to intentionally create hostile or alienating environments for people. The model was not trained to be factual or true representations of people or events, and therefore using the models to generate such content is out-of-scope for the abilities of this model.
## Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
## How to Get Started with the Model
See [huggingface tutorials](https://huggingface.co/learn/nlp-course/chapter7/1?fw=pt). For masked word prediction see [this tutorial](https://huggingface.co/tasks/fill-mask).
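For a quick start, a minimal fill-mask sketch (the example sentence is illustrative only; the tokenizer uses `<mask>` as its mask token):
```
from transformers import pipeline

# Predict the masked word with the fill-mask pipeline.
fill_mask = pipeline("fill-mask", model="joelito/legal-english-roberta-large")
print(fill_mask("The parties entered into a binding <mask>."))
```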
## Training Details
This model was pretrained on [Multi Legal Pile](https://huggingface.co/datasets/joelito/Multi_Legal_Pile) ([Niklaus et al. 2023](https://arxiv.org/abs/2306.02069)).
Our pretraining procedure includes the following key steps:
(a) Warm-starting: We initialize our models from the original XLM-R checkpoints ([base](https://huggingface.co/xlm-roberta-base) and [large](https://huggingface.co/xlm-roberta-large)) of [Conneau et al. (2019)](https://proceedings.neurips.cc/paper/2019/file/c04c19c2c2474dbf5f7ac4372c5b9af1-Paper.pdf) to benefit from a well-trained base.
(b) Tokenization: We train a new tokenizer of 128K BPEs to cover legal language better. However, we reuse the original XLM-R embeddings for lexically overlapping tokens and use random embeddings for the rest.
(c) Pretraining: We continue pretraining on Multi Legal Pile with batches of 512 samples for an additional 1M/500K steps for the base/large model. We use warm-up steps, a linearly increasing learning rate, and cosine decay scheduling. During the warm-up phase, only the embeddings are updated, and a higher masking rate and percentage of predictions based on masked tokens are used compared to [Devlin et al. (2019)](https://aclanthology.org/N19-1423).
(d) Sentence Sampling: We employ a sentence sampler with exponential smoothing to handle disparate token proportions across cantons and languages, preserving per-canton and language capacity.
(e) Mixed Cased Models: Our models cover both upper- and lowercase letters, similar to recently developed large PLMs.
(f) Long Context Training: To account for long contexts in legal documents, we train the base-size multilingual model on long contexts with windowed attention. This variant, named Legal-Swiss-LF-base, uses a 15% masking probability, increased learning rate, and similar settings to small-context models.
### Training Data
This model was pretrained on [Multi Legal Pile](https://huggingface.co/datasets/joelito/Multi_Legal_Pile) ([Niklaus et al. 2023](https://arxiv.org/abs/2306.02069)).
#### Preprocessing
For further details, see [Niklaus et al. 2023](https://arxiv.org/abs/2306.02069).
#### Training Hyperparameters
- Batch size: 512 samples
- Number of steps: 1M/500K for the base/large model
- Warm-up steps: first 5% of the total training steps
- Learning rate: linearly increasing up to 1e-4
- Word masking: increased masking rate of 20%/30% for the base/large model, respectively
## Evaluation
For further insights into the evaluation, we refer to the [trainer state](https://huggingface.co/joelito/legal-xlm-roberta-large/blob/main/last-checkpoint/trainer_state.json). Additional information is available in the [tensorboard](https://huggingface.co/joelito/legal-xlm-roberta-large/tensorboard).
For performance on downstream tasks, such as [LEXTREME](https://huggingface.co/datasets/joelito/lextreme) ([Niklaus et al. 2023](https://arxiv.org/abs/2301.13126)) or [LEXGLUE](https://huggingface.co/datasets/lex_glue) ([Chalkidis et al. 2021](https://arxiv.org/abs/2110.00976)), we refer to the results presented in Niklaus et al. (2023) [1](https://arxiv.org/abs/2306.02069), [2](https://arxiv.org/abs/2306.09237).
### Model Architecture and Objective
It is a RoBERTa-based model. Run the following code to view the architecture:
```
from transformers import AutoModel
model = AutoModel.from_pretrained('joelito/legal-english-roberta-large')
print(model)
RobertaModel(
(embeddings): RobertaEmbeddings(
(word_embeddings): Embedding(128000, 1024, padding_idx=0)
(position_embeddings): Embedding(514, 1024, padding_idx=0)
(token_type_embeddings): Embedding(1, 1024)
(LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(encoder): RobertaEncoder(
(layer): ModuleList(
(0-23): 24 x RobertaLayer(
(attention): RobertaAttention(
(self): RobertaSelfAttention(
(query): Linear(in_features=1024, out_features=1024, bias=True)
(key): Linear(in_features=1024, out_features=1024, bias=True)
(value): Linear(in_features=1024, out_features=1024, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): RobertaSelfOutput(
(dense): Linear(in_features=1024, out_features=1024, bias=True)
(LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): RobertaIntermediate(
(dense): Linear(in_features=1024, out_features=4096, bias=True)
(intermediate_act_fn): GELUActivation()
)
(output): RobertaOutput(
(dense): Linear(in_features=4096, out_features=1024, bias=True)
(LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
)
(pooler): RobertaPooler(
(dense): Linear(in_features=1024, out_features=1024, bias=True)
(activation): Tanh()
)
)
```
### Compute Infrastructure
Google TPU.
#### Hardware
Google TPU v3-8
#### Software
pytorch, transformers.
## Citation
```
@article{Niklaus2023MultiLegalPileA6,
title={MultiLegalPile: A 689GB Multilingual Legal Corpus},
author={Joel Niklaus and Veton Matoshi and Matthias Sturmer and Ilias Chalkidis and Daniel E. Ho},
journal={ArXiv},
year={2023},
volume={abs/2306.02069}
}
```
## Model Card Authors
Joel Niklaus: [huggingface](https://huggingface.co/joelito); [email](mailto:joel.niklaus.2@bfh.ch)
Veton Matoshi: [huggingface](https://huggingface.co/kapllan); [email](mailto:msv3@bfh.ch)
## Model Card Contact
Joel Niklaus: [huggingface](https://huggingface.co/joelito); [email](mailto:joel.niklaus.2@bfh.ch)
Veton Matoshi: [huggingface](https://huggingface.co/kapllan); [email](mailto:msv3@bfh.ch)
|
joelniklaus/legal-english-roberta-base
|
joelniklaus
| 2023-08-06T22:55:36Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"en",
"arxiv:2306.02069",
"arxiv:2301.13126",
"arxiv:2110.00976",
"arxiv:2306.09237",
"license:cc",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-02-13T06:38:38Z |
---
license: cc
language:
- en
---
# Model Card for joelito/legal-english-roberta-base
This model is a monolingual model pretrained on English legal data. It is based on XLM-R ([base](https://huggingface.co/xlm-roberta-base) and [large](https://huggingface.co/xlm-roberta-large)). For pretraining we used the English portion of [Multi Legal Pile](https://huggingface.co/datasets/joelito/Multi_Legal_Pile) ([Niklaus et al. 2023](https://arxiv.org/abs/2306.02069)), a multilingual dataset from various legal sources covering 24 languages.
## Model Details
### Model Description
- **Developed by:** Joel Niklaus: [huggingface](https://huggingface.co/joelito); [email](mailto:joel.niklaus.2@bfh.ch)
- **Model type:** Transformer-based language model (RoBERTa)
- **Language(s) (NLP):** en
- **License:** CC BY-SA
## Uses
### Direct Use and Downstream Use
You can utilize the raw model for masked language modeling since we did not perform next sentence prediction. However, its main purpose is to be fine-tuned for downstream tasks.
It's important to note that this model is primarily designed for fine-tuning on tasks that rely on the entire sentence, potentially with masked elements, to make decisions. Examples of such tasks include sequence classification, token classification, or question answering. For text generation tasks, models like GPT-2 are more suitable.
Additionally, the model is specifically trained on legal data, aiming to deliver strong performance in that domain. Its performance may vary when applied to non-legal data.
### Out-of-Scope Use
For tasks such as text generation, you should look at models like GPT-2.
The model should not be used to intentionally create hostile or alienating environments for people. The model was not trained to be factual or true representations of people or events, and therefore using the models to generate such content is out-of-scope for the abilities of this model.
## Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
## How to Get Started with the Model
See [huggingface tutorials](https://huggingface.co/learn/nlp-course/chapter7/1?fw=pt). For masked word prediction see [this tutorial](https://huggingface.co/tasks/fill-mask).
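A minimal fill-mask sketch (the example sentence is illustrative only; the tokenizer uses `<mask>` as its mask token):
```
from transformers import pipeline

# Predict the masked word with the fill-mask pipeline.
fill_mask = pipeline("fill-mask", model="joelito/legal-english-roberta-base")
print(fill_mask("The defendant was found <mask> by the jury."))
```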
## Training Details
This model was pretrained on [Multi Legal Pile](https://huggingface.co/datasets/joelito/Multi_Legal_Pile) ([Niklaus et al. 2023](https://arxiv.org/abs/2306.02069)).
Our pretraining procedure includes the following key steps:
(a) Warm-starting: We initialize our models from the original XLM-R checkpoints ([base](https://huggingface.co/xlm-roberta-base) and [large](https://huggingface.co/xlm-roberta-large)) of [Conneau et al. (2019)](https://proceedings.neurips.cc/paper/2019/file/c04c19c2c2474dbf5f7ac4372c5b9af1-Paper.pdf) to benefit from a well-trained base.
(b) Tokenization: We train a new tokenizer of 128K BPEs to cover legal language better. However, we reuse the original XLM-R embeddings for lexically overlapping tokens and use random embeddings for the rest.
(c) Pretraining: We continue pretraining on Multi Legal Pile with batches of 512 samples for an additional 1M/500K steps for the base/large model. We use warm-up steps, a linearly increasing learning rate, and cosine decay scheduling. During the warm-up phase, only the embeddings are updated, and a higher masking rate and percentage of predictions based on masked tokens are used compared to [Devlin et al. (2019)](https://aclanthology.org/N19-1423).
(d) Sentence Sampling: We employ a sentence sampler with exponential smoothing to handle disparate token proportions across cantons and languages, preserving per-canton and language capacity.
(e) Mixed Cased Models: Our models cover both upper- and lowercase letters, similar to recently developed large PLMs.
(f) Long Context Training: To account for long contexts in legal documents, we train the base-size multilingual model on long contexts with windowed attention. This variant, named Legal-Swiss-LF-base, uses a 15% masking probability, increased learning rate, and similar settings to small-context models.
### Training Data
This model was pretrained on [Multi Legal Pile](https://huggingface.co/datasets/joelito/Multi_Legal_Pile) ([Niklaus et al. 2023](https://arxiv.org/abs/2306.02069)).
#### Preprocessing
For further details, see [Niklaus et al. 2023](https://arxiv.org/abs/2306.02069).
#### Training Hyperparameters
- Batch size: 512 samples
- Number of steps: 1M/500K for the base/large model
- Warm-up steps: first 5% of the total training steps
- Learning rate: linearly increasing up to 1e-4
- Word masking: increased masking rate of 20%/30% for the base/large model, respectively
## Evaluation
For further insights into the evaluation, we refer to the [trainer state](https://huggingface.co/joelito/legal-swiss-roberta-base/blob/main/last-checkpoint/trainer_state.json). Additional information is available in the [tensorboard](https://huggingface.co/joelito/legal-swiss-roberta-base/tensorboard).
For performance on downstream tasks, such as [LEXTREME](https://huggingface.co/datasets/joelito/lextreme) ([Niklaus et al. 2023](https://arxiv.org/abs/2301.13126)) or [LEXGLUE](https://huggingface.co/datasets/lex_glue) ([Chalkidis et al. 2021](https://arxiv.org/abs/2110.00976)), we refer to the results presented in Niklaus et al. (2023) [1](https://arxiv.org/abs/2306.02069), [2](https://arxiv.org/abs/2306.09237).
### Model Architecture and Objective
It is a RoBERTa-based model. Run the following code to view the architecture:
```
from transformers import AutoModel
model = AutoModel.from_pretrained('joelito/legal-english-roberta-base')
print(model)
RobertaModel(
(embeddings): RobertaEmbeddings(
(word_embeddings): Embedding(128000, 768, padding_idx=0)
(position_embeddings): Embedding(514, 768, padding_idx=0)
(token_type_embeddings): Embedding(1, 768)
(LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(encoder): RobertaEncoder(
(layer): ModuleList(
(0-11): 12 x RobertaLayer(
(attention): RobertaAttention(
(self): RobertaSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): RobertaSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): RobertaIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
(intermediate_act_fn): GELUActivation()
)
(output): RobertaOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
)
(pooler): RobertaPooler(
(dense): Linear(in_features=768, out_features=768, bias=True)
(activation): Tanh()
)
)
```
### Compute Infrastructure
Google TPU.
#### Hardware
Google TPU v3-8
#### Software
pytorch, transformers.
## Citation
```
@article{Niklaus2023MultiLegalPileA6,
title={MultiLegalPile: A 689GB Multilingual Legal Corpus},
author={Joel Niklaus and Veton Matoshi and Matthias Sturmer and Ilias Chalkidis and Daniel E. Ho},
journal={ArXiv},
year={2023},
volume={abs/2306.02069}
}
```
## Model Card Authors
Joel Niklaus: [huggingface](https://huggingface.co/joelito); [email](mailto:joel.niklaus.2@bfh.ch)
Veton Matoshi: [huggingface](https://huggingface.co/kapllan); [email](mailto:msv3@bfh.ch)
## Model Card Contact
Joel Niklaus: [huggingface](https://huggingface.co/joelito); [email](mailto:joel.niklaus.2@bfh.ch)
Veton Matoshi: [huggingface](https://huggingface.co/kapllan); [email](mailto:msv3@bfh.ch)
|
joelniklaus/legal-xlm-roberta-large
|
joelniklaus
| 2023-08-06T22:55:31Z | 119 | 4 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"roberta",
"fill-mask",
"multilingual",
"bg",
"cs",
"da",
"de",
"el",
"en",
"es",
"et",
"fi",
"fr",
"ga",
"hr",
"hu",
"it",
"lt",
"lv",
"mt",
"nl",
"pl",
"pt",
"ro",
"sk",
"sl",
"sv",
"dataset:MultiLegalPile",
"dataset:LEXTREME",
"dataset:LEXGLUE",
"arxiv:2306.02069",
"arxiv:2301.13126",
"arxiv:2110.00976",
"arxiv:2306.09237",
"license:cc",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-12-30T18:43:43Z |
---
language:
- multilingual
- bg
- cs
- da
- de
- el
- en
- es
- et
- fi
- fr
- ga
- hr
- hu
- it
- lt
- lv
- mt
- nl
- pl
- pt
- ro
- sk
- sl
- sv
tags:
- multilingual
license: cc
datasets:
- MultiLegalPile
- LEXTREME
- LEXGLUE
---
# Model Card for joelito/legal-xlm-roberta-large
This model is a multilingual model pretrained on legal data. It is based on XLM-R ([base](https://huggingface.co/xlm-roberta-base) and [large](https://huggingface.co/xlm-roberta-large)). For pretraining we used [Multi Legal Pile](https://huggingface.co/datasets/joelito/Multi_Legal_Pile) ([Niklaus et al. 2023](https://arxiv.org/abs/2306.02069)), a multilingual dataset from various legal sources covering 24 languages.
## Model Details
### Model Description
- **Developed by:** Joel Niklaus: [huggingface](https://huggingface.co/joelito); [email](mailto:joel.niklaus.2@bfh.ch)
- **Model type:** Transformer-based language model (RoBERTa)
- **Language(s) (NLP):** bg, cs, da, de, el, en, es, et, fi, fr, ga, hr, hu, it, lt, lv, mt, nl, pl, pt, ro, sk, sl, sv
- **License:** CC BY-SA
## Uses
### Direct Use and Downstream Use
You can utilize the raw model for masked language modeling since we did not perform next sentence prediction. However, its main purpose is to be fine-tuned for downstream tasks.
It's important to note that this model is primarily designed for fine-tuning on tasks that rely on the entire sentence, potentially with masked elements, to make decisions. Examples of such tasks include sequence classification, token classification, or question answering. For text generation tasks, models like GPT-2 are more suitable.
Additionally, the model is specifically trained on legal data, aiming to deliver strong performance in that domain. Its performance may vary when applied to non-legal data.
### Out-of-Scope Use
For tasks such as text generation, you should look at models like GPT-2.
The model should not be used to intentionally create hostile or alienating environments for people. The model was not trained to be factual or true representations of people or events, and therefore using the models to generate such content is out-of-scope for the abilities of this model.
## Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
## How to Get Started with the Model
See [huggingface tutorials](https://huggingface.co/learn/nlp-course/chapter7/1?fw=pt). For masked word prediction see [this tutorial](https://huggingface.co/tasks/fill-mask).
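A minimal fill-mask sketch as a starting point (the example sentence, in English here, is illustrative only; the tokenizer uses `<mask>` as its mask token):
```
from transformers import pipeline

# Predict the masked word with the fill-mask pipeline.
fill_mask = pipeline("fill-mask", model="joelito/legal-xlm-roberta-large")
print(fill_mask("The applicant lodged an <mask> against the decision."))
```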
## Training Details
This model was pretrained on [Multi Legal Pile](https://huggingface.co/datasets/joelito/Multi_Legal_Pile) ([Niklaus et al. 2023](https://arxiv.org/abs/2306.02069)).
Our pretraining procedure includes the following key steps:
(a) Warm-starting: We initialize our models from the original XLM-R checkpoints ([base](https://huggingface.co/xlm-roberta-base) and [large](https://huggingface.co/xlm-roberta-large)) of [Conneau et al. (2019)](https://proceedings.neurips.cc/paper/2019/file/c04c19c2c2474dbf5f7ac4372c5b9af1-Paper.pdf) to benefit from a well-trained base.
(b) Tokenization: We train a new tokenizer of 128K BPEs to cover legal language better. However, we reuse the original XLM-R embeddings for lexically overlapping tokens and use random embeddings for the rest.
(c) Pretraining: We continue pretraining on Multi Legal Pile with batches of 512 samples for an additional 1M/500K steps for the base/large model. We use warm-up steps, a linearly increasing learning rate, and cosine decay scheduling. During the warm-up phase, only the embeddings are updated, and a higher masking rate and percentage of predictions based on masked tokens are used compared to [Devlin et al. (2019)](https://aclanthology.org/N19-1423).
(d) Sentence Sampling: We employ a sentence sampler with exponential smoothing to handle disparate token proportions across cantons and languages, preserving per-canton and language capacity.
(e) Mixed Cased Models: Our models cover both upper- and lowercase letters, similar to recently developed large PLMs.
(f) Long Context Training: To account for long contexts in legal documents, we train the base-size multilingual model on long contexts with windowed attention. This variant, named Legal-Swiss-LF-base, uses a 15% masking probability, increased learning rate, and similar settings to small-context models.
### Training Data
This model was pretrained on [Multi Legal Pile](https://huggingface.co/datasets/joelito/Multi_Legal_Pile) ([Niklaus et al. 2023](https://arxiv.org/abs/2306.02069)).
#### Preprocessing
For further details, see [Niklaus et al. 2023](https://arxiv.org/abs/2306.02069).
#### Training Hyperparameters
- Batch size: 512 samples
- Number of steps: 1M/500K for the base/large model
- Warm-up steps: first 5% of the total training steps
- Learning rate: linearly increasing up to 1e-4
- Word masking: increased masking rate of 20%/30% for the base/large model, respectively
## Evaluation
For further insights into the evaluation, we refer to the [trainer state](https://huggingface.co/joelito/legal-xlm-roberta-large/blob/main/last-checkpoint/trainer_state.json). Additional information is available in the [tensorboard](https://huggingface.co/joelito/legal-xlm-roberta-large/tensorboard).
For performance on downstream tasks, such as [LEXTREME](https://huggingface.co/datasets/joelito/lextreme) ([Niklaus et al. 2023](https://arxiv.org/abs/2301.13126)) or [LEXGLUE](https://huggingface.co/datasets/lex_glue) ([Chalkidis et al. 2021](https://arxiv.org/abs/2110.00976)), we refer to the results presented in Niklaus et al. (2023) [1](https://arxiv.org/abs/2306.02069), [2](https://arxiv.org/abs/2306.09237).
### Model Architecture and Objective
It is a RoBERTa-based model. Run the following code to view the architecture:
```
from transformers import AutoModel
model = AutoModel.from_pretrained('joelito/legal-xlm-roberta-large')
print(model)
RobertaModel(
(embeddings): RobertaEmbeddings(
(word_embeddings): Embedding(128000, 1024, padding_idx=0)
(position_embeddings): Embedding(514, 1024, padding_idx=0)
(token_type_embeddings): Embedding(1, 1024)
(LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(encoder): RobertaEncoder(
(layer): ModuleList(
(0-23): 24 x RobertaLayer(
(attention): RobertaAttention(
(self): RobertaSelfAttention(
(query): Linear(in_features=1024, out_features=1024, bias=True)
(key): Linear(in_features=1024, out_features=1024, bias=True)
(value): Linear(in_features=1024, out_features=1024, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): RobertaSelfOutput(
(dense): Linear(in_features=1024, out_features=1024, bias=True)
(LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): RobertaIntermediate(
(dense): Linear(in_features=1024, out_features=4096, bias=True)
(intermediate_act_fn): GELUActivation()
)
(output): RobertaOutput(
(dense): Linear(in_features=4096, out_features=1024, bias=True)
(LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
)
(pooler): RobertaPooler(
(dense): Linear(in_features=1024, out_features=1024, bias=True)
(activation): Tanh()
)
)
```
### Compute Infrastructure
Google TPU.
#### Hardware
Google TPU v3-8
#### Software
pytorch, transformers.
## Citation
```
@article{Niklaus2023MultiLegalPileA6,
title={MultiLegalPile: A 689GB Multilingual Legal Corpus},
author={Joel Niklaus and Veton Matoshi and Matthias Sturmer and Ilias Chalkidis and Daniel E. Ho},
journal={ArXiv},
year={2023},
volume={abs/2306.02069}
}
```
## Model Card Authors
Joel Niklaus: [huggingface](https://huggingface.co/joelito); [email](mailto:joel.niklaus.2@bfh.ch)
Veton Matoshi: [huggingface](https://huggingface.co/kapllan); [email](mailto:msv3@bfh.ch)
## Model Card Contact
Joel Niklaus: [huggingface](https://huggingface.co/joelito); [email](mailto:joel.niklaus.2@bfh.ch)
Veton Matoshi: [huggingface](https://huggingface.co/kapllan); [email](mailto:msv3@bfh.ch)
|
HydraLM/Nous-Hermes-llama-2-7b_7b_cluster030_partitioned_v3_standardized_030
|
HydraLM
| 2023-08-06T22:55:06Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T17:53:43Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
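A minimal loading sketch with the quantization settings above; the base model id is an assumption inferred from the repository name:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Same quantization settings as listed above.
base = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Nous-Hermes-llama-2-7b",  # assumed base model
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_use_double_quant=True,
        bnb_4bit_compute_dtype=torch.bfloat16,
    ),
)
model = PeftModel.from_pretrained(
    base, "HydraLM/Nous-Hermes-llama-2-7b_7b_cluster030_partitioned_v3_standardized_030"
)
```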
### Framework versions
- PEFT 0.4.0
|
joelniklaus/legal-portuguese-roberta-base
|
joelniklaus
| 2023-08-06T22:55:00Z | 187 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"legal",
"pt",
"dataset:MultiLegalPile",
"dataset:LEXTREME",
"dataset:LEXGLUE",
"arxiv:2306.02069",
"arxiv:2301.13126",
"arxiv:2110.00976",
"arxiv:2306.09237",
"license:cc",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-02-13T06:39:06Z |
---
license: cc
datasets:
- MultiLegalPile
- LEXTREME
- LEXGLUE
language:
- pt
tags:
- legal
---
# Model Card for joelito/legal-portuguese-roberta-base
This model is a monolingual model pretrained on legal data. It is based on XLM-R ([base](https://huggingface.co/xlm-roberta-base) and [large](https://huggingface.co/xlm-roberta-large)). For pretraining we used the Portuguese portion of [Multi Legal Pile](https://huggingface.co/datasets/joelito/Multi_Legal_Pile) ([Niklaus et al. 2023](https://arxiv.org/abs/2306.02069)), a multilingual dataset from various legal sources covering 24 languages.
## Model Details
### Model Description
- **Developed by:** Joel Niklaus: [huggingface](https://huggingface.co/joelito); [email](mailto:joel.niklaus.2@bfh.ch)
- **Model type:** Transformer-based language model (RoBERTa)
- **Language(s) (NLP):** Portuguese
- **License:** CC BY-SA
## Uses
### Direct Use and Downstream Use
You can utilize the raw model for masked language modeling since we did not perform next sentence prediction. However, its main purpose is to be fine-tuned for downstream tasks.
It's important to note that this model is primarily designed for fine-tuning on tasks that rely on the entire sentence, potentially with masked elements, to make decisions. Examples of such tasks include sequence classification, token classification, or question answering. For text generation tasks, models like GPT-2 are more suitable.
Additionally, the model is specifically trained on legal data, aiming to deliver strong performance in that domain. Its performance may vary when applied to non-legal data.
### Out-of-Scope Use
For tasks such as text generation, you should look at models like GPT-2.
The model should not be used to intentionally create hostile or alienating environments for people. The model was not trained to be factual or true representations of people or events, and therefore using the models to generate such content is out-of-scope for the abilities of this model.
## Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
## How to Get Started with the Model
See [huggingface tutorials](https://huggingface.co/learn/nlp-course/chapter7/1?fw=pt). For masked word prediction see [this tutorial](https://huggingface.co/tasks/fill-mask).
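A minimal fill-mask sketch (the Portuguese example sentence is illustrative only; the tokenizer uses `<mask>` as its mask token):
```
from transformers import pipeline

# Predict the masked word with the fill-mask pipeline.
fill_mask = pipeline("fill-mask", model="joelito/legal-portuguese-roberta-base")
print(fill_mask("O contrato foi <mask> pelas partes."))
```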
## Training Details
This model was pretrained on [Multi Legal Pile](https://huggingface.co/datasets/joelito/Multi_Legal_Pile) ([Niklaus et al. 2023](https://arxiv.org/abs/2306.02069)).
Our pretraining procedure includes the following key steps:
(a) Warm-starting: We initialize our models from the original XLM-R checkpoints ([base](https://huggingface.co/xlm-roberta-base) and [large](https://huggingface.co/xlm-roberta-large)) of [Conneau et al. (2019)](https://proceedings.neurips.cc/paper/2019/file/c04c19c2c2474dbf5f7ac4372c5b9af1-Paper.pdf) to benefit from a well-trained base.
(b) Tokenization: We train a new tokenizer of 32K BPEs (128K for the multilingual models) to cover legal language better. However, we reuse the original XLM-R embeddings for lexically overlapping tokens and use random embeddings for the rest.
(c) Pretraining: We continue pretraining on Multi Legal Pile with batches of 512 samples for an additional 1M/500K steps for the base/large model. We use warm-up steps, a linearly increasing learning rate, and cosine decay scheduling. During the warm-up phase, only the embeddings are updated, and a higher masking rate and percentage of predictions based on masked tokens are used compared to [Devlin et al. (2019)](https://aclanthology.org/N19-1423).
(d) Sentence Sampling: We employ a sentence sampler with exponential smoothing to handle disparate token proportions across cantons and languages, preserving per-canton and language capacity.
(e) Mixed Cased Models: Our models cover both upper- and lowercase letters, similar to recently developed large PLMs.
### Training Data
This model was pretrained on the Portuguese portion of [Multi Legal Pile](https://huggingface.co/datasets/joelito/Multi_Legal_Pile) ([Niklaus et al. 2023](https://arxiv.org/abs/2306.02069)).
#### Preprocessing
For further details see [Niklaus et al. 2023](https://arxiv.org/abs/2306.02069).
#### Training Hyperparameters
- Batch size: 512 samples
- Number of steps: 1M/500K for the base/large model
- Warm-up steps for the first 5% of the total training steps
- Learning rate: (linearly increasing up to) 1e-4
- Word masking: increased 20/30% masking rate for the base/large models, respectively
## Evaluation
For more detailed insights into the performance on downstream tasks, such as [LEXTREME](https://huggingface.co/datasets/joelito/lextreme) ([Niklaus et al. 2023](https://arxiv.org/abs/2301.13126)) or [LEXGLUE](https://huggingface.co/datasets/lex_glue) ([Chalkidis et al. 2021](https://arxiv.org/abs/2110.00976)), we refer to the results presented in Niklaus et al. (2023) [1](https://arxiv.org/abs/2306.02069), [2](https://arxiv.org/abs/2306.09237).
For further insights into the evaluation, we refer to the [trainer state](https://huggingface.co/joelito/legal-xlm-roberta-large/blob/main/last-checkpoint/trainer_state.json). Additional information is available in the [tensorboard](https://huggingface.co/joelito/legal-xlm-roberta-large/tensorboard).
### Model Architecture and Objective
It is a RoBERTa-based model. Run the following code to view the architecture:
```
from transformers import AutoModel
model = AutoModel.from_pretrained('joelito/legal-portuguese-roberta-base')
print(model)
RobertaModel(
(embeddings): RobertaEmbeddings(
(word_embeddings): Embedding(32000, 768, padding_idx=0)
(position_embeddings): Embedding(514, 768, padding_idx=0)
(token_type_embeddings): Embedding(1, 768)
(LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(encoder): RobertaEncoder(
(layer): ModuleList(
(0-11): 12 x RobertaLayer(
(attention): RobertaAttention(
(self): RobertaSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): RobertaSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): RobertaIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
(intermediate_act_fn): GELUActivation()
)
(output): RobertaOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
)
(pooler): RobertaPooler(
(dense): Linear(in_features=768, out_features=768, bias=True)
(activation): Tanh()
)
)
```
### Compute Infrastructure
Google TPU.
#### Hardware
Google TPU v3-8
#### Software
PyTorch, Transformers.
## Citation
```
@article{Niklaus2023MultiLegalPileA6,
title={MultiLegalPile: A 689GB Multilingual Legal Corpus},
author={Joel Niklaus and Veton Matoshi and Matthias Sturmer and Ilias Chalkidis and Daniel E. Ho},
journal={ArXiv},
year={2023},
volume={abs/2306.02069}
}
```
## Model Card Authors
Joel Niklaus: [huggingface](https://huggingface.co/joelito); [email](mailto:joel.niklaus.2@bfh.ch)
Veton Matoshi: [huggingface](https://huggingface.co/kapllan); [email](mailto:msv3@bfh.ch)
## Model Card Contact
Joel Niklaus: [huggingface](https://huggingface.co/joelito); [email](mailto:joel.niklaus.2@bfh.ch)
Veton Matoshi: [huggingface](https://huggingface.co/kapllan); [email](mailto:msv3@bfh.ch)
|
joelniklaus/legal-xlm-longformer-base
|
joelniklaus
| 2023-08-06T22:53:55Z | 14 | 3 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"longformer",
"fill-mask",
"multilingual",
"bg",
"cs",
"da",
"de",
"el",
"en",
"es",
"et",
"fi",
"fr",
"ga",
"hr",
"hu",
"it",
"lt",
"lv",
"mt",
"nl",
"pl",
"pt",
"ro",
"sk",
"sl",
"sv",
"dataset:MultiLegalPile",
"dataset:LEXTREME",
"dataset:LEXGLUE",
"arxiv:2306.02069",
"arxiv:2301.13126",
"arxiv:2110.00976",
"arxiv:2306.09237",
"license:cc",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-05-10T08:04:00Z |
---
license: cc
language:
- multilingual
- bg
- cs
- da
- de
- el
- en
- es
- et
- fi
- fr
- ga
- hr
- hu
- it
- lt
- lv
- mt
- nl
- pl
- pt
- ro
- sk
- sl
- sv
tags:
- multilingual
datasets:
- MultiLegalPile
- LEXTREME
- LEXGLUE
---
# Model Card for joelito/legal-xlm-longformer-base
This model is a multilingual model pretrained on legal data. It is based on XLM-R ([base](https://huggingface.co/xlm-roberta-base) and [large](https://huggingface.co/xlm-roberta-large)). For pretraining we used [Multi Legal Pile](https://huggingface.co/datasets/joelito/Multi_Legal_Pile) ([Niklaus et al. 2023](https://arxiv.org/abs/2306.02069)), a multilingual dataset from various legal sources covering 24 languages.
## Model Details
### Model Description
- **Developed by:** Joel Niklaus: [huggingface](https://huggingface.co/joelito); [email](mailto:joel.niklaus.2@bfh.ch)
- **Model type:** Transformer-based language model (Longformer)
- **Language(s) (NLP):** bg, cs, da, de, el, en, es, et, fi, fr, ga, hr, hu, it, lt, lv, mt, nl, pl, pt, ro, sk, sl, sv
- **License:** CC BY-SA
## Uses
### Direct Use and Downstream Use
You can utilize the raw model for masked language modeling since we did not perform next sentence prediction. However, its main purpose is to be fine-tuned for downstream tasks.
It's important to note that this model is primarily designed for fine-tuning on tasks that rely on the entire sentence, potentially with masked elements, to make decisions. Examples of such tasks include sequence classification, token classification, or question answering. For text generation tasks, models like GPT-2 are more suitable.
Additionally, the model is specifically trained on legal data, aiming to deliver strong performance in that domain. Its performance may vary when applied to non-legal data.
### Out-of-Scope Use
For tasks such as text generation you should look at models like GPT-2.
The model should not be used to intentionally create hostile or alienating environments for people. The model was not trained to be factual or true representations of people or events, and therefore using the models to generate such content is out-of-scope for the abilities of this model.
## Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
## How to Get Started with the Model
See [huggingface tutorials](https://huggingface.co/learn/nlp-course/chapter7/1?fw=pt). For masked word prediction see [this tutorial](https://huggingface.co/tasks/fill-mask).
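As a quick start, here is a minimal sketch for encoding a long document; the input text is an illustrative placeholder:
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("joelito/legal-xlm-longformer-base")
model = AutoModel.from_pretrained("joelito/legal-xlm-longformer-base")

# The Longformer variant supports sequences up to 4096 tokens,
# which is useful for long legal documents.
inputs = tokenizer("Replace me with a long legal document.",
                   return_tensors="pt", truncation=True, max_length=4096)
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, seq_len, hidden_size)
```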
## Training Details
This model was pretrained on [Multi Legal Pile](https://huggingface.co/datasets/joelito/Multi_Legal_Pile) ([Niklaus et al. 2023](https://arxiv.org/abs/2306.02069)).
Our pretraining procedure includes the following key steps:
(a) Warm-starting: We initialize our models from the original XLM-R checkpoints ([base](https://huggingface.co/xlm-roberta-base) and [large](https://huggingface.co/xlm-roberta-large)) of [Conneau et al. (2019)](https://proceedings.neurips.cc/paper/2019/file/c04c19c2c2474dbf5f7ac4372c5b9af1-Paper.pdf) to benefit from a well-trained base.
(b) Tokenization: We train a new tokenizer of 128K BPEs to cover legal language better. However, we reuse the original XLM-R embeddings for lexically overlapping tokens and use random embeddings for the rest.
(c) Pretraining: We continue pretraining on Multi Legal Pile with batches of 512 samples for an additional 1M/500K steps for the base/large model. We use warm-up steps, a linearly increasing learning rate, and cosine decay scheduling. During the warm-up phase, only the embeddings are updated, and a higher masking rate and percentage of predictions based on masked tokens are used compared to [Devlin et al. (2019)](https://aclanthology.org/N19-1423).
(d) Sentence Sampling: We employ a sentence sampler with exponential smoothing to handle disparate token proportions across cantons and languages, preserving per-canton and language capacity.
(e) Mixed Cased Models: Our models cover both upper- and lowercase letters, similar to recently developed large PLMs.
(f) Long Context Training: To account for long contexts in legal documents, we train the base-size multilingual model on long contexts with windowed attention. This variant, named Legal-XLM-LF-base, uses a 15% masking probability, increased learning rate, and similar settings to small-context models.
### Training Data
This model was pretrained on [Multi Legal Pile](https://huggingface.co/datasets/joelito/Multi_Legal_Pile) ([Niklaus et al. 2023](https://arxiv.org/abs/2306.02069)).
#### Preprocessing
For further details see [Niklaus et al. 2023](https://arxiv.org/abs/2306.02069).
#### Training Hyperparameters
- Batch size: 512 samples
- Number of steps: 1M/500K for the base/large model
- Warm-up steps for the first 5% of the total training steps
- Learning rate: (linearly increasing up to) 1e-4
- Word masking: increased 20/30% masking rate for the base/large models, respectively
## Evaluation
For performance on downstream tasks, such as [LEXTREME](https://huggingface.co/datasets/joelito/lextreme) ([Niklaus et al. 2023](https://arxiv.org/abs/2301.13126)) or [LEXGLUE](https://huggingface.co/datasets/lex_glue) ([Chalkidis et al. 2021](https://arxiv.org/abs/2110.00976)), we refer to the results presented in Niklaus et al. (2023) [1](https://arxiv.org/abs/2306.02069), [2](https://arxiv.org/abs/2306.09237).
### Model Architecture and Objective
It is a Longformer model warm-started from an XLM-R (RoBERTa-style) checkpoint. Run the following code to view the architecture:
```
from transformers import AutoModel
model = AutoModel.from_pretrained('joelito/legal-xlm-longformer-base')
print(model)
LongformerModel(
(embeddings): LongformerEmbeddings(
(word_embeddings): Embedding(128000, 768, padding_idx=0)
(position_embeddings): Embedding(4098, 768, padding_idx=0)
(token_type_embeddings): Embedding(1, 768)
(LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(encoder): LongformerEncoder(
(layer): ModuleList(
(0-11): 12 x LongformerLayer(
(attention): LongformerAttention(
(self): LongformerSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(query_global): Linear(in_features=768, out_features=768, bias=True)
(key_global): Linear(in_features=768, out_features=768, bias=True)
(value_global): Linear(in_features=768, out_features=768, bias=True)
)
(output): LongformerSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): LongformerIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
(intermediate_act_fn): GELUActivation()
)
(output): LongformerOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
)
(pooler): LongformerPooler(
(dense): Linear(in_features=768, out_features=768, bias=True)
(activation): Tanh()
)
)
```
### Compute Infrastructure
Google TPU.
#### Hardware
Google TPU v3-8
#### Software
PyTorch, Transformers.
## Citation
```
@article{Niklaus2023MultiLegalPileA6,
title={MultiLegalPile: A 689GB Multilingual Legal Corpus},
author={Joel Niklaus and Veton Matoshi and Matthias Sturmer and Ilias Chalkidis and Daniel E. Ho},
journal={ArXiv},
year={2023},
volume={abs/2306.02069}
}
```
## Model Card Authors
Joel Niklaus: [huggingface](https://huggingface.co/joelito); [email](mailto:joel.niklaus.2@bfh.ch)
Veton Matoshi: [huggingface](https://huggingface.co/kapllan); [email](mailto:msv3@bfh.ch)
## Model Card Contact
Joel Niklaus: [huggingface](https://huggingface.co/joelito); [email](mailto:joel.niklaus.2@bfh.ch)
Veton Matoshi: [huggingface](https://huggingface.co/kapllan); [email](mailto:msv3@bfh.ch)
|
smd142/model
|
smd142
| 2023-08-06T22:53:17Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-06T06:31:01Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (a reproduction sketch follows the list):
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
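For reference, a sketch of how this config maps onto `transformers`' `BitsAndBytesConfig`; the base model name is a placeholder assumption, since the card does not state it:
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the quantization settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # placeholder; the actual base model is not stated
    quantization_config=bnb_config,
)
```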
### Framework versions
- PEFT 0.5.0.dev0
|
DRAGOO/whisper_Fr_Ht
|
DRAGOO
| 2023-08-06T22:47:37Z | 75 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:qanastek/whisper-small-french-uncased",
"base_model:finetune:qanastek/whisper-small-french-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-06T18:11:00Z |
---
license: apache-2.0
base_model: qanastek/whisper-small-french-uncased
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper_Fr_Ht
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper_Fr_Ht
This model is a fine-tuned version of [qanastek/whisper-small-french-uncased](https://huggingface.co/qanastek/whisper-small-french-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8968
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `Seq2SeqTrainingArguments` sketch follows the list):
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
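A sketch of how these settings map onto `Seq2SeqTrainingArguments`; remaining arguments keep the library defaults, and `output_dir` is an assumption:
```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper_Fr_Ht",  # assumed output directory
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=4000,
)
```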
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 0.293 | 3.95 | 1000 | 0.6567 | 1.0 |
| 0.0541 | 7.91 | 2000 | 0.7640 | 1.0 |
| 0.0063 | 11.86 | 3000 | 0.8664 | 1.0 |
| 0.0016 | 15.81 | 4000 | 0.8968 | 1.0 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.3
- Tokenizers 0.13.3
|
HydraLM/Nous-Hermes-llama-2-7b_7b_cluster029_partitioned_v3_standardized_029
|
HydraLM
| 2023-08-06T22:45:01Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T05:54:20Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
HydraLM/Nous-Hermes-llama-2-7b_7b_cluster028_partitioned_v3_standardized_028
|
HydraLM
| 2023-08-06T22:38:15Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T05:54:26Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
CyberHarem/dusevnyj_neuralcloud
|
CyberHarem
| 2023-08-06T22:18:21Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/dusevnyj_neuralcloud",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-06T22:15:00Z |
---
license: mit
datasets:
- CyberHarem/dusevnyj_neuralcloud
pipeline_tag: text-to-image
tags:
- art
---
# Lora of dusevnyj_neuralcloud
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 1500, you need to download `1500/dusevnyj_neuralcloud.pt` as the embedding and `1500/dusevnyj_neuralcloud.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The trigger word is `dusevnyj_neuralcloud`.**
These are available steps:
| Steps | bikini | free | nude | Download |
|--------:|:-----------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------|
| 1500 |  |  | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/dusevnyj_neuralcloud.zip) |
| 1400 |  |  | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/dusevnyj_neuralcloud.zip) |
| 1300 |  |  | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/dusevnyj_neuralcloud.zip) |
| 1200 |  |  | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/dusevnyj_neuralcloud.zip) |
| 1100 |  |  | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/dusevnyj_neuralcloud.zip) |
| 1000 |  |  | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/dusevnyj_neuralcloud.zip) |
| 900 |  |  | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/dusevnyj_neuralcloud.zip) |
| 800 |  |  | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/dusevnyj_neuralcloud.zip) |
| 700 |  |  | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/dusevnyj_neuralcloud.zip) |
| 600 |  |  | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/dusevnyj_neuralcloud.zip) |
| 500 |  |  | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/dusevnyj_neuralcloud.zip) |
| 400 |  |  | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/dusevnyj_neuralcloud.zip) |
| 300 |  |  | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/dusevnyj_neuralcloud.zip) |
| 200 |  |  | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/dusevnyj_neuralcloud.zip) |
| 100 |  |  | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/dusevnyj_neuralcloud.zip) |
|
HydraLM/Nous-Hermes-llama-2-7b_7b_cluster023_partitioned_v3_standardized_023
|
HydraLM
| 2023-08-06T22:13:41Z | 4 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T17:52:50Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
iproskurina/zlata-tinystories
|
iproskurina
| 2023-08-06T22:09:16Z | 144 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"en",
"dataset:roneneldan/TinyStories",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-03T16:48:59Z |
---
license: apache-2.0
metrics:
- perplexity
model-index:
- name: zlata-tinystories
results: []
datasets:
- roneneldan/TinyStories
language:
- en
widget:
- text: Once upon a time, there was a little bunny named Fluffy. Fluffy loved to play in the garden and eat carrots.
- text: Nina wanted a new bike. Her parents said they would give
- text: Kitty was walking home from school when she came across something strange. She saw a
- text: John was out in the backyard playing. He saw a funny looking insect and
- text: Once upon a time,
library_name: transformers
---
**Small-GPT-2**
A small version of GPT-2 pre-trained on the TinyStories dataset.
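A minimal generation sketch; the sampling settings are illustrative assumptions:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="iproskurina/zlata-tinystories")

# Continue one of the widget prompts; sampling settings are illustrative.
result = generator("Once upon a time,", max_new_tokens=50, do_sample=True, top_p=0.95)
print(result[0]["generated_text"])
```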
|
HydraLM/Nous-Hermes-llama-2-7b_7b_cluster022_partitioned_v3_standardized_022
|
HydraLM
| 2023-08-06T22:08:50Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T17:53:03Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
HydraLM/Nous-Hermes-llama-2-7b_7b_cluster020_partitioned_v3_standardized_020
|
HydraLM
| 2023-08-06T21:55:57Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T06:04:59Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
HydraLM/Nous-Hermes-llama-2-7b_7b_cluster018_partitioned_v3_standardized_018
|
HydraLM
| 2023-08-06T21:44:59Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T17:53:01Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
HydraLM/Nous-Hermes-llama-2-7b_7b_cluster017_partitioned_v3_standardized_017
|
HydraLM
| 2023-08-06T21:42:42Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T17:52:31Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
spicecloud/bert-yelp-local
|
spicecloud
| 2023-08-06T21:40:56Z | 126 | 0 |
transformers
|
[
"transformers",
"pytorch",
"coreml",
"safetensors",
"bert",
"fill-mask",
"exbert",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-06T21:40:25Z |
---
language: en
tags:
- exbert
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# BERT base model (uncased)
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labeling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally masks the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
## Model variations
BERT was originally released in base and large variations, for cased and uncased input text. The uncased models also strip out accent markers.
Chinese and multilingual uncased and cased versions followed shortly after.
Modified preprocessing with whole word masking has replaced subpiece masking in a following work, with the release of two models.
Another 24 smaller models were released afterward.
The detailed release history can be found on the [google-research/bert readme](https://github.com/google-research/bert/blob/master/README.md) on github.
| Model | #params | Language |
|------------------------|--------------------------------|-------|
| [`bert-base-uncased`](https://huggingface.co/bert-base-uncased) | 110M | English |
| [`bert-large-uncased`](https://huggingface.co/bert-large-uncased) | 340M | English |
| [`bert-base-cased`](https://huggingface.co/bert-base-cased) | 110M | English |
| [`bert-large-cased`](https://huggingface.co/bert-large-cased) | 340M | English |
| [`bert-base-chinese`](https://huggingface.co/bert-base-chinese) | 110M | Chinese |
| [`bert-base-multilingual-cased`](https://huggingface.co/bert-base-multilingual-cased) | 110M | Multiple |
| [`bert-large-uncased-whole-word-masking`](https://huggingface.co/bert-large-uncased-whole-word-masking) | 340M | English |
| [`bert-large-cased-whole-word-masking`](https://huggingface.co/bert-large-cased-whole-word-masking) | 340M | English |
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for
fine-tuned versions of a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT-2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-uncased')
>>> unmasker("Hello I'm a [MASK] model.")
[{'sequence': "[CLS] hello i'm a fashion model. [SEP]",
'score': 0.1073106899857521,
'token': 4827,
'token_str': 'fashion'},
{'sequence': "[CLS] hello i'm a role model. [SEP]",
'score': 0.08774490654468536,
'token': 2535,
'token_str': 'role'},
{'sequence': "[CLS] hello i'm a new model. [SEP]",
'score': 0.05338378623127937,
'token': 2047,
'token_str': 'new'},
{'sequence': "[CLS] hello i'm a super model. [SEP]",
'score': 0.04667217284440994,
'token': 3565,
'token_str': 'super'},
{'sequence': "[CLS] hello i'm a fine model. [SEP]",
'score': 0.027095865458250046,
'token': 2986,
'token_str': 'fine'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained("bert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = TFBertModel.from_pretrained("bert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-uncased')
>>> unmasker("The man worked as a [MASK].")
[{'sequence': '[CLS] the man worked as a carpenter. [SEP]',
'score': 0.09747550636529922,
'token': 10533,
'token_str': 'carpenter'},
{'sequence': '[CLS] the man worked as a waiter. [SEP]',
'score': 0.0523831807076931,
'token': 15610,
'token_str': 'waiter'},
{'sequence': '[CLS] the man worked as a barber. [SEP]',
'score': 0.04962705448269844,
'token': 13362,
'token_str': 'barber'},
{'sequence': '[CLS] the man worked as a mechanic. [SEP]',
'score': 0.03788609802722931,
'token': 15893,
'token_str': 'mechanic'},
{'sequence': '[CLS] the man worked as a salesman. [SEP]',
'score': 0.037680890411138535,
'token': 18968,
'token_str': 'salesman'}]
>>> unmasker("The woman worked as a [MASK].")
[{'sequence': '[CLS] the woman worked as a nurse. [SEP]',
'score': 0.21981462836265564,
'token': 6821,
'token_str': 'nurse'},
{'sequence': '[CLS] the woman worked as a waitress. [SEP]',
'score': 0.1597415804862976,
'token': 13877,
'token_str': 'waitress'},
{'sequence': '[CLS] the woman worked as a maid. [SEP]',
'score': 0.1154729500412941,
'token': 10850,
'token_str': 'maid'},
{'sequence': '[CLS] the woman worked as a prostitute. [SEP]',
'score': 0.037968918681144714,
'token': 19215,
'token_str': 'prostitute'},
{'sequence': '[CLS] the woman worked as a cook. [SEP]',
'score': 0.03042375110089779,
'token': 5660,
'token_str': 'cook'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus, and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a collator sketch follows the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
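In `transformers`, this 15% / 80-10-10 scheme is implemented by the MLM data collator; a minimal sketch:
```python
from transformers import BertTokenizer, DataCollatorForLanguageModeling

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# mlm_probability=0.15 masks 15% of tokens; of those, 80% become [MASK],
# 10% a random token, and 10% are left unchanged.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True,
                                           mlm_probability=0.15)

batch = collator([tokenizer("Hello, how are you?")])
print(batch["input_ids"], batch["labels"])
```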
### Pretraining
The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
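The optimizer and schedule described above can be sketched with standard `transformers` utilities (a sketch of the described settings, not the original TensorFlow setup; `AdamW` stands in for Adam with weight decay):
```python
import torch
from transformers import BertForMaskedLM, get_linear_schedule_with_warmup

model = BertForMaskedLM.from_pretrained("bert-base-uncased")
# Adam with weight decay, 10k warmup steps, then linear decay over 1M steps.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4,
                              betas=(0.9, 0.999), weight_decay=0.01)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10_000, num_training_steps=1_000_000)
```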
## Evaluation results
When fine-tuned on downstream tasks, this model achieves the following results:
Glue test results:
| Task | MNLI-(m/mm) | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE | Average |
|:----:|:-----------:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|:-------:|
| | 84.6/83.4 | 71.2 | 90.5 | 93.5 | 52.1 | 85.8 | 88.9 | 66.4 | 79.6 |
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1810-04805,
author = {Jacob Devlin and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
Understanding},
journal = {CoRR},
volume = {abs/1810.04805},
year = {2018},
url = {http://arxiv.org/abs/1810.04805},
archivePrefix = {arXiv},
eprint = {1810.04805},
timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=bert-base-uncased">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
parthsuresh/LunarLander-tutorial
|
parthsuresh
| 2023-08-06T21:37:24Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-06T21:37:02Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 239.86 +/- 54.24
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch; the checkpoint filename below is an assumption, so check the repo's files for the actual name:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it.
checkpoint = load_from_hub(
    repo_id="parthsuresh/LunarLander-tutorial",
    filename="ppo-LunarLander-v2.zip",  # hypothetical filename
)
model = PPO.load(checkpoint)
```
|
HydraLM/Nous-Hermes-llama-2-7b_7b_cluster015_partitioned_v3_standardized_015
|
HydraLM
| 2023-08-06T21:33:42Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T17:52:31Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
Xillolxlbln/my_awesome_qa_model
|
Xillolxlbln
| 2023-08-06T21:33:09Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-08-04T21:00:33Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0252
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 125 | 3.0587 |
| No log | 2.0 | 250 | 2.1943 |
| No log | 3.0 | 375 | 2.0252 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
nrakocz/distilhubert-finetuned-gtzan
|
nrakocz
| 2023-08-06T21:30:23Z | 158 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-08-06T19:46:04Z |
---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.84
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5565
- Accuracy: 0.84
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.9919 | 1.0 | 113 | 1.8205 | 0.48 |
| 1.3634 | 2.0 | 226 | 1.1723 | 0.68 |
| 0.9779 | 3.0 | 339 | 0.8990 | 0.77 |
| 0.8092 | 4.0 | 452 | 0.8420 | 0.74 |
| 0.7011 | 5.0 | 565 | 0.7290 | 0.79 |
| 0.3831 | 6.0 | 678 | 0.7509 | 0.77 |
| 0.3852 | 7.0 | 791 | 0.6150 | 0.84 |
| 0.1792 | 8.0 | 904 | 0.5968 | 0.82 |
| 0.2193 | 9.0 | 1017 | 0.6058 | 0.82 |
| 0.1887 | 10.0 | 1130 | 0.5565 | 0.84 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
HydraLM/Nous-Hermes-llama-2-7b_7b_cluster014_partitioned_v3_standardized_014
|
HydraLM
| 2023-08-06T21:28:09Z | 4 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T05:52:53Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
HydraLM/Nous-Hermes-llama-2-7b_7b_cluster013_partitioned_v3_standardized_013
|
HydraLM
| 2023-08-06T21:16:21Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T17:52:34Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
muhtasham/bert-tiny-finetuned-glue-rte
|
muhtasham
| 2023-08-06T21:06:42Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-01T23:42:21Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert-tiny-finetuned-glue-rte
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: rte
split: train
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.631768953068592
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-tiny-finetuned-glue-rte
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6673
- Accuracy: 0.6318
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.4294744851376705e-05
- train_batch_size: 64
- eval_batch_size: 16
- seed: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 156 | 0.6852 | 0.5776 |
| No log | 2.0 | 312 | 0.6800 | 0.5993 |
| No log | 3.0 | 468 | 0.6737 | 0.6173 |
| 0.6845 | 4.0 | 624 | 0.6690 | 0.6101 |
| 0.6845 | 5.0 | 780 | 0.6673 | 0.6318 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
simonycl/roberta-large-sst-2-32-13-smoothed
|
simonycl
| 2023-08-06T21:04:21Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-large",
"base_model:finetune:FacebookAI/roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-06T20:55:53Z |
---
license: mit
base_model: roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-large-sst-2-32-13-smoothed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-sst-2-32-13-smoothed
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5917
- Accuracy: 0.8906
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 75
- label_smoothing_factor: 0.45
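A sketch of these settings as `TrainingArguments`; remaining arguments keep the library defaults, and `output_dir` is an assumption:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="roberta-large-sst-2-32-13-smoothed",  # assumed output directory
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=50,
    num_train_epochs=75,
    label_smoothing_factor=0.45,  # smooths one-hot targets during fine-tuning
)
```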
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 2 | 0.7430 | 0.5 |
| No log | 2.0 | 4 | 0.7414 | 0.5 |
| No log | 3.0 | 6 | 0.7386 | 0.5 |
| No log | 4.0 | 8 | 0.7348 | 0.5 |
| 0.7439 | 5.0 | 10 | 0.7302 | 0.5 |
| 0.7439 | 6.0 | 12 | 0.7248 | 0.5 |
| 0.7439 | 7.0 | 14 | 0.7195 | 0.5 |
| 0.7439 | 8.0 | 16 | 0.7143 | 0.5 |
| 0.7439 | 9.0 | 18 | 0.7082 | 0.5 |
| 0.7171 | 10.0 | 20 | 0.7022 | 0.5 |
| 0.7171 | 11.0 | 22 | 0.6977 | 0.5 |
| 0.7171 | 12.0 | 24 | 0.6954 | 0.5312 |
| 0.7171 | 13.0 | 26 | 0.6936 | 0.5156 |
| 0.7171 | 14.0 | 28 | 0.6926 | 0.5156 |
| 0.7024 | 15.0 | 30 | 0.6922 | 0.5312 |
| 0.7024 | 16.0 | 32 | 0.6921 | 0.5469 |
| 0.7024 | 17.0 | 34 | 0.6927 | 0.5312 |
| 0.7024 | 18.0 | 36 | 0.6938 | 0.5312 |
| 0.7024 | 19.0 | 38 | 0.6958 | 0.5156 |
| 0.6826 | 20.0 | 40 | 0.6982 | 0.5156 |
| 0.6826 | 21.0 | 42 | 0.7138 | 0.5 |
| 0.6826 | 22.0 | 44 | 0.7064 | 0.5312 |
| 0.6826 | 23.0 | 46 | 0.6992 | 0.5625 |
| 0.6826 | 24.0 | 48 | 0.6926 | 0.5625 |
| 0.6474 | 25.0 | 50 | 0.6836 | 0.5781 |
| 0.6474 | 26.0 | 52 | 0.6617 | 0.7344 |
| 0.6474 | 27.0 | 54 | 0.6450 | 0.7656 |
| 0.6474 | 28.0 | 56 | 0.6392 | 0.7812 |
| 0.6474 | 29.0 | 58 | 0.6513 | 0.7344 |
| 0.5878 | 30.0 | 60 | 0.6481 | 0.7812 |
| 0.5878 | 31.0 | 62 | 0.6583 | 0.7969 |
| 0.5878 | 32.0 | 64 | 0.6649 | 0.7812 |
| 0.5878 | 33.0 | 66 | 0.6280 | 0.8125 |
| 0.5878 | 34.0 | 68 | 0.6212 | 0.8594 |
| 0.5602 | 35.0 | 70 | 0.6214 | 0.8281 |
| 0.5602 | 36.0 | 72 | 0.6534 | 0.75 |
| 0.5602 | 37.0 | 74 | 0.6334 | 0.8594 |
| 0.5602 | 38.0 | 76 | 0.6060 | 0.875 |
| 0.5602 | 39.0 | 78 | 0.6048 | 0.875 |
| 0.55 | 40.0 | 80 | 0.6064 | 0.8594 |
| 0.55 | 41.0 | 82 | 0.6095 | 0.8438 |
| 0.55 | 42.0 | 84 | 0.6161 | 0.8438 |
| 0.55 | 43.0 | 86 | 0.6068 | 0.8594 |
| 0.55 | 44.0 | 88 | 0.5929 | 0.875 |
| 0.5425 | 45.0 | 90 | 0.5918 | 0.8906 |
| 0.5425 | 46.0 | 92 | 0.5919 | 0.8906 |
| 0.5425 | 47.0 | 94 | 0.5921 | 0.875 |
| 0.5425 | 48.0 | 96 | 0.5925 | 0.875 |
| 0.5425 | 49.0 | 98 | 0.5970 | 0.8906 |
| 0.5415 | 50.0 | 100 | 0.6128 | 0.8438 |
| 0.5415 | 51.0 | 102 | 0.6187 | 0.8438 |
| 0.5415 | 52.0 | 104 | 0.6012 | 0.8906 |
| 0.5415 | 53.0 | 106 | 0.5981 | 0.8906 |
| 0.5415 | 54.0 | 108 | 0.6085 | 0.8125 |
| 0.5434 | 55.0 | 110 | 0.6028 | 0.8438 |
| 0.5434 | 56.0 | 112 | 0.5970 | 0.8594 |
| 0.5434 | 57.0 | 114 | 0.6013 | 0.8906 |
| 0.5434 | 58.0 | 116 | 0.6023 | 0.8906 |
| 0.5434 | 59.0 | 118 | 0.6002 | 0.8906 |
| 0.5397 | 60.0 | 120 | 0.5964 | 0.8906 |
| 0.5397 | 61.0 | 122 | 0.5940 | 0.8906 |
| 0.5397 | 62.0 | 124 | 0.5934 | 0.8906 |
| 0.5397 | 63.0 | 126 | 0.5936 | 0.8906 |
| 0.5397 | 64.0 | 128 | 0.5936 | 0.8906 |
| 0.5403 | 65.0 | 130 | 0.5939 | 0.8906 |
| 0.5403 | 66.0 | 132 | 0.5939 | 0.8906 |
| 0.5403 | 67.0 | 134 | 0.5933 | 0.8906 |
| 0.5403 | 68.0 | 136 | 0.5933 | 0.8906 |
| 0.5403 | 69.0 | 138 | 0.5934 | 0.8906 |
| 0.5394 | 70.0 | 140 | 0.5931 | 0.8906 |
| 0.5394 | 71.0 | 142 | 0.5926 | 0.8906 |
| 0.5394 | 72.0 | 144 | 0.5921 | 0.8906 |
| 0.5394 | 73.0 | 146 | 0.5919 | 0.8906 |
| 0.5394 | 74.0 | 148 | 0.5918 | 0.8906 |
| 0.5394 | 75.0 | 150 | 0.5917 | 0.8906 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.4.0
- Tokenizers 0.13.3
|
HydraLM/Nous-Hermes-llama-2-7b_7b_cluster010_partitioned_v3_standardized_010
|
HydraLM
| 2023-08-06T21:01:19Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T05:53:11Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
LarryAIDraw/Patchi_V1
|
LarryAIDraw
| 2023-08-06T20:59:25Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-06T20:52:00Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/123345/skv-patchouli-knowledge-touhou-lora
|
LarryAIDraw/eden
|
LarryAIDraw
| 2023-08-06T20:58:56Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-06T20:51:36Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/123264/eden-honkai-impact-3rd-or-3-or-3rd
|
LarryAIDraw/ryuu_v1
|
LarryAIDraw
| 2023-08-06T20:58:33Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-06T20:50:48Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/123575/ryuu-lion-or-danmachi-lora
|
LarryAIDraw/mudrock-03
|
LarryAIDraw
| 2023-08-06T20:57:53Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-06T20:49:45Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/123709/mudrock-or-arknights-or-lora
|
LarryAIDraw/HorikitaLora-12
|
LarryAIDraw
| 2023-08-06T20:57:37Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-06T20:49:21Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/123805/suzune-horikita-classroom-of-the-elite-lora
|
estelle1emerson/whisper-small-pt
|
estelle1emerson
| 2023-08-06T20:51:58Z | 76 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"pt",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-02T00:14:43Z |
---
language:
- pt
license: apache-2.0
base_model: openai/whisper-small
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small Pt POC
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: pt
split: test[:10%]
args: 'config: pt, split: test'
metrics:
- name: Wer
type: wer
value: 69.33979189092214
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Pt POC
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4973
- Wer: 69.3398
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0035 | 8.77 | 1000 | 0.4042 | 70.8647 |
| 0.0004 | 17.54 | 2000 | 0.4718 | 71.8873 |
| 0.0002 | 26.32 | 3000 | 0.4895 | 70.3265 |
| 0.0002 | 35.09 | 4000 | 0.4973 | 69.3398 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.2
- Tokenizers 0.13.3
|
HydraLM/Nous-Hermes-llama-2-7b_7b_cluster08_partitioned_v3_standardized_08
|
HydraLM
| 2023-08-06T20:47:17Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T05:53:18Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
li-ping/summary_llama_3_epoch_ver2_fix_wavedrom
|
li-ping
| 2023-08-06T20:38:39Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-06T20:07:37Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
HydraLM/Nous-Hermes-llama-2-7b_7b_cluster06_partitioned_v3_standardized_06
|
HydraLM
| 2023-08-06T20:36:07Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T17:51:52Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
quantumaikr/llama-2-70b-fb16-guanaco-1k
|
quantumaikr
| 2023-08-06T20:35:45Z | 1,513 | 2 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-06T19:54:02Z |
---
license: cc-by-nc-4.0
language:
- en
pipeline_tag: text-generation
---
# quantumaikr/llama-2-70b-fb16-guanaco-1k
## Model Description
`quantumaikr/llama-2-70b-fb16-guanaco-1k` is a Llama2 70B model finetuned on the Guanaco dataset (mlabonne/guanaco-llama2-1k).
## Usage
Start chatting with `quantumaikr/llama-2-70b-fb16-guanaco-1k` using the following code snippet:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("quantumaikr/llama-2-70b-fb16-guanaco-1k")
model = AutoModelForCausalLM.from_pretrained("quantumaikr/llama-2-70b-fb16-guanaco-1k", torch_dtype=torch.float16, device_map="auto")
system_prompt = "### System:\nYou are QuantumLM, an AI that follows instructions extremely well. Help as much as you can. Remember, be safe, and don't do anything illegal.\n\n"
message = "Write me a poem please"
prompt = f"{system_prompt}### User: {message}\n\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs, do_sample=True, top_p=0.95, top_k=0, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
QuantumLM should be used with this prompt format:
```
### System:
This is a system prompt, please behave and help the user.
### User:
Your prompt here
### Assistant
The output of QuantumLM
```
## Use and Limitations
### Intended Use
These models are intended for research only, in adherence with the [CC BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/) license.
### Limitations and bias
Although the aforementioned dataset helps to steer the base language models into "safer" distributions of text, not all biases and toxicity can be mitigated through fine-tuning. We ask that users be mindful of such potential issues that can arise in generated responses. Do not treat model outputs as substitutes for human judgment or as sources of truth. Please use it responsibly.
Contact us : hi@quantumai.kr
|
HydraLM/Nous-Hermes-llama-2-7b_7b_cluster05_partitioned_v3_standardized_05
|
HydraLM
| 2023-08-06T20:29:53Z | 9 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T05:53:14Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
MattStammers/ppo-lunarlandercontinuous
|
MattStammers
| 2023-08-06T20:27:37Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-06T19:47:07Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 279.83 +/- 22.33
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch (the checkpoint filename is a placeholder for the `.zip` stored in this repo):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

checkpoint = load_from_hub(repo_id="MattStammers/ppo-lunarlandercontinuous", filename="<checkpoint>.zip")  # placeholder filename
model = PPO.load(checkpoint)
```
|
CristoJV/q-FrozenLake-v1-4x4-noSlippery
|
CristoJV
| 2023-08-06T19:52:20Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-06T19:52:16Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # assumption: `gym` was never imported in the original snippet

# `load_from_hub` is the pickle-loading helper defined in the Deep RL course notebooks
model = load_from_hub(repo_id="CristoJV/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
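As a quick sanity check, a minimal sketch of rolling out the greedy policy, assuming the pickled dict stores the Q-table under a `"qtable"` key (as the Deep RL course helper does):
```python
import numpy as np

state, info = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```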
|
HydraLM/Nous-Hermes-llama-2-7b_7b_cluster03_partitioned_v3_standardized_03
|
HydraLM
| 2023-08-06T19:51:03Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T05:46:04Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
s3nh/ziya-llama-13b-medical-merged-GGML
|
s3nh
| 2023-08-06T19:31:36Z | 0 | 6 |
transformers
|
[
"transformers",
"text-generation",
"zh",
"en",
"license:openrail",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-06T15:48:44Z |
---
license: openrail
language:
- zh
- en
pipeline_tag: text-generation
library_name: transformers
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGML Format model files for [This project](https://huggingface.co/shibing624/ziya-llama-13b-medical-merged).
### inference
```python
from ctransformers import AutoModelForCausalLM

# output_dir and ggml_file are placeholders for the local directory and GGML filename
llm = AutoModelForCausalLM.from_pretrained(output_dir, ggml_file,
                                           gpu_layers=32, model_type="llama")
manual_input: str = "Tell me about your last dream, please."
llm(manual_input,
    max_new_tokens=256,
    temperature=0.9,
    top_p=0.7)
```
# Original model card
# Medical LLaMA-13B Model
A Chinese-English medical question-answering model based on LLaMA-13B.
`shibing624/ziya-llama-13b-medical-merged` evaluate test data:
The overall performance of the model on the QA **test** set:
|input_text|predict|
|:-- |:--- |
|一岁宝宝发烧能吃啥药?|孩子的身体是很脆弱的,如果不注意保护好自己就会出现各种疾病。有些家长在看到孩子生病后都非常着急,想要立刻给他们用药治疗,但其实并没有那么简单,因为每个人对于药物的敏感度和耐受性都是不同的,所以我们需要根据具体情况来选择合适的方法进行治疗才可以达到最佳效果。1、首先大多数小儿退热药中含有解热镇痛成分阿司匹林或布洛芬等,这类药品虽然副作用较少,但也存在过量使用引起胃肠道损伤甚至消化道溃疡出血的风险;2、还有部分小儿退热药中含有扑尔敏等抗组胺药,它们本身属于处方药,应该严格按照医嘱服用,切勿擅自加减剂量或者延长用药时间;3、此外,某些小儿退热药中含有维生素C及葡萄糖酸钙等营养补充剂,只有当患儿伴随脱水症状(如口渴、尿色深)时才考虑添加,否则容易造成电解质紊乱。|
The model performs very well on open Chinese test sets, inheriting two advantages: 1) the fine-tuning base is the Ziya-LLaMA-13B model, a strong Chinese-English foundation model; 2) fine-tuning used a high-quality dataset of 2.4 million Chinese-English medical instructions plus a variety of general-purpose instruction datasets. After fine-tuning, the model's ability to answer medical-domain questions reaches a leading level, and its ability on general questions is no weaker than LLaMA-13B's.
## Training details
training args:
```json
{"per_device_train_batch_size": 8, "per_device_eval_batch_size": 8, "per_gpu_train_batch_size": null, "per_gpu_eval_batch_size": null, "gradient_accumulation_steps": 1, "eval_accumulation_steps": null, "eval_delay": 0, "learning_rate": 2e-05, "weight_decay": 0.0, "adam_beta1": 0.9, "adam_beta2": 0.999, "adam_epsilon": 1e-08, "max_grad_norm": 1.0, "num_train_epochs": 10.0, "max_steps": -1, "lr_scheduler_type": "linear", "warmup_ratio": 0.0, "warmup_steps": 50, "log_level": "passive", "log_level_replica": "warning", "log_on_each_node": true, "logging_dir": "outputs-ziya-llama-13b-sft-med-v2/logs", "logging_strategy": "steps", "logging_first_step": false, "logging_steps": 50, "logging_nan_inf_filter": true, "save_strategy": "steps", "save_steps": 50, "save_total_limit": 3, "save_safetensors": false, "save_on_each_node": false, "no_cuda": false, "use_mps_device": false, "seed": 42, "data_seed": null, "jit_mode_eval": false, "use_ipex": false, "bf16": false, "fp16": true, "fp16_opt_level": "O1", "half_precision_backend": "cuda_amp", "bf16_full_eval": false, "fp16_full_eval": false, "tf32": null, "local_rank": 0, "xpu_backend": null, "tpu_num_cores": null, "tpu_metrics_debug": false, "debug": [], "dataloader_drop_last": false, "eval_steps": 50, "dataloader_num_workers": 0, "past_index": -1, "run_name": "outputs-ziya-llama-13b-sft-med-v2", "disable_tqdm": false, "remove_unused_columns": false, "label_names": null, "load_best_model_at_end": true, "metric_for_best_model": "loss", "greater_is_better": false, "ignore_data_skip": false, "sharded_ddp": [], "fsdp": [], "fsdp_min_num_params": 0, "fsdp_config": { "fsdp_min_num_params": 0, "xla": false, "xla_fsdp_grad_ckpt": false }, "fsdp_transformer_layer_cls_to_wrap": null, "deepspeed": null, "label_smoothing_factor": 0.0, "optim": "adamw_torch", "optim_args": null, "adafactor": false, "group_by_length": false, "length_column_name": "length", "report_to": [ "tensorboard" ], "ddp_find_unused_parameters": false, "ddp_bucket_cap_mb": null, "dataloader_pin_memory": true, "skip_memory_metrics": true, "use_legacy_prediction_loop": false, "push_to_hub": false, "resume_from_checkpoint": null, "hub_model_id": null, "hub_strategy": "every_save", "hub_token": "<hub_token>", "hub_private_repo": false, "gradient_checkpointing": false, "include_inputs_for_metrics": false, "fp16_backend": "auto", "push_to_hub_model_id": null, "push_to_hub_organization": null, "push_to_hub_token": "<push_to_hub_token>", "mp_parameters": "", "auto_find_batch_size": false, "full_determinism": false, "torchdynamo": null, "ray_scope": "last", "ddp_timeout": 1800, "torch_compile": false, "torch_compile_backend": null, "torch_compile_mode": null }
```
train loss:
<img src="https://huggingface.co/shibing624/ziya-llama-13b-medical-merged/resolve/main/trainloss.png" alt="trainloss">
evaluate loss:
<img src="https://huggingface.co/shibing624/ziya-llama-13b-medical-merged/resolve/main/evalloss.png" alt="trainloss">
## Usage
This project is open-sourced in the following GitHub repos:
- [shibing624/textgen](https://github.com/shibing624/textgen)
- [shibing624/MedicalGPT](https://github.com/shibing624/MedicalGPT)
Using the [textgen](https://github.com/shibing624/textgen) library, you can call the LLaMA model:
Install package:
```shell
pip install -U textgen
```
```python
from textgen import GptModel
def generate_prompt(instruction):
return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:{instruction}\n\n### Response: """
model = GptModel("llama", "shibing624/ziya-llama-13b-medical-merged")
predict_sentence = generate_prompt("一岁宝宝发烧能吃啥药?")
r = model.predict([predict_sentence])
print(r) # ["1、首先大多数小儿退热药中含有解热镇痛成分阿司匹林或布洛芬等,这类药品虽然副作用较少..."]
```
## Usage (HuggingFace Transformers)
Without [textgen](https://github.com/shibing624/textgen), you can use the model like this:
First, you pass your input through the transformer model, then you get the generated sentence.
Install package:
```
pip install transformers
```
```python
import torch  # needed for the device check below
from transformers import LlamaForCausalLM, LlamaTokenizer
model = LlamaForCausalLM.from_pretrained("shibing624/ziya-llama-13b-medical-merged", device_map='auto')
tokenizer = LlamaTokenizer.from_pretrained("shibing624/ziya-llama-13b-medical-merged")
device = "cuda" if torch.cuda.is_available() else "cpu"
def generate_prompt(instruction):
return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:{instruction}\n\n### Response: """
sents = ['一岁宝宝发烧能吃啥药', "who are you?"]
for s in sents:
q = generate_prompt(s)
inputs = tokenizer(q, return_tensors="pt")
inputs = inputs.to(device=device)
    generate_ids = model.generate(
**inputs,
max_new_tokens=120,
do_sample=True,
top_p=0.85,
temperature=1.0,
repetition_penalty=1.0,
eos_token_id=tokenizer.eos_token_id,
bos_token_id=tokenizer.bos_token_id,
pad_token_id=tokenizer.pad_token_id,
)
output = tokenizer.batch_decode(generate_ids, skip_special_tokens=True)[0]
print(output)
print()
```
output:
```shell
一岁宝宝发烧能吃啥药
孩子的身体是很脆弱的,如果不注意保护好自己就会出现各种疾病。有些家长在看到孩子生病后都非常着急,想要立刻给他们用药治疗,但其实并没有那么简单,因为每个人对于药物的敏感度和耐受性都是不同的,所以我们需要根据具体情况来选择合适的方法进行治疗才可以达到最佳效果。1、首先大多数小儿退热药中含有解热镇痛成分阿司匹林或布洛芬等,这类药品虽然副作用较少,但也存在过量使用引起胃肠道损伤甚至消化道溃疡出血的风险;2、还有部分小儿退热药中含有扑尔敏等抗组胺药,它们本身属于处方药,应该严格按照医嘱服用,切勿擅自加减剂量或者延长用药时间;3、此外,某些小儿退热药中含有维生素C及葡萄糖酸钙等营养补充剂,只有当患儿伴随脱水症状(如口渴、尿色深)时才考虑添加,否则容易造成电解质紊乱。
```
Model file layout:
```
ziya-llama-13b-medical-merged
|-- added_tokens.json
|-- config.json
|-- generation_config.json
|-- pytorch_model-00001-of-00003.bin
|-- pytorch_model-00002-of-00003.bin
|-- pytorch_model-00003-of-00003.bin
|-- pytorch_model.bin.index.json
|-- README.md
|-- special_tokens_map.json
|-- tokenizer_config.json
`-- tokenizer.model
```
### Inference Examples
| Human | Assistant |
|------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 小孩发烧怎么办 | 宝宝的抵抗力是比较差的,所以很容易就会出现感冒或者其他疾病。如果宝宝在平时没有注意保暖,那么也可能导致身体受凉而引起了感冒等症状,这样不仅对于健康造成影响,还可能让宝宝出现高热的情况。因此当家长们发现自己的宝宝出现了高热的问题之后一定要及早进行治疗和护理才好。<br/>1、首先我们应该给宝宝服用退烧药物来帮助降温,但是千万不要盲目使用退烧药物,否则只会加重宝宝的病情,最好带着宝宝到医院去检查清楚具体原因之后再针对性地采取措施处理。<br/>2、另外大人们也需要做好预防工作,尤其是天气变化的时候更应该多关心下宝宝的身体状态,避免宝宝受寒受凉从而诱发各种疾病,同时也要尽量减少宝宝接触细菌病毒的机会,增强宝宝的免疫力。<br/>3、除了上面两点之外,大人们还可以通过按摩推拿穴位的方式来缓解宝宝的病痛,促进血液循环,提升新陈代谢速度,达到快速消炎止疼的效果 |
| 一岁宝宝发烧能吃啥药? | 孩子的身体是很脆弱的,如果不注意保护好自己就会出现各种疾病。有些家长在看到孩子生病后都非常着急,想要立刻给他们用药治疗,但其实并没有那么简单,因为每个人对于药物的敏感度和耐受性都是不同的,所以我们需要根据具体情况来选择合适的方法进行治疗才可以达到最佳效果。1、首先大多数小儿退热药中含有解热镇痛成分阿司匹林或布洛芬等,这类药品虽然副作用较少,但也存在过量使用引起胃肠道损伤甚至消化道溃疡出血的风险;2、还有部分小儿退热药中含有扑尔敏等抗组胺药,它们本身属于处方药,应该严格按照医嘱服用,切勿擅自加减剂量或者延长用药时间;3、此外,某些小儿退热药中含有维生素C及葡萄糖酸钙等营养补充剂,只有当患儿伴随脱水症状(如口渴、尿色深)时才考虑添加,否则容易造成电解质紊乱。 |
| 肛门病变可能是什么疾病的症状 | 痔疮、直肠息肉等。 |
| 医生,您好白带增多泛黄怎么回事? | 你这种情况考虑可能会是阴道炎症引起的建议平时留意卫生清洁,勤换洗衣物内裤,必要时去正规公立医院妇科复诊做分泌物常规等相关检查明确病因后对症治疗,不同原因所致的疾病在临床表现上也存在差异性,如果患者出现了类似于的症状应当及早到专业、权威的医院进行详细的检验和救治 |
| 药流后有巧克力白带是没流干净吗 | 在日常生活中,我们要或多或少了解一些常见疾病的相关知识,不仅能起到增加知识水平的作用,更能让自己更好地预防此类疾病的产生。很多女性都会出现月经紊乱、腹痛等症状,这时候就需要考虑妇科炎症可能,比如说宫颈糜烂和盆腔积液。那么下面小编来给大家介绍一种情况:药物流产后白带呈巧克力色怎么回事?1、首先,对于药物流产之后发现阴道分泌物异常,应该及时去医院做检查确诊具体原因,然后再根据结果进行治疗调理。2、其次,由于每个人身体素质以及患病程度存在差别,所以采取的措施也不尽相同,但无论选择哪种方式处理,都必须严格按照规定剂量服用,并且注意观察副作用反映情况,若出现明显不适感觉,则应立即停止使用。3、最后,建议你保持外阴清洁卫生,勤换内裤,避免吃辛辣刺激食品,而且还要禁忌房事,否则容易引起交叉感染影响恢复健康。 |
| 维胺酯维E乳膏能治理什么疾病 | 皮肤淀粉样变性;脂溢性角化症 |
| 痤疮是什么原因导致的? | 痤疮,又称青春痘、粉刺。它主要发生在面部和胸背部位,表现为黑头粉刺、白头粉刺、丘疹、脓疱、结节等多种形式。其中以黑头粉刺最常见,约占全身皮肤病患者的70%左右。那么,痤疮是什么原因引起的呢?下面就给大家介绍一些相关内容。1、雄性激素分泌过旺:男女都会长痘痘,但是女孩子比较喜欢用化妆品来掩盖脸上的问题,而且很多人不知道这样做反而加重了自己的毛囊炎症状,所以说平时应该少使用含有雌激素类的产品。2、饮食习惯差:经常吃辛辣油腻的东西或者甜食也可能造成体内毒素堆积,从而诱发痤疮。3、压力太大:如果你总是处于紧张焦虑的情绪之中,同时还没有及时调整好心态,那么很容易出现内分泌失调的情况,进而影响到肌肤健康。4、睡眠质量差:熬夜对我们的身体伤害非常严重,尤其是对于年轻人群更甚,他们需要保证充足的睡眠才行,否则很容易让肌肤变得暗沉无光泽。 |
| 脸上长青春痘怎么办,能吃啥药,需要注意啥 | 你好,根据您的描述考虑是痤疮。建议外用维胺酯乳膏或者阿达帕林凝胶进行治疗看看,平时禁忌辛辣和刺激性食物,如海鲜、狗肉、羊肉等“发”动品,以及葱蒜韭姜等刺激性调料。多喝水,少吃油腻高热量食物。 |
| 由于输精管堵塞会不会导致男性不育?,医生您好,我同事这几天不开心,我问她怎么了,她说她老公输精管堵塞,所以一直没孩子,不是不想要,请问输精管堵塞会不会导致男性不育? | 你好!根据你的叙述及症状推测:如果有输精管阻塞或者狭窄等情况都可能会引来无法排出来精液而影响怀上。建议到正规专业的快孕医院实施进行咨询和治疗为宜。对于输精管患者来说,及时救治是可以减低病发,如果不及时救治会影响到受孕情况。因此患者要及时药物救治,并且日常生活中要多看重自身护理 |
| 心肌梗塞能治疗吗? | 心脏病是一种危害性极大的疾病,我们要对它有足够多的了解。在生活中很多人都会患上这个疾病,而且还不知道怎么去预防和护理,所以就导致自己的身体越来越差,那么下面小编给你介绍几点关于心脏病的常识吧!1、冠状动脉粥样硬化性心脏病:又称为冠心病或缺血型心脏病(IHD),是世界各地最重要的死亡原因之一。2、风湿热:主要侵犯心脏瓣膜,可引起二尖瓣狭窄及其他类型的心脏瓣膜病变;3、先天性心脏病:包括房间隔缺损、室间隔缺损等;4、高脂蛋白血症:由于胆固醇与低密度脂蛋白升高,使得动脉壁增厚,管腔缩小,易形成斑块并脱落造成急性栓塞;5、肥胖:过量摄入食物,特别是含糖饮料,加速脂肪堆积,促进动脉粥样硬化发展;6、吸烟:尼古丁刺激交感神经兴奋,使外周阻力增强,收缩压升高,舒张期减短,从而降低左右心室射血分数,诱发心律失常;7、酗酒:长期嗜酒者,肝功受到影响,代谢紊乱,容易出现脂质代谢异常。 |
| 怎么让自己精力充沛,列5点建议 | 1、保证睡眠质量:每天晚上7-8小时的高质量睡眠是提升身体和大脑活跃度最有效的方法。2、合理饮食:均衡营养摄入对于维持人体正常运转至关重要,多吃蔬菜水果,少油腻辛辣刺激性食物。3、适当锻炼:坚持一定强度的锻炼可以促进血液循环,增加肌肉耐力,改善心肺功能等。4、学习新知识:不断接触新事物,开拓视野,思考问题,都会使得我们更具创造力与灵感。5、放松心情:通过冥想或者瑜伽来缓解压力,也可以听音乐或看电影来调节情绪。 |
| 写个春天的诗 | 春风吹拂,花儿绽放。<br/>阳光明媚,鸟语鸣叫。<br/>树木抽芽,草地变绿。<br/>万物复苏,生机勃发。 |
| 南极洲为啥没有北极熊? |南极是世界上最大的冰盖,覆盖着98%以上的地球表面积。它位于南纬60度至74度之间,环绕在海洋中心周围。由于其独特的气候条件和生态系统,南极被称为“真正的荒野”。南极洲拥有丰富多样的动植物资源,但这里并不存在北极熊。 那么,为什么南极洲没有北极熊呢?首先,我们需要了解一下两者栖息地的差异。北极熊主要分布在北半球寒冷的高纬度地区,如加拿大、格陵兰岛等地。而南极洲则处于南半球,属于温带或寒带气候类型。虽然南极洲也有很低的平均气温(-25℃左右),但与北极相比还是太热了。因此,即使北极熊能够适应更严酷的气候条件,它们也无法在南极找到合适的栖息地。另外,南极洲缺乏陆地哺乳动物食物来源,包括鱼类、鲸鱼和企鹅等。尽管南极洲的水域中也有各种鱼类,但数量远少于北极圈内。同时,南极洲的土著居民——企鹅群体繁殖季节期间会消耗掉大部分可用的食物资源,导致当地的鱼类数量减少甚至枯竭。|
### Training datasets
- 500K Chinese ChatGPT-style instructions (Belle dataset): [BelleGroup/train_0.5M_CN](https://huggingface.co/datasets/BelleGroup/train_0.5M_CN)
- 1M Chinese ChatGPT-style instructions (Belle dataset): [BelleGroup/train_1M_CN](https://huggingface.co/datasets/BelleGroup/train_1M_CN)
- 50K English ChatGPT-style instructions (Alpaca dataset): [50k English Stanford Alpaca dataset](https://github.com/tatsu-lab/stanford_alpaca#data-release)
- 20K Chinese ChatGPT-style instructions (Alpaca dataset): [shibing624/alpaca-zh](https://huggingface.co/datasets/shibing624/alpaca-zh)
- 690K Chinese instructions (Guanaco dataset; 500K Belle + 190K Guanaco): [Chinese-Vicuna/guanaco_belle_merge_v1.0](https://huggingface.co/datasets/Chinese-Vicuna/guanaco_belle_merge_v1.0)
- 2.4M Chinese medical records (including pre-training data and instruction fine-tuning data): [shibing624/medical](https://huggingface.co/datasets/shibing624/medical)
To train ChatGLM/LLaMA/BLOOM models, see [https://github.com/shibing624/textgen](https://github.com/shibing624/textgen)
## Citation
```latex
@software{textgen,
author = {Ming Xu},
title = {textgen: Implementation of language model finetune},
year = {2023},
url = {https://github.com/shibing624/textgen},
}
```
|
s3nh/MedLLaMA_13B-GGML
|
s3nh
| 2023-08-06T19:30:00Z | 0 | 4 |
transformers
|
[
"transformers",
"text-generation",
"en",
"license:openrail",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-06T15:46:34Z |
---
license: openrail
language:
- en
pipeline_tag: text-generation
library_name: transformers
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGML Format model files for [This project](https://huggingface.co/chaoyi-wu/MedLLaMA_13B).
### inference
```python
from ctransformers import AutoModelForCausalLM

# output_dir and ggml_file are placeholders for the local directory and GGML filename
llm = AutoModelForCausalLM.from_pretrained(output_dir, ggml_file,
                                           gpu_layers=32, model_type="llama")
manual_input: str = "Tell me about your last dream, please."
llm(manual_input,
    max_new_tokens=256,
    temperature=0.9,
    top_p=0.7)
```
|
BigSyal/keisya
|
BigSyal
| 2023-08-06T19:28:21Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-06T19:26:26Z |
---
license: creativeml-openrail-m
---
|
HydraLM/Nous-Hermes-llama-2-7b_7b_cluster00_partitioned_v3_standardized_00
|
HydraLM
| 2023-08-06T19:23:47Z | 10 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T17:51:50Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
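Since this repo holds a PEFT adapter rather than full model weights, a minimal sketch of attaching it to a base model (the base model id is inferred from the repo name, so treat it as an assumption):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# assumption: the adapter was trained on top of Nous-Hermes-llama-2-7b
base = AutoModelForCausalLM.from_pretrained("NousResearch/Nous-Hermes-llama-2-7b", device_map="auto")
model = PeftModel.from_pretrained(base, "HydraLM/Nous-Hermes-llama-2-7b_7b_cluster00_partitioned_v3_standardized_00")
```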
|
MattStammers/Bipedal_Walker_v3_Hardcore_Flat_Optimised
|
MattStammers
| 2023-08-06T19:15:39Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"BipedalWalker-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-06T19:14:56Z |
---
library_name: stable-baselines3
tags:
- BipedalWalker-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BipedalWalker-v3
type: BipedalWalker-v3
metrics:
- type: mean_reward
value: -85.95 +/- 18.79
name: mean_reward
verified: false
---
# **PPO** Agent playing **BipedalWalker-v3**
This is a trained model of a **PPO** agent playing **BipedalWalker-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch (the checkpoint filename is a placeholder for the `.zip` stored in this repo):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

checkpoint = load_from_hub(repo_id="MattStammers/Bipedal_Walker_v3_Hardcore_Flat_Optimised", filename="<checkpoint>.zip")  # placeholder filename
model = PPO.load(checkpoint)
```
|
HydraLM/Nous-Hermes-llama-2-7b_7b_cluster01_partitioned_v3_standardized_01
|
HydraLM
| 2023-08-06T19:13:00Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T05:46:10Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
Bschleter/llama-2-7b-hermes-financecompliance
|
Bschleter
| 2023-08-06T19:11:56Z | 19 | 4 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"finance",
"compliance",
"zero-shot-classification",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
zero-shot-classification
| 2023-08-05T00:59:15Z |
---
language:
- en
pipeline_tag: zero-shot-classification
tags:
- finance
- compliance
---
# Model Card for Model ID
<!--
-->
## Model Details
Based on the full-weight llama-2-hermes from Nous Research.
### Model Description
This model was fine-tuned from the full-weight llama-2-hermes-7B from Nous Research. It is a preliminary V1, put together quickly to assist
with finance and compliance tasks, and is mostly tuned to the new SEC Marketing and Compliance rules established in 2021. Later iterations will cover more guidelines and rulings
beyond the SEC Marketing rule.
https://www.sec.gov/files/rules/final/2020/ia-5653.pdf
<!-- -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [English]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [llama 2-hermes-7b]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
This model is meant to help companies and individuals in compliance and marketing departments find issues in their marketing or public-facing documents.
Since the new marketing rule is principles-based, it requires logic, experience, and reasoning to determine whether a statement or advertisement would be compliant
with the SEC's new guidelines, and different reviewers can reach different conclusions. This version, trained on a small, high-quality dataset, therefore aims
to provide a second viewpoint on whether a public-facing statement is compliant with the SEC's guidelines. The dataset was crafted by
reviewing the SEC Marketing rule and other scenarios, and by providing reasoning within the `### Response ###` block to help guide the model in reasoning tasks.
Further versions will be reviewed more thoroughly for accuracy and bias, and will add more data.
<!-- -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
For use by marketing and compliance teams in finance to assist in the determination and interpretation of the SEC Marketing rule and other SEC interpretations. No output should be guaranteed as fact,
and review of the data is encouraged. This is simply to assist, and to aid users in remembering certain aspects, and interpretations of aspects, of the long SEC Marketing guidelines,
among other SEC rulings.
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
This model is not intended to be used as fact, as evidence/proof in a trial hearing, or as an indication of innocence in an SEC audit/investigation.
It should be used by professionals deeply familiar with the SEC's guidelines and compliance procedures.
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
This is the first model iteration, and it has not been fully reviewed by multiple professional peers for its accuracy, bias, and output variations.
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. -->
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- -->
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Training Hyperparameters
- <!--# Compute dtype for 4-bit base models
bnb_4bit_compute_dtype = "float16"
bnb_4bit_quant_type = "nf4"
use_nested_quant = False
fp16 = False
bf16 = False  # this will be True for the next training run
per_device_train_batch_size = 4
per_device_eval_batch_size = 4
gradient_accumulation_steps = 1
gradient_checkpointing = True
max_grad_norm = 0.3
learning_rate = 2e-5  # 1e-4 will be applied for a 13B
weight_decay = 0.001
optim = "paged_adamw_32bit"
lr_scheduler_type = "constant"
max_steps = 13000
warmup_ratio = 0.03
group_by_length = True
-->
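A sketch of how the commented hyperparameters above might map onto `transformers.TrainingArguments` (the `output_dir` is a placeholder; the other values mirror the comment):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./results",            # placeholder: not stated in the card
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=1,
    gradient_checkpointing=True,
    max_grad_norm=0.3,
    learning_rate=2e-5,
    weight_decay=0.001,
    optim="paged_adamw_32bit",
    lr_scheduler_type="constant",
    max_steps=13000,
    warmup_ratio=0.03,
    group_by_length=True,
    fp16=False,
    bf16=False,
)
```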
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Metrics
<!-- -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[Google Colab]
#### Hardware
[1xA100]
|
Robayet2023/esm2_t12_35M_UR50D-finetuned-localization
|
Robayet2023
| 2023-08-06T19:10:45Z | 100 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"esm",
"text-classification",
"generated_from_trainer",
"base_model:facebook/esm2_t12_35M_UR50D",
"base_model:finetune:facebook/esm2_t12_35M_UR50D",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-01T22:55:53Z |
---
license: mit
base_model: facebook/esm2_t12_35M_UR50D
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: esm2_t12_35M_UR50D-finetuned-localization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# esm2_t12_35M_UR50D-finetuned-localization
This model is a fine-tuned version of [facebook/esm2_t12_35M_UR50D](https://huggingface.co/facebook/esm2_t12_35M_UR50D) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0331
- Accuracy: 0.4835
## Model description
More information needed
## Intended uses & limitations
More information needed
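For quick experimentation, a minimal sketch of loading the fine-tuned classifier (the example protein sequence is arbitrary, and the label semantics are not documented in this card, so the raw predicted index is printed):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "Robayet2023/esm2_t12_35M_UR50D-finetuned-localization"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # arbitrary example sequence
inputs = tokenizer(sequence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # predicted class index
```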
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.042 | 1.0 | 23758 | 0.0388 | 0.4835 |
| 0.0325 | 2.0 | 47516 | 0.0351 | 0.4835 |
| 0.0259 | 3.0 | 71274 | 0.0331 | 0.4835 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1
- Datasets 2.14.3
- Tokenizers 0.13.3
|
strnam/instruction-bloom-7b1
|
strnam
| 2023-08-06T18:52:54Z | 8 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-06T18:52:48Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: True
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
ThuyNT03/xlm-roberta-base-finetuned-panx-it
|
ThuyNT03
| 2023-08-06T18:46:09Z | 88 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-06T18:42:50Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.it
split: validation
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.8199265006124948
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2533
- F1: 0.8199
## Model description
More information needed
## Intended uses & limitations
More information needed
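A minimal sketch of running the fine-tuned tagger through the `transformers` NER pipeline (the Italian example sentence is illustrative):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ThuyNT03/xlm-roberta-base-finetuned-panx-it",
    aggregation_strategy="simple",
)
print(ner("Giulia lavora per la FAO a Roma."))
```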
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 70 | 0.3206 | 0.7644 |
| No log | 2.0 | 140 | 0.2674 | 0.8118 |
| No log | 3.0 | 210 | 0.2533 | 0.8199 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
Peniis2/Airplane
|
Peniis2
| 2023-08-06T18:43:04Z | 0 | 0 | null |
[
"en",
"dataset:databricks/databricks-dolly-15k",
"region:us"
] | null | 2023-08-06T18:41:29Z |
---
datasets:
- databricks/databricks-dolly-15k
language:
- en
---
|
Surya-Teja-Menta/ppo-Huggy
|
Surya-Teja-Menta
| 2023-08-06T18:40:20Z | 6 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-08-06T18:40:14Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: Surya-Teja-Menta/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
ThuyNT03/xlm-roberta-base-finetuned-panx-de-fr
|
ThuyNT03
| 2023-08-06T18:37:02Z | 95 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-06T18:23:38Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1603
- F1: 0.8595
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 715 | 0.1777 | 0.8240 |
| No log | 2.0 | 1430 | 0.1603 | 0.8420 |
| No log | 3.0 | 2145 | 0.1603 | 0.8595 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
UHS/PPO_Bipedal_Walker_Flat_Optimised
|
UHS
| 2023-08-06T18:22:30Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"BipedalWalker-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-06T18:21:21Z |
---
library_name: stable-baselines3
tags:
- BipedalWalker-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BipedalWalker-v3
type: BipedalWalker-v3
metrics:
- type: mean_reward
value: 302.24 +/- 1.27
name: mean_reward
verified: false
---
# **PPO** Agent playing **BipedalWalker-v3**
This is a trained model of a **PPO** agent playing **BipedalWalker-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch (the checkpoint filename is a placeholder for the `.zip` stored in this repo):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

checkpoint = load_from_hub(repo_id="UHS/PPO_Bipedal_Walker_Flat_Optimised", filename="<checkpoint>.zip")  # placeholder filename
model = PPO.load(checkpoint)
```
|
textgain/allnli-GroNLP-bert-base-dutch-cased
|
textgain
| 2023-08-06T18:09:12Z | 553 | 3 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"nl",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-01-16T13:17:02Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
language:
- nl
widget:
- source_sentence: "De kat slaapt op het bed."
sentences:
- "De poes rust op het matras."
- "De hond slaapt naast het bed."
- "Het bed is gemaakt van hout."
---
# allnli-GroNLP-bert-base-dutch-cased
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["De kat slaapt op het bed.", "De poes rust op het matras."]
model = SentenceTransformer('textgain/allnli-GroNLP-bert-base-dutch-cased')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ["De kat slaapt op het bed.", "De poes rust op het matras."]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('textgain/allnli-GroNLP-bert-base-dutch-cased')
model = AutoModel.from_pretrained('textgain/allnli-GroNLP-bert-base-dutch-cased')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=textgain/allnli-GroNLP-bert-base-dutch-cased)
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 4388 with parameters:
```
{'batch_size': 128}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 438,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 439,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
ishwarbb23/t52
|
ishwarbb23
| 2023-08-06T17:53:05Z | 101 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:ThomasSimonini/t5-end2end-question-generation",
"base_model:finetune:ThomasSimonini/t5-end2end-question-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-05T18:12:16Z |
---
license: apache-2.0
base_model: ThomasSimonini/t5-end2end-question-generation
tags:
- generated_from_trainer
model-index:
- name: t52
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t52
This model is a fine-tuned version of [ThomasSimonini/t5-end2end-question-generation](https://huggingface.co/ThomasSimonini/t5-end2end-question-generation) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6944
## Model description
More information needed
## Intended uses & limitations
More information needed
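A minimal sketch of generating questions with this checkpoint, assuming the `generate questions:` prompt prefix used by the base `t5-end2end-question-generation` model:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("ishwarbb23/t52")
model = AutoModelForSeq2SeqLM.from_pretrained("ishwarbb23/t52")

context = "The Eiffel Tower was completed in 1889 for the Paris World's Fair."
inputs = tokenizer(f"generate questions: {context}", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```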
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.2217 | 0.65 | 100 | 2.9125 |
| 2.9732 | 1.3 | 200 | 2.8349 |
| 2.8996 | 1.95 | 300 | 2.7879 |
| 2.8009 | 2.59 | 400 | 2.7614 |
| 2.7532 | 3.24 | 500 | 2.7406 |
| 2.6964 | 3.89 | 600 | 2.7208 |
| 2.6462 | 4.54 | 700 | 2.7153 |
| 2.6265 | 5.19 | 800 | 2.7037 |
| 2.6089 | 5.84 | 900 | 2.6968 |
| 2.5522 | 6.49 | 1000 | 2.6944 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
voxxer/Huggy-PPO
|
voxxer
| 2023-08-06T17:19:40Z | 2 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-08-06T17:19:34Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: voxxer/Huggy-PPO
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
gioca91/ppo-LunarLander-v2-optuna
|
gioca91
| 2023-08-06T17:09:35Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-06T17:05:27Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 279.02 +/- 24.34
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch (the checkpoint filename is a placeholder for the `.zip` stored in this repo):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

checkpoint = load_from_hub(repo_id="gioca91/ppo-LunarLander-v2-optuna", filename="<checkpoint>.zip")  # placeholder filename
model = PPO.load(checkpoint)
```
|
ailabturkiye/Lilith
|
ailabturkiye
| 2023-08-06T17:07:29Z | 0 | 0 | null |
[
"diabloV",
"diablo v",
"lilith",
"villain",
"license:openrail",
"region:us"
] | null | 2023-08-06T16:38:09Z |
---
license: openrail
metrics:
- character
tags:
- diabloV
- diablo v
- lilith
- villain
---
Lilith -Diablo V-
Lilith is the main villain of the game Diablo V. The model was trained for 500 epochs, to step s4500.
The model's TRAIN and DATASET belong to me. Unauthorized use is prohibited. If permission is granted, the model owner must be credited in the "Cast" section on whichever social media platforms you share it to.
Discord: Alastor#3115
YouTube: https://www.youtube.com/@NahParti
|