Dataset columns (value ranges as reported by the viewer):

| Column | Type | Range / distinct values |
|---|---|---|
| modelId | string | length 5 – 139 |
| author | string | length 2 – 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-08-27 18:28:06 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string (categorical) | 523 distinct values |
| tags | list | length 1 – 4.05k |
| pipeline_tag | string (categorical) | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-08-27 18:27:40 |
| card | string | length 11 – 1.01M |
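Given this schema, the rows below can be queried like any tabular dump. The following is a minimal pandas sketch; the file name `models.parquet` is an illustrative assumption, since the dump does not say how the rows are stored.

```python
# Minimal sketch of querying rows with the schema above.
# Assumption: the rows have been exported to "models.parquet" (not stated in the dump).
import pandas as pd

df = pd.read_parquet("models.parquet")

# Keep transformers-library text-classification models and rank them by downloads.
top = (
    df[(df["pipeline_tag"] == "text-classification") & (df["library_name"] == "transformers")]
    .sort_values("downloads", ascending=False)
    .loc[:, ["modelId", "author", "downloads", "likes", "last_modified"]]
    .head(10)
)
print(top.to_string(index=False))
```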
modelId: ibm-research/ColD-Fusion-itr15-seed4
author: ibm-research
last_modified: 2022-12-06T09:50:09Z
downloads: 5
likes: 0
library_name: transformers
tags: [ "transformers", "pytorch", "roberta", "text-classification", "exbert", "en", "arxiv:2212.01378", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
createdAt: 2022-12-06T09:49:36Z
card:

---
language: en
tags:
- exbert
license: mit
---

# ColD Fusion model

A finetuned model intended to serve as a strong base model. It improves over RoBERTa-base and was trained on 35 datasets. Full details are in [this paper](https://arxiv.org/abs/2212.01378).

## Paper Abstract

Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now, massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources that are only available to well-resourced teams. In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed computation and requires limited communication and no sharing of data. Consequently, ColD Fusion can create a synergistic loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets, the ColD Fusion-based model outperforms RoBERTa by 2.45 points on average without any changes to the architecture.

### How to use

The best way to use this model is to finetune it on your own task, but you can also extract features directly. To get the features of a given text in PyTorch:

```python
from transformers import RobertaTokenizer, RobertaModel

# Load the tokenizer and encoder weights from the hub.
tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion')
model = RobertaModel.from_pretrained('ibm/ColD-Fusion')

text = "Replace me with any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

and in TensorFlow:

```python
from transformers import RobertaTokenizer, TFRobertaModel

tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion')
model = TFRobertaModel.from_pretrained('ibm/ColD-Fusion')

text = "Replace me with any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```

## Evaluation results

See full evaluation results of this model and many more [here](https://ibm.github.io/model-recycling/roberta-base_table.html). When fine-tuned on downstream tasks, this model achieves the following results:

### BibTeX entry and citation info

```bibtex
@article{ColDFusion,
  author        = {Shachar Don-Yehiya and Elad Venezian and Colin Raffel and Noam Slonim and Yoav Katz and Leshem Choshen},
  title         = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning},
  journal       = {CoRR},
  volume        = {abs/2212.01378},
  year          = {2022},
  url           = {https://arxiv.org/abs/2212.01378},
  archivePrefix = {arXiv},
  eprint        = {2212.01378},
}
```

<a href="https://huggingface.co/exbert/?model=ibm/ColD-Fusion">
	<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
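The card recommends finetuning on your own task but only demonstrates feature extraction. Below is a minimal finetuning sketch with the Hugging Face `Trainer`; the `imdb` dataset, the hyperparameters, and `ignore_mismatched_sizes=True` are illustrative assumptions rather than settings taken from the card.

```python
# Minimal finetuning sketch (assumptions: the imdb dataset and these
# hyperparameters are placeholders, not settings from the model card).
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

checkpoint = "ibm/ColD-Fusion"
dataset = load_dataset("imdb")  # any text/label classification dataset works
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

def tokenize(batch):
    # Truncate to the model's maximum length; padding is handled per batch
    # by the Trainer's default data collator.
    return tokenizer(batch["text"], truncation=True)

tokenized = dataset.map(tokenize, batched=True)

# A fresh classification head goes on top of the ColD Fusion encoder;
# ignore_mismatched_sizes discards any head shipped with the checkpoint.
model = AutoModelForSequenceClassification.from_pretrained(
    checkpoint, num_labels=2, ignore_mismatched_sizes=True
)

args = TrainingArguments(
    output_dir="cold-fusion-imdb",
    per_device_train_batch_size=16,
    num_train_epochs=3,
    learning_rate=2e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    tokenizer=tokenizer,
)
trainer.train()
```

Any sequence-classification dataset with text and label columns can be substituted; only `num_labels` needs to match the target task.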
modelId: ibm-research/ColD-Fusion-itr15-seed2
author: ibm-research
last_modified: 2022-12-06T09:49:05Z
downloads: 3
likes: 0
library_name: transformers
tags: [ "transformers", "pytorch", "roberta", "text-classification", "exbert", "en", "arxiv:2212.01378", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
createdAt: 2022-12-06T09:48:30Z
card: identical to the ColD Fusion model card reproduced above (see ibm-research/ColD-Fusion-itr15-seed4).
modelId: ibm-research/ColD-Fusion-itr15-seed1
author: ibm-research
last_modified: 2022-12-06T09:48:27Z
downloads: 3
likes: 0
library_name: transformers
tags: [ "transformers", "pytorch", "roberta", "text-classification", "exbert", "en", "arxiv:2212.01378", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
createdAt: 2022-12-06T09:47:50Z
card: identical to the ColD Fusion model card reproduced above (see ibm-research/ColD-Fusion-itr15-seed4).
modelId: ibm-research/ColD-Fusion-itr14-seed4
author: ibm-research
last_modified: 2022-12-06T09:47:22Z
downloads: 4
likes: 0
library_name: transformers
tags: [ "transformers", "pytorch", "roberta", "text-classification", "exbert", "en", "arxiv:2212.01378", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
createdAt: 2022-12-06T09:46:54Z
card: identical to the ColD Fusion model card reproduced above (see ibm-research/ColD-Fusion-itr15-seed4).
modelId: ibm-research/ColD-Fusion-itr14-seed3
author: ibm-research
last_modified: 2022-12-06T09:46:52Z
downloads: 4
likes: 0
library_name: transformers
tags: [ "transformers", "pytorch", "roberta", "text-classification", "exbert", "en", "arxiv:2212.01378", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
createdAt: 2022-12-06T09:46:33Z
card: identical to the ColD Fusion model card reproduced above (see ibm-research/ColD-Fusion-itr15-seed4).
modelId: ibm-research/ColD-Fusion-itr14-seed2
author: ibm-research
last_modified: 2022-12-06T09:46:30Z
downloads: 3
likes: 0
library_name: transformers
tags: [ "transformers", "pytorch", "roberta", "text-classification", "exbert", "en", "arxiv:2212.01378", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
createdAt: 2022-12-06T09:46:18Z
card: identical to the ColD Fusion model card reproduced above (see ibm-research/ColD-Fusion-itr15-seed4).
modelId: ibm-research/ColD-Fusion-itr13-seed4
author: ibm-research
last_modified: 2022-12-06T09:45:57Z
downloads: 4
likes: 0
library_name: transformers
tags: [ "transformers", "pytorch", "roberta", "text-classification", "exbert", "en", "arxiv:2212.01378", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
createdAt: 2022-12-06T09:45:46Z
card: identical to the ColD Fusion model card reproduced above (see ibm-research/ColD-Fusion-itr15-seed4).
modelId: ibm-research/ColD-Fusion-itr13-seed3
author: ibm-research
last_modified: 2022-12-06T09:45:30Z
downloads: 4
likes: 0
library_name: transformers
tags: [ "transformers", "pytorch", "roberta", "text-classification", "exbert", "en", "arxiv:2212.01378", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
createdAt: 2022-12-06T09:45:19Z
card: identical to the ColD Fusion model card reproduced above (see ibm-research/ColD-Fusion-itr15-seed4).
modelId: ibm-research/ColD-Fusion-itr12-seed0
author: ibm-research
last_modified: 2022-12-06T09:44:37Z
downloads: 3
likes: 0
library_name: transformers
tags: [ "transformers", "pytorch", "roberta", "text-classification", "exbert", "en", "arxiv:2212.01378", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
createdAt: 2022-12-06T09:44:27Z
card: identical to the ColD Fusion model card reproduced above (see ibm-research/ColD-Fusion-itr15-seed4).
modelId: ibm-research/ColD-Fusion-itr12-seed1
author: ibm-research
last_modified: 2022-12-06T09:43:59Z
downloads: 5
likes: 0
library_name: transformers
tags: [ "transformers", "pytorch", "roberta", "text-classification", "exbert", "en", "arxiv:2212.01378", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
createdAt: 2022-12-06T09:43:49Z
card: identical to the ColD Fusion model card reproduced above (see ibm-research/ColD-Fusion-itr15-seed4).
modelId: ibm-research/ColD-Fusion-itr11-seed4
author: ibm-research
last_modified: 2022-12-06T09:43:46Z
downloads: 6
likes: 0
library_name: transformers
tags: [ "transformers", "pytorch", "roberta", "text-classification", "exbert", "en", "arxiv:2212.01378", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
createdAt: 2022-12-06T09:43:36Z
card: identical to the ColD Fusion model card reproduced above (see ibm-research/ColD-Fusion-itr15-seed4).
modelId: ibm-research/ColD-Fusion-itr11-seed3
author: ibm-research
last_modified: 2022-12-06T09:43:19Z
downloads: 5
likes: 0
library_name: transformers
tags: [ "transformers", "pytorch", "roberta", "text-classification", "exbert", "en", "arxiv:2212.01378", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
createdAt: 2022-12-06T09:43:08Z
card: identical to the ColD Fusion model card reproduced above (see ibm-research/ColD-Fusion-itr15-seed4).
modelId: ibm-research/ColD-Fusion-itr11-seed1
author: ibm-research
last_modified: 2022-12-06T09:42:54Z
downloads: 4
likes: 0
library_name: transformers
tags: [ "transformers", "pytorch", "roberta", "text-classification", "exbert", "en", "arxiv:2212.01378", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
createdAt: 2022-12-06T09:42:42Z
card: identical to the ColD Fusion model card reproduced above (see ibm-research/ColD-Fusion-itr15-seed4).
modelId: ibm-research/ColD-Fusion-itr10-seed4
author: ibm-research
last_modified: 2022-12-06T09:42:40Z
downloads: 5
likes: 0
library_name: transformers
tags: [ "transformers", "pytorch", "roberta", "text-classification", "exbert", "en", "arxiv:2212.01378", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
createdAt: 2022-12-06T09:42:31Z
card: identical to the ColD Fusion model card reproduced above (see ibm-research/ColD-Fusion-itr15-seed4).
modelId: ibm-research/ColD-Fusion-itr10-seed3
author: ibm-research
last_modified: 2022-12-06T09:42:28Z
downloads: 5
likes: 0
library_name: transformers
tags: [ "transformers", "pytorch", "roberta", "text-classification", "exbert", "en", "arxiv:2212.01378", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
createdAt: 2022-12-06T09:42:19Z
card: identical to the ColD Fusion model card reproduced above (see ibm-research/ColD-Fusion-itr15-seed4).
modelId: ibm-research/ColD-Fusion-itr10-seed0
author: ibm-research
last_modified: 2022-12-06T09:42:17Z
downloads: 4
likes: 0
library_name: transformers
tags: [ "transformers", "pytorch", "roberta", "text-classification", "exbert", "en", "arxiv:2212.01378", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
createdAt: 2022-12-06T09:42:07Z
card: identical to the ColD Fusion model card reproduced above (see ibm-research/ColD-Fusion-itr15-seed4).
modelId: ibm-research/ColD-Fusion-itr10-seed1
author: ibm-research
last_modified: 2022-12-06T09:41:51Z
downloads: 5
likes: 0
library_name: transformers
tags: [ "transformers", "pytorch", "roberta", "text-classification", "exbert", "en", "arxiv:2212.01378", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
createdAt: 2022-12-06T09:41:41Z
card: identical to the ColD Fusion model card reproduced above (see ibm-research/ColD-Fusion-itr15-seed4).
ibm-research/ColD-Fusion-itr9-seed0
ibm-research
2022-12-06T09:41:08Z
3
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "exbert", "en", "arxiv:2212.01378", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-12-06T09:40:57Z
--- language: en tags: - exbert license: mit --- # ColD Fusion model Finetuned model that aims to be a great base model. It improves over RoBERTa base, trained on 35 datasets. Full details at [this paper](https://arxiv.org/abs/2212.01378). ## Paper Abstract: Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now, massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources that are only available to well-resourced teams. In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed computation and requires limited communication and no sharing of data. Consequentially, ColD Fusion can create a synergistic loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets, ColD Fusion-based model outperforms RoBERTa by 2.45 points in average without any changes to the architecture. ### How to use Best way to use is to finetune on your own task, but you can also extract features directly. To get the features of a given text in PyTorch: ```python from transformers import RobertaTokenizer, RobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = RobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import RobertaTokenizer, TFRobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = TFRobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Evaluation results See full evaluation results of this model and many more [here](https://ibm.github.io/model-recycling/roberta-base_table.html) When fine-tuned on downstream tasks, this model achieves the following results: ### BibTeX entry and citation info ```bibtex @article{ColDFusion, author = {Shachar Don-Yehiya, Elad Venezian, Colin Raffel, Noam Slonim, Yoav Katz, Leshem ChoshenYinhan Liu and}, title = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning}, journal = {CoRR}, volume = {abs/2212.01378}, year = {2022}, url = {https://arxiv.org/abs/2212.01378}, archivePrefix = {arXiv}, eprint = {2212.01378}, } ``` <a href="https://huggingface.co/exbert/?model=ibm/ColD-Fusion"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
ibm-research/ColD-Fusion-itr9-seed2
ibm-research
2022-12-06T09:40:52Z
4
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "exbert", "en", "arxiv:2212.01378", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-12-06T09:40:41Z
--- language: en tags: - exbert license: mit --- # ColD Fusion model Finetuned model that aims to be a great base model. It improves over RoBERTa base, trained on 35 datasets. Full details at [this paper](https://arxiv.org/abs/2212.01378). ## Paper Abstract: Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now, massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources that are only available to well-resourced teams. In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed computation and requires limited communication and no sharing of data. Consequentially, ColD Fusion can create a synergistic loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets, ColD Fusion-based model outperforms RoBERTa by 2.45 points in average without any changes to the architecture. ### How to use Best way to use is to finetune on your own task, but you can also extract features directly. To get the features of a given text in PyTorch: ```python from transformers import RobertaTokenizer, RobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = RobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import RobertaTokenizer, TFRobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = TFRobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Evaluation results See full evaluation results of this model and many more [here](https://ibm.github.io/model-recycling/roberta-base_table.html) When fine-tuned on downstream tasks, this model achieves the following results: ### BibTeX entry and citation info ```bibtex @article{ColDFusion, author = {Shachar Don-Yehiya, Elad Venezian, Colin Raffel, Noam Slonim, Yoav Katz, Leshem ChoshenYinhan Liu and}, title = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning}, journal = {CoRR}, volume = {abs/2212.01378}, year = {2022}, url = {https://arxiv.org/abs/2212.01378}, archivePrefix = {arXiv}, eprint = {2212.01378}, } ``` <a href="https://huggingface.co/exbert/?model=ibm/ColD-Fusion"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
tomekkorbak/musing_hoover
tomekkorbak
2022-12-06T09:35:33Z
1
0
transformers
[ "transformers", "pytorch", "gpt2", "generated_from_trainer", "en", "dataset:tomekkorbak/detoxify-pile-chunk3-0-50000", "dataset:tomekkorbak/detoxify-pile-chunk3-50000-100000", "dataset:tomekkorbak/detoxify-pile-chunk3-100000-150000", "dataset:tomekkorbak/detoxify-pile-chunk3-150000-200000", "dataset:tomekkorbak/detoxify-pile-chunk3-200000-250000", "dataset:tomekkorbak/detoxify-pile-chunk3-250000-300000", "dataset:tomekkorbak/detoxify-pile-chunk3-300000-350000", "dataset:tomekkorbak/detoxify-pile-chunk3-350000-400000", "dataset:tomekkorbak/detoxify-pile-chunk3-400000-450000", "dataset:tomekkorbak/detoxify-pile-chunk3-450000-500000", "dataset:tomekkorbak/detoxify-pile-chunk3-500000-550000", "dataset:tomekkorbak/detoxify-pile-chunk3-550000-600000", "dataset:tomekkorbak/detoxify-pile-chunk3-600000-650000", "dataset:tomekkorbak/detoxify-pile-chunk3-650000-700000", "dataset:tomekkorbak/detoxify-pile-chunk3-700000-750000", "dataset:tomekkorbak/detoxify-pile-chunk3-750000-800000", "dataset:tomekkorbak/detoxify-pile-chunk3-800000-850000", "dataset:tomekkorbak/detoxify-pile-chunk3-850000-900000", "dataset:tomekkorbak/detoxify-pile-chunk3-900000-950000", "dataset:tomekkorbak/detoxify-pile-chunk3-950000-1000000", "dataset:tomekkorbak/detoxify-pile-chunk3-1000000-1050000", "dataset:tomekkorbak/detoxify-pile-chunk3-1050000-1100000", "dataset:tomekkorbak/detoxify-pile-chunk3-1100000-1150000", "dataset:tomekkorbak/detoxify-pile-chunk3-1150000-1200000", "dataset:tomekkorbak/detoxify-pile-chunk3-1200000-1250000", "dataset:tomekkorbak/detoxify-pile-chunk3-1250000-1300000", "dataset:tomekkorbak/detoxify-pile-chunk3-1300000-1350000", "dataset:tomekkorbak/detoxify-pile-chunk3-1350000-1400000", "dataset:tomekkorbak/detoxify-pile-chunk3-1400000-1450000", "dataset:tomekkorbak/detoxify-pile-chunk3-1450000-1500000", "dataset:tomekkorbak/detoxify-pile-chunk3-1500000-1550000", "dataset:tomekkorbak/detoxify-pile-chunk3-1550000-1600000", "dataset:tomekkorbak/detoxify-pile-chunk3-1600000-1650000", "dataset:tomekkorbak/detoxify-pile-chunk3-1650000-1700000", "dataset:tomekkorbak/detoxify-pile-chunk3-1700000-1750000", "dataset:tomekkorbak/detoxify-pile-chunk3-1750000-1800000", "dataset:tomekkorbak/detoxify-pile-chunk3-1800000-1850000", "dataset:tomekkorbak/detoxify-pile-chunk3-1850000-1900000", "dataset:tomekkorbak/detoxify-pile-chunk3-1900000-1950000", "license:mit", "text-generation-inference", "endpoints_compatible", "region:us" ]
null
2022-12-05T16:55:01Z
--- language: - en license: mit tags: - generated_from_trainer datasets: - tomekkorbak/detoxify-pile-chunk3-0-50000 - tomekkorbak/detoxify-pile-chunk3-50000-100000 - tomekkorbak/detoxify-pile-chunk3-100000-150000 - tomekkorbak/detoxify-pile-chunk3-150000-200000 - tomekkorbak/detoxify-pile-chunk3-200000-250000 - tomekkorbak/detoxify-pile-chunk3-250000-300000 - tomekkorbak/detoxify-pile-chunk3-300000-350000 - tomekkorbak/detoxify-pile-chunk3-350000-400000 - tomekkorbak/detoxify-pile-chunk3-400000-450000 - tomekkorbak/detoxify-pile-chunk3-450000-500000 - tomekkorbak/detoxify-pile-chunk3-500000-550000 - tomekkorbak/detoxify-pile-chunk3-550000-600000 - tomekkorbak/detoxify-pile-chunk3-600000-650000 - tomekkorbak/detoxify-pile-chunk3-650000-700000 - tomekkorbak/detoxify-pile-chunk3-700000-750000 - tomekkorbak/detoxify-pile-chunk3-750000-800000 - tomekkorbak/detoxify-pile-chunk3-800000-850000 - tomekkorbak/detoxify-pile-chunk3-850000-900000 - tomekkorbak/detoxify-pile-chunk3-900000-950000 - tomekkorbak/detoxify-pile-chunk3-950000-1000000 - tomekkorbak/detoxify-pile-chunk3-1000000-1050000 - tomekkorbak/detoxify-pile-chunk3-1050000-1100000 - tomekkorbak/detoxify-pile-chunk3-1100000-1150000 - tomekkorbak/detoxify-pile-chunk3-1150000-1200000 - tomekkorbak/detoxify-pile-chunk3-1200000-1250000 - tomekkorbak/detoxify-pile-chunk3-1250000-1300000 - tomekkorbak/detoxify-pile-chunk3-1300000-1350000 - tomekkorbak/detoxify-pile-chunk3-1350000-1400000 - tomekkorbak/detoxify-pile-chunk3-1400000-1450000 - tomekkorbak/detoxify-pile-chunk3-1450000-1500000 - tomekkorbak/detoxify-pile-chunk3-1500000-1550000 - tomekkorbak/detoxify-pile-chunk3-1550000-1600000 - tomekkorbak/detoxify-pile-chunk3-1600000-1650000 - tomekkorbak/detoxify-pile-chunk3-1650000-1700000 - tomekkorbak/detoxify-pile-chunk3-1700000-1750000 - tomekkorbak/detoxify-pile-chunk3-1750000-1800000 - tomekkorbak/detoxify-pile-chunk3-1800000-1850000 - tomekkorbak/detoxify-pile-chunk3-1850000-1900000 - tomekkorbak/detoxify-pile-chunk3-1900000-1950000 model-index: - name: musing_hoover results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. 
--> # musing_hoover This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets. 
## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 64 - total_train_batch_size: 1024 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.01 - training_steps: 3147 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.24.0 - Pytorch 1.11.0+cu113 - Datasets 2.5.1 - Tokenizers 0.11.6 # Full config {'dataset': {'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000', 'tomekkorbak/detoxify-pile-chunk3-50000-100000', 'tomekkorbak/detoxify-pile-chunk3-100000-150000', 'tomekkorbak/detoxify-pile-chunk3-150000-200000', 'tomekkorbak/detoxify-pile-chunk3-200000-250000', 'tomekkorbak/detoxify-pile-chunk3-250000-300000', 'tomekkorbak/detoxify-pile-chunk3-300000-350000', 'tomekkorbak/detoxify-pile-chunk3-350000-400000', 'tomekkorbak/detoxify-pile-chunk3-400000-450000', 'tomekkorbak/detoxify-pile-chunk3-450000-500000', 'tomekkorbak/detoxify-pile-chunk3-500000-550000', 'tomekkorbak/detoxify-pile-chunk3-550000-600000', 'tomekkorbak/detoxify-pile-chunk3-600000-650000', 'tomekkorbak/detoxify-pile-chunk3-650000-700000', 'tomekkorbak/detoxify-pile-chunk3-700000-750000', 'tomekkorbak/detoxify-pile-chunk3-750000-800000', 'tomekkorbak/detoxify-pile-chunk3-800000-850000', 'tomekkorbak/detoxify-pile-chunk3-850000-900000', 'tomekkorbak/detoxify-pile-chunk3-900000-950000', 'tomekkorbak/detoxify-pile-chunk3-950000-1000000', 'tomekkorbak/detoxify-pile-chunk3-1000000-1050000', 'tomekkorbak/detoxify-pile-chunk3-1050000-1100000', 'tomekkorbak/detoxify-pile-chunk3-1100000-1150000', 'tomekkorbak/detoxify-pile-chunk3-1150000-1200000', 'tomekkorbak/detoxify-pile-chunk3-1200000-1250000', 'tomekkorbak/detoxify-pile-chunk3-1250000-1300000', 'tomekkorbak/detoxify-pile-chunk3-1300000-1350000', 'tomekkorbak/detoxify-pile-chunk3-1350000-1400000', 'tomekkorbak/detoxify-pile-chunk3-1400000-1450000', 'tomekkorbak/detoxify-pile-chunk3-1450000-1500000', 'tomekkorbak/detoxify-pile-chunk3-1500000-1550000', 'tomekkorbak/detoxify-pile-chunk3-1550000-1600000', 'tomekkorbak/detoxify-pile-chunk3-1600000-1650000', 'tomekkorbak/detoxify-pile-chunk3-1650000-1700000', 'tomekkorbak/detoxify-pile-chunk3-1700000-1750000', 'tomekkorbak/detoxify-pile-chunk3-1750000-1800000', 'tomekkorbak/detoxify-pile-chunk3-1800000-1850000', 'tomekkorbak/detoxify-pile-chunk3-1850000-1900000', 'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'], 'is_split_by_sentences': True}, 'generation': {'every_n_steps': 16, 'force_call_on': [25354], 'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}], 'scenario_configs': [{'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'unconditional', 'num_samples': 2048}, {'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'challenging_rtp', 'num_samples': 2048, 'prompts_path': 'resources/challenging_rtp.jsonl'}], 'scorer_config': {'device': 'cuda:0'}}, 'kl_gpt3_callback': {'every_n_steps': 16, 'force_call_on': [25354], 'max_tokens': 64, 'num_samples': 4096}, 'model': {'from_scratch': True, 'gpt2_config_kwargs': {'reorder_and_upcast_attn': True, 'scale_attn_by': True}, 
'model_kwargs': {'value_head_config': {'is_detached': False}}, 'path_or_name': 'gpt2'}, 'objective': {'alpha': 1, 'beta': 10, 'name': 'AWR'}, 'tokenizer': {'path_or_name': 'gpt2'}, 'training': {'dataloader_num_workers': 0, 'effective_batch_size': 1024, 'evaluation_strategy': 'no', 'fp16': True, 'hub_model_id': 'musing_hoover', 'hub_strategy': 'all_checkpoints', 'learning_rate': 0.0005, 'logging_first_step': True, 'logging_steps': 1, 'num_tokens': 3300000000, 'output_dir': 'training_output104340', 'per_device_train_batch_size': 16, 'push_to_hub': True, 'remove_unused_columns': False, 'save_steps': 1673, 'save_strategy': 'steps', 'seed': 42, 'warmup_ratio': 0.01, 'weight_decay': 0.1}} # Wandb URL: https://wandb.ai/tomekkorbak/apo/runs/1nm8napp
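The card above ends with the raw training configuration and gives no inference example. As a hedged sketch (not from the original card): the checkpoint is a GPT-2-sized causal LM, so it can likely be sampled with the standard `transformers` text-generation pipeline; whether the value-head weights mentioned in the config are simply ignored on load is an assumption, and the prompt below is made up.

```python
from transformers import pipeline

# Hedged sketch: load the hub checkpoint as a plain causal LM. The value-head
# weights mentioned in the config above are assumed to be ignored on load.
generator = pipeline("text-generation", model="tomekkorbak/musing_hoover")

# Decoding settings mirror the card's generation config (temperature 0.7, top_p 0.9).
out = generator(
    "The city council met on Tuesday to",  # made-up prompt
    max_length=64,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(out[0]["generated_text"])
```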
fanzru/t5-small-finetuned-xsum-xlsum
fanzru
2022-12-06T09:28:20Z
11
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:scientific_papers", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-12-05T11:55:31Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - scientific_papers metrics: - rouge model-index: - name: t5-small-finetuned-xsum-xlsum results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: scientific_papers type: scientific_papers config: pubmed split: train args: pubmed metrics: - name: Rouge1 type: rouge value: 14.3541 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-xsum-xlsum This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the scientific_papers dataset. It achieves the following results on the evaluation set: - Loss: 1.9963 - Rouge1: 14.3541 - Rouge2: 6.1674 - Rougel: 12.2975 - Rougelsum: 13.2515 - Gen Len: 19.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | 2.3055 | 1.0 | 7496 | 2.0773 | 14.3312 | 6.153 | 12.2551 | 13.2033 | 19.0 | | 2.2512 | 2.0 | 14992 | 2.0330 | 14.3048 | 6.1346 | 12.2343 | 13.1992 | 19.0 | | 2.2034 | 3.0 | 22488 | 2.0106 | 14.3866 | 6.1752 | 12.3205 | 13.2743 | 19.0 | | 2.2054 | 4.0 | 29984 | 2.0004 | 14.3629 | 6.167 | 12.2928 | 13.2506 | 19.0 | | 2.1944 | 5.0 | 37480 | 1.9963 | 14.3541 | 6.1674 | 12.2975 | 13.2515 | 19.0 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.13.0+cu117 - Datasets 2.7.1 - Tokenizers 0.13.2
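The card stops at the framework versions and shows no inference code. A minimal usage sketch, assuming the standard `transformers` summarization pipeline and a made-up input abstract:

```python
from transformers import pipeline

# Minimal sketch: load the fine-tuned T5 checkpoint with the generic summarization pipeline.
summarizer = pipeline("summarization", model="fanzru/t5-small-finetuned-xsum-xlsum")

# Made-up input text; the model was fine-tuned on scientific_papers (pubmed) data.
article = (
    "Background: we evaluated a new treatment protocol for post-operative recovery "
    "across three hospitals over a two-year period, measuring length of stay and "
    "readmission rates against a matched control cohort..."
)
# max_length/min_length bound the generated summary length in tokens.
print(summarizer(article, max_length=60, min_length=10)[0]["summary_text"])
```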
ShadoWxShinigamI/SD2-Vray-Style
ShadoWxShinigamI
2022-12-06T09:24:19Z
0
4
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2022-12-06T09:18:10Z
--- license: creativeml-openrail-m --- ## Textual Inversion Embed For SD 2.0 By ShadoWxShinigamI This embed attempts to emulate the style and lighting of the V-Ray renderer. It has been trained for a total of 1000 steps based on 44 of my personal renders. Model used for training:- SD 2.0 (512 Base). [Works well with the 768 Model] This embed mixes well with other 2.0 embeds. Mix and have fun! Examples:- ![house exterior-2.png](https://s3.amazonaws.com/moonup/production/uploads/1670318459704-633a520aecbd8b19357b4806.png) ![batman.png](https://s3.amazonaws.com/moonup/production/uploads/1670318471630-633a520aecbd8b19357b4806.png) ![car.png](https://s3.amazonaws.com/moonup/production/uploads/1670318481599-633a520aecbd8b19357b4806.png) ![tiger landscape.png](https://s3.amazonaws.com/moonup/production/uploads/1670318498394-633a520aecbd8b19357b4806.png) ![car-2.png](https://s3.amazonaws.com/moonup/production/uploads/1670318513965-633a520aecbd8b19357b4806.png)
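The card shows sample renders but no loading code. A hedged sketch for `diffusers` users: recent releases can load textual-inversion embeddings directly, but the trigger token below is a placeholder (check the repository files for the actual embedding name), and compatibility of this particular embed with `load_textual_inversion` is an assumption.

```python
import torch
from diffusers import StableDiffusionPipeline

# Hedged sketch: base SD 2.0 (512) pipeline, which the embed was trained against.
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-base", torch_dtype=torch.float16
).to("cuda")

# "vray-style" is a placeholder trigger token; the real file/token name in the
# repo may differ, so check the repository's files before relying on it.
pipe.load_textual_inversion("ShadoWxShinigamI/SD2-Vray-Style", token="vray-style")

image = pipe("house exterior at golden hour, vray-style").images[0]
image.save("vray_style_example.png")
```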
fathyshalab/all-roberta-large-v1-meta-16-16-5-oos
fathyshalab
2022-12-06T08:47:06Z
3
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-12-06T08:22:21Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: all-roberta-large-v1-meta-16-16-5-oos results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # all-roberta-large-v1-meta-16-16-5-oos This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.4797 - Accuracy: 0.28 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.7721 | 1.0 | 1 | 2.6529 | 0.1889 | | 2.2569 | 2.0 | 2 | 2.5866 | 0.2333 | | 1.9837 | 3.0 | 3 | 2.5340 | 0.2644 | | 1.6425 | 4.0 | 4 | 2.4980 | 0.2756 | | 1.4612 | 5.0 | 5 | 2.4797 | 0.28 | ### Framework versions - Transformers 4.20.0 - Pytorch 1.11.0+cu102 - Datasets 2.3.2 - Tokenizers 0.12.1
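The card reports only training metrics. A minimal inference sketch, assuming the usual `transformers` text-classification pipeline; the returned label names are whatever intent ids the (unspecified) fine-tuning dataset used, and the example sentence is made up.

```python
from transformers import pipeline

# Minimal sketch: run the fine-tuned RoBERTa classifier on a made-up utterance.
classifier = pipeline(
    "text-classification",
    model="fathyshalab/all-roberta-large-v1-meta-16-16-5-oos",
)
# Labels come from the (unknown) intent-classification dataset used for fine-tuning.
print(classifier("How do I reset my password?"))
```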
AlanB/clip_guided_stable_diffusion_mod
AlanB
2022-12-06T08:35:13Z
0
2
null
[ "license:openrail", "region:us" ]
null
2022-11-27T20:37:48Z
--- license: openrail --- Modified version of the diffusers CLIP-Guided Community Pipeline. Fixed incompatibility with Stable Diffusion v2 and eliminated the Safety warning. Made to go with my [Stable Diffusion Deluxe](https://colab.research.google.com/github/Skquark/AI-Friends/blob/main/Stable_Diffusion_Deluxe.ipynb) Notebook.
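A hedged loading sketch, modeled on the upstream CLIP-guided community pipeline that this repo modifies: the keyword arguments (`clip_model`, `feature_extractor`, `clip_guidance_scale`) come from that upstream pipeline and may differ in this modified copy, and the CLIP checkpoint and prompt are arbitrary choices.

```python
import torch
from diffusers import DiffusionPipeline
from transformers import CLIPFeatureExtractor, CLIPModel

# Hedged sketch based on the upstream CLIP-guided community pipeline; the exact
# arguments accepted by this modified copy may differ.
clip_model = CLIPModel.from_pretrained("laion/CLIP-ViT-B-32-laion2B-s34B-b79K", torch_dtype=torch.float16)
feature_extractor = CLIPFeatureExtractor.from_pretrained("laion/CLIP-ViT-B-32-laion2B-s34B-b79K")

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-base",
    custom_pipeline="AlanB/clip_guided_stable_diffusion_mod",  # pulls pipeline.py from this repo
    clip_model=clip_model,
    feature_extractor=feature_extractor,
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("a misty forest at dawn", clip_guidance_scale=100, num_inference_steps=50).images[0]
image.save("clip_guided_example.png")
```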
lmqg/t5-base-squad-ae
lmqg
2022-12-06T08:33:54Z
40
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "answer extraction", "en", "dataset:lmqg/qg_squad", "arxiv:2210.03992", "license:cc-by-4.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-12-06T08:32:32Z
--- license: cc-by-4.0 metrics: - bleu4 - meteor - rouge-l - bertscore - moverscore language: en datasets: - lmqg/qg_squad pipeline_tag: text2text-generation tags: - answer extraction widget: - text: "extract answers: <hl> Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records. <hl> Her performance in the film received praise from critics, and she garnered several nominations for her portrayal of James, including a Satellite Award nomination for Best Supporting Actress, and a NAACP Image Award nomination for Outstanding Supporting Actress." example_title: "Answering Extraction Example 1" - text: "extract answers: Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records. <hl> Her performance in the film received praise from critics, and she garnered several nominations for her portrayal of James, including a Satellite Award nomination for Best Supporting Actress, and a NAACP Image Award nomination for Outstanding Supporting Actress. <hl>" example_title: "Answering Extraction Example 2" model-index: - name: lmqg/t5-base-squad-ae results: - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qg_squad type: default args: default metrics: - name: BLEU4 (Answer Extraction) type: bleu4_answer_extraction value: 54.28 - name: ROUGE-L (Answer Extraction) type: rouge_l_answer_extraction value: 69.72 - name: METEOR (Answer Extraction) type: meteor_answer_extraction value: 43.62 - name: BERTScore (Answer Extraction) type: bertscore_answer_extraction value: 91.87 - name: MoverScore (Answer Extraction) type: moverscore_answer_extraction value: 82.69 - name: AnswerF1Score (Answer Extraction) type: answer_f1_score__answer_extraction value: 70.32 - name: AnswerExactMatch (Answer Extraction) type: answer_exact_match_answer_extraction value: 59.48 --- # Model Card of `lmqg/t5-base-squad-ae` This model is fine-tuned version of [t5-base](https://huggingface.co/t5-base) for answer extraction on the [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation). ### Overview - **Language model:** [t5-base](https://huggingface.co/t5-base) - **Language:** en - **Training data:** [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (default) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992) ### Usage - With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-) ```python from lmqg import TransformersQG # initialize model model = TransformersQG(language="en", model="lmqg/t5-base-squad-ae") # model prediction answers = model.generate_a("William Turner was an English painter who specialised in watercolour landscapes") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "lmqg/t5-base-squad-ae") output = pipe("extract answers: <hl> Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records. 
<hl> Her performance in the film received praise from critics, and she garnered several nominations for her portrayal of James, including a Satellite Award nomination for Best Supporting Actress, and a NAACP Image Award nomination for Outstanding Supporting Actress.") ``` ## Evaluation - ***Metric (Answer Extraction)***: [raw metric file](https://huggingface.co/lmqg/t5-base-squad-ae/raw/main/eval/metric.first.answer.paragraph_sentence.answer.lmqg_qg_squad.default.json) | | Score | Type | Dataset | |:-----------------|--------:|:--------|:---------------------------------------------------------------| | AnswerExactMatch | 59.48 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | AnswerF1Score | 70.32 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | BERTScore | 91.87 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_1 | 64.27 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_2 | 60.78 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_3 | 57.35 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_4 | 54.28 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | METEOR | 43.62 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | MoverScore | 82.69 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | ROUGE_L | 69.72 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | ## Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_squad - dataset_name: default - input_types: ['paragraph_sentence'] - output_types: ['answer'] - prefix_types: ['ae'] - model: t5-base - max_length: 512 - max_length_output: 32 - epoch: 8 - batch: 16 - lr: 0.0001 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 4 - label_smoothing: 0.0 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/t5-base-squad-ae/raw/main/trainer_config.json). ## Citation ``` @inproceedings{ushio-etal-2022-generative, title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```
fathyshalab/all-roberta-large-v1-meta-8-16-5-oos
fathyshalab
2022-12-06T08:22:01Z
3
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-12-06T07:55:47Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: all-roberta-large-v1-meta-8-16-5-oos results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # all-roberta-large-v1-meta-8-16-5-oos This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.4797 - Accuracy: 0.28 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.7721 | 1.0 | 1 | 2.6529 | 0.1889 | | 2.2569 | 2.0 | 2 | 2.5866 | 0.2333 | | 1.9837 | 3.0 | 3 | 2.5340 | 0.2644 | | 1.6425 | 4.0 | 4 | 2.4980 | 0.2756 | | 1.4612 | 5.0 | 5 | 2.4797 | 0.28 | ### Framework versions - Transformers 4.20.0 - Pytorch 1.11.0+cu102 - Datasets 2.3.2 - Tokenizers 0.12.1
betbhai9/Betbhai9
betbhai9
2022-12-06T08:11:04Z
0
0
null
[ "region:us" ]
null
2022-12-06T08:09:43Z
With a Betbhai9 ID you can earn online, but for that you have to play online, and to play you have to pay. For that you will need an ID. We provide [Betbhai9](https://betbhai9.app) IDs to our users. Visit our website or reach out through WhatsApp to get a Betbhai9 ID.
fathyshalab/all-roberta-large-v1-meta-4-16-5-oos
fathyshalab
2022-12-06T07:55:27Z
4
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-12-06T07:31:16Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: all-roberta-large-v1-meta-4-16-5-oos results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # all-roberta-large-v1-meta-4-16-5-oos This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.4797 - Accuracy: 0.28 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.7721 | 1.0 | 1 | 2.6529 | 0.1889 | | 2.2569 | 2.0 | 2 | 2.5866 | 0.2333 | | 1.9837 | 3.0 | 3 | 2.5340 | 0.2644 | | 1.6425 | 4.0 | 4 | 2.4980 | 0.2756 | | 1.4612 | 5.0 | 5 | 2.4797 | 0.28 | ### Framework versions - Transformers 4.20.0 - Pytorch 1.11.0+cu102 - Datasets 2.3.2 - Tokenizers 0.12.1
HaojiePan/wav2vec2-base-ft-keyword-spotting
HaojiePan
2022-12-06T07:33:58Z
10
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "audio-classification", "generated_from_trainer", "dataset:superb", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2022-12-06T07:13:37Z
--- license: apache-2.0 tags: - audio-classification - generated_from_trainer datasets: - superb metrics: - accuracy model-index: - name: wav2vec2-base-ft-keyword-spotting results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-ft-keyword-spotting This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset. It achieves the following results on the evaluation set: - Loss: 0.0795 - Accuracy: 0.9829 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 0 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.5546 | 1.0 | 399 | 0.4250 | 0.9618 | | 0.2128 | 2.0 | 798 | 0.1331 | 0.9781 | | 0.1763 | 3.0 | 1197 | 0.0935 | 0.9807 | | 0.1485 | 4.0 | 1596 | 0.0852 | 0.9828 | | 0.1335 | 5.0 | 1995 | 0.0795 | 0.9829 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.10.0+cu111 - Datasets 2.7.1 - Tokenizers 0.13.2
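The card lists metrics but no inference snippet. A minimal sketch using the audio-classification pipeline; the audio path is a placeholder for a short 16 kHz recording.

```python
from transformers import pipeline

# Minimal sketch: classify a short 16 kHz clip into the Speech Commands keywords
# covered by the superb keyword-spotting ("ks") task.
classifier = pipeline(
    "audio-classification",
    model="HaojiePan/wav2vec2-base-ft-keyword-spotting",
)
# "sample.wav" is a placeholder path to a local mono 16 kHz recording.
print(classifier("sample.wav", top_k=3))
```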
fathyshalab/all-roberta-large-v1-meta-2-16-5-oos
fathyshalab
2022-12-06T07:30:57Z
4
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-12-06T07:07:07Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: all-roberta-large-v1-meta-2-16-5-oos results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # all-roberta-large-v1-meta-2-16-5-oos This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.4797 - Accuracy: 0.28 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.7721 | 1.0 | 1 | 2.6529 | 0.1889 | | 2.2569 | 2.0 | 2 | 2.5866 | 0.2333 | | 1.9837 | 3.0 | 3 | 2.5340 | 0.2644 | | 1.6425 | 4.0 | 4 | 2.4980 | 0.2756 | | 1.4612 | 5.0 | 5 | 2.4797 | 0.28 | ### Framework versions - Transformers 4.20.0 - Pytorch 1.11.0+cu102 - Datasets 2.3.2 - Tokenizers 0.12.1
AlekseyKorshuk/6.7b-ri-reproduce-combined-4-gpu-20-val-v3
AlekseyKorshuk
2022-12-06T07:07:47Z
5
0
transformers
[ "transformers", "pytorch", "opt", "text-generation", "generated_from_trainer", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-12-05T11:54:11Z
--- license: other tags: - generated_from_trainer metrics: - accuracy model-index: - name: 6.7b-ri-reproduce-combined-4-gpu-20-val-v3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 6.7b-ri-reproduce-combined-4-gpu-20-val-v3 This model is a fine-tuned version of [facebook/opt-6.7b](https://huggingface.co/facebook/opt-6.7b) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.9434 - Accuracy: 0.0329 - Perplexity: 51.5916 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 9e-07 - train_batch_size: 1 - eval_batch_size: 8 - seed: 100 - distributed_type: multi-GPU - num_devices: 4 - total_train_batch_size: 4 - total_eval_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Perplexity | |:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:| | 2.5731 | 1.0 | 79 | 2.6113 | 0.0317 | 13.6171 | | 2.206 | 2.0 | 158 | 2.4805 | 0.0328 | 11.9469 | | 1.9105 | 3.0 | 237 | 2.4512 | 0.0333 | 11.6019 | | 1.6301 | 4.0 | 316 | 2.5078 | 0.0345 | 12.2780 | | 1.3733 | 5.0 | 395 | 2.6816 | 0.0342 | 14.6090 | | 1.1337 | 6.0 | 474 | 3.0078 | 0.0330 | 20.2431 | | 0.9619 | 7.0 | 553 | 3.1777 | 0.0330 | 23.9923 | | 0.798 | 8.0 | 632 | 3.2559 | 0.0330 | 25.9419 | | 0.6653 | 9.0 | 711 | 3.4277 | 0.0331 | 30.8068 | | 0.552 | 10.0 | 790 | 3.5566 | 0.0333 | 35.0453 | | 0.4568 | 11.0 | 869 | 3.7324 | 0.0324 | 41.7802 | | 0.3756 | 12.0 | 948 | 3.8184 | 0.0328 | 45.5295 | | 0.3119 | 13.0 | 1027 | 3.8477 | 0.0331 | 46.8831 | | 0.2448 | 14.0 | 1106 | 3.9062 | 0.0329 | 49.7122 | | 0.1986 | 15.0 | 1185 | 3.9434 | 0.0329 | 51.5916 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.12.1+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
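No usage section is provided. A hedged sketch: the checkpoint is an OPT-6.7B causal LM, so standard text generation should work, though the float16 weights alone need roughly 13 GB of GPU memory; the prompt is made up.

```python
import torch
from transformers import pipeline

# Hedged sketch: OPT-6.7B in float16 needs roughly 13 GB of GPU memory for the
# weights alone, so this assumes a suitably large GPU at device 0.
generator = pipeline(
    "text-generation",
    model="AlekseyKorshuk/6.7b-ri-reproduce-combined-4-gpu-20-val-v3",
    torch_dtype=torch.float16,
    device=0,
)
print(generator("Hello, how are you feeling today?", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```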
huggingtweets/emilyhxrrera-floguo-lucy_guo-saraduit-shrawberryy
huggingtweets
2022-12-06T06:51:32Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-12-06T06:51:23Z
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1596198683179159557/-l7jFkeQ_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1472319181097824256/hY5RmhQs_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1571251248342650882/6YDG9PGc_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">floguo & em herrera is in NY 🌃 & Shravani🍓 & Sara Du & Lucy Guo (Hiring Engineers & Designers)</div> <div style="text-align: center; font-size: 14px;">@emilyhxrrera-floguo-lucy_guo-saraduit-shrawberryy</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from floguo & em herrera is in NY 🌃 & Shravani🍓 & Sara Du & Lucy Guo (Hiring Engineers & Designers). | Data | floguo | em herrera is in NY 🌃 | Shravani🍓 | Sara Du | Lucy Guo (Hiring Engineers & Designers) | | --- | --- | --- | --- | --- | --- | | Tweets downloaded | 3193 | 3234 | 1049 | 1635 | 3239 | | Retweets | 662 | 488 | 92 | 17 | 68 | | Short tweets | 423 | 829 | 328 | 287 | 275 | | Tweets kept | 2108 | 1917 | 629 | 1331 | 2896 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3kqf9fmj/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @emilyhxrrera-floguo-lucy_guo-saraduit-shrawberryy's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1h2quh2b) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1h2quh2b/artifacts) is logged and versioned. 
## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/emilyhxrrera-floguo-lucy_guo-saraduit-shrawberryy') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
m-aliabbas/idrak_wav2vec_tr
m-aliabbas
2022-12-06T06:38:53Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-12-06T05:57:48Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: idrak_wav2vec_tr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # idrak_wav2vec_tr This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
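The card gives hyperparameters but no transcription example. A minimal sketch with the ASR pipeline; the audio path is a placeholder, and the target language is not stated on the card beyond "common_voice".

```python
from transformers import pipeline

# Minimal sketch: transcribe a local 16 kHz recording with the fine-tuned
# XLS-R checkpoint. "clip.wav" is a placeholder path.
asr = pipeline(
    "automatic-speech-recognition",
    model="m-aliabbas/idrak_wav2vec_tr",
)
print(asr("clip.wav")["text"])
```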
fathyshalab/all-roberta-large-v1-small_talk-8-16-5-oos
fathyshalab
2022-12-06T06:18:14Z
4
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-12-06T05:54:47Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: all-roberta-large-v1-small_talk-8-16-5-oos results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # all-roberta-large-v1-small_talk-8-16-5-oos This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.3566 - Accuracy: 0.3855 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.7259 | 1.0 | 1 | 2.5917 | 0.2551 | | 2.217 | 2.0 | 2 | 2.5059 | 0.3275 | | 1.7237 | 3.0 | 3 | 2.4355 | 0.3768 | | 1.4001 | 4.0 | 4 | 2.3837 | 0.3739 | | 1.1937 | 5.0 | 5 | 2.3566 | 0.3855 | ### Framework versions - Transformers 4.20.0 - Pytorch 1.11.0+cu102 - Datasets 2.3.2 - Tokenizers 0.12.1
Nhat1904/32-shot-twitter
Nhat1904
2022-12-06T06:17:21Z
1
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-12-06T06:17:07Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 384 with parameters: ``` {'batch_size': 10, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 384, "warmup_steps": 39, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
StonyBrookNLP/teabreac-preasm-large-iirc-retrieved
StonyBrookNLP
2022-12-06T06:05:43Z
5
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "question-answering, multi-step-reasoning, multi-hop-reasoning", "arxiv:2205.12496", "license:cc-by-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-12-03T23:11:18Z
--- tags: - question-answering, multi-step-reasoning, multi-hop-reasoning thumbnail: https://raw.githubusercontent.com/StonyBrookNLP/teabreac/main/teabreac_icon.png license: cc-by-4.0 --- # What's this? This is one of the models reported in the paper: ["Teaching Broad Reasoning Skills for Multi-Step QA by Generating Hard Contexts".](https://arxiv.org/abs/2205.12496). This paper proposes a procedure to synthetically generate a QA dataset, TeaBReaC, for pretraining language models for robust multi-step reasoning. Pretraining plain LMs like Bart, T5 and numerate LMs like NT5, PReasM, POET on TeaBReaC leads to improved downstream performance on several multi-step QA datasets. Please check out the paper for the details. We release the following models: - **A:** Base Models finetuned on target datasets: `{base_model}-{target_dataset}` - **B:** Base models pretrained on TeaBReaC: `teabreac-{base_model}` - **C:** Base models pretrained on TeaBReaC and then finetuned on target datasets: `teabreac-{base_model}-{target_dataset}` The `base_model` above can be from: `bart-large`, `t5-large`, `t5-3b`, `nt5-small`, `preasm-large`. The `target_dataset` above can be from: `drop`, `tatqa`, `iirc-gold`, `iirc-retrieved`, `numglue`. The **A** models are only released for completeness / reproducibility. In your end application you probably just want to use either **B** or **C**. # How to use it? Please check out the details in our [github repository](https://github.com/stonybrooknlp/teabreac), but in a nutshell: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM from digit_tokenization import enable_digit_tokenization # digit_tokenization.py from https://github.com/stonybrooknlp/teabreac model_name = "StonyBrookNLP/teabreac-preasm-large-iirc-retrieved" tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False) # Fast doesn't work with digit tokenization model = AutoModelForSeq2SeqLM.from_pretrained(model_name) enable_digit_tokenization(tokenizer) input_texts = [ "Who scored the first touchdown of the game?\n" + "... Oakland would get the early lead in the first quarter as quarterback JaMarcus Russell completed a 20-yard touchdown pass to rookie wide receiver Chaz Schilens..." # Note: some models have slightly different qn/ctxt format. See the github repo. ] input_ids = tokenizer( input_texts, return_tensors="pt", truncation=True, max_length=800, add_special_tokens=True, padding=True, )["input_ids"] generated_ids = model.generate(input_ids, min_length=1, max_length=50) generated_predictions = tokenizer.batch_decode(generated_ids, skip_special_tokens=False) generated_predictions = [ tokenizer.fix_decoded_text(generated_prediction) for generated_prediction in generated_predictions ] # => ["Chaz Schilens"] ```
StonyBrookNLP/teabreac-preasm-large-iirc-gold
StonyBrookNLP
2022-12-06T06:03:50Z
5
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "question-answering, multi-step-reasoning, multi-hop-reasoning", "arxiv:2205.12496", "license:cc-by-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-12-03T23:09:27Z
--- tags: - question-answering, multi-step-reasoning, multi-hop-reasoning thumbnail: https://raw.githubusercontent.com/StonyBrookNLP/teabreac/main/teabreac_icon.png license: cc-by-4.0 --- # What's this? This is one of the models reported in the paper: ["Teaching Broad Reasoning Skills for Multi-Step QA by Generating Hard Contexts".](https://arxiv.org/abs/2205.12496). This paper proposes a procedure to synthetically generate a QA dataset, TeaBReaC, for pretraining language models for robust multi-step reasoning. Pretraining plain LMs like Bart, T5 and numerate LMs like NT5, PReasM, POET on TeaBReaC leads to improved downstream performance on several multi-step QA datasets. Please check out the paper for the details. We release the following models: - **A:** Base Models finetuned on target datasets: `{base_model}-{target_dataset}` - **B:** Base models pretrained on TeaBReaC: `teabreac-{base_model}` - **C:** Base models pretrained on TeaBReaC and then finetuned on target datasets: `teabreac-{base_model}-{target_dataset}` The `base_model` above can be from: `bart-large`, `t5-large`, `t5-3b`, `nt5-small`, `preasm-large`. The `target_dataset` above can be from: `drop`, `tatqa`, `iirc-gold`, `iirc-retrieved`, `numglue`. The **A** models are only released for completeness / reproducibility. In your end application you probably just want to use either **B** or **C**. # How to use it? Please check out the details in our [github repository](https://github.com/stonybrooknlp/teabreac), but in a nutshell: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM from digit_tokenization import enable_digit_tokenization # digit_tokenization.py from https://github.com/stonybrooknlp/teabreac model_name = "StonyBrookNLP/teabreac-preasm-large-iirc-gold" tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False) # Fast doesn't work with digit tokenization model = AutoModelForSeq2SeqLM.from_pretrained(model_name) enable_digit_tokenization(tokenizer) input_texts = [ "Who scored the first touchdown of the game?\n" + "... Oakland would get the early lead in the first quarter as quarterback JaMarcus Russell completed a 20-yard touchdown pass to rookie wide receiver Chaz Schilens..." # Note: some models have slightly different qn/ctxt format. See the github repo. ] input_ids = tokenizer( input_texts, return_tensors="pt", truncation=True, max_length=800, add_special_tokens=True, padding=True, )["input_ids"] generated_ids = model.generate(input_ids, min_length=1, max_length=50) generated_predictions = tokenizer.batch_decode(generated_ids, skip_special_tokens=False) generated_predictions = [ tokenizer.fix_decoded_text(generated_prediction) for generated_prediction in generated_predictions ] # => ["Chaz Schilens"] ```
StonyBrookNLP/teabreac-preasm-large-drop
StonyBrookNLP
2022-12-06T06:02:04Z
5
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "question-answering, multi-step-reasoning, multi-hop-reasoning", "arxiv:2205.12496", "license:cc-by-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-12-03T23:07:27Z
--- tags: - question-answering, multi-step-reasoning, multi-hop-reasoning thumbnail: https://raw.githubusercontent.com/StonyBrookNLP/teabreac/main/teabreac_icon.png license: cc-by-4.0 --- # What's this? This is one of the models reported in the paper: ["Teaching Broad Reasoning Skills for Multi-Step QA by Generating Hard Contexts".](https://arxiv.org/abs/2205.12496). This paper proposes a procedure to synthetically generate a QA dataset, TeaBReaC, for pretraining language models for robust multi-step reasoning. Pretraining plain LMs like Bart, T5 and numerate LMs like NT5, PReasM, POET on TeaBReaC leads to improvemed downstream performance on several multi-step QA datasets. Please checkout out the paper for the details. We release the following models: - **A:** Base Models finetuned on target datasets: `{base_model}-{target_dataset}` - **B:** Base models pretrained on TeaBReaC: `teabreac-{base_model}` - **C:** Base models pretrained on TeaBReaC and then finetuned on target datasets: `teabreac-{base_model}-{target_dataset}` The `base_model` above can be from: `bart-large`, `t5-large`, `t5-3b`, `nt5-small`, `preasm-large`. The `target_dataset` above can be from: `drop`, `tatqa`, `iirc-gold`, `iirc-retrieved`, `numglue`. The **A** models are only released for completeness / reproducibility. In your end application you probably just want to use either **B** or **C**. # How to use it? Please checkout the details in our [github repository](https://github.com/stonybrooknlp/teabreac), but in a nutshell: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM from digit_tokenization import enable_digit_tokenization # digit_tokenization.py from https://github.com/stonybrooknlp/teabreac model_name = "StonyBrookNLP/teabreac-preasm-large-drop" tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False) # Fast doesn't work with digit tokenization model = AutoModelForSeq2SeqLM.from_pretrained(model_name) enable_digit_tokenization(tokenizer) input_texts = [ "Who scored the first touchdown of the game?\n" + "... Oakland would get the early lead in the first quarter as quarterback JaMarcus Russell completed a 20-yard touchdown pass to rookie wide receiver Chaz Schilens..." # Note: some models have slightly different qn/ctxt format. See the github repo. ] input_ids = tokenizer( input_texts, return_tensors="pt", truncation=True, max_length=800, add_special_tokens=True, padding=True, )["input_ids"] generated_ids = model.generate(input_ids, min_length=1, max_length=50) generated_predictions = tokenizer.batch_decode(generated_ids, skip_special_tokens=False) generated_predictions = [ tokenizer.fix_decoded_text(generated_prediction) for generated_prediction in generated_predictions ] # => ["Chaz Schilens"] ```
StonyBrookNLP/teabreac-preasm-large
StonyBrookNLP
2022-12-06T05:59:46Z
5
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "question-answering, multi-step-reasoning, multi-hop-reasoning", "arxiv:2205.12496", "license:cc-by-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-12-03T23:05:35Z
--- tags: - question-answering, multi-step-reasoning, multi-hop-reasoning thumbnail: https://raw.githubusercontent.com/StonyBrookNLP/teabreac/main/teabreac_icon.png license: cc-by-4.0 --- # What's this? This is one of the models reported in the paper: ["Teaching Broad Reasoning Skills for Multi-Step QA by Generating Hard Contexts".](https://arxiv.org/abs/2205.12496). This paper proposes a procedure to synthetically generate a QA dataset, TeaBReaC, for pretraining language models for robust multi-step reasoning. Pretraining plain LMs like Bart, T5 and numerate LMs like NT5, PReasM, POET on TeaBReaC leads to improvemed downstream performance on several multi-step QA datasets. Please checkout out the paper for the details. We release the following models: - **A:** Base Models finetuned on target datasets: `{base_model}-{target_dataset}` - **B:** Base models pretrained on TeaBReaC: `teabreac-{base_model}` - **C:** Base models pretrained on TeaBReaC and then finetuned on target datasets: `teabreac-{base_model}-{target_dataset}` The `base_model` above can be from: `bart-large`, `t5-large`, `t5-3b`, `nt5-small`, `preasm-large`. The `target_dataset` above can be from: `drop`, `tatqa`, `iirc-gold`, `iirc-retrieved`, `numglue`. The **A** models are only released for completeness / reproducibility. In your end application you probably just want to use either **B** or **C**. # How to use it? Please checkout the details in our [github repository](https://github.com/stonybrooknlp/teabreac), but in a nutshell: ```python # NOTE: This model is only pretrained on TeaBReaC, and not on any real QA dataset. from transformers import AutoTokenizer, AutoModelForSeq2SeqLM from digit_tokenization import enable_digit_tokenization # digit_tokenization.py from https://github.com/stonybrooknlp/teabreac model_name = "StonyBrookNLP/teabreac-preasm-large" tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False) # Fast doesn't work with digit tokenization model = AutoModelForSeq2SeqLM.from_pretrained(model_name) enable_digit_tokenization(tokenizer) input_texts = [ "Who scored the first touchdown of the game?\n" + "... Oakland would get the early lead in the first quarter as quarterback JaMarcus Russell completed a 20-yard touchdown pass to rookie wide receiver Chaz Schilens..." # Note: some models have slightly different qn/ctxt format. See the github repo. ] input_ids = tokenizer( input_texts, return_tensors="pt", truncation=True, max_length=800, add_special_tokens=True, padding=True, )["input_ids"] generated_ids = model.generate(input_ids, min_length=1, max_length=50) generated_predictions = tokenizer.batch_decode(generated_ids, skip_special_tokens=False) generated_predictions = [ tokenizer.fix_decoded_text(generated_prediction) for generated_prediction in generated_predictions ] # => ["Chaz Schilens"] ```
StonyBrookNLP/teabreac-nt5-small-tatqa
StonyBrookNLP
2022-12-06T05:58:33Z
4
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "question-answering, multi-step-reasoning, multi-hop-reasoning", "arxiv:2205.12496", "license:cc-by-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-12-03T23:05:16Z
--- tags: - question-answering, multi-step-reasoning, multi-hop-reasoning thumbnail: https://raw.githubusercontent.com/StonyBrookNLP/teabreac/main/teabreac_icon.png license: cc-by-4.0 --- # What's this? This is one of the models reported in the paper: ["Teaching Broad Reasoning Skills for Multi-Step QA by Generating Hard Contexts".](https://arxiv.org/abs/2205.12496). This paper proposes a procedure to synthetically generate a QA dataset, TeaBReaC, for pretraining language models for robust multi-step reasoning. Pretraining plain LMs like Bart, T5 and numerate LMs like NT5, PReasM, POET on TeaBReaC leads to improvemed downstream performance on several multi-step QA datasets. Please checkout out the paper for the details. We release the following models: - **A:** Base Models finetuned on target datasets: `{base_model}-{target_dataset}` - **B:** Base models pretrained on TeaBReaC: `teabreac-{base_model}` - **C:** Base models pretrained on TeaBReaC and then finetuned on target datasets: `teabreac-{base_model}-{target_dataset}` The `base_model` above can be from: `bart-large`, `t5-large`, `t5-3b`, `nt5-small`, `preasm-large`. The `target_dataset` above can be from: `drop`, `tatqa`, `iirc-gold`, `iirc-retrieved`, `numglue`. The **A** models are only released for completeness / reproducibility. In your end application you probably just want to use either **B** or **C**. # How to use it? Please checkout the details in our [github repository](https://github.com/stonybrooknlp/teabreac), but in a nutshell: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM from digit_tokenization import enable_digit_tokenization # digit_tokenization.py from https://github.com/stonybrooknlp/teabreac model_name = "StonyBrookNLP/teabreac-nt5-small-tatqa" tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False) # Fast doesn't work with digit tokenization model = AutoModelForSeq2SeqLM.from_pretrained(model_name) enable_digit_tokenization(tokenizer) input_texts = [ "answer_me: Who scored the first touchdown of the game?" + "context: ... Oakland would get the early lead in the first quarter as quarterback JaMarcus Russell completed a 20-yard touchdown pass to rookie wide receiver Chaz Schilens..." # Note: some models have slightly different qn/ctxt format. See the github repo. ] input_ids = tokenizer( input_texts, return_tensors="pt", truncation=True, max_length=800, add_special_tokens=True, padding=True, )["input_ids"] generated_ids = model.generate(input_ids, min_length=1, max_length=50) generated_predictions = tokenizer.batch_decode(generated_ids, skip_special_tokens=False) generated_predictions = [ tokenizer.fix_decoded_text(generated_prediction) for generated_prediction in generated_predictions ] # => ["Chaz Schilens"] ```
StonyBrookNLP/teabreac-nt5-small-iirc-gold
StonyBrookNLP
2022-12-06T05:57:25Z
4
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "question-answering, multi-step-reasoning, multi-hop-reasoning", "arxiv:2205.12496", "license:cc-by-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-12-03T23:04:20Z
--- tags: - question-answering, multi-step-reasoning, multi-hop-reasoning thumbnail: https://raw.githubusercontent.com/StonyBrookNLP/teabreac/main/teabreac_icon.png license: cc-by-4.0 --- # What's this? This is one of the models reported in the paper: ["Teaching Broad Reasoning Skills for Multi-Step QA by Generating Hard Contexts".](https://arxiv.org/abs/2205.12496). This paper proposes a procedure to synthetically generate a QA dataset, TeaBReaC, for pretraining language models for robust multi-step reasoning. Pretraining plain LMs like Bart, T5 and numerate LMs like NT5, PReasM, POET on TeaBReaC leads to improvemed downstream performance on several multi-step QA datasets. Please checkout out the paper for the details. We release the following models: - **A:** Base Models finetuned on target datasets: `{base_model}-{target_dataset}` - **B:** Base models pretrained on TeaBReaC: `teabreac-{base_model}` - **C:** Base models pretrained on TeaBReaC and then finetuned on target datasets: `teabreac-{base_model}-{target_dataset}` The `base_model` above can be from: `bart-large`, `t5-large`, `t5-3b`, `nt5-small`, `preasm-large`. The `target_dataset` above can be from: `drop`, `tatqa`, `iirc-gold`, `iirc-retrieved`, `numglue`. The **A** models are only released for completeness / reproducibility. In your end application you probably just want to use either **B** or **C**. # How to use it? Please checkout the details in our [github repository](https://github.com/stonybrooknlp/teabreac), but in a nutshell: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM from digit_tokenization import enable_digit_tokenization # digit_tokenization.py from https://github.com/stonybrooknlp/teabreac model_name = "StonyBrookNLP/teabreac-nt5-small-iirc-gold" tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False) # Fast doesn't work with digit tokenization model = AutoModelForSeq2SeqLM.from_pretrained(model_name) enable_digit_tokenization(tokenizer) input_texts = [ "answer_me: Who scored the first touchdown of the game?" + "context: ... Oakland would get the early lead in the first quarter as quarterback JaMarcus Russell completed a 20-yard touchdown pass to rookie wide receiver Chaz Schilens..." # Note: some models have slightly different qn/ctxt format. See the github repo. ] input_ids = tokenizer( input_texts, return_tensors="pt", truncation=True, max_length=800, add_special_tokens=True, padding=True, )["input_ids"] generated_ids = model.generate(input_ids, min_length=1, max_length=50) generated_predictions = tokenizer.batch_decode(generated_ids, skip_special_tokens=False) generated_predictions = [ tokenizer.fix_decoded_text(generated_prediction) for generated_prediction in generated_predictions ] # => ["Chaz Schilens"] ```
StonyBrookNLP/teabreac-t5-3b-tatqa
StonyBrookNLP
2022-12-06T05:53:08Z
3
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "question-answering, multi-step-reasoning, multi-hop-reasoning", "arxiv:2205.12496", "license:cc-by-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-12-03T22:56:36Z
--- tags: - question-answering, multi-step-reasoning, multi-hop-reasoning thumbnail: https://raw.githubusercontent.com/StonyBrookNLP/teabreac/main/teabreac_icon.png license: cc-by-4.0 --- # What's this? This is one of the models reported in the paper: ["Teaching Broad Reasoning Skills for Multi-Step QA by Generating Hard Contexts".](https://arxiv.org/abs/2205.12496). This paper proposes a procedure to synthetically generate a QA dataset, TeaBReaC, for pretraining language models for robust multi-step reasoning. Pretraining plain LMs like Bart, T5 and numerate LMs like NT5, PReasM, POET on TeaBReaC leads to improvemed downstream performance on several multi-step QA datasets. Please checkout out the paper for the details. We release the following models: - **A:** Base Models finetuned on target datasets: `{base_model}-{target_dataset}` - **B:** Base models pretrained on TeaBReaC: `teabreac-{base_model}` - **C:** Base models pretrained on TeaBReaC and then finetuned on target datasets: `teabreac-{base_model}-{target_dataset}` The `base_model` above can be from: `bart-large`, `t5-large`, `t5-3b`, `nt5-small`, `preasm-large`. The `target_dataset` above can be from: `drop`, `tatqa`, `iirc-gold`, `iirc-retrieved`, `numglue`. The **A** models are only released for completeness / reproducibility. In your end application you probably just want to use either **B** or **C**. # How to use it? Please checkout the details in our [github repository](https://github.com/stonybrooknlp/teabreac), but in a nutshell: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM from digit_tokenization import enable_digit_tokenization # digit_tokenization.py from https://github.com/stonybrooknlp/teabreac model_name = "StonyBrookNLP/teabreac-t5-3b-tatqa" tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False) # Fast doesn't work with digit tokenization model = AutoModelForSeq2SeqLM.from_pretrained(model_name) enable_digit_tokenization(tokenizer) input_texts = [ "answer_me: Who scored the first touchdown of the game?" + "context: ... Oakland would get the early lead in the first quarter as quarterback JaMarcus Russell completed a 20-yard touchdown pass to rookie wide receiver Chaz Schilens..." # Note: some models have slightly different qn/ctxt format. See the github repo. ] input_ids = tokenizer( input_texts, return_tensors="pt", truncation=True, max_length=800, add_special_tokens=True, padding=True, )["input_ids"] generated_ids = model.generate(input_ids, min_length=1, max_length=50) generated_predictions = tokenizer.batch_decode(generated_ids, skip_special_tokens=False) generated_predictions = [ tokenizer.fix_decoded_text(generated_prediction) for generated_prediction in generated_predictions ] # => ["Chaz Schilens"] ```
StonyBrookNLP/teabreac-t5-3b-iirc-gold
StonyBrookNLP
2022-12-06T05:33:31Z
4
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "question-answering, multi-step-reasoning, multi-hop-reasoning", "arxiv:2205.12496", "license:cc-by-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-12-03T22:36:44Z
--- tags: - question-answering, multi-step-reasoning, multi-hop-reasoning thumbnail: https://raw.githubusercontent.com/StonyBrookNLP/teabreac/main/teabreac_icon.png license: cc-by-4.0 --- # What's this? This is one of the models reported in the paper: ["Teaching Broad Reasoning Skills for Multi-Step QA by Generating Hard Contexts".](https://arxiv.org/abs/2205.12496). This paper proposes a procedure to synthetically generate a QA dataset, TeaBReaC, for pretraining language models for robust multi-step reasoning. Pretraining plain LMs like Bart, T5 and numerate LMs like NT5, PReasM, POET on TeaBReaC leads to improvemed downstream performance on several multi-step QA datasets. Please checkout out the paper for the details. We release the following models: - **A:** Base Models finetuned on target datasets: `{base_model}-{target_dataset}` - **B:** Base models pretrained on TeaBReaC: `teabreac-{base_model}` - **C:** Base models pretrained on TeaBReaC and then finetuned on target datasets: `teabreac-{base_model}-{target_dataset}` The `base_model` above can be from: `bart-large`, `t5-large`, `t5-3b`, `nt5-small`, `preasm-large`. The `target_dataset` above can be from: `drop`, `tatqa`, `iirc-gold`, `iirc-retrieved`, `numglue`. The **A** models are only released for completeness / reproducibility. In your end application you probably just want to use either **B** or **C**. # How to use it? Please checkout the details in our [github repository](https://github.com/stonybrooknlp/teabreac), but in a nutshell: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM from digit_tokenization import enable_digit_tokenization # digit_tokenization.py from https://github.com/stonybrooknlp/teabreac model_name = "StonyBrookNLP/teabreac-t5-3b-iirc-gold" tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False) # Fast doesn't work with digit tokenization model = AutoModelForSeq2SeqLM.from_pretrained(model_name) enable_digit_tokenization(tokenizer) input_texts = [ "answer_me: Who scored the first touchdown of the game?" + "context: ... Oakland would get the early lead in the first quarter as quarterback JaMarcus Russell completed a 20-yard touchdown pass to rookie wide receiver Chaz Schilens..." # Note: some models have slightly different qn/ctxt format. See the github repo. ] input_ids = tokenizer( input_texts, return_tensors="pt", truncation=True, max_length=800, add_special_tokens=True, padding=True, )["input_ids"] generated_ids = model.generate(input_ids, min_length=1, max_length=50) generated_predictions = tokenizer.batch_decode(generated_ids, skip_special_tokens=False) generated_predictions = [ tokenizer.fix_decoded_text(generated_prediction) for generated_prediction in generated_predictions ] # => ["Chaz Schilens"] ```
dung1308/RM_system_not_mixed__NLP_model_80_20_CPU
dung1308
2022-12-06T05:31:31Z
3
0
transformers
[ "transformers", "tf", "roberta", "fill-mask", "generated_from_keras_callback", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-12-05T16:45:43Z
--- tags: - generated_from_keras_callback model-index: - name: dung1308/RM_system_not_mixed__NLP_model_80_20_CPU results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # dung1308/RM_system_not_mixed__NLP_model_80_20_CPU This model is a fine-tuned version of [vinai/phobert-base](https://huggingface.co/vinai/phobert-base) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 4.2681 - Validation Loss: 4.2461 - Epoch: 3 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -356, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 5.1497 | 4.3969 | 0 | | 4.3110 | 4.2424 | 1 | | 4.2373 | 4.2722 | 2 | | 4.2681 | 4.2461 | 3 | ### Framework versions - Transformers 4.18.0 - TensorFlow 2.8.0 - Datasets 2.7.0 - Tokenizers 0.11.0
fathyshalab/all-roberta-large-v1-small_talk-2-16-5-oos
fathyshalab
2022-12-06T05:30:14Z
3
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-12-06T05:06:55Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: all-roberta-large-v1-small_talk-2-16-5-oos results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # all-roberta-large-v1-small_talk-2-16-5-oos This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.3566 - Accuracy: 0.3855 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.7259 | 1.0 | 1 | 2.5917 | 0.2551 | | 2.217 | 2.0 | 2 | 2.5059 | 0.3275 | | 1.7237 | 3.0 | 3 | 2.4355 | 0.3768 | | 1.4001 | 4.0 | 4 | 2.3837 | 0.3739 | | 1.1937 | 5.0 | 5 | 2.3566 | 0.3855 | ### Framework versions - Transformers 4.20.0 - Pytorch 1.11.0+cu102 - Datasets 2.3.2 - Tokenizers 0.12.1
StonyBrookNLP/teabreac-t5-3b-drop
StonyBrookNLP
2022-12-06T05:27:19Z
3
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "question-answering, multi-step-reasoning, multi-hop-reasoning", "arxiv:2205.12496", "license:cc-by-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-12-03T07:38:36Z
--- tags: - question-answering, multi-step-reasoning, multi-hop-reasoning thumbnail: https://raw.githubusercontent.com/StonyBrookNLP/teabreac/main/teabreac_icon.png license: cc-by-4.0 --- # What's this? This is one of the models reported in the paper: ["Teaching Broad Reasoning Skills for Multi-Step QA by Generating Hard Contexts".](https://arxiv.org/abs/2205.12496). This paper proposes a procedure to synthetically generate a QA dataset, TeaBReaC, for pretraining language models for robust multi-step reasoning. Pretraining plain LMs like Bart, T5 and numerate LMs like NT5, PReasM, POET on TeaBReaC leads to improvemed downstream performance on several multi-step QA datasets. Please checkout out the paper for the details. We release the following models: - **A:** Base Models finetuned on target datasets: `{base_model}-{target_dataset}` - **B:** Base models pretrained on TeaBReaC: `teabreac-{base_model}` - **C:** Base models pretrained on TeaBReaC and then finetuned on target datasets: `teabreac-{base_model}-{target_dataset}` The `base_model` above can be from: `bart-large`, `t5-large`, `t5-3b`, `nt5-small`, `preasm-large`. The `target_dataset` above can be from: `drop`, `tatqa`, `iirc-gold`, `iirc-retrieved`, `numglue`. The **A** models are only released for completeness / reproducibility. In your end application you probably just want to use either **B** or **C**. # How to use it? Please checkout the details in our [github repository](https://github.com/stonybrooknlp/teabreac), but in a nutshell: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM from digit_tokenization import enable_digit_tokenization # digit_tokenization.py from https://github.com/stonybrooknlp/teabreac model_name = "StonyBrookNLP/teabreac-t5-3b-drop" tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False) # Fast doesn't work with digit tokenization model = AutoModelForSeq2SeqLM.from_pretrained(model_name) enable_digit_tokenization(tokenizer) input_texts = [ "answer_me: Who scored the first touchdown of the game?" + "context: ... Oakland would get the early lead in the first quarter as quarterback JaMarcus Russell completed a 20-yard touchdown pass to rookie wide receiver Chaz Schilens..." # Note: some models have slightly different qn/ctxt format. See the github repo. ] input_ids = tokenizer( input_texts, return_tensors="pt", truncation=True, max_length=800, add_special_tokens=True, padding=True, )["input_ids"] generated_ids = model.generate(input_ids, min_length=1, max_length=50) generated_predictions = tokenizer.batch_decode(generated_ids, skip_special_tokens=False) generated_predictions = [ tokenizer.fix_decoded_text(generated_prediction) for generated_prediction in generated_predictions ] # => ["Chaz Schilens"] ```
StonyBrookNLP/teabreac-t5-3b
StonyBrookNLP
2022-12-06T05:21:13Z
3
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "question-answering, multi-step-reasoning, multi-hop-reasoning", "arxiv:2205.12496", "license:cc-by-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-12-03T22:23:02Z
--- tags: - question-answering, multi-step-reasoning, multi-hop-reasoning thumbnail: https://raw.githubusercontent.com/StonyBrookNLP/teabreac/main/teabreac_icon.png license: cc-by-4.0 --- # What's this? This is one of the models reported in the paper: ["Teaching Broad Reasoning Skills for Multi-Step QA by Generating Hard Contexts".](https://arxiv.org/abs/2205.12496). This paper proposes a procedure to synthetically generate a QA dataset, TeaBReaC, for pretraining language models for robust multi-step reasoning. Pretraining plain LMs like Bart, T5 and numerate LMs like NT5, PReasM, POET on TeaBReaC leads to improvemed downstream performance on several multi-step QA datasets. Please checkout out the paper for the details. We release the following models: - **A:** Base Models finetuned on target datasets: `{base_model}-{target_dataset}` - **B:** Base models pretrained on TeaBReaC: `teabreac-{base_model}` - **C:** Base models pretrained on TeaBReaC and then finetuned on target datasets: `teabreac-{base_model}-{target_dataset}` The `base_model` above can be from: `bart-large`, `t5-large`, `t5-3b`, `nt5-small`, `preasm-large`. The `target_dataset` above can be from: `drop`, `tatqa`, `iirc-gold`, `iirc-retrieved`, `numglue`. The **A** models are only released for completeness / reproducibility. In your end application you probably just want to use either **B** or **C**. # How to use it? Please checkout the details in our [github repository](https://github.com/stonybrooknlp/teabreac), but in a nutshell: ```python # NOTE: This model is only pretrained on TeaBReaC, and not on any real QA dataset. from transformers import AutoTokenizer, AutoModelForSeq2SeqLM from digit_tokenization import enable_digit_tokenization # digit_tokenization.py from https://github.com/stonybrooknlp/teabreac model_name = "StonyBrookNLP/teabreac-t5-3b" tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False) # Fast doesn't work with digit tokenization model = AutoModelForSeq2SeqLM.from_pretrained(model_name) enable_digit_tokenization(tokenizer) input_texts = [ "answer_me: Who scored the first touchdown of the game?" + "context: ... Oakland would get the early lead in the first quarter as quarterback JaMarcus Russell completed a 20-yard touchdown pass to rookie wide receiver Chaz Schilens..." # Note: some models have slightly different qn/ctxt format. See the github repo. ] input_ids = tokenizer( input_texts, return_tensors="pt", truncation=True, max_length=800, add_special_tokens=True, padding=True, )["input_ids"] generated_ids = model.generate(input_ids, min_length=1, max_length=50) generated_predictions = tokenizer.batch_decode(generated_ids, skip_special_tokens=False) generated_predictions = [ tokenizer.fix_decoded_text(generated_prediction) for generated_prediction in generated_predictions ] # => ["Chaz Schilens"] ```
StonyBrookNLP/teabreac-t5-large-tatqa
StonyBrookNLP
2022-12-06T05:16:13Z
3
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "question-answering, multi-step-reasoning, multi-hop-reasoning", "arxiv:2205.12496", "license:cc-by-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-12-03T07:37:43Z
--- tags: - question-answering, multi-step-reasoning, multi-hop-reasoning thumbnail: https://raw.githubusercontent.com/StonyBrookNLP/teabreac/main/teabreac_icon.png license: cc-by-4.0 --- # What's this? This is one of the models reported in the paper: ["Teaching Broad Reasoning Skills for Multi-Step QA by Generating Hard Contexts".](https://arxiv.org/abs/2205.12496). This paper proposes a procedure to synthetically generate a QA dataset, TeaBReaC, for pretraining language models for robust multi-step reasoning. Pretraining plain LMs like Bart, T5 and numerate LMs like NT5, PReasM, POET on TeaBReaC leads to improvemed downstream performance on several multi-step QA datasets. Please checkout out the paper for the details. We release the following models: - **A:** Base Models finetuned on target datasets: `{base_model}-{target_dataset}` - **B:** Base models pretrained on TeaBReaC: `teabreac-{base_model}` - **C:** Base models pretrained on TeaBReaC and then finetuned on target datasets: `teabreac-{base_model}-{target_dataset}` The `base_model` above can be from: `bart-large`, `t5-large`, `t5-3b`, `nt5-small`, `preasm-large`. The `target_dataset` above can be from: `drop`, `tatqa`, `iirc-gold`, `iirc-retrieved`, `numglue`. The **A** models are only released for completeness / reproducibility. In your end application you probably just want to use either **B** or **C**. # How to use it? Please checkout the details in our [github repository](https://github.com/stonybrooknlp/teabreac), but in a nutshell: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM from digit_tokenization import enable_digit_tokenization # digit_tokenization.py from https://github.com/stonybrooknlp/teabreac model_name = "StonyBrookNLP/teabreac-t5-large-tatqa" tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False) # Fast doesn't work with digit tokenization model = AutoModelForSeq2SeqLM.from_pretrained(model_name) enable_digit_tokenization(tokenizer) input_texts = [ "answer_me: Who scored the first touchdown of the game?" + "context: ... Oakland would get the early lead in the first quarter as quarterback JaMarcus Russell completed a 20-yard touchdown pass to rookie wide receiver Chaz Schilens..." # Note: some models have slightly different qn/ctxt format. See the github repo. ] input_ids = tokenizer( input_texts, return_tensors="pt", truncation=True, max_length=800, add_special_tokens=True, padding=True, )["input_ids"] generated_ids = model.generate(input_ids, min_length=1, max_length=50) generated_predictions = tokenizer.batch_decode(generated_ids, skip_special_tokens=False) generated_predictions = [ tokenizer.fix_decoded_text(generated_prediction) for generated_prediction in generated_predictions ] # => ["Chaz Schilens"] ```
StonyBrookNLP/teabreac-t5-large-numglue
StonyBrookNLP
2022-12-06T05:14:10Z
5
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "question-answering, multi-step-reasoning, multi-hop-reasoning", "arxiv:2205.12496", "license:cc-by-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-12-03T07:35:56Z
--- tags: - question-answering, multi-step-reasoning, multi-hop-reasoning thumbnail: https://raw.githubusercontent.com/StonyBrookNLP/teabreac/main/teabreac_icon.png license: cc-by-4.0 --- # What's this? This is one of the models reported in the paper: ["Teaching Broad Reasoning Skills for Multi-Step QA by Generating Hard Contexts".](https://arxiv.org/abs/2205.12496). This paper proposes a procedure to synthetically generate a QA dataset, TeaBReaC, for pretraining language models for robust multi-step reasoning. Pretraining plain LMs like Bart, T5 and numerate LMs like NT5, PReasM, POET on TeaBReaC leads to improvemed downstream performance on several multi-step QA datasets. Please checkout out the paper for the details. We release the following models: - **A:** Base Models finetuned on target datasets: `{base_model}-{target_dataset}` - **B:** Base models pretrained on TeaBReaC: `teabreac-{base_model}` - **C:** Base models pretrained on TeaBReaC and then finetuned on target datasets: `teabreac-{base_model}-{target_dataset}` The `base_model` above can be from: `bart-large`, `t5-large`, `t5-3b`, `nt5-small`, `preasm-large`. The `target_dataset` above can be from: `drop`, `tatqa`, `iirc-gold`, `iirc-retrieved`, `numglue`. The **A** models are only released for completeness / reproducibility. In your end application you probably just want to use either **B** or **C**. # How to use it? Please checkout the details in our [github repository](https://github.com/stonybrooknlp/teabreac), but in a nutshell: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM from digit_tokenization import enable_digit_tokenization # digit_tokenization.py from https://github.com/stonybrooknlp/teabreac model_name = "StonyBrookNLP/teabreac-t5-large-numglue" tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False) # Fast doesn't work with digit tokenization model = AutoModelForSeq2SeqLM.from_pretrained(model_name) enable_digit_tokenization(tokenizer) input_texts = [ "answer_me: Who scored the first touchdown of the game?" + "context: ... Oakland would get the early lead in the first quarter as quarterback JaMarcus Russell completed a 20-yard touchdown pass to rookie wide receiver Chaz Schilens..." # Note: some models have slightly different qn/ctxt format. See the github repo. ] input_ids = tokenizer( input_texts, return_tensors="pt", truncation=True, max_length=800, add_special_tokens=True, padding=True, )["input_ids"] generated_ids = model.generate(input_ids, min_length=1, max_length=50) generated_predictions = tokenizer.batch_decode(generated_ids, skip_special_tokens=False) generated_predictions = [ tokenizer.fix_decoded_text(generated_prediction) for generated_prediction in generated_predictions ] # => ["Chaz Schilens"] ```
StonyBrookNLP/teabreac-t5-large-drop
StonyBrookNLP
2022-12-06T05:08:03Z
3
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "question-answering, multi-step-reasoning, multi-hop-reasoning", "arxiv:2205.12496", "license:cc-by-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-12-03T07:30:47Z
--- tags: - question-answering, multi-step-reasoning, multi-hop-reasoning thumbnail: https://raw.githubusercontent.com/StonyBrookNLP/teabreac/main/teabreac_icon.png license: cc-by-4.0 --- # What's this? This is one of the models reported in the paper: ["Teaching Broad Reasoning Skills for Multi-Step QA by Generating Hard Contexts".](https://arxiv.org/abs/2205.12496). This paper proposes a procedure to synthetically generate a QA dataset, TeaBReaC, for pretraining language models for robust multi-step reasoning. Pretraining plain LMs like Bart, T5 and numerate LMs like NT5, PReasM, POET on TeaBReaC leads to improvemed downstream performance on several multi-step QA datasets. Please checkout out the paper for the details. We release the following models: - **A:** Base Models finetuned on target datasets: `{base_model}-{target_dataset}` - **B:** Base models pretrained on TeaBReaC: `teabreac-{base_model}` - **C:** Base models pretrained on TeaBReaC and then finetuned on target datasets: `teabreac-{base_model}-{target_dataset}` The `base_model` above can be from: `bart-large`, `t5-large`, `t5-3b`, `nt5-small`, `preasm-large`. The `target_dataset` above can be from: `drop`, `tatqa`, `iirc-gold`, `iirc-retrieved`, `numglue`. The **A** models are only released for completeness / reproducibility. In your end application you probably just want to use either **B** or **C**. # How to use it? Please checkout the details in our [github repository](https://github.com/stonybrooknlp/teabreac), but in a nutshell: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM from digit_tokenization import enable_digit_tokenization # digit_tokenization.py from https://github.com/stonybrooknlp/teabreac model_name = "StonyBrookNLP/teabreac-t5-large-drop" tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False) # Fast doesn't work with digit tokenization model = AutoModelForSeq2SeqLM.from_pretrained(model_name) enable_digit_tokenization(tokenizer) input_texts = [ "answer_me: Who scored the first touchdown of the game?" + "context: ... Oakland would get the early lead in the first quarter as quarterback JaMarcus Russell completed a 20-yard touchdown pass to rookie wide receiver Chaz Schilens..." # Note: some models have slightly different qn/ctxt format. See the github repo. ] input_ids = tokenizer( input_texts, return_tensors="pt", truncation=True, max_length=800, add_special_tokens=True, padding=True, )["input_ids"] generated_ids = model.generate(input_ids, min_length=1, max_length=50) generated_predictions = tokenizer.batch_decode(generated_ids, skip_special_tokens=False) generated_predictions = [ tokenizer.fix_decoded_text(generated_prediction) for generated_prediction in generated_predictions ] # => ["Chaz Schilens"] ```
StonyBrookNLP/preasm-large-tatqa
StonyBrookNLP
2022-12-06T04:55:56Z
4
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "question-answering, multi-step-reasoning, multi-hop-reasoning", "arxiv:2205.12496", "license:cc-by-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-12-03T07:17:29Z
--- tags: - question-answering, multi-step-reasoning, multi-hop-reasoning thumbnail: https://raw.githubusercontent.com/StonyBrookNLP/teabreac/main/teabreac_icon.png license: cc-by-4.0 --- # What's this? This is one of the models reported in the paper: ["Teaching Broad Reasoning Skills for Multi-Step QA by Generating Hard Contexts".](https://arxiv.org/abs/2205.12496). This paper proposes a procedure to synthetically generate a QA dataset, TeaBReaC, for pretraining language models for robust multi-step reasoning. Pretraining plain LMs like Bart, T5 and numerate LMs like NT5, PReasM, POET on TeaBReaC leads to improvemed downstream performance on several multi-step QA datasets. Please checkout out the paper for the details. We release the following models: - **A:** Base Models finetuned on target datasets: `{base_model}-{target_dataset}` - **B:** Base models pretrained on TeaBReaC: `teabreac-{base_model}` - **C:** Base models pretrained on TeaBReaC and then finetuned on target datasets: `teabreac-{base_model}-{target_dataset}` The `base_model` above can be from: `bart-large`, `t5-large`, `t5-3b`, `nt5-small`, `preasm-large`. The `target_dataset` above can be from: `drop`, `tatqa`, `iirc-gold`, `iirc-retrieved`, `numglue`. The **A** models are only released for completeness / reproducibility. In your end application you probably just want to use either **B** or **C**. # How to use it? Please checkout the details in our [github repository](https://github.com/stonybrooknlp/teabreac), but in a nutshell: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM from digit_tokenization import enable_digit_tokenization # digit_tokenization.py from https://github.com/stonybrooknlp/teabreac model_name = "StonyBrookNLP/preasm-large-tatqa" tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False) # Fast doesn't work with digit tokenization model = AutoModelForSeq2SeqLM.from_pretrained(model_name) enable_digit_tokenization(tokenizer) input_texts = [ "Who scored the first touchdown of the game?\n" + "... Oakland would get the early lead in the first quarter as quarterback JaMarcus Russell completed a 20-yard touchdown pass to rookie wide receiver Chaz Schilens..." # Note: some models have slightly different qn/ctxt format. See the github repo. ] input_ids = tokenizer( input_texts, return_tensors="pt", truncation=True, max_length=800, add_special_tokens=True, padding=True, )["input_ids"] generated_ids = model.generate(input_ids, min_length=1, max_length=50) generated_predictions = tokenizer.batch_decode(generated_ids, skip_special_tokens=False) generated_predictions = [ tokenizer.fix_decoded_text(generated_prediction) for generated_prediction in generated_predictions ] # => ["Chaz Schilens"] ```
StonyBrookNLP/preasm-large-numglue
StonyBrookNLP
2022-12-06T04:54:11Z
3
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "question-answering, multi-step-reasoning, multi-hop-reasoning", "arxiv:2205.12496", "license:cc-by-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-12-03T07:15:46Z
--- tags: - question-answering, multi-step-reasoning, multi-hop-reasoning thumbnail: https://raw.githubusercontent.com/StonyBrookNLP/teabreac/main/teabreac_icon.png license: cc-by-4.0 --- # What's this? This is one of the models reported in the paper: ["Teaching Broad Reasoning Skills for Multi-Step QA by Generating Hard Contexts".](https://arxiv.org/abs/2205.12496). This paper proposes a procedure to synthetically generate a QA dataset, TeaBReaC, for pretraining language models for robust multi-step reasoning. Pretraining plain LMs like Bart, T5 and numerate LMs like NT5, PReasM, POET on TeaBReaC leads to improvemed downstream performance on several multi-step QA datasets. Please checkout out the paper for the details. We release the following models: - **A:** Base Models finetuned on target datasets: `{base_model}-{target_dataset}` - **B:** Base models pretrained on TeaBReaC: `teabreac-{base_model}` - **C:** Base models pretrained on TeaBReaC and then finetuned on target datasets: `teabreac-{base_model}-{target_dataset}` The `base_model` above can be from: `bart-large`, `t5-large`, `t5-3b`, `nt5-small`, `preasm-large`. The `target_dataset` above can be from: `drop`, `tatqa`, `iirc-gold`, `iirc-retrieved`, `numglue`. The **A** models are only released for completeness / reproducibility. In your end application you probably just want to use either **B** or **C**. # How to use it? Please checkout the details in our [github repository](https://github.com/stonybrooknlp/teabreac), but in a nutshell: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM from digit_tokenization import enable_digit_tokenization # digit_tokenization.py from https://github.com/stonybrooknlp/teabreac model_name = "StonyBrookNLP/preasm-large-numglue" tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False) # Fast doesn't work with digit tokenization model = AutoModelForSeq2SeqLM.from_pretrained(model_name) enable_digit_tokenization(tokenizer) input_texts = [ "Who scored the first touchdown of the game?\n" + "... Oakland would get the early lead in the first quarter as quarterback JaMarcus Russell completed a 20-yard touchdown pass to rookie wide receiver Chaz Schilens..." # Note: some models have slightly different qn/ctxt format. See the github repo. ] input_ids = tokenizer( input_texts, return_tensors="pt", truncation=True, max_length=800, add_special_tokens=True, padding=True, )["input_ids"] generated_ids = model.generate(input_ids, min_length=1, max_length=50) generated_predictions = tokenizer.batch_decode(generated_ids, skip_special_tokens=False) generated_predictions = [ tokenizer.fix_decoded_text(generated_prediction) for generated_prediction in generated_predictions ] # => ["Chaz Schilens"] ```
alanila/autotrain-tc_ac-2349273884
alanila
2022-12-06T04:51:06Z
1
0
transformers
[ "transformers", "pytorch", "autotrain", "text-classification", "en", "dataset:alanila/autotrain-data-tc_ac", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
text-classification
2022-12-06T04:49:49Z
--- tags: - autotrain - text-classification language: - en widget: - text: "I love AutoTrain 🤗" datasets: - alanila/autotrain-data-tc_ac co2_eq_emissions: emissions: 1.196433244085964 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 2349273884 - CO2 Emissions (in grams): 1.1964 ## Validation Metrics - Loss: 1.271 - Accuracy: 0.517 - Macro F1: 0.465 - Micro F1: 0.517 - Weighted F1: 0.437 - Macro Precision: 0.495 - Micro Precision: 0.517 - Weighted Precision: 0.488 - Macro Recall: 0.501 - Micro Recall: 0.517 - Weighted Recall: 0.517 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/alanila/autotrain-tc_ac-2349273884 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("alanila/autotrain-tc_ac-2349273884", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("alanila/autotrain-tc_ac-2349273884", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
StonyBrookNLP/preasm-large-iirc-gold
StonyBrookNLP
2022-12-06T04:50:11Z
6
1
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "question-answering, multi-step-reasoning, multi-hop-reasoning", "arxiv:2205.12496", "license:cc-by-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-12-03T07:12:15Z
--- tags: - question-answering, multi-step-reasoning, multi-hop-reasoning thumbnail: https://raw.githubusercontent.com/StonyBrookNLP/teabreac/main/teabreac_icon.png license: cc-by-4.0 --- # What's this? This is one of the models reported in the paper: ["Teaching Broad Reasoning Skills for Multi-Step QA by Generating Hard Contexts".](https://arxiv.org/abs/2205.12496). This paper proposes a procedure to synthetically generate a QA dataset, TeaBReaC, for pretraining language models for robust multi-step reasoning. Pretraining plain LMs like Bart, T5 and numerate LMs like NT5, PReasM, POET on TeaBReaC leads to improvemed downstream performance on several multi-step QA datasets. Please checkout out the paper for the details. We release the following models: - **A:** Base Models finetuned on target datasets: `{base_model}-{target_dataset}` - **B:** Base models pretrained on TeaBReaC: `teabreac-{base_model}` - **C:** Base models pretrained on TeaBReaC and then finetuned on target datasets: `teabreac-{base_model}-{target_dataset}` The `base_model` above can be from: `bart-large`, `t5-large`, `t5-3b`, `nt5-small`, `preasm-large`. The `target_dataset` above can be from: `drop`, `tatqa`, `iirc-gold`, `iirc-retrieved`, `numglue`. The **A** models are only released for completeness / reproducibility. In your end application you probably just want to use either **B** or **C**. # How to use it? Please checkout the details in our [github repository](https://github.com/stonybrooknlp/teabreac), but in a nutshell: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM from digit_tokenization import enable_digit_tokenization # digit_tokenization.py from https://github.com/stonybrooknlp/teabreac model_name = "StonyBrookNLP/preasm-large-iirc-gold" tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False) # Fast doesn't work with digit tokenization model = AutoModelForSeq2SeqLM.from_pretrained(model_name) enable_digit_tokenization(tokenizer) input_texts = [ "Who scored the first touchdown of the game?\n" + "... Oakland would get the early lead in the first quarter as quarterback JaMarcus Russell completed a 20-yard touchdown pass to rookie wide receiver Chaz Schilens..." # Note: some models have slightly different qn/ctxt format. See the github repo. ] input_ids = tokenizer( input_texts, return_tensors="pt", truncation=True, max_length=800, add_special_tokens=True, padding=True, )["input_ids"] generated_ids = model.generate(input_ids, min_length=1, max_length=50) generated_predictions = tokenizer.batch_decode(generated_ids, skip_special_tokens=False) generated_predictions = [ tokenizer.fix_decoded_text(generated_prediction) for generated_prediction in generated_predictions ] # => ["Chaz Schilens"] ```
StonyBrookNLP/preasm-large-drop
StonyBrookNLP
2022-12-06T04:48:33Z
3
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "question-answering, multi-step-reasoning, multi-hop-reasoning", "arxiv:2205.12496", "license:cc-by-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-12-03T07:10:16Z
--- tags: - question-answering, multi-step-reasoning, multi-hop-reasoning thumbnail: https://raw.githubusercontent.com/StonyBrookNLP/teabreac/main/teabreac_icon.png license: cc-by-4.0 --- # What's this? This is one of the models reported in the paper: ["Teaching Broad Reasoning Skills for Multi-Step QA by Generating Hard Contexts".](https://arxiv.org/abs/2205.12496). This paper proposes a procedure to synthetically generate a QA dataset, TeaBReaC, for pretraining language models for robust multi-step reasoning. Pretraining plain LMs like Bart, T5 and numerate LMs like NT5, PReasM, POET on TeaBReaC leads to improvemed downstream performance on several multi-step QA datasets. Please checkout out the paper for the details. We release the following models: - **A:** Base Models finetuned on target datasets: `{base_model}-{target_dataset}` - **B:** Base models pretrained on TeaBReaC: `teabreac-{base_model}` - **C:** Base models pretrained on TeaBReaC and then finetuned on target datasets: `teabreac-{base_model}-{target_dataset}` The `base_model` above can be from: `bart-large`, `t5-large`, `t5-3b`, `nt5-small`, `preasm-large`. The `target_dataset` above can be from: `drop`, `tatqa`, `iirc-gold`, `iirc-retrieved`, `numglue`. The **A** models are only released for completeness / reproducibility. In your end application you probably just want to use either **B** or **C**. # How to use it? Please checkout the details in our [github repository](https://github.com/stonybrooknlp/teabreac), but in a nutshell: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM from digit_tokenization import enable_digit_tokenization # digit_tokenization.py from https://github.com/stonybrooknlp/teabreac model_name = "StonyBrookNLP/preasm-large-drop" tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False) # Fast doesn't work with digit tokenization model = AutoModelForSeq2SeqLM.from_pretrained(model_name) enable_digit_tokenization(tokenizer) input_texts = [ "Who scored the first touchdown of the game?\n" + "... Oakland would get the early lead in the first quarter as quarterback JaMarcus Russell completed a 20-yard touchdown pass to rookie wide receiver Chaz Schilens..." # Note: some models have slightly different qn/ctxt format. See the github repo. ] input_ids = tokenizer( input_texts, return_tensors="pt", truncation=True, max_length=800, add_special_tokens=True, padding=True, )["input_ids"] generated_ids = model.generate(input_ids, min_length=1, max_length=50) generated_predictions = tokenizer.batch_decode(generated_ids, skip_special_tokens=False) generated_predictions = [ tokenizer.fix_decoded_text(generated_prediction) for generated_prediction in generated_predictions ] # => ["Chaz Schilens"] ```
StonyBrookNLP/nt5-small-tatqa
StonyBrookNLP
2022-12-06T04:47:10Z
3
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "question-answering, multi-step-reasoning, multi-hop-reasoning", "arxiv:2205.12496", "license:cc-by-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-12-03T07:10:00Z
--- tags: - question-answering, multi-step-reasoning, multi-hop-reasoning thumbnail: https://raw.githubusercontent.com/StonyBrookNLP/teabreac/main/teabreac_icon.png license: cc-by-4.0 --- # What's this? This is one of the models reported in the paper: ["Teaching Broad Reasoning Skills for Multi-Step QA by Generating Hard Contexts".](https://arxiv.org/abs/2205.12496). This paper proposes a procedure to synthetically generate a QA dataset, TeaBReaC, for pretraining language models for robust multi-step reasoning. Pretraining plain LMs like Bart, T5 and numerate LMs like NT5, PReasM, POET on TeaBReaC leads to improvemed downstream performance on several multi-step QA datasets. Please checkout out the paper for the details. We release the following models: - **A:** Base Models finetuned on target datasets: `{base_model}-{target_dataset}` - **B:** Base models pretrained on TeaBReaC: `teabreac-{base_model}` - **C:** Base models pretrained on TeaBReaC and then finetuned on target datasets: `teabreac-{base_model}-{target_dataset}` The `base_model` above can be from: `bart-large`, `t5-large`, `t5-3b`, `nt5-small`, `preasm-large`. The `target_dataset` above can be from: `drop`, `tatqa`, `iirc-gold`, `iirc-retrieved`, `numglue`. The **A** models are only released for completeness / reproducibility. In your end application you probably just want to use either **B** or **C**. # How to use it? Please checkout the details in our [github repository](https://github.com/stonybrooknlp/teabreac), but in a nutshell: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM from digit_tokenization import enable_digit_tokenization # digit_tokenization.py from https://github.com/stonybrooknlp/teabreac model_name = "StonyBrookNLP/nt5-small-tatqa" tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False) # Fast doesn't work with digit tokenization model = AutoModelForSeq2SeqLM.from_pretrained(model_name) enable_digit_tokenization(tokenizer) input_texts = [ "answer_me: Who scored the first touchdown of the game?" + "context: ... Oakland would get the early lead in the first quarter as quarterback JaMarcus Russell completed a 20-yard touchdown pass to rookie wide receiver Chaz Schilens..." # Note: some models have slightly different qn/ctxt format. See the github repo. ] input_ids = tokenizer( input_texts, return_tensors="pt", truncation=True, max_length=800, add_special_tokens=True, padding=True, )["input_ids"] generated_ids = model.generate(input_ids, min_length=1, max_length=50) generated_predictions = tokenizer.batch_decode(generated_ids, skip_special_tokens=False) generated_predictions = [ tokenizer.fix_decoded_text(generated_prediction) for generated_prediction in generated_predictions ] # => ["Chaz Schilens"] ```
StonyBrookNLP/nt5-small-drop
StonyBrookNLP
2022-12-06T04:45:52Z
3
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "question-answering, multi-step-reasoning, multi-hop-reasoning", "arxiv:2205.12496", "license:cc-by-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-12-03T07:08:58Z
--- tags: - question-answering, multi-step-reasoning, multi-hop-reasoning thumbnail: https://raw.githubusercontent.com/StonyBrookNLP/teabreac/main/teabreac_icon.png license: cc-by-4.0 --- # What's this? This is one of the models reported in the paper: ["Teaching Broad Reasoning Skills for Multi-Step QA by Generating Hard Contexts".](https://arxiv.org/abs/2205.12496). This paper proposes a procedure to synthetically generate a QA dataset, TeaBReaC, for pretraining language models for robust multi-step reasoning. Pretraining plain LMs like Bart, T5 and numerate LMs like NT5, PReasM, POET on TeaBReaC leads to improvemed downstream performance on several multi-step QA datasets. Please checkout out the paper for the details. We release the following models: - **A:** Base Models finetuned on target datasets: `{base_model}-{target_dataset}` - **B:** Base models pretrained on TeaBReaC: `teabreac-{base_model}` - **C:** Base models pretrained on TeaBReaC and then finetuned on target datasets: `teabreac-{base_model}-{target_dataset}` The `base_model` above can be from: `bart-large`, `t5-large`, `t5-3b`, `nt5-small`, `preasm-large`. The `target_dataset` above can be from: `drop`, `tatqa`, `iirc-gold`, `iirc-retrieved`, `numglue`. The **A** models are only released for completeness / reproducibility. In your end application you probably just want to use either **B** or **C**. # How to use it? Please checkout the details in our [github repository](https://github.com/stonybrooknlp/teabreac), but in a nutshell: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM from digit_tokenization import enable_digit_tokenization # digit_tokenization.py from https://github.com/stonybrooknlp/teabreac model_name = "StonyBrookNLP/nt5-small-drop" tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False) # Fast doesn't work with digit tokenization model = AutoModelForSeq2SeqLM.from_pretrained(model_name) enable_digit_tokenization(tokenizer) input_texts = [ "answer_me: Who scored the first touchdown of the game?" + "context: ... Oakland would get the early lead in the first quarter as quarterback JaMarcus Russell completed a 20-yard touchdown pass to rookie wide receiver Chaz Schilens..." # Note: some models have slightly different qn/ctxt format. See the github repo. ] input_ids = tokenizer( input_texts, return_tensors="pt", truncation=True, max_length=800, add_special_tokens=True, padding=True, )["input_ids"] generated_ids = model.generate(input_ids, min_length=1, max_length=50) generated_predictions = tokenizer.batch_decode(generated_ids, skip_special_tokens=False) generated_predictions = [ tokenizer.fix_decoded_text(generated_prediction) for generated_prediction in generated_predictions ] # => ["Chaz Schilens"] ```
fathyshalab/all-roberta-large-v1-work-8-16-5-oos
fathyshalab
2022-12-06T04:18:45Z
5
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-12-06T03:55:31Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: all-roberta-large-v1-work-8-16-5-oos results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # all-roberta-large-v1-work-8-16-5-oos This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.3586 - Accuracy: 0.3689 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.8058 | 1.0 | 1 | 2.6169 | 0.2356 | | 2.3524 | 2.0 | 2 | 2.5215 | 0.2978 | | 1.9543 | 3.0 | 3 | 2.4427 | 0.3422 | | 1.5539 | 4.0 | 4 | 2.3874 | 0.36 | | 1.4133 | 5.0 | 5 | 2.3586 | 0.3689 | ### Framework versions - Transformers 4.20.0 - Pytorch 1.11.0+cu102 - Datasets 2.3.2 - Tokenizers 0.12.1
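The card above documents a fine-tuned sentence classifier but includes no inference snippet. Below is a minimal usage sketch with the `transformers` pipeline; the example sentence is illustrative, and the label names/meanings (the `-oos` suffix suggests intent classes plus an out-of-scope class) are not documented in the card, so treat them as assumptions.

```python
# Minimal usage sketch for the fine-tuned classifier described in the card above.
# Assumption: the checkpoint exposes standard `transformers` sequence-classification
# config/labels; the meaning of the predicted labels is not documented in the card.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="fathyshalab/all-roberta-large-v1-work-8-16-5-oos",
)

print(classifier("Can you schedule a meeting with my manager for tomorrow?"))
# => [{'label': '...', 'score': ...}]  # label ids/names depend on the training setup
```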
StonyBrookNLP/t5-large-numglue
StonyBrookNLP
2022-12-06T04:10:26Z
3
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "question-answering, multi-step-reasoning, multi-hop-reasoning", "arxiv:2205.12496", "license:cc-by-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-12-03T06:44:29Z
--- tags: - question-answering, multi-step-reasoning, multi-hop-reasoning thumbnail: https://raw.githubusercontent.com/StonyBrookNLP/teabreac/main/teabreac_icon.png license: cc-by-4.0 --- # What's this? This is one of the models reported in the paper: ["Teaching Broad Reasoning Skills for Multi-Step QA by Generating Hard Contexts"](https://arxiv.org/abs/2205.12496). This paper proposes a procedure to synthetically generate a QA dataset, TeaBReaC, for pretraining language models for robust multi-step reasoning. Pretraining plain LMs like Bart, T5 and numerate LMs like NT5, PReasM, POET on TeaBReaC leads to improved downstream performance on several multi-step QA datasets. Please check out the paper for the details. We release the following models: - **A:** Base models finetuned on target datasets: `{base_model}-{target_dataset}` - **B:** Base models pretrained on TeaBReaC: `teabreac-{base_model}` - **C:** Base models pretrained on TeaBReaC and then finetuned on target datasets: `teabreac-{base_model}-{target_dataset}` The `base_model` above can be from: `bart-large`, `t5-large`, `t5-3b`, `nt5-small`, `preasm-large`. The `target_dataset` above can be from: `drop`, `tatqa`, `iirc-gold`, `iirc-retrieved`, `numglue`. The **A** models are only released for completeness / reproducibility. In your end application you probably just want to use either **B** or **C**. # How to use it? Please check out the details in our [github repository](https://github.com/stonybrooknlp/teabreac), but in a nutshell: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM from digit_tokenization import enable_digit_tokenization # digit_tokenization.py from https://github.com/stonybrooknlp/teabreac model_name = "StonyBrookNLP/t5-large-numglue" tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False) # Fast doesn't work with digit tokenization model = AutoModelForSeq2SeqLM.from_pretrained(model_name) enable_digit_tokenization(tokenizer) input_texts = [ "answer_me: Who scored the first touchdown of the game?" + "context: ... Oakland would get the early lead in the first quarter as quarterback JaMarcus Russell completed a 20-yard touchdown pass to rookie wide receiver Chaz Schilens..." # Note: some models have slightly different qn/ctxt format. See the github repo. ] input_ids = tokenizer( input_texts, return_tensors="pt", truncation=True, max_length=800, add_special_tokens=True, padding=True, )["input_ids"] generated_ids = model.generate(input_ids, min_length=1, max_length=50) generated_predictions = tokenizer.batch_decode(generated_ids, skip_special_tokens=False) generated_predictions = [ tokenizer.fix_decoded_text(generated_prediction) for generated_prediction in generated_predictions ] # => ["Chaz Schilens"] ```
StonyBrookNLP/t5-large-iirc-retrieved
StonyBrookNLP
2022-12-06T04:08:49Z
3
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "question-answering, multi-step-reasoning, multi-hop-reasoning", "arxiv:2205.12496", "license:cc-by-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-12-03T06:42:48Z
--- tags: - question-answering, multi-step-reasoning, multi-hop-reasoning thumbnail: https://raw.githubusercontent.com/StonyBrookNLP/teabreac/main/teabreac_icon.png license: cc-by-4.0 --- # What's this? This is one of the models reported in the paper: ["Teaching Broad Reasoning Skills for Multi-Step QA by Generating Hard Contexts"](https://arxiv.org/abs/2205.12496). This paper proposes a procedure to synthetically generate a QA dataset, TeaBReaC, for pretraining language models for robust multi-step reasoning. Pretraining plain LMs like Bart, T5 and numerate LMs like NT5, PReasM, POET on TeaBReaC leads to improved downstream performance on several multi-step QA datasets. Please check out the paper for the details. We release the following models: - **A:** Base models finetuned on target datasets: `{base_model}-{target_dataset}` - **B:** Base models pretrained on TeaBReaC: `teabreac-{base_model}` - **C:** Base models pretrained on TeaBReaC and then finetuned on target datasets: `teabreac-{base_model}-{target_dataset}` The `base_model` above can be from: `bart-large`, `t5-large`, `t5-3b`, `nt5-small`, `preasm-large`. The `target_dataset` above can be from: `drop`, `tatqa`, `iirc-gold`, `iirc-retrieved`, `numglue`. The **A** models are only released for completeness / reproducibility. In your end application you probably just want to use either **B** or **C**. # How to use it? Please check out the details in our [github repository](https://github.com/stonybrooknlp/teabreac), but in a nutshell: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM from digit_tokenization import enable_digit_tokenization # digit_tokenization.py from https://github.com/stonybrooknlp/teabreac model_name = "StonyBrookNLP/t5-large-iirc-retrieved" tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False) # Fast doesn't work with digit tokenization model = AutoModelForSeq2SeqLM.from_pretrained(model_name) enable_digit_tokenization(tokenizer) input_texts = [ "answer_me: Who scored the first touchdown of the game?" + "context: ... Oakland would get the early lead in the first quarter as quarterback JaMarcus Russell completed a 20-yard touchdown pass to rookie wide receiver Chaz Schilens..." # Note: some models have slightly different qn/ctxt format. See the github repo. ] input_ids = tokenizer( input_texts, return_tensors="pt", truncation=True, max_length=800, add_special_tokens=True, padding=True, )["input_ids"] generated_ids = model.generate(input_ids, min_length=1, max_length=50) generated_predictions = tokenizer.batch_decode(generated_ids, skip_special_tokens=False) generated_predictions = [ tokenizer.fix_decoded_text(generated_prediction) for generated_prediction in generated_predictions ] # => ["Chaz Schilens"] ```
StonyBrookNLP/t5-large-iirc-gold
StonyBrookNLP
2022-12-06T04:07:07Z
4
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "question-answering, multi-step-reasoning, multi-hop-reasoning", "arxiv:2205.12496", "license:cc-by-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-12-03T06:41:16Z
--- tags: - question-answering, multi-step-reasoning, multi-hop-reasoning thumbnail: https://raw.githubusercontent.com/StonyBrookNLP/teabreac/main/teabreac_icon.png license: cc-by-4.0 --- # What's this? This is one of the models reported in the paper: ["Teaching Broad Reasoning Skills for Multi-Step QA by Generating Hard Contexts"](https://arxiv.org/abs/2205.12496). This paper proposes a procedure to synthetically generate a QA dataset, TeaBReaC, for pretraining language models for robust multi-step reasoning. Pretraining plain LMs like Bart, T5 and numerate LMs like NT5, PReasM, POET on TeaBReaC leads to improved downstream performance on several multi-step QA datasets. Please check out the paper for the details. We release the following models: - **A:** Base models finetuned on target datasets: `{base_model}-{target_dataset}` - **B:** Base models pretrained on TeaBReaC: `teabreac-{base_model}` - **C:** Base models pretrained on TeaBReaC and then finetuned on target datasets: `teabreac-{base_model}-{target_dataset}` The `base_model` above can be from: `bart-large`, `t5-large`, `t5-3b`, `nt5-small`, `preasm-large`. The `target_dataset` above can be from: `drop`, `tatqa`, `iirc-gold`, `iirc-retrieved`, `numglue`. The **A** models are only released for completeness / reproducibility. In your end application you probably just want to use either **B** or **C**. # How to use it? Please check out the details in our [github repository](https://github.com/stonybrooknlp/teabreac), but in a nutshell: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM from digit_tokenization import enable_digit_tokenization # digit_tokenization.py from https://github.com/stonybrooknlp/teabreac model_name = "StonyBrookNLP/t5-large-iirc-gold" tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False) # Fast doesn't work with digit tokenization model = AutoModelForSeq2SeqLM.from_pretrained(model_name) enable_digit_tokenization(tokenizer) input_texts = [ "answer_me: Who scored the first touchdown of the game?" + "context: ... Oakland would get the early lead in the first quarter as quarterback JaMarcus Russell completed a 20-yard touchdown pass to rookie wide receiver Chaz Schilens..." # Note: some models have slightly different qn/ctxt format. See the github repo. ] input_ids = tokenizer( input_texts, return_tensors="pt", truncation=True, max_length=800, add_special_tokens=True, padding=True, )["input_ids"] generated_ids = model.generate(input_ids, min_length=1, max_length=50) generated_predictions = tokenizer.batch_decode(generated_ids, skip_special_tokens=False) generated_predictions = [ tokenizer.fix_decoded_text(generated_prediction) for generated_prediction in generated_predictions ] # => ["Chaz Schilens"] ```
fathyshalab/all-roberta-large-v1-work-4-16-5-oos
fathyshalab
2022-12-06T03:55:07Z
3
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-12-06T03:31:50Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: all-roberta-large-v1-work-4-16-5-oos results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # all-roberta-large-v1-work-4-16-5-oos This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.3586 - Accuracy: 0.3689 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.8058 | 1.0 | 1 | 2.6169 | 0.2356 | | 2.3524 | 2.0 | 2 | 2.5215 | 0.2978 | | 1.9543 | 3.0 | 3 | 2.4427 | 0.3422 | | 1.5539 | 4.0 | 4 | 2.3874 | 0.36 | | 1.4133 | 5.0 | 5 | 2.3586 | 0.3689 | ### Framework versions - Transformers 4.20.0 - Pytorch 1.11.0+cu102 - Datasets 2.3.2 - Tokenizers 0.12.1
lilanxi0221/distilbert-base-uncased-finetuned-cola
lilanxi0221
2022-12-06T03:54:02Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-12-02T16:36:24Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: distilbert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue config: cola split: train args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5552849676135797 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.7539 - Matthews Correlation: 0.5553 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5245 | 1.0 | 535 | 0.5223 | 0.4063 | | 0.3574 | 2.0 | 1070 | 0.4856 | 0.5079 | | 0.2461 | 3.0 | 1605 | 0.5503 | 0.5279 | | 0.1909 | 4.0 | 2140 | 0.6974 | 0.5288 | | 0.1451 | 5.0 | 2675 | 0.7539 | 0.5553 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.12.1+cu113 - Datasets 2.7.1 - Tokenizers 0.13.2
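The CoLA card above reports a Matthews correlation of 0.5553 but no evaluation snippet. Below is a sketch of reproducing that number on the GLUE validation split; mapping `LABEL_1` to "acceptable" follows the usual convention for CoLA fine-tunes and is an assumption here, since the card does not list label names.

```python
# Sketch: score the fine-tuned CoLA checkpoint above on the GLUE validation split.
# Assumption: LABEL_1 corresponds to "acceptable" (the usual mapping for CoLA fine-tunes).
from datasets import load_dataset
from sklearn.metrics import matthews_corrcoef
from transformers import pipeline

clf = pipeline("text-classification", model="lilanxi0221/distilbert-base-uncased-finetuned-cola")
val = load_dataset("glue", "cola", split="validation")

preds = [int(p["label"] == "LABEL_1") for p in clf(val["sentence"], batch_size=32)]
print("Matthews correlation:", matthews_corrcoef(val["label"], preds))
```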
arb9p4/ppo-LunarLander-v2
arb9p4
2022-12-06T03:40:46Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-12-06T03:40:26Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 257.80 +/- 15.76 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
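The usage section of the PPO card above is a TODO placeholder. A hedged sketch of completing it with `huggingface_sb3` and `stable-baselines3` follows; the checkpoint filename inside the repo is assumed to follow the usual `ppo-LunarLander-v2.zip` convention and should be checked against the repository contents.

```python
# Sketch: load and evaluate the PPO agent above with stable-baselines3 + huggingface_sb3.
# Assumption: the checkpoint file inside the repo is named "ppo-LunarLander-v2.zip"
# (the conventional name for Deep RL course uploads).
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(repo_id="arb9p4/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward = {mean_reward:.2f} +/- {std_reward:.2f}")
```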
nlpconnect/deberta-v3-xsmall-squad2
nlpconnect
2022-12-06T03:37:01Z
15
0
transformers
[ "transformers", "pytorch", "deberta-v2", "question-answering", "dataset:squad_v2", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
question-answering
2022-08-28T10:24:54Z
--- license: apache-2.0 datasets: - squad_v2 model-index: - name: nlpconnect/deberta-v3-xsmall-squad2 results: - task: type: question-answering name: Question Answering dataset: name: squad_v2 type: squad_v2 config: squad_v2 split: validation metrics: - type: exact_match value: 79.3917 name: Exact Match verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZTFiMWI5YzFlMDZhMzc2NDIwYjNiZmIyMThmOWQxYjFjZmM2ZDQ0OGM2NmNlNmI3Y2U2N2JjMmVkZTgyZjNiOCIsInZlcnNpb24iOjF9.MCw9UJ3MI3Lf5hvOgk7Lw2xZfN4678p7ebG3vnGXX_Avw6fELTPwxZ9qGA-9tL00p4NxaSb3Cx6XAFvWetAIBA - type: f1 value: 82.6738 name: F1 verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjdiYWY2MzU4YjZhMWQzZGJhZTk3NzU3Y2UwYmQ4MzliZmQxOGUxZDllN2Y0ZmZhYjVlNTE0MzY1MjU5OWMwMCIsInZlcnNpb24iOjF9.zeWLwXy77n0YKxGA5gjySe8p-_nPQxbiPnvQU2tF45IyMmlYKUuLeq4hJnNe-5NgriTf8xkBJBE7Cr5lWHy_Cw - task: type: question-answering name: Question Answering dataset: name: squad type: squad config: plain_text split: validation metrics: - type: exact_match value: 84.9246 name: Exact Match verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZGJhYmU0Y2I4Y2UyOGVlOTlkMmQ2OTcyMTZkNTkwNTMzNzhmNzZiYjU4ZDkxMGM5NzAyMjk1M2ExNGIzOWU4NCIsInZlcnNpb24iOjF9.ql1rCId6lQ7Uwq2spG3q2fFppkFGHA1IWQjvyPRhvKdRNzApBO0mu9JjMAv4uNKZX-kmGEkI018_9tAzN7kwDw - type: f1 value: 91.6201 name: F1 verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZjBjMmI0OTFmODVjMzllZDM0NTdmNjU4NGI4NzA4NTJhOWVkMDQ5OTY0MDcyMWEwZTFkODNlY2VhZjU2NWJmZSIsInZlcnNpb24iOjF9.rGvF60bfWIXzB66C7fkdxCtZvRZ_m3onbLaNbs7M4M0Fk27xnMat6IAy1DeTztkOKLoiD2s2NQH6wXid83cgCw --- # Deberta-v3-xsmall-squad2 ## What is SQuAD? Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable. SQuAD2.0 combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD2.0, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering. ## Inference ```python from transformers import pipeline qa = pipeline("question-answering", model="nlpconnect/deberta-v3-xsmall-squad2") result = qa(context="My name is Sarah and I live in London", question="Where do I live?") ``` ## Accuracy ```json squad_v2 = {'exact': 79.392, 'f1': 82.674} squad = {'exact': 84.925, 'f1': 91.620} ```
neulab/reatt-large-nq-fiqa
neulab
2022-12-06T03:13:27Z
58
2
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "question-answering", "en", "arxiv:2212.02027", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
question-answering
2022-12-04T22:40:39Z
--- language: en tags: - question-answering --- # ReAtt ReAtt is a retrieval-augmented model for knowledge-intensive tasks proposed in [Retrieval as Attention: End-to-end Learning of Retrieval and Reading within a Single Transformer](https://arxiv.org/pdf/2212.02027.pdf). The original Github repository is [https://github.com/jzbjyb/ReAtt](https://github.com/jzbjyb/ReAtt). ## Description `neulab/reatt-large-nq-fiqa` (based on T5 architecture) is initialized with `neulab/reatt-large-nq` and adapted to the FiQA dataset with end-to-end retrieval-augmented training. ## Usage Please refer to [https://github.com/jzbjyb/ReAtt](https://github.com/jzbjyb/ReAtt) for instructions to use this model. ## Reference ```bibtex @inproceedings{jiang-etal-2022-reatt, title = {Retrieval as Attention: End-to-end Learning of Retrieval and Reading within a Single Transformer}, author = {Zhengbao Jiang and Luyu Gao and Jun Araki and Haibo Ding and Zhiruo Wang and Jamie Callan and Graham Neubig}, booktitle = {Conference on Empirical Methods in Natural Language Processing (EMNLP)}, address = {Abu Dhabi, UAE}, month = {December}, year = {2022} } ```
Nhat1904/test_trainer_XLNET_3ep_5e-5
Nhat1904
2022-12-06T03:10:16Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlnet", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-12-06T01:30:37Z
--- license: mit tags: - generated_from_trainer metrics: - accuracy model-index: - name: test_trainer_XLNET_3ep_5e-5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test_trainer_XLNET_3ep_5e-5 This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5405 - Accuracy: 0.8773 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7984 | 1.0 | 1125 | 0.6647 | 0.7923 | | 0.5126 | 2.0 | 2250 | 0.4625 | 0.862 | | 0.409 | 3.0 | 3375 | 0.5405 | 0.8773 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.12.1+cu113 - Datasets 2.7.1 - Tokenizers 0.13.2
Nadav/bert-base-historic-english-cased-squad-en
Nadav
2022-12-06T02:57:42Z
5
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-12-06T00:58:39Z
--- license: mit tags: - generated_from_trainer model-index: - name: bert-base-historic-english-cased-squad-en results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-historic-english-cased-squad-en This model is a fine-tuned version of [dbmdz/bert-base-historic-english-cased](https://huggingface.co/dbmdz/bert-base-historic-english-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.7739 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.2 - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.2943 | 1.0 | 4686 | 1.9503 | | 2.0811 | 2.0 | 9372 | 1.7739 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.13.0+cu117 - Datasets 2.7.1 - Tokenizers 0.13.2
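The card above gives training hyperparameters but no inference example for the extractive QA fine-tune. A minimal sketch with the `question-answering` pipeline follows; the question and context are illustrative only, not taken from the training data.

```python
# Sketch: extractive QA with the historic-English SQuAD fine-tune described above.
# The example context/question are illustrative placeholders.
from transformers import pipeline

qa = pipeline("question-answering", model="Nadav/bert-base-historic-english-cased-squad-en")

result = qa(
    question="Where was the cargo unloaded?",
    context="The brig arrived from Boston on Tuesday last, and her cargo was unloaded at the Long Wharf.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': '...'}
```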
timaos/distilbert-base-uncased-finetuned-cola
timaos
2022-12-06T02:41:15Z
3
0
transformers
[ "transformers", "tf", "tensorboard", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-12-06T02:20:46Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: timaos/distilbert-base-uncased-finetuned-cola results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # timaos/distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1915 - Validation Loss: 0.5237 - Train Matthews Correlation: 0.5123 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Matthews Correlation | Epoch | |:----------:|:---------------:|:--------------------------:|:-----:| | 0.5210 | 0.4500 | 0.5041 | 0 | | 0.3169 | 0.4527 | 0.5280 | 1 | | 0.1915 | 0.5237 | 0.5123 | 2 | ### Framework versions - Transformers 4.24.0 - TensorFlow 2.10.0 - Datasets 2.5.2 - Tokenizers 0.13.2
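The repository above ships TensorFlow weights (note the `tf` tag), so a Keras-side usage sketch is shown below. The assumption that index 1 corresponds to the "acceptable" class follows the usual CoLA convention and is not stated in the card.

```python
# Sketch: inference with the Keras/TensorFlow CoLA fine-tune above.
# The repo ships TF weights (see the "tf" tag), so the TF* classes are used directly.
# Assumption: index 1 corresponds to "acceptable", the usual CoLA convention.
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "timaos/distilbert-base-uncased-finetuned-cola"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer(["The book was written by John."], return_tensors="tf")
probs = tf.nn.softmax(model(inputs).logits, axis=-1)
print(probs.numpy())  # [[p(unacceptable), p(acceptable)]] under the assumed mapping
```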
daripaez/ppo-LunarLander-v2
daripaez
2022-12-06T02:22:02Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-12-06T02:21:39Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 254.94 +/- 20.69 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
supermy/jinyong-gpt2
supermy
2022-12-06T02:13:48Z
419
1
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "zh", "dataset:jinyong", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-12-02T23:50:36Z
--- language: zh datasets: jinyong inference: parameters: max_length: 108 num_return_sequences: 1 do_sample: True widget: - text: "杨过朗声说道:今番良晤,豪兴不浅,他日江湖相逢,再当杯酒言欢。咱们就此别过。 -" example_title: "神雕侠侣" - text: "乱世之际,人不如狗。 -" example_title: "射雕英雄传" --- # 飞雪连天射白鹿,笑书神侠倚碧鸳 ## Model description AI生成金庸小说,给出开头续写。 ## How to use 使用 pipeline 调用模型: ```python >>> # 调用微调后的模型 >>> senc="这些雪花落下来,多么白,多么好看.过几天太阳出来,每一片 雪花都变得无影无踪.到得明年冬天,又有许很多多雪花,只不过已不是 今年这些雪花罢了。" >>> model_id="jinyong-gpt2-finetuning" >>> from transformers import AutoTokenizer, GPT2LMHeadModel, TextGenerationPipeline >>> tokenizer = AutoTokenizer.from_pretrained(model_id) >>> model = GPT2LMHeadModel.from_pretrained(model_id) >>> text_generator = TextGenerationPipeline(model, tokenizer) >>> text_generator.model.config.pad_token_id = text_generator.model.config.eos_token_id >>> text_generator( senc,max_length=108, do_sample=True) [{'generated_text': '这些雪花落下来,多么白,多么好看.过几天太阳出来,每一片 雪花都变得无影无踪.到得明年冬天,又有许很多多雪花,只不过已不是 今年这些雪花罢了。 反正 老天爷 有眼 , 不知 哪里 是甚么 风 险 ?” 正 说到此处 , 突然 听得 谢逊 啸声 渐近 , 忍不住 张口 惊呼 , 一齐 向他 扑去 , 只听 谢逊 一声 怒吼 , 跟着 左手 用力 拍 出一掌 , 以 掌力 化开 。 众人 吃了一惊 , 同时 从 海 道 中 跃出 , 双双 倒退 。 张翠山和殷素素 对望一眼 , 均想 以 这两 大高手 之力 如何 抵挡 , 以 今日 之力 如何 攻敌 之'}] >>> ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("supermy/jinyong-gpt2") model = AutoModelForCausalLM.from_pretrained("supermy/jinyong-gpt2") ``` ## Training data 此数据集基于金庸的【飞雪连天射白鹿,笑书神侠倚碧鸳】小说集训练。 ## 统计信息 ``` ``` ## Training procedure 基于模型:[GPT2](https://huggingface.co/gpt2) 训练环境:英伟达16G显卡 bpe分词:"vocab_size"=30000 ``` [INFO|trainer.py:1608] 2022-12-02 19:52:59,024 >> ***** Running training ***** [INFO|trainer.py:1609] 2022-12-02 19:52:59,024 >> Num examples = 9443 [INFO|trainer.py:1610] 2022-12-02 19:52:59,024 >> Num Epochs = 108 [INFO|trainer.py:1611] 2022-12-02 19:52:59,024 >> Instantaneous batch size per device = 12 [INFO|trainer.py:1612] 2022-12-02 19:52:59,024 >> Total train batch size (w. parallel, distributed & accumulation) = 12 [INFO|trainer.py:1613] 2022-12-02 19:52:59,024 >> Gradient Accumulation steps = 1 [INFO|trainer.py:1614] 2022-12-02 19:52:59,024 >> Total optimization steps = 84996 [INFO|trainer.py:1616] 2022-12-02 19:52:59,025 >> Number of trainable parameters = 124439808 [INFO|trainer.py:1608] 2022-12-03 21:44:00,182 >> ***** Running training ***** [INFO|trainer.py:1609] 2022-12-03 21:44:00,182 >> Num examples = 9443 [INFO|trainer.py:1610] 2022-12-03 21:44:00,182 >> Num Epochs = 216 [INFO|trainer.py:1611] 2022-12-03 21:44:00,182 >> Instantaneous batch size per device = 12 [INFO|trainer.py:1612] 2022-12-03 21:44:00,182 >> Total train batch size (w. 
parallel, distributed & accumulation) = 12 [INFO|trainer.py:1613] 2022-12-03 21:44:00,182 >> Gradient Accumulation steps = 1 [INFO|trainer.py:1614] 2022-12-03 21:44:00,182 >> Total optimization steps = 169992 [INFO|trainer.py:1616] 2022-12-03 21:44:00,183 >> Number of trainable parameters = 124439808 [INFO|trainer.py:1637] 2022-12-03 21:44:00,184 >> Continuing training from checkpoint, will skip to saved global_step [INFO|trainer.py:1638] 2022-12-03 21:44:00,184 >> Continuing training from epoch 107 [INFO|trainer.py:1639] 2022-12-03 21:44:00,184 >> Continuing training from global step 84500 [INFO|trainer.py:1608] 2022-12-05 07:36:13,626 >> ***** Running training ***** [INFO|trainer.py:1609] 2022-12-05 07:36:13,626 >> Num examples = 9443 [INFO|trainer.py:1610] 2022-12-05 07:36:13,626 >> Num Epochs = 368 [INFO|trainer.py:1611] 2022-12-05 07:36:13,626 >> Instantaneous batch size per device = 12 [INFO|trainer.py:1612] 2022-12-05 07:36:13,626 >> Total train batch size (w. parallel, distributed & accumulation) = 12 [INFO|trainer.py:1613] 2022-12-05 07:36:13,626 >> Gradient Accumulation steps = 1 [INFO|trainer.py:1614] 2022-12-05 07:36:13,626 >> Total optimization steps = 289616 [INFO|trainer.py:1616] 2022-12-05 07:36:13,627 >> Number of trainable parameters = 124439808 [INFO|trainer.py:1637] 2022-12-05 07:36:13,628 >> Continuing training from checkpoint, will skip to saved global_step [INFO|trainer.py:1638] 2022-12-05 07:36:13,628 >> Continuing training from epoch 255 [INFO|trainer.py:1639] 2022-12-05 07:36:13,628 >> Continuing training from global step 201000 {'loss': 8.0431, 'learning_rate': 4.970998635229893e-05, 'epoch': 0.64} {'loss': 7.4867, 'learning_rate': 4.94158548637583e-05, 'epoch': 1.27} {'loss': 7.322, 'learning_rate': 4.912172337521766e-05, 'epoch': 1.91} ...... {'loss': 3.901, 'learning_rate': 2.5010882865076008e-05, 'epoch': 108.01} {'loss': 3.8959, 'learning_rate': 2.4863817120805686e-05, 'epoch': 108.64} ...... {'loss': 3.1625, 'learning_rate': 4.6090404254317857e-07, 'epoch': 214.1} {'loss': 3.1592, 'learning_rate': 3.1413242976140055e-07, 'epoch': 214.74} {'loss': 3.1625, 'learning_rate': 1.6706668549108195e-07, 'epoch': 215.37} {'train_runtime': 72271.9602, 'train_samples_per_second': 28.222, 'train_steps_per_second': 2.352, 'train_loss': 1.7180436183842016, 'epoch': 216.0} {'loss': 2.7087, 'learning_rate': 4.2642671675598036e-08, 'epoch': 367.85} {'train_runtime': 74859.0808, 'train_samples_per_second': 46.421, 'train_steps_per_second': 3.869, 'train_loss': 0.8725239146935282, 'epoch': 368.0} ***** train metrics ***** epoch = 368.0 train_loss = 0.8725 train_runtime = 20:47:39.08 train_samples = 9443 train_samples_per_second = 46.421 train_steps_per_second = 3.869 12/06/2022 04:23:55 - INFO - __main__ - *** Evaluate *** [INFO|trainer.py:2929] 2022-12-06 04:23:55,953 >> ***** Running Evaluation ***** [INFO|trainer.py:2931] 2022-12-06 04:23:55,953 >> Num examples = 283 [INFO|trainer.py:2934] 2022-12-06 04:23:55,954 >> Batch size = 12 100%|██████████| 24/24 [00:07<00:00, 3.20it/s] [INFO|modelcard.py:449] 2022-12-06 04:24:04,760 >> Dropping the following result as it does not have all the necessary fields: {'task': {'name': 'Causal Language Modeling', 'type': 'text-generation'}, 'metrics': [{'name': 'Accuracy', 'type': 'accuracy', 'value': 0.19599206157122803}]} ***** eval metrics ***** epoch = 368.0 eval_accuracy = 0.196 eval_loss = 7.9524 eval_runtime = 0:00:07.87 eval_samples = 283 eval_samples_per_second = 35.94 eval_steps_per_second = 3.048 perplexity = 2842.2766 ```
Murple/wav2vec2-base-4k
Murple
2022-12-06T01:52:44Z
1
0
transformers
[ "transformers", "pytorch", "wav2vec2", "pretraining", "speech", "multilingual", "dataset:librispeech_asr", "dataset:Murple/ksponspeech", "dataset:Murple/csj", "dataset:Murple/mmcrsc", "arxiv:2006.11477", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-12-06T01:36:44Z
--- language: multilingual datasets: - librispeech_asr - Murple/ksponspeech - Murple/csj - Murple/mmcrsc tags: - speech license: apache-2.0 --- # Wav2Vec2-Base [Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) The base model pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz. **Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for a more detailed explanation of how to fine-tune the model. [Paper](https://arxiv.org/abs/2006.11477) Authors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli **Abstract** We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data. The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20. # Usage See [this notebook](https://colab.research.google.com/drive/1FjTsqbYKphl9kL-eILgUc-bl4zVThL8F?usp=sharing) for more information on how to fine-tune the model. The results can be seen [here](https://wandb.ai/toraruka623/wav2vec2-pretraining/reports/Wav2Vec2-base-4k--VmlldzozMDkxMDk3?accessToken=lfn2kwe9pzmvdonhx7hihd9nf13wzby7odu0iakdubwep3le4ywirxc3gx9w66fi)
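The card above describes a pretrained checkpoint without a tokenizer, intended for feature extraction or further fine-tuning. A minimal feature-extraction sketch follows; it assumes the repository may not ship a preprocessor config, so a default 16 kHz `Wav2Vec2FeatureExtractor` is constructed by hand.

```python
# Sketch: extract hidden states from the pretrained (no-tokenizer) checkpoint above.
# Assumption: the repo may not include preprocessor_config.json, so a default
# 16 kHz Wav2Vec2FeatureExtractor is built manually.
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

model = Wav2Vec2Model.from_pretrained("Murple/wav2vec2-base-4k")
feature_extractor = Wav2Vec2FeatureExtractor(sampling_rate=16000)

speech = torch.randn(16000)  # 1 second of placeholder audio sampled at 16 kHz
inputs = feature_extractor(speech.numpy(), sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state
print(hidden_states.shape)  # (batch, frames, hidden_size)
```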
saphvis/px-mixedbag
saphvis
2022-12-06T01:52:25Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2022-12-06T01:47:43Z
--- license: creativeml-openrail-m ---
jnick/ppo-LunarLander-v2
jnick
2022-12-06T01:39:23Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-12-06T01:38:49Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 267.30 +/- 18.25 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
Nadav/bert-base-historic-multilingual-64k-td-cased-squad-en
Nadav
2022-12-06T01:04:49Z
4
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-12-05T23:08:03Z
--- license: mit tags: - generated_from_trainer model-index: - name: bert-base-historic-multilingual-64k-td-cased-squad-en results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-historic-multilingual-64k-td-cased-squad-en This model is a fine-tuned version of [dbmdz/bert-base-historic-multilingual-64k-td-cased](https://huggingface.co/dbmdz/bert-base-historic-multilingual-64k-td-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.5474 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.2 - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.9315 | 1.0 | 4659 | 1.7399 | | 1.5775 | 2.0 | 9318 | 1.5474 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.13.0+cu117 - Datasets 2.7.1 - Tokenizers 0.13.2
sree2910/tonality
sree2910
2022-12-06T00:57:13Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-12-05T15:40:07Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: tonality results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tonality This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Framework versions - Transformers 4.24.0 - Pytorch 1.13.0+cpu - Datasets 2.7.1 - Tokenizers 0.13.2
Nadav/bert-base-historic-multilingual-cased-squad-en
Nadav
2022-12-06T00:54:11Z
3
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-12-05T22:51:39Z
--- license: mit tags: - generated_from_trainer model-index: - name: bert-base-historic-multilingual-cased-squad-en results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-historic-multilingual-cased-squad-en This model is a fine-tuned version of [dbmdz/bert-base-historic-multilingual-cased](https://huggingface.co/dbmdz/bert-base-historic-multilingual-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.5307 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.2 - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.881 | 1.0 | 4820 | 1.5507 | | 1.5883 | 2.0 | 9640 | 1.5307 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.13.0+cu117 - Datasets 2.7.1 - Tokenizers 0.13.2
flamesbob/dpin-model
flamesbob
2022-12-06T00:15:36Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2022-12-04T21:39:01Z
--- license: creativeml-openrail-m ---
fathyshalab/all-roberta-large-v1-travel-4-16-5-oos
fathyshalab
2022-12-06T00:14:15Z
3
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-12-05T23:50:51Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: all-roberta-large-v1-travel-4-16-5-oos results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # all-roberta-large-v1-travel-4-16-5-oos This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.1384 - Accuracy: 0.4289 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.7625 | 1.0 | 1 | 2.5258 | 0.2933 | | 2.0955 | 2.0 | 2 | 2.3775 | 0.3333 | | 1.7076 | 3.0 | 3 | 2.2590 | 0.38 | | 1.3257 | 4.0 | 4 | 2.1788 | 0.4089 | | 1.1109 | 5.0 | 5 | 2.1384 | 0.4289 | ### Framework versions - Transformers 4.20.0 - Pytorch 1.11.0+cu102 - Datasets 2.3.2 - Tokenizers 0.12.1
delmaksym/ppo-Huggy
delmaksym
2022-12-05T23:51:10Z
4
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2022-12-05T23:51:04Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy library_name: ml-agents --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Step 1: Write your model_id: delmaksym/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
fathyshalab/all-roberta-large-v1-travel-2-16-5-oos
fathyshalab
2022-12-05T23:50:30Z
3
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-12-05T23:33:19Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: all-roberta-large-v1-travel-2-16-5-oos results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # all-roberta-large-v1-travel-2-16-5-oos This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.1384 - Accuracy: 0.4289 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.7625 | 1.0 | 1 | 2.5258 | 0.2933 | | 2.0955 | 2.0 | 2 | 2.3775 | 0.3333 | | 1.7076 | 3.0 | 3 | 2.2590 | 0.38 | | 1.3257 | 4.0 | 4 | 2.1788 | 0.4089 | | 1.1109 | 5.0 | 5 | 2.1384 | 0.4289 | ### Framework versions - Transformers 4.20.0 - Pytorch 1.11.0+cu102 - Datasets 2.3.2 - Tokenizers 0.12.1
famube/autotrain-ciap2-2347173866
famube
2022-12-05T23:32:01Z
2
0
transformers
[ "transformers", "pytorch", "autotrain", "text-classification", "pt", "dataset:famube/autotrain-data-ciap2", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
text-classification
2022-12-05T22:49:37Z
--- tags: - autotrain - text-classification language: - pt widget: - text: "febre" - text: "dor de cabeça" - text: "corpo inteiro doendo" datasets: - famube/autotrain-data-ciap2 co2_eq_emissions: emissions: 4.825567476024859 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 2347173866 - CO2 Emissions (in grams): 4.8256 ## Validation Metrics - Loss: 1.932 - Accuracy: 0.681 - Macro F1: 0.609 - Micro F1: 0.681 - Weighted F1: 0.622 - Macro Precision: 0.592 - Micro Precision: 0.681 - Weighted Precision: 0.610 - Macro Recall: 0.669 - Micro Recall: 0.681 - Weighted Recall: 0.681 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/famube/autotrain-ciap2-2347173866 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("famube/autotrain-ciap2-2347173866", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("famube/autotrain-ciap2-2347173866", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
fathyshalab/all-roberta-large-v1-auto_and_commute-16-16-5-oos
fathyshalab
2022-12-05T23:01:50Z
3
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-12-05T22:36:13Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: all-roberta-large-v1-auto_and_commute-16-16-5-oos results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # all-roberta-large-v1-auto_and_commute-16-16-5-oos This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.2614 - Accuracy: 0.4289 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.7929 | 1.0 | 1 | 2.5690 | 0.2667 | | 2.267 | 2.0 | 2 | 2.4558 | 0.3533 | | 1.8495 | 3.0 | 3 | 2.3630 | 0.3911 | | 1.4397 | 4.0 | 4 | 2.2956 | 0.4133 | | 1.2985 | 5.0 | 5 | 2.2614 | 0.4289 | ### Framework versions - Transformers 4.20.0 - Pytorch 1.11.0+cu102 - Datasets 2.3.2 - Tokenizers 0.12.1
Nadav/MacBERTh-squad-en
Nadav
2022-12-05T22:47:54Z
4
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "endpoints_compatible", "region:us" ]
question-answering
2022-12-05T20:37:59Z
--- tags: - generated_from_trainer model-index: - name: MacBERTh-squad-en results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # MacBERTh-squad-en This model is a fine-tuned version of [emanjavacas/MacBERTh](https://huggingface.co/emanjavacas/MacBERTh) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.1805 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.2 - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 2.5789 | 1.0 | 5110 | 2.3494 | | 2.2681 | 2.0 | 10220 | 2.1805 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.13.0+cu117 - Datasets 2.7.1 - Tokenizers 0.13.2
sd-concepts-library/vie-proceres
sd-concepts-library
2022-12-05T22:41:59Z
0
0
null
[ "license:mit", "region:us" ]
null
2022-12-05T22:41:55Z
--- license: mit --- ### vie-proceres on Stable Diffusion This is the `<vie-proceres>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<vie-proceres> 0](https://huggingface.co/sd-concepts-library/vie-proceres/resolve/main/concept_images/15.jpeg) ![<vie-proceres> 1](https://huggingface.co/sd-concepts-library/vie-proceres/resolve/main/concept_images/12.jpeg) ![<vie-proceres> 2](https://huggingface.co/sd-concepts-library/vie-proceres/resolve/main/concept_images/11.jpeg) ![<vie-proceres> 3](https://huggingface.co/sd-concepts-library/vie-proceres/resolve/main/concept_images/9.jpeg) ![<vie-proceres> 4](https://huggingface.co/sd-concepts-library/vie-proceres/resolve/main/concept_images/10.jpeg) ![<vie-proceres> 5](https://huggingface.co/sd-concepts-library/vie-proceres/resolve/main/concept_images/0.jpeg) ![<vie-proceres> 6](https://huggingface.co/sd-concepts-library/vie-proceres/resolve/main/concept_images/4.jpeg) ![<vie-proceres> 7](https://huggingface.co/sd-concepts-library/vie-proceres/resolve/main/concept_images/7.jpeg) ![<vie-proceres> 8](https://huggingface.co/sd-concepts-library/vie-proceres/resolve/main/concept_images/3.jpeg) ![<vie-proceres> 9](https://huggingface.co/sd-concepts-library/vie-proceres/resolve/main/concept_images/13.jpeg) ![<vie-proceres> 10](https://huggingface.co/sd-concepts-library/vie-proceres/resolve/main/concept_images/14.jpeg) ![<vie-proceres> 11](https://huggingface.co/sd-concepts-library/vie-proceres/resolve/main/concept_images/8.jpeg) ![<vie-proceres> 12](https://huggingface.co/sd-concepts-library/vie-proceres/resolve/main/concept_images/6.jpeg) ![<vie-proceres> 13](https://huggingface.co/sd-concepts-library/vie-proceres/resolve/main/concept_images/2.jpeg) ![<vie-proceres> 14](https://huggingface.co/sd-concepts-library/vie-proceres/resolve/main/concept_images/5.jpeg) ![<vie-proceres> 15](https://huggingface.co/sd-concepts-library/vie-proceres/resolve/main/concept_images/1.jpeg)
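The concept card above points to the Stable Conceptualizer notebook for loading the embedding. An equivalent sketch using `diffusers` is shown below; the base checkpoint (`runwayml/stable-diffusion-v1-5`) and the availability of `load_textual_inversion` (recent `diffusers` releases) are assumptions, not facts stated in the card.

```python
# Sketch: load the <vie-proceres> textual-inversion concept with diffusers.
# Assumptions: a recent diffusers release (load_textual_inversion available) and
# a Stable Diffusion v1.x base model compatible with the learned embedding.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/vie-proceres")

image = pipe("a landscape painted in the style of <vie-proceres>").images[0]
image.save("vie_proceres_sample.png")
```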
alanila/autotrain-acc_keys-2347073860
alanila
2022-12-05T22:34:34Z
3
0
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "autotrain", "text-classification", "unk", "dataset:alanila/autotrain-data-acc_keys", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-12-05T22:27:11Z
--- tags: - autotrain - text-classification language: - unk widget: - text: "I love AutoTrain 🤗" datasets: - alanila/autotrain-data-acc_keys co2_eq_emissions: emissions: 1.3599341780747405 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 2347073860 - CO2 Emissions (in grams): 1.3599 ## Validation Metrics - Loss: 1.255 - Accuracy: 0.500 - Macro F1: 0.445 - Micro F1: 0.500 - Weighted F1: 0.421 - Macro Precision: 0.498 - Micro Precision: 0.500 - Weighted Precision: 0.508 - Macro Recall: 0.481 - Micro Recall: 0.500 - Weighted Recall: 0.500 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/alanila/autotrain-acc_keys-2347073860 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("alanila/autotrain-acc_keys-2347073860", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("alanila/autotrain-acc_keys-2347073860", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
zzmez/ppo-LunarLander-v2
zzmez
2022-12-05T22:29:25Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-12-05T21:43:32Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 260.80 +/- 24.69 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
Nadav/bert-base-historic-multilingual-64k-td-cased-squad-nl
Nadav
2022-12-05T22:28:57Z
10
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-12-05T20:30:03Z
--- license: mit tags: - generated_from_trainer model-index: - name: bert-base-historic-multilingual-64k-td-cased-squad-nl results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-historic-multilingual-64k-td-cased-squad-nl This model is a fine-tuned version of [dbmdz/bert-base-historic-multilingual-64k-td-cased](https://huggingface.co/dbmdz/bert-base-historic-multilingual-64k-td-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6382 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.2 - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.0101 | 1.0 | 4659 | 1.8679 | | 1.6528 | 2.0 | 9318 | 1.6382 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.13.0+cu117 - Datasets 2.7.1 - Tokenizers 0.13.2
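A short inference sketch (assumed usage, not stated in the card above); the `-squad-nl` suffix suggests Dutch question answering, so the example text is only illustrative.
```python
from transformers import pipeline

# Load the fine-tuned extractive QA checkpoint from the Hub.
qa = pipeline(
    "question-answering",
    model="Nadav/bert-base-historic-multilingual-64k-td-cased-squad-nl",
)

# Illustrative Dutch question/context pair.
result = qa(
    question="Waar werd de wedstrijd gespeeld?",
    context="De wedstrijd werd gisteren in Amsterdam gespeeld.",
)
print(result["answer"], result["score"])
```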
fathyshalab/all-roberta-large-v1-auto_and_commute-4-16-5-oos
fathyshalab
2022-12-05T22:11:24Z
3
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-12-05T21:47:30Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: all-roberta-large-v1-auto_and_commute-4-16-5-oos results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # all-roberta-large-v1-auto_and_commute-4-16-5-oos This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.2614 - Accuracy: 0.4289 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.7929 | 1.0 | 1 | 2.5690 | 0.2667 | | 2.267 | 2.0 | 2 | 2.4558 | 0.3533 | | 1.8495 | 3.0 | 3 | 2.3630 | 0.3911 | | 1.4397 | 4.0 | 4 | 2.2956 | 0.4133 | | 1.2985 | 5.0 | 5 | 2.2614 | 0.4289 | ### Framework versions - Transformers 4.20.0 - Pytorch 1.11.0+cu102 - Datasets 2.3.2 - Tokenizers 0.12.1
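A brief usage sketch (assumed, not taken from the card); the label set comes from the unspecified training data, so the call pattern is the main point here.
```python
from transformers import pipeline

# Load the fine-tuned classifier from the Hub.
classifier = pipeline(
    "text-classification",
    model="fathyshalab/all-roberta-large-v1-auto_and_commute-4-16-5-oos",
)

# Returns the predicted label (from the unspecified training label set) and its score.
print(classifier("How long will my commute take this morning?"))
```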
hannoh/05_model_sales_external_imbalanced
hannoh
2022-12-05T22:00:24Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-12-05T21:34:00Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: 05_model_sales_external_imbalanced results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 05_model_sales_external_imbalanced This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2421 - Accuracy: 0.9294 - F1: 0.3654 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.25.1 - Pytorch 1.12.1+cu113 - Datasets 2.7.1 - Tokenizers 0.13.2
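A hedged inference sketch (not part of the original card), loading the checkpoint with the Auto classes and converting logits to probabilities:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "hannoh/05_model_sales_external_imbalanced"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Example input; the class meanings are not documented in the card.
inputs = tokenizer("We expect external sales to grow next quarter.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(torch.softmax(logits, dim=-1))
```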
huggingtweets/jellynatelli-raspberryl0ver
huggingtweets
2022-12-05T21:59:30Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-12-05T21:59:22Z
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1552729971956727808/zVaFH3ex_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1583521884590772232/DGBIkzGk_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">🌞 & 9</div> <div style="text-align: center; font-size: 14px;">@jellynatelli-raspberryl0ver</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from 🌞 & 9. | Data | 🌞 | 9 | | --- | --- | --- | | Tweets downloaded | 1797 | 3205 | | Retweets | 413 | 202 | | Short tweets | 206 | 633 | | Tweets kept | 1178 | 2370 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2nlgvuz7/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jellynatelli-raspberryl0ver's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1pu0nfyz) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1pu0nfyz/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/jellynatelli-raspberryl0ver') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
fathyshalab/all-roberta-large-v1-auto_and_commute-2-16-5-oos
fathyshalab
2022-12-05T21:47:10Z
3
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-12-05T21:20:02Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: all-roberta-large-v1-auto_and_commute-2-16-5-oos results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # all-roberta-large-v1-auto_and_commute-2-16-5-oos This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.2614 - Accuracy: 0.4289 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.7929 | 1.0 | 1 | 2.5690 | 0.2667 | | 2.267 | 2.0 | 2 | 2.4558 | 0.3533 | | 1.8495 | 3.0 | 3 | 2.3630 | 0.3911 | | 1.4397 | 4.0 | 4 | 2.2956 | 0.4133 | | 1.2985 | 5.0 | 5 | 2.2614 | 0.4289 | ### Framework versions - Transformers 4.20.0 - Pytorch 1.11.0+cu102 - Datasets 2.3.2 - Tokenizers 0.12.1
gemasphi/setfit-ss-paraphrase-multilingual-mpnet-base-v2
gemasphi
2022-12-05T21:45:41Z
5
0
sentence-transformers
[ "sentence-transformers", "pytorch", "xlm-roberta", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-12-05T21:45:17Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # gemasphi/setfit-ss-paraphrase-multilingual-mpnet-base-v2 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('gemasphi/setfit-ss-paraphrase-multilingual-mpnet-base-v2') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('gemasphi/setfit-ss-paraphrase-multilingual-mpnet-base-v2') model = AutoModel.from_pretrained('gemasphi/setfit-ss-paraphrase-multilingual-mpnet-base-v2') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=gemasphi/setfit-ss-paraphrase-multilingual-mpnet-base-v2) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 1320 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 1320, "warmup_steps": 132, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
gemasphi/mcontriever-msmarco
gemasphi
2022-12-05T21:01:55Z
3
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-12-05T21:01:38Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # gemasphi/mcontriever-msmarco This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('gemasphi/mcontriever-msmarco') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('gemasphi/mcontriever-msmarco') model = AutoModel.from_pretrained('gemasphi/mcontriever-msmarco') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=gemasphi/mcontriever-msmarco) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 1320 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 1320, "warmup_steps": 132, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
fathyshalab/all-roberta-large-v1-home-16-16-5-oos
fathyshalab
2022-12-05T20:53:01Z
3
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-12-05T17:43:17Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: all-roberta-large-v1-home-16-16-5-oos results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # all-roberta-large-v1-home-16-16-5-oos This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.3789 - Accuracy: 0.3356 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.7614 | 1.0 | 1 | 2.6146 | 0.1889 | | 2.2082 | 2.0 | 2 | 2.5232 | 0.2667 | | 1.8344 | 3.0 | 3 | 2.4516 | 0.2933 | | 1.4601 | 4.0 | 4 | 2.4033 | 0.3267 | | 1.2748 | 5.0 | 5 | 2.3789 | 0.3356 | ### Framework versions - Transformers 4.20.0 - Pytorch 1.11.0+cu102 - Datasets 2.3.2 - Tokenizers 0.12.1
harryrudolph/first_model
harryrudolph
2022-12-05T20:32:51Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-12-05T20:32:16Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -137.96 +/- 24.14 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename inside the repository is assumed): ```python from stable_baselines3 import PPO from huggingface_sb3 import load_from_hub checkpoint = load_from_hub(repo_id="harryrudolph/first_model", filename="ppo-LunarLander-v2.zip") model = PPO.load(checkpoint) ```
gemasphi/setfit-ss-distiluse-base-multilingual-cased-v2
gemasphi
2022-12-05T20:14:47Z
10
0
sentence-transformers
[ "sentence-transformers", "pytorch", "distilbert", "feature-extraction", "sentence-similarity", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-12-05T20:14:30Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # gemasphi/setfit-ss-distiluse-base-multilingual-cased-v2 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('gemasphi/setfit-ss-distiluse-base-multilingual-cased-v2') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=gemasphi/setfit-ss-distiluse-base-multilingual-cased-v2) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 1320 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 1320, "warmup_steps": 132, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Dense({'in_features': 768, 'out_features': 512, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
edgertej/poebert-clean-checkpoint-finetuned-poetry-foundation-clean
edgertej
2022-12-05T20:09:48Z
3
0
transformers
[ "transformers", "tf", "bert", "fill-mask", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-12-05T19:11:18Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: edgertej/poebert-clean-checkpoint-finetuned-poetry-foundation-clean results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # edgertej/poebert-clean-checkpoint-finetuned-poetry-foundation-clean This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 3.8658 - Validation Loss: 3.6186 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 4.0379 | 3.6686 | 0 | | 3.9346 | 3.6478 | 1 | | 3.8658 | 3.6186 | 2 | ### Framework versions - Transformers 4.19.2 - TensorFlow 2.9.1 - Datasets 2.4.0 - Tokenizers 0.12.1
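A minimal fill-mask sketch (assumed usage, not from the card); the card only documents TensorFlow/Keras training, so the pipeline is pinned to the TF framework here.
```python
from transformers import pipeline

# framework="tf" because the checkpoint was saved from Keras/TensorFlow training.
fill = pipeline(
    "fill-mask",
    model="edgertej/poebert-clean-checkpoint-finetuned-poetry-foundation-clean",
    framework="tf",
)

# bert-base-cased derivatives use the [MASK] token.
print(fill("The moon is a [MASK] upon the water."))
```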
mdcox/distilbert-base-uncased-finetuned-ner
mdcox
2022-12-05T19:45:32Z
12
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-12-05T19:10:37Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: distilbert-base-uncased-finetuned-ner results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0048 - Precision: 0.9203 - Recall: 0.9777 - F1: 0.9482 - Accuracy: 0.9984 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 358 | 0.0067 | 0.9229 | 0.9332 | 0.9280 | 0.9978 | | 0.0545 | 2.0 | 716 | 0.0052 | 0.9167 | 0.9800 | 0.9473 | 0.9984 | | 0.0052 | 3.0 | 1074 | 0.0048 | 0.9203 | 0.9777 | 0.9482 | 0.9984 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.12.1+cu113 - Datasets 2.7.1 - Tokenizers 0.13.2
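A short usage sketch (assumed, not present in the card), using the token-classification pipeline; the entity label set is not documented, so the returned labels depend on the unspecified training data.
```python
from transformers import pipeline

# aggregation_strategy="simple" merges word pieces into whole entity spans.
ner = pipeline(
    "token-classification",
    model="mdcox/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Alice visited the Acme Corporation office in Berlin last week."))
```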
nudro/sd-class-butterflies-32
nudro
2022-12-05T19:33:43Z
3
0
diffusers
[ "diffusers", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
unconditional-image-generation
2022-12-05T19:33:35Z
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- # Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional image generation of cute 🦋. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('nudro/sd-class-butterflies-32') image = pipeline().images[0] image ```